add coig data
@@ -32,7 +32,7 @@ OpenAI's ChatGPT large language model (LLM) is not open source; this section collects
|[peft](https://github.com/huggingface/peft) |  | PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | From HuggingFace: PEFT, state-of-the-art parameter-efficient fine-tuning. |
|[ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning) |  | Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT |
|[LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) |  | Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA) | Supports multiple models: LLaMA (7B/13B/33B/65B), BLOOM & BLOOMZ (560M/1.1B/1.7B/3B/7.1B/176B), baichuan (7B); supports multiple fine-tuning methods: LoRA, QLoRA |
|[COIG (Chinese fine-tuning dataset)](https://github.com/BAAI-Zlab/COIG) |  | Chinese Open Instruction Generalist (COIG) project aims to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. | The Chinese Open Instruction Generalist (COIG) project aims to maintain a harmless, helpful, and diverse set of Chinese instruction corpora (see the sketch after this table). |
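The entries above all revolve around the same workflow: take an open base model, attach a parameter-efficient adapter with PEFT, and fine-tune it on instruction data such as COIG. The snippet below is a minimal sketch of that workflow, not code from any of the listed projects; the Hub dataset id `BAAI/COIG`, the base model `bigscience/bloomz-560m`, and the LoRA hyperparameters are illustrative assumptions.

```python
# Minimal PEFT + LoRA sketch (illustrative; ids and hyperparameters are assumptions).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Assumption: COIG is mirrored on the Hugging Face Hub as "BAAI/COIG";
# the exact config/split names may differ from this sketch.
coig = load_dataset("BAAI/COIG")

# A small base model from the table above (BLOOMZ 560M), so the sketch runs on modest hardware.
base_id = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA: freeze the base weights and train low-rank adapter matrices instead.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
```

Training can then proceed with a standard `transformers` training loop; the point of LoRA/QLoRA in the projects above is that only the small adapter weights receive gradients, which is what makes single-GPU fine-tuning of 7B-class models feasible.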
### More models
| Name | Stars | Description | Notes |
| ---- | ---- | ---- | ---- |