From 6ba40fea71c7aae48369e111d044264c4c238276 Mon Sep 17 00:00:00 2001
From: yzfly
Date: Wed, 21 Jun 2023 07:42:25 +0800
Subject: [PATCH] add open_llama

---
 docs/ChatGPT_dev.md | 2 --
 docs/LLMs.md        | 4 ++++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/ChatGPT_dev.md b/docs/ChatGPT_dev.md
index 0677300..9dd14a0 100644
--- a/docs/ChatGPT_dev.md
+++ b/docs/ChatGPT_dev.md
@@ -105,5 +105,3 @@
 |[privateGPT](https://github.com/imartinez/privateGPT)|![GitHub Repo stars](https://badgen.net/github/stars/imartinez/privateGPT)|A Llama-based local, private document assistant|-|
 |[rebuff](https://github.com/woop/rebuff) |![GitHub Repo stars](https://badgen.net/github/stars/woop/rebuff)|Rebuff.ai - Prompt Injection Detector.|Prompt-injection attack detection and content detection|
 |[text-generation-webui](https://github.com/oobabooga/text-generation-webui)|![GitHub Repo stars](https://badgen.net/github/stars/oobabooga/text-generation-webui)|-|A web UI for running large language models such as LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.|
-|[MLC LLM](https://github.com/mlc-ai/mlc-llm)|![GitHub Repo stars](https://badgen.net/github/stars/mlc-ai/mlc-llm)|Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.|MLC LLM, from Tianqi Chen's team: natively deploy any large language model on all kinds of hardware, bringing large models to mobile devices (e.g. iPhone), consumer PCs (e.g. Mac), and web browsers.|
-|[languagemodels](https://github.com/jncraton/languagemodels)|![GitHub Repo stars](https://badgen.net/github/stars/jncraton/languagemodels)|Explore large language models on any computer with 512MB of RAM.|Explore the use of large language models on a computer with as little as 512MB of RAM|
\ No newline at end of file
diff --git a/docs/LLMs.md b/docs/LLMs.md
index 49bfefa..56bea05 100644
--- a/docs/LLMs.md
+++ b/docs/LLMs.md
@@ -24,12 +24,16 @@ OpenAI's ChatGPT large language model (LLM) is not open source; this section co
 |[FreedomGPT](https://github.com/ohmplatform/FreedomGPT) |![GitHub Repo stars](https://badgen.net/github/stars/ohmplatform/FreedomGPT)|-|A free, unrestricted GPT that runs locally on Windows and Mac, based on the Alpaca LoRA model.|
 |[FinGPT](https://github.com/AI4Finance-Foundation/FinGPT)|![GitHub Repo stars](https://badgen.net/github/stars/AI4Finance-Foundation/FinGPT)|Data-Centric FinGPT. Open-source for open finance! Revolutionize 🔥 We'll soon release the trained model.|Large language model for the finance domain|
 |[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) |![GitHub Repo stars](https://badgen.net/github/stars/baichuan-inc/baichuan-7B)|A large-scale 7B pretraining language model developed by Baichuan |baichuan-7B is an open-source, commercially usable large-scale pretrained language model developed by Baichuan Intelligence. Based on the Transformer architecture, it is a 7-billion-parameter model trained on roughly 1.2 trillion tokens, supports both Chinese and English, and has a 4096-token context window. It achieves the best results for its size on the standard authoritative Chinese and English benchmarks (C-EVAL/MMLU).|
+|[open_llama](https://github.com/openlm-research/open_llama) |![GitHub Repo stars](https://badgen.net/github/stars/openlm-research/open_llama)|OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset. |OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset.|
 
 ### Large Model Training and Fine-tuning
 |Name|Stars|Description|Notes|
 |-------|-------|-------|------|
 |[transformers](https://github.com/huggingface/transformers) | ![GitHub Repo stars](https://badgen.net/github/stars/huggingface/transformers) | 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. |A HuggingFace classic; the essential library for Transformer models|
 |[peft](https://github.com/huggingface/peft) | ![GitHub Repo stars](https://badgen.net/github/stars/huggingface/peft) | PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. |From HuggingFace, PEFT: state-of-the-art parameter-efficient fine-tuning.|
+|[OpenLLM](https://github.com/bentoml/OpenLLM) | ![GitHub Repo stars](https://badgen.net/github/stars/bentoml/OpenLLM) |An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLMs with ease. |An open platform for operating large language models (LLMs) in production; fine-tune, serve, deploy, and monitor any LLM with ease.|
+|[MLC LLM](https://github.com/mlc-ai/mlc-llm)|![GitHub Repo stars](https://badgen.net/github/stars/mlc-ai/mlc-llm)|Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.|MLC LLM, from Tianqi Chen's team: natively deploy any large language model on all kinds of hardware, bringing large models to mobile devices (e.g. iPhone), consumer PCs (e.g. Mac), and web browsers.|
+|[languagemodels](https://github.com/jncraton/languagemodels)|![GitHub Repo stars](https://badgen.net/github/stars/jncraton/languagemodels)|Explore large language models on any computer with 512MB of RAM.|Explore the use of large language models on a computer with as little as 512MB of RAM|
 |[ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning) | ![GitHub Repo stars](https://badgen.net/github/stars/hiyouga/ChatGLM-Efficient-Tuning) | Fine-tuning ChatGLM-6B with PEFT | Efficient fine-tuning of ChatGLM-6B based on PEFT|
 |[LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) | ![GitHub Repo stars](https://badgen.net/github/stars/hiyouga/LLaMA-Efficient-Tuning) | Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA) |Supports multiple models: LLaMA (7B/13B/33B/65B), BLOOM & BLOOMZ (560M/1.1B/1.7B/3B/7.1B/176B), and baichuan (7B); supports multiple fine-tuning methods: LoRA and QLoRA|
 |[COIG Chinese fine-tuning dataset](https://github.com/BAAI-Zlab/COIG) | ![GitHub Repo stars](https://badgen.net/github/stars/BAAI-Zlab/COIG) | Chinese Open Instruction Generalist (COIG) project aims to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. |The Chinese Open Instruction Generalist (COIG) project aims to maintain a harmless, helpful, and diverse set of Chinese instruction corpora.|
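
For readers who want to try the newly added open_llama entry, below is a minimal sketch of loading it through the 🤗 transformers library listed in the fine-tuning table. It assumes the `openlm-research/open_llama_7b` checkpoint on the Hugging Face Hub and locally installed `torch`, `transformers`, and `accelerate` packages; it follows the usage pattern documented in the OpenLLaMA README and is illustrative only, not part of the patch itself.

```python
# Minimal sketch: load OpenLLaMA with Hugging Face transformers and generate text.
# Assumes `pip install torch transformers accelerate` and that the
# openlm-research/open_llama_7b checkpoint is available on the Hugging Face Hub.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "openlm-research/open_llama_7b"

# OpenLLaMA reproduces the LLaMA architecture, so the stock LLaMA classes apply.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit on a single consumer GPU
    device_map="auto",          # let accelerate place weights across GPU/CPU
)

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generate a short greedy continuation of the prompt.
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```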