From c6a3d0dfd3004777d9423e7d3430cbcabfc1c8c4 Mon Sep 17 00:00:00 2001
From: joyenjoye
Date: Sat, 1 Jul 2023 16:04:49 +0800
Subject: [PATCH] Create consistent format for all chapters and align the
 title of each chapter with the readme
---
.../1.开篇介绍.md | 33 -----
.../1.简介.ipynb | 87 ++++++++++++
.../2.模型、提示和解析器.ipynb | 129 +++++++++---------
.../{3.存储 .ipynb => 3.存储.ipynb} | 65 ++++-----
.../4.模型链.ipynb | 104 ++++++--------
....文档问答.ipynb => 5.基于文档的问答.ipynb} | 56 +++++---
.../6.评估.ipynb | 70 ++++++----
.../7.代理.ipynb | 50 +++++--
.../8.总结.ipynb | 64 +++++++++
.../8.课程总结.md | 20 ---
.../readme.md | 2 +-
11 files changed, 397 insertions(+), 283 deletions(-)
delete mode 100644 content/LangChain for LLM Application Development/1.开篇介绍.md
create mode 100644 content/LangChain for LLM Application Development/1.简介.ipynb
rename content/LangChain for LLM Application Development/{3.存储 .ipynb => 3.存储.ipynb} (97%)
rename content/LangChain for LLM Application Development/{5.文档问答.ipynb => 5.基于文档的问答.ipynb} (94%)
create mode 100644 content/LangChain for LLM Application Development/8.总结.ipynb
delete mode 100644 content/LangChain for LLM Application Development/8.课程总结.md
diff --git a/content/LangChain for LLM Application Development/1.开篇介绍.md b/content/LangChain for LLM Application Development/1.开篇介绍.md
deleted file mode 100644
index 67e9cbd..0000000
--- a/content/LangChain for LLM Application Development/1.开篇介绍.md
+++ /dev/null
@@ -1,33 +0,0 @@
-## 吴恩达 LangChain大模型应用开发 开端篇
-
-## LangChain for LLM Application Development
-
-欢迎来到LangChain大模型应用开发短期课程👏🏻👏🏻
-
-本课程由哈里森·蔡斯 (Harrison Chase,LangChain作者)与Deeplearning.ai合作开发,旨在教大家使用这个神奇工具。
-
-### 🚀 LangChain的诞生和发展
-
-通过对LLM或大型语言模型给出提示(prompt),现在可以比以往更快地开发AI应用程序,但是一个应用程序可能需要进行多轮提示以及解析输出。
-
-在此过程有很多胶水代码需要编写,基于此需求,哈里森·蔡斯 (Harrison Chase) 创建了LangChain,使开发过程变得更加丝滑。
-
-LangChain开源社区快速发展,贡献者已达数百人,正以惊人的速度更新代码和功能。
-
-
-### 📚 课程基本内容
-
-LangChain是用于构建大模型应用程序的开源框架,有Python和JavaScript两个不同版本的包。LangChain基于模块化组合,有许多单独的组件,可以一起使用或单独使用。此外LangChain还拥有很多应用案例,帮助我们了解如何将这些模块化组件以链式方式组合,以形成更多端到端的应用程序 。
-
-在本课程中,我们将介绍LandChain的常见组件。具体而言我们会讨论一下几个方面
-- 模型(Models)
-- 提示(Prompts): 使模型执行操作的方式
-- 索引(Indexes): 获取数据的方式,可以与模型结合使用
-- 链式(Chains): 端到端功能实现
-- 代理(Agents): 使用模型作为推理引擎
-
-
-
-### 🌹致谢课程重要贡献者
-
-最后特别感谢Ankush Gholar(LandChain的联合作者)、Geoff Ladwig,、Eddy Shyu 以及 Diala Ezzedine,他们也为本课程内容贡献颇多~
\ No newline at end of file
diff --git a/content/LangChain for LLM Application Development/1.简介.ipynb b/content/LangChain for LLM Application Development/1.简介.ipynb
new file mode 100644
index 0000000..62cee6b
--- /dev/null
+++ b/content/LangChain for LLM Application Development/1.简介.ipynb
@@ -0,0 +1,87 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "cfab521b-77fa-41be-a964-1f50f2ef4689",
+ "metadata": {},
+ "source": [
+ "# 1. 简介\n",
+ "\n",
+ "\n",
+ "欢迎来到LangChain大模型应用开发短期课程👏🏻👏🏻\n",
+ "\n",
+ "本课程由哈里森·蔡斯 (Harrison Chase,LangChain作者)与Deeplearning.ai合作开发,旨在教大家使用这个神奇工具。\n",
+ "\n",
+ "\n",
+ "\n",
+ "## 1.1 LangChain的诞生和发展\n",
+ "\n",
+ "通过对LLM或大型语言模型给出提示(prompt),现在可以比以往更快地开发AI应用程序,但是一个应用程序可能需要进行多轮提示以及解析输出。\n",
+ "\n",
+ "在此过程有很多胶水代码需要编写,基于此需求,哈里森·蔡斯 (Harrison Chase) 创建了LangChain,使开发过程变得更加丝滑。\n",
+ "\n",
+ "LangChain开源社区快速发展,贡献者已达数百人,正以惊人的速度更新代码和功能。\n",
+ "\n",
+ "\n",
+ "## 1.2 课程基本内容\n",
+ "\n",
+ "LangChain是用于构建大模型应用程序的开源框架,有Python和JavaScript两个不同版本的包。LangChain基于模块化组合,有许多单独的组件,可以一起使用或单独使用。此外LangChain还拥有很多应用案例,帮助我们了解如何将这些模块化组件以链式方式组合,以形成更多端到端的应用程序 。\n",
+ "\n",
+ "在本课程中,我们将介绍LandChain的常见组件。具体而言我们会讨论一下几个方面\n",
+ "- 模型(Models)\n",
+ "- 提示(Prompts): 使模型执行操作的方式\n",
+ "- 索引(Indexes): 获取数据的方式,可以与模型结合使用\n",
+ "- 链式(Chains): 端到端功能实现\n",
+ "- 代理(Agents): 使用模型作为推理引擎\n",
+ "\n",
+ " \n",
+ "\n",
+ "## 1.3 致谢课程重要贡献者\n",
+ "\n",
+ "最后特别感谢Ankush Gholar(LandChain的联合作者)、Geoff Ladwig,、Eddy Shyu 以及 Diala Ezzedine,他们也为本课程内容贡献颇多~ "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e3618ca8",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": true
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/content/LangChain for LLM Application Development/2.模型、提示和解析器.ipynb b/content/LangChain for LLM Application Development/2.模型、提示和解析器.ipynb
index d06b746..35a85ea 100644
--- a/content/LangChain for LLM Application Development/2.模型、提示和解析器.ipynb
+++ b/content/LangChain for LLM Application Development/2.模型、提示和解析器.ipynb
@@ -1,38 +1,58 @@
{
"cells": [
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
- "# 模型,提示和输出解释器\n",
+ "# 2. 模型,提示和输出解释器\n",
"\n",
- "\n",
- "**目录**\n",
- "* 获取你的OpenAI API Key\n",
- "* 直接调用OpenAI的API\n",
- "* 通过LangChain进行的API调用:\n",
- " * 提示(Prompts)\n",
- " * [模型(Models)](#model)\n",
- " * 输出解析器(Output parsers)\n",
- " "
+ "\n",
+    "**目录**\n",
+    "\n",
+    "- 2.1 获取你的OpenAI API Key\n",
+    "- 2.2 使用Chat API:OpenAI\n",
+    "- 2.3 使用Chat API:LangChain\n",
+    "- 2.4 补充材料\n",
+    ""
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
- "## 获取你的OpenAI API Key\n",
+ "## 2.1 获取你的OpenAI API Key\n",
"\n",
"登陆[OpenAI账户获取你的API Key](https://platform.openai.com/account/api-keys) "
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -60,13 +80,12 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
- "## Chat API:OpenAI\n",
+ "## 2.2 使用Chat API:OpenAI\n",
"\n",
"我们先从直接调用OpenAI的API开始。\n",
"\n",
@@ -99,13 +118,12 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
- "### 一个简单的例子\n",
+ "### 2.2.1 一个简单的例子\n",
"\n",
"我们来一个简单的例子 - 分别用中英文问问模型\n",
"\n",
@@ -156,11 +174,10 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 复杂一点例子\n",
+ "### 2.2.2 复杂一点的例子\n",
"\n",
"上面的简单例子,模型`gpt-3.5-turbo`对我们的关于1+1是什么的提问给出了回答。\n",
"\n",
@@ -188,7 +205,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -215,7 +231,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -255,7 +270,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -296,7 +310,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -310,13 +323,12 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
- "## Chat API:LangChain\n",
+ "## 2.3 使用Chat API:LangChain\n",
"\n",
"在前面一部分,我们通过封装函数`get_completion`直接调用了OpenAI完成了对海岛邮件进行了翻译,得到用平和尊重的语气、美式英语表达的邮件。\n",
"\n",
@@ -337,13 +349,12 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
- "### 模型\n",
+ "### 2.3.1 模型\n",
"\n",
"从`langchain.chat_models`导入`OpenAI`的对话模型`ChatOpenAI`。 除去OpenAI以外,`langchain.chat_models`还集成了其他对话模型,更多细节可以查看[Langchain官方文档](https://python.langchain.com/en/latest/modules/models/chat/integrations.html)。"
]
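上面的补丁块把“模型”一节编号为 2.3.1,该节介绍从 `langchain.chat_models` 导入 `ChatOpenAI`,并与提示模板配合使用。下面补充一个最小示例草图(非补丁/笔记本原文),假设使用课程对应的 langchain 0.0.x 版本、环境变量中已设置 OPENAI_API_KEY,模板文字与变量取值仅作演示:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# 创建对话模型;temperature=0.0 让输出尽量确定
chat = ChatOpenAI(temperature=0.0)

# 用模板构造提示,{style} 和 {text} 是占位符
prompt_template = ChatPromptTemplate.from_template(
    "请把下面的文本改写成{style}:\n{text}"
)

# 填充占位符得到消息列表,再调用模型
messages = prompt_template.format_messages(
    style="语气平和、正式的普通话",
    text="嘿,我的搅拌机盖子飞了出去,把厨房墙上溅得全是果汁!",
)
response = chat(messages)
print(response.content)
```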
@@ -385,7 +396,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -393,11 +403,10 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 提示模板\n",
+ "### 2.3.2 提示模板\n",
"\n",
"在前面的例子中,我们通过[f字符串](https://docs.python.org/zh-cn/3/tutorial/inputoutput.html#tut-f-strings)把Python表达式的值`style`和`customer_email`添加到`prompt`字符串内。\n",
"\n",
@@ -412,7 +421,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -437,7 +445,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -481,7 +488,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -511,7 +517,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -554,7 +559,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -596,7 +600,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -625,7 +628,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -665,7 +667,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -735,7 +736,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -765,7 +765,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
@@ -777,7 +776,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -861,15 +859,13 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 输出解析器"
+ "### 2.3.3 输出解析器"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -906,7 +902,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -941,7 +936,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -968,7 +962,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -985,7 +978,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1016,7 +1008,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1066,7 +1057,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1074,7 +1064,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1106,7 +1095,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1123,7 +1111,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1183,7 +1170,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1237,7 +1223,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1269,7 +1254,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1300,7 +1284,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1349,15 +1332,13 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 补充材料"
+ "## 2.4 补充材料"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1435,8 +1416,30 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
- }
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {
+ "height": "calc(100% - 180px)",
+ "left": "10px",
+ "top": "150px",
+ "width": "165px"
+ },
+ "toc_section_display": true,
+ "toc_window_display": true
+ },
+ "toc-autonumbering": false,
+ "toc-showcode": false,
+ "toc-showmarkdowntxt": false,
+ "toc-showtags": false
},
"nbformat": 4,
"nbformat_minor": 4
diff --git a/content/LangChain for LLM Application Development/3.存储 .ipynb b/content/LangChain for LLM Application Development/3.存储.ipynb
similarity index 97%
rename from content/LangChain for LLM Application Development/3.存储 .ipynb
rename to content/LangChain for LLM Application Development/3.存储.ipynb
index 72f5059..305bc33 100644
--- a/content/LangChain for LLM Application Development/3.存储 .ipynb
+++ b/content/LangChain for LLM Application Development/3.存储.ipynb
@@ -1,12 +1,14 @@
{
"cells": [
{
- "attachments": {},
"cell_type": "markdown",
"id": "a786c77c",
"metadata": {},
"source": [
- "# LangChain: Memory(记忆)\n",
+ "# 3. 储存\n",
+ "\n",
+ "\n",
+ "\n",
"当你与那些语言模型进行交互的时候,他们不会记得你之前和他进行的交流内容,这在我们构建一些应用程序(如聊天机器人)的时候,是一个很大的问题---显得不够智能!\n",
"\n",
"因此,在本节中我们将介绍LangChain 中的 **Memory(记忆)** 模块,即他是如何将先前的对话嵌入到语言模型中的,使其具有连续对话的能力\n",
@@ -29,7 +31,7 @@
" \n",
"此次课程主要介绍其中四种记忆模块,其他模块可查看文档学习。\n",
"\n",
- "## 大纲\n",
+ "\n",
"* ConversationBufferMemory(对话缓存记忆)\n",
"* ConversationBufferWindowMemory(对话缓存窗口记忆)\n",
"* ConversationTokenBufferMemory(对话令牌缓存记忆)\n",
@@ -37,7 +39,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7e10db6f",
"metadata": {},
@@ -48,18 +49,16 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "1297dcd5",
"metadata": {},
"source": [
- "## ConversationBufferMemory(对话缓存记忆) \n",
+ "## 3.1 对话缓存储存 \n",
" \n",
"这种记忆允许存储消息,然后从变量中提取消息。"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "0768ca9b",
"metadata": {},
@@ -68,7 +67,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "d76f6ba7",
"metadata": {},
@@ -133,7 +131,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "dea83837",
"metadata": {},
@@ -142,7 +139,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "1a3b4c42",
"metadata": {},
@@ -232,7 +228,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "e71564ad",
"metadata": {},
@@ -241,7 +236,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "54d006bd",
"metadata": {},
@@ -345,7 +339,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "33cb734b",
"metadata": {},
@@ -354,7 +347,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "0393df3d",
"metadata": {},
@@ -462,7 +454,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "5a96a8d9",
"metadata": {},
@@ -532,7 +523,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "6bd222c3",
"metadata": {},
@@ -541,7 +531,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "0b5de846",
"metadata": {},
@@ -595,7 +584,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "07d2e892",
"metadata": {},
@@ -698,7 +686,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "2ac544f2",
"metadata": {},
@@ -767,7 +754,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "8839314a",
"metadata": {},
@@ -778,12 +764,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "cf98e9ff",
"metadata": {},
"source": [
- "## ConversationBufferWindowMemory(对话缓存窗口记忆)\n",
+ "## 3.2 对话缓存窗口储存\n",
" \n",
"随着对话变得越来越长,所需的内存量也变得非常长。将大量的tokens发送到LLM的成本,也会变得更加昂贵,这也就是为什么API的调用费用,通常是基于它需要处理的tokens数量而收费的。\n",
" \n",
@@ -804,7 +789,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "641477a4",
"metadata": {},
@@ -863,7 +847,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "9b401f0b",
"metadata": {},
@@ -899,7 +882,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "63bda148",
"metadata": {},
@@ -927,7 +909,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "b6d661e3",
"metadata": {},
@@ -1005,7 +986,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "a1080168",
"metadata": {},
@@ -1038,16 +1018,14 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "d2931b92",
"metadata": {},
"source": [
- "## ConversationTokenBufferMemory(对话token缓存记忆)"
+ "## 3.3 对话token缓存储存"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "dff5b4c7",
"metadata": {},
@@ -1069,7 +1047,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "2187cfe6",
"metadata": {},
@@ -1094,7 +1071,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "f3a84112",
"metadata": {},
@@ -1121,7 +1097,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7b62b2e1",
"metadata": {},
@@ -1153,7 +1128,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "f7f6be43",
"metadata": {},
@@ -1188,7 +1162,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "5e4d918b",
"metadata": {},
@@ -1204,16 +1177,14 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "5ff55d5d",
"metadata": {},
"source": [
- "## ConversationSummaryBufferMemory(对话摘要缓存记忆)"
+ "## 3.4 对话摘要缓存储存"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7d39b83a",
"metadata": {},
@@ -1247,7 +1218,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "6572ef39",
"metadata": {},
@@ -1305,7 +1275,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7ccb97b6",
"metadata": {},
@@ -1396,7 +1365,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "4ba827aa",
"metadata": {
@@ -1530,7 +1498,20 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.9"
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": true
}
},
"nbformat": 4,
diff --git a/content/LangChain for LLM Application Development/4.模型链.ipynb b/content/LangChain for LLM Application Development/4.模型链.ipynb
index 4eadf48..396fe07 100644
--- a/content/LangChain for LLM Application Development/4.模型链.ipynb
+++ b/content/LangChain for LLM Application Development/4.模型链.ipynb
@@ -1,30 +1,29 @@
{
"cells": [
{
- "attachments": {},
+ "cell_type": "markdown",
+ "id": "7ee04154",
+ "metadata": {
+ "toc": true
+ },
+ "source": [
+ "4. 模型链"
+ ]
+ },
+ {
"cell_type": "markdown",
"id": "52824b89-532a-4e54-87e9-1410813cd39e",
"metadata": {},
"source": [
- "# Chains in LangChain(LangChain中的链)\n",
- "\n",
- "## Outline\n",
- "\n",
- "* LLMChain(大语言模型链)\n",
- "* Sequential Chains(顺序链)\n",
- " * SimpleSequentialChain\n",
- " * SequentialChain\n",
- "* Router Chain(路由链)"
+ "# 4. 模型链"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "54810ef7",
"metadata": {},
"source": [
- "### 为什么我们需要Chains ?\n",
- "链允许我们将多个组件组合在一起,以创建一个单一的、连贯的应用程序。链(Chains)通常将一个LLM(大语言模型)与提示结合在一起,使用这个构建块,您还可以将一堆这些构建块组合在一起,对您的文本或其他数据进行一系列操作。例如,我们可以创建一个链,该链接受用户输入,使用提示模板对其进行格式化,然后将格式化的响应传递给LLM。我们可以通过将多个链组合在一起,或者通过将链与其他组件组合在一起来构建更复杂的链。"
+ "模型链允许我们将多个组件组合在一起,以创建一个单一的、连贯的应用程序。链(Chains)通常将一个LLM(大语言模型)与提示结合在一起,使用这个构建块,您还可以将一堆这些构建块组合在一起,对您的文本或其他数据进行一系列操作。例如,我们可以创建一个链,该链接受用户输入,使用提示模板对其进行格式化,然后将格式化的响应传递给LLM。我们可以通过将多个链组合在一起,或者通过将链与其他组件组合在一起来构建更复杂的链。"
]
},
{
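上面新增的说明概括了链的基本用法:接受用户输入,用提示模板格式化,再交给 LLM。下面给出一个对应的最小示例草图(非补丁/笔记本原文),假设使用 langchain 0.0.x 并已设置 OPENAI_API_KEY,提示内容与输入仅作演示:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.9)

# 提示模板:接收用户输入 product,格式化后交给 LLM
prompt = ChatPromptTemplate.from_template(
    "描述制造{product}的公司,最好的名称是什么?只回答一个名称。"
)
chain = LLMChain(llm=llm, prompt=prompt)

# 运行链:输入 -> 按模板格式化 -> 调用 LLM -> 返回文本
print(chain.run(product="大号床单套装"))
```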
@@ -64,7 +63,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "663fc885",
"metadata": {},
@@ -163,21 +161,19 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "b940ce7c",
"metadata": {},
"source": [
- "## 1. LLMChain"
+ "## 4.1 大语言模型链"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "e000bd16",
"metadata": {},
"source": [
- "LLMChain是一个简单但非常强大的链,也是后面我们将要介绍的许多链的基础。"
+ "大语言模型链(LLMChain)是一个简单但非常强大的链,也是后面我们将要介绍的许多链的基础。"
]
},
{
@@ -193,7 +189,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "94a32c6f",
"metadata": {},
@@ -213,7 +208,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "81887434",
"metadata": {},
@@ -235,7 +229,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "5c22cb13",
"metadata": {},
@@ -254,7 +247,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "8d7d5ff6",
"metadata": {},
@@ -285,7 +277,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "1e1ede1c",
"metadata": {},
@@ -321,21 +312,19 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "49158430",
"metadata": {},
"source": [
- "## 2. Sequential Chains"
+ "## 4.2 顺序链"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "69b03469",
"metadata": {},
"source": [
- "### 2.1 SimpleSequentialChain\n",
+ "### 4.2.1 简单顺序链\n",
"\n",
"顺序链(Sequential Chains)是按预定义顺序执行其链接的链。具体来说,我们将使用简单顺序链(SimpleSequentialChain),这是顺序链的最简单类型,其中每个步骤都有一个输入/输出,一个步骤的输出是下一个步骤的输入"
]
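上面的说明指出,简单顺序链的每个步骤只有一个输入和一个输出,前一步的输出就是后一步的输入。下面把两个 LLMChain 串成一个 SimpleSequentialChain 作最小示例草图(非补丁/笔记本原文),假设条件同前,提示内容仅作演示:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0.9)

# 子链一:根据产品名起一个公司名
first_prompt = ChatPromptTemplate.from_template("描述制造{product}的公司,最好的名称是什么?")
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# 子链二:为上一步得到的公司名写一段20字左右的描述
second_prompt = ChatPromptTemplate.from_template("为下面这家公司写一段20字左右的描述:{company_name}")
chain_two = LLMChain(llm=llm, prompt=second_prompt)

# 简单顺序链:chain_one 的输出自动作为 chain_two 的输入
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
print(overall_chain.run("大号床单套装"))
```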
@@ -362,7 +351,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "0e732589",
"metadata": {},
@@ -389,7 +377,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "dcfca7bd",
"metadata": {},
@@ -415,7 +402,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "3a1991f4",
"metadata": {},
@@ -436,7 +422,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "5122f26a",
"metadata": {},
@@ -530,16 +515,14 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7b5ce18c",
"metadata": {},
"source": [
- "### 2.2 SequentialChain"
+ "### 4.2.2 顺序链"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "1e69f4c0",
"metadata": {},
@@ -563,7 +546,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "3d4be4e8",
"metadata": {},
@@ -583,7 +565,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "9811445c",
"metadata": {},
@@ -690,7 +671,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "0509de01",
"metadata": {},
@@ -751,16 +731,14 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "3041ea4c",
"metadata": {},
"source": [
- "## 3. Router Chain(路由链)"
+ "## 4.3. 路由链"
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "f0c32f97",
"metadata": {},
@@ -769,7 +747,7 @@
"\n",
"一个相当常见但基本的操作是根据输入将其路由到一条链,具体取决于该输入到底是什么。如果你有多个子链,每个子链都专门用于特定类型的输入,那么可以组成一个路由链,它首先决定将它传递给哪个子链,然后将它传递给那个链。\n",
"\n",
- "路由器由两个组件组成:\n",
+ "Router Chain(路由链)由两个组件组成:\n",
"\n",
"- 路由器链本身(负责选择要调用的下一个链)\n",
"- destination_chains:路由器链可以路由到的链\n",
@@ -778,12 +756,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "cb1b4708",
"metadata": {},
"source": [
- "### 定义提示模板"
+ "### 4.3.1 定义提示模板"
]
},
{
@@ -846,7 +823,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "6922b35e",
"metadata": {},
@@ -886,12 +862,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "80eb1de8",
"metadata": {},
"source": [
- "### 导入相关的包"
+ "### 4.3.2 导入相关的包"
]
},
{
@@ -907,12 +882,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "50c16f01",
"metadata": {},
"source": [
- "### 定义语言模型"
+ "### 4.3.3 定义语言模型"
]
},
{
@@ -927,12 +901,13 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "8795cd42",
"metadata": {},
"source": [
- "### LLMRouterChain(此链使用 LLM 来确定如何路由事物)\n",
+ "### 4.3.4 大语言模型路由链\n",
+ "\n",
+ "大语言模型路由链(LLMRouterChain)使用大语言模型(LLM)来确定如何路由事物\n",
"\n",
"在这里,我们需要一个**多提示链**。这是一种特定类型的链,用于在多个不同的提示模板之间进行路由。\n",
"但是,这只是你可以路由的一种类型。你也可以在任何类型的链之间进行路由。\n",
@@ -942,7 +917,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "46633b43",
"metadata": {},
@@ -972,7 +946,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "eba115de",
"metadata": {},
@@ -993,7 +966,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "948700c4",
"metadata": {},
@@ -1041,7 +1013,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "de5c46d0",
"metadata": {},
@@ -1075,7 +1046,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7e92355c",
"metadata": {},
@@ -1099,7 +1069,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "086503f7",
"metadata": {},
@@ -1108,7 +1077,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "969cd878",
"metadata": {},
@@ -1183,7 +1151,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "289c5ca9",
"metadata": {},
@@ -1225,7 +1192,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "4186a2b9",
"metadata": {},
@@ -1318,7 +1284,25 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.16"
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": true,
+ "toc_position": {
+ "height": "calc(100% - 180px)",
+ "left": "10px",
+ "top": "150px",
+ "width": "261.818px"
+ },
+ "toc_section_display": true,
+ "toc_window_display": true
}
},
"nbformat": 4,
diff --git a/content/LangChain for LLM Application Development/5.文档问答.ipynb b/content/LangChain for LLM Application Development/5.基于文档的问答.ipynb
similarity index 94%
rename from content/LangChain for LLM Application Development/5.文档问答.ipynb
rename to content/LangChain for LLM Application Development/5.基于文档的问答.ipynb
index 6993bc7..811e8df 100644
--- a/content/LangChain for LLM Application Development/5.文档问答.ipynb
+++ b/content/LangChain for LLM Application Development/5.基于文档的问答.ipynb
@@ -1,11 +1,20 @@
{
"cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f200ba9a",
+ "metadata": {},
+ "source": [
+ "# 5 基于文档的问答 \n",
+ ""
+ ]
+ },
{
"cell_type": "markdown",
"id": "52824b89-532a-4e54-87e9-1410813cd39e",
"metadata": {},
"source": [
- "# 第四章 基于LangChain的文档问答\n",
+ "\n",
"本章内容主要利用langchain构建向量数据库,可以在文档上方或关于文档回答问题,因此,给定从PDF文件、网页或某些公司的内部文档收集中提取的文本,使用llm回答有关这些文档内容的问题"
]
},
@@ -16,18 +25,22 @@
"height": 30
},
"source": [
- "## 环境配置\n",
+ "\n",
"\n",
"安装langchain,设置chatGPT的OPENAI_API_KEY\n",
+ "\n",
"* 安装langchain\n",
+ "\n",
"```\n",
"pip install langchain\n",
"```\n",
"* 安装docarray\n",
+ "\n",
"```\n",
"pip install docarray\n",
"```\n",
"* 设置API-KEY环境变量\n",
+ "\n",
"```\n",
"export OPENAI_API_KEY='api-key'\n",
"\n",
@@ -81,7 +94,7 @@
"height": 30
},
"source": [
- "### 导入embedding模型和向量存储组件\n",
+ "## 5.1 导入embedding模型和向量存储组件\n",
"使用Dock Array内存搜索向量存储,作为一个内存向量存储,不需要连接外部数据库"
]
},
@@ -264,7 +277,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "3bd6422c",
"metadata": {},
@@ -273,12 +285,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "2963fc63",
"metadata": {},
"source": [
- "#### 创建向量存储\n",
+ "### 5.1.2 创建向量存储\n",
"将导入一个索引,即向量存储索引创建器"
]
},
@@ -373,7 +384,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "eb74cc79",
"metadata": {},
@@ -382,12 +392,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "dd34e50e",
"metadata": {},
"source": [
- "#### 使用语言模型与文档结合使用\n",
+ "### 5.1.3 使用语言模型与文档结合使用\n",
"想要使用语言模型并将其与我们的许多文档结合使用,但是语言模型一次只能检查几千个单词,如果我们有非常大的文档,如何让语言模型回答关于其中所有内容的问题呢?通过embedding和向量存储实现\n",
"* embedding \n",
"文本片段创建数值表示文本语义,相似内容的文本片段将具有相似的向量,这使我们可以在向量空间中比较文本片段\n",
@@ -594,12 +603,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "fe41b36f",
"metadata": {},
"source": [
- "### 如何回答我们文档的相关问题\n",
+ "## 5.2 如何回答我们文档的相关问题\n",
"首先,我们需要从这个向量存储中创建一个检索器,检索器是一个通用接口,可以由任何接受查询并返回文档的方法支持。接下来,因为我们想要进行文本生成并返回自然语言响应\n"
]
},
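上面两个补丁块分别解释了 embedding 与向量存储,以及“向量存储 → 检索器 → 结合语言模型生成回答”的流程。下面给出一个把这些组件串起来的最小示例草图(非补丁/笔记本原文),假设使用 langchain 0.0.x、已安装 docarray 并设置 OPENAI_API_KEY;CSV 文件名沿用课程的示例数据,仅作占位:

```python
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import CSVLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import RetrievalQA

# 加载文档(路径仅作占位)
loader = CSVLoader(file_path="OutdoorClothingCatalog_1000.csv")
docs = loader.load()

# 用 embedding 把文本片段映射成向量,存入内存向量数据库
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)

# 向量数据库 -> 检索器;检索器 + 语言模型 -> 基于文档的问答链
retriever = db.as_retriever()
llm = ChatOpenAI(temperature=0.0)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # stuff:把检索到的所有片段一起放进同一个提示
    retriever=retriever,
    verbose=True,
)

print(qa.run("请列出所有具有防晒功能的衬衫,并总结它们的特点。"))
```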
@@ -685,7 +693,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "12f042e7",
"metadata": {},
@@ -779,7 +786,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "e28c5657",
"metadata": {},
@@ -788,12 +794,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "44f1fa38",
"metadata": {},
"source": [
- "#### 不同类型的chain链\n",
+ "### 5.2.1 不同类型的chain链\n",
"想在许多不同类型的块上执行相同类型的问答,该怎么办?之前的实验中只返回了4个文档,如果有多个文档,那么我们可以使用几种不同的方法\n",
"* Map Reduce \n",
"将所有块与问题一起传递给语言模型,获取回复,使用另一个语言模型调用将所有单独的回复总结成最终答案,它可以在任意数量的文档上运行。可以并行处理单个问题,同时也需要更多的调用。它将所有文档视为独立的\n",
@@ -804,12 +809,6 @@
"* Stuff \n",
"将所有内容组合成一个文档"
]
- },
- {
- "cell_type": "markdown",
- "id": "7988f412",
- "metadata": {},
- "source": []
}
],
"metadata": {
@@ -828,7 +827,20 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.16"
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": true
}
},
"nbformat": 4,
diff --git a/content/LangChain for LLM Application Development/6.评估.ipynb b/content/LangChain for LLM Application Development/6.评估.ipynb
index 081a749..1125634 100644
--- a/content/LangChain for LLM Application Development/6.评估.ipynb
+++ b/content/LangChain for LLM Application Development/6.评估.ipynb
@@ -1,12 +1,14 @@
{
"cells": [
{
- "attachments": {},
"cell_type": "markdown",
"id": "52824b89-532a-4e54-87e9-1410813cd39e",
"metadata": {},
"source": [
- "# 第五章 如何评估基于LLM的应用程序\n",
+ "# 6. 评估\n",
+ "\n",
+ "\n",
+ "\n",
"当使用llm构建复杂应用程序时,评估应用程序的表现是一个重要但有时棘手的步骤,它是否满足某些准确性标准?\n",
"通常更有用的是从许多不同的数据点中获得更全面的模型表现情况\n",
"一种是使用语言模型本身和链本身来评估其他语言模型、其他链和其他应用程序"
@@ -29,12 +31,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "28008949",
"metadata": {},
"source": [
- "## 创建LLM应用\n",
+ "## 6.1 创建LLM应用\n",
"按照langchain链的方式进行构建"
]
},
@@ -255,12 +256,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "791ebd73",
"metadata": {},
"source": [
- "### 创建评估数据点\n",
+ "### 6.1.1 创建评估数据点\n",
"们需要做的第一件事是真正弄清楚我们想要评估它的一些数据点,我们将介绍几种不同的方法来完成这个任务\n",
"1、将自己想出好的数据点作为例子,查看一些数据,然后想出例子问题和答案,以便以后用于评估"
]
@@ -312,7 +312,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "b9c52116",
"metadata": {},
@@ -321,12 +320,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "8d548aef",
"metadata": {},
"source": [
- "### 创建测试用例数据\n"
+ "### 6.1.2 创建测试用例数据\n"
]
},
{
@@ -353,7 +351,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "b73ce510",
"metadata": {},
@@ -363,12 +360,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "c7ce3e4f",
"metadata": {},
"source": [
- "### 通过LLM生成测试用例"
+ "### 6.1.3 通过LLM生成测试用例"
]
},
{
@@ -487,12 +483,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "faf25f2f",
"metadata": {},
"source": [
- "### 组合用例数据"
+ "### 6.1.4 组合用例数据"
]
},
{
@@ -542,12 +537,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "63f3cb08",
"metadata": {},
"source": [
- "## 人工评估\n",
+ "## 6.2 人工评估\n",
"现在有了这些示例,但是我们如何评估正在发生的事情呢?\n",
"通过运行一个示例通过链,并查看它产生的输出\n",
"在这里我们传递一个查询,然后我们得到一个答案。实际上正在发生的事情,进入语言模型的实际提示是什么? \n",
@@ -653,7 +647,6 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "8dee0f24",
"metadata": {},
@@ -669,13 +662,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "7b37c7bc",
"metadata": {},
"source": [
- "##### 如何评估新创建的实例\n",
- "与创建它们类似,可以运行链条来处理所有示例,然后查看输出并尝试弄清楚,发生了什么,它是否正确"
+ "**如何评估新创建的实例**: 与创建它们类似,可以运行链条来处理所有示例,然后查看输出并尝试弄清楚,发生了什么,它是否正确"
]
},
{
@@ -692,12 +683,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "d5bdbdce",
"metadata": {},
"source": [
- "## 通过LLM进行评估实例"
+ "## 6.3 通过LLM进行评估实例"
]
},
{
@@ -872,12 +862,11 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "9ad64f72",
"metadata": {},
"source": [
- "##### 评估思路\n",
+ "### 6.3.1 评估思路\n",
"当它面前有整个文档时,它可以生成一个真实的答案,我们将打印出预测的答,当它进行QA链时,使用embedding和向量数据库进行检索时,将其传递到语言模型中,然后尝试猜测预测的答案,我们还将打印出成绩,这也是语言模型生成的。当它要求评估链评估正在发生的事情时,以及它是否正确或不正确。因此,当我们循环遍历所有这些示例并将它们打印出来时,可以详细了解每个示例"
]
},
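上面的“评估思路”描述的流程是:先让问答链给出预测答案,再让另一个语言模型链为每条回答打分。下面是一个基于 QAEvalChain 的最小示例草图(非补丁/笔记本原文),假设使用课程对应的 langchain 0.0.x 版本,examples/predictions 的字段名沿用该版本的默认约定,数据内容为虚构示意:

```python
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa import QAEvalChain

llm = ChatOpenAI(temperature=0.0)

# examples:人工编写或由 LLM 生成的“问题 + 标准答案”
examples = [
    {"query": "舒适的套头衫套装有侧口袋吗?", "answer": "有"},
]

# predictions:问答链(例如 RetrievalQA)对同一批问题给出的回答
# 这里手写一个示意结果,实际应来自 qa.apply(examples)
predictions = [
    {"query": "舒适的套头衫套装有侧口袋吗?", "result": "是的,该套装带有侧口袋。"},
]

# 用另一个 LLM 链给每条回答打分(CORRECT / INCORRECT)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions)

for example, grade in zip(examples, graded_outputs):
    print(example["query"], "->", grade["text"])
```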
@@ -950,18 +939,25 @@
]
},
{
- "attachments": {},
"cell_type": "markdown",
"id": "87ecb476",
"metadata": {},
"source": [
- "#### 结果分析\n",
+ "### 6.3.2 结果分析\n",
"对于每个示例,它看起来都是正确的,让我们看看第一个例子。\n",
"这里的问题是,舒适的套头衫套装,有侧口袋吗?真正的答案,我们创建了这个,是肯定的。模型预测的答案是舒适的套头衫套装条纹,确实有侧口袋。因此,我们可以理解这是一个正确的答案。它将其评为正确。 \n",
- "#### 使用模型评估的优势\n",
+ "### 6.3.3 使用模型评估的优势\n",
"\n",
"你有这些答案,它们是任意的字符串。没有单一的真实字符串是最好的可能答案,有许多不同的变体,只要它们具有相同的语义,它们应该被评为相似。如果使用正则进行精准匹配就会丢失语义信息,到目前为止存在的许多评估指标都不够好。目前最有趣和最受欢迎的之一就是使用语言模型进行评估。"
]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e4014482",
+ "metadata": {},
+ "outputs": [],
+ "source": []
}
],
"metadata": {
@@ -980,7 +976,25 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.16"
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {
+ "height": "calc(100% - 180px)",
+ "left": "10px",
+ "top": "150px",
+ "width": "261.818px"
+ },
+ "toc_section_display": true,
+ "toc_window_display": true
}
},
"nbformat": 4,
diff --git a/content/LangChain for LLM Application Development/7.代理.ipynb b/content/LangChain for LLM Application Development/7.代理.ipynb
index b7fe19b..c54fa3a 100644
--- a/content/LangChain for LLM Application Development/7.代理.ipynb
+++ b/content/LangChain for LLM Application Development/7.代理.ipynb
@@ -5,20 +5,29 @@
"id": "2caa79ba-45e3-437c-9cb6-e433f443f0bf",
"metadata": {},
"source": [
- "# 代理\n",
+ "# 7. 代理\n",
+ "\n",
+ "\n",
+ "\n",
"\n",
"大语言模型学习并记住许多的网络公开信息,大语言模型最常见的应用场景是,将它当作知识库,让它对给定的问题做出回答。\n",
"\n",
"另一种思路是将大语言模型当作推理引擎,让它基于已有的知识库,并利用新的信息(新的大段文本或者其他信息)来帮助回答问题或者进行推理LongChain的内置代理工具便是适用该场景。\n",
"\n",
- "本节我们将会了解什么是代理,如何创建代理, 如何使用代理,以及如何与不同类型的工具集成,例如搜索引擎。\n",
- "\n",
- "**目录**\n",
- "* 使用LangChain内置工具\n",
- " * 使用llm-math和wikipedia工具\n",
- " * 使用PythonREPLTool工具\n",
- "* 定义自己的工具并在代理中使用\n",
- " * 创建和使用自定义时间工具"
+ "本节我们将会了解什么是代理,如何创建代理, 如何使用代理,以及如何与不同类型的工具集成,例如搜索引擎。"
]
},
{
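上面的介绍把大语言模型当作推理引擎,由它决定何时调用哪个外部工具。下面是使用内置工具 llm-math 和 wikipedia 初始化代理的最小示例草图(非补丁/笔记本原文),假设使用 langchain 0.0.x、已设置 OPENAI_API_KEY 并安装了 wikipedia 包,问题内容仅作演示:

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(temperature=0.0)

# 加载内置工具:数学计算器 llm-math 和维基百科查询 wikipedia
tools = load_tools(["llm-math", "wikipedia"], llm=llm)

# 初始化代理:LLM 充当推理引擎,自行决定何时调用哪个工具
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,  # 模型输出解析失败时,把错误信息回传给模型重试
    verbose=True,
)

agent("300的25%是多少?")
```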
@@ -42,7 +51,7 @@
"id": "631c764b-68fa-483a-80a5-9a322cd1117c",
"metadata": {},
"source": [
- "## LangChain内置工具"
+ "## 7.1 LangChain内置工具"
]
},
{
@@ -76,7 +85,7 @@
"id": "d5a3655c-4d5e-4a86-8bf4-a11bd1525059",
"metadata": {},
"source": [
- "### 📚 使用llm-math和wikipedia工具"
+ "### 7.1.1 使用llm-math和wikipedia工具"
]
},
{
@@ -332,7 +341,7 @@
"id": "5901cab6-a7c9-4590-b35d-d41c29e39a39",
"metadata": {},
"source": [
- "### 📚 使用PythonREPLTool工具"
+ "### 7.1.2 使用PythonREPLTool工具"
]
},
{
@@ -656,7 +665,7 @@
"tags": []
},
"source": [
- "## 定义自己的工具并在代理中使用"
+ "## 7.2 定义自己的工具并在代理中使用"
]
},
{
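上面的补丁块把自定义工具一节编号为 7.2。自定义工具通常通过 `tool` 装饰器把一个带文档字符串的函数注册给代理,文档字符串会被代理用来判断何时调用该工具。下面是一个最小示例草图(非补丁/笔记本原文),沿用课程中“返回当前日期”的思路,函数名与用法仅作示意:

```python
from datetime import date
from langchain.agents import tool

@tool
def time(text: str) -> str:
    """返回今天的日期。无论输入什么都会忽略;
    任何与当前日期相关的问题都应调用此工具。"""
    return str(date.today())

# 之后把自定义工具追加进工具列表即可,例如(沿用上一个草图中的 llm 与 tools):
# agent = initialize_agent(
#     tools + [time], llm,
#     agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
#     handle_parsing_errors=True, verbose=True,
# )
# agent("今天的日期是?")
```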
@@ -675,7 +684,7 @@
"id": "fb92f6e4-21ab-494d-9c22-d0678050fd37",
"metadata": {},
"source": [
- "### 📚 创建和使用自定义时间工具"
+ "### 7.2.1 创建和使用自定义时间工具"
]
},
{
@@ -852,6 +861,19 @@
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": true
}
},
"nbformat": 4,
diff --git a/content/LangChain for LLM Application Development/8.总结.ipynb b/content/LangChain for LLM Application Development/8.总结.ipynb
new file mode 100644
index 0000000..1a3e885
--- /dev/null
+++ b/content/LangChain for LLM Application Development/8.总结.ipynb
@@ -0,0 +1,64 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "87f7cfaa",
+ "metadata": {},
+ "source": [
+ "# 8. 总结\n",
+ "\n",
+ "\n",
+ "本次简短课程涵盖了一系列LangChain的应用实践,包括处理顾客评论和基于文档回答问题,以及通过LLM判断何时求助外部工具 (如网站) 来回答复杂问题。\n",
+ "\n",
+ "**👍🏻 LangChain如此强大**\n",
+ "\n",
+ "构建这类应用曾经需要耗费数周时间,而现在只需要非常少的代码,就可以通过LangChain高效构建所需的应用程序。LangChain已成为开发大模型应用的有力范式,希望大家拥抱这个强大工具,积极探索更多更广泛的应用场景。\n",
+ "\n",
+ "**🌈 不同组合, 更多可能性**\n",
+ "\n",
+ "LangChain还可以协助我们做什么呢:基于CSV文件回答问题、查询sql数据库、与api交互,有很多例子通过Chain以及不同的提示(Prompts)和输出解析器(output parsers)组合得以实现。\n",
+ "\n",
+ "**💪🏻 出发 去探索新世界吧**\n",
+ "\n",
+ "因此非常感谢社区中做出贡献的每一个人,无论是协助文档的改进,还是让其他人更容易上手,还是构建新的Chain打开一个全新的世界。\n",
+ "\n",
+ "如果你还没有这样做,快去打开电脑,运行 pip install LangChain,然后去使用LangChain、搭建惊艳的应用吧~\n",
+ "\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": false,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": true
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/content/LangChain for LLM Application Development/8.课程总结.md b/content/LangChain for LLM Application Development/8.课程总结.md
deleted file mode 100644
index ce57191..0000000
--- a/content/LangChain for LLM Application Development/8.课程总结.md
+++ /dev/null
@@ -1,20 +0,0 @@
-## 吴恩达 LangChain大模型应用开发 总结篇
-
-## LangChain for LLM Application Development
-
-本次简短课程涵盖了一系列LangChain的应用实践,包括处理顾客评论和基于文档回答问题,以及通过LLM判断何时求助外部工具 (如网站) 来回答复杂问题。
-
-### 👍🏻 LangChain如此强大
-
-构建这类应用曾经需要耗费数周时间,而现在只需要非常少的代码,就可以通过LangChain高效构建所需的应用程序。LangChain已成为开发大模型应用的有力范式,希望大家拥抱这个强大工具,积极探索更多更广泛的应用场景。
-
-### 🌈 不同组合->更多可能性
-
-LangChain还可以协助我们做什么呢:基于CSV文件回答问题、查询sql数据库、与api交互,有很多例子通过Chain以及不同的提示(Prompts)和输出解析器(output parsers)组合得以实现。
-
-### 💪🏻 出发~去探索新世界吧~
-
-因此非常感谢社区中做出贡献的每一个人,无论是协助文档的改进,还是让其他人更容易上手,还是构建新的Chain打开一个全新的世界。
-
-如果你还没有这样做,快去打开电脑,运行 pip install LangChain,然后去使用LangChain、搭建惊艳的应用吧~
-
diff --git a/content/LangChain for LLM Application Development/readme.md b/content/LangChain for LLM Application Development/readme.md
index 2a99c32..ea3b506 100644
--- a/content/LangChain for LLM Application Development/readme.md
+++ b/content/LangChain for LLM Application Development/readme.md
@@ -4,7 +4,7 @@
### 目录
1. 简介 Introduction @Sarai
-2. 模型,提示和解析器 Models, Prompts and Output Parsers @Joye
+2. 模型、提示和解析器 Models, Prompts and Output Parsers @Joye
3. 存储 Memory @徐虎
4. 模型链 Chains @徐虎
5. 基于文档的问答 Question and Answer @苟晓攀