From bc7bec2d5f392bd949131fa76a620367fa3a4733 Mon Sep 17 00:00:00 2001 From: joyenjoye Date: Sun, 16 Jul 2023 15:34:37 +0800 Subject: [PATCH] Update Chapter 2 and 3 --- ...和解析器 Models, Prompts and Output Parsers.ipynb | 2 +- .../3.储存 Memory.ipynb | 1698 +---------------- 2 files changed, 2 insertions(+), 1698 deletions(-) diff --git a/content/LangChain for LLM Application Development/2.模型、提示和解析器 Models, Prompts and Output Parsers.ipynb b/content/LangChain for LLM Application Development/2.模型、提示和解析器 Models, Prompts and Output Parsers.ipynb index 934ed7b..bbe2ba7 100644 --- a/content/LangChain for LLM Application Development/2.模型、提示和解析器 Models, Prompts and Output Parsers.ipynb +++ b/content/LangChain for LLM Application Development/2.模型、提示和解析器 Models, Prompts and Output Parsers.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# \u7b2c\u4e8c\u7ae0 \u6a21\u578b\uff0c\u63d0\u793a\u548c\u8f93\u51fa\u89e3\u6790\u5668\n", "\n", " - [\u4e00\u3001\u8bbe\u7f6eOpenAI API Key](#\u4e00\u3001\u8bbe\u7f6eOpenAI-API-Key)\n", " - [\u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI](#\u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI)\n", " - [2.1 \u8ba1\u7b971+1](#2.1-\u8ba1\u7b971+1)\n", " - [2.2 \u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6](#2.2-\u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6)\n", " - [2.3 \u4e2d\u6587\u7248](#2.3-\u4e2d\u6587\u7248)\n", " - [\u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI](#\u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI)\n", " - [3.1 \u6a21\u578b](#3.1-\u6a21\u578b)\n", " - [3.2 \u63d0\u793a\u6a21\u677f](#3.2-\u63d0\u793a\u6a21\u677f)\n", " - [3.3 \u8f93\u51fa\u89e3\u6790\u5668](#3.3-\u8f93\u51fa\u89e3\u6790\u5668)\n", " - [\u56db\u3001\u8865\u5145\u6750\u6599](#\u56db\u3001\u8865\u5145\u6750\u6599)\n", " - [4.1 \u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct)](#4.1-\u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct))\n"]}, 
{"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e00\u3001\u8bbe\u7f6eOpenAI API Key\n", "\n", "\u767b\u5f55 [OpenAI \u8d26\u6237](https://platform.openai.com/account/api-keys) \u83b7\u53d6API Key\uff0c\u7136\u540e\u5c06\u5176\u8bbe\u7f6e\u4e3a\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "- \u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u5168\u5c40\u73af\u5883\u53d8\u91cf\uff0c\u53ef\u4ee5\u53c2\u8003[\u77e5\u4e4e\u6587\u7ae0](https://zhuanlan.zhihu.com/p/627665725)\u3002\n", "- \u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u672c\u5730/\u9879\u76ee\u73af\u5883\u53d8\u91cf\uff0c\u5728\u672c\u6587\u4ef6\u76ee\u5f55\u4e0b\u521b\u5efa`.env`\u6587\u4ef6, \u6253\u5f00\u6587\u4ef6\u8f93\u5165\u4ee5\u4e0b\u5185\u5bb9\u3002\n", "\n", "

\n", " OPENAI_API_KEY=\"your_api_key\" \n", "

\n", " \n", " \u66ff\u6362\"your_api_key\"\u4e3a\u4f60\u81ea\u5df1\u7684 API Key"]}, {"cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": ["# \u4e0b\u8f7d\u9700\u8981\u7684\u5305python-dotenv\u548copenai\n", "# \u5982\u679c\u4f60\u9700\u8981\u67e5\u770b\u5b89\u88c5\u8fc7\u7a0b\u65e5\u5fd7\uff0c\u53ef\u5220\u9664 -q \n", "!pip install -q python-dotenv\n", "!pip install -q openai"]}, {"cell_type": "code", "execution_count": 2, "metadata": {"tags": []}, "outputs": [], "source": ["import os\n", "import openai\n", "from dotenv import load_dotenv, find_dotenv\n", "\n", "# \u8bfb\u53d6\u672c\u5730/\u9879\u76ee\u7684\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "# find_dotenv()\u5bfb\u627e\u5e76\u5b9a\u4f4d.env\u6587\u4ef6\u7684\u8def\u5f84\n", "# load_dotenv()\u8bfb\u53d6\u8be5.env\u6587\u4ef6\uff0c\u5e76\u5c06\u5176\u4e2d\u7684\u73af\u5883\u53d8\u91cf\u52a0\u8f7d\u5230\u5f53\u524d\u7684\u8fd0\u884c\u73af\u5883\u4e2d \n", "# \u5982\u679c\u4f60\u8bbe\u7f6e\u7684\u662f\u5168\u5c40\u7684\u73af\u5883\u53d8\u91cf\uff0c\u8fd9\u884c\u4ee3\u7801\u5219\u6ca1\u6709\u4efb\u4f55\u4f5c\u7528\u3002\n", "_ = load_dotenv(find_dotenv())\n", "\n", "# \u83b7\u53d6\u73af\u5883\u53d8\u91cf OPENAI_API_KEY\n", "openai.api_key = os.environ['OPENAI_API_KEY'] "]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI\n", "\n", "\u6211\u4eec\u5148\u4ece\u76f4\u63a5\u8c03\u7528OpenAI\u7684API\u5f00\u59cb\u3002\n", "\n", "`get_completion`\u51fd\u6570\u662f\u57fa\u4e8e`openai`\u7684\u5c01\u88c5\u51fd\u6570\uff0c\u5bf9\u4e8e\u7ed9\u5b9a\u63d0\u793a\uff08prompt\uff09\u8f93\u51fa\u76f8\u5e94\u7684\u56de\u7b54\u3002\u5176\u5305\u542b\u4e24\u4e2a\u53c2\u6570\n", " \n", " - `prompt` \u5fc5\u9700\u8f93\u5165\u53c2\u6570\u3002 
\u4f60\u7ed9\u6a21\u578b\u7684**\u63d0\u793a\uff0c\u53ef\u4ee5\u662f\u4e00\u4e2a\u95ee\u9898\uff0c\u53ef\u4ee5\u662f\u4f60\u9700\u8981\u6a21\u578b\u5e2e\u52a9\u4f60\u505a\u7684\u4e8b**\uff08\u6539\u53d8\u6587\u672c\u5199\u4f5c\u98ce\u683c\uff0c\u7ffb\u8bd1\uff0c\u56de\u590d\u6d88\u606f\u7b49\u7b49\uff09\u3002\n", " - `model` \u975e\u5fc5\u9700\u8f93\u5165\u53c2\u6570\u3002\u9ed8\u8ba4\u4f7f\u7528gpt-3.5-turbo\u3002\u4f60\u4e5f\u53ef\u4ee5\u9009\u62e9\u5176\u4ed6\u6a21\u578b\u3002\n", " \n", "\u8fd9\u91cc\u7684\u63d0\u793a\u5bf9\u5e94\u6211\u4eec\u7ed9ChatGPT\u7684\u95ee\u9898\uff0c\u51fd\u6570\u7ed9\u51fa\u7684\u8f93\u51fa\u5219\u5bf9\u5e94ChatGPT\u7ed9\u6211\u4eec\u7684\u7b54\u6848\u3002"]}, {"cell_type": "code", "execution_count": 3, "metadata": {"tags": []}, "outputs": [], "source": ["def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n", " \n", " messages = [{\"role\": \"user\", \"content\": prompt}]\n", " \n", " response = openai.ChatCompletion.create(\n", " model=model,\n", " messages=messages,\n", " temperature=0, \n", " )\n", " return response.choices[0].message[\"content\"]"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.1 \u8ba1\u7b971+1\n", "\n", "\u6211\u4eec\u6765\u4e00\u4e2a\u7b80\u5355\u7684\u4f8b\u5b50 - \u5206\u522b\u7528\u4e2d\u82f1\u6587\u95ee\u95ee\u6a21\u578b\n", "\n", "- \u4e2d\u6587\u63d0\u793a(Prompt in Chinese)\uff1a `1+1\u662f\u4ec0\u4e48\uff1f`\n", "- \u82f1\u6587\u63d0\u793a(Prompt in English)\uff1a `What is 1+1?`"]}, {"cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [{"data": {"text/plain": ["'1+1\u7b49\u4e8e2\u3002'"]}, "execution_count": 4, "metadata": {}, "output_type": "execute_result"}], "source": ["# \u4e2d\u6587\n", "get_completion(\"1+1\u662f\u4ec0\u4e48\uff1f\")"]}, {"cell_type": "code", "execution_count": 5, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["'1+1 equals 2.'"]}, "execution_count": 5, "metadata": {}, "output_type": "execute_result"}], "source": ["# 
\u82f1\u6587\n", "get_completion(\"What is 1+1?\")"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.2 \u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6\n", "\n", "\u4e0a\u9762\u7684\u7b80\u5355\u4f8b\u5b50\uff0c\u6a21\u578b`gpt-3.5-turbo`\u5bf9\u6211\u4eec\u7684\u5173\u4e8e1+1\u662f\u4ec0\u4e48\u7684\u63d0\u95ee\u7ed9\u51fa\u4e86\u56de\u7b54\u3002\n", "\n", "\u73b0\u5728\u6211\u4eec\u6765\u770b\u4e00\u4e2a\u590d\u6742\u4e00\u70b9\u7684\u4f8b\u5b50\uff1a \n", "\n", "\u5047\u8bbe\u6211\u4eec\u662f\u7535\u5546\u516c\u53f8\u5458\u5de5\uff0c\u6211\u4eec\u7684\u987e\u5ba2\u662f\u4e00\u540d\u6d77\u76d7A\uff0c\u4ed6\u5728\u6211\u4eec\u7684\u7f51\u7ad9\u4e0a\u4e70\u4e86\u4e00\u4e2a\u69a8\u6c41\u673a\u7528\u6765\u505a\u5976\u6614\uff0c\u5728\u5236\u4f5c\u5976\u6614\u7684\u8fc7\u7a0b\u4e2d\uff0c\u5976\u6614\u7684\u76d6\u5b50\u98de\u4e86\u51fa\u53bb\uff0c\u5f04\u5f97\u53a8\u623f\u5899\u4e0a\u5230\u5904\u90fd\u662f\u3002\u4e8e\u662f\u6d77\u76d7A\u7ed9\u6211\u4eec\u7684\u5ba2\u670d\u4e2d\u5fc3\u5199\u6765\u4ee5\u4e0b\u90ae\u4ef6\uff1a`customer_email`"]}, {"cell_type": "code", "execution_count": 6, "metadata": {"tags": []}, "outputs": [], "source": ["customer_email = \"\"\"\n", "Arrr, I be fuming that me blender lid \\\n", "flew off and splattered me kitchen walls \\\n", "with smoothie! And to make matters worse,\\\n", "the warranty don't cover the cost of \\\n", "cleaning up me kitchen. 
I need yer help \\\n", "right now, matey!\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u6211\u4eec\u7684\u5ba2\u670d\u4eba\u5458\u5bf9\u4e8e\u6d77\u76d7\u7684\u63aa\u8f9e\u8868\u8fbe\u89c9\u5f97\u6709\u70b9\u96be\u4ee5\u7406\u89e3\u3002 \u73b0\u5728\u6211\u4eec\u60f3\u8981\u5b9e\u73b0\u4e24\u4e2a\u5c0f\u76ee\u6807\uff1a\n", "\n", "- \u8ba9\u6a21\u578b\u7528\u7f8e\u5f0f\u82f1\u8bed\u7684\u8868\u8fbe\u65b9\u5f0f\u5c06\u6d77\u76d7\u7684\u90ae\u4ef6\u8fdb\u884c\u7ffb\u8bd1\uff0c\u5ba2\u670d\u4eba\u5458\u53ef\u4ee5\u66f4\u597d\u7406\u89e3\u3002*\u8fd9\u91cc\u6d77\u76d7\u7684\u82f1\u6587\u8868\u8fbe\u53ef\u4ee5\u7406\u89e3\u4e3a\u82f1\u6587\u7684\u65b9\u8a00\uff0c\u5176\u4e0e\u7f8e\u5f0f\u82f1\u8bed\u7684\u5173\u7cfb\uff0c\u5c31\u5982\u56db\u5ddd\u8bdd\u4e0e\u666e\u901a\u8bdd\u7684\u5173\u7cfb\u3002*\n", "- \u8ba9\u6a21\u578b\u5728\u7ffb\u8bd1\u65f6\u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\u8fdb\u884c\u8868\u8fbe\uff0c\u5ba2\u670d\u4eba\u5458\u7684\u5fc3\u60c5\u4e5f\u4f1a\u66f4\u597d\u3002\n", "\n", "\u6839\u636e\u8fd9\u4e24\u4e2a\u5c0f\u76ee\u6807\uff0c\u5b9a\u4e49\u4e00\u4e0b\u6587\u672c\u8868\u8fbe\u98ce\u683c\uff1a`style`"]}, {"cell_type": "code", "execution_count": 7, "metadata": {"tags": []}, "outputs": [], "source": ["# \u7f8e\u5f0f\u82f1\u8bed + \u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "style = \"\"\"American English \\\n", "in a calm and respectful tone\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0b\u4e00\u6b65\u9700\u8981\u505a\u7684\u662f\u5c06`customer_email`\u548c`style`\u7ed3\u5408\u8d77\u6765\u6784\u9020\u6211\u4eec\u7684\u63d0\u793a:`prompt`"]}, {"cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": ["# \u975e\u6b63\u5f0f\u7528\u8bed\n", "customer_email = \"\"\" \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\\\n", "\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\\\n", 
"\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\\\n", "\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\\\n", "\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 9, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Translate the text that is delimited by triple backticks \n", "into a style that is American English in a calm and respectful tone\n", ".\n", "text: ``` \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "```\n", "\n"]}], "source": ["# \u8981\u6c42\u6a21\u578b\u6839\u636e\u7ed9\u51fa\u7684\u8bed\u8c03\u8fdb\u884c\u8f6c\u5316\n", "prompt = f\"\"\"Translate the text \\\n", "that is delimited by triple backticks \n", "into a style that is {style}.\n", "text: ```{customer_email}```\n", "\"\"\"\n", "\n", "print(prompt)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["`prompt` \u6784\u9020\u597d\u4e86\uff0c\u6211\u4eec\u53ef\u4ee5\u8c03\u7528`get_completion`\u5f97\u5230\u6211\u4eec\u60f3\u8981\u7684\u7ed3\u679c - \u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\uff0c\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u7684\u6d77\u76d7\u8bed\u8a00\u90ae\u4ef6"]}, {"cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": ["response = get_completion(prompt)"]}, {"cell_type": "code", "execution_count": 11, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["\"Oh, I'm really frustrated because the lid of my blender fell off and splattered the milkshake all over the kitchen wall! 
To make matters worse, the warranty doesn't cover the cost of cleaning the kitchen. I could really use your help right now, buddy!\""]}, "execution_count": 11, "metadata": {}, "output_type": "execute_result"}], "source": ["response"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5bf9\u6bd4\u8bed\u8a00\u98ce\u683c\u8f6c\u6362\u524d\u540e\uff0c\u7528\u8bcd\u66f4\u4e3a\u6b63\u5f0f\uff0c\u66ff\u6362\u4e86\u6781\u7aef\u60c5\u7eea\u5316\u7684\u8868\u8fbe\uff0c\u8bed\u6c14\u4e5f\u66f4\u4e3a\u5e73\u548c\u3002\n", "\n", "\u2728 \u4f60\u53ef\u4ee5\u5c1d\u8bd5\u4fee\u6539\u63d0\u793a\uff0c\u770b\u53ef\u4ee5\u5f97\u5230\u4ec0\u4e48\u4e0d\u4e00\u6837\u7684\u7ed3\u679c\ud83d\ude09"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.3 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": ["# \u666e\u901a\u8bdd + \u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "style = \"\"\"\u6b63\u5f0f\u666e\u901a\u8bdd \\\n", "\u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u6b63\u5f0f\u666e\u901a\u8bdd \u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "\u98ce\u683c\u3002\n", "\u6587\u672c: ``` \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "```\n", "\n"]}], "source": ["# \u8981\u6c42\u6a21\u578b\u6839\u636e\u7ed9\u51fa\u7684\u8bed\u8c03\u8fdb\u884c\u8f6c\u5316\n", "prompt = 
f\"\"\"\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\\\n", "\u7ffb\u8bd1\u6210\u4e00\u79cd{style}\u98ce\u683c\u3002\n", "\u6587\u672c: ```{customer_email}```\n", "\"\"\"\n", "\n", "print(prompt)\n", "\n"]}, {"cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [{"data": {"text/plain": ["'\u5c0a\u656c\u7684\u670b\u53cb\u4eec\uff0c\u6211\u611f\u5230\u975e\u5e38\u4e0d\u5b89\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u5b50\u4e0d\u614e\u6389\u843d\uff0c\u5bfc\u81f4\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u58c1\u4e0a\uff01\u66f4\u52a0\u4ee4\u4eba\u7cdf\u5fc3\u7684\u662f\uff0c\u4fdd\u4fee\u670d\u52a1\u5e76\u4e0d\u5305\u542b\u53a8\u623f\u6e05\u6d01\u7684\u8d39\u7528\u3002\u6b64\u523b\uff0c\u6211\u771f\u8bda\u5730\u8bf7\u6c42\u5404\u4f4d\u7684\u5e2e\u52a9\uff0c\u670b\u53cb\u4eec\uff01'"]}, "execution_count": 14, "metadata": {}, "output_type": "execute_result"}], "source": ["response = get_completion(prompt)\n", "\n", "response"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI\n", "\n", "\u5728\u524d\u9762\u4e00\u90e8\u5206\uff0c\u6211\u4eec\u901a\u8fc7\u5c01\u88c5\u51fd\u6570`get_completion`\u76f4\u63a5\u8c03\u7528OpenAI\uff0c\u5b8c\u6210\u4e86\u5bf9\u65b9\u8a00\u90ae\u4ef6\u7684\u7ffb\u8bd1\uff0c\u5f97\u5230\u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\u3001\u6b63\u5f0f\u7684\u666e\u901a\u8bdd\u8868\u8fbe\u7684\u90ae\u4ef6\u3002\n", "\n", "\u8ba9\u6211\u4eec\u5c1d\u8bd5\u4f7f\u7528LangChain\u6765\u5b9e\u73b0\u76f8\u540c\u7684\u529f\u80fd\u3002"]}, {"cell_type": "code", "execution_count": 15, "metadata": {"tags": []}, "outputs": [], "source": ["# \u5982\u679c\u4f60\u9700\u8981\u67e5\u770b\u5b89\u88c5\u8fc7\u7a0b\u65e5\u5fd7\uff0c\u53ef\u5220\u9664 -q \n", "# --upgrade \u8ba9\u6211\u4eec\u53ef\u4ee5\u5b89\u88c5\u5230\u6700\u65b0\u7248\u672c\u7684 langchain\n", "!pip install -q --upgrade langchain"]}, {"cell_type": 
"markdown", "metadata": {"tags": []}, "source": ["### 3.1 \u6a21\u578b\n", "\n", "\u4ece`langchain.chat_models`\u5bfc\u5165`OpenAI`\u7684\u5bf9\u8bdd\u6a21\u578b`ChatOpenAI`\u3002 \u9664\u53bbOpenAI\u4ee5\u5916\uff0c`langchain.chat_models`\u8fd8\u96c6\u6210\u4e86\u5176\u4ed6\u5bf9\u8bdd\u6a21\u578b\uff0c\u66f4\u591a\u7ec6\u8282\u53ef\u4ee5\u67e5\u770b[Langchain\u5b98\u65b9\u6587\u6863](https://python.langchain.com/en/latest/modules/models/chat/integrations.html)\u3002"]}, {"cell_type": "code", "execution_count": 16, "metadata": {"tags": []}, "outputs": [], "source": ["from langchain.chat_models import ChatOpenAI"]}, {"cell_type": "code", "execution_count": 17, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["ChatOpenAI(cache=None, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, client=, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key='sk-************', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)"]}, "execution_count": 17, "metadata": {}, "output_type": "execute_result"}], "source": ["# \u8fd9\u91cc\u6211\u4eec\u5c06\u53c2\u6570temperature\u8bbe\u7f6e\u4e3a0.0\uff0c\u4ece\u800c\u51cf\u5c11\u751f\u6210\u7b54\u6848\u7684\u968f\u673a\u6027\u3002\n", "# \u5982\u679c\u4f60\u60f3\u8981\u6bcf\u6b21\u5f97\u5230\u4e0d\u4e00\u6837\u7684\u6709\u65b0\u610f\u7684\u7b54\u6848\uff0c\u53ef\u4ee5\u5c1d\u8bd5\u8c03\u6574\u8be5\u53c2\u6570\u3002\n", "chat = ChatOpenAI(temperature=0.0)\n", "chat"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0a\u9762\u7684\u8f93\u51fa\u663e\u793aChatOpenAI\u7684\u9ed8\u8ba4\u6a21\u578b\u4e3a`gpt-3.5-turbo`"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 3.2 \u63d0\u793a\u6a21\u677f\n", "\n", 
"\u5728\u524d\u9762\u7684\u4f8b\u5b50\u4e2d\uff0c\u6211\u4eec\u901a\u8fc7[f\u5b57\u7b26\u4e32](https://docs.python.org/zh-cn/3/tutorial/inputoutput.html#tut-f-strings)\u628aPython\u8868\u8fbe\u5f0f\u7684\u503c`style`\u548c`customer_email`\u6dfb\u52a0\u5230`prompt`\u5b57\u7b26\u4e32\u5185\u3002\n", "\n", "```python\n", "prompt = f\"\"\"Translate the text \\\n", "that is delimited by triple backticks \n", "into a style that is {style}.\n", "text: ```{customer_email}```\n", "\"\"\"\n", "```\n", "`langchain`\u63d0\u4f9b\u4e86\u63a5\u53e3\u65b9\u4fbf\u5feb\u901f\u7684\u6784\u9020\u548c\u4f7f\u7528\u63d0\u793a\u3002\u73b0\u5728\u6211\u4eec\u6765\u770b\u770b\u5982\u4f55\u4f7f\u7528`langchain`\u6765\u6784\u9020\u63d0\u793a\u3002"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["#### 3.2.1 \u4f7f\u7528LangChain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32\n", "\u6211\u4eec\u6784\u9020\u4e00\u4e2a\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32\uff1a`template_string`"]}, {"cell_type": "code", "execution_count": 18, "metadata": {"tags": []}, "outputs": [], "source": ["template_string = \"\"\"Translate the text \\\n", "that is delimited by triple backticks \\\n", "into a style that is {style}. 
\\\n", "text: ```{text}```\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020LangChain\u63d0\u793a\u6a21\u7248\n", "\u6211\u4eec\u8c03\u7528`ChatPromptTemplate.from_template()`\u51fd\u6570\u5c06\u4e0a\u9762\u7684\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32`template_string`\u8f6c\u6362\u4e3a\u63d0\u793a\u6a21\u7248`prompt_template`"]}, {"cell_type": "code", "execution_count": 19, "metadata": {"tags": []}, "outputs": [], "source": ["from langchain.prompts import ChatPromptTemplate\n", "prompt_template = ChatPromptTemplate.from_template(template_string)"]}, {"cell_type": "code", "execution_count": 20, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['style', 'text'] output_parser=None partial_variables={} template='Translate the text that is delimited by triple backticks into a style that is {style}. text: ```{text}```\\n' template_format='f-string' validate_template=True\n"]}], "source": ["print(prompt_template.messages[0].prompt)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4ece\u4e0a\u9762\u7684\u8f93\u51fa\u53ef\u4ee5\u770b\u51fa\uff0c`prompt_template` \u6709\u4e24\u4e2a\u8f93\u5165\u53d8\u91cf\uff1a `style` \u548c `text`\u3002"]}, {"cell_type": "code", "execution_count": 21, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["['style', 'text']\n"]}], "source": ["print(prompt_template.messages[0].prompt.input_variables)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u5ba2\u6237\u6d88\u606f\u63d0\u793a\n", "\n", "langchain\u63d0\u793a\u6a21\u7248`prompt_template`\u9700\u8981\u4e24\u4e2a\u8f93\u5165\u53d8\u91cf\uff1a `style` \u548c `text`\u3002 \u8fd9\u91cc\u5206\u522b\u5bf9\u5e94 \n", "- `customer_style`: \u6211\u4eec\u60f3\u8981\u7684\u987e\u5ba2\u90ae\u4ef6\u98ce\u683c\n", "- `customer_email`: 
\u987e\u5ba2\u7684\u539f\u59cb\u90ae\u4ef6\u6587\u672c\u3002"]}, {"cell_type": "code", "execution_count": 22, "metadata": {"tags": []}, "outputs": [], "source": ["customer_style = \"\"\"American English \\\n", "in a calm and respectful tone\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 23, "metadata": {"tags": []}, "outputs": [], "source": ["customer_email = \"\"\"\n", "Arrr, I be fuming that me blender lid \\\n", "flew off and splattered me kitchen walls \\\n", "with smoothie! And to make matters worse, \\\n", "the warranty don't cover the cost of \\\n", "cleaning up me kitchen. I need yer help \\\n", "right now, matey!\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5bf9\u4e8e\u7ed9\u5b9a\u7684`customer_style`\u548c`customer_email`, \u6211\u4eec\u53ef\u4ee5\u4f7f\u7528\u63d0\u793a\u6a21\u7248`prompt_template`\u7684`format_messages`\u65b9\u6cd5\u751f\u6210\u60f3\u8981\u7684\u5ba2\u6237\u6d88\u606f`customer_messages`\u3002"]}, {"cell_type": "code", "execution_count": 24, "metadata": {"tags": []}, "outputs": [], "source": ["customer_messages = prompt_template.format_messages(\n", " style=customer_style,\n", " text=customer_email)"]}, {"cell_type": "code", "execution_count": 25, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n"]}], "source": ["print(type(customer_messages))\n", "print(type(customer_messages[0]))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u53ef\u4ee5\u770b\u51fa`customer_messages`\u53d8\u91cf\u7c7b\u578b\u4e3a\u5217\u8868(`list`)\uff0c\u800c\u5217\u8868\u91cc\u7684\u5143\u7d20\u53d8\u91cf\u7c7b\u578b\u4e3alangchain\u81ea\u5b9a\u4e49\u6d88\u606f(`langchain.schema.HumanMessage`)\u3002\n", "\n", "\u6253\u5370\u7b2c\u4e00\u4e2a\u5143\u7d20\u53ef\u4ee5\u5f97\u5230\u5982\u4e0b:"]}, {"cell_type": "code", "execution_count": 26, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["content=\"Translate the text that is 
delimited by triple backticks into a style that is American English in a calm and respectful tone\\n. text: ```\\nArrr, I be fuming that me blender lid flew off and splattered me kitchen walls with smoothie! And to make matters worse, the warranty don't cover the cost of cleaning up me kitchen. I need yer help right now, matey!\\n```\\n\" additional_kwargs={} example=False\n"]}], "source": ["print(customer_messages[0])"]}, {"cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["content='\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u6b63\u5f0f\u666e\u901a\u8bdd \u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u6c14\\n\u98ce\u683c\u3002\u6587\u672c: ```\\n\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\\n```\\n' additional_kwargs={} example=False\n"]}], "source": ["# \u4e2d\u6587\u63d0\u793a\n", "from langchain.prompts import ChatPromptTemplate\n", "\n", "template_string = \"\"\"\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\\\n", "\u7ffb\u8bd1\u6210\u4e00\u79cd{style}\u98ce\u683c\u3002\\\n", "\u6587\u672c: ```{text}```\n", "\"\"\"\n", "prompt_template = ChatPromptTemplate.from_template(template_string)\n", "\n", "customer_style = \"\"\"\u6b63\u5f0f\u666e\u901a\u8bdd \\\n", "\u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u6c14\n", "\"\"\"\n", "\n", "customer_email = \"\"\"\n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\\\n", "\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\\\n", 
"\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\\\n", "\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\\\n", "\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "\"\"\"\n", "\n", "customer_messages = prompt_template.format_messages(\n", " style=customer_style,\n", " text=customer_email)\n", "\n", "\n", "print(customer_messages[0])"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u8f6c\u6362\u5ba2\u6237\u6d88\u606f\u98ce\u683c\n", "\n", "\u73b0\u5728\u6211\u4eec\u53ef\u4ee5\u8c03\u7528[\u6a21\u578b](#model)\u90e8\u5206\u5b9a\u4e49\u7684chat\u6a21\u578b\u6765\u5b9e\u73b0\u8f6c\u6362\u5ba2\u6237\u6d88\u606f\u98ce\u683c\u3002\u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u4eec\u5df2\u7ecf\u5b9e\u73b0\u4e86\u5728\u524d\u4e00\u90e8\u5206\u7684\u4efb\u52a1\u3002"]}, {"cell_type": "code", "execution_count": 28, "metadata": {"tags": []}, "outputs": [], "source": ["customer_response = chat(customer_messages)"]}, {"cell_type": "code", "execution_count": 29, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u5c0a\u656c\u7684\u4f19\u8ba1\uff0c\u6211\u611f\u5230\u975e\u5e38\u6124\u6012\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u5b50\u4e0d\u614e\u6389\u843d\uff0c\u5bfc\u81f4\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u58c1\u4e0a\uff01\u66f4\u52a0\u4ee4\u4eba\u7cdf\u5fc3\u7684\u662f\uff0c\u4fdd\u4fee\u5e76\u4e0d\u5305\u62ec\u53a8\u623f\u6e05\u6d01\u7684\u8d39\u7528\u3002\u73b0\u5728\uff0c\u6211\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u8bf7\u4f60\u7ed9\u4e88\u63f4\u624b\uff01\n"]}], "source": ["print(customer_response.content)"]}, 
{"cell_type": "markdown", "metadata": {}, "source": ["##### 5\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u56de\u590d\u6d88\u606f\u63d0\u793a\n", "\n", "\u63a5\u4e0b\u6765\uff0c\u6211\u4eec\u66f4\u8fdb\u4e00\u6b65\uff0c\u5c06\u5ba2\u670d\u4eba\u5458\u56de\u590d\u7684\u6d88\u606f\uff0c\u8f6c\u6362\u4e3a\u6d77\u76d7\u7684\u8bed\u8a00\u98ce\u683c\uff0c\u5e76\u786e\u4fdd\u6d88\u606f\u6bd4\u8f83\u6709\u793c\u8c8c\u3002 \n", "\n", "\u8fd9\u91cc\uff0c\u6211\u4eec\u53ef\u4ee5\u7ee7\u7eed\u4f7f\u7528\u7b2c2\ufe0f\u20e3\u6b65\u6784\u9020\u7684langchain\u63d0\u793a\u6a21\u7248\uff0c\u6765\u83b7\u5f97\u6211\u4eec\u56de\u590d\u6d88\u606f\u63d0\u793a\u3002"]}, {"cell_type": "code", "execution_count": 31, "metadata": {"tags": []}, "outputs": [], "source": ["service_reply = \"\"\"Hey there customer, \\\n", "the warranty does not cover \\\n", "cleaning expenses for your kitchen \\\n", "because it's your fault that \\\n", "you misused your blender \\\n", "by forgetting to put the lid on before \\\n", "starting the blender. \\\n", "Tough luck! 
See ya!\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": ["service_style_pirate = \"\"\"\\\n", "a polite tone \\\n", "that speaks in English Pirate\\\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 33, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cda polite tone that speaks in English Pirate\u98ce\u683c\u3002\u6587\u672c: ```Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya!\n", "```\n", "\n"]}], "source": ["service_messages = prompt_template.format_messages(\n", " style=service_style_pirate,\n", " text=service_reply)\n", "\n", "print(service_messages[0].content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 6\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u8f6c\u6362\u56de\u590d\u6d88\u606f\u98ce\u683c\n", "\n", "\u8c03\u7528[\u6a21\u578b](#model)\u90e8\u5206\u5b9a\u4e49\u7684chat\u6a21\u578b\u6765\u8f6c\u6362\u56de\u590d\u6d88\u606f\u98ce\u683c"]}, {"cell_type": "code", "execution_count": 34, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Ahoy there, matey! I regret to inform ye, valued customer, that the warranty be not coverin' the expenses o' cleanin' yer galley due to yer own negligence. 'Tis yer own fault, ye see, fer ye be misusin' yer blender by forgettin' to secure the lid afore settin' it in motion. Aye, 'tis a tough break, indeed! 
Fare thee well, me heartie!\n"]}], "source": ["service_response = chat(service_messages)\n", "print(service_response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.2.2 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u4e00\u4e2a\u6709\u793c\u8c8c\u7684\u8bed\u6c14 \u4f7f\u7528\u6b63\u5f0f\u7684\u666e\u901a\u8bdd \u98ce\u683c\u3002\u6587\u672c: ```\u563f\uff0c\u987e\u5ba2\uff0c \u4fdd\u4fee\u4e0d\u5305\u62ec\u53a8\u623f\u7684\u6e05\u6d01\u8d39\u7528\uff0c \u56e0\u4e3a\u60a8\u5728\u542f\u52a8\u6405\u62cc\u673a\u4e4b\u524d \u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u8bef\u7528\u6405\u62cc\u673a, \u8fd9\u662f\u60a8\u7684\u9519\u3002 \u5012\u9709\uff01 \u518d\u89c1\uff01\n", "```\n", "\n"]}], "source": ["service_reply = \"\"\"\u563f\uff0c\u987e\u5ba2\uff0c \\\n", "\u4fdd\u4fee\u4e0d\u5305\u62ec\u53a8\u623f\u7684\u6e05\u6d01\u8d39\u7528\uff0c \\\n", "\u56e0\u4e3a\u60a8\u5728\u542f\u52a8\u6405\u62cc\u673a\u4e4b\u524d \\\n", "\u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u8bef\u7528\u6405\u62cc\u673a, \\\n", "\u8fd9\u662f\u60a8\u7684\u9519\u3002 \\\n", "\u5012\u9709\uff01 \u518d\u89c1\uff01\n", "\"\"\"\n", "\n", "service_style_pirate = \"\"\"\\\n", "\u4e00\u4e2a\u6709\u793c\u8c8c\u7684\u8bed\u6c14 \\\n", "\u4f7f\u7528\u6b63\u5f0f\u7684\u666e\u901a\u8bdd \\\n", "\"\"\"\n", "service_messages = prompt_template.format_messages(\n", " style=service_style_pirate,\n", " text=service_reply)\n", "\n", "print(service_messages[0].content)"]}, {"cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": 
["\u5c0a\u656c\u7684\u987e\u5ba2\uff0c\u5f88\u62b1\u6b49\u544a\u77e5\u60a8\uff0c\u4fdd\u4fee\u670d\u52a1\u4e0d\u5305\u542b\u53a8\u623f\u6e05\u6d01\u8d39\u7528\u3002\u8fd9\u662f\u56e0\u4e3a\u5728\u60a8\u4f7f\u7528\u6405\u62cc\u673a\u4e4b\u524d\uff0c\u4e0d\u614e\u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u5bfc\u81f4\u8bef\u7528\u6405\u62cc\u673a\uff0c\u8fd9\u5c5e\u4e8e\u60a8\u7684\u758f\u5ffd\u3002\u975e\u5e38\u9057\u61be\uff01\u795d\u60a8\u4e00\u5207\u987a\u5229\uff01\u518d\u89c1\uff01\n"]}], "source": ["service_response = chat(service_messages)\n", "print(service_response.content)"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["#### 3.2.2 \u4e3a\u4ec0\u4e48\u9700\u8981\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5728\u5e94\u7528\u4e8e\u6bd4\u8f83\u590d\u6742\u7684\u573a\u666f\u65f6\uff0c\u63d0\u793a\u53ef\u80fd\u4f1a\u975e\u5e38\u957f\u5e76\u4e14\u5305\u542b\u6d89\u53ca\u8bb8\u591a\u7ec6\u8282\u3002**\u4f7f\u7528\u63d0\u793a\u6a21\u7248\uff0c\u53ef\u4ee5\u8ba9\u6211\u4eec\u66f4\u4e3a\u65b9\u4fbf\u5730\u91cd\u590d\u4f7f\u7528\u8bbe\u8ba1\u597d\u7684\u63d0\u793a**\u3002"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0b\u9762\u7ed9\u51fa\u4e86\u4e00\u4e2a\u6bd4\u8f83\u957f\u7684\u63d0\u793a\u6a21\u7248\u6848\u4f8b\u3002\u5b66\u751f\u4eec\u7ebf\u4e0a\u5b66\u4e60\u5e76\u63d0\u4ea4\u4f5c\u4e1a\uff0c\u901a\u8fc7\u4ee5\u4e0b\u7684\u63d0\u793a\u6765\u5b9e\u73b0\u5bf9\u5b66\u751f\u7684\u63d0\u4ea4\u7684\u4f5c\u4e1a\u7684\u8bc4\u5206\u3002"]}, {"cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": ["# \u82f1\u6587\u7248\n", "prompt = \"\"\" Your task is to determine if the student's solution is correct or not\n", "\n", " To solve the problem do the following:\n", " - First, workout your own solution to the problem\n", " - Then compare your solution to the student's solution \n", " and evaluate if the sudtent's solution is correct or not.\n", " ...\n", " Use the following 
format:\n", " Question:\n", " ```\n", " question here\n", " ```\n", " Student's solution:\n", " ```\n", " student's solution here\n", " ```\n", " Actual solution:\n", " ```\n", " ...\n", " steps to work out the solution and your solution here\n", " ```\n", " Is the student's solution the same as acutal solution \\\n", " just calculated:\n", " ```\n", " yes or no\n", " ```\n", " Student grade\n", " ```\n", " correct or incorrect\n", " ```\n", " \n", " Question:\n", " ```\n", " {question}\n", " ```\n", " Student's solution:\n", " ```\n", " {student's solution}\n", " ```\n", " Actual solution:\n", " \n", " \"\"\""]}, {"cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": ["# \u4e2d\u6587\u7248\n", "prompt = \"\"\" \u4f60\u7684\u4efb\u52a1\u662f\u5224\u65ad\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u662f\u6b63\u786e\u7684\u8fd8\u662f\u4e0d\u6b63\u786e\u7684\n", "\n", "\u8981\u89e3\u51b3\u8be5\u95ee\u9898\uff0c\u8bf7\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\uff1a\n", " - \u9996\u5148\uff0c\u5236\u5b9a\u81ea\u5df1\u7684\u95ee\u9898\u89e3\u51b3\u65b9\u6848\n", " - \u7136\u540e\u5c06\u60a8\u7684\u89e3\u51b3\u65b9\u6848\u4e0e\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u8fdb\u884c\u6bd4\u8f83\n", " \u5e76\u8bc4\u4f30\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u662f\u5426\u6b63\u786e\u3002\n", "...\n", "\u4f7f\u7528\u4e0b\u9762\u7684\u683c\u5f0f:\n", "\n", "\u95ee\u9898:\n", "```\n", "\u95ee\u9898\u6587\u672c\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848:\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u6587\u672c\n", "```\n", "\u5b9e\u9645\u89e3\u51b3\u65b9\u6848:\n", "```\n", "...\n", "\u5236\u5b9a\u89e3\u51b3\u65b9\u6848\u7684\u6b65\u9aa4\u4ee5\u53ca\u60a8\u7684\u89e3\u51b3\u65b9\u6848\u8bf7\u53c2\u89c1\u6b64\u5904\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u548c\u5b9e\u9645\u89e3\u51b3\u65b9\u6848\u662f\u5426\u76f8\u540c \\\n", "\u53ea\u8ba1\u7b97\uff1a\n", "```\n", "\u662f\u6216\u8005\u4e0d\u662f\n", "```\n", 
"\u5b66\u751f\u7684\u6210\u7ee9\n", "```\n", "\u6b63\u786e\u6216\u8005\u4e0d\u6b63\u786e\n", "```\n", "\n", "\u95ee\u9898:\n", "```\n", "{question}\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848:\n", "```\n", "{student's solution}\n", "```\n", "\u5b9e\u9645\u89e3\u51b3\u65b9\u6848:\n", "\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {"jp-MarkdownHeadingCollapsed": true, "tags": []}, "source": ["\u6b64\u5916\uff0cLangChain\u8fd8\u63d0\u4f9b\u4e86\u63d0\u793a\u6a21\u7248\u7528\u4e8e\u4e00\u4e9b\u5e38\u7528\u573a\u666f\u3002\u6bd4\u5982\u81ea\u52a8\u6458\u8981\u3001\u95ee\u7b54\u3001\u8fde\u63a5\u5230SQL\u6570\u636e\u5e93\u3001\u8fde\u63a5\u5230\u4e0d\u540c\u7684API. \u901a\u8fc7\u4f7f\u7528LongChain\u5185\u7f6e\u7684\u63d0\u793a\u6a21\u7248\uff0c\u4f60\u53ef\u4ee5\u5feb\u901f\u5efa\u7acb\u81ea\u5df1\u7684\u5927\u6a21\u578b\u5e94\u7528\uff0c\u800c\u4e0d\u9700\u8981\u82b1\u65f6\u95f4\u53bb\u8bbe\u8ba1\u548c\u6784\u9020\u63d0\u793a\u3002\n", "\n", "\u6700\u540e\uff0c\u6211\u4eec\u5728\u5efa\u7acb\u5927\u6a21\u578b\u5e94\u7528\u65f6\uff0c\u901a\u5e38\u5e0c\u671b\u6a21\u578b\u7684\u8f93\u51fa\u4e3a\u7ed9\u5b9a\u7684\u683c\u5f0f\uff0c\u6bd4\u5982\u5728\u8f93\u51fa\u4f7f\u7528\u7279\u5b9a\u7684\u5173\u952e\u8bcd\u6765\u8ba9\u8f93\u51fa\u7ed3\u6784\u5316\u3002 \u4e0b\u9762\u4e3a\u4e00\u4e2a\u4f7f\u7528\u5927\u6a21\u578b\u8fdb\u884c\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u4f8b\u5b50\uff0c\u5bf9\u4e8e\u95ee\u9898\uff1a*What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?*\n", "\n", "\u901a\u8fc7\u4f7f\u7528LangChain\u5e93\u51fd\u6570\uff0c\u8f93\u51fa\u91c7\u7528\"Thought\"\uff08\u601d\u8003\uff09\u3001\"Action\"\uff08\u884c\u52a8\uff09\u3001\"Observation\"\uff08\u89c2\u5bdf\uff09\u4f5c\u4e3a\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u7684\u5173\u952e\u8bcd\uff0c\u8ba9\u8f93\u51fa\u7ed3\u6784\u5316\u3002\n", "\n", "```\n", "Thought: I need to search Colorado orogeny, find the area that the eastern sector of the 
Colorado orogeny extends into, then find the elevation range of the area.\n", "Action: Search[Colorado orogeny]\n", "Observation: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\n", "\n", "Thought: It does not mention the eastern sector. So I need to look up eastern sector.\n", "Action: Lookup[eastern sector]\n", "Observation: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.\n", "\n", "Thought: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\n", "Action: Search[High Plains]\n", "Observation: High Plains refers to one of two distinct land regions\n", "\n", "Thought: I need to instead search High Plains (United States).\n", "Action: Search[High Plains (United States)]\n", "Observation: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]\n", "\n", "Thought: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\n", "Action: Finish[1,800 to 7,000 ft]\n", "```"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5728\u8865\u5145\u6750\u6599\u4e2d\uff0c\u53ef\u4ee5\u67e5\u770b\u4f7f\u7528LangChain\u548cOpenAI\u8fdb\u884c\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u7684\u53e6\u4e00\u4e2a\u4ee3\u7801\u5b9e\u4f8b\u3002"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 3.3 \u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.1 \u5982\u679c\u6ca1\u6709\u8f93\u51fa\u89e3\u6790\u5668\n", "\n", "\u5bf9\u4e8e\u7ed9\u5b9a\u7684\u8bc4\u4ef7`customer_review`, \u6211\u4eec\u5e0c\u671b\u63d0\u53d6\u4fe1\u606f\uff0c\u5e76\u6309\u4ee5\u4e0b\u683c\u5f0f\u8f93\u51fa\uff1a\n", "\n", "```python\n", "{\n", " \"gift\": False,\n", " \"delivery_days\": 5,\n", " \"price_value\": \"pretty 
affordable!\"\n", "}\n", "```"]}, {"cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": ["customer_review = \"\"\"\\\n", "This leaf blower is pretty amazing. It has four settings:\\\n", "candle blower, gentle breeze, windy city, and tornado. \\\n", "It arrived in two days, just in time for my wife's \\\n", "anniversary present. \\\n", "I think my wife liked it so much she was speechless. \\\n", "So far I've been the only one using it, and I've been \\\n", "using it every other morning to clear the leaves on our lawn. \\\n", "It's slightly more expensive than the other leaf blowers \\\n", "out there, but I think it's worth it for the extra features.\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32"]}, {"cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": ["review_template = \"\"\"\\\n", "For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? \\\n", "Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the product \\\n", "to arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,\\\n", "and output them as a comma separated Python list.\n", "\n", "Format the output as JSON with the following keys:\n", "gift\n", "delivery_days\n", "price_value\n", "\n", "text: {text}\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020langchain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['text'] output_parser=None partial_variables={} messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template='For the following text, extract the following information:\\n\\ngift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\\n\\ndelivery_days: How many days did it take for the product to arrive? 
If this information is not found, output -1.\\n\\nprice_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\\n\\nFormat the output as JSON with the following keys:\\ngift\\ndelivery_days\\nprice_value\\n\\ntext: {text}\\n', template_format='f-string', validate_template=True), additional_kwargs={})]\n"]}], "source": ["from langchain.prompts import ChatPromptTemplate\n", "prompt_template = ChatPromptTemplate.from_template(review_template)\n", "print(prompt_template)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u63d0\u793a\u6d88\u606f"]}, {"cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": ["messages = prompt_template.format_messages(text=customer_review)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u63d0\u53d6\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{\n", " \"gift\": false,\n", " \"delivery_days\": 2,\n", " \"price_value\": [\"It's slightly more expensive than the other leaf blowers out there, but I think it's worth it for the extra features.\"]\n", "}\n"]}], "source": ["chat = ChatOpenAI(temperature=0.0)\n", "response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udcdd \u5206\u6790\u4e0e\u603b\u7ed3\n", "`response.content`\u7c7b\u578b\u4e3a\u5b57\u7b26\u4e32\uff08`str`\uff09\uff0c\u800c\u5e76\u975e\u5b57\u5178(`dict`), \u76f4\u63a5\u4f7f\u7528`get`\u65b9\u6cd5\u4f1a\u62a5\u9519\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u9700\u8981\u8f93\u51fa\u89e3\u91ca\u5668\u3002"]}, {"cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [{"data": {"text/plain": ["str"]}, "execution_count": 44, "metadata": {}, "output_type": "execute_result"}], "source": ["type(response.content)"]}, 
{"cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [{"ename": "AttributeError", "evalue": "'str' object has no attribute 'get'", "output_type": "error", "traceback": ["\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", "Input \u001b[0;32mIn [45]\u001b[0m, in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mgift\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", "\u001b[0;31mAttributeError\u001b[0m: 'str' object has no attribute 'get'"]}], "source": ["response.content.get('gift')"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.2 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['text'] output_parser=None partial_variables={} messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template='\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\\n\\n\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\\n\\n\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\\n\\n\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python 
\u5217\u8868\u3002\\n\\n\u4f7f\u7528\u4ee5\u4e0b\u952e\u5c06\u8f93\u51fa\u683c\u5f0f\u5316\u4e3a JSON\uff1a\\n\u793c\u7269\\n\u4ea4\u8d27\u5929\u6570\\n\u4ef7\u94b1\\n\\n\u6587\u672c: {text}\\n', template_format='f-string', validate_template=True), additional_kwargs={})]\n"]}], "source": ["from langchain.prompts import ChatPromptTemplate\n", "\n", "customer_review = \"\"\"\\\n", "\u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\\\n", "\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \\\n", "\u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\\\n", "\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \\\n", "\u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \\\n", "\u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\\\n", "\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \\\n", "\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\\\n", "\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\"\"\"\n", "\n", "review_template = \"\"\"\\\n", "\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\n", "\n", "\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \\\n", "\u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\n", "\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\\\n", "\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\n", 
"\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\\\n", "\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\u3002\n", "\n", "\u4f7f\u7528\u4ee5\u4e0b\u952e\u5c06\u8f93\u51fa\u683c\u5f0f\u5316\u4e3a JSON\uff1a\n", "\u793c\u7269\n", "\u4ea4\u8d27\u5929\u6570\n", "\u4ef7\u94b1\n", "\n", "\u6587\u672c: {text}\n", "\"\"\"\n", "\n", "prompt_template = ChatPromptTemplate.from_template(review_template)\n", "print(prompt_template)\n"]}, {"cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{\n", " \"\u793c\u7269\": \"\u662f\u7684\",\n", " \"\u4ea4\u8d27\u5929\u6570\": 2,\n", " \"\u4ef7\u94b1\": [\"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"]\n", "}\n"]}], "source": ["messages = prompt_template.format_messages(text=customer_review)\n", "\n", "chat = ChatOpenAI(temperature=0.0)\n", "response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.3 LangChain\u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32"]}, {"cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": ["review_template_2 = \"\"\"\\\n", "For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? \\\n", "Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the product\\\n", "to arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,\\\n", "and output them as a comma separated Python list.\n", "\n", "text: {text}\n", "\n", "{format_instructions}\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020langchain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": ["prompt = ChatPromptTemplate.from_template(template=review_template_2)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udd25 \u6784\u9020\u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"gift\": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\t\"delivery_days\": string // How many days did it take for the product to arrive? If this information is not found, output -1.\n", "\t\"price_value\": string // Extract any sentences about the value or price, and output them as a comma separated Python list.\n", "}\n", "```\n"]}], "source": ["from langchain.output_parsers import ResponseSchema\n", "from langchain.output_parsers import StructuredOutputParser\n", "\n", "gift_schema = ResponseSchema(name=\"gift\",\n", " description=\"Was the item purchased\\\n", " as a gift for someone else? \\\n", " Answer True if yes,\\\n", " False if not or unknown.\")\n", "\n", "delivery_days_schema = ResponseSchema(name=\"delivery_days\",\n", " description=\"How many days\\\n", " did it take for the product\\\n", " to arrive? 
If this \\\n", " information is not found,\\\n", " output -1.\")\n", "\n", "price_value_schema = ResponseSchema(name=\"price_value\",\n", " description=\"Extract any\\\n", " sentences about the value or \\\n", " price, and output them as a \\\n", " comma separated Python list.\")\n", "\n", "\n", "response_schemas = [gift_schema, \n", " delivery_days_schema,\n", " price_value_schema]\n", "output_parser = StructuredOutputParser.from_response_schemas(response_schemas)\n", "format_instructions = output_parser.get_format_instructions()\n", "print(format_instructions)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u63d0\u793a\u6d88\u606f"]}, {"cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": ["messages = prompt.format_messages(text=customer_review, format_instructions=format_instructions)"]}, {"cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the productto arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\n", "\n", "text: \u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\n", "\n", "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"gift\": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\t\"delivery_days\": string // How many days did it take for the product to arrive? 
If this information is not found, output -1.\n", "\t\"price_value\": string // Extract any sentences about the value or price, and output them as a comma separated Python list.\n", "}\n", "```\n", "\n"]}], "source": ["print(messages[0].content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u63d0\u53d6\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["```json\n", "{\n", "\t\"gift\": false,\n", "\t\"delivery_days\": \"2\",\n", "\t\"price_value\": \"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"\n", "}\n", "```\n"]}], "source": ["response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 5\ufe0f\u20e3 \u4f7f\u7528\u8f93\u51fa\u89e3\u6790\u5668\u89e3\u6790\u8f93\u51fa"]}, {"cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'gift': False, 'delivery_days': '2', 'price_value': '\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9'}"]}, "execution_count": 54, "metadata": {}, "output_type": "execute_result"}], "source": ["output_dict = output_parser.parse(response.content)\n", "output_dict"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udcdd \u5206\u6790\u4e0e\u603b\u7ed3\n", "`output_dict`\u7c7b\u578b\u4e3a\u5b57\u5178(`dict`), \u53ef\u76f4\u63a5\u4f7f\u7528`get`\u65b9\u6cd5\u3002\u8fd9\u6837\u7684\u8f93\u51fa\u66f4\u65b9\u4fbf\u4e0b\u6e38\u4efb\u52a1\u7684\u5904\u7406\u3002"]}, {"cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [{"data": {"text/plain": ["dict"]}, "execution_count": 55, "metadata": {}, "output_type": "execute_result"}], "source": ["type(output_dict)"]}, {"cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [{"data": {"text/plain": ["'2'"]}, "execution_count": 56, "metadata": {}, 
"output_type": "execute_result"}], "source": ["output_dict.get('delivery_days')"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.4 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"\u793c\u7269\": string // \u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\t\"\u4ea4\u8d27\u5929\u6570\": string // \u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\t\"\u4ef7\u94b1\": string // \u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\n", "}\n", "```\n"]}], "source": ["# \u4e2d\u6587\n", "review_template_2 = \"\"\"\\\n", "\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\uff1a\n", "\n", "\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f\n", "\u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\n", "\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u5230\u8fbe\u9700\u8981\u591a\u5c11\u5929\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\n", 
"\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\u3002\n", "\n", "\u6587\u672c: {text}\n", "\n", "{format_instructions}\n", "\"\"\"\n", "\n", "from langchain.output_parsers import ResponseSchema\n", "from langchain.output_parsers import StructuredOutputParser\n", "\n", "gift_schema = ResponseSchema(name=\"\u793c\u7269\",\n", " description=\"\u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f\\\n", " \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c\\\n", " \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\")\n", "\n", "delivery_days_schema = ResponseSchema(name=\"\u4ea4\u8d27\u5929\u6570\",\n", " description=\"\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f\\\n", " \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\")\n", "\n", "price_value_schema = ResponseSchema(name=\"\u4ef7\u94b1\",\n", " description=\"\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\\\n", " \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\")\n", "\n", "\n", "response_schemas = [gift_schema, \n", " delivery_days_schema,\n", " price_value_schema]\n", "output_parser = StructuredOutputParser.from_response_schemas(response_schemas)\n", "format_instructions = output_parser.get_format_instructions()\n", "print(format_instructions)"]}, {"cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the productto arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\n", "\n", "text: \u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\n", "\n", "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"\u793c\u7269\": string // \u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\t\"\u4ea4\u8d27\u5929\u6570\": string // \u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\t\"\u4ef7\u94b1\": string // \u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\n", "}\n", "```\n", "\n"]}], "source": ["messages = 
prompt.format_messages(text=customer_review, format_instructions=format_instructions)\n", "print(messages[0].content)"]}, {"cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["```json\n", "{\n", "\t\"\u793c\u7269\": false,\n", "\t\"\u4ea4\u8d27\u5929\u6570\": \"\u4e24\u5929\u540e\",\n", "\t\"\u4ef7\u94b1\": \"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"\n", "}\n", "```\n"]}], "source": ["response = chat(messages)\n", "print(response.content)\n"]}, {"cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'\u793c\u7269': False, '\u4ea4\u8d27\u5929\u6570': '\u4e24\u5929\u540e', '\u4ef7\u94b1': '\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9'}"]}, "execution_count": 60, "metadata": {}, "output_type": "execute_result"}], "source": ["output_dict = output_parser.parse(response.content)\n", "output_dict"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## \u56db\u3001\u8865\u5145\u6750\u6599"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 4.1 \u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct)\n", "\u53c2\u8003\u8d44\u6599\uff1a[ReAct (Reason+Act) prompting in OpenAI GPT and LangChain](https://tsmatz.wordpress.com/2023/03/07/react-with-openai-gpt-and-langchain/)"]}, {"cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": ["!pip install -q wikipedia"]}, {"cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mThought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. 
Then I need to find the president under whom the admiral served as the ambassador to the United Kingdom.\n", "Action: Search[David Chanoff]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, \u0110o\u00e0n V\u0103n To\u1ea1i, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mDavid Chanoff has collaborated with several individuals, including Augustus A. White, Joycelyn Elders, \u0110o\u00e0n V\u0103n To\u1ea1i, William J. Crowe, Ariel Sharon, Kenneth Good, and Felix Zandman. I need to search each of these individuals to find the U.S. Navy admiral. \n", "Action: Search[Augustus A. White]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mAugustus A. White III (born June 4, 1936) is the Ellen and Melvin Gordon Distinguished Professor of Medical Education and Professor of Orthopedic Surgery at Harvard Medical School and a former Orthopaedic Surgeon-in-Chief at Beth Israel Hospital, Boston, Massachusetts. White was the first African American medical student at Stanford, surgical resident at Yale University, professor of medicine at Yale, and department head at a Harvard-affiliated hospital (Beth Israel Hospital).\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mAugustus A. White III is not a U.S. Navy admiral. 
I need to search the next individual, Joycelyn Elders.\n", "Action: Search[Joycelyn Elders]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mMinnie Joycelyn Elders (born Minnie Lee Jones; August 13, 1933) is an American pediatrician and public health administrator who served as Surgeon General of the United States from 1993 to 1994. A vice admiral in the Public Health Service Commissioned Corps, she was the second woman, second person of color, and first African American to serve as Surgeon General. \n", "Elders is best known for her frank discussion of her views on controversial issues such as drug legalization, masturbation, and distributing contraception in schools. She was forced to resign in December 1994 amidst controversy as a result of her views. She is currently a professor emerita of pediatrics at the University of Arkansas for Medical Sciences.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mJoycelyn Elders is a pediatrician and public health administrator, not a U.S. Navy admiral. I need to search the next individual, \u0110o\u00e0n V\u0103n To\u1ea1i.\n", "Action: Search[\u0110o\u00e0n V\u0103n To\u1ea1i]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3m\u0110o\u00e0n V\u0103n To\u1ea1i (1945 in Vietnam \u2013 November 2017 in California) was a Vietnamese-born naturalized American activist and the author of The Vietnamese Gulag (Simon & Schuster, 1986).\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3m\u0110o\u00e0n V\u0103n To\u1ea1i is an activist and author, not a U.S. Navy admiral. I need to search the next individual, William J. Crowe.\n", "Action: Search[William J. Crowe]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 \u2013 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. 
Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mWilliam J. Crowe is a U.S. Navy admiral and diplomat. I need to find the president under whom he served as the ambassador to the United Kingdom.\n", "Action: Search[William J. Crowe ambassador to United Kingdom]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 \u2013 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mWilliam J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton. So the answer is Bill Clinton.\n", "Action: Finish[Bill Clinton]\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'Bill Clinton'"]}, "execution_count": 69, "metadata": {}, "output_type": "execute_result"}], "source": ["from langchain.docstore.wikipedia import Wikipedia\n", "from langchain.llms import OpenAI\n", "from langchain.agents import initialize_agent, Tool, AgentExecutor\n", "from langchain.agents.react.base import DocstoreExplorer\n", "\n", "docstore=DocstoreExplorer(Wikipedia())\n", "tools = [\n", " Tool(\n", " name=\"Search\",\n", " func=docstore.search,\n", " description=\"Search for a term in the docstore.\",\n", " ),\n", " Tool(\n", " name=\"Lookup\",\n", " func=docstore.lookup,\n", " description=\"Lookup a term in the docstore.\",\n", " )\n", "]\n", "\n", "# \u4f7f\u7528\u5927\u8bed\u8a00\u6a21\u578b\n", "llm = OpenAI(\n", " model_name=\"gpt-3.5-turbo\",\n", " temperature=0,\n", ")\n", "\n", "# \u521d\u59cb\u5316ReAct\u4ee3\u7406\n", "react = initialize_agent(tools, llm, agent=\"react-docstore\", 
verbose=True)\n", "agent_executor = AgentExecutor.from_agent_and_tools(\n", " agent=react.agent,\n", " tools=tools,\n", " verbose=True,\n", ")\n", "\n", "\n", "question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n", "agent_executor.run(question)"]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12"}}, "nbformat": 4, "nbformat_minor": 4} \ No newline at end of file +{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# \u7b2c\u4e8c\u7ae0 \u6a21\u578b\uff0c\u63d0\u793a\u548c\u8f93\u51fa\u89e3\u91ca\u5668\n", "\n", " - [\u4e00\u3001\u8bbe\u7f6eOpenAI API Key](#\u4e00\u3001\u8bbe\u7f6eOpenAI-API-Key)\n", " - [\u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI](#\u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI)\n", " - [2.1 \u8ba1\u7b971+1](#2.1-\u8ba1\u7b971+1)\n", " - [2.2 \u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6](#2.2-\u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6)\n", " - [2.3 \u4e2d\u6587\u7248](#2.3-\u4e2d\u6587\u7248)\n", " - [\u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI](#\u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI)\n", " - [3.1 \u6a21\u578b](#3.1-\u6a21\u578b)\n", " - [3.2 \u63d0\u793a\u6a21\u677f](#3.2-\u63d0\u793a\u6a21\u677f)\n", " - [3.2.1 \u4f7f\u7528LangChain\u63d0\u793a\u6a21\u7248](#3.2.1-\u4f7f\u7528LangChain\u63d0\u793a\u6a21\u7248)\n", " - [3.2.2 \u4e2d\u6587\u7248](#3.2.2-\u4e2d\u6587\u7248)\n", " - [3.2.2 \u4e3a\u4ec0\u4e48\u9700\u8981\u63d0\u793a\u6a21\u7248](#3.2.2-\u4e3a\u4ec0\u4e48\u9700\u8981\u63d0\u793a\u6a21\u7248)\n", " - [3.3 \u8f93\u51fa\u89e3\u6790\u5668](#3.3-\u8f93\u51fa\u89e3\u6790\u5668)\n", " - 
[3.3.1 \u5982\u679c\u6ca1\u6709\u8f93\u51fa\u89e3\u6790\u5668](#3.3.1-\u5982\u679c\u6ca1\u6709\u8f93\u51fa\u89e3\u6790\u5668)\n", " - [3.3.2 \u4e2d\u6587\u7248](#3.3.2-\u4e2d\u6587\u7248)\n", " - [3.3.3 LangChain\u8f93\u51fa\u89e3\u6790\u5668](#3.3.3-LangChain\u8f93\u51fa\u89e3\u6790\u5668)\n", " - [3.3.4 \u4e2d\u6587\u7248](#3.3.4-\u4e2d\u6587\u7248)\n", " - [\u56db\u3001\u8865\u5145\u6750\u6599](#\u56db\u3001\u8865\u5145\u6750\u6599)\n", " - [4.1 \u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct)](#4.1-\u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct))\n"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e00\u3001\u8bbe\u7f6eOpenAI API Key\n", "\n", "\u767b\u9646 [OpenAI \u8d26\u6237](https://platform.openai.com/account/api-keys) \u83b7\u53d6API Key\uff0c\u7136\u540e\u5c06\u5176\u8bbe\u7f6e\u4e3a\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "- \u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u5168\u5c40\u73af\u5883\u53d8\u91cf\uff0c\u53ef\u4ee5\u53c2\u8003[\u77e5\u4e4e\u6587\u7ae0](https://zhuanlan.zhihu.com/p/627665725)\u3002\n", "- \u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u672c\u5730/\u9879\u76ee\u73af\u5883\u53d8\u91cf\uff0c\u5728\u672c\u6587\u4ef6\u76ee\u5f55\u4e0b\u521b\u5efa`.env`\u6587\u4ef6, \u6253\u5f00\u6587\u4ef6\u8f93\u5165\u4ee5\u4e0b\u5185\u5bb9\u3002\n", "\n", "

\n", " OPENAI_API_KEY=\"your_api_key\" \n", "

\n", " \n", " \u66ff\u6362\"your_api_key\"\u4e3a\u4f60\u81ea\u5df1\u7684 API Key"]}, {"cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": ["# \u4e0b\u8f7d\u9700\u8981\u7684\u5305python-dotenv\u548copenai\n", "# \u5982\u679c\u4f60\u9700\u8981\u67e5\u770b\u5b89\u88c5\u8fc7\u7a0b\u65e5\u5fd7\uff0c\u53ef\u5220\u9664 -q \n", "!pip install -q python-dotenv\n", "!pip install -q openai"]}, {"cell_type": "code", "execution_count": 2, "metadata": {"tags": []}, "outputs": [], "source": ["import os\n", "import openai\n", "from dotenv import load_dotenv, find_dotenv\n", "\n", "# \u8bfb\u53d6\u672c\u5730/\u9879\u76ee\u7684\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "# find_dotenv()\u5bfb\u627e\u5e76\u5b9a\u4f4d.env\u6587\u4ef6\u7684\u8def\u5f84\n", "# load_dotenv()\u8bfb\u53d6\u8be5.env\u6587\u4ef6\uff0c\u5e76\u5c06\u5176\u4e2d\u7684\u73af\u5883\u53d8\u91cf\u52a0\u8f7d\u5230\u5f53\u524d\u7684\u8fd0\u884c\u73af\u5883\u4e2d \n", "# \u5982\u679c\u4f60\u8bbe\u7f6e\u7684\u662f\u5168\u5c40\u7684\u73af\u5883\u53d8\u91cf\uff0c\u8fd9\u884c\u4ee3\u7801\u5219\u6ca1\u6709\u4efb\u4f55\u4f5c\u7528\u3002\n", "_ = load_dotenv(find_dotenv())\n", "\n", "# \u83b7\u53d6\u73af\u5883\u53d8\u91cf OPENAI_API_KEY\n", "openai.api_key = os.environ['OPENAI_API_KEY'] "]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e8c\u3001\u76f4\u63a5\u4f7f\u7528OpenAI\n", "\n", "\u6211\u4eec\u5148\u4ece\u76f4\u63a5\u8c03\u7528OpenAI\u7684API\u5f00\u59cb\u3002\n", "\n", "`get_completion`\u51fd\u6570\u662f\u57fa\u4e8e`openai`\u7684\u5c01\u88c5\u51fd\u6570\uff0c\u5bf9\u4e8e\u7ed9\u5b9a\u63d0\u793a\uff08prompt\uff09\u8f93\u51fa\u76f8\u5e94\u7684\u56de\u7b54\u3002\u5176\u5305\u542b\u4e24\u4e2a\u53c2\u6570\n", " \n", " - `prompt` \u5fc5\u9700\u8f93\u5165\u53c2\u6570\u3002 
\u4f60\u7ed9\u6a21\u578b\u7684**\u63d0\u793a\uff0c\u53ef\u4ee5\u662f\u4e00\u4e2a\u95ee\u9898\uff0c\u53ef\u4ee5\u662f\u4f60\u9700\u8981\u6a21\u578b\u5e2e\u52a9\u4f60\u505a\u7684\u4e8b**\uff08\u6539\u53d8\u6587\u672c\u5199\u4f5c\u98ce\u683c\uff0c\u7ffb\u8bd1\uff0c\u56de\u590d\u6d88\u606f\u7b49\u7b49\uff09\u3002\n", " - `model` \u975e\u5fc5\u9700\u8f93\u5165\u53c2\u6570\u3002\u9ed8\u8ba4\u4f7f\u7528gpt-3.5-turbo\u3002\u4f60\u4e5f\u53ef\u4ee5\u9009\u62e9\u5176\u4ed6\u6a21\u578b\u3002\n", " \n", "\u8fd9\u91cc\u7684\u63d0\u793a\u5bf9\u5e94\u6211\u4eec\u7ed9ChatGPT\u7684\u95ee\u9898\uff0c\u51fd\u6570\u7ed9\u51fa\u7684\u8f93\u51fa\u5219\u5bf9\u5e94ChatGPT\u7ed9\u6211\u4eec\u7684\u7b54\u6848\u3002"]}, {"cell_type": "code", "execution_count": 3, "metadata": {"tags": []}, "outputs": [], "source": ["def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n", " \n", " messages = [{\"role\": \"user\", \"content\": prompt}]\n", " \n", " response = openai.ChatCompletion.create(\n", " model=model,\n", " messages=messages,\n", " temperature=0, \n", " )\n", " return response.choices[0].message[\"content\"]"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.1 \u8ba1\u7b971+1\n", "\n", "\u6211\u4eec\u6765\u4e00\u4e2a\u7b80\u5355\u7684\u4f8b\u5b50 - \u5206\u522b\u7528\u4e2d\u82f1\u6587\u95ee\u95ee\u6a21\u578b\n", "\n", "- \u4e2d\u6587\u63d0\u793a(Prompt in Chinese)\uff1a `1+1\u662f\u4ec0\u4e48\uff1f`\n", "- \u82f1\u6587\u63d0\u793a(Prompt in English)\uff1a `What is 1+1?`"]}, {"cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [{"data": {"text/plain": ["'1+1\u7b49\u4e8e2\u3002'"]}, "execution_count": 4, "metadata": {}, "output_type": "execute_result"}], "source": ["# \u4e2d\u6587\n", "get_completion(\"1+1\u662f\u4ec0\u4e48\uff1f\")"]}, {"cell_type": "code", "execution_count": 5, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["'1+1 equals 2.'"]}, "execution_count": 5, "metadata": {}, "output_type": "execute_result"}], "source": ["# 
\u82f1\u6587\n", "get_completion(\"What is 1+1?\")"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.2 \u7528\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u6d77\u76d7\u90ae\u4ef6\n", "\n", "\u4e0a\u9762\u7684\u7b80\u5355\u4f8b\u5b50\uff0c\u6a21\u578b`gpt-3.5-turbo`\u5bf9\u6211\u4eec\u7684\u5173\u4e8e1+1\u662f\u4ec0\u4e48\u7684\u63d0\u95ee\u7ed9\u51fa\u4e86\u56de\u7b54\u3002\n", "\n", "\u73b0\u5728\u6211\u4eec\u6765\u770b\u4e00\u4e2a\u590d\u6742\u4e00\u70b9\u7684\u4f8b\u5b50\uff1a \n", "\n", "\u5047\u8bbe\u6211\u4eec\u662f\u7535\u5546\u516c\u53f8\u5458\u5de5\uff0c\u6211\u4eec\u7684\u987e\u5ba2\u662f\u4e00\u540d\u6d77\u76d7A\uff0c\u4ed6\u5728\u6211\u4eec\u7684\u7f51\u7ad9\u4e0a\u4e70\u4e86\u4e00\u4e2a\u69a8\u6c41\u673a\u7528\u6765\u505a\u5976\u6614\uff0c\u5728\u5236\u4f5c\u5976\u6614\u7684\u8fc7\u7a0b\u4e2d\uff0c\u5976\u6614\u7684\u76d6\u5b50\u98de\u4e86\u51fa\u53bb\uff0c\u5f04\u5f97\u53a8\u623f\u5899\u4e0a\u5230\u5904\u90fd\u662f\u3002\u4e8e\u662f\u6d77\u76d7A\u7ed9\u6211\u4eec\u7684\u5ba2\u670d\u4e2d\u5fc3\u5199\u6765\u4ee5\u4e0b\u90ae\u4ef6\uff1a`customer_email`"]}, {"cell_type": "code", "execution_count": 6, "metadata": {"tags": []}, "outputs": [], "source": ["customer_email = \"\"\"\n", "Arrr, I be fuming that me blender lid \\\n", "flew off and splattered me kitchen walls \\\n", "with smoothie! And to make matters worse,\\\n", "the warranty don't cover the cost of \\\n", "cleaning up me kitchen. 
I need yer help \\\n", "right now, matey!\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u6211\u4eec\u7684\u5ba2\u670d\u4eba\u5458\u5bf9\u4e8e\u6d77\u76d7\u7684\u63aa\u8f9e\u8868\u8fbe\u89c9\u5f97\u6709\u70b9\u96be\u4ee5\u7406\u89e3\u3002 \u73b0\u5728\u6211\u4eec\u60f3\u8981\u5b9e\u73b0\u4e24\u4e2a\u5c0f\u76ee\u6807\uff1a\n", "\n", "- \u8ba9\u6a21\u578b\u7528\u7f8e\u5f0f\u82f1\u8bed\u7684\u8868\u8fbe\u65b9\u5f0f\u5c06\u6d77\u76d7\u7684\u90ae\u4ef6\u8fdb\u884c\u7ffb\u8bd1\uff0c\u5ba2\u670d\u4eba\u5458\u53ef\u4ee5\u66f4\u597d\u7406\u89e3\u3002*\u8fd9\u91cc\u6d77\u76d7\u7684\u82f1\u6587\u8868\u8fbe\u53ef\u4ee5\u7406\u89e3\u4e3a\u82f1\u6587\u7684\u65b9\u8a00\uff0c\u5176\u4e0e\u7f8e\u5f0f\u82f1\u8bed\u7684\u5173\u7cfb\uff0c\u5c31\u5982\u56db\u5ddd\u8bdd\u4e0e\u666e\u901a\u8bdd\u7684\u5173\u7cfb\u3002*\n", "- \u8ba9\u6a21\u578b\u5728\u7ffb\u8bd1\u65f6\u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\u8fdb\u884c\u8868\u8fbe\uff0c\u5ba2\u670d\u4eba\u5458\u7684\u5fc3\u60c5\u4e5f\u4f1a\u66f4\u597d\u3002\n", "\n", "\u6839\u636e\u8fd9\u4e24\u4e2a\u5c0f\u76ee\u6807\uff0c\u5b9a\u4e49\u4e00\u4e0b\u6587\u672c\u8868\u8fbe\u98ce\u683c\uff1a`style`"]}, {"cell_type": "code", "execution_count": 7, "metadata": {"tags": []}, "outputs": [], "source": ["# \u7f8e\u5f0f\u82f1\u8bed + \u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "style = \"\"\"American English \\\n", "in a calm and respectful tone\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0b\u4e00\u6b65\u9700\u8981\u505a\u7684\u662f\u5c06`customer_email`\u548c`style`\u7ed3\u5408\u8d77\u6765\u6784\u9020\u6211\u4eec\u7684\u63d0\u793a:`prompt`"]}, {"cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": ["# \u975e\u6b63\u5f0f\u7528\u8bed\n", "customer_email = \"\"\" \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\\\n", "\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\\\n", 
"\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\\\n", "\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\\\n", "\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 9, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Translate the text that is delimited by triple backticks \n", "into a style that is American English in a calm and respectful tone\n", ".\n", "text: ``` \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "```\n", "\n"]}], "source": ["# \u8981\u6c42\u6a21\u578b\u6839\u636e\u7ed9\u51fa\u7684\u8bed\u8c03\u8fdb\u884c\u8f6c\u5316\n", "prompt = f\"\"\"Translate the text \\\n", "that is delimited by triple backticks \n", "into a style that is {style}.\n", "text: ```{customer_email}```\n", "\"\"\"\n", "\n", "print(prompt)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["`prompt` \u6784\u9020\u597d\u4e86\uff0c\u6211\u4eec\u53ef\u4ee5\u8c03\u7528`get_completion`\u5f97\u5230\u6211\u4eec\u60f3\u8981\u7684\u7ed3\u679c - \u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\uff0c\u7f8e\u5f0f\u82f1\u8bed\u8868\u8fbe\u7684\u6d77\u76d7\u8bed\u8a00\u90ae\u4ef6"]}, {"cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": ["response = get_completion(prompt)"]}, {"cell_type": "code", "execution_count": 11, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["\"Oh, I'm really frustrated because the lid of my blender fell off and splattered the milkshake all over the kitchen wall! 
To make matters worse, the warranty doesn't cover the cost of cleaning the kitchen. I could really use your help right now, buddy!\""]}, "execution_count": 11, "metadata": {}, "output_type": "execute_result"}], "source": ["response"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5bf9\u6bd4\u8bed\u8a00\u98ce\u683c\u8f6c\u6362\u524d\u540e\uff0c\u7528\u8bcd\u66f4\u4e3a\u6b63\u5f0f\uff0c\u66ff\u6362\u4e86\u6781\u7aef\u60c5\u7eea\u7684\u8868\u8fbe\uff0c\u5e76\u8868\u8fbe\u4e86\u611f\u8c22\u3002\n", "\n", "\u2728 \u4f60\u53ef\u4ee5\u5c1d\u8bd5\u4fee\u6539\u63d0\u793a\uff0c\u770b\u53ef\u4ee5\u5f97\u5230\u4ec0\u4e48\u4e0d\u4e00\u6837\u7684\u7ed3\u679c\ud83d\ude09"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["### 2.3 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": ["# \u666e\u901a\u8bdd + \u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "style = \"\"\"\u6b63\u5f0f\u666e\u901a\u8bdd \\\n", "\u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u6b63\u5f0f\u666e\u901a\u8bdd \u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u8c03\n", "\u98ce\u683c\u3002\n", "\u6587\u672c: ``` \n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "```\n", "\n"]}], "source": ["# \u8981\u6c42\u6a21\u578b\u6839\u636e\u7ed9\u51fa\u7684\u8bed\u8c03\u8fdb\u884c\u8f6c\u5316\n", "prompt = 
f\"\"\"\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\\\n", "\u7ffb\u8bd1\u6210\u4e00\u79cd{style}\u98ce\u683c\u3002\n", "\u6587\u672c: ```{customer_email}```\n", "\"\"\"\n", "\n", "print(prompt)\n", "\n"]}, {"cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [{"data": {"text/plain": ["'\u5c0a\u656c\u7684\u670b\u53cb\u4eec\uff0c\u6211\u611f\u5230\u975e\u5e38\u4e0d\u5b89\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u5b50\u4e0d\u614e\u6389\u843d\uff0c\u5bfc\u81f4\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u58c1\u4e0a\uff01\u66f4\u52a0\u4ee4\u4eba\u7cdf\u5fc3\u7684\u662f\uff0c\u4fdd\u4fee\u670d\u52a1\u5e76\u4e0d\u5305\u542b\u53a8\u623f\u6e05\u6d01\u7684\u8d39\u7528\u3002\u6b64\u523b\uff0c\u6211\u771f\u8bda\u5730\u8bf7\u6c42\u5404\u4f4d\u7684\u5e2e\u52a9\uff0c\u670b\u53cb\u4eec\uff01'"]}, "execution_count": 14, "metadata": {}, "output_type": "execute_result"}], "source": ["response = get_completion(prompt)\n", "\n", "response"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["## \u4e09\u3001\u901a\u8fc7LangChain\u4f7f\u7528OpenAI\n", "\n", "\u5728\u524d\u9762\u4e00\u90e8\u5206\uff0c\u6211\u4eec\u901a\u8fc7\u5c01\u88c5\u51fd\u6570`get_completion`\u76f4\u63a5\u8c03\u7528\u4e86OpenAI\u5b8c\u6210\u4e86\u5bf9\u65b9\u8a00\u90ae\u4ef6\u8fdb\u884c\u4e86\u7684\u7ffb\u8bd1\uff0c\u5f97\u5230\u7528\u5e73\u548c\u5c0a\u91cd\u7684\u8bed\u6c14\u3001\u6b63\u5f0f\u7684\u666e\u901a\u8bdd\u8868\u8fbe\u7684\u90ae\u4ef6\u3002\n", "\n", "\u8ba9\u6211\u4eec\u5c1d\u8bd5\u4f7f\u7528LangChain\u6765\u5b9e\u73b0\u76f8\u540c\u7684\u529f\u80fd\u3002"]}, {"cell_type": "code", "execution_count": 15, "metadata": {"tags": []}, "outputs": [], "source": ["# \u5982\u679c\u4f60\u9700\u8981\u67e5\u770b\u5b89\u88c5\u8fc7\u7a0b\u65e5\u5fd7\uff0c\u53ef\u5220\u9664 -q \n", "# --upgrade \u8ba9\u6211\u4eec\u53ef\u4ee5\u5b89\u88c5\u5230\u6700\u65b0\u7248\u672c\u7684 langchain\n", "!pip install -q --upgrade langchain"]}, {"cell_type": 
"markdown", "metadata": {"tags": []}, "source": ["### 3.1 \u6a21\u578b\n", "\n", "\u4ece`langchain.chat_models`\u5bfc\u5165`OpenAI`\u7684\u5bf9\u8bdd\u6a21\u578b`ChatOpenAI`\u3002 \u9664\u53bbOpenAI\u4ee5\u5916\uff0c`langchain.chat_models`\u8fd8\u96c6\u6210\u4e86\u5176\u4ed6\u5bf9\u8bdd\u6a21\u578b\uff0c\u66f4\u591a\u7ec6\u8282\u53ef\u4ee5\u67e5\u770b[Langchain\u5b98\u65b9\u6587\u6863](https://python.langchain.com/en/latest/modules/models/chat/integrations.html)\u3002"]}, {"cell_type": "code", "execution_count": 16, "metadata": {"tags": []}, "outputs": [], "source": ["from langchain.chat_models import ChatOpenAI"]}, {"cell_type": "code", "execution_count": 17, "metadata": {"tags": []}, "outputs": [{"data": {"text/plain": ["ChatOpenAI(cache=None, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, client=, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key='sk-********', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)"]}, "execution_count": 17, "metadata": {}, "output_type": "execute_result"}], "source": ["# \u8fd9\u91cc\u6211\u4eec\u5c06\u53c2\u6570temperature\u8bbe\u7f6e\u4e3a0.0\uff0c\u4ece\u800c\u51cf\u5c11\u751f\u6210\u7b54\u6848\u7684\u968f\u673a\u6027\u3002\n", "# \u5982\u679c\u4f60\u60f3\u8981\u6bcf\u6b21\u5f97\u5230\u4e0d\u4e00\u6837\u7684\u6709\u65b0\u610f\u7684\u7b54\u6848\uff0c\u53ef\u4ee5\u5c1d\u8bd5\u8c03\u6574\u8be5\u53c2\u6570\u3002\n", "chat = ChatOpenAI(temperature=0.0)\n", "chat"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0a\u9762\u7684\u8f93\u51fa\u663e\u793aChatOpenAI\u7684\u9ed8\u8ba4\u6a21\u578b\u4e3a`gpt-3.5-turbo`"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 3.2 \u63d0\u793a\u6a21\u677f\n", "\n", 
"\u5728\u524d\u9762\u7684\u4f8b\u5b50\u4e2d\uff0c\u6211\u4eec\u901a\u8fc7[f\u5b57\u7b26\u4e32](https://docs.python.org/zh-cn/3/tutorial/inputoutput.html#tut-f-strings)\u628aPython\u8868\u8fbe\u5f0f\u7684\u503c`style`\u548c`customer_email`\u6dfb\u52a0\u5230`prompt`\u5b57\u7b26\u4e32\u5185\u3002\n", "\n", "```python\n", "prompt = f\"\"\"Translate the text \\\n", "that is delimited by triple backticks \n", "into a style that is {style}.\n", "text: ```{customer_email}```\n", "\"\"\"\n", "```\n", "`langchain`\u63d0\u4f9b\u4e86\u63a5\u53e3\u65b9\u4fbf\u5feb\u901f\u7684\u6784\u9020\u548c\u4f7f\u7528\u63d0\u793a\u3002\u73b0\u5728\u6211\u4eec\u6765\u770b\u770b\u5982\u4f55\u4f7f\u7528`langchain`\u6765\u6784\u9020\u63d0\u793a\u3002"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["#### 3.2.1 \u4f7f\u7528LangChain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32\n", "\u6211\u4eec\u6784\u9020\u4e00\u4e2a\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32\uff1a`template_string`"]}, {"cell_type": "code", "execution_count": 18, "metadata": {"tags": []}, "outputs": [], "source": ["template_string = \"\"\"Translate the text \\\n", "that is delimited by triple backticks \\\n", "into a style that is {style}. 
\\\n", "text: ```{text}```\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020LangChain\u63d0\u793a\u6a21\u7248\n", "\u6211\u4eec\u8c03\u7528`ChatPromptTemplate.from_template()`\u51fd\u6570\u5c06\u4e0a\u9762\u7684\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32`template_string`\u8f6c\u6362\u4e3a\u63d0\u793a\u6a21\u7248`prompt_template`"]}, {"cell_type": "code", "execution_count": 19, "metadata": {"tags": []}, "outputs": [], "source": ["from langchain.prompts import ChatPromptTemplate\n", "prompt_template = ChatPromptTemplate.from_template(template_string)"]}, {"cell_type": "code", "execution_count": 20, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['style', 'text'] output_parser=None partial_variables={} template='Translate the text that is delimited by triple backticks into a style that is {style}. text: ```{text}```\\n' template_format='f-string' validate_template=True\n"]}], "source": ["print(prompt_template.messages[0].prompt)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4ece\u4e0a\u9762\u7684\u8f93\u51fa\u53ef\u4ee5\u770b\u51fa\uff0c`prompt_template` \u6709\u4e24\u4e2a\u8f93\u5165\u53d8\u91cf\uff1a `style` \u548c `text`\u3002"]}, {"cell_type": "code", "execution_count": 21, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["['style', 'text']\n"]}], "source": ["print(prompt_template.messages[0].prompt.input_variables)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u5ba2\u6237\u6d88\u606f\u63d0\u793a\n", "\n", "langchain\u63d0\u793a\u6a21\u7248`prompt_template`\u9700\u8981\u4e24\u4e2a\u8f93\u5165\u53d8\u91cf\uff1a `style` \u548c `text`\u3002 \u8fd9\u91cc\u5206\u522b\u5bf9\u5e94 \n", "- `customer_style`: \u6211\u4eec\u60f3\u8981\u7684\u987e\u5ba2\u90ae\u4ef6\u98ce\u683c\n", "- `customer_email`: 
\u987e\u5ba2\u7684\u539f\u59cb\u90ae\u4ef6\u6587\u672c\u3002"]}, {"cell_type": "code", "execution_count": 22, "metadata": {"tags": []}, "outputs": [], "source": ["customer_style = \"\"\"American English \\\n", "in a calm and respectful tone\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 23, "metadata": {"tags": []}, "outputs": [], "source": ["customer_email = \"\"\"\n", "Arrr, I be fuming that me blender lid \\\n", "flew off and splattered me kitchen walls \\\n", "with smoothie! And to make matters worse, \\\n", "the warranty don't cover the cost of \\\n", "cleaning up me kitchen. I need yer help \\\n", "right now, matey!\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5bf9\u4e8e\u7ed9\u5b9a\u7684`customer_style`\u548c`customer_email`, \u6211\u4eec\u53ef\u4ee5\u4f7f\u7528\u63d0\u793a\u6a21\u7248`prompt_template`\u7684`format_messages`\u65b9\u6cd5\u751f\u6210\u60f3\u8981\u7684\u5ba2\u6237\u6d88\u606f`customer_messages`\u3002"]}, {"cell_type": "code", "execution_count": 24, "metadata": {"tags": []}, "outputs": [], "source": ["customer_messages = prompt_template.format_messages(\n", " style=customer_style,\n", " text=customer_email)"]}, {"cell_type": "code", "execution_count": 25, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n"]}], "source": ["print(type(customer_messages))\n", "print(type(customer_messages[0]))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u53ef\u4ee5\u770b\u51fa`customer_messages`\u53d8\u91cf\u7c7b\u578b\u4e3a\u5217\u8868(`list`)\uff0c\u800c\u5217\u8868\u91cc\u7684\u5143\u7d20\u53d8\u91cf\u7c7b\u578b\u4e3alangchain\u81ea\u5b9a\u4e49\u6d88\u606f(`langchain.schema.HumanMessage`)\u3002\n", "\n", "\u6253\u5370\u7b2c\u4e00\u4e2a\u5143\u7d20\u53ef\u4ee5\u5f97\u5230\u5982\u4e0b:"]}, {"cell_type": "code", "execution_count": 26, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["content=\"Translate the text that is 
delimited by triple backticks into a style that is American English in a calm and respectful tone\\n. text: ```\\nArrr, I be fuming that me blender lid flew off and splattered me kitchen walls with smoothie! And to make matters worse, the warranty don't cover the cost of cleaning up me kitchen. I need yer help right now, matey!\\n```\\n\" additional_kwargs={} example=False\n"]}], "source": ["print(customer_messages[0])"]}, {"cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["content='\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u6b63\u5f0f\u666e\u901a\u8bdd \u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u6c14\\n\u98ce\u683c\u3002\u6587\u672c: ```\\n\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\\n```\\n' additional_kwargs={} example=False\n"]}], "source": ["# \u4e2d\u6587\u63d0\u793a\n", "from langchain.prompts import ChatPromptTemplate\n", "\n", "template_string = \"\"\"\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\\\n", "\u7ffb\u8bd1\u6210\u4e00\u79cd{style}\u98ce\u683c\u3002\\\n", "\u6587\u672c: ```{text}```\n", "\"\"\"\n", "prompt_template = ChatPromptTemplate.from_template(template_string)\n", "\n", "customer_style = \"\"\"\u6b63\u5f0f\u666e\u901a\u8bdd \\\n", "\u7528\u4e00\u4e2a\u5e73\u9759\u3001\u5c0a\u656c\u7684\u8bed\u6c14\n", "\"\"\"\n", "\n", "customer_email = \"\"\"\n", "\u963f\uff0c\u6211\u5f88\u751f\u6c14\uff0c\\\n", "\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u6389\u4e86\uff0c\\\n", 
"\u628a\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u4e0a\uff01\\\n", "\u66f4\u7cdf\u7cd5\u7684\u662f\uff0c\u4fdd\u4fee\u4e0d\u5305\u62ec\u6253\u626b\u53a8\u623f\u7684\u8d39\u7528\u3002\\\n", "\u6211\u73b0\u5728\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u4f19\u8ba1\uff01\n", "\"\"\"\n", "\n", "customer_messages = prompt_template.format_messages(\n", " style=customer_style,\n", " text=customer_email)\n", "\n", "\n", "print(customer_messages[0])"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u8f6c\u6362\u5ba2\u6237\u6d88\u606f\u98ce\u683c\n", "\n", "\u73b0\u5728\u6211\u4eec\u53ef\u4ee5\u8c03\u7528[\u6a21\u578b](#model)\u90e8\u5206\u5b9a\u4e49\u7684chat\u6a21\u578b\u6765\u5b9e\u73b0\u8f6c\u6362\u5ba2\u6237\u6d88\u606f\u98ce\u683c\u3002\u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u4eec\u5df2\u7ecf\u5b9e\u73b0\u4e86\u5728\u524d\u4e00\u90e8\u5206\u7684\u4efb\u52a1\u3002"]}, {"cell_type": "code", "execution_count": 28, "metadata": {"tags": []}, "outputs": [], "source": ["customer_response = chat(customer_messages)"]}, {"cell_type": "code", "execution_count": 29, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u5c0a\u656c\u7684\u4f19\u8ba1\uff0c\u6211\u611f\u5230\u975e\u5e38\u6124\u6012\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u5b50\u4e0d\u614e\u6389\u843d\uff0c\u5bfc\u81f4\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u58c1\u4e0a\uff01\u66f4\u52a0\u4ee4\u4eba\u7cdf\u5fc3\u7684\u662f\uff0c\u4fdd\u4fee\u5e76\u4e0d\u5305\u62ec\u53a8\u623f\u6e05\u6d01\u7684\u8d39\u7528\u3002\u73b0\u5728\uff0c\u6211\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u8bf7\u4f60\u7ed9\u4e88\u63f4\u624b\uff01\n"]}], "source": ["print(customer_response.content)"]}, {"cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": 
["\u5c0a\u656c\u7684\u4f19\u8ba1\uff0c\u6211\u611f\u5230\u975e\u5e38\u6124\u6012\uff0c\u56e0\u4e3a\u6211\u7684\u6405\u62cc\u673a\u76d6\u5b50\u4e0d\u614e\u6389\u843d\uff0c\u5bfc\u81f4\u5976\u6614\u6e85\u5230\u4e86\u53a8\u623f\u7684\u5899\u58c1\u4e0a\uff01\u66f4\u52a0\u4ee4\u4eba\u7cdf\u5fc3\u7684\u662f\uff0c\u4fdd\u4fee\u5e76\u4e0d\u5305\u62ec\u53a8\u623f\u6e05\u6d01\u7684\u8d39\u7528\u3002\u73b0\u5728\uff0c\u6211\u9700\u8981\u4f60\u7684\u5e2e\u52a9\uff0c\u8bf7\u4f60\u7ed9\u4e88\u63f4\u624b\uff01\n"]}], "source": ["print(customer_response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 5\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u56de\u590d\u6d88\u606f\u63d0\u793a\n", "\n", "\u63a5\u4e0b\u6765\uff0c\u6211\u4eec\u66f4\u8fdb\u4e00\u6b65\uff0c\u5c06\u5ba2\u670d\u4eba\u5458\u56de\u590d\u7684\u6d88\u606f\uff0c\u8f6c\u6362\u4e3a\u6d77\u76d7\u7684\u8bed\u8a00\u98ce\u683c\uff0c\u5e76\u786e\u4fdd\u6d88\u606f\u6bd4\u8f83\u6709\u793c\u8c8c\u3002 \n", "\n", "\u8fd9\u91cc\uff0c\u6211\u4eec\u53ef\u4ee5\u7ee7\u7eed\u4f7f\u7528\u7b2c2\ufe0f\u20e3\u6b65\u6784\u9020\u7684langchain\u63d0\u793a\u6a21\u7248\uff0c\u6765\u83b7\u5f97\u6211\u4eec\u56de\u590d\u6d88\u606f\u63d0\u793a\u3002"]}, {"cell_type": "code", "execution_count": 31, "metadata": {"tags": []}, "outputs": [], "source": ["service_reply = \"\"\"Hey there customer, \\\n", "the warranty does not cover \\\n", "cleaning expenses for your kitchen \\\n", "because it's your fault that \\\n", "you misused your blender \\\n", "by forgetting to put the lid on before \\\n", "starting the blender. \\\n", "Tough luck! 
See ya!\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": ["service_style_pirate = \"\"\"\\\n", "a polite tone \\\n", "that speaks in English Pirate\\\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 33, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cda polite tone that speaks in English Pirate\u98ce\u683c\u3002\u6587\u672c: ```Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya!\n", "```\n", "\n"]}], "source": ["service_messages = prompt_template.format_messages(\n", " style=service_style_pirate,\n", " text=service_reply)\n", "\n", "print(service_messages[0].content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 6\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u8f6c\u6362\u56de\u590d\u6d88\u606f\u98ce\u683c\n", "\n", "\u8c03\u7528[\u6a21\u578b](#model)\u90e8\u5206\u5b9a\u4e49\u7684chat\u6a21\u578b\u6765\u8f6c\u6362\u56de\u590d\u6d88\u606f\u98ce\u683c"]}, {"cell_type": "code", "execution_count": 34, "metadata": {"tags": []}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Ahoy there, matey! I regret to inform ye, valued customer, that the warranty be not coverin' the expenses o' cleanin' yer galley due to yer own negligence. 'Tis yer own fault, ye see, fer ye be misusin' yer blender by forgettin' to secure the lid afore settin' it in motion. Aye, 'tis a tough break, indeed! 
Fare thee well, me heartie!\n"]}], "source": ["service_response = chat(service_messages)\n", "print(service_response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.2.2 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u628a\u7531\u4e09\u4e2a\u53cd\u5f15\u53f7\u5206\u9694\u7684\u6587\u672c\u7ffb\u8bd1\u6210\u4e00\u79cd\u4e00\u4e2a\u6709\u793c\u8c8c\u7684\u8bed\u6c14 \u4f7f\u7528\u6b63\u5f0f\u7684\u666e\u901a\u8bdd \u98ce\u683c\u3002\u6587\u672c: ```\u563f\uff0c\u987e\u5ba2\uff0c \u4fdd\u4fee\u4e0d\u5305\u62ec\u53a8\u623f\u7684\u6e05\u6d01\u8d39\u7528\uff0c \u56e0\u4e3a\u60a8\u5728\u542f\u52a8\u6405\u62cc\u673a\u4e4b\u524d \u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u8bef\u7528\u6405\u62cc\u673a, \u8fd9\u662f\u60a8\u7684\u9519\u3002 \u5012\u9709\uff01 \u518d\u89c1\uff01\n", "```\n", "\n"]}], "source": ["service_reply = \"\"\"\u563f\uff0c\u987e\u5ba2\uff0c \\\n", "\u4fdd\u4fee\u4e0d\u5305\u62ec\u53a8\u623f\u7684\u6e05\u6d01\u8d39\u7528\uff0c \\\n", "\u56e0\u4e3a\u60a8\u5728\u542f\u52a8\u6405\u62cc\u673a\u4e4b\u524d \\\n", "\u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u8bef\u7528\u6405\u62cc\u673a, \\\n", "\u8fd9\u662f\u60a8\u7684\u9519\u3002 \\\n", "\u5012\u9709\uff01 \u518d\u89c1\uff01\n", "\"\"\"\n", "\n", "service_style_pirate = \"\"\"\\\n", "\u4e00\u4e2a\u6709\u793c\u8c8c\u7684\u8bed\u6c14 \\\n", "\u4f7f\u7528\u6b63\u5f0f\u7684\u666e\u901a\u8bdd \\\n", "\"\"\"\n", "service_messages = prompt_template.format_messages(\n", " style=service_style_pirate,\n", " text=service_reply)\n", "\n", "print(service_messages[0].content)"]}, {"cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": 
["\u5c0a\u656c\u7684\u987e\u5ba2\uff0c\u5f88\u62b1\u6b49\u544a\u77e5\u60a8\uff0c\u4fdd\u4fee\u670d\u52a1\u4e0d\u5305\u542b\u53a8\u623f\u6e05\u6d01\u8d39\u7528\u3002\u8fd9\u662f\u56e0\u4e3a\u5728\u60a8\u4f7f\u7528\u6405\u62cc\u673a\u4e4b\u524d\uff0c\u4e0d\u614e\u5fd8\u8bb0\u76d6\u4e0a\u76d6\u5b50\u800c\u5bfc\u81f4\u8bef\u7528\u6405\u62cc\u673a\uff0c\u8fd9\u5c5e\u4e8e\u60a8\u7684\u758f\u5ffd\u3002\u975e\u5e38\u9057\u61be\uff01\u795d\u60a8\u4e00\u5207\u987a\u5229\uff01\u518d\u89c1\uff01\n"]}], "source": ["service_response = chat(service_messages)\n", "print(service_response.content)"]}, {"cell_type": "markdown", "metadata": {"tags": []}, "source": ["#### 3.2.3 \u4e3a\u4ec0\u4e48\u9700\u8981\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5728\u5e94\u7528\u4e8e\u6bd4\u8f83\u590d\u6742\u7684\u573a\u666f\u65f6\uff0c\u63d0\u793a\u53ef\u80fd\u4f1a\u975e\u5e38\u957f\u5e76\u4e14\u6d89\u53ca\u8bb8\u591a\u7ec6\u8282\u3002**\u4f7f\u7528\u63d0\u793a\u6a21\u7248\uff0c\u53ef\u4ee5\u8ba9\u6211\u4eec\u66f4\u4e3a\u65b9\u4fbf\u5730\u91cd\u590d\u4f7f\u7528\u8bbe\u8ba1\u597d\u7684\u63d0\u793a**\u3002"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u4e0b\u9762\u7ed9\u51fa\u4e86\u4e00\u4e2a\u6bd4\u8f83\u957f\u7684\u63d0\u793a\u6a21\u7248\u6848\u4f8b\u3002\u5b66\u751f\u4eec\u7ebf\u4e0a\u5b66\u4e60\u5e76\u63d0\u4ea4\u4f5c\u4e1a\uff0c\u901a\u8fc7\u4ee5\u4e0b\u63d0\u793a\u6765\u5b9e\u73b0\u5bf9\u5b66\u751f\u63d0\u4ea4\u7684\u4f5c\u4e1a\u7684\u8bc4\u5206\u3002"]}, {"cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": ["# \u82f1\u6587\u7248\n", "prompt = \"\"\" Your task is to determine if the student's solution is correct or not\n", "\n", " To solve the problem do the following:\n", " - First, work out your own solution to the problem\n", " - Then compare your solution to the student's solution \n", " and evaluate if the student's solution is correct or not.\n", " ...\n", " Use the following 
format:\n", " Question:\n", " ```\n", " question here\n", " ```\n", " Student's solution:\n", " ```\n", " student's solution here\n", " ```\n", " Actual solution:\n", " ```\n", " ...\n", " steps to work out the solution and your solution here\n", " ```\n", " Is the student's solution the same as the actual solution \\\n", " just calculated:\n", " ```\n", " yes or no\n", " ```\n", " Student grade:\n", " ```\n", " correct or incorrect\n", " ```\n", " \n", " Question:\n", " ```\n", " {question}\n", " ```\n", " Student's solution:\n", " ```\n", " {student's solution}\n", " ```\n", " Actual solution:\n", " \n", " \"\"\""]}, {"cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": ["# \u4e2d\u6587\u7248\n", "prompt = \"\"\" \u4f60\u7684\u4efb\u52a1\u662f\u5224\u65ad\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u662f\u6b63\u786e\u7684\u8fd8\u662f\u4e0d\u6b63\u786e\u7684\n", "\n", "\u8981\u89e3\u51b3\u8be5\u95ee\u9898\uff0c\u8bf7\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\uff1a\n", " - \u9996\u5148\uff0c\u5236\u5b9a\u81ea\u5df1\u7684\u95ee\u9898\u89e3\u51b3\u65b9\u6848\n", " - \u7136\u540e\u5c06\u60a8\u7684\u89e3\u51b3\u65b9\u6848\u4e0e\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u8fdb\u884c\u6bd4\u8f83\n", " \u5e76\u8bc4\u4f30\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u662f\u5426\u6b63\u786e\u3002\n", "...\n", "\u4f7f\u7528\u4e0b\u9762\u7684\u683c\u5f0f:\n", "\n", "\u95ee\u9898:\n", "```\n", "\u95ee\u9898\u6587\u672c\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848:\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u6587\u672c\n", "```\n", "\u5b9e\u9645\u89e3\u51b3\u65b9\u6848:\n", "```\n", "...\n", "\u5728\u6b64\u5904\u5199\u51fa\u5236\u5b9a\u89e3\u51b3\u65b9\u6848\u7684\u6b65\u9aa4\u4ee5\u53ca\u60a8\u7684\u89e3\u51b3\u65b9\u6848\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848\u548c\u521a\u521a\u8ba1\u7b97\u51fa\u7684\u5b9e\u9645 \\\n", "\u89e3\u51b3\u65b9\u6848\u662f\u5426\u76f8\u540c\uff1a\n", "```\n", "\u662f\u6216\u8005\u4e0d\u662f\n", "```\n", 
"\u5b66\u751f\u7684\u6210\u7ee9:\n", "```\n", "\u6b63\u786e\u6216\u8005\u4e0d\u6b63\u786e\n", "```\n", "\n", "\u95ee\u9898:\n", "```\n", "{question}\n", "```\n", "\u5b66\u751f\u7684\u89e3\u51b3\u65b9\u6848:\n", "```\n", "{student's solution}\n", "```\n", "\u5b9e\u9645\u89e3\u51b3\u65b9\u6848:\n", "\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {"jp-MarkdownHeadingCollapsed": true, "tags": []}, "source": ["\u6b64\u5916\uff0cLangChain\u8fd8\u63d0\u4f9b\u4e86\u63d0\u793a\u6a21\u7248\u7528\u4e8e\u4e00\u4e9b\u5e38\u7528\u573a\u666f\uff0c\u6bd4\u5982\u81ea\u52a8\u6458\u8981\u3001\u95ee\u7b54\u3001\u8fde\u63a5\u5230SQL\u6570\u636e\u5e93\u3001\u8fde\u63a5\u5230\u4e0d\u540c\u7684API\u3002\u901a\u8fc7\u4f7f\u7528LangChain\u5185\u7f6e\u7684\u63d0\u793a\u6a21\u7248\uff0c\u4f60\u53ef\u4ee5\u5feb\u901f\u5efa\u7acb\u81ea\u5df1\u7684\u5927\u6a21\u578b\u5e94\u7528\uff0c\u800c\u4e0d\u9700\u8981\u82b1\u65f6\u95f4\u53bb\u8bbe\u8ba1\u548c\u6784\u9020\u63d0\u793a\u3002\n", "\n", "\u6700\u540e\uff0c\u6211\u4eec\u5728\u5efa\u7acb\u5927\u6a21\u578b\u5e94\u7528\u65f6\uff0c\u901a\u5e38\u5e0c\u671b\u6a21\u578b\u7684\u8f93\u51fa\u4e3a\u7ed9\u5b9a\u7684\u683c\u5f0f\uff0c\u6bd4\u5982\u5728\u8f93\u51fa\u4e2d\u4f7f\u7528\u7279\u5b9a\u7684\u5173\u952e\u8bcd\u6765\u8ba9\u8f93\u51fa\u7ed3\u6784\u5316\u3002 \u4e0b\u9762\u4e3a\u4e00\u4e2a\u4f7f\u7528\u5927\u6a21\u578b\u8fdb\u884c\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u7684\u4f8b\u5b50\uff0c\u5bf9\u4e8e\u95ee\u9898\uff1a*What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?*\n", "\n", "\u901a\u8fc7\u4f7f\u7528LangChain\u5e93\u51fd\u6570\uff0c\u8f93\u51fa\u91c7\u7528\"Thought\"\uff08\u601d\u8003\uff09\u3001\"Action\"\uff08\u884c\u52a8\uff09\u3001\"Observation\"\uff08\u89c2\u5bdf\uff09\u4f5c\u4e3a\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u7684\u5173\u952e\u8bcd\uff0c\u8ba9\u8f93\u51fa\u7ed3\u6784\u5316\u3002\n", "\n", "```\n", "Thought: I need to search Colorado orogeny, find the area that the eastern sector of the 
Colorado orogeny extends into, then find the elevation range of the area.\n", "Action: Search[Colorado orogeny]\n", "Observation: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\n", "\n", "Thought: It does not mention the eastern sector. So I need to look up eastern sector.\n", "Action: Lookup[eastern sector]\n", "Observation: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.\n", "\n", "Thought: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\n", "Action: Search[High Plains]\n", "Observation: High Plains refers to one of two distinct land regions\n", "\n", "Thought: I need to instead search High Plains (United States).\n", "Action: Search[High Plains (United States)]\n", "Observation: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]\n", "\n", "Thought: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\n", "Action: Finish[1,800 to 7,000 ft]\n", "```"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\u5728\u8865\u5145\u6750\u6599\u4e2d\uff0c\u53ef\u4ee5\u67e5\u770b\u4f7f\u7528LangChain\u548cOpenAI\u8fdb\u884c\u94fe\u5f0f\u601d\u8003\u63a8\u7406\u7684\u53e6\u4e00\u4e2a\u4ee3\u7801\u5b9e\u4f8b\u3002"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 3.3 \u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.1 \u5982\u679c\u6ca1\u6709\u8f93\u51fa\u89e3\u6790\u5668\n", "\n", "\u5bf9\u4e8e\u7ed9\u5b9a\u7684\u8bc4\u4ef7`customer_review`, \u6211\u4eec\u5e0c\u671b\u63d0\u53d6\u4fe1\u606f\uff0c\u5e76\u6309\u4ee5\u4e0b\u683c\u5f0f\u8f93\u51fa\uff1a\n", "\n", "```python\n", "{\n", " \"gift\": False,\n", " \"delivery_days\": 5,\n", " \"price_value\": \"pretty 
affordable!\"\n", "}\n", "```"]}, {"cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": ["customer_review = \"\"\"\\\n", "This leaf blower is pretty amazing. It has four settings:\\\n", "candle blower, gentle breeze, windy city, and tornado. \\\n", "It arrived in two days, just in time for my wife's \\\n", "anniversary present. \\\n", "I think my wife liked it so much she was speechless. \\\n", "So far I've been the only one using it, and I've been \\\n", "using it every other morning to clear the leaves on our lawn. \\\n", "It's slightly more expensive than the other leaf blowers \\\n", "out there, but I think it's worth it for the extra features.\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32"]}, {"cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": ["review_template = \"\"\"\\\n", "For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? \\\n", "Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the product \\\n", "to arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,\\\n", "and output them as a comma separated Python list.\n", "\n", "Format the output as JSON with the following keys:\n", "gift\n", "delivery_days\n", "price_value\n", "\n", "text: {text}\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020langchain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['text'] output_parser=None partial_variables={} messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template='For the following text, extract the following information:\\n\\ngift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\\n\\ndelivery_days: How many days did it take for the product to arrive? 
If this information is not found, output -1.\\n\\nprice_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\\n\\nFormat the output as JSON with the following keys:\\ngift\\ndelivery_days\\nprice_value\\n\\ntext: {text}\\n', template_format='f-string', validate_template=True), additional_kwargs={})]\n"]}], "source": ["from langchain.prompts import ChatPromptTemplate\n", "prompt_template = ChatPromptTemplate.from_template(review_template)\n", "print(prompt_template)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u63d0\u793a\u6d88\u606f"]}, {"cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": ["messages = prompt_template.format_messages(text=customer_review)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u63d0\u53d6\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{\n", " \"gift\": false,\n", " \"delivery_days\": 2,\n", " \"price_value\": [\"It's slightly more expensive than the other leaf blowers out there, but I think it's worth it for the extra features.\"]\n", "}\n"]}], "source": ["chat = ChatOpenAI(temperature=0.0)\n", "response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udcdd \u5206\u6790\u4e0e\u603b\u7ed3\n", "`response.content`\u7c7b\u578b\u4e3a\u5b57\u7b26\u4e32\uff08`str`\uff09\uff0c\u800c\u5e76\u975e\u5b57\u5178(`dict`), \u76f4\u63a5\u4f7f\u7528`get`\u65b9\u6cd5\u4f1a\u62a5\u9519\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u9700\u8981\u8f93\u51fa\u89e3\u91ca\u5668\u3002"]}, {"cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [{"data": {"text/plain": ["str"]}, "execution_count": 44, "metadata": {}, "output_type": "execute_result"}], "source": ["type(response.content)"]}, 
{"cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [{"ename": "AttributeError", "evalue": "'str' object has no attribute 'get'", "output_type": "error", "traceback": ["\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", "Input \u001b[0;32mIn [45]\u001b[0m, in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mgift\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", "\u001b[0;31mAttributeError\u001b[0m: 'str' object has no attribute 'get'"]}], "source": ["response.content.get('gift')"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.2 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["input_variables=['text'] output_parser=None partial_variables={} messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template='\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\\n\\n\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\\n\\n\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\\n\\n\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python 
\u5217\u8868\u3002\\n\\n\u4f7f\u7528\u4ee5\u4e0b\u952e\u5c06\u8f93\u51fa\u683c\u5f0f\u5316\u4e3a JSON\uff1a\\n\u793c\u7269\\n\u4ea4\u8d27\u5929\u6570\\n\u4ef7\u94b1\\n\\n\u6587\u672c: {text}\\n', template_format='f-string', validate_template=True), additional_kwargs={})]\n"]}], "source": ["from langchain.prompts import ChatPromptTemplate\n", "\n", "customer_review = \"\"\"\\\n", "\u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\\\n", "\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \\\n", "\u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\\\n", "\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \\\n", "\u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \\\n", "\u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\\\n", "\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \\\n", "\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\\\n", "\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\"\"\"\n", "\n", "review_template = \"\"\"\\\n", "\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\n", "\n", "\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \\\n", "\u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\n", "\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\\\n", "\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\n", 
"\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\\\n", "\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\u3002\n", "\n", "\u4f7f\u7528\u4ee5\u4e0b\u952e\u5c06\u8f93\u51fa\u683c\u5f0f\u5316\u4e3a JSON\uff1a\n", "\u793c\u7269\n", "\u4ea4\u8d27\u5929\u6570\n", "\u4ef7\u94b1\n", "\n", "\u6587\u672c: {text}\n", "\"\"\"\n", "\n", "prompt_template = ChatPromptTemplate.from_template(review_template)\n", "print(prompt_template)\n"]}, {"cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{\n", " \"\u793c\u7269\": \"\u662f\u7684\",\n", " \"\u4ea4\u8d27\u5929\u6570\": 2,\n", " \"\u4ef7\u94b1\": [\"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"]\n", "}\n"]}], "source": ["messages = prompt_template.format_messages(text=customer_review)\n", "\n", "chat = ChatOpenAI(temperature=0.0)\n", "response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.3 LangChain\u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 1\ufe0f\u20e3 \u6784\u9020\u63d0\u793a\u6a21\u7248\u5b57\u7b26\u4e32"]}, {"cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": ["review_template_2 = \"\"\"\\\n", "For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? \\\n", "Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the product\\\n", "to arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,\\\n", "and output them as a comma separated Python list.\n", "\n", "text: {text}\n", "\n", "{format_instructions}\n", "\"\"\""]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 2\ufe0f\u20e3 \u6784\u9020langchain\u63d0\u793a\u6a21\u7248"]}, {"cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": ["prompt = ChatPromptTemplate.from_template(template=review_template_2)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udd25 \u6784\u9020\u8f93\u51fa\u89e3\u6790\u5668"]}, {"cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"gift\": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\t\"delivery_days\": string // How many days did it take for the product to arrive? If this information is not found, output -1.\n", "\t\"price_value\": string // Extract any sentences about the value or price, and output them as a comma separated Python list.\n", "}\n", "```\n"]}], "source": ["from langchain.output_parsers import ResponseSchema\n", "from langchain.output_parsers import StructuredOutputParser\n", "\n", "gift_schema = ResponseSchema(name=\"gift\",\n", " description=\"Was the item purchased\\\n", " as a gift for someone else? \\\n", " Answer True if yes,\\\n", " False if not or unknown.\")\n", "\n", "delivery_days_schema = ResponseSchema(name=\"delivery_days\",\n", " description=\"How many days\\\n", " did it take for the product\\\n", " to arrive? 
If this \\\n", " information is not found,\\\n", " output -1.\")\n", "\n", "price_value_schema = ResponseSchema(name=\"price_value\",\n", " description=\"Extract any\\\n", " sentences about the value or \\\n", " price, and output them as a \\\n", " comma separated Python list.\")\n", "\n", "\n", "response_schemas = [gift_schema, \n", " delivery_days_schema,\n", " price_value_schema]\n", "output_parser = StructuredOutputParser.from_response_schemas(response_schemas)\n", "format_instructions = output_parser.get_format_instructions()\n", "print(format_instructions)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 3\ufe0f\u20e3 \u4f7f\u7528\u6a21\u7248\u5f97\u5230\u63d0\u793a\u6d88\u606f"]}, {"cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": ["messages = prompt.format_messages(text=customer_review, format_instructions=format_instructions)"]}, {"cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the productto arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\n", "\n", "text: \u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\n", "\n", "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"gift\": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\t\"delivery_days\": string // How many days did it take for the product to arrive? 
If this information is not found, output -1.\n", "\t\"price_value\": string // Extract any sentences about the value or price, and output them as a comma separated Python list.\n", "}\n", "```\n", "\n"]}], "source": ["print(messages[0].content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 4\ufe0f\u20e3 \u8c03\u7528chat\u6a21\u578b\u63d0\u53d6\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["```json\n", "{\n", "\t\"gift\": false,\n", "\t\"delivery_days\": \"2\",\n", "\t\"price_value\": \"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"\n", "}\n", "```\n"]}], "source": ["response = chat(messages)\n", "print(response.content)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### 5\ufe0f\u20e3 \u4f7f\u7528\u8f93\u51fa\u89e3\u6790\u5668\u89e3\u6790\u8f93\u51fa"]}, {"cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'gift': False, 'delivery_days': '2', 'price_value': '\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9'}"]}, "execution_count": 54, "metadata": {}, "output_type": "execute_result"}], "source": ["output_dict = output_parser.parse(response.content)\n", "output_dict"]}, {"cell_type": "markdown", "metadata": {}, "source": ["##### \ud83d\udcdd \u5206\u6790\u4e0e\u603b\u7ed3\n", "`output_dict`\u7c7b\u578b\u4e3a\u5b57\u5178(`dict`), \u53ef\u76f4\u63a5\u4f7f\u7528`get`\u65b9\u6cd5\u3002\u8fd9\u6837\u7684\u8f93\u51fa\u66f4\u65b9\u4fbf\u4e0b\u6e38\u4efb\u52a1\u7684\u5904\u7406\u3002"]}, {"cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [{"data": {"text/plain": ["dict"]}, "execution_count": 55, "metadata": {}, "output_type": "execute_result"}], "source": ["type(output_dict)"]}, {"cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [{"data": {"text/plain": ["'2'"]}, "execution_count": 56, "metadata": {}, 
"output_type": "execute_result"}], "source": ["output_dict.get('delivery_days')"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### 3.3.4 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"\u793c\u7269\": string // \u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\t\"\u4ea4\u8d27\u5929\u6570\": string // \u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\t\"\u4ef7\u94b1\": string // \u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\n", "}\n", "```\n"]}], "source": ["# \u4e2d\u6587\n", "review_template_2 = \"\"\"\\\n", "\u5bf9\u4e8e\u4ee5\u4e0b\u6587\u672c\uff0c\u8bf7\u4ece\u4e2d\u63d0\u53d6\u4ee5\u4e0b\u4fe1\u606f\uff1a\n", "\n", "\u793c\u7269\uff1a\u8be5\u5546\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f\n", "\u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff1b\u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\n", "\u4ea4\u8d27\u5929\u6570\uff1a\u4ea7\u54c1\u5230\u8fbe\u9700\u8981\u591a\u5c11\u5929\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\n", 
"\u4ef7\u94b1\uff1a\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\u3002\n", "\n", "\u6587\u672c: {text}\n", "\n", "{format_instructions}\n", "\"\"\"\n", "\n", "from langchain.output_parsers import ResponseSchema\n", "from langchain.output_parsers import StructuredOutputParser\n", "\n", "gift_schema = ResponseSchema(name=\"\u793c\u7269\",\n", " description=\"\u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f\\\n", " \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c\\\n", " \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\")\n", "\n", "delivery_days_schema = ResponseSchema(name=\"\u4ea4\u8d27\u5929\u6570\",\n", " description=\"\u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f\\\n", " \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\")\n", "\n", "price_value_schema = ResponseSchema(name=\"\u4ef7\u94b1\",\n", " description=\"\u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c\\\n", " \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\")\n", "\n", "\n", "response_schemas = [gift_schema, \n", " delivery_days_schema,\n", " price_value_schema]\n", "output_parser = StructuredOutputParser.from_response_schemas(response_schemas)\n", "format_instructions = output_parser.get_format_instructions()\n", "print(format_instructions)"]}, {"cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["For the following text, extract the following information:\n", "\n", "gift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n", "\n", "delivery_days: How many days did it take for the productto arrive? 
If this information is not found, output -1.\n", "\n", "price_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\n", "\n", "text: \u8fd9\u6b3e\u5439\u53f6\u673a\u975e\u5e38\u795e\u5947\u3002 \u5b83\u6709\u56db\u4e2a\u8bbe\u7f6e\uff1a\u5439\u8721\u70db\u3001\u5fae\u98ce\u3001\u98ce\u57ce\u3001\u9f99\u5377\u98ce\u3002 \u4e24\u5929\u540e\u5c31\u5230\u4e86\uff0c\u6b63\u597d\u8d76\u4e0a\u6211\u59bb\u5b50\u7684\u5468\u5e74\u7eaa\u5ff5\u793c\u7269\u3002 \u6211\u60f3\u6211\u7684\u59bb\u5b50\u4f1a\u559c\u6b22\u5b83\u5230\u8bf4\u4e0d\u51fa\u8bdd\u6765\u3002 \u5230\u76ee\u524d\u4e3a\u6b62\uff0c\u6211\u662f\u552f\u4e00\u4e00\u4e2a\u4f7f\u7528\u5b83\u7684\u4eba\uff0c\u800c\u4e14\u6211\u4e00\u76f4\u6bcf\u9694\u4e00\u5929\u65e9\u4e0a\u7528\u5b83\u6765\u6e05\u7406\u8349\u576a\u4e0a\u7684\u53f6\u5b50\u3002 \u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\uff0c\u4f46\u6211\u8ba4\u4e3a\u5b83\u7684\u989d\u5916\u529f\u80fd\u662f\u503c\u5f97\u7684\u3002\n", "\n", "\n", "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", "\n", "```json\n", "{\n", "\t\"\u793c\u7269\": string // \u8fd9\u4ef6\u7269\u54c1\u662f\u4f5c\u4e3a\u793c\u7269\u9001\u7ed9\u522b\u4eba\u7684\u5417\uff1f \u5982\u679c\u662f\uff0c\u5219\u56de\u7b54 \u662f\u7684\uff0c \u5982\u679c\u5426\u6216\u672a\u77e5\uff0c\u5219\u56de\u7b54 \u4e0d\u662f\u3002\n", "\t\"\u4ea4\u8d27\u5929\u6570\": string // \u4ea7\u54c1\u9700\u8981\u591a\u5c11\u5929\u624d\u80fd\u5230\u8fbe\uff1f \u5982\u679c\u6ca1\u6709\u627e\u5230\u8be5\u4fe1\u606f\uff0c\u5219\u8f93\u51fa-1\u3002\n", "\t\"\u4ef7\u94b1\": string // \u63d0\u53d6\u6709\u5173\u4ef7\u503c\u6216\u4ef7\u683c\u7684\u4efb\u4f55\u53e5\u5b50\uff0c \u5e76\u5c06\u5b83\u4eec\u8f93\u51fa\u4e3a\u9017\u53f7\u5206\u9694\u7684 Python \u5217\u8868\n", "}\n", "```\n", "\n"]}], "source": ["messages = 
prompt.format_messages(text=customer_review, format_instructions=format_instructions)\n", "print(messages[0].content)"]}, {"cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["```json\n", "{\n", "\t\"\u793c\u7269\": false,\n", "\t\"\u4ea4\u8d27\u5929\u6570\": \"\u4e24\u5929\u540e\",\n", "\t\"\u4ef7\u94b1\": \"\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9\"\n", "}\n", "```\n"]}], "source": ["response = chat(messages)\n", "print(response.content)\n"]}, {"cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'\u793c\u7269': False, '\u4ea4\u8d27\u5929\u6570': '\u4e24\u5929\u540e', '\u4ef7\u94b1': '\u5b83\u6bd4\u5176\u4ed6\u5439\u53f6\u673a\u7a0d\u5fae\u8d35\u4e00\u70b9'}"]}, "execution_count": 60, "metadata": {}, "output_type": "execute_result"}], "source": ["output_dict = output_parser.parse(response.content)\n", "output_dict"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## \u56db\u3001\u8865\u5145\u6750\u6599"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### 4.1 \u94fe\u5f0f\u601d\u8003\u63a8\u7406(ReAct)\n", "\u53c2\u8003\u8d44\u6599\uff1a[ReAct (Reason+Act) prompting in OpenAI GPT and LangChain](https://tsmatz.wordpress.com/2023/03/07/react-with-openai-gpt-and-langchain/)"]}, {"cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": ["!pip install -q wikipedia"]}, {"cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mThought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. 
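The `output_parser.parse` call above turns the model's fenced ` ```json ` reply into a Python dict. A minimal stdlib sketch of that extraction step (the reply string and keys below are illustrative, and LangChain's real `StructuredOutputParser` additionally validates against the response schemas):

```python
import json
import re


def parse_fenced_json(reply: str) -> dict:
    """Extract the ```json ... ``` block from a model reply and load it as a dict."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", reply, re.DOTALL)
    if match is None:
        raise ValueError("no fenced JSON block found in the model reply")
    return json.loads(match.group(1))


# A reply shaped like the notebook's output above (values are illustrative)
reply = """```json
{
    "gift": false,
    "delivery_days": "2",
    "price_value": "It costs a bit more than other leaf blowers"
}
```"""

output_dict = parse_fenced_json(reply)
print(output_dict["delivery_days"])  # → 2
```

Note how JSON `false` comes back as Python `False`, matching the `{'礼物': False, ...}` result shown above.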
Then I need to find the president under whom the admiral served as the ambassador to the United Kingdom.\n", "Action: Search[David Chanoff]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, \u0110o\u00e0n V\u0103n To\u1ea1i, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mDavid Chanoff has collaborated with several individuals, including Augustus A. White, Joycelyn Elders, \u0110o\u00e0n V\u0103n To\u1ea1i, William J. Crowe, Ariel Sharon, Kenneth Good, and Felix Zandman. I need to search each of these individuals to find the U.S. Navy admiral. \n", "Action: Search[Augustus A. White]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mAugustus A. White III (born June 4, 1936) is the Ellen and Melvin Gordon Distinguished Professor of Medical Education and Professor of Orthopedic Surgery at Harvard Medical School and a former Orthopaedic Surgeon-in-Chief at Beth Israel Hospital, Boston, Massachusetts. White was the first African American medical student at Stanford, surgical resident at Yale University, professor of medicine at Yale, and department head at a Harvard-affiliated hospital (Beth Israel Hospital).\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mAugustus A. White III is not a U.S. Navy admiral. 
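Each step of the trace emits an `Action: Tool[input]` line that the agent parses and routes to the matching tool, looping until a `Finish[...]` action. A minimal stdlib sketch of that parse-and-dispatch step (the tools here are toy stand-ins for the notebook's `Search`/`Lookup` docstore tools):

```python
import re

ACTION_RE = re.compile(r"Action: (\w+)\[(.+?)\]")


def dispatch(action_line: str, tools: dict) -> str:
    """Parse an 'Action: Tool[input]' line and call the matching tool."""
    m = ACTION_RE.match(action_line)
    if m is None:
        raise ValueError(f"not an action line: {action_line!r}")
    name, arg = m.groups()
    if name == "Finish":  # terminal action: the argument is the final answer
        return arg
    return tools[name](arg)


# Toy stand-in for the Wikipedia-backed Search tool
tools = {"Search": lambda q: f"(wiki summary for {q})"}

print(dispatch("Action: Search[David Chanoff]", tools))
print(dispatch("Action: Finish[Bill Clinton]", tools))  # → Bill Clinton
```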
I need to search the next individual, Joycelyn Elders.\n", "Action: Search[Joycelyn Elders]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mMinnie Joycelyn Elders (born Minnie Lee Jones; August 13, 1933) is an American pediatrician and public health administrator who served as Surgeon General of the United States from 1993 to 1994. A vice admiral in the Public Health Service Commissioned Corps, she was the second woman, second person of color, and first African American to serve as Surgeon General. \n", "Elders is best known for her frank discussion of her views on controversial issues such as drug legalization, masturbation, and distributing contraception in schools. She was forced to resign in December 1994 amidst controversy as a result of her views. She is currently a professor emerita of pediatrics at the University of Arkansas for Medical Sciences.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mJoycelyn Elders is a pediatrician and public health administrator, not a U.S. Navy admiral. I need to search the next individual, \u0110o\u00e0n V\u0103n To\u1ea1i.\n", "Action: Search[\u0110o\u00e0n V\u0103n To\u1ea1i]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3m\u0110o\u00e0n V\u0103n To\u1ea1i (1945 in Vietnam \u2013 November 2017 in California) was a Vietnamese-born naturalized American activist and the author of The Vietnamese Gulag (Simon & Schuster, 1986).\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3m\u0110o\u00e0n V\u0103n To\u1ea1i is an activist and author, not a U.S. Navy admiral. I need to search the next individual, William J. Crowe.\n", "Action: Search[William J. Crowe]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 \u2013 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. 
Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mWilliam J. Crowe is a U.S. Navy admiral and diplomat. I need to find the president under whom he served as the ambassador to the United Kingdom.\n", "Action: Search[William J. Crowe ambassador to United Kingdom]\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 \u2013 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n", "Thought:\u001b[32;1m\u001b[1;3mWilliam J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton. So the answer is Bill Clinton.\n", "Action: Finish[Bill Clinton]\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'Bill Clinton'"]}, "execution_count": 69, "metadata": {}, "output_type": "execute_result"}], "source": ["from langchain.docstore.wikipedia import Wikipedia\n", "from langchain.llms import OpenAI\n", "from langchain.agents import initialize_agent, Tool, AgentExecutor\n", "from langchain.agents.react.base import DocstoreExplorer\n", "\n", "docstore=DocstoreExplorer(Wikipedia())\n", "tools = [\n", " Tool(\n", " name=\"Search\",\n", " func=docstore.search,\n", " description=\"Search for a term in the docstore.\",\n", " ),\n", " Tool(\n", " name=\"Lookup\",\n", " func=docstore.lookup,\n", " description=\"Lookup a term in the docstore.\",\n", " )\n", "]\n", "\n", "# \u4f7f\u7528\u5927\u8bed\u8a00\u6a21\u578b\n", "llm = OpenAI(\n", " model_name=\"gpt-3.5-turbo\",\n", " temperature=0,\n", ")\n", "\n", "# \u521d\u59cb\u5316ReAct\u4ee3\u7406\n", "react = initialize_agent(tools, llm, agent=\"react-docstore\", 
verbose=True)\n", "agent_executor = AgentExecutor.from_agent_and_tools(\n", " agent=react.agent,\n", " tools=tools,\n", " verbose=True,\n", ")\n", "\n", "\n", "question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n", "agent_executor.run(question)"]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12"}}, "nbformat": 4, "nbformat_minor": 4} \ No newline at end of file diff --git a/content/LangChain for LLM Application Development/3.储存 Memory.ipynb b/content/LangChain for LLM Application Development/3.储存 Memory.ipynb index 06d63e7..2d8fe64 100644 --- a/content/LangChain for LLM Application Development/3.储存 Memory.ipynb +++ b/content/LangChain for LLM Application Development/3.储存 Memory.ipynb @@ -1,1697 +1 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "a786c77c", - "metadata": { - "tags": [] - }, - "source": [ - "# 第三章 储存\n", - "\n", - " - [一ã€è®¾ç½®OpenAI API Key](#一ã€è®¾ç½®OpenAI-API-Key)\n", - " - [二ã€å¯¹è¯ç¼“存储存 ](#二ã€å¯¹è¯ç¼“存储存--)\n", - " - [2.1 开始对è¯ï¼Œç¬¬ä¸€è½®](#2.1-开始对è¯ï¼Œç¬¬ä¸€è½®)\n", - " - [2.2 第二轮对è¯](#2.2-第二轮对è¯)\n", - " - [2.3 第三轮对è¯](#2.3-第三轮对è¯)\n", - " - [2.4 .memory.buffer储存了当å‰ä¸ºæ­¢æ‰€æœ‰çš„对è¯ä¿¡æ¯](#2.4-.memory.buffer储存了当å‰ä¸ºæ­¢æ‰€æœ‰çš„对è¯ä¿¡æ¯)\n", - " - [2.5 也å¯ä»¥é€šè¿‡memory.load_memory_variables({})打å°åކ岿¶ˆæ¯](#2.5-也å¯ä»¥é€šè¿‡memory.load_memory_variables({})打å°åކ岿¶ˆæ¯)\n", - " - [2.6 添加指定的输入输出内容到记忆缓存区](#2.6-添加指定的输入输出内容到记忆缓存区)\n", - " - [三ã€å¯¹è¯ç¼“存窗å£å‚¨å­˜](#三ã€å¯¹è¯ç¼“存窗å£å‚¨å­˜)\n", - " - [3.1 å‘memory添加两轮对è¯ï¼Œå¹¶æŸ¥çœ‹è®°å¿†å˜é‡å½“å‰çš„记录](#3.1-å‘memory添加两轮对è¯ï¼Œå¹¶æŸ¥çœ‹è®°å¿†å˜é‡å½“å‰çš„记录)\n", - " - [3.2 
在看一个例å­ï¼Œå‘现和上é¢çš„结果一样,åªä¿ç•™äº†ä¸€è½®å¯¹è¯è®°å¿†](#3.2-在看一个例å­ï¼Œå‘现和上é¢çš„结果一样,åªä¿ç•™äº†ä¸€è½®å¯¹è¯è®°å¿†)\n", - " - [3.3 将对è¯ç¼“存窗å£è®°å¿†åº”用到对è¯é“¾ä¸­](#3.3-将对è¯ç¼“存窗å£è®°å¿†åº”用到对è¯é“¾ä¸­)\n", - " - [å››ã€å¯¹è¯token缓存储存](#å››ã€å¯¹è¯token缓存储存)\n", - " - [4.1 导入相关包和API密钥](#4.1-导入相关包和API密钥)\n", - " - [4.2 é™åˆ¶tokenæ•°é‡ï¼Œè¿›è¡Œæµ‹è¯•](#4.2-é™åˆ¶tokenæ•°é‡ï¼Œè¿›è¡Œæµ‹è¯•)\n", - " - [4.3 中文例å­](#4.3-中文例å­)\n", - " - [五ã€å¯¹è¯æ‘˜è¦ç¼“存储存](#五ã€å¯¹è¯æ‘˜è¦ç¼“存储存)\n", - " - [5.1 åˆ›å»ºä¸€ä¸ªé•¿å­—ç¬¦ä¸²ï¼Œå…¶ä¸­åŒ…å«æŸäººçš„æ—¥ç¨‹å®‰æŽ’](#5.1-åˆ›å»ºä¸€ä¸ªé•¿å­—ç¬¦ä¸²ï¼Œå…¶ä¸­åŒ…å«æŸäººçš„æ—¥ç¨‹å®‰æŽ’)\n", - " - [5.2 基于上é¢çš„memory,新建一个对è¯é“¾](#5.2-基于上é¢çš„memory,新建一个对è¯é“¾)\n", - " - [5.3 中文例å­](#5.3-中文例å­)\n" - ] - }, - { - "cell_type": "markdown", - "id": "7e10db6f", - "metadata": {}, - "source": [ - "当你与那些语言模型进行交互的时候,他们ä¸ä¼šè®°å¾—你之å‰å’Œä»–进行的交æµå†…容,这在我们构建一些应用程åºï¼ˆå¦‚èŠå¤©æœºå™¨äººï¼‰çš„æ—¶å€™ï¼Œæ˜¯ä¸€ä¸ªå¾ˆå¤§çš„问题 -- 显得ä¸å¤Ÿæ™ºèƒ½ï¼å› æ­¤ï¼Œåœ¨æœ¬èŠ‚ä¸­æˆ‘ä»¬å°†ä»‹ç» LangChain 中的储存模å—,å³å¦‚何将先å‰çš„对è¯åµŒå…¥åˆ°è¯­è¨€æ¨¡åž‹ä¸­çš„,使其具有连续对è¯çš„能力。\n", - "\n", - "当使用 LangChain 中的储存(Memory)æ¨¡å—æ—¶ï¼Œå®ƒå¯ä»¥å¸®åŠ©ä¿å­˜å’Œç®¡ç†åކå²èŠå¤©æ¶ˆæ¯ï¼Œä»¥åŠæž„建关于特定实体的知识。这些组件å¯ä»¥è·¨å¤šè½®å¯¹è¯å‚¨å­˜ä¿¡æ¯ï¼Œå¹¶å…è®¸åœ¨å¯¹è¯æœŸé—´è·Ÿè¸ªç‰¹å®šä¿¡æ¯å’Œä¸Šä¸‹æ–‡ã€‚\n", - "\n", - "LangChain æä¾›äº†å¤šç§å‚¨å­˜ç±»åž‹ã€‚其中,缓冲区储存å…许ä¿ç•™æœ€è¿‘çš„èŠå¤©æ¶ˆæ¯ï¼Œæ‘˜è¦å‚¨å­˜åˆ™æä¾›äº†å¯¹æ•´ä¸ªå¯¹è¯çš„æ‘˜è¦ã€‚实体储存 则å…许在多轮对è¯ä¸­ä¿ç•™æœ‰å…³ç‰¹å®šå®žä½“的信æ¯ã€‚这些记忆组件都是模å—化的,å¯ä¸Žå…¶ä»–组件组åˆä½¿ç”¨ï¼Œä»Žè€Œå¢žå¼ºæœºå™¨äººçš„对è¯ç®¡ç†èƒ½åŠ›ã€‚å‚¨å­˜æ¨¡å—å¯ä»¥é€šè¿‡ç®€å•çš„API调用æ¥è®¿é—®å’Œæ›´æ–°ï¼Œå…许开å‘人员更轻æ¾åœ°å®žçް坹è¯åކå²è®°å½•的管ç†å’Œç»´æŠ¤ã€‚\n", - "\n", - "此次课程主è¦ä»‹ç»å…¶ä¸­å››ç§å‚¨å­˜æ¨¡å—,其他模å—坿Ÿ¥çœ‹æ–‡æ¡£å­¦ä¹ ã€‚\n", - "- 对è¯ç¼“存储存 (ConversationBufferMemory)\n", - "- 对è¯ç¼“存窗å£å‚¨å­˜ (ConversationBufferWindowMemory)\n", - "- 对è¯ä»¤ç‰Œç¼“存储存 (ConversationTokenBufferMemory)\n", - "- å¯¹è¯æ‘˜è¦ç¼“存储存 (ConversationSummaryBufferMemory)\n", - "\n", - "在LangChain中,储存 
指的是大语言模型(LLM)的短期记忆。为什么是短期记忆?那是因为LLMè®­ç»ƒå¥½ä¹‹åŽ (获得了一些长期记忆)ï¼Œå®ƒçš„å‚æ•°ä¾¿ä¸ä¼šå› ä¸ºç”¨æˆ·çš„输入而å‘生改å˜ã€‚当用户与训练好的LLMè¿›è¡Œå¯¹è¯æ—¶ï¼ŒLLM会暂时记ä½ç”¨æˆ·çš„输入和它已ç»ç”Ÿæˆçš„输出,以便预测之åŽçš„输出,而模型输出完毕åŽï¼Œå®ƒä¾¿ä¼šâ€œé—忘â€ä¹‹å‰ç”¨æˆ·çš„输入和它的输出。因此,之å‰çš„这些信æ¯åªèƒ½ç§°ä½œä¸ºLLM的短期记忆。 \n", - " \n", - "为了延长LLM短期记忆的ä¿ç•™æ—¶é—´ï¼Œåˆ™éœ€è¦å€ŸåŠ©ä¸€äº›å¤–éƒ¨å‚¨å­˜æ–¹å¼æ¥è¿›è¡Œè®°å¿†ï¼Œä»¥ä¾¿åœ¨ç”¨æˆ·ä¸ŽLLM对è¯ä¸­ï¼ŒLLM能够尽å¯èƒ½çš„知é“用户与它所进行的历å²å¯¹è¯ä¿¡æ¯ã€‚ " - ] - }, - { - "cell_type": "markdown", - "id": "1ca56e6b-1e07-4405-a1ca-f4237f20fa75", - "metadata": { - "jp-MarkdownHeadingCollapsed": true, - "tags": [] - }, - "source": [ - "## 一ã€è®¾ç½®OpenAI API Key\n", - "\n", - "登陆 [OpenAI 账户](https://platform.openai.com/account/api-keys) 获å–API Key,然åŽå°†å…¶è®¾ç½®ä¸ºçŽ¯å¢ƒå˜é‡ã€‚\n", - "\n", - "- 如果你想è¦è®¾ç½®ä¸ºå…¨å±€çŽ¯å¢ƒå˜é‡ï¼Œå¯ä»¥å‚考[知乎文章](https://zhuanlan.zhihu.com/p/627665725)。\n", - "- 如果你想è¦è®¾ç½®ä¸ºæœ¬åœ°/项目环境å˜é‡ï¼Œåœ¨æœ¬æ–‡ä»¶ç›®å½•下创建`.env`文件, 打开文件输入以下内容。\n", - "\n", - "

\n", - " OPENAI_API_KEY=\"your_api_key\" \n", - "

\n", - " \n", - " 替æ¢\"your_api_key\"为你自己的 API Key" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "6932bd47-c6d5-4794-8102-a12b84412a93", - "metadata": {}, - "outputs": [], - "source": [ - "# 下载需è¦çš„包python-dotenvå’Œopenai\n", - "# å¦‚æžœä½ éœ€è¦æŸ¥çœ‹å®‰è£…过程日志,å¯åˆ é™¤ -q\n", - "!pip install -q python-dotenv\n", - "!pip install -q openai" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "10446712-9fa6-4d71-94ce-2ea4cf197e54", - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "\n", - "import openai\n", - "from dotenv import find_dotenv, load_dotenv\n", - "\n", - "# è¯»å–æœ¬åœ°/项目的环境å˜é‡ã€‚\n", - "\n", - "# find_dotenv()寻找并定ä½.env文件的路径\n", - "# load_dotenv()读å–该.env文件,并将其中的环境å˜é‡åŠ è½½åˆ°å½“å‰çš„è¿è¡ŒçŽ¯å¢ƒä¸­\n", - "# 如果你设置的是全局的环境å˜é‡ï¼Œè¿™è¡Œä»£ç åˆ™æ²¡æœ‰ä»»ä½•作用。\n", - "_ = load_dotenv(find_dotenv())\n", - "\n", - "# 获å–环境å˜é‡ OPENAI_API_KEY\n", - "openai.api_key = os.environ[\"OPENAI_API_KEY\"]" - ] - }, - { - "cell_type": "markdown", - "id": "1297dcd5", - "metadata": { - "tags": [] - }, - "source": [ - "## 二ã€å¯¹è¯ç¼“存储存\n", - " " - ] - }, - { - "cell_type": "markdown", - "id": "ffa8cd26-45c5-4bae-90b2-b07d23bf3bb2", - "metadata": {}, - "source": [ - "### 2.1 英文版" - ] - }, - { - "cell_type": "markdown", - "id": "b7e77a3d-7aaa-48c1-b219-19bd6f4eb674", - "metadata": { - "tags": [] - }, - "source": [ - "#### 2.1.1 åˆå§‹åŒ–å¯¹è¯æ¨¡åž‹" - ] - }, - { - "cell_type": "code", - "execution_count": 47, - "id": "20ad6fe2", - "metadata": { - "height": 98 - }, - "outputs": [], - "source": [ - "from langchain.chains import ConversationChain\n", - "from langchain.chat_models import ChatOpenAI\n", - "from langchain.memory import ConversationBufferMemory" - ] - }, - { - "cell_type": "code", - "execution_count": 48, - "id": "88bdf13d", - "metadata": { - "height": 133 - }, - "outputs": [], - "source": [ - "# è¿™é‡Œæˆ‘ä»¬å°†å‚æ•°temperature设置为0.0,从而å‡å°‘生æˆç­”æ¡ˆçš„éšæœºæ€§ã€‚\n", - "# å¦‚æžœä½ æƒ³è¦æ¯æ¬¡å¾—到ä¸ä¸€æ 
·çš„æœ‰æ–°æ„的答案,å¯ä»¥å°è¯•å¢žå¤§è¯¥å‚æ•°ã€‚\n", - "llm = ChatOpenAI(temperature=0.0) \n", - "\n", - "memory = ConversationBufferMemory()\n", - "\n", - "# 新建一个 ConversationChain Class 实例\n", - "# verbose傿•°è®¾ç½®ä¸ºTrue时,程åºä¼šè¾“出更详细的信æ¯ï¼Œä»¥æä¾›æ›´å¤šçš„调试或è¿è¡Œæ—¶ä¿¡æ¯ã€‚\n", - "# 相å,当将verbose傿•°è®¾ç½®ä¸ºFalse时,程åºä¼šä»¥æ›´ç®€æ´çš„æ–¹å¼è¿è¡Œï¼Œåªè¾“出关键的信æ¯ã€‚\n", - "conversation = ConversationChain(llm=llm, memory = memory, verbose=True )" - ] - }, - { - "cell_type": "markdown", - "id": "dea83837", - "metadata": { - "tags": [] - }, - "source": [ - "#### 2.1.2 第一轮对è¯" - ] - }, - { - "cell_type": "markdown", - "id": "1a3b4c42", - "metadata": {}, - "source": [ - "当我们è¿è¡Œé¢„测(predict)时,生æˆäº†ä¸€äº›æç¤ºï¼Œå¦‚下所è§ï¼Œä»–说“以下是人类和AI之间å‹å¥½çš„对è¯ï¼ŒAIå¥è°ˆâ€œç­‰ç­‰ï¼Œè¿™å®žé™…上是LangChain生æˆçš„æç¤ºï¼Œä»¥ä½¿ç³»ç»Ÿè¿›è¡Œå¸Œæœ›å’Œå‹å¥½çš„对è¯ï¼Œå¹¶ä¸”å¿…é¡»ä¿å­˜å¯¹è¯ï¼Œå¹¶æç¤ºäº†å½“å‰å·²å®Œæˆçš„æ¨¡åž‹é“¾ã€‚" - ] - }, - { - "cell_type": "code", - "execution_count": 49, - "id": "db24677d", - "metadata": { - "height": 47 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "\n", - "Human: Hi, my name is Andrew\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "\"Hello Andrew! It's nice to meet you. 
How can I assist you today?\"" - ] - }, - "execution_count": 49, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"Hi, my name is Andrew\")" - ] - }, - { - "cell_type": "markdown", - "id": "e71564ad", - "metadata": {}, - "source": [ - "#### 2.1.3 第二轮对è¯" - ] - }, - { - "cell_type": "markdown", - "id": "54d006bd", - "metadata": {}, - "source": [ - "å½“æˆ‘ä»¬è¿›è¡Œç¬¬äºŒè½®å¯¹è¯æ—¶ï¼Œå®ƒä¼šä¿ç•™ä¸Šé¢çš„æç¤º" - ] - }, - { - "cell_type": "code", - "execution_count": 50, - "id": "cc3ef937", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "Human: Hi, my name is Andrew\n", - "AI: Hello Andrew! It's nice to meet you. 
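The "Prompt after formatting" output shows how `ConversationChain` splices the stored history and the new input into its default template before calling the model. A stdlib sketch of that formatting step (template text copied from the trace above; the real chain renders it from a `PromptTemplate`):

```python
TEMPLATE = (
    "The following is a friendly conversation between a human and an AI. "
    "The AI is talkative and provides lots of specific details from its context. "
    "If the AI does not know the answer to a question, it truthfully says it "
    "does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:"
)


def format_prompt(history: str, user_input: str) -> str:
    """Fill the conversation template with the memory buffer and the new turn."""
    return TEMPLATE.format(history=history, input=user_input)


history = "Human: Hi, my name is Andrew\nAI: Hello Andrew! It's nice to meet you."
print(format_prompt(history, "What is 1+1?"))
```

Because `{history}` grows with every turn, the prompt (and the token cost) grows with the conversation, which motivates the windowed and token-limited memories introduced later.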
How can I assist you today?\n", - "Human: What is 1+1?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'1+1 is equal to 2.'" - ] - }, - "execution_count": 50, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"What is 1+1?\")" - ] - }, - { - "cell_type": "markdown", - "id": "33cb734b", - "metadata": {}, - "source": [ - "#### 2.1.4 第三轮对è¯" - ] - }, - { - "cell_type": "markdown", - "id": "0393df3d", - "metadata": {}, - "source": [ - "为了验è¯ä»–是å¦è®°å¿†äº†å‰é¢çš„对è¯å†…容,我们让他回答å‰é¢å·²ç»è¯´è¿‡çš„内容(我的å字),å¯ä»¥çœ‹åˆ°ä»–确实输出了正确的å字,因此这个对è¯é“¾éšç€å¾€ä¸‹è¿›è¡Œä¼šè¶Šæ¥è¶Šé•¿" - ] - }, - { - "cell_type": "code", - "execution_count": 51, - "id": "acf3339a", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "Human: Hi, my name is Andrew\n", - "AI: Hello Andrew! It's nice to meet you. 
How can I assist you today?\n", - "Human: What is 1+1?\n", - "AI: 1+1 is equal to 2.\n", - "Human: What is my name?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'Your name is Andrew.'" - ] - }, - "execution_count": 51, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"What is my name?\")" - ] - }, - { - "cell_type": "markdown", - "id": "5a96a8d9", - "metadata": {}, - "source": [ - "#### 2.1.5 查看储存缓存\n", - "\n", - "储存缓存(memory.buffer)\n", - "储存了当å‰ä¸ºæ­¢æ‰€æœ‰çš„对è¯ä¿¡æ¯" - ] - }, - { - "cell_type": "code", - "execution_count": 52, - "id": "2529400d", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Human: Hi, my name is Andrew\n", - "AI: Hello Andrew! It's nice to meet you. How can I assist you today?\n", - "Human: What is 1+1?\n", - "AI: 1+1 is equal to 2.\n", - "Human: What is my name?\n", - "AI: Your name is Andrew.\n" - ] - } - ], - "source": [ - "print(memory.buffer) " - ] - }, - { - "cell_type": "markdown", - "id": "0b5de846", - "metadata": {}, - "source": [ - "也å¯ä»¥é€šè¿‡memory.load_memory_variables({})打å°ç¼“å­˜ä¸­çš„åŽ†å²æ¶ˆæ¯ã€‚这里的`{}`是一个空字典,有一些更高级的功能,使用户å¯ä»¥ä½¿ç”¨æ›´å¤æ‚的输入,但我们ä¸ä¼šåœ¨è¿™ä¸ªçŸ­æœŸè¯¾ç¨‹ä¸­è®¨è®ºå®ƒä»¬ï¼Œæ‰€ä»¥ä¸è¦æ‹…心为什么这里有一个空的花括å·ã€‚" - ] - }, - { - "cell_type": "code", - "execution_count": 53, - "id": "5018cb0a", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'history': \"Human: Hi, my name is Andrew\\nAI: Hello Andrew! It's nice to meet you. 
How can I assist you today?\\nHuman: What is 1+1?\\nAI: 1+1 is equal to 2.\\nHuman: What is my name?\\nAI: Your name is Andrew.\"}\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({}))" - ] - }, - { - "cell_type": "markdown", - "id": "07d2e892", - "metadata": {}, - "source": [ - "#### 2.1.6 直接添加内容到储存缓存" - ] - }, - { - "cell_type": "code", - "execution_count": 54, - "id": "14219b70", - "metadata": { - "height": 31 - }, - "outputs": [], - "source": [ - "memory = ConversationBufferMemory() # 新建一个空的对è¯ç¼“存记忆" - ] - }, - { - "cell_type": "code", - "execution_count": 55, - "id": "a36e9905", - "metadata": { - "height": 48 - }, - "outputs": [], - "source": [ - "memory.save_context({\"input\": \"Hi\"}, {\"output\": \"What's up\"}) # å‘缓存区添加指定对è¯çš„输入输出" - ] - }, - { - "cell_type": "code", - "execution_count": 56, - "id": "61631b1f", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Human: Hi\n", - "AI: What's up\n" - ] - } - ], - "source": [ - "print(memory.buffer) # 查看缓存区结果" - ] - }, - { - "cell_type": "code", - "execution_count": 57, - "id": "a2fdf9ec", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'history': \"Human: Hi\\nAI: What's up\"}\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({}))# 冿¬¡åŠ è½½è®°å¿†å˜é‡" - ] - }, - { - "cell_type": "markdown", - "id": "2ac544f2", - "metadata": {}, - "source": [ - "继续添加新的内容,对è¯åކå²éƒ½ä¿å­˜ä¸‹æ¥åœ¨äº†ï¼" - ] - }, - { - "cell_type": "code", - "execution_count": 58, - "id": "7ca79256", - "metadata": { - "height": 64 - }, - "outputs": [], - "source": [ - "memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})" - ] - }, - { - "cell_type": "code", - "execution_count": 59, - "id": "890a4497", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': \"Human: Hi\\nAI: What's up\\nHuman: Not 
much, just hanging\\nAI: Cool\"}" - ] - }, - "execution_count": 59, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "6eddd91a-bf1c-4b82-b99c-c585420e4ecb", - "metadata": {}, - "source": [ - "### 2.2 中文版" - ] - }, - { - "cell_type": "markdown", - "id": "55b3e4e9-7a6a-4a09-9ac3-0096a67849c7", - "metadata": {}, - "source": [ - "#### 2.1.1 åˆå§‹åŒ–å¯¹è¯æ¨¡åž‹" - ] - }, - { - "cell_type": "code", - "execution_count": 30, - "id": "3577aaff-7edb-40b0-866a-e407e63d55e0", - "metadata": { - "height": 98 - }, - "outputs": [], - "source": [ - "from langchain.chains import ConversationChain\n", - "from langchain.chat_models import ChatOpenAI\n", - "from langchain.memory import ConversationBufferMemory" - ] - }, - { - "cell_type": "code", - "execution_count": 33, - "id": "a77d37ab-1f75-4ae8-8d7c-5066773ead81", - "metadata": { - "height": 133 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
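Behind `save_context`/`load_memory_variables`, the buffer memory simply accumulates `Human:`/`AI:` lines into a growing string. A stdlib sketch that mimics the outputs shown above (the real `ConversationBufferMemory` stores message objects and renders them on load, so this is illustrative only):

```python
class BufferMemory:
    """Toy stand-in for ConversationBufferMemory's string buffer."""

    def __init__(self):
        self.turns = []  # list of (human, ai) pairs

    def save_context(self, inputs: dict, outputs: dict) -> None:
        self.turns.append((inputs["input"], outputs["output"]))

    @property
    def buffer(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

    def load_memory_variables(self, _: dict) -> dict:
        return {"history": self.buffer}


memory = BufferMemory()
memory.save_context({"input": "Hi"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"}, {"output": "Cool"})
print(memory.load_memory_variables({}))
# → {'history': "Human: Hi\nAI: What's up\nHuman: Not much, just hanging\nAI: Cool"}
```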
If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "\n", - "Human: 你好, 我å«çš®çš®é²\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?'" - ] - }, - "execution_count": 33, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "llm = ChatOpenAI(temperature=0.0) \n", - "\n", - "memory = ConversationBufferMemory()\n", - "\n", - "conversation = ConversationChain(llm=llm, memory = memory, verbose=True )" - ] - }, - { - "cell_type": "markdown", - "id": "747cb539-abc4-4e47-8cb9-1ee608ab07fc", - "metadata": {}, - "source": [ - "#### 2.1.2 第一轮对è¯" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "99e48462-7a92-4842-bdaa-2a478ba2252c", - "metadata": {}, - "outputs": [], - "source": [ - "conversation.predict(input=\"你好, 我å«çš®çš®é²\")" - ] - }, - { - "cell_type": "markdown", - "id": "979d320b-6b20-4722-99db-c48a43711d6c", - "metadata": {}, - "source": [ - "#### 2.1.3 第二轮对è¯" - ] - }, - { - "cell_type": "code", - "execution_count": 34, - "id": "6ed6e97a-d7ea-4188-a6d7-f91d2a29d14a", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "Human: 你好, 我å«çš®çš®é²\n", - "AI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\n", - "Human: 1+1等于多少?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'1+1等于2。'" - ] - }, - "execution_count": 34, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"1+1等于多少?\")" - ] - }, - { - "cell_type": "markdown", - "id": "8a1fd531-216e-42d8-b226-839747ad7dd3", - "metadata": {}, - "source": [ - "#### 2.1.4 第三轮对è¯" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "b5dfe488-2758-42c7-9c20-e483b4c22ab8", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "Human: 你好, 我å«çš®çš®é²\n", - "AI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\n", - "Human: 1+1等于多少?\n", - "AI: 1+1等于2。\n", - "Human: What is my name?\n", - "AI: ä½ çš„å字是皮皮é²ã€‚\n", - "Human: 我å«ä»€ä¹ˆå字?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'ä½ å«çš®çš®é²ã€‚'" - ] - }, - "execution_count": 36, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"我å«ä»€ä¹ˆå字?\")" - ] - }, - { - "cell_type": "markdown", - "id": "05d9822e-943d-4905-a1f8-a0d28c215d60", - "metadata": {}, - "source": [ - "#### 2.1.5 查看储存缓存" - ] - }, - { - "cell_type": "code", - "execution_count": 37, - "id": "0795580f-b6b6-47e0-8882-26fe204560bd", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Human: 你好, 我å«çš®çš®é²\n", - "AI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\n", - "Human: 1+1等于多少?\n", - "AI: 1+1等于2。\n", - "Human: What is my name?\n", - "AI: ä½ çš„å字是皮皮é²ã€‚\n", - "Human: 我å«ä»€ä¹ˆå字?\n", - "AI: ä½ å«çš®çš®é²ã€‚\n" - ] - } - ], - "source": [ - "print(memory.buffer) " - ] - }, - { - "cell_type": "code", - "execution_count": 38, - "id": "dfe7824f-bd6e-4b95-92e2-1c85c62a92e9", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Human: 你好, 我å«çš®çš®é²\n", - "AI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\n", - "Human: 1+1等于多少?\n", - "AI: 1+1等于2。\n", - "Human: What is my name?\n", - "AI: ä½ çš„å字是皮皮é²ã€‚\n", - "Human: 我å«ä»€ä¹ˆå字?\n", - "AI: ä½ å«çš®çš®é²ã€‚\n" - ] - } - ], - "source": [ - "print(memory.buffer) " - ] - }, - { - "cell_type": "code", - 
"execution_count": 39, - "id": "48942759-8afb-4aed-80c5-a48952a2b0c0", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{'history': 'Human: 你好, 我å«çš®çš®é²\\nAI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\\nHuman: 1+1等于多少?\\nAI: 1+1等于2。\\nHuman: What is my name?\\nAI: ä½ çš„å字是皮皮é²ã€‚\\nHuman: 我å«ä»€ä¹ˆå字?\\nAI: ä½ å«çš®çš®é²ã€‚'}\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({}))" - ] - }, - { - "cell_type": "code", - "execution_count": 40, - "id": "4d0c4625-e928-45dc-b8da-4ab865ac5f7e", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: 你好, 我å«çš®çš®é²\\nAI: 你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\\nHuman: 1+1等于多少?\\nAI: 1+1等于2。\\nHuman: What is my name?\\nAI: ä½ çš„å字是皮皮é²ã€‚\\nHuman: 我å«ä»€ä¹ˆå字?\\nAI: ä½ å«çš®çš®é²ã€‚'}" - ] - }, - "execution_count": 40, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "80f3778e-5fbf-43ed-9df1-d57d98ec6fb0", - "metadata": {}, - "source": [ - "#### 2.1.6 直接添加内容到储存缓存" - ] - }, - { - "cell_type": "code", - "execution_count": 42, - "id": "147b2c30-1662-4b49-aaf8-c228428e5cc6", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: 你好,我å«çš®çš®é²\\nAI: 你好啊,我å«é²è¥¿è¥¿'}" - ] - }, - "execution_count": 42, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory = ConversationBufferMemory()\n", - "memory.save_context({\"input\": \"你好,我å«çš®çš®é²\"}, {\"output\": \"你好啊,我å«é²è¥¿è¥¿\"})\n", - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "code", - "execution_count": 43, - "id": "6b5e27f0-dad7-41b6-9326-bebf6299638f", - "metadata": { - "height": 64 - }, - "outputs": [], - "source": [ - 
"memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})" - ] - }, - { - "cell_type": "code", - "execution_count": 44, - "id": "cfa7c555-06ab-4906-b3dc-906f789e08f5", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: 你好,我å«çš®çš®é²\\nAI: 你好啊,我å«é²è¥¿è¥¿\\nHuman: Not much, just hanging\\nAI: Cool'}" - ] - }, - "execution_count": 44, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "code", - "execution_count": 45, - "id": "deb33de8-37ea-4180-a73e-0fc456b14eb0", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: 你好,我å«çš®çš®é²\\nAI: 你好啊,我å«é²è¥¿è¥¿\\nHuman: Not much, just hanging\\nAI: Cool\\nHuman: 很高兴和你æˆä¸ºæœ‹å‹ï¼\\nAI: 是的,让我们一起去冒险å§ï¼'}" - ] - }, - "execution_count": 45, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.save_context({\"input\": \"很高兴和你æˆä¸ºæœ‹å‹ï¼\"}, {\"output\": \"是的,让我们一起去冒险å§ï¼\"})\n", - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "10146f1a-0114-4902-8122-d19ae6f7c461", - "metadata": {}, - "source": [ - "### 2.3 总结" - ] - }, - { - "cell_type": "markdown", - "id": "2759b6bc-edb2-4cfe-b0f8-1bf6c4d796f9", - "metadata": {}, - "source": [ - "当我们在使用大型语言模型进行èŠå¤©å¯¹è¯æ—¶ï¼Œ**大型语言模型本身实际上是无状æ€çš„。语言模型本身并ä¸è®°å¾—到目å‰ä¸ºæ­¢çš„历å²å¯¹è¯**ã€‚æ¯æ¬¡è°ƒç”¨API结点都是独立的。储存(Memory)å¯ä»¥å‚¨å­˜åˆ°ç›®å‰ä¸ºæ­¢çš„æ‰€æœ‰æœ¯è¯­æˆ–对è¯ï¼Œå¹¶å°†å…¶è¾“入或附加上下文到LLM中用于生æˆè¾“出。如此看起æ¥å°±å¥½åƒå®ƒåœ¨è¿›è¡Œä¸‹ä¸€è½®å¯¹è¯çš„æ—¶å€™ï¼Œè®°å¾—之å‰è¯´è¿‡ä»€ä¹ˆã€‚\n" - ] - }, - { - "cell_type": "markdown", - "id": "cf98e9ff", - "metadata": {}, - "source": [ - "## 三ã€å¯¹è¯ç¼“存窗å£å‚¨å­˜\n", - " \n", - "éšç€å¯¹è¯å˜å¾—è¶Šæ¥è¶Šé•¿ï¼Œæ‰€éœ€çš„内存é‡ä¹Ÿå˜å¾—éžå¸¸é•¿ã€‚将大é‡çš„tokenså‘é€åˆ°LLMçš„æˆæœ¬ï¼Œä¹Ÿä¼šå˜å¾—更加昂贵,这也就是为什么API的调用费用,通常是基于它需è¦å¤„ç†çš„tokensæ•°é‡è€Œæ”¶è´¹çš„。\n", - " \n", - 
"针对以上问题,LangChain也æä¾›äº†å‡ ç§æ–¹ä¾¿çš„å‚¨å­˜æ–¹å¼æ¥ä¿å­˜åކå²å¯¹è¯ã€‚其中,对è¯ç¼“存窗å£å‚¨å­˜åªä¿ç•™ä¸€ä¸ªçª—å£å¤§å°çš„对è¯ã€‚它åªä½¿ç”¨æœ€è¿‘çš„n次交互。这å¯ä»¥ç”¨äºŽä¿æŒæœ€è¿‘交互的滑动窗å£ï¼Œä»¥ä¾¿ç¼“冲区ä¸ä¼šè¿‡å¤§" - ] - }, - { - "cell_type": "markdown", - "id": "b63c9061-9916-4524-b497-93a0aa2b7d06", - "metadata": {}, - "source": [ - "### 3.1 英文版" - ] - }, - { - "cell_type": "markdown", - "id": "641477a4", - "metadata": {}, - "source": [ - "#### 3.1.1 添加两轮对è¯åˆ°çª—å£å‚¨å­˜" - ] - }, - { - "cell_type": "code", - "execution_count": 65, - "id": "3ea6233e", - "metadata": { - "height": 47 - }, - "outputs": [], - "source": [ - "from langchain.memory import ConversationBufferWindowMemory\n", - "\n", - "# k 为窗å£å‚数,k=1表明åªä¿ç•™ä¸€ä¸ªå¯¹è¯è®°å¿†\n", - "memory = ConversationBufferWindowMemory(k=1) " - ] - }, - { - "cell_type": "code", - "execution_count": 66, - "id": "dc4553fb", - "metadata": { - "height": 115 - }, - "outputs": [], - "source": [ - "# å‘memory添加两轮对è¯\n", - "memory.save_context({\"input\": \"Hi\"}, {\"output\": \"What's up\"})\n", - "memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})" - ] - }, - { - "cell_type": "code", - "execution_count": 67, - "id": "6a788403", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: Not much, just hanging\\nAI: Cool'}" - ] - }, - "execution_count": 67, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# 并查看记忆å˜é‡å½“å‰çš„记录\n", - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "63bda148", - "metadata": {}, - "source": [ - "#### 3.1.2 在对è¯é“¾ä¸­åº”用窗å£å‚¨å­˜" - ] - }, - { - "cell_type": "code", - "execution_count": 68, - "id": "4087bc87", - "metadata": { - "height": 133 - }, - "outputs": [], - "source": [ - "llm = ChatOpenAI(temperature=0.0)\n", - "memory = ConversationBufferWindowMemory(k=1)\n", - "conversation = ConversationChain(llm=llm, memory=memory, verbose=False )" - ] - }, - { - 
"cell_type": "markdown", - "id": "b6d661e3", - "metadata": {}, - "source": [ - "æ³¨æ„æ­¤å¤„ï¼ç”±äºŽè¿™é‡Œç”¨çš„æ˜¯ä¸€ä¸ªçª—å£çš„记忆,因此åªèƒ½ä¿å­˜ä¸€è½®çš„åŽ†å²æ¶ˆæ¯ï¼Œå› æ­¤AIå¹¶ä¸èƒ½çŸ¥é“你第一轮对è¯ä¸­æåˆ°çš„å字,他最多åªèƒ½è®°ä½ä¸Šä¸€è½®ï¼ˆç¬¬äºŒè½®ï¼‰çš„对è¯ä¿¡æ¯" - ] - }, - { - "cell_type": "code", - "execution_count": 69, - "id": "4faaa952", - "metadata": { - "height": 47 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "\"Hello Andrew! It's nice to meet you. How can I assist you today?\"" - ] - }, - "execution_count": 69, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"Hi, my name is Andrew\")" - ] - }, - { - "cell_type": "code", - "execution_count": 70, - "id": "bb20ddaa", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "'1+1 is equal to 2.'" - ] - }, - "execution_count": 70, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"What is 1+1?\")" - ] - }, - { - "cell_type": "code", - "execution_count": 71, - "id": "489b2194", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "\"I'm sorry, but I don't have access to personal information.\"" - ] - }, - "execution_count": 71, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"What is my name?\")" - ] - }, - { - "cell_type": "markdown", - "id": "88837e7c-cf4b-469e-b820-bbfc49ba876c", - "metadata": {}, - "source": [ - "### 3.2 中文版" - ] - }, - { - "cell_type": "markdown", - "id": "760ec3ad-6959-4a36-b1b5-4fcafe8088ad", - "metadata": {}, - "source": [ - "#### 3.1.1 添加两轮对è¯åˆ°çª—å£å‚¨å­˜" - ] - }, - { - "cell_type": "code", - "execution_count": 72, - "id": "68a2907c", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'Human: 很高兴和你æˆä¸ºæœ‹å‹ï¼\\nAI: 是的,让我们一起去冒险å§ï¼'}" - ] - }, - "execution_count": 72, - "metadata": {}, - "output_type": "execute_result" 
- } - ], - "source": [ - "from langchain.memory import ConversationBufferWindowMemory\n", - "\n", - "# k=1表明åªä¿ç•™ä¸€ä¸ªå¯¹è¯è®°å¿†\n", - "memory = ConversationBufferWindowMemory(k=1) \n", - "memory.save_context({\"input\": \"你好,我å«çš®çš®é²\"}, {\"output\": \"你好啊,我å«é²è¥¿è¥¿\"})\n", - "memory.save_context({\"input\": \"很高兴和你æˆä¸ºæœ‹å‹ï¼\"}, {\"output\": \"是的,让我们一起去冒险å§ï¼\"})\n", - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "dcabf017-5bfd-4904-9f06-388f994eddc9", - "metadata": {}, - "source": [ - "#### 3.1.2 在对è¯é“¾ä¸­åº”用窗å£å‚¨å­˜" - ] - }, - { - "cell_type": "code", - "execution_count": 74, - "id": "1ee854d9", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "你好,皮皮é²ï¼å¾ˆé«˜å…´è®¤è¯†ä½ ã€‚我是一个AI助手,å¯ä»¥å›žç­”你的问题和æä¾›å¸®åŠ©ã€‚æœ‰ä»€ä¹ˆæˆ‘å¯ä»¥å¸®ä½ çš„å—?\n", - "1+1等于2。\n", - "å¾ˆæŠ±æ­‰ï¼Œæˆ‘æ— æ³•çŸ¥é“æ‚¨çš„å字。\n" - ] - } - ], - "source": [ - "llm = ChatOpenAI(temperature=0.0)\n", - "memory = ConversationBufferWindowMemory(k=1)\n", - "conversation = ConversationChain(llm=llm, memory=memory, verbose=False )\n", - "print(conversation.predict(input=\"你好, 我å«çš®çš®é²\"))\n", - "print(conversation.predict(input=\"1+1等于多少?\"))\n", - "print(conversation.predict(input=\"我å«ä»€ä¹ˆå字?\"))" - ] - }, - { - "cell_type": "markdown", - "id": "d2931b92", - "metadata": {}, - "source": [ - "## å››ã€å¯¹è¯token缓存储存" - ] - }, - { - "cell_type": "markdown", - "id": "dff5b4c7", - "metadata": {}, - "source": [ - "使用对è¯token缓存记忆,内存将é™åˆ¶ä¿å­˜çš„tokenæ•°é‡ã€‚如果tokenæ•°é‡è¶…出指定数目,它会切掉这个对è¯çš„æ—©æœŸéƒ¨åˆ†\n", - "以ä¿ç•™ä¸Žæœ€è¿‘的交æµç›¸å¯¹åº”çš„tokenæ•°é‡ï¼Œä½†ä¸è¶…过tokené™åˆ¶ã€‚" - ] - }, - { - "cell_type": "code", - "execution_count": 76, - "id": "9f6d063c", - "metadata": { - "height": 31 - }, - "outputs": [], - "source": [ - "!pip install -q tiktoken " - ] - }, - { - "cell_type": "code", - "execution_count": 77, - "id": "fb9020ed", - "metadata": { - "height": 81 - }, - "outputs": [], - "source": [ - 
"from langchain.llms import OpenAI\n", - "from langchain.memory import ConversationTokenBufferMemory" - ] - }, - { - "cell_type": "markdown", - "id": "f3a84112", - "metadata": {}, - "source": [ - "### 4.1 英文版\n", - "添加对è¯åˆ°Token缓存储存,é™åˆ¶tokenæ•°é‡ï¼Œè¿›è¡Œæµ‹è¯•" - ] - }, - { - "cell_type": "code", - "execution_count": 78, - "id": "43582ee6", - "metadata": { - "height": 149 - }, - "outputs": [], - "source": [ - "memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)\n", - "memory.save_context({\"input\": \"AI is what?!\"}, {\"output\": \"Amazing!\"})\n", - "memory.save_context({\"input\": \"Backpropagation is what?\"}, {\"output\": \"Beautiful!\"})\n", - "memory.save_context({\"input\": \"Chatbots are what?\"}, {\"output\": \"Charming!\"})" - ] - }, - { - "cell_type": "code", - "execution_count": 79, - "id": "284288e1", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'AI: Beautiful!\\nHuman: Chatbots are what?\\nAI: Charming!'}" - ] - }, - "execution_count": 79, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "7b62b2e1", - "metadata": {}, - "source": [ - "å¯ä»¥çœ‹åˆ°å‰é¢è¶…出的的tokenå·²ç»è¢«èˆå¼ƒäº†ï¼ï¼ï¼" - ] - }, - { - "cell_type": "markdown", - "id": "f7f6be43", - "metadata": {}, - "source": [ - "### 4.2 中文版" - ] - }, - { - "cell_type": "code", - "execution_count": 80, - "id": "e9191020", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': 'AI: 轻舟已过万é‡å±±ã€‚'}" - ] - }, - "execution_count": 80, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)\n", - "memory.save_context({\"input\": \"æœè¾žç™½å¸å½©äº‘间,\"}, {\"output\": \"åƒé‡Œæ±Ÿé™µä¸€æ—¥è¿˜ã€‚\"})\n", - "memory.save_context({\"input\": \"两岸猿声啼ä¸ä½ï¼Œ\"}, {\"output\": \"轻舟已过万é‡å±±ã€‚\"})\n", - 
"memory.load_memory_variables({})" - ] - }, - { - "cell_type": "markdown", - "id": "fb08ef4a-876f-422a-81f9-4805288e5955", - "metadata": {}, - "source": [ - "### 4.3 补充" - ] - }, - { - "cell_type": "markdown", - "id": "5e4d918b", - "metadata": {}, - "source": [ - "ChatGPT使用一ç§åŸºäºŽå­—节对编ç ï¼ˆByte Pair Encoding,BPE)的方法æ¥è¿›è¡Œtokenization(将输入文本拆分为token)。BPE是一ç§å¸¸è§çš„tokenization技术,它将输入文本分割æˆè¾ƒå°çš„å­è¯å•元。 \n", - "\n", - "OpenAI在其官方GitHub上公开了一个最新的开æºPython库 [tiktoken](https://github.com/openai/tiktoken)ï¼Œè¿™ä¸ªåº“ä¸»è¦æ˜¯ç”¨æ¥è®¡ç®—tokensæ•°é‡çš„。相比较HuggingFaceçš„tokenizer,其速度æå‡äº†å¥½å‡ å€ã€‚\n", - "\n", - "具体token计算方å¼,特别是汉字和英文å•è¯çš„token区别,具体课å‚考[知乎文章](https://www.zhihu.com/question/594159910) 。" - ] - }, - { - "cell_type": "markdown", - "id": "5ff55d5d", - "metadata": {}, - "source": [ - "## 五ã€å¯¹è¯æ‘˜è¦ç¼“存储存" - ] - }, - { - "cell_type": "markdown", - "id": "7d39b83a", - "metadata": {}, - "source": [ - "å¯¹è¯æ‘˜è¦ç¼“存储存,**使用LLM编写到目å‰ä¸ºæ­¢åކå²å¯¹è¯çš„æ‘˜è¦**,并将其ä¿å­˜" - ] - }, - { - "cell_type": "code", - "execution_count": 82, - "id": "72dcf8b1", - "metadata": { - "height": 64 - }, - "outputs": [], - "source": [ - "from langchain.chains import ConversationChain\n", - "from langchain.chat_models import ChatOpenAI\n", - "from langchain.memory import ConversationSummaryBufferMemory" - ] - }, - { - "cell_type": "markdown", - "id": "243b213e-ce17-46a0-8652-03658ca58dd8", - "metadata": {}, - "source": [ - "### 5.1 英文版" - ] - }, - { - "cell_type": "markdown", - "id": "6572ef39", - "metadata": {}, - "source": [ - "#### 5.1.1 ä½¿ç”¨å¯¹è¯æ‘˜è¦ç¼“存储存\n", - "\n", - "åˆ›å»ºä¸€ä¸ªé•¿å­—ç¬¦ä¸²ï¼Œå…¶ä¸­åŒ…å«æŸäººçš„æ—¥ç¨‹å®‰æŽ’" - ] - }, - { - "cell_type": "code", - "execution_count": 84, - "id": "4a5b238f", - "metadata": { - "height": 285 - }, - "outputs": [], - "source": [ - "# 创建一个长字符串\n", - "schedule = \"There is a meeting at 8am with your product team. \\\n", - "You will need your powerpoint presentation prepared. 
\\\n", - "9am-12pm have time to work on your LangChain \\\n", - "project which will go quickly because Langchain is such a powerful tool. \\\n", - "At Noon, lunch at the italian resturant with a customer who is driving \\\n", - "from over an hour away to meet you to understand the latest in AI. \\\n", - "Be sure to bring your laptop to show the latest LLM demo.\"\n", - "\n", - "# ä½¿ç”¨å¯¹è¯æ‘˜è¦ç¼“存记忆\n", - "llm = ChatOpenAI(temperature=0.0)\n", - "memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100) \n", - "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up\"})\n", - "memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})\n", - "memory.save_context(\n", - " {\"input\": \"What is on the schedule today?\"}, {\"output\": f\"{schedule}\"}\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 89, - "id": "15226a41-ab36-43a0-93f7-c03c6b374936", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. 
The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting.\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({})['history'])" - ] - }, - { - "cell_type": "markdown", - "id": "7ccb97b6", - "metadata": {}, - "source": [ - "#### 5.1.2 åŸºäºŽå¯¹è¯æ‘˜è¦ç¼“存储存的对è¯é“¾\n", - "基于上é¢çš„å¯¹è¯æ‘˜è¦ç¼“存储存,新建一个对è¯é“¾" - ] - }, - { - "cell_type": "code", - "execution_count": 90, - "id": "6728edba", - "metadata": { - "height": 99 - }, - "outputs": [], - "source": [ - "conversation = ConversationChain(llm=llm, memory=memory, verbose=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 91, - "id": "9a221b1d", - "metadata": { - "height": 47 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting.\n", - "Human: What would be a good demo to show?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'A good demo to show during the lunch meeting with the customer interested in AI would be the latest LLM (Language Model) demo. The LLM is a cutting-edge AI model that can generate human-like text based on a given prompt. 
It has been trained on a vast amount of data and can generate coherent and contextually relevant responses. By showcasing the LLM demo, you can demonstrate the capabilities of AI in natural language processing and how it can be applied to various industries and use cases.'" - ] - }, - "execution_count": 91, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"What would be a good demo to show?\")" - ] - }, - { - "cell_type": "code", - "execution_count": 92, - "id": "bb582617", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting. A good demo to show during the lunch meeting with the customer interested in AI would be the latest LLM (Language Model) demo. The LLM is a cutting-edge AI model that can generate human-like text based on a given prompt. It has been trained on a vast amount of data and can generate coherent and contextually relevant responses. 
By showcasing the LLM demo, you can demonstrate the capabilities of AI in natural language processing and how it can be applied to various industries and use cases.\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({})['history'])" - ] - }, - { - "cell_type": "markdown", - "id": "4ba827aa", - "metadata": { - "height": 31 - }, - "source": [ - "### 5.2 中文版" - ] - }, - { - "cell_type": "markdown", - "id": "64898f33-c538-4e68-b008-7123870b692b", - "metadata": {}, - "source": [ - "#### 5.2.1 ä½¿ç”¨å¯¹è¯æ‘˜è¦ç¼“存储存\n", - "\n", - "åˆ›å»ºä¸€ä¸ªé•¿å­—ç¬¦ä¸²ï¼Œå…¶ä¸­åŒ…å«æŸäººçš„æ—¥ç¨‹å®‰æŽ’" - ] - }, - { - "cell_type": "code", - "execution_count": 97, - "id": "2c07922b", - "metadata": { - "height": 31 - }, - "outputs": [], - "source": [ - "# 创建一个长字符串\n", - "schedule = \"在八点你和你的产å“团队有一个会议。 \\\n", - "你需è¦åšä¸€ä¸ªPPT。 \\\n", - "上åˆ9点到12点你需è¦å¿™äºŽLangChain。\\\n", - "Langchain是一个有用的工具,因此你的项目进展的éžå¸¸å¿«ã€‚\\\n", - "中åˆï¼Œåœ¨æ„大利é¤åŽ…ä¸Žä¸€ä½å¼€è½¦æ¥çš„顾客共进åˆé¤ \\\n", - "èµ°äº†ä¸€ä¸ªå¤šå°æ—¶çš„路程与你è§é¢ï¼Œåªä¸ºäº†è§£æœ€æ–°çš„ AI。 \\\n", - "ç¡®ä¿ä½ å¸¦äº†ç¬”记本电脑å¯ä»¥å±•示最新的 LLM 样例.\"\n", - "\n", - "llm = ChatOpenAI(temperature=0.0)\n", - "memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)\n", - "memory.save_context({\"input\": \"你好,我å«çš®çš®é²\"}, {\"output\": \"你好啊,我å«é²è¥¿è¥¿\"})\n", - "memory.save_context({\"input\": \"很高兴和你æˆä¸ºæœ‹å‹ï¼\"}, {\"output\": \"是的,让我们一起去冒险å§ï¼\"})\n", - "memory.save_context({\"input\": \"今天的日程安排是什么?\"}, {\"output\": f\"{schedule}\"})" - ] - }, - { - "cell_type": "code", - "execution_count": 98, - "id": "17424a12-430f-4529-9067-300978c6169e", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. 
The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. The AI advises the human to bring their laptop to showcase the latest LLM samples.\n" - ] - } - ], - "source": [ - "print(memory.load_memory_variables({})['history'])" - ] - }, - { - "cell_type": "markdown", - "id": "9e29a956-607a-4247-9eb5-01285a370991", - "metadata": {}, - "source": [ - "#### 5.1.2 åŸºäºŽå¯¹è¯æ‘˜è¦ç¼“存储存的对è¯é“¾\n", - "基于上é¢çš„å¯¹è¯æ‘˜è¦ç¼“存储存,新建一个对è¯é“¾" - ] - }, - { - "cell_type": "code", - "execution_count": 99, - "id": "52696c8c", - "metadata": { - "height": 31 - }, - "outputs": [], - "source": [ - "conversation = ConversationChain(llm=llm, memory=memory, verbose=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 100, - "id": "48690d13", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", - "\n", - "Current conversation:\n", - "System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. 
From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. The AI advises the human to bring their laptop to showcase the latest LLM samples.\n", - "Human: 展示什么样的样例最好呢?\n", - "AI:\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'展示一些具有多样性和创新性的样例å¯èƒ½æ˜¯æœ€å¥½çš„选择。你å¯ä»¥é€‰æ‹©å±•示一些基于图åƒè¯†åˆ«çš„æ ·ä¾‹ï¼Œæ¯”如人脸识别ã€ç‰©ä½“识别等。å¦å¤–,你也å¯ä»¥å±•ç¤ºä¸€äº›è‡ªç„¶è¯­è¨€å¤„ç†æ–¹é¢çš„æ ·ä¾‹ï¼Œæ¯”如文本生æˆã€æƒ…感分æžç­‰ã€‚最é‡è¦çš„æ˜¯é€‰æ‹©é‚£äº›èƒ½å¤Ÿå±•示出你们团队的技术实力和创造力的样例。'" - ] - }, - "execution_count": 100, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "conversation.predict(input=\"展示什么样的样例最好呢?\")" - ] - }, - { - "cell_type": "code", - "execution_count": 101, - "id": "85bba1f8", - "metadata": { - "height": 31 - }, - "outputs": [ - { - "data": { - "text/plain": [ - "{'history': \"System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. The AI advises the human to bring their laptop to showcase the latest LLM samples. The human asks what kind of samples would be best to showcase. The AI suggests that showcasing diverse and innovative samples would be the best choice. They recommend selecting samples based on image recognition, such as face recognition and object recognition. 
Additionally, they suggest showcasing samples related to natural language processing, such as text generation and sentiment analysis. The AI emphasizes the importance of choosing samples that demonstrate the team's technical expertise and creativity.\"}" - ] - }, - "execution_count": 101, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "memory.load_memory_variables({}) # 摘è¦è®°å½•更新了" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.12" - }, - "toc": { - "base_numbering": 1, - "nav_menu": {}, - "number_sections": false, - "sideBar": true, - "skip_h1_title": false, - "title_cell": "", - "title_sidebar": "Contents", - "toc_cell": false, - "toc_position": {}, - "toc_section_display": true, - "toc_window_display": true - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} +{"cells": [{"cell_type": "markdown", "id": "a786c77c", "metadata": {"tags": []}, "source": ["# \u7b2c\u4e09\u7ae0 \u50a8\u5b58\n", "\n", " - [\u4e00\u3001\u8bbe\u7f6eOpenAI API Key](#\u4e00\u3001\u8bbe\u7f6eOpenAI-API-Key)\n", " - [\u4e8c\u3001\u5bf9\u8bdd\u7f13\u5b58\u50a8\u5b58](#\u4e8c\u3001\u5bf9\u8bdd\u7f13\u5b58\u50a8\u5b58)\n", " - [2.1 \u82f1\u6587\u7248](#2.1-\u82f1\u6587\u7248)\n", " - [2.1.1 \u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b](#2.1.1-\u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b)\n", " - [2.1.2 \u7b2c\u4e00\u8f6e\u5bf9\u8bdd](#2.1.2-\u7b2c\u4e00\u8f6e\u5bf9\u8bdd)\n", " - [2.1.3 \u7b2c\u4e8c\u8f6e\u5bf9\u8bdd](#2.1.3-\u7b2c\u4e8c\u8f6e\u5bf9\u8bdd)\n", " - [2.1.4 \u7b2c\u4e09\u8f6e\u5bf9\u8bdd](#2.1.4-\u7b2c\u4e09\u8f6e\u5bf9\u8bdd)\n", " - [2.1.5 
\u67e5\u770b\u50a8\u5b58\u7f13\u5b58](#2.1.5-\u67e5\u770b\u50a8\u5b58\u7f13\u5b58)\n", " - [2.1.6 \u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58](#2.1.6-\u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58)\n", " - [2.2 \u4e2d\u6587\u7248](#2.2-\u4e2d\u6587\u7248)\n", " - [2.1.1 \u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b](#2.1.1-\u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b)\n", " - [2.1.2 \u7b2c\u4e00\u8f6e\u5bf9\u8bdd](#2.1.2-\u7b2c\u4e00\u8f6e\u5bf9\u8bdd)\n", " - [2.1.3 \u7b2c\u4e8c\u8f6e\u5bf9\u8bdd](#2.1.3-\u7b2c\u4e8c\u8f6e\u5bf9\u8bdd)\n", " - [2.1.4 \u7b2c\u4e09\u8f6e\u5bf9\u8bdd](#2.1.4-\u7b2c\u4e09\u8f6e\u5bf9\u8bdd)\n", " - [2.1.5 \u67e5\u770b\u50a8\u5b58\u7f13\u5b58](#2.1.5-\u67e5\u770b\u50a8\u5b58\u7f13\u5b58)\n", " - [2.1.6 \u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58](#2.1.6-\u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58)\n", " - [2.3 \u603b\u7ed3](#2.3-\u603b\u7ed3)\n", " - [\u4e09\u3001\u5bf9\u8bdd\u7f13\u5b58\u7a97\u53e3\u50a8\u5b58](#\u4e09\u3001\u5bf9\u8bdd\u7f13\u5b58\u7a97\u53e3\u50a8\u5b58)\n", " - [3.1 \u82f1\u6587\u7248](#3.1-\u82f1\u6587\u7248)\n", " - [3.1.1 \u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58](#3.1.1-\u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58)\n", " - [3.1.2 \u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58](#3.1.2-\u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58)\n", " - [3.2 \u4e2d\u6587\u7248](#3.2-\u4e2d\u6587\u7248)\n", " - [3.1.1 \u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58](#3.1.1-\u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58)\n", " - [3.1.2 \u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58](#3.1.2-\u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58)\n", " - [\u56db\u3001\u5bf9\u8bddtoken\u7f13\u5b58\u50a8\u5b58](#\u56db\u3001\u5bf9\u8bddtoken\u7f13\u5b58\u50a8\u5b58)\n", " - [4.1 
\u82f1\u6587\u7248](#4.1-\u82f1\u6587\u7248)\n", " - [4.2 \u4e2d\u6587\u7248](#4.2-\u4e2d\u6587\u7248)\n", " - [4.3 \u8865\u5145](#4.3-\u8865\u5145)\n", " - [\u4e94\u3001\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58](#\u4e94\u3001\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58)\n", " - [5.1 \u82f1\u6587\u7248](#5.1-\u82f1\u6587\u7248)\n", " - [5.1.1 \u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58](#5.1.1-\u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58)\n", " - [5.1.2 \u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe](#5.1.2-\u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe)\n", " - [5.2 \u4e2d\u6587\u7248](#5.2-\u4e2d\u6587\u7248)\n", " - [5.2.1 \u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58](#5.2.1-\u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58)\n", " - [5.1.2 \u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe](#5.1.2-\u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe)\n"]}, {"cell_type": "markdown", "id": "7e10db6f", "metadata": {}, "source": ["\u5f53\u4f60\u4e0e\u90a3\u4e9b\u8bed\u8a00\u6a21\u578b\u8fdb\u884c\u4ea4\u4e92\u7684\u65f6\u5019\uff0c\u4ed6\u4eec\u4e0d\u4f1a\u8bb0\u5f97\u4f60\u4e4b\u524d\u548c\u4ed6\u8fdb\u884c\u7684\u4ea4\u6d41\u5185\u5bb9\uff0c\u8fd9\u5728\u6211\u4eec\u6784\u5efa\u4e00\u4e9b\u5e94\u7528\u7a0b\u5e8f\uff08\u5982\u804a\u5929\u673a\u5668\u4eba\uff09\u7684\u65f6\u5019\uff0c\u662f\u4e00\u4e2a\u5f88\u5927\u7684\u95ee\u9898 -- \u663e\u5f97\u4e0d\u591f\u667a\u80fd\uff01\u56e0\u6b64\uff0c\u5728\u672c\u8282\u4e2d\u6211\u4eec\u5c06\u4ecb\u7ecd LangChain \u4e2d\u7684\u50a8\u5b58\u6a21\u5757\uff0c\u5373\u5982\u4f55\u5c06\u5148\u524d\u7684\u5bf9\u8bdd\u5d4c\u5165\u5230\u8bed\u8a00\u6a21\u578b\u4e2d\u7684\uff0c\u4f7f\u5176\u5177\u6709\u8fde\u7eed\u5bf9\u8bdd\u7684\u80fd\u529b\u3002\n", "\n", "\u5f53\u4f7f\u7528 LangChain 
\u4e2d\u7684\u50a8\u5b58(Memory)\u6a21\u5757\u65f6\uff0c\u5b83\u53ef\u4ee5\u5e2e\u52a9\u4fdd\u5b58\u548c\u7ba1\u7406\u5386\u53f2\u804a\u5929\u6d88\u606f\uff0c\u4ee5\u53ca\u6784\u5efa\u5173\u4e8e\u7279\u5b9a\u5b9e\u4f53\u7684\u77e5\u8bc6\u3002\u8fd9\u4e9b\u7ec4\u4ef6\u53ef\u4ee5\u8de8\u591a\u8f6e\u5bf9\u8bdd\u50a8\u5b58\u4fe1\u606f\uff0c\u5e76\u5141\u8bb8\u5728\u5bf9\u8bdd\u671f\u95f4\u8ddf\u8e2a\u7279\u5b9a\u4fe1\u606f\u548c\u4e0a\u4e0b\u6587\u3002\n", "\n", "LangChain \u63d0\u4f9b\u4e86\u591a\u79cd\u50a8\u5b58\u7c7b\u578b\u3002\u5176\u4e2d\uff0c\u7f13\u51b2\u533a\u50a8\u5b58\u5141\u8bb8\u4fdd\u7559\u6700\u8fd1\u7684\u804a\u5929\u6d88\u606f\uff0c\u6458\u8981\u50a8\u5b58\u5219\u63d0\u4f9b\u4e86\u5bf9\u6574\u4e2a\u5bf9\u8bdd\u7684\u6458\u8981\u3002\u5b9e\u4f53\u50a8\u5b58 \u5219\u5141\u8bb8\u5728\u591a\u8f6e\u5bf9\u8bdd\u4e2d\u4fdd\u7559\u6709\u5173\u7279\u5b9a\u5b9e\u4f53\u7684\u4fe1\u606f\u3002\u8fd9\u4e9b\u8bb0\u5fc6\u7ec4\u4ef6\u90fd\u662f\u6a21\u5757\u5316\u7684\uff0c\u53ef\u4e0e\u5176\u4ed6\u7ec4\u4ef6\u7ec4\u5408\u4f7f\u7528\uff0c\u4ece\u800c\u589e\u5f3a\u673a\u5668\u4eba\u7684\u5bf9\u8bdd\u7ba1\u7406\u80fd\u529b\u3002\u50a8\u5b58\u6a21\u5757\u53ef\u4ee5\u901a\u8fc7\u7b80\u5355\u7684API\u8c03\u7528\u6765\u8bbf\u95ee\u548c\u66f4\u65b0\uff0c\u5141\u8bb8\u5f00\u53d1\u4eba\u5458\u66f4\u8f7b\u677e\u5730\u5b9e\u73b0\u5bf9\u8bdd\u5386\u53f2\u8bb0\u5f55\u7684\u7ba1\u7406\u548c\u7ef4\u62a4\u3002\n", "\n", "\u6b64\u6b21\u8bfe\u7a0b\u4e3b\u8981\u4ecb\u7ecd\u5176\u4e2d\u56db\u79cd\u50a8\u5b58\u6a21\u5757\uff0c\u5176\u4ed6\u6a21\u5757\u53ef\u67e5\u770b\u6587\u6863\u5b66\u4e60\u3002\n", "- \u5bf9\u8bdd\u7f13\u5b58\u50a8\u5b58 (ConversationBufferMemory\uff09\n", "- \u5bf9\u8bdd\u7f13\u5b58\u7a97\u53e3\u50a8\u5b58 (ConversationBufferWindowMemory\uff09\n", "- \u5bf9\u8bdd\u4ee4\u724c\u7f13\u5b58\u50a8\u5b58 (ConversationTokenBufferMemory\uff09\n", "- \u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58 (ConversationSummaryBufferMemory\uff09\n", "\n", 
"\u5728LangChain\u4e2d\uff0c\u50a8\u5b58 \u6307\u7684\u662f\u5927\u8bed\u8a00\u6a21\u578b\uff08LLM\uff09\u7684\u77ed\u671f\u8bb0\u5fc6\u3002\u4e3a\u4ec0\u4e48\u662f\u77ed\u671f\u8bb0\u5fc6\uff1f\u90a3\u662f\u56e0\u4e3aLLM\u8bad\u7ec3\u597d\u4e4b\u540e (\u83b7\u5f97\u4e86\u4e00\u4e9b\u957f\u671f\u8bb0\u5fc6)\uff0c\u5b83\u7684\u53c2\u6570\u4fbf\u4e0d\u4f1a\u56e0\u4e3a\u7528\u6237\u7684\u8f93\u5165\u800c\u53d1\u751f\u6539\u53d8\u3002\u5f53\u7528\u6237\u4e0e\u8bad\u7ec3\u597d\u7684LLM\u8fdb\u884c\u5bf9\u8bdd\u65f6\uff0cLLM\u4f1a\u6682\u65f6\u8bb0\u4f4f\u7528\u6237\u7684\u8f93\u5165\u548c\u5b83\u5df2\u7ecf\u751f\u6210\u7684\u8f93\u51fa\uff0c\u4ee5\u4fbf\u9884\u6d4b\u4e4b\u540e\u7684\u8f93\u51fa\uff0c\u800c\u6a21\u578b\u8f93\u51fa\u5b8c\u6bd5\u540e\uff0c\u5b83\u4fbf\u4f1a\u201c\u9057\u5fd8\u201d\u4e4b\u524d\u7528\u6237\u7684\u8f93\u5165\u548c\u5b83\u7684\u8f93\u51fa\u3002\u56e0\u6b64\uff0c\u4e4b\u524d\u7684\u8fd9\u4e9b\u4fe1\u606f\u53ea\u80fd\u79f0\u4f5c\u4e3aLLM\u7684\u77ed\u671f\u8bb0\u5fc6\u3002 \n", " \n", "\u4e3a\u4e86\u5ef6\u957fLLM\u77ed\u671f\u8bb0\u5fc6\u7684\u4fdd\u7559\u65f6\u95f4\uff0c\u5219\u9700\u8981\u501f\u52a9\u4e00\u4e9b\u5916\u90e8\u50a8\u5b58\u65b9\u5f0f\u6765\u8fdb\u884c\u8bb0\u5fc6\uff0c\u4ee5\u4fbf\u5728\u7528\u6237\u4e0eLLM\u5bf9\u8bdd\u4e2d\uff0cLLM\u80fd\u591f\u5c3d\u53ef\u80fd\u7684\u77e5\u9053\u7528\u6237\u4e0e\u5b83\u6240\u8fdb\u884c\u7684\u5386\u53f2\u5bf9\u8bdd\u4fe1\u606f\u3002 "]}, {"cell_type": "markdown", "id": "1ca56e6b-1e07-4405-a1ca-f4237f20fa75", "metadata": {"jp-MarkdownHeadingCollapsed": true, "tags": []}, "source": ["## \u4e00\u3001\u8bbe\u7f6eOpenAI API Key\n", "\n", "\u767b\u9646 [OpenAI \u8d26\u6237](https://platform.openai.com/account/api-keys) \u83b7\u53d6API Key\uff0c\u7136\u540e\u5c06\u5176\u8bbe\u7f6e\u4e3a\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "- 
\u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u5168\u5c40\u73af\u5883\u53d8\u91cf\uff0c\u53ef\u4ee5\u53c2\u8003[\u77e5\u4e4e\u6587\u7ae0](https://zhuanlan.zhihu.com/p/627665725)\u3002\n", "- \u5982\u679c\u4f60\u60f3\u8981\u8bbe\u7f6e\u4e3a\u672c\u5730/\u9879\u76ee\u73af\u5883\u53d8\u91cf\uff0c\u5728\u672c\u6587\u4ef6\u76ee\u5f55\u4e0b\u521b\u5efa`.env`\u6587\u4ef6, \u6253\u5f00\u6587\u4ef6\u8f93\u5165\u4ee5\u4e0b\u5185\u5bb9\u3002\n", "\n", "

\n", " OPENAI_API_KEY=\"your_api_key\" \n", "

\n", " \n", " \u66ff\u6362\"your_api_key\"\u4e3a\u4f60\u81ea\u5df1\u7684 API Key"]}, {"cell_type": "code", "execution_count": null, "id": "6932bd47-c6d5-4794-8102-a12b84412a93", "metadata": {}, "outputs": [], "source": ["# \u4e0b\u8f7d\u9700\u8981\u7684\u5305python-dotenv\u548copenai\n", "# \u5982\u679c\u4f60\u9700\u8981\u67e5\u770b\u5b89\u88c5\u8fc7\u7a0b\u65e5\u5fd7\uff0c\u53ef\u5220\u9664 -q\n", "!pip install -q python-dotenv\n", "!pip install -q openai"]}, {"cell_type": "code", "execution_count": 1, "id": "10446712-9fa6-4d71-94ce-2ea4cf197e54", "metadata": {}, "outputs": [], "source": ["import os\n", "\n", "import openai\n", "from dotenv import find_dotenv, load_dotenv\n", "\n", "# \u8bfb\u53d6\u672c\u5730/\u9879\u76ee\u7684\u73af\u5883\u53d8\u91cf\u3002\n", "\n", "# find_dotenv()\u5bfb\u627e\u5e76\u5b9a\u4f4d.env\u6587\u4ef6\u7684\u8def\u5f84\n", "# load_dotenv()\u8bfb\u53d6\u8be5.env\u6587\u4ef6\uff0c\u5e76\u5c06\u5176\u4e2d\u7684\u73af\u5883\u53d8\u91cf\u52a0\u8f7d\u5230\u5f53\u524d\u7684\u8fd0\u884c\u73af\u5883\u4e2d\n", "# \u5982\u679c\u4f60\u8bbe\u7f6e\u7684\u662f\u5168\u5c40\u7684\u73af\u5883\u53d8\u91cf\uff0c\u8fd9\u884c\u4ee3\u7801\u5219\u6ca1\u6709\u4efb\u4f55\u4f5c\u7528\u3002\n", "_ = load_dotenv(find_dotenv())\n", "\n", "# \u83b7\u53d6\u73af\u5883\u53d8\u91cf OPENAI_API_KEY\n", "openai.api_key = os.environ[\"OPENAI_API_KEY\"]"]}, {"cell_type": "markdown", "id": "1297dcd5", "metadata": {"tags": []}, "source": ["## \u4e8c\u3001\u5bf9\u8bdd\u7f13\u5b58\u50a8\u5b58\n", " "]}, {"cell_type": "markdown", "id": "ffa8cd26-45c5-4bae-90b2-b07d23bf3bb2", "metadata": {}, "source": ["### 2.1 \u82f1\u6587\u7248"]}, {"cell_type": "markdown", "id": "b7e77a3d-7aaa-48c1-b219-19bd6f4eb674", "metadata": {"tags": []}, "source": ["#### 2.1.1 \u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b"]}, {"cell_type": "code", "execution_count": 47, "id": "20ad6fe2", "metadata": {"height": 98}, "outputs": [], "source": ["from langchain.chains import ConversationChain\n", "from 
langchain.chat_models import ChatOpenAI\n", "from langchain.memory import ConversationBufferMemory"]}, {"cell_type": "code", "execution_count": 48, "id": "88bdf13d", "metadata": {"height": 133}, "outputs": [], "source": ["# \u8fd9\u91cc\u6211\u4eec\u5c06\u53c2\u6570temperature\u8bbe\u7f6e\u4e3a0.0\uff0c\u4ece\u800c\u51cf\u5c11\u751f\u6210\u7b54\u6848\u7684\u968f\u673a\u6027\u3002\n", "# \u5982\u679c\u4f60\u60f3\u8981\u6bcf\u6b21\u5f97\u5230\u4e0d\u4e00\u6837\u7684\u6709\u65b0\u610f\u7684\u7b54\u6848\uff0c\u53ef\u4ee5\u5c1d\u8bd5\u589e\u5927\u8be5\u53c2\u6570\u3002\n", "llm = ChatOpenAI(temperature=0.0) \n", "\n", "memory = ConversationBufferMemory()\n", "\n", "# \u65b0\u5efa\u4e00\u4e2a ConversationChain Class \u5b9e\u4f8b\n", "# verbose\u53c2\u6570\u8bbe\u7f6e\u4e3aTrue\u65f6\uff0c\u7a0b\u5e8f\u4f1a\u8f93\u51fa\u66f4\u8be6\u7ec6\u7684\u4fe1\u606f\uff0c\u4ee5\u63d0\u4f9b\u66f4\u591a\u7684\u8c03\u8bd5\u6216\u8fd0\u884c\u65f6\u4fe1\u606f\u3002\n", "# \u76f8\u53cd\uff0c\u5f53\u5c06verbose\u53c2\u6570\u8bbe\u7f6e\u4e3aFalse\u65f6\uff0c\u7a0b\u5e8f\u4f1a\u4ee5\u66f4\u7b80\u6d01\u7684\u65b9\u5f0f\u8fd0\u884c\uff0c\u53ea\u8f93\u51fa\u5173\u952e\u7684\u4fe1\u606f\u3002\n", "conversation = ConversationChain(llm=llm, memory = memory, verbose=True )"]}, {"cell_type": "markdown", "id": "dea83837", "metadata": {"tags": []}, "source": ["#### 2.1.2 \u7b2c\u4e00\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "markdown", "id": "1a3b4c42", "metadata": {}, "source": 
["\u5f53\u6211\u4eec\u8fd0\u884c\u9884\u6d4b(predict)\u65f6\uff0c\u751f\u6210\u4e86\u4e00\u4e9b\u63d0\u793a\uff0c\u5982\u4e0b\u6240\u89c1\uff0c\u4ed6\u8bf4\u201c\u4ee5\u4e0b\u662f\u4eba\u7c7b\u548cAI\u4e4b\u95f4\u53cb\u597d\u7684\u5bf9\u8bdd\uff0cAI\u5065\u8c08\u201c\u7b49\u7b49\uff0c\u8fd9\u5b9e\u9645\u4e0a\u662fLangChain\u751f\u6210\u7684\u63d0\u793a\uff0c\u4ee5\u4f7f\u7cfb\u7edf\u8fdb\u884c\u5e0c\u671b\u548c\u53cb\u597d\u7684\u5bf9\u8bdd\uff0c\u5e76\u4e14\u5fc5\u987b\u4fdd\u5b58\u5bf9\u8bdd\uff0c\u5e76\u63d0\u793a\u4e86\u5f53\u524d\u5df2\u5b8c\u6210\u7684\u6a21\u578b\u94fe\u3002"]}, {"cell_type": "code", "execution_count": 49, "id": "db24677d", "metadata": {"height": 47}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "\n", "Human: Hi, my name is Andrew\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["\"Hello Andrew! It's nice to meet you. 
How can I assist you today?\""]}, "execution_count": 49, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"Hi, my name is Andrew\")"]}, {"cell_type": "markdown", "id": "e71564ad", "metadata": {}, "source": ["#### 2.1.3 \u7b2c\u4e8c\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "markdown", "id": "54d006bd", "metadata": {}, "source": ["\u5f53\u6211\u4eec\u8fdb\u884c\u7b2c\u4e8c\u8f6e\u5bf9\u8bdd\u65f6\uff0c\u5b83\u4f1a\u4fdd\u7559\u4e0a\u9762\u7684\u63d0\u793a"]}, {"cell_type": "code", "execution_count": 50, "id": "cc3ef937", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "Human: Hi, my name is Andrew\n", "AI: Hello Andrew! It's nice to meet you. 
How can I assist you today?\n", "Human: What is 1+1?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'1+1 is equal to 2.'"]}, "execution_count": 50, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"What is 1+1?\")"]}, {"cell_type": "markdown", "id": "33cb734b", "metadata": {}, "source": ["#### 2.1.4 \u7b2c\u4e09\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "markdown", "id": "0393df3d", "metadata": {}, "source": ["\u4e3a\u4e86\u9a8c\u8bc1\u4ed6\u662f\u5426\u8bb0\u5fc6\u4e86\u524d\u9762\u7684\u5bf9\u8bdd\u5185\u5bb9\uff0c\u6211\u4eec\u8ba9\u4ed6\u56de\u7b54\u524d\u9762\u5df2\u7ecf\u8bf4\u8fc7\u7684\u5185\u5bb9\uff08\u6211\u7684\u540d\u5b57\uff09\uff0c\u53ef\u4ee5\u770b\u5230\u4ed6\u786e\u5b9e\u8f93\u51fa\u4e86\u6b63\u786e\u7684\u540d\u5b57\uff0c\u56e0\u6b64\u8fd9\u4e2a\u5bf9\u8bdd\u94fe\u968f\u7740\u5f80\u4e0b\u8fdb\u884c\u4f1a\u8d8a\u6765\u8d8a\u957f"]}, {"cell_type": "code", "execution_count": 51, "id": "acf3339a", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "Human: Hi, my name is Andrew\n", "AI: Hello Andrew! It's nice to meet you. 
How can I assist you today?\n", "Human: What is 1+1?\n", "AI: 1+1 is equal to 2.\n", "Human: What is my name?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'Your name is Andrew.'"]}, "execution_count": 51, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"What is my name?\")"]}, {"cell_type": "markdown", "id": "5a96a8d9", "metadata": {}, "source": ["#### 2.1.5 \u67e5\u770b\u50a8\u5b58\u7f13\u5b58\n", "\n", "\u50a8\u5b58\u7f13\u5b58(memory.buffer)\n", "\u50a8\u5b58\u4e86\u5f53\u524d\u4e3a\u6b62\u6240\u6709\u7684\u5bf9\u8bdd\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 52, "id": "2529400d", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Human: Hi, my name is Andrew\n", "AI: Hello Andrew! It's nice to meet you. How can I assist you today?\n", "Human: What is 1+1?\n", "AI: 1+1 is equal to 2.\n", "Human: What is my name?\n", "AI: Your name is Andrew.\n"]}], "source": ["print(memory.buffer) "]}, {"cell_type": "markdown", "id": "0b5de846", "metadata": {}, "source": ["\u4e5f\u53ef\u4ee5\u901a\u8fc7memory.load_memory_variables({})\u6253\u5370\u7f13\u5b58\u4e2d\u7684\u5386\u53f2\u6d88\u606f\u3002\u8fd9\u91cc\u7684`{}`\u662f\u4e00\u4e2a\u7a7a\u5b57\u5178\uff0c\u6709\u4e00\u4e9b\u66f4\u9ad8\u7ea7\u7684\u529f\u80fd\uff0c\u4f7f\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u66f4\u590d\u6742\u7684\u8f93\u5165\uff0c\u4f46\u6211\u4eec\u4e0d\u4f1a\u5728\u8fd9\u4e2a\u77ed\u671f\u8bfe\u7a0b\u4e2d\u8ba8\u8bba\u5b83\u4eec\uff0c\u6240\u4ee5\u4e0d\u8981\u62c5\u5fc3\u4e3a\u4ec0\u4e48\u8fd9\u91cc\u6709\u4e00\u4e2a\u7a7a\u7684\u82b1\u62ec\u53f7\u3002"]}, {"cell_type": "code", "execution_count": 53, "id": "5018cb0a", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{'history': \"Human: Hi, my name is Andrew\\nAI: Hello Andrew! It's nice to meet you. 
How can I assist you today?\\nHuman: What is 1+1?\\nAI: 1+1 is equal to 2.\\nHuman: What is my name?\\nAI: Your name is Andrew.\"}\n"]}], "source": ["print(memory.load_memory_variables({}))"]}, {"cell_type": "markdown", "id": "07d2e892", "metadata": {}, "source": ["#### 2.1.6 \u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58"]}, {"cell_type": "code", "execution_count": 54, "id": "14219b70", "metadata": {"height": 31}, "outputs": [], "source": ["memory = ConversationBufferMemory() # \u65b0\u5efa\u4e00\u4e2a\u7a7a\u7684\u5bf9\u8bdd\u7f13\u5b58\u8bb0\u5fc6"]}, {"cell_type": "code", "execution_count": 55, "id": "a36e9905", "metadata": {"height": 48}, "outputs": [], "source": ["memory.save_context({\"input\": \"Hi\"}, {\"output\": \"What's up\"}) # \u5411\u7f13\u5b58\u533a\u6dfb\u52a0\u6307\u5b9a\u5bf9\u8bdd\u7684\u8f93\u5165\u8f93\u51fa"]}, {"cell_type": "code", "execution_count": 56, "id": "61631b1f", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Human: Hi\n", "AI: What's up\n"]}], "source": ["print(memory.buffer) # \u67e5\u770b\u7f13\u5b58\u533a\u7ed3\u679c"]}, {"cell_type": "code", "execution_count": 57, "id": "a2fdf9ec", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{'history': \"Human: Hi\\nAI: What's up\"}\n"]}], "source": ["print(memory.load_memory_variables({}))# \u518d\u6b21\u52a0\u8f7d\u8bb0\u5fc6\u53d8\u91cf"]}, {"cell_type": "markdown", "id": "2ac544f2", "metadata": {}, "source": ["\u7ee7\u7eed\u6dfb\u52a0\u65b0\u7684\u5185\u5bb9\uff0c\u5bf9\u8bdd\u5386\u53f2\u90fd\u4fdd\u5b58\u4e0b\u6765\u5728\u4e86\uff01"]}, {"cell_type": "code", "execution_count": 58, "id": "7ca79256", "metadata": {"height": 64}, "outputs": [], "source": ["memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})"]}, {"cell_type": "code", "execution_count": 59, "id": "890a4497", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": 
["{'history': \"Human: Hi\\nAI: What's up\\nHuman: Not much, just hanging\\nAI: Cool\"}"]}, "execution_count": 59, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "6eddd91a-bf1c-4b82-b99c-c585420e4ecb", "metadata": {}, "source": ["### 2.2 \u4e2d\u6587\u7248"]}, {"cell_type": "markdown", "id": "55b3e4e9-7a6a-4a09-9ac3-0096a67849c7", "metadata": {}, "source": ["#### 2.1.1 \u521d\u59cb\u5316\u5bf9\u8bdd\u6a21\u578b"]}, {"cell_type": "code", "execution_count": 30, "id": "3577aaff-7edb-40b0-866a-e407e63d55e0", "metadata": {"height": 98}, "outputs": [], "source": ["from langchain.chains import ConversationChain\n", "from langchain.chat_models import ChatOpenAI\n", "from langchain.memory import ConversationBufferMemory"]}, {"cell_type": "code", "execution_count": 33, "id": "a77d37ab-1f75-4ae8-8d7c-5066773ead81", "metadata": {"height": 133}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "\n", "Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'\u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f'"]}, "execution_count": 33, "metadata": {}, "output_type": "execute_result"}], "source": ["llm = ChatOpenAI(temperature=0.0) \n", "\n", "memory = ConversationBufferMemory()\n", "\n", "conversation = ConversationChain(llm=llm, memory = memory, verbose=True )"]}, {"cell_type": "markdown", "id": "747cb539-abc4-4e47-8cb9-1ee608ab07fc", "metadata": {}, "source": ["#### 2.2.2 \u7b2c\u4e00\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "code", "execution_count": null, "id": "99e48462-7a92-4842-bdaa-2a478ba2252c", "metadata": {}, "outputs": [], "source": ["conversation.predict(input=\"\u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\")"]}, {"cell_type": "markdown", "id": "979d320b-6b20-4722-99db-c48a43711d6c", "metadata": {}, "source": ["#### 2.2.3 \u7b2c\u4e8c\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "code", "execution_count": 34, "id": "6ed6e97a-d7ea-4188-a6d7-f91d2a29d14a", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\n", "AI: \u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\n", "Human: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'1+1\u7b49\u4e8e2\u3002'"]}, "execution_count": 34, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"1+1\u7b49\u4e8e\u591a\u5c11\uff1f\")"]}, {"cell_type": "markdown", "id": "8a1fd531-216e-42d8-b226-839747ad7dd3", "metadata": {}, "source": ["#### 2.2.4 \u7b2c\u4e09\u8f6e\u5bf9\u8bdd"]}, {"cell_type": "code", "execution_count": null, "id": "b5dfe488-2758-42c7-9c20-e483b4c22ab8", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\n", "AI: \u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\n", "Human: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\n", "AI: 1+1\u7b49\u4e8e2\u3002\n", "Human: What is my name?\n", "AI: \u4f60\u7684\u540d\u5b57\u662f\u76ae\u76ae\u9c81\u3002\n", "Human: \u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'\u4f60\u53eb\u76ae\u76ae\u9c81\u3002'"]}, "execution_count": 36, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"\u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\")"]}, {"cell_type": "markdown", "id": "05d9822e-943d-4905-a1f8-a0d28c215d60", "metadata": {}, "source": ["#### 2.2.5 \u67e5\u770b\u50a8\u5b58\u7f13\u5b58"]}, {"cell_type": "code", "execution_count": 37, "id": "0795580f-b6b6-47e0-8882-26fe204560bd", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\n", "AI: \u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\n", "Human: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\n", "AI: 1+1\u7b49\u4e8e2\u3002\n", "Human: What is my name?\n", "AI: \u4f60\u7684\u540d\u5b57\u662f\u76ae\u76ae\u9c81\u3002\n", "Human: \u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\n", "AI: \u4f60\u53eb\u76ae\u76ae\u9c81\u3002\n"]}], "source": ["print(memory.buffer) "]}, 
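{"cell_type": "markdown", "id": "aa11mem1", "metadata": {}, "source": ["除了 memory.buffer，也可以通过 memory.chat_memory.messages 查看底层保存的消息对象列表。下面是一个最小示意（基于本文使用的 LangChain 0.0.x 接口，新版本中接口可能有所变化）："]}, {"cell_type": "code", "execution_count": null, "id": "aa11mem2", "metadata": {}, "outputs": [], "source": ["# chat_memory.messages 中保存的是 HumanMessage/AIMessage 消息对象\n", "for message in memory.chat_memory.messages:\n", "    print(type(message).__name__, \":\", message.content)"]}, 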
{"cell_type": "code", "execution_count": 38, "id": "dfe7824f-bd6e-4b95-92e2-1c85c62a92e9", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\n", "AI: \u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\n", "Human: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\n", "AI: 1+1\u7b49\u4e8e2\u3002\n", "Human: What is my name?\n", "AI: \u4f60\u7684\u540d\u5b57\u662f\u76ae\u76ae\u9c81\u3002\n", "Human: \u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\n", "AI: \u4f60\u53eb\u76ae\u76ae\u9c81\u3002\n"]}], "source": ["print(memory.buffer) "]}, {"cell_type": "code", "execution_count": 39, "id": "48942759-8afb-4aed-80c5-a48952a2b0c0", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["{'history': 'Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\\nAI: \u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\\nHuman: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\\nAI: 1+1\u7b49\u4e8e2\u3002\\nHuman: What is my name?\\nAI: \u4f60\u7684\u540d\u5b57\u662f\u76ae\u76ae\u9c81\u3002\\nHuman: \u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\\nAI: \u4f60\u53eb\u76ae\u76ae\u9c81\u3002'}\n"]}], "source": ["print(memory.load_memory_variables({}))"]}, {"cell_type": "code", "execution_count": 40, "id": "4d0c4625-e928-45dc-b8da-4ab865ac5f7e", "metadata": {}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: \u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\\nAI: 
\u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\\nHuman: 1+1\u7b49\u4e8e\u591a\u5c11\uff1f\\nAI: 1+1\u7b49\u4e8e2\u3002\\nHuman: What is my name?\\nAI: \u4f60\u7684\u540d\u5b57\u662f\u76ae\u76ae\u9c81\u3002\\nHuman: \u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\\nAI: \u4f60\u53eb\u76ae\u76ae\u9c81\u3002'}"]}, "execution_count": 40, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "80f3778e-5fbf-43ed-9df1-d57d98ec6fb0", "metadata": {}, "source": ["#### 2.2.6 \u76f4\u63a5\u6dfb\u52a0\u5185\u5bb9\u5230\u50a8\u5b58\u7f13\u5b58"]}, {"cell_type": "code", "execution_count": 42, "id": "147b2c30-1662-4b49-aaf8-c228428e5cc6", "metadata": {}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: \u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\\nAI: \u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f'}"]}, "execution_count": 42, "metadata": {}, "output_type": "execute_result"}], "source": ["memory = ConversationBufferMemory()\n", "memory.save_context({\"input\": \"\u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\"}, {\"output\": \"\u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f\"})\n", "memory.load_memory_variables({})"]}, {"cell_type": "code", "execution_count": 43, "id": "6b5e27f0-dad7-41b6-9326-bebf6299638f", "metadata": {"height": 64}, "outputs": [], "source": ["memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})"]}, {"cell_type": "code", "execution_count": 44, "id": "cfa7c555-06ab-4906-b3dc-906f789e08f5", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: \u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\\nAI: \u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f\\nHuman: Not much, 
just hanging\\nAI: Cool'}"]}, "execution_count": 44, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.load_memory_variables({})"]}, {"cell_type": "code", "execution_count": 45, "id": "deb33de8-37ea-4180-a73e-0fc456b14eb0", "metadata": {}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: \u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\\nAI: \u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f\\nHuman: Not much, just hanging\\nAI: Cool\\nHuman: \u5f88\u9ad8\u5174\u548c\u4f60\u6210\u4e3a\u670b\u53cb\uff01\\nAI: \u662f\u7684\uff0c\u8ba9\u6211\u4eec\u4e00\u8d77\u53bb\u5192\u9669\u5427\uff01'}"]}, "execution_count": 45, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.save_context({\"input\": \"\u5f88\u9ad8\u5174\u548c\u4f60\u6210\u4e3a\u670b\u53cb\uff01\"}, {\"output\": \"\u662f\u7684\uff0c\u8ba9\u6211\u4eec\u4e00\u8d77\u53bb\u5192\u9669\u5427\uff01\"})\n", "memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "10146f1a-0114-4902-8122-d19ae6f7c461", "metadata": {}, "source": ["### 2.3 \u603b\u7ed3"]}, {"cell_type": "markdown", "id": "2759b6bc-edb2-4cfe-b0f8-1bf6c4d796f9", "metadata": {}, "source": 
["\u5f53\u6211\u4eec\u5728\u4f7f\u7528\u5927\u578b\u8bed\u8a00\u6a21\u578b\u8fdb\u884c\u804a\u5929\u5bf9\u8bdd\u65f6\uff0c**\u5927\u578b\u8bed\u8a00\u6a21\u578b\u672c\u8eab\u5b9e\u9645\u4e0a\u662f\u65e0\u72b6\u6001\u7684\u3002\u8bed\u8a00\u6a21\u578b\u672c\u8eab\u5e76\u4e0d\u8bb0\u5f97\u5230\u76ee\u524d\u4e3a\u6b62\u7684\u5386\u53f2\u5bf9\u8bdd**\u3002\u6bcf\u6b21\u8c03\u7528API\u7ed3\u70b9\u90fd\u662f\u72ec\u7acb\u7684\u3002\u50a8\u5b58(Memory)\u53ef\u4ee5\u50a8\u5b58\u5230\u76ee\u524d\u4e3a\u6b62\u7684\u6240\u6709\u672f\u8bed\u6216\u5bf9\u8bdd\uff0c\u5e76\u5c06\u5176\u8f93\u5165\u6216\u9644\u52a0\u4e0a\u4e0b\u6587\u5230LLM\u4e2d\u7528\u4e8e\u751f\u6210\u8f93\u51fa\u3002\u5982\u6b64\u770b\u8d77\u6765\u5c31\u597d\u50cf\u5b83\u5728\u8fdb\u884c\u4e0b\u4e00\u8f6e\u5bf9\u8bdd\u7684\u65f6\u5019\uff0c\u8bb0\u5f97\u4e4b\u524d\u8bf4\u8fc7\u4ec0\u4e48\u3002\n"]}, {"cell_type": "markdown", "id": "cf98e9ff", "metadata": {}, "source": ["## \u4e09\u3001\u5bf9\u8bdd\u7f13\u5b58\u7a97\u53e3\u50a8\u5b58\n", " \n", "\u968f\u7740\u5bf9\u8bdd\u53d8\u5f97\u8d8a\u6765\u8d8a\u957f\uff0c\u6240\u9700\u7684\u5185\u5b58\u91cf\u4e5f\u53d8\u5f97\u975e\u5e38\u957f\u3002\u5c06\u5927\u91cf\u7684tokens\u53d1\u9001\u5230LLM\u7684\u6210\u672c\uff0c\u4e5f\u4f1a\u53d8\u5f97\u66f4\u52a0\u6602\u8d35,\u8fd9\u4e5f\u5c31\u662f\u4e3a\u4ec0\u4e48API\u7684\u8c03\u7528\u8d39\u7528\uff0c\u901a\u5e38\u662f\u57fa\u4e8e\u5b83\u9700\u8981\u5904\u7406\u7684tokens\u6570\u91cf\u800c\u6536\u8d39\u7684\u3002\n", " \n", 
"\u9488\u5bf9\u4ee5\u4e0a\u95ee\u9898\uff0cLangChain\u4e5f\u63d0\u4f9b\u4e86\u51e0\u79cd\u65b9\u4fbf\u7684\u50a8\u5b58\u65b9\u5f0f\u6765\u4fdd\u5b58\u5386\u53f2\u5bf9\u8bdd\u3002\u5176\u4e2d\uff0c\u5bf9\u8bdd\u7f13\u5b58\u7a97\u53e3\u50a8\u5b58\u53ea\u4fdd\u7559\u4e00\u4e2a\u7a97\u53e3\u5927\u5c0f\u7684\u5bf9\u8bdd\u3002\u5b83\u53ea\u4f7f\u7528\u6700\u8fd1\u7684n\u6b21\u4ea4\u4e92\u3002\u8fd9\u53ef\u4ee5\u7528\u4e8e\u4fdd\u6301\u6700\u8fd1\u4ea4\u4e92\u7684\u6ed1\u52a8\u7a97\u53e3\uff0c\u4ee5\u4fbf\u7f13\u51b2\u533a\u4e0d\u4f1a\u8fc7\u5927"]}, {"cell_type": "markdown", "id": "b63c9061-9916-4524-b497-93a0aa2b7d06", "metadata": {}, "source": ["### 3.1 \u82f1\u6587\u7248"]}, {"cell_type": "markdown", "id": "641477a4", "metadata": {}, "source": ["#### 3.1.1 \u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58"]}, {"cell_type": "code", "execution_count": 65, "id": "3ea6233e", "metadata": {"height": 47}, "outputs": [], "source": ["from langchain.memory import ConversationBufferWindowMemory\n", "\n", "# k \u4e3a\u7a97\u53e3\u53c2\u6570\uff0ck=1\u8868\u660e\u53ea\u4fdd\u7559\u4e00\u4e2a\u5bf9\u8bdd\u8bb0\u5fc6\n", "memory = ConversationBufferWindowMemory(k=1) "]}, {"cell_type": "code", "execution_count": 66, "id": "dc4553fb", "metadata": {"height": 115}, "outputs": [], "source": ["# \u5411memory\u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\n", "memory.save_context({\"input\": \"Hi\"}, {\"output\": \"What's up\"})\n", "memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})"]}, {"cell_type": "code", "execution_count": 67, "id": "6a788403", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: Not much, just hanging\\nAI: Cool'}"]}, "execution_count": 67, "metadata": {}, "output_type": "execute_result"}], "source": ["# \u5e76\u67e5\u770b\u8bb0\u5fc6\u53d8\u91cf\u5f53\u524d\u7684\u8bb0\u5f55\n", "memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "63bda148", "metadata": {}, "source": ["#### 
3.1.2 \u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58"]}, {"cell_type": "code", "execution_count": 68, "id": "4087bc87", "metadata": {"height": 133}, "outputs": [], "source": ["llm = ChatOpenAI(temperature=0.0)\n", "memory = ConversationBufferWindowMemory(k=1)\n", "conversation = ConversationChain(llm=llm, memory=memory, verbose=False )"]}, {"cell_type": "markdown", "id": "b6d661e3", "metadata": {}, "source": ["\u6ce8\u610f\u6b64\u5904\uff01\u7531\u4e8e\u8fd9\u91cc\u7528\u7684\u662f\u4e00\u4e2a\u7a97\u53e3\u7684\u8bb0\u5fc6\uff0c\u56e0\u6b64\u53ea\u80fd\u4fdd\u5b58\u4e00\u8f6e\u7684\u5386\u53f2\u6d88\u606f\uff0c\u56e0\u6b64AI\u5e76\u4e0d\u80fd\u77e5\u9053\u4f60\u7b2c\u4e00\u8f6e\u5bf9\u8bdd\u4e2d\u63d0\u5230\u7684\u540d\u5b57\uff0c\u4ed6\u6700\u591a\u53ea\u80fd\u8bb0\u4f4f\u4e0a\u4e00\u8f6e\uff08\u7b2c\u4e8c\u8f6e\uff09\u7684\u5bf9\u8bdd\u4fe1\u606f"]}, {"cell_type": "code", "execution_count": 69, "id": "4faaa952", "metadata": {"height": 47}, "outputs": [{"data": {"text/plain": ["\"Hello Andrew! It's nice to meet you. 
How can I assist you today?\""]}, "execution_count": 69, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"Hi, my name is Andrew\")"]}, {"cell_type": "code", "execution_count": 70, "id": "bb20ddaa", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["'1+1 is equal to 2.'"]}, "execution_count": 70, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"What is 1+1?\")"]}, {"cell_type": "code", "execution_count": 71, "id": "489b2194", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["\"I'm sorry, but I don't have access to personal information.\""]}, "execution_count": 71, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"What is my name?\")"]}, {"cell_type": "markdown", "id": "88837e7c-cf4b-469e-b820-bbfc49ba876c", "metadata": {}, "source": ["### 3.2 \u4e2d\u6587\u7248"]}, {"cell_type": "markdown", "id": "760ec3ad-6959-4a36-b1b5-4fcafe8088ad", "metadata": {}, "source": ["#### 3.2.1 \u6dfb\u52a0\u4e24\u8f6e\u5bf9\u8bdd\u5230\u7a97\u53e3\u50a8\u5b58"]}, {"cell_type": "code", "execution_count": 72, "id": "68a2907c", "metadata": {}, "outputs": [{"data": {"text/plain": ["{'history': 'Human: \u5f88\u9ad8\u5174\u548c\u4f60\u6210\u4e3a\u670b\u53cb\uff01\\nAI: \u662f\u7684\uff0c\u8ba9\u6211\u4eec\u4e00\u8d77\u53bb\u5192\u9669\u5427\uff01'}"]}, "execution_count": 72, "metadata": {}, "output_type": "execute_result"}], "source": ["from langchain.memory import ConversationBufferWindowMemory\n", "\n", "# k=1\u8868\u660e\u53ea\u4fdd\u7559\u4e00\u4e2a\u5bf9\u8bdd\u8bb0\u5fc6\n", "memory = ConversationBufferWindowMemory(k=1) \n", "memory.save_context({\"input\": \"\u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\"}, {\"output\": \"\u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f\"})\n", "memory.save_context({\"input\": \"\u5f88\u9ad8\u5174\u548c\u4f60\u6210\u4e3a\u670b\u53cb\uff01\"}, {\"output\": 
\"\u662f\u7684\uff0c\u8ba9\u6211\u4eec\u4e00\u8d77\u53bb\u5192\u9669\u5427\uff01\"})\n", "memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "dcabf017-5bfd-4904-9f06-388f994eddc9", "metadata": {}, "source": ["#### 3.1.2 \u5728\u5bf9\u8bdd\u94fe\u4e2d\u5e94\u7528\u7a97\u53e3\u50a8\u5b58"]}, {"cell_type": "code", "execution_count": 74, "id": "1ee854d9", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\u4f60\u597d\uff0c\u76ae\u76ae\u9c81\uff01\u5f88\u9ad8\u5174\u8ba4\u8bc6\u4f60\u3002\u6211\u662f\u4e00\u4e2aAI\u52a9\u624b\uff0c\u53ef\u4ee5\u56de\u7b54\u4f60\u7684\u95ee\u9898\u548c\u63d0\u4f9b\u5e2e\u52a9\u3002\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f\n", "1+1\u7b49\u4e8e2\u3002\n", "\u5f88\u62b1\u6b49\uff0c\u6211\u65e0\u6cd5\u77e5\u9053\u60a8\u7684\u540d\u5b57\u3002\n"]}], "source": ["llm = ChatOpenAI(temperature=0.0)\n", "memory = ConversationBufferWindowMemory(k=1)\n", "conversation = ConversationChain(llm=llm, memory=memory, verbose=False )\n", "print(conversation.predict(input=\"\u4f60\u597d, \u6211\u53eb\u76ae\u76ae\u9c81\"))\n", "print(conversation.predict(input=\"1+1\u7b49\u4e8e\u591a\u5c11\uff1f\"))\n", "print(conversation.predict(input=\"\u6211\u53eb\u4ec0\u4e48\u540d\u5b57\uff1f\"))"]}, {"cell_type": "markdown", "id": "d2931b92", "metadata": {}, "source": ["## \u56db\u3001\u5bf9\u8bddtoken\u7f13\u5b58\u50a8\u5b58"]}, {"cell_type": "markdown", "id": "dff5b4c7", "metadata": {}, "source": ["\u4f7f\u7528\u5bf9\u8bddtoken\u7f13\u5b58\u8bb0\u5fc6\uff0c\u5185\u5b58\u5c06\u9650\u5236\u4fdd\u5b58\u7684token\u6570\u91cf\u3002\u5982\u679ctoken\u6570\u91cf\u8d85\u51fa\u6307\u5b9a\u6570\u76ee\uff0c\u5b83\u4f1a\u5207\u6389\u8fd9\u4e2a\u5bf9\u8bdd\u7684\u65e9\u671f\u90e8\u5206\n", "\u4ee5\u4fdd\u7559\u4e0e\u6700\u8fd1\u7684\u4ea4\u6d41\u76f8\u5bf9\u5e94\u7684token\u6570\u91cf\uff0c\u4f46\u4e0d\u8d85\u8fc7token\u9650\u5236\u3002"]}, {"cell_type": "code", "execution_count": 76, "id": "9f6d063c", 
"metadata": {"height": 31}, "outputs": [], "source": ["!pip install -q tiktoken "]}, {"cell_type": "code", "execution_count": 77, "id": "fb9020ed", "metadata": {"height": 81}, "outputs": [], "source": ["from langchain.llms import OpenAI\n", "from langchain.memory import ConversationTokenBufferMemory"]}, {"cell_type": "markdown", "id": "f3a84112", "metadata": {}, "source": ["### 4.1 \u82f1\u6587\u7248\n", "\u6dfb\u52a0\u5bf9\u8bdd\u5230Token\u7f13\u5b58\u50a8\u5b58,\u9650\u5236token\u6570\u91cf\uff0c\u8fdb\u884c\u6d4b\u8bd5"]}, {"cell_type": "code", "execution_count": 78, "id": "43582ee6", "metadata": {"height": 149}, "outputs": [], "source": ["memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)\n", "memory.save_context({\"input\": \"AI is what?!\"}, {\"output\": \"Amazing!\"})\n", "memory.save_context({\"input\": \"Backpropagation is what?\"}, {\"output\": \"Beautiful!\"})\n", "memory.save_context({\"input\": \"Chatbots are what?\"}, {\"output\": \"Charming!\"})"]}, {"cell_type": "code", "execution_count": 79, "id": "284288e1", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["{'history': 'AI: Beautiful!\\nHuman: Chatbots are what?\\nAI: Charming!'}"]}, "execution_count": 79, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "7b62b2e1", "metadata": {}, "source": ["\u53ef\u4ee5\u770b\u5230\u524d\u9762\u8d85\u51fa\u7684\u7684token\u5df2\u7ecf\u88ab\u820d\u5f03\u4e86\uff01\uff01\uff01"]}, {"cell_type": "markdown", "id": "f7f6be43", "metadata": {}, "source": ["### 4.2 \u4e2d\u6587\u7248"]}, {"cell_type": "code", "execution_count": 80, "id": "e9191020", "metadata": {}, "outputs": [{"data": {"text/plain": ["{'history': 'AI: \u8f7b\u821f\u5df2\u8fc7\u4e07\u91cd\u5c71\u3002'}"]}, "execution_count": 80, "metadata": {}, "output_type": "execute_result"}], "source": ["memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)\n", 
"memory.save_context({\"input\": \"\u671d\u8f9e\u767d\u5e1d\u5f69\u4e91\u95f4\uff0c\"}, {\"output\": \"\u5343\u91cc\u6c5f\u9675\u4e00\u65e5\u8fd8\u3002\"})\n", "memory.save_context({\"input\": \"\u4e24\u5cb8\u733f\u58f0\u557c\u4e0d\u4f4f\uff0c\"}, {\"output\": \"\u8f7b\u821f\u5df2\u8fc7\u4e07\u91cd\u5c71\u3002\"})\n", "memory.load_memory_variables({})"]}, {"cell_type": "markdown", "id": "fb08ef4a-876f-422a-81f9-4805288e5955", "metadata": {}, "source": ["### 4.3 \u8865\u5145"]}, {"cell_type": "markdown", "id": "5e4d918b", "metadata": {}, "source": ["ChatGPT\u4f7f\u7528\u4e00\u79cd\u57fa\u4e8e\u5b57\u8282\u5bf9\u7f16\u7801\uff08Byte Pair Encoding\uff0cBPE\uff09\u7684\u65b9\u6cd5\u6765\u8fdb\u884ctokenization\uff08\u5c06\u8f93\u5165\u6587\u672c\u62c6\u5206\u4e3atoken\uff09\u3002BPE\u662f\u4e00\u79cd\u5e38\u89c1\u7684tokenization\u6280\u672f\uff0c\u5b83\u5c06\u8f93\u5165\u6587\u672c\u5206\u5272\u6210\u8f83\u5c0f\u7684\u5b50\u8bcd\u5355\u5143\u3002 \n", "\n", "OpenAI\u5728\u5176\u5b98\u65b9GitHub\u4e0a\u516c\u5f00\u4e86\u4e00\u4e2a\u6700\u65b0\u7684\u5f00\u6e90Python\u5e93 [tiktoken](https://github.com/openai/tiktoken)\uff0c\u8fd9\u4e2a\u5e93\u4e3b\u8981\u662f\u7528\u6765\u8ba1\u7b97tokens\u6570\u91cf\u7684\u3002\u76f8\u6bd4\u8f83HuggingFace\u7684tokenizer\uff0c\u5176\u901f\u5ea6\u63d0\u5347\u4e86\u597d\u51e0\u500d\u3002\n", "\n", "\u5177\u4f53token\u8ba1\u7b97\u65b9\u5f0f,\u7279\u522b\u662f\u6c49\u5b57\u548c\u82f1\u6587\u5355\u8bcd\u7684token\u533a\u522b\uff0c\u5177\u4f53\u8bfe\u53c2\u8003[\u77e5\u4e4e\u6587\u7ae0](https://www.zhihu.com/question/594159910) \u3002"]}, {"cell_type": "markdown", "id": "5ff55d5d", "metadata": {}, "source": ["## \u4e94\u3001\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58"]}, {"cell_type": "markdown", "id": "7d39b83a", "metadata": {}, "source": ["\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\uff0c**\u4f7f\u7528LLM\u7f16\u5199\u5230\u76ee\u524d\u4e3a\u6b62\u5386\u53f2\u5bf9\u8bdd\u7684\u6458\u8981**\uff0c\u5e76\u5c06\u5176\u4fdd\u5b58"]}, 
{"cell_type": "code", "execution_count": 82, "id": "72dcf8b1", "metadata": {"height": 64}, "outputs": [], "source": ["from langchain.chains import ConversationChain\n", "from langchain.chat_models import ChatOpenAI\n", "from langchain.memory import ConversationSummaryBufferMemory"]}, {"cell_type": "markdown", "id": "243b213e-ce17-46a0-8652-03658ca58dd8", "metadata": {}, "source": ["### 5.1 \u82f1\u6587\u7248"]}, {"cell_type": "markdown", "id": "6572ef39", "metadata": {}, "source": ["#### 5.1.1 \u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\n", "\n", "\u521b\u5efa\u4e00\u4e2a\u957f\u5b57\u7b26\u4e32\uff0c\u5176\u4e2d\u5305\u542b\u67d0\u4eba\u7684\u65e5\u7a0b\u5b89\u6392"]}, {"cell_type": "code", "execution_count": 84, "id": "4a5b238f", "metadata": {"height": 285}, "outputs": [], "source": ["# \u521b\u5efa\u4e00\u4e2a\u957f\u5b57\u7b26\u4e32\n", "schedule = \"There is a meeting at 8am with your product team. \\\n", "You will need your PowerPoint presentation prepared. \\\n", "9am-12pm have time to work on your LangChain \\\n", "project which will go quickly because LangChain is such a powerful tool. \\\n", "At Noon, lunch at the Italian restaurant with a customer who is driving \\\n", "from over an hour away to meet you to understand the latest in AI. 
\\\n", "Be sure to bring your laptop to show the latest LLM demo.\"\n", "\n", "# \u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u8bb0\u5fc6\n", "llm = ChatOpenAI(temperature=0.0)\n", "memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100) \n", "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up\"})\n", "memory.save_context({\"input\": \"Not much, just hanging\"}, {\"output\": \"Cool\"})\n", "memory.save_context(\n", " {\"input\": \"What is on the schedule today?\"}, {\"output\": f\"{schedule}\"}\n", ")"]}, {"cell_type": "code", "execution_count": 89, "id": "15226a41-ab36-43a0-93f7-c03c6b374936", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting.\n"]}], "source": ["print(memory.load_memory_variables({})['history'])"]}, {"cell_type": "markdown", "id": "7ccb97b6", "metadata": {}, "source": ["#### 5.1.2 \u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe\n", "\u57fa\u4e8e\u4e0a\u9762\u7684\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\uff0c\u65b0\u5efa\u4e00\u4e2a\u5bf9\u8bdd\u94fe"]}, {"cell_type": "code", "execution_count": 90, "id": "6728edba", "metadata": {"height": 99}, "outputs": [], "source": ["conversation = ConversationChain(llm=llm, memory=memory, verbose=True)"]}, {"cell_type": "code", "execution_count": 91, "id": "9a221b1d", "metadata": {"height": 47}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. 
The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting.\n", "Human: What would be a good demo to show?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'A good demo to show during the lunch meeting with the customer interested in AI would be the latest LLM (Language Model) demo. The LLM is a cutting-edge AI model that can generate human-like text based on a given prompt. It has been trained on a vast amount of data and can generate coherent and contextually relevant responses. By showcasing the LLM demo, you can demonstrate the capabilities of AI in natural language processing and how it can be applied to various industries and use cases.'"]}, "execution_count": 91, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"What would be a good demo to show?\")"]}, {"cell_type": "code", "execution_count": 92, "id": "bb582617", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["System: The human and AI exchange greetings. The human asks about the schedule for the day. The AI provides a detailed schedule, including a meeting with the product team, work on the LangChain project, and a lunch meeting with a customer interested in AI. The AI emphasizes the importance of bringing a laptop to showcase the latest LLM demo during the lunch meeting. 
A good demo to show during the lunch meeting with the customer interested in AI would be the latest LLM (Language Model) demo. The LLM is a cutting-edge AI model that can generate human-like text based on a given prompt. It has been trained on a vast amount of data and can generate coherent and contextually relevant responses. By showcasing the LLM demo, you can demonstrate the capabilities of AI in natural language processing and how it can be applied to various industries and use cases.\n"]}], "source": ["print(memory.load_memory_variables({})['history'])"]}, {"cell_type": "markdown", "id": "4ba827aa", "metadata": {"height": 31}, "source": ["### 5.2 \u4e2d\u6587\u7248"]}, {"cell_type": "markdown", "id": "64898f33-c538-4e68-b008-7123870b692b", "metadata": {}, "source": ["#### 5.2.1 \u4f7f\u7528\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\n", "\n", "\u521b\u5efa\u4e00\u4e2a\u957f\u5b57\u7b26\u4e32\uff0c\u5176\u4e2d\u5305\u542b\u67d0\u4eba\u7684\u65e5\u7a0b\u5b89\u6392"]}, {"cell_type": "code", "execution_count": 97, "id": "2c07922b", "metadata": {"height": 31}, "outputs": [], "source": ["# \u521b\u5efa\u4e00\u4e2a\u957f\u5b57\u7b26\u4e32\n", "schedule = \"\u5728\u516b\u70b9\u4f60\u548c\u4f60\u7684\u4ea7\u54c1\u56e2\u961f\u6709\u4e00\u4e2a\u4f1a\u8bae\u3002 \\\n", "\u4f60\u9700\u8981\u505a\u4e00\u4e2aPPT\u3002 \\\n", "\u4e0a\u53489\u70b9\u523012\u70b9\u4f60\u9700\u8981\u5fd9\u4e8eLangChain\u3002\\\n", "LangChain\u662f\u4e00\u4e2a\u6709\u7528\u7684\u5de5\u5177\uff0c\u56e0\u6b64\u4f60\u7684\u9879\u76ee\u8fdb\u5c55\u5f97\u975e\u5e38\u5feb\u3002\\\n", "\u4e2d\u5348\uff0c\u5728\u610f\u5927\u5229\u9910\u5385\u4e0e\u4e00\u4f4d\u5f00\u8f66\u6765\u7684\u987e\u5ba2\u5171\u8fdb\u5348\u9910\uff0c\\\n", "\u4ed6\u8d70\u4e86\u4e00\u4e2a\u591a\u5c0f\u65f6\u7684\u8def\u7a0b\u4e0e\u4f60\u89c1\u9762\uff0c\u53ea\u4e3a\u4e86\u89e3\u6700\u65b0\u7684 AI\u3002 \\\n", "\u786e\u4fdd\u4f60\u5e26\u4e86\u7b14\u8bb0\u672c\u7535\u8111\u53ef\u4ee5\u5c55\u793a\u6700\u65b0\u7684 LLM \u6837\u4f8b\u3002\"\n", 
"\n", "llm = ChatOpenAI(temperature=0.0)\n", "memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)\n", "memory.save_context({\"input\": \"\u4f60\u597d\uff0c\u6211\u53eb\u76ae\u76ae\u9c81\"}, {\"output\": \"\u4f60\u597d\u554a\uff0c\u6211\u53eb\u9c81\u897f\u897f\"})\n", "memory.save_context({\"input\": \"\u5f88\u9ad8\u5174\u548c\u4f60\u6210\u4e3a\u670b\u53cb\uff01\"}, {\"output\": \"\u662f\u7684\uff0c\u8ba9\u6211\u4eec\u4e00\u8d77\u53bb\u5192\u9669\u5427\uff01\"})\n", "memory.save_context({\"input\": \"\u4eca\u5929\u7684\u65e5\u7a0b\u5b89\u6392\u662f\u4ec0\u4e48\uff1f\"}, {\"output\": f\"{schedule}\"})"]}, {"cell_type": "code", "execution_count": 98, "id": "17424a12-430f-4529-9067-300978c6169e", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. 
The AI advises the human to bring their laptop to showcase the latest LLM samples.\n"]}], "source": ["print(memory.load_memory_variables({})['history'])"]}, {"cell_type": "markdown", "id": "9e29a956-607a-4247-9eb5-01285a370991", "metadata": {}, "source": ["#### 5.2.2 \u57fa\u4e8e\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\u7684\u5bf9\u8bdd\u94fe\n", "\u57fa\u4e8e\u4e0a\u9762\u7684\u5bf9\u8bdd\u6458\u8981\u7f13\u5b58\u50a8\u5b58\uff0c\u65b0\u5efa\u4e00\u4e2a\u5bf9\u8bdd\u94fe"]}, {"cell_type": "code", "execution_count": 99, "id": "52696c8c", "metadata": {"height": 31}, "outputs": [], "source": ["conversation = ConversationChain(llm=llm, memory=memory, verbose=True)"]}, {"cell_type": "code", "execution_count": 100, "id": "48690d13", "metadata": {"height": 31}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["\n", "\n", "\u001b[1m> Entering new chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. 
The AI advises the human to bring their laptop to showcase the latest LLM samples.\n", "Human: \u5c55\u793a\u4ec0\u4e48\u6837\u7684\u6837\u4f8b\u6700\u597d\u5462\uff1f\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n"]}, {"data": {"text/plain": ["'\u5c55\u793a\u4e00\u4e9b\u5177\u6709\u591a\u6837\u6027\u548c\u521b\u65b0\u6027\u7684\u6837\u4f8b\u53ef\u80fd\u662f\u6700\u597d\u7684\u9009\u62e9\u3002\u4f60\u53ef\u4ee5\u9009\u62e9\u5c55\u793a\u4e00\u4e9b\u57fa\u4e8e\u56fe\u50cf\u8bc6\u522b\u7684\u6837\u4f8b\uff0c\u6bd4\u5982\u4eba\u8138\u8bc6\u522b\u3001\u7269\u4f53\u8bc6\u522b\u7b49\u3002\u53e6\u5916\uff0c\u4f60\u4e5f\u53ef\u4ee5\u5c55\u793a\u4e00\u4e9b\u81ea\u7136\u8bed\u8a00\u5904\u7406\u65b9\u9762\u7684\u6837\u4f8b\uff0c\u6bd4\u5982\u6587\u672c\u751f\u6210\u3001\u60c5\u611f\u5206\u6790\u7b49\u3002\u6700\u91cd\u8981\u7684\u662f\u9009\u62e9\u90a3\u4e9b\u80fd\u591f\u5c55\u793a\u51fa\u4f60\u4eec\u56e2\u961f\u7684\u6280\u672f\u5b9e\u529b\u548c\u521b\u9020\u529b\u7684\u6837\u4f8b\u3002'"]}, "execution_count": 100, "metadata": {}, "output_type": "execute_result"}], "source": ["conversation.predict(input=\"\u5c55\u793a\u4ec0\u4e48\u6837\u7684\u6837\u4f8b\u6700\u597d\u5462\uff1f\")"]}, {"cell_type": "code", "execution_count": 101, "id": "85bba1f8", "metadata": {"height": 31}, "outputs": [{"data": {"text/plain": ["{'history': \"System: The human introduces themselves as Pipilu and the AI introduces themselves as Luxixi. They express happiness at becoming friends and decide to go on an adventure together. The human asks about the schedule for the day. The AI informs them that they have a meeting with their product team at 8 o'clock and need to prepare a PowerPoint presentation. From 9 am to 12 pm, they will be busy with LangChain, a useful tool that helps their project progress quickly. At noon, they will have lunch with a customer who has driven for over an hour just to learn about the latest AI. 
The AI advises the human to bring their laptop to showcase the latest LLM samples. The human asks what kind of samples would be best to showcase. The AI suggests that showcasing diverse and innovative samples would be the best choice. They recommend selecting samples based on image recognition, such as face recognition and object recognition. Additionally, they suggest showcasing samples related to natural language processing, such as text generation and sentiment analysis. The AI emphasizes the importance of choosing samples that demonstrate the team's technical expertise and creativity.\"}"]}, "execution_count": 101, "metadata": {}, "output_type": "execute_result"}], "source": ["memory.load_memory_variables({}) # \u6458\u8981\u8bb0\u5f55\u66f4\u65b0\u4e86"]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12"}, "toc": {"base_numbering": 1, "nav_menu": {}, "number_sections": false, "sideBar": true, "skip_h1_title": false, "title_cell": "", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true}}, "nbformat": 4, "nbformat_minor": 5} \ No newline at end of file