diff --git a/docs/content/C1 Prompt Engineering for Developer/8. 聊天机器人 Chatbot.ipynb b/docs/content/C1 Prompt Engineering for Developer/8. 聊天机器人 Chatbot.ipynb new file mode 100644 index 0000000..b008f73 --- /dev/null +++ b/docs/content/C1 Prompt Engineering for Developer/8. 聊天机器人 Chatbot.ipynb @@ -0,0 +1,856 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "a9183228-0ba6-4af9-8430-649e28868253", + "metadata": { + "id": "JMXGlIvAwn30" + }, + "source": [ + "# 第八章 聊天机器人" + ] + }, + { + "cell_type": "markdown", + "id": "f0bdc2c9", + "metadata": {}, + "source": [ + "\n", + "大型语言模型的一个令人兴奋的应用是,我们只需很少的工作量,就能用它构建一个定制的聊天机器人 (Chatbot) 。在这一节中,我们将探索如何以聊天的方式,与个性化(或专门针对特定任务、特定行为)的聊天机器人进行多轮的深入对话。" + ] + }, + { + "cell_type": "markdown", + "id": "e6fae355", + "metadata": {}, + "source": [ + "像 ChatGPT 这样的聊天模型,实际上是以一系列消息作为输入,并返回一个由模型生成的消息作为输出。这种聊天格式原本是为了简化多轮对话而设计的,但通过之前的学习我们已经知道,它对于不涉及任何对话的**单轮任务**也同样有用。\n" + ] + }, + { + "cell_type": "markdown", + "id": "78344a7e", + "metadata": {}, + "source": [ + "## 一、给定身份" + ] + }, + { + "cell_type": "markdown", + "id": "2c9b885b", + "metadata": {}, + "source": [ + "接下来,我们将定义两个辅助函数。\n", + "\n", + "第一个函数已经陪伴了您一整个教程,即 ```get_completion``` ,其适用于单轮对话:我们将 Prompt 放入某种类似**用户消息**的对话框中。另一个称为 ```get_completion_from_messages``` ,传入一个消息列表。这些消息可以来自各种不同的**角色** (roles) ,下面我们会描述这些角色。\n", + "\n", + "第一条消息中,我们以系统身份发送系统消息 (system message) ,它提供了一个总体的指示。系统消息有助于设置助手的行为和角色,并作为对话的高层指示。你可以想象它在助手的耳边低语,引导它的回应,而用户不会注意到系统消息。因此,如果您曾经使用过 ChatGPT,您可能从来不知道 ChatGPT 的系统消息是什么,这是有意为之的。系统消息的好处是为开发者提供了一种方法,在不让请求本身成为对话的一部分的情况下,引导助手并指导其回应。\n", + "\n", + "在 ChatGPT 网页界面中,您的消息称为用户消息,而 ChatGPT 的消息称为助手消息。但在构建聊天机器人时,在发送了系统消息之后,您的角色可以仅作为用户 (user) ;也可以在用户和助手 (assistant) 之间交替,从而提供对话上下文。" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "f5308d65", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import openai\n", + "\n", + "# 下文第一个函数即tool工具包中的同名函数,此处展示出来以便于读者对比\n", + "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n", + "    messages = [{\"role\": \"user\", \"content\": prompt}]\n", + "    response = openai.ChatCompletion.create(\n", + "        model=model,\n", + "        messages=messages,\n", + "        temperature=0, # 控制模型输出的随机程度\n", + "    )\n", + "    return response.choices[0].message[\"content\"]\n", + "\n", + "def get_completion_from_messages(messages, model=\"gpt-3.5-turbo\", temperature=0):\n", + "    response = openai.ChatCompletion.create(\n", + "        model=model,\n", + "        messages=messages,\n", + "        temperature=temperature, # 控制模型输出的随机程度\n", + "    )\n", + "# print(str(response.choices[0].message))\n", + "    return response.choices[0].message[\"content\"]" + ] + }, + { + "cell_type": "markdown", + "id": "46caaa5b", + "metadata": {}, + "source": [ + "现在让我们尝试在对话中使用这些消息。我们将使用上面的函数,基于这组消息获取模型的回答;同时使用更高的温度 (temperature)(温度越高,生成的结果越多样,更多内容见第七章)。\n" + ] + }, + { + "cell_type": "markdown", + "id": "e105c1b4", + "metadata": {}, + "source": [ + "### 1.1 讲笑话\n", + "\n", + "系统消息说,你是一个说话像莎士比亚的助手。这是我们向助手描述**它应该如何表现**的方式。然后,第一条用户消息是*给我讲个笑话*。接着以助手身份给出的回复是*鸡为什么过马路?* 最后发送的用户消息是*我不知道*。" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "02b0e4d3", + "metadata": {}, + "outputs": [], + "source": [ + "# 中文\n", + "messages = [ \n", + "{'role':'system', 'content':'你是一个像莎士比亚一样说话的助手。'}, \n", + "{'role':'user', 'content':'给我讲个笑话'}, \n", + "{'role':'assistant', 'content':'鸡为什么过马路'}, \n", + "{'role':'user', 'content':'我不知道'} ]" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "65f80283", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ 
+ "为了到达彼岸,去追求自己的夢想! 有点儿像一个戏剧里面的人物吧,不是吗?\n" + ] + } + ], + "source": [ + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "7f51a7e0", + "metadata": {}, + "source": [ + "(注:上述例子中由于选定 temperature = 1,模型的回答会比较随机且迥异(不乏很有创意)。此处附上另一个回答:\n", + "\n", + "让我用一首莎士比亚式的诗歌来回答你的问题:\n", + "\n", + "当鸡之心欲往前,\n", + "马路之际是其选择。\n", + "驱车徐行而天晴,\n", + "鸣笛吹响伴交错。\n", + "\n", + "问之何去何从也?\n", + "因大道之上未有征,\n", + "而鸡乃跃步前进,\n", + "其决策毋需犹豫。\n", + "\n", + "鸡之智慧何可言,\n", + "道路孤独似乌漆。\n", + "然其勇气令人叹,\n", + "勇往直前没有退。\n", + "\n", + "故鸡过马路何解?\n", + "忍受车流喧嚣之困厄。\n", + "因其鸣鸣悍然一跃,\n", + "成就夸夸骄人壁画。\n", + "\n", + "所以笑话之妙处,\n", + "伴随鸡之勇气满溢。\n", + "笑谈人生不畏路,\n", + "有智有勇尽显妙。\n", + "\n", + "希望这个莎士比亚风格的回答给你带来一些欢乐!" + ] + }, + { + "cell_type": "markdown", + "id": "852b8989", + "metadata": {}, + "source": [ + "### 1.2 友好的聊天机器人" + ] + }, + { + "cell_type": "markdown", + "id": "5f76bedb", + "metadata": {}, + "source": [ + "让我们看另一个例子。助手的消息是*你是一个友好的聊天机器人*,第一个用户消息是*嗨,我叫Isa*。我们想要得到第一个用户消息。" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "ca517ab0", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "嗨,Isa,很高兴见到你!有什么我可以帮助你的吗?\n" + ] + } + ], + "source": [ + "# 中文\n", + "messages = [ \n", + "{'role':'system', 'content':'你是个友好的聊天机器人。'}, \n", + "{'role':'user', 'content':'Hi, 我是Isa。'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "1dd6c5f8", + "metadata": {}, + "source": [ + "## 二、构建上下文" + ] + }, + { + "cell_type": "markdown", + "id": "1e9f96ba", + "metadata": {}, + "source": [ + "让我们再试一个例子。系统消息是,你是一个友好的聊天机器人,第一个用户消息是,是的,你能提醒我我的名字是什么吗?" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "a606d422", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "抱歉,我不知道您的名字,因为我们是虚拟的聊天机器人和现实生活中的人类在不同的世界中。\n" + ] + } + ], + "source": [ + "# 中文\n", + "messages = [ \n", + "{'role':'system', 'content':'你是个友好的聊天机器人。'}, \n", + "{'role':'user', 'content':'好,你能提醒我,我的名字是什么吗?'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "05c65d16", + "metadata": {}, + "source": [ + "如上所见,模型实际上并不知道我的名字。\n", + "\n", + "因此,每次与语言模型的交互都互相独立,这意味着我们必须提供所有相关的消息,以便模型在当前对话中进行引用。如果想让模型引用或 “记住” 对话的早期部分,则必须在模型的输入中提供早期的交流。我们将其称为上下文 (context) 。尝试以下示例。" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "6019b1d5", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "当然可以!您的名字是Isa。\n" + ] + } + ], + "source": [ + "# 中文\n", + "messages = [ \n", + "{'role':'system', 'content':'你是个友好的聊天机器人。'},\n", + "{'role':'user', 'content':'Hi, 我是Isa'},\n", + "{'role':'assistant', 'content': \"Hi Isa! 
很高兴认识你。今天有什么可以帮到你的吗?\"},\n", + "{'role':'user', 'content':'是的,你可以提醒我, 我的名字是什么?'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "c1ed90a6", + "metadata": {}, + "source": [ + "现在我们已经给模型提供了上下文,也就是之前的对话中提到的我的名字,然后我们会问同样的问题,也就是我的名字是什么。因为模型有了需要的全部上下文,所以它能够做出回应,就像我们在输入的消息列表中看到的一样。" + ] + }, + { + "cell_type": "markdown", + "id": "dedba66a-58b0-40d4-b9ae-47e79ae22328", + "metadata": { + "id": "bBg_MpXeYnTq" + }, + "source": [ + "## 三、订餐机器人\n", + "\n", + "现在,我们构建一个 “订餐机器人”,我们需要它自动收集用户信息,接受比萨饼店的订单。\n", + "\n", + "### 3.1 构建机器人\n", + "\n", + "下面这个函数将收集我们的用户消息,以便我们可以避免像刚才一样手动输入。这个函数将从我们下面构建的用户界面中收集 Prompt ,然后将其附加到一个名为上下文( ```context``` )的列表中,并在每次调用模型时使用该上下文。模型的响应也会添加到上下文中,所以用户消息和模型消息都被添加到上下文中,上下文逐渐变长。这样,模型就有了需要的信息来确定下一步要做什么。" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "id": "e76749ac", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "def collect_messages(_):\n", + " prompt = inp.value_input\n", + " inp.value = ''\n", + " context.append({'role':'user', 'content':f\"{prompt}\"})\n", + " response = get_completion_from_messages(context) \n", + " context.append({'role':'assistant', 'content':f\"{response}\"})\n", + " panels.append(\n", + " pn.Row('User:', pn.pane.Markdown(prompt, width=600)))\n", + " panels.append(\n", + " pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'})))\n", + " \n", + " return pn.Column(*panels)" + ] + }, + { + "cell_type": "markdown", + "id": "8a3b003e", + "metadata": {}, + "source": [ + "现在,我们将设置并运行这个 UI 来显示订单机器人。初始的上下文包含了包含菜单的系统消息,在每次调用时都会使用。此后随着对话进行,上下文也会不断增长。" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d9f97fa0", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install panel" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fdf1731b", + "metadata": {}, + "outputs": [], + "source": [ + "# 中文\n", + "import panel as pn # GUI\n", + "pn.extension()\n", + "\n", + "panels = [] # collect display \n", + "\n", + "context = [{'role':'system', 'content':\"\"\"\n", + "你是订餐机器人,为披萨餐厅自动收集订单信息。\n", + "你要首先问候顾客。然后等待用户回复收集订单信息。收集完信息需确认顾客是否还需要添加其他内容。\n", + "最后需要询问是否自取或外送,如果是外送,你要询问地址。\n", + "最后告诉顾客订单总金额,并送上祝福。\n", + "\n", + "请确保明确所有选项、附加项和尺寸,以便从菜单中识别出该项唯一的内容。\n", + "你的回应应该以简短、非常随意和友好的风格呈现。\n", + "\n", + "菜单包括:\n", + "\n", + "菜品:\n", + "意式辣香肠披萨(大、中、小) 12.95、10.00、7.00\n", + "芝士披萨(大、中、小) 10.95、9.25、6.50\n", + "茄子披萨(大、中、小) 11.95、9.75、6.75\n", + "薯条(大、小) 4.50、3.50\n", + "希腊沙拉 7.25\n", + "\n", + "配料:\n", + "奶酪 2.00\n", + "蘑菇 1.50\n", + "香肠 3.00\n", + "加拿大熏肉 3.50\n", + "AI酱 1.50\n", + "辣椒 1.00\n", + "\n", + "饮料:\n", + "可乐(大、中、小) 3.00、2.00、1.00\n", + "雪碧(大、中、小) 3.00、2.00、1.00\n", + "瓶装水 5.00\n", + "\"\"\"} ] # accumulate messages\n", + "\n", + "\n", + "inp = pn.widgets.TextInput(value=\"Hi\", placeholder='Enter text here…')\n", + "button_conversation = pn.widgets.Button(name=\"Chat!\")\n", + "\n", + "interactive_conversation = pn.bind(collect_messages, button_conversation)\n", + "\n", + "dashboard = pn.Column(\n", + " inp,\n", + " pn.Row(button_conversation),\n", + " pn.panel(interactive_conversation, loading_indicator=True, height=300),\n", + ")\n", + "\n", + "dashboard" + ] + }, + { + "cell_type": "markdown", + "id": "07d29d10", + "metadata": {}, + "source": [ + "运行如上代码可以得到一个点餐机器人,下图展示了一个点餐的完整流程:\n", + "\n", + "![image.png](../../../figures/docs/C1/Chatbot-pizza-cn.png)" + ] + }, + { + "cell_type": "markdown", + "id": "668ea96d", + "metadata": {}, + "source": [ + 
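"上面的订餐机器人界面依赖 panel 库。如果只想在命令行里验证同样的“上下文不断累积”的逻辑,也可以参考下面这个简化的循环(仅为示意,假设前文的 context 列表与 get_completion_from_messages 函数均已定义;输入 exit 即可退出):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "console-chat-sketch", + "metadata": {}, + "outputs": [], + "source": [ + "# 简化的命令行聊天循环(仅为示意):同样将用户消息和助手消息不断追加到 context 中\n", + "while True:\n", + "    user_input = input('User: ')\n", + "    if user_input.strip().lower() == 'exit':\n", + "        break\n", + "    context.append({'role':'user', 'content':user_input})\n", + "    response = get_completion_from_messages(context)\n", + "    context.append({'role':'assistant', 'content':response})\n", + "    print('Assistant:', response)\n" + ] + }, + { + "cell_type": "markdown", + "id": "json-summary-heading", + "metadata": {}, + "source": [ + 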
"### 3.2 创建JSON摘要" + ] + }, + { + "cell_type": "markdown", + "id": "2a2c9822", + "metadata": {}, + "source": [ + "此处我们另外要求模型创建一个 JSON 摘要,方便我们发送给订单系统。\n", + "\n", + "因此我们需要在上下文的基础上追加另一个系统消息,作为另一条指示 (instruction) 。我们说*创建一个刚刚订单的 JSON 摘要,列出每个项目的价格,字段应包括 1)披萨,包括尺寸,2)配料列表,3)饮料列表,4)辅菜列表,包括尺寸,最后是总价格*。此处也可以定义为用户消息,不一定是系统消息。\n", + "\n", + "请注意,这里我们使用了一个较低的温度,因为对于这些类型的任务,我们希望输出相对可预测。" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "id": "c840ff56", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"披萨\": {\n", + " \"意式辣香肠披萨\": {\n", + " \"大\": 12.95,\n", + " \"中\": 10.00,\n", + " \"小\": 7.00\n", + " },\n", + " \"芝士披萨\": {\n", + " \"大\": 10.95,\n", + " \"中\": 9.25,\n", + " \"小\": 6.50\n", + " },\n", + " \"茄子披萨\": {\n", + " \"大\": 11.95,\n", + " \"中\": 9.75,\n", + " \"小\": 6.75\n", + " }\n", + " },\n", + " \"配料\": {\n", + " \"奶酪\": 2.00,\n", + " \"蘑菇\": 1.50,\n", + " \"香肠\": 3.00,\n", + " \"加拿大熏肉\": 3.50,\n", + " \"AI酱\": 1.50,\n", + " \"辣椒\": 1.00\n", + " },\n", + " \"饮料\": {\n", + " \"可乐\": {\n", + " \"大\": 3.00,\n", + " \"中\": 2.00,\n", + " \"小\": 1.00\n", + " },\n", + " \"雪碧\": {\n", + " \"大\": 3.00,\n", + " \"中\": 2.00,\n", + " \"小\": 1.00\n", + " },\n", + " \"瓶装水\": 5.00\n", + " }\n", + "}\n" + ] + } + ], + "source": [ + "messages = context.copy()\n", + "messages.append(\n", + "{'role':'system', 'content':\n", + "'''创建上一个食品订单的 json 摘要。\\\n", + "逐项列出每件商品的价格,字段应该是 1) 披萨,包括大小 2) 配料列表 3) 饮料列表,包括大小 4) 配菜列表包括大小 5) 总价\n", + "你应该给我返回一个可解析的Json对象,包括上述字段'''}, \n", + ")\n", + "\n", + "response = get_completion_from_messages(messages, temperature=0)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "ef17c2b2", + "metadata": {}, + "source": [ + "现在,我们已经建立了自己的订餐聊天机器人。请随意自定义并修改系统消息,以更改聊天机器人的行为,并使其扮演不同的角色,拥有不同的知识。" + ] + }, + { + "cell_type": "markdown", + "id": "2764c8a0", + "metadata": {}, + "source": [ + "## 三、英文版" + ] + }, + { + "cell_type": "markdown", + "id": "123f2066", + "metadata": {}, + "source": [ + "**1.1 讲笑话**" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "c9dff513", + "metadata": {}, + "outputs": [], + "source": [ + "messages = [ \n", + "{'role':'system', 'content':'You are an assistant that speaks like Shakespeare.'}, \n", + "{'role':'user', 'content':'tell me a joke'}, \n", + "{'role':'assistant', 'content':'Why did the chicken cross the road'}, \n", + "{'role':'user', 'content':'I don\\'t know'} ]" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "381e14c1", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "To get to the other side, methinks!\n" + ] + } + ], + "source": [ + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "028656a1", + "metadata": {}, + "source": [ + "**1.2 友好的聊天机器人**" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "8205c007", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Hello Isa! 
How can I assist you today?\n" + ] + } + ], + "source": [ + "messages = [ \n", + "{'role':'system', 'content':'You are friendly chatbot.'}, \n", + "{'role':'user', 'content':'Hi, my name is Isa'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "81f0d22d", + "metadata": {}, + "source": [ + "**2.1 构建上下文**" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "97296cdd", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "I'm sorry, but as a chatbot, I do not have access to personal information or memory. I cannot remind you of your name.\n" + ] + } + ], + "source": [ + "messages = [ \n", + "{'role':'system', 'content':'You are friendly chatbot.'}, \n", + "{'role':'user', 'content':'Yes, can you remind me, What is my name?'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "5ab959d0", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Your name is Isa! How can I assist you further, Isa?\n" + ] + } + ], + "source": [ + "messages = [ \n", + "{'role':'system', 'content':'You are friendly chatbot.'},\n", + "{'role':'user', 'content':'Hi, my name is Isa'},\n", + "{'role':'assistant', 'content': \"Hi Isa! It's nice to meet you. \\\n", + "Is there anything I can help you with today?\"},\n", + "{'role':'user', 'content':'Yes, you can remind me, What is my name?'} ]\n", + "response = get_completion_from_messages(messages, temperature=1)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "a93897fc", + "metadata": {}, + "source": [ + "**3.1 构建机器人**" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "9d93bc09", + "metadata": {}, + "outputs": [], + "source": [ + "def collect_messages(_):\n", + " prompt = inp.value_input\n", + " inp.value = ''\n", + " context.append({'role':'user', 'content':f\"{prompt}\"})\n", + " response = get_completion_from_messages(context) \n", + " context.append({'role':'assistant', 'content':f\"{response}\"})\n", + " panels.append(\n", + " pn.Row('User:', pn.pane.Markdown(prompt, width=600)))\n", + " panels.append(\n", + " pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'})))\n", + " \n", + " return pn.Column(*panels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8138c4ac", + "metadata": {}, + "outputs": [], + "source": [ + "import panel as pn # GUI\n", + "pn.extension()\n", + "\n", + "panels = [] # collect display \n", + "\n", + "context = [ {'role':'system', 'content':\"\"\"\n", + "You are OrderBot, an automated service to collect orders for a pizza restaurant. \\\n", + "You first greet the customer, then collects the order, \\\n", + "and then asks if it's a pickup or delivery. \\\n", + "You wait to collect the entire order, then summarize it and check for a final \\\n", + "time if the customer wants to add anything else. \\\n", + "If it's a delivery, you ask for an address. \\\n", + "Finally you collect the payment.\\\n", + "Make sure to clarify all options, extras and sizes to uniquely \\\n", + "identify the item from the menu.\\\n", + "You respond in a short, very conversational friendly style. 
\\\n", + "The menu includes \\\n", + "pepperoni pizza 12.95, 10.00, 7.00 \\\n", + "cheese pizza 10.95, 9.25, 6.50 \\\n", + "eggplant pizza 11.95, 9.75, 6.75 \\\n", + "fries 4.50, 3.50 \\\n", + "greek salad 7.25 \\\n", + "Toppings: \\\n", + "extra cheese 2.00, \\\n", + "mushrooms 1.50 \\\n", + "sausage 3.00 \\\n", + "canadian bacon 3.50 \\\n", + "AI sauce 1.50 \\\n", + "peppers 1.00 \\\n", + "Drinks: \\\n", + "coke 3.00, 2.00, 1.00 \\\n", + "sprite 3.00, 2.00, 1.00 \\\n", + "bottled water 5.00 \\\n", + "\"\"\"} ] # accumulate messages\n", + "\n", + "\n", + "inp = pn.widgets.TextInput(value=\"Hi\", placeholder='Enter text here…')\n", + "button_conversation = pn.widgets.Button(name=\"Chat!\")\n", + "\n", + "interactive_conversation = pn.bind(collect_messages, button_conversation)\n", + "\n", + "dashboard = pn.Column(\n", + " inp,\n", + " pn.Row(button_conversation),\n", + " pn.panel(interactive_conversation, loading_indicator=True, height=300),\n", + ")\n", + "\n", + "dashboard" + ] + }, + { + "cell_type": "markdown", + "id": "93944944", + "metadata": {}, + "source": [ + "**3.2 创建Json摘要**" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "b779dd04", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sure! Here's a JSON summary of your food order:\n", + "\n", + "{\n", + " \"pizza\": {\n", + " \"type\": \"pepperoni\",\n", + " \"size\": \"large\"\n", + " },\n", + " \"toppings\": [\n", + " \"extra cheese\",\n", + " \"mushrooms\"\n", + " ],\n", + " \"drinks\": [\n", + " {\n", + " \"type\": \"coke\",\n", + " \"size\": \"medium\"\n", + " },\n", + " {\n", + " \"type\": \"sprite\",\n", + " \"size\": \"small\"\n", + " }\n", + " ],\n", + " \"sides\": [\n", + " {\n", + " \"type\": \"fries\",\n", + " \"size\": \"regular\"\n", + " }\n", + " ],\n", + " \"total_price\": 29.45\n", + "}\n", + "\n", + "Please let me know if there's anything else you'd like to add or modify.\n" + ] + } + ], + "source": [ + "messages = context.copy()\n", + "messages.append(\n", + "{'role':'system', 'content':'create a json summary of the previous food order. 
Itemize the price for each item\\\n", + " The fields should be 1) pizza, include size 2) list of toppings 3) list of drinks, include size 4) list of sides include size 5)total price '}, \n", + ")\n", + "response = get_completion_from_messages(messages, temperature=0)\n", + "print(response)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.11" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "Ctrl-E", + "itemize": "Ctrl-I" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "277px" + }, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/figures/docs/C1/Chatbot-pizza-cn.png b/figures/docs/C1/Chatbot-pizza-cn.png new file mode 100644 index 0000000..54807eb Binary files /dev/null and b/figures/docs/C1/Chatbot-pizza-cn.png differ