diff --git a/docs/content/C1 Prompt Engineering for Developer/5. 推断 Inferring.ipynb b/docs/content/C1 Prompt Engineering for Developer/5. 推断 Inferring.ipynb new file mode 100644 index 0000000..f0c549f --- /dev/null +++ b/docs/content/C1 Prompt Engineering for Developer/5. 推断 Inferring.ipynb @@ -0,0 +1,1013 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "3630c235-f891-4874-bd0a-5277d4d6aa82", + "metadata": {}, + "source": [ + "# 第五章 推断\n", + "\n", + "在这节课中,你将从产品评论和新闻文章中推断情感和主题。\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "5f3abbee", + "metadata": {}, + "source": [ + "\n", + "推断任务可以看作是模型接收文本作为输入,并执行某种分析的过程。其中涉及提取标签、提取实体、理解文本情感等等。如果你想要从一段文本中提取正面或负面情感,在传统的机器学习工作流程中,需要收集标签数据集、训练模型、确定如何在云端部署模型并进行推断。这样做可能效果还不错,但是执行全流程需要很多工作。而且对于每个任务,如情感分析、提取实体等等,都需要训练和部署单独的模型。\n", + "\n", + "LLM 的一个非常好的特点是,对于许多这样的任务,你只需要编写一个 Prompt 即可开始产出结果,而不需要进行大量的工作。这极大地加快了应用程序开发的速度。你还可以只使用一个模型和一个 API 来执行许多不同的任务,而不需要弄清楚如何训练和部署许多不同的模型。" + ] + }, + { + "cell_type": "markdown", + "id": "51d2fdfa-c99f-4750-8574-dba7712cd7f0", + "metadata": {}, + "source": [ + "## 一、情感推断\n", + "\n", + "### 1.1 情感倾向分析\n", + "\n", + "以电商平台关于一盏台灯的评论为例,可以对其传达的情感进行二分类(正向/负向)。" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "bc6260f0", + "metadata": {}, + "outputs": [], + "source": [ + "lamp_review = \"\"\"\n", + "我需要一盏漂亮的卧室灯,这款灯具有额外的储物功能,价格也不算太高。\\\n", + "我很快就收到了它。在运输过程中,我们的灯绳断了,但是公司很乐意寄送了一个新的。\\\n", + "几天后就收到了。这款灯很容易组装。我发现少了一个零件,于是联系了他们的客服,他们很快就给我寄来了缺失的零件!\\\n", + "在我看来,Lumina 是一家非常关心顾客和产品的优秀公司!\n", + "\"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "cc4ec4ca", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "30d6e4bd-3337-45a3-8c99-a734cdd06743", + "metadata": {}, + "source": [ + "现在让我们来编写一个 Prompt 来分类这个评论的情感。如果我想让系统告诉我这个评论的情感是什么,只需要编写 “以下产品评论的情感是什么” 这个 Prompt ,加上通常的分隔符和评论文本等等。\n", + "\n", + "然后让我们运行一下。结果显示这个产品评论的情感是积极的,这个判断看起来是正确的。虽然这盏台灯不完美,但这位客户显然非常满意。Lumina 看起来是一家关心客户和产品的优秀公司,因此「积极」是正确的答案。" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "ac5b0bb9", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "情感是积极的。\n" + ] + } + ], + "source": [ + "from tool import get_completion\n", + "\n", + "prompt = f\"\"\"\n", + "以下用三个反引号分隔的产品评论的情感是什么?\n", + "\n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "a562e656", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "76be2320", + "metadata": {}, + "source": [ + "如果你想要给出更简洁的答案,以便更容易进行后处理,可以在上述 Prompt 基础上添加另一个指令:*用一个单词回答:「正面」或「负面」*。这样就只会打印出 “正面” 这个单词,这使得输出更加统一,方便后续处理。" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "84a761b3", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "正面\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "以下用三个反引号分隔的产品评论的情感是什么?\n", + "\n", + "用一个单词回答:「正面」或「负面」。\n", + "\n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + },
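+ { + "cell_type": "markdown", + "id": "b7a1c001", + "metadata": {}, + "source": [ + "下面补充一个简单的后处理示例(非课程原有代码,仅作演示):假设 response 严格按照指令只返回「正面」或「负面」其中之一,就可以直接用字符串比较把它映射为布尔值,便于后续的统计或筛选。" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b7a1c002", + "metadata": {}, + "outputs": [], + "source": [ + "# 演示:把单词形式的情感标签映射为布尔值(假设 response 只会是「正面」或「负面」)\n", + "is_positive = response.strip() == \"正面\"\n", + "print(is_positive)" + ] + },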
+ { + "cell_type": "markdown", + "id": "81d2a973-1fa4-4a35-ae35-a2e746c0e91b", + "metadata": {}, + "source": [ + "### 1.2 识别情感类型\n", + "\n", + "仍然使用台灯评论,我们尝试另一个 Prompt 。这次我需要模型识别出评论作者所表达的情感,并归纳为列表,不超过五项。" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "e615c13a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "满意,感激,赞赏,信任,满足\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "识别以下评论的作者表达的情感。包含不超过五个项目。将答案格式化为以逗号分隔的单词列表。\n", + "\n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "c7743a53", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "cc4444f7", + "metadata": {}, + "source": [ + "大型语言模型非常擅长从一段文本中提取特定的东西。在上面的例子中,评论所表达的情感有助于了解客户如何看待特定的产品。" + ] + }, + { + "cell_type": "markdown", + "id": "a428d093-51c9-461c-b41e-114e80876409", + "metadata": {}, + "source": [ + "### 1.3 识别愤怒\n", + "\n", + "对于很多企业来说,了解某个顾客是否非常生气很重要。所以产生了下述分类问题:以下评论的作者是否表达了愤怒情绪?因为如果有人真的很生气,那么可能值得额外关注,让客户支持或客户成功团队联系客户以了解情况,并为客户解决问题。" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "85bad324", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "否\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "以下评论的作者是否表达了愤怒?评论用三个反引号分隔。给出是或否的答案。\n", + "\n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "77905fd8", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "11ca57a2", + "metadata": {}, + "source": [ + "上面这个例子中,客户并没有生气。注意,如果使用常规的监督学习方法来构建所有这些分类器,是不可能在几分钟内完成的。我们鼓励大家尝试更改一些这样的 Prompt ,也许询问客户是否表达了喜悦,或者询问是否有任何遗漏的部分,看看是否可以让 Prompt 对这个灯具评论做出不同的推论。" + ] + }, + { + "cell_type": "markdown", + "id": "936a771e-ca78-4e55-8088-2da6f3820ddc", + "metadata": {}, + "source": [ + "## 二、信息提取\n", + "\n", + "### 2.1 商品信息提取\n", + "\n", + "接下来,让我们从客户评论中提取更丰富的信息。信息提取是自然语言处理(NLP)的一部分,指的是从文本中提取你想要知道的特定信息。在这个 Prompt 中,我要求模型识别两项内容:评论者购买的物品,以及制造该物品的公司名称。\n", + "\n", + "同样,如果你试图总结电商网站上的大量评论,弄清楚每条评论涉及的物品是什么、由谁制造,再结合正面和负面情感,就有助于追踪特定物品或制造商的用户情感趋势。\n", + "\n", + "在下面这个示例中,我们要求模型将响应格式化为一个 JSON 对象,其中物品和品牌作为键。" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "e9ffe056", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"物品\": \"卧室灯\",\n", + " \"品牌\": \"Lumina\"\n", + "}\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "从评论文本中识别以下项目:\n", + "- 评论者购买的物品\n", + "- 制造该物品的公司\n", + "\n", + "评论文本用三个反引号分隔。将你的响应格式化为以 “物品” 和 “品牌” 为键的 JSON 对象。\n", + "如果信息不存在,请使用 “未知” 作为值。\n", + "让你的回应尽可能简短。\n", + " \n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "1342c732", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "954d125d", + "metadata": {}, + "source": [ + "如上所示,它会说这个物品是一个卧室灯,品牌是 Lumina。你可以轻松地将其加载到 Python 字典中,然后对此输出进行其他处理。" + ] + },
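+ { + "cell_type": "markdown", + "id": "b7a1c003", + "metadata": {}, + "source": [ + "下面补充一个简单的解析示例(非课程原有代码,仅作演示):假设模型确实按要求返回了合法的 JSON 字符串,就可以用标准库 json 把 response 解析成 Python 字典,再按键取值做后续处理。" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b7a1c004", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "\n", + "# 演示:把模型返回的 JSON 字符串解析为 Python 字典(假设 response 是合法 JSON)\n", + "review_info = json.loads(response)\n", + "print(review_info[\"物品\"], review_info[\"品牌\"])" + ] + },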
+ { + "cell_type": "markdown", + "id": "a38880a5-088f-4609-9913-f8fa41fb7ba0", + "metadata": {}, + "source": [ + "### 2.2 综合情感推断和信息提取\n", + "\n", + "提取上述所有信息使用了 3 或 4 个 Prompt ,但实际上可以编写单个 Prompt 来同时提取所有这些信息。" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "939c2b0e", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"情感倾向\": \"正面\",\n", + " \"是否生气\": false,\n", + " \"物品类型\": \"卧室灯\",\n", + " \"品牌\": \"Lumina\"\n", + "}\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "从评论文本中识别以下项目:\n", + "- 情绪(正面或负面)\n", + "- 审稿人是否表达了愤怒?(是或否)\n", + "- 评论者购买的物品\n", + "- 制造该物品的公司\n", + "\n", + "评论用三个反引号分隔。将您的响应格式化为 JSON 对象,以 “情感倾向”、“是否生气”、“物品类型” 和 “品牌” 作为键。\n", + "如果信息不存在,请使用 “未知” 作为值。\n", + "让你的回应尽可能简短。\n", + "将 Anger 值格式化为布尔值。\n", + "\n", + "评论文本: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "5e09a673", + "metadata": {}, + "source": [ + "在这个例子中,我们告诉它将愤怒值格式化为布尔值,然后输出一个 JSON 对象。你可以自己尝试不同的变化,甚至尝试完全不同的评论,看看是否仍然可以准确地提取这些内容。" + ] + }, + { + "cell_type": "markdown", + "id": "235fc223-2c89-49ec-ac2d-78a8e74a43ac", + "metadata": {}, + "source": [ + "## 三、主题推断\n", + "\n", + "大型语言模型的另一个很酷的应用是推断主题。给定一段长文本,这段文本是关于什么的?涉及哪些话题?下面以一段虚构的报纸报道为例。" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "811ff13f", + "metadata": {}, + "outputs": [], + "source": [ + "# 中文\n", + "story = \"\"\"\n", + "在政府最近进行的一项调查中,要求公共部门的员工对他们所在部门的满意度进行评分。\n", + "调查结果显示,NASA 是最受欢迎的部门,满意度为 95%。\n", + "\n", + "一位 NASA 员工 John Smith 对这一发现发表了评论,他表示:\n", + "“我对 NASA 排名第一并不感到惊讶。这是一个与了不起的人们和令人难以置信的机会共事的好地方。我为成为这样一个创新组织的一员感到自豪。”\n", + "\n", + "NASA 的管理团队也对这一结果表示欢迎,主管 Tom Johnson 表示:\n", + "“我们很高兴听到我们的员工对 NASA 的工作感到满意。\n", + "我们拥有一支才华横溢、忠诚敬业的团队,他们为实现我们的目标不懈努力,看到他们的辛勤工作得到回报是太棒了。”\n", + "\n", + "调查还显示,社会保障管理局的满意度最低,只有 45%的员工表示他们对工作满意。\n", + "政府承诺解决调查中员工提出的问题,并努力提高所有部门的工作满意度。\n", + "\"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "a8ea91d6-e841-4ee2-bed9-ca4a36df177f", + "metadata": {}, + "source": [ + "### 3.1 推断讨论主题\n", + "\n", + "上面是一篇虚构的、关于政府工作人员对所在机构感受的报纸文章。我们可以让模型确定其中正在讨论的五个主题,用一两个词概括每个主题,并将输出格式化为可解析的列表。" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "cab27b65", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['NASA', '满意度', '评论', '管理团队', '社会保障管理局']\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "确定以下给定文本中讨论的五个主题。\n", + "\n", + "每个主题用1-2个词概括。\n", + "\n", + "请输出一个可解析的Python列表,每个元素是一个字符串,展示了一个主题。\n", + "\n", + "给定文本: ```{story}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "790d1435", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "34be1d2a-1309-4512-841a-b6f67338938b", + "metadata": {}, + "source": [ + "### 3.2 为特定主题制作新闻提醒\n", + "\n", + "假设我们有一个新闻网站或类似的平台,而这些是我们感兴趣的主题:NASA、地方政府、工程、员工满意度、联邦政府等。我们想针对一篇新闻文章,弄清楚其中涵盖了哪些主题。可以使用这样的 Prompt:确定以下主题列表中的每个项目是否是以下文本中的主题,以 0 或 1 的形式给出答案列表。" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "9f53d337", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[\n", + " {\"美国航空航天局\": 1},\n", + " {\"当地政府\": 1},\n", + " {\"工程\": 0},\n", + " {\"员工满意度\": 1},\n", + " {\"联邦政府\": 1}\n", + "]\n" + ] + } + ], + "source": [ + "# 中文\n", + "prompt = f\"\"\"\n", + "判断主题列表中的每一项是否是给定文本中的一个话题,\n", + "\n", + "以列表的形式给出答案,每个元素是一个Json对象,键为对应主题,值为对应的 0 或 1。\n", + "\n", + "主题列表:美国航空航天局、当地政府、工程、员工满意度、联邦政府\n", + "\n", + "给定文本: ```{story}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "8f39f24a", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "08247dbf", + "metadata": {}, + "source": [ + "由结果可见,这个故事与 NASA、员工满意度、联邦政府有关,而与当地政府、工程无关。这在机器学习中有时被称为 Zero-Shot(零样本)学习,因为我们没有给它任何带标签的训练数据。仅凭 Prompt ,它就能确定哪些主题在新闻文章中有所涵盖。\n", + "\n", + "如果我们想生成一个新闻提醒,也可以使用这个处理新闻的过程。假设我非常喜欢 NASA 所做的工作,就可以构建一个这样的系统,每当出现与 NASA 相关的新闻时,输出提醒。" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "53bf1abd", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'美国航空航天局': 1, '当地政府': 1, '工程': 0, '员工满意度': 1, 
'联邦政府': 1}\n", + "提醒: 关于美国航空航天局的新消息\n" + ] + } + ], + "source": [ + "result_lst = eval(response)\n", + "topic_dict = {list(i.keys())[0] : list(i.values())[0] for i in result_lst}\n", + "print(topic_dict)\n", + "if topic_dict['美国航空航天局'] == 1:\n", + " print(\"提醒: 关于美国航空航天局的新消息\")" + ] + }, + { + "cell_type": "markdown", + "id": "9fc2c643", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "76ccd189", + "metadata": {}, + "source": [ + "这就是关于推断的全部内容了,仅用几分钟时间,我们就可以构建多个用于对文本进行推理的系统,而以前则需要熟练的机器学习开发人员数天甚至数周的时间。这非常令人兴奋,无论是对于熟练的机器学习开发人员,还是对于新手来说,都可以使用 Prompt 来非常快速地构建和开始相当复杂的自然语言处理任务。" + ] + }, + { + "cell_type": "markdown", + "id": "9ace190d", + "metadata": {}, + "source": [ + "## 英文版" + ] + }, + { + "cell_type": "markdown", + "id": "a3b34fec", + "metadata": {}, + "source": [ + "**1.1 情感倾向分析**" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "57b08c8d", + "metadata": {}, + "outputs": [], + "source": [ + "lamp_review = \"\"\"\n", + "Needed a nice lamp for my bedroom, and this one had \\\n", + "additional storage and not too high of a price point. \\\n", + "Got it fast. The string to our lamp broke during the \\\n", + "transit and the company happily sent over a new one. \\\n", + "Came within a few days as well. It was easy to put \\\n", + "together. I had a missing part, so I contacted their \\\n", + "support and they very quickly got me the missing piece! \\\n", + "Lumina seems to me to be a great company that cares \\\n", + "about their customers and products!!\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "5456540c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The sentiment of the product review is positive.\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "What is the sentiment of the following product review, \n", + "which is delimited with triple backticks?\n", + "\n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "cc0fe287", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "positive\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "What is the sentiment of the following product review, \n", + "which is delimited with triple backticks?\n", + "\n", + "Give your answer as a single word, either \"positive\" \\\n", + "or \"negative\".\n", + "\n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "7dc01fe8", + "metadata": {}, + "source": [ + "**1.2识别情感类型**" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "07708a7d", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "satisfied, pleased, grateful, impressed, happy\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Identify a list of emotions that the writer of the \\\n", + "following review is expressing. Include no more than \\\n", + "five items in the list. 
Format your answer as a list of \\\n", + "lower-case words separated by commas.\n", + "\n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "5ebd8903", + "metadata": {}, + "source": [ + "**1.3 识别愤怒**" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "0fb1fa65", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "No\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Is the writer of the following review expressing anger?\\\n", + "The review is delimited with triple backticks. \\\n", + "Give your answer as either yes or no.\n", + "\n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "60186c02", + "metadata": {}, + "source": [ + "**2.1 商品信息提取**" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "58ec19cd", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"Item\": \"lamp\",\n", + " \"Brand\": \"Lumina\"\n", + "}\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Identify the following items from the review text: \n", + "- Item purchased by reviewer\n", + "- Company that made the item\n", + "\n", + "The review is delimited with triple backticks. \\\n", + "Format your response as a JSON object with \\\n", + "\"Item\" and \"Brand\" as the keys. \n", + "If the information isn't present, use \"unknown\" \\\n", + "as the value.\n", + "Make your response as short as possible.\n", + " \n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "0a290d15", + "metadata": {}, + "source": [ + "**2.2 综合情感推断和信息提取**" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "785ccfe2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"Sentiment\": \"positive\",\n", + " \"Anger\": false,\n", + " \"Item\": \"lamp\",\n", + " \"Brand\": \"Lumina\"\n", + "}\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Identify the following items from the review text: \n", + "- Sentiment (positive or negative)\n", + "- Is the reviewer expressing anger? (true or false)\n", + "- Item purchased by reviewer\n", + "- Company that made the item\n", + "\n", + "The review is delimited with triple backticks. \\\n", + "Format your response as a JSON object with \\\n", + "\"Sentiment\", \"Anger\", \"Item\" and \"Brand\" as the keys.\n", + "If the information isn't present, use \"unknown\" \\\n", + "as the value.\n", + "Make your response as short as possible.\n", + "Format the Anger value as a boolean.\n", + "\n", + "Review text: ```{lamp_review}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "28f57f53", + "metadata": {}, + "source": [ + "**3.1 推断讨论主题**" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "8d2859c4", + "metadata": {}, + "outputs": [], + "source": [ + "story = \"\"\"\n", + "In a recent survey conducted by the government, \n", + "public sector employees were asked to rate their level \n", + "of satisfaction with the department they work at. 
\n", + "The results revealed that NASA was the most popular \n", + "department with a satisfaction rating of 95%.\n", + "\n", + "One NASA employee, John Smith, commented on the findings, \n", + "stating, \"I'm not surprised that NASA came out on top. \n", + "It's a great place to work with amazing people and \n", + "incredible opportunities. I'm proud to be a part of \n", + "such an innovative organization.\"\n", + "\n", + "The results were also welcomed by NASA's management team, \n", + "with Director Tom Johnson stating, \"We are thrilled to \n", + "hear that our employees are satisfied with their work at NASA. \n", + "We have a talented and dedicated team who work tirelessly \n", + "to achieve our goals, and it's fantastic to see that their \n", + "hard work is paying off.\"\n", + "\n", + "The survey also revealed that the \n", + "Social Security Administration had the lowest satisfaction \n", + "rating, with only 45% of employees indicating they were \n", + "satisfied with their job. The government has pledged to \n", + "address the concerns raised by employees in the survey and \n", + "work towards improving job satisfaction across all departments.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "id": "48774999", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "survey, satisfaction rating, NASA, Social Security Administration, job satisfaction\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Determine five topics that are being discussed in the \\\n", + "following text, which is delimited by triple backticks.\n", + "\n", + "Make each item one or two words long. \n", + "\n", + "Format your response as a list of items separated by commas.\n", + "Give me a list which can be read in Python.\n", + "\n", + "Text sample: ```{story}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "id": "35afde60", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "['survey',\n", + " ' satisfaction rating',\n", + " ' NASA',\n", + " ' Social Security Administration',\n", + " ' job satisfaction']" + ] + }, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "response.split(sep=',')" + ] + }, + { + "cell_type": "markdown", + "id": "4874c5bb", + "metadata": {}, + "source": [ + "**3.2 为特定主题制作新闻提醒**" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "id": "a4d3d64f", + "metadata": {}, + "outputs": [], + "source": [ + "topic_list = [\n", + " \"nasa\", \"local government\", \"engineering\", \n", + " \"employee satisfaction\", \"federal government\"\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "id": "a0ceea1a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[1, 0, 0, 1, 1]\n" + ] + } + ], + "source": [ + "prompt = f\"\"\"\n", + "Determine whether each item in the following list of \\\n", + "topics is a topic in the text below, which\n", + "is delimited with triple backticks.\n", + "\n", + "Give your answer as list with 0 or 1 for each topic.\\\n", + "\n", + "List of topics: {\", \".join(topic_list)}\n", + "\n", + "Text sample: ```{story}```\n", + "\"\"\"\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "id": "82489580", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": 
"stream", + "text": [ + "{'nasa': 1, 'local government': 0, 'engineering': 0, 'employee satisfaction': 1, 'federal government': 1}\n", + "ALERT: New NASA story!\n" + ] + } + ], + "source": [ + "topic_dict = {topic_list[i] : eval(response)[i] for i in range(len(eval(response)))}\n", + "print(topic_dict)\n", + "if topic_dict['nasa'] == 1:\n", + " print(\"ALERT: New NASA story!\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.11" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "Ctrl-E", + "itemize": "Ctrl-I" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "256px" + }, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}