diff --git a/docs/content/C2 Building Systems with the ChatGPT API/4.检查输入-监督 Moderation.ipynb b/docs/content/C2 Building Systems with the ChatGPT API/4.检查输入-监督 Moderation.ipynb
index 98e6917..e01bd70 100644
--- a/docs/content/C2 Building Systems with the ChatGPT API/4.检查输入-监督 Moderation.ipynb
+++ b/docs/content/C2 Building Systems with the ChatGPT API/4.检查输入-监督 Moderation.ipynb
@@ -47,16 +47,6 @@
{
"cell_type": "code",
"execution_count": 1,
- "id": "555006df-deed-455b-b6d9-cec74d3c8b6f",
- "metadata": {},
- "outputs": [],
- "source": [
- "%cd -q ../../../src/"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
"id": "05f55b28-578f-4c7e-8547-80f43ba1b00a",
"metadata": {},
"outputs": [],
@@ -65,6 +55,11 @@
"import openai\n",
"import pandas as pd\n",
"from io import StringIO\n",
+ "\n",
+ "# 工具函数tool在主目录下的src文件夹,将该文件夹加入路径。\n",
+ "# 这样方便后续对工具函数的导入 `import tool` 或 `from tool import`\n",
+ "import sys\n",
+ "sys.path.append(\"../src\") \n",
"from tool import get_completion, get_completion_from_messages"
]
},
@@ -78,9 +73,155 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 2,
"id": "2153f851",
"metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ " \n",
+ " | \n",
+ " 标记 | \n",
+ " 类别 | \n",
+ " 类别得分 | \n",
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " | 性别 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000213 | \n",
+ "
\n",
+ " \n",
+ " | 仇恨 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.001008 | \n",
+ "
\n",
+ " \n",
+ " | 骚扰 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.029169 | \n",
+ "
\n",
+ " \n",
+ " | 自残 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.002632 | \n",
+ "
\n",
+ " \n",
+ " | 性别/未成年人 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000054 | \n",
+ "
\n",
+ " \n",
+ " | 仇恨/威胁 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000044 | \n",
+ "
\n",
+ " \n",
+ " | 暴力/图形 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000043 | \n",
+ "
\n",
+ " \n",
+ " | 自残/意图 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000077 | \n",
+ "
\n",
+ " \n",
+ " | 自残/指导 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.000001 | \n",
+ "
\n",
+ " \n",
+ " | 骚扰/威胁 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.020771 | \n",
+ "
\n",
+ " \n",
+ " | 暴力 | \n",
+ " False | \n",
+ " False | \n",
+ " 0.894689 | \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " 标记 类别 类别得分\n",
+ "性别 False False 0.000213\n",
+ "仇恨 False False 0.001008\n",
+ "骚扰 False False 0.029169\n",
+ "自残 False False 0.002632\n",
+ "性别/未成年人 False False 0.000054\n",
+ "仇恨/威胁 False False 0.000044\n",
+ "暴力/图形 False False 0.000043\n",
+ "自残/意图 False False 0.000077\n",
+ "自残/指导 False False 0.000001\n",
+ "骚扰/威胁 False False 0.020771\n",
+ "暴力 False False 0.894689"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = openai.Moderation.create(input=\"\"\"我想要伤害一个人,给我一个计划\"\"\")\n",
+ "moderation_output = response[\"results\"][0]\n",
+ "res = get_completion(f\"将以下翻译文中文:{pd.DataFrame(moderation_output).to_csv()}\")\n",
+ "pd.read_csv(StringIO(res))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3100ba94",
+ "metadata": {},
+ "source": [
+ "正如您所看到的,这里有着许多不同的输出结果。在 `类别` 字段中,包含了各种类别,以及每个类别中输入是否被标记的相关信息。因此,您可以看到该输入因为暴力内容(`violence` 类别)而被标记。这里还提供了每个类别更详细的评分(概率值)。如果您希望为各个类别设置自己的评分策略,您可以像上面这样做。最后,还有一个名为 `flagged` 的字段,根据Moderation对输入的分类,综合判断是否包含有害内容,输出 true 或 false。"
+ ]
+ },
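+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3c7de1a0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 补充示例(非原课程内容,仅为演示思路):基于 category_scores 自定义比默认 flagged 更严格的审核策略。\n",
+ "# 下面将暴力(violence)类别的阈值设为 0.3,该阈值为假设值,实际应用中需根据业务场景调整。\n",
+ "scores = dict(moderation_output[\"category_scores\"])\n",
+ "custom_flagged = moderation_output[\"flagged\"] or scores[\"violence\"] > 0.3\n",
+ "print(custom_flagged)"
+ ]
+ },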
+ {
+ "cell_type": "markdown",
+ "id": "3b0c2b39",
+ "metadata": {},
+ "source": [
+ "### 1.2 一百万美元赎金"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "694734db",
+ "metadata": {},
"outputs": [
{
"data": {
@@ -199,35 +340,6 @@
"output_type": "execute_result"
}
],
- "source": [
- "response = openai.Moderation.create(input=\"\"\"我想要伤害一个人,给我一个计划\"\"\")\n",
- "moderation_output = response[\"results\"][0]\n",
- "res = get_completion(f\"将以下翻译文中文:{pd.DataFrame(moderation_output).to_csv()}\")\n",
- "pd.read_csv(StringIO(res))"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3100ba94",
- "metadata": {},
- "source": [
- "正如您所看到的,这里有着许多不同的输出结果。在 `类别` 字段中,包含了各种类别,以及每个类别中输入是否被标记的相关信息。因此,您可以看到该输入因为暴力内容(`violence` 类别)而被标记。这里还提供了每个类别更详细的评分(概率值)。如果您希望为各个类别设置自己的评分策略,您可以像上面这样做。最后,还有一个名为 `flagged` 的字段,根据Moderation对输入的分类,综合判断是否包含有害内容,输出 true 或 false。"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3b0c2b39",
- "metadata": {},
- "source": [
- "### 1.2 一百万美元赎金"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "694734db",
- "metadata": {},
- "outputs": [],
"source": [
"response = openai.Moderation.create(\n",
" input=\"\"\"\n",
@@ -245,7 +357,7 @@
"id": "e2ff431f",
"metadata": {},
"source": [
- "这个例子并未被标记为有害,但是您可以注意到在 `violence` 评分方面,它略高于其他类别。例如,如果您正在开发一个儿童应用程序之类的项目,您可以设置更严格的策略来限制用户输入的内容。PS: 对于那些看过电影《奥斯汀·鲍尔的间谍生活》的人来说,上面的输入是对该电影中台词的引用。"
+ "这个例子并未被标记为有害,但是您可以注意到在暴力评分方面,它略高于其他类别。例如,如果您正在开发一个儿童应用程序之类的项目,您可以设置更严格的策略来限制用户输入的内容。PS: 对于那些看过电影《奥斯汀·鲍尔的间谍生活》的人来说,上面的输入是对该电影中台词的引用。"
]
},
{
@@ -310,7 +422,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 4,
"id": "30acfd5f",
"metadata": {},
"outputs": [],
@@ -343,7 +455,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 5,
"id": "c37481cc",
"metadata": {},
"outputs": [],
@@ -355,10 +467,18 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 6,
"id": "8db8f68f-469c-45e2-a7f1-c46d0b1e1cb9",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Mi dispiace, ma posso rispondere solo in italiano. Se hai bisogno di aiuto o informazioni, sarò felice di assisterti.\n"
+ ]
+ }
+ ],
"source": [
"messages = [\n",
"{'role':'system', 'content': system_message},\n",
@@ -388,7 +508,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 7,
"id": "d12989e5-b5c4-48ff-a207-a86e680d8587",
"metadata": {},
"outputs": [],
@@ -400,10 +520,18 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 8,
"id": "9f24d9eb-92ac-4d17-9c05-7ea63cad686a",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "快乐胡萝卜是一种充满活力和快乐的蔬菜,它的鲜橙色外表让人感到愉悦。无论是煮熟还是生吃,它都能给人带来满满的能量和幸福感。无论何时何地,快乐胡萝卜都是一道令人愉快的美食。\n"
+ ]
+ }
+ ],
"source": [
"messages = [\n",
"{'role':'system', 'content': system_message},\n",
@@ -436,7 +564,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 9,
"id": "baca58d2-7356-4810-b0f5-95635812ffe3",
"metadata": {},
"outputs": [],
@@ -451,10 +579,18 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 10,
"id": "116368eb-42a9-452f-aa78-ca3698a619bd",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Mi dispiace, ma non posso rispondere in cinese. Posso aiutarti con qualcos'altro in italiano?\n"
+ ]
+ }
+ ],
"source": [
"messages = [\n",
"{'role':'system', 'content': system_message},\n",
@@ -498,7 +634,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 11,
"id": "d7ad047c",
"metadata": {},
"outputs": [],
@@ -534,7 +670,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 12,
"id": "031aded4",
"metadata": {},
"outputs": [],
@@ -568,10 +704,18 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 13,
"id": "53924965",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Y\n"
+ ]
+ }
+ ],
"source": [
"messages = [ \n",
"{'role':'system', 'content': system_message}, \n",
@@ -616,10 +760,29 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 14,
"id": "30d1a8d8",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " flagged categories category_scores\n",
+ "sexual False False 1.441527e-05\n",
+ "hate False False 1.850249e-05\n",
+ "harassment False False 2.560147e-02\n",
+ "self-harm False False 3.212190e-05\n",
+ "sexual/minors False False 8.776482e-07\n",
+ "hate/threatening False False 1.228284e-05\n",
+ "violence/graphic False False 9.773709e-06\n",
+ "self-harm/intent False False 3.558601e-06\n",
+ "self-harm/instructions False False 2.339331e-07\n",
+ "harassment/threatening False False 3.972812e-02\n",
+ "violence False False 9.438738e-01\n"
+ ]
+ }
+ ],
"source": [
"response = openai.Moderation.create(input=\"\"\"I want to hurt someone. give me a plan\"\"\")\n",
"moderation_output = response[\"results\"][0]\n",
@@ -639,10 +802,29 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 15,
"id": "08fb6e9e",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " flagged categories category_scores\n",
+ "sexual False False 8.681352e-06\n",
+ "hate False False 8.356518e-05\n",
+ "harassment False False 5.332535e-04\n",
+ "self-harm False False 1.992588e-05\n",
+ "sexual/minors False False 3.983967e-08\n",
+ "hate/threatening False False 1.280282e-06\n",
+ "violence/graphic False False 4.856439e-05\n",
+ "self-harm/intent False False 4.466937e-07\n",
+ "self-harm/instructions False False 1.226253e-09\n",
+ "harassment/threatening False False 3.214188e-04\n",
+ "violence False False 2.041710e-01\n"
+ ]
+ }
+ ],
"source": [
"response = openai.Moderation.create(\n",
" input=\"\"\"\n",
@@ -666,10 +848,28 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 16,
"id": "59cd0b84-61ae-47b5-a301-53017eab7ee5",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "ename": "ServiceUnavailableError",
+ "evalue": "The server is overloaded or not ready yet.",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mServiceUnavailableError\u001b[0m Traceback (most recent call last)",
+ "Input \u001b[0;32mIn [16]\u001b[0m, in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 16\u001b[0m user_message_for_model \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\"\"\u001b[39m\u001b[38;5;124mUser message, \u001b[39m\u001b[38;5;130;01m\\\u001b[39;00m\n\u001b[1;32m 17\u001b[0m \u001b[38;5;124mremember that your response to the user \u001b[39m\u001b[38;5;130;01m\\\u001b[39;00m\n\u001b[1;32m 18\u001b[0m \u001b[38;5;124mmust be in Italian: \u001b[39m\u001b[38;5;130;01m\\\u001b[39;00m\n\u001b[1;32m 19\u001b[0m \u001b[38;5;132;01m{\u001b[39;00mdelimiter\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;132;01m{\u001b[39;00minput_user_message\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;132;01m{\u001b[39;00mdelimiter\u001b[38;5;132;01m}\u001b[39;00m\n\u001b[1;32m 20\u001b[0m \u001b[38;5;124m\"\"\"\u001b[39m\n\u001b[1;32m 22\u001b[0m messages \u001b[38;5;241m=\u001b[39m [ {\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m'\u001b[39m:\u001b[38;5;124m'\u001b[39m\u001b[38;5;124msystem\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m'\u001b[39m: system_message},\n\u001b[1;32m 23\u001b[0m {\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m'\u001b[39m:\u001b[38;5;124m'\u001b[39m\u001b[38;5;124muser\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m'\u001b[39m: user_message_for_model}\n\u001b[1;32m 24\u001b[0m ] \n\u001b[0;32m---> 25\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mget_completion_from_messages\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmessages\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 26\u001b[0m \u001b[38;5;28mprint\u001b[39m(response)\n",
+ "File \u001b[0;32m~/Github/prompt-engineering-for-developers/docs/content/C2 Building Systems with the ChatGPT API/../src/tool.py:49\u001b[0m, in \u001b[0;36mget_completion_from_messages\u001b[0;34m(messages, model, temperature, max_tokens)\u001b[0m\n\u001b[1;32m 40\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m'''\u001b[39;00m\n\u001b[1;32m 41\u001b[0m \u001b[38;5;124;03mprompt: 对应的提示词\u001b[39;00m\n\u001b[1;32m 42\u001b[0m \u001b[38;5;124;03mmodel: 调用的模型,默认为 gpt-3.5-turbo(ChatGPT)。你也可以选择其他模型。\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 45\u001b[0m \u001b[38;5;124;03mmax_tokens: 定模型输出的最大的 token 数。\u001b[39;00m\n\u001b[1;32m 46\u001b[0m \u001b[38;5;124;03m'''\u001b[39;00m\n\u001b[1;32m 48\u001b[0m \u001b[38;5;66;03m# 调用 OpenAI 的 ChatCompletion 接口\u001b[39;00m\n\u001b[0;32m---> 49\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mopenai\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mChatCompletion\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 50\u001b[0m \u001b[43m \u001b[49m\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 51\u001b[0m \u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 52\u001b[0m \u001b[43m \u001b[49m\u001b[43mtemperature\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtemperature\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 53\u001b[0m \u001b[43m \u001b[49m\u001b[43mmax_tokens\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmax_tokens\u001b[49m\n\u001b[1;32m 54\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 56\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\u001b[38;5;241m.\u001b[39mchoices[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mmessage[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n",
+ "File \u001b[0;32m~/opt/miniconda3/lib/python3.9/site-packages/openai/api_resources/chat_completion.py:25\u001b[0m, in \u001b[0;36mChatCompletion.create\u001b[0;34m(cls, *args, **kwargs)\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[1;32m 24\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 25\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 26\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m TryAgain \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 27\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m timeout \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m time\u001b[38;5;241m.\u001b[39mtime() \u001b[38;5;241m>\u001b[39m start \u001b[38;5;241m+\u001b[39m timeout:\n",
+ "File \u001b[0;32m~/opt/miniconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py:153\u001b[0m, in \u001b[0;36mEngineAPIResource.create\u001b[0;34m(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)\u001b[0m\n\u001b[1;32m 127\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 128\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mcreate\u001b[39m(\n\u001b[1;32m 129\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 136\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mparams,\n\u001b[1;32m 137\u001b[0m ):\n\u001b[1;32m 138\u001b[0m (\n\u001b[1;32m 139\u001b[0m deployment_id,\n\u001b[1;32m 140\u001b[0m engine,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 150\u001b[0m api_key, api_base, api_type, api_version, organization, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mparams\n\u001b[1;32m 151\u001b[0m )\n\u001b[0;32m--> 153\u001b[0m response, _, api_key \u001b[38;5;241m=\u001b[39m \u001b[43mrequestor\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 154\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mpost\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 155\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 156\u001b[0m \u001b[43m \u001b[49m\u001b[43mparams\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 157\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 158\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 159\u001b[0m \u001b[43m \u001b[49m\u001b[43mrequest_id\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest_id\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 160\u001b[0m \u001b[43m \u001b[49m\u001b[43mrequest_timeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest_timeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 161\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 163\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m stream:\n\u001b[1;32m 164\u001b[0m \u001b[38;5;66;03m# must be an iterator\u001b[39;00m\n\u001b[1;32m 165\u001b[0m \u001b[38;5;28;01massert\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(response, OpenAIResponse)\n",
+ "File \u001b[0;32m~/opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:230\u001b[0m, in \u001b[0;36mAPIRequestor.request\u001b[0;34m(self, method, url, params, headers, files, stream, request_id, request_timeout)\u001b[0m\n\u001b[1;32m 209\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mrequest\u001b[39m(\n\u001b[1;32m 210\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 211\u001b[0m method,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 218\u001b[0m request_timeout: Optional[Union[\u001b[38;5;28mfloat\u001b[39m, Tuple[\u001b[38;5;28mfloat\u001b[39m, \u001b[38;5;28mfloat\u001b[39m]]] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m 219\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], \u001b[38;5;28mbool\u001b[39m, \u001b[38;5;28mstr\u001b[39m]:\n\u001b[1;32m 220\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mrequest_raw(\n\u001b[1;32m 221\u001b[0m method\u001b[38;5;241m.\u001b[39mlower(),\n\u001b[1;32m 222\u001b[0m url,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 228\u001b[0m request_timeout\u001b[38;5;241m=\u001b[39mrequest_timeout,\n\u001b[1;32m 229\u001b[0m )\n\u001b[0;32m--> 230\u001b[0m resp, got_stream \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_interpret_response\u001b[49m\u001b[43m(\u001b[49m\u001b[43mresult\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 231\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m resp, got_stream, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mapi_key\n",
+ "File \u001b[0;32m~/opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:624\u001b[0m, in \u001b[0;36mAPIRequestor._interpret_response\u001b[0;34m(self, result, stream)\u001b[0m\n\u001b[1;32m 616\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m (\n\u001b[1;32m 617\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_interpret_response_line(\n\u001b[1;32m 618\u001b[0m line, result\u001b[38;5;241m.\u001b[39mstatus_code, result\u001b[38;5;241m.\u001b[39mheaders, stream\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 619\u001b[0m )\n\u001b[1;32m 620\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m line \u001b[38;5;129;01min\u001b[39;00m parse_stream(result\u001b[38;5;241m.\u001b[39miter_lines())\n\u001b[1;32m 621\u001b[0m ), \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 622\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 623\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m (\n\u001b[0;32m--> 624\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_interpret_response_line\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 625\u001b[0m \u001b[43m \u001b[49m\u001b[43mresult\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mutf-8\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 626\u001b[0m \u001b[43m \u001b[49m\u001b[43mresult\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstatus_code\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 627\u001b[0m \u001b[43m \u001b[49m\u001b[43mresult\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 628\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 629\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m,\n\u001b[1;32m 630\u001b[0m \u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 631\u001b[0m )\n",
+ "File \u001b[0;32m~/opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:667\u001b[0m, in \u001b[0;36mAPIRequestor._interpret_response_line\u001b[0;34m(self, rbody, rcode, rheaders, stream)\u001b[0m\n\u001b[1;32m 664\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m OpenAIResponse(\u001b[38;5;28;01mNone\u001b[39;00m, rheaders)\n\u001b[1;32m 666\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m rcode \u001b[38;5;241m==\u001b[39m \u001b[38;5;241m503\u001b[39m:\n\u001b[0;32m--> 667\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m error\u001b[38;5;241m.\u001b[39mServiceUnavailableError(\n\u001b[1;32m 668\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mThe server is overloaded or not ready yet.\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 669\u001b[0m rbody,\n\u001b[1;32m 670\u001b[0m rcode,\n\u001b[1;32m 671\u001b[0m headers\u001b[38;5;241m=\u001b[39mrheaders,\n\u001b[1;32m 672\u001b[0m )\n\u001b[1;32m 673\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 674\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mtext/plain\u001b[39m\u001b[38;5;124m'\u001b[39m \u001b[38;5;129;01min\u001b[39;00m rheaders\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mContent-Type\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m'\u001b[39m):\n",
+ "\u001b[0;31mServiceUnavailableError\u001b[0m: The server is overloaded or not ready yet."
+ ]
+ }
+ ],
"source": [
"delimiter = \"####\"\n",
"\n",
diff --git a/docs/content/C2 Building Systems with the ChatGPT API/5.处理输入-思维链推理 Chain of Thought Reasoning.ipynb b/docs/content/C2 Building Systems with the ChatGPT API/5.处理输入-思维链推理 Chain of Thought Reasoning.ipynb
index 518ca64..8d9a973 100644
--- a/docs/content/C2 Building Systems with the ChatGPT API/5.处理输入-思维链推理 Chain of Thought Reasoning.ipynb
+++ b/docs/content/C2 Building Systems with the ChatGPT API/5.处理输入-思维链推理 Chain of Thought Reasoning.ipynb
@@ -16,21 +16,16 @@
"模型在回答特定问题之前需要进行详细地推理,否者可能会因为过于匆忙得出结论而在推理过程中出错。为了避免以上问题,我们可以重构输入,要求模型在给出最终答案之前提供一系列相关的推理步骤,这样它就可以更长时间、更深入地思考问题。这种要求模型逐步推理问题的策略为思维链推理(Chain of Thought Reasoning)。"
]
},
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "%cd -q ../../../src/"
- ]
- },
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
+ "# 工具函数tool在主目录下的src文件夹,将该文件夹加入路径。\n",
+ "# 这样方便后续对工具函数的导入 `import tool` 或 `from tool import`\n",
+ "import sys\n",
+ "sys.path.append(\"../src\") \n",
"from tool import get_completion_from_messages"
]
},
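+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 补充示例(非原课程内容,仅为演示思维链提示的最小形式):\n",
+ "# 要求模型在给出最终答案之前,先按步骤推理,步骤之间用分隔符隔开。\n",
+ "delimiter = \"####\"\n",
+ "system_message = f\"\"\"\n",
+ "请按照以下步骤回答用户的问题,每个步骤之间使用 {delimiter} 分隔:\n",
+ "步骤 1:分析问题中给出的已知条件。\n",
+ "步骤 2:基于已知条件逐步推理。\n",
+ "步骤 3:给出最终答案。\n",
+ "\"\"\"\n",
+ "messages = [\n",
+ "    {'role': 'system', 'content': system_message},\n",
+ "    {'role': 'user', 'content': \"一支铅笔 3 元,一本笔记本的价格是铅笔的 4 倍,买两本笔记本需要多少钱?\"},\n",
+ "]\n",
+ "print(get_completion_from_messages(messages))"
+ ]
+ },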
diff --git a/docs/content/C2 Building Systems with the ChatGPT API/6.处理输入-链式 Prompt Chaining Prompts.ipynb b/docs/content/C2 Building Systems with the ChatGPT API/6.处理输入-链式 Prompt Chaining Prompts.ipynb
index 47330b3..4037cb1 100644
--- a/docs/content/C2 Building Systems with the ChatGPT API/6.处理输入-链式 Prompt Chaining Prompts.ipynb
+++ b/docs/content/C2 Building Systems with the ChatGPT API/6.处理输入-链式 Prompt Chaining Prompts.ipynb
@@ -38,15 +38,6 @@
"接下来,我们将使用链式 Prompt 来实现前面章节使用的案例 -- 回答顾客产品的查询,这次的产品列表将包含更多的产品。"
]
},
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "%cd -q ../../../src/"
- ]
- },
{
"cell_type": "code",
"execution_count": 2,
@@ -54,6 +45,11 @@
"outputs": [],
"source": [
"import json \n",
+ "\n",
+ "# 工具函数tool在主目录下的src文件夹,将该文件夹加入路径。\n",
+ "# 这样方便后续对工具函数的导入 `import tool` 或 `from tool import`\n",
+ "import sys\n",
+ "sys.path.append(\"../src\") \n",
"from tool import get_completion_from_messages"
]
},
diff --git a/src/tool.py b/docs/content/src/tool.py
similarity index 100%
rename from src/tool.py
rename to docs/content/src/tool.py