Compare commits


184 Commits

Author SHA1 Message Date
5c0d34793e Latex File Name Bug Patch 2023-07-07 00:09:50 +08:00
41c10f5688 report image generation error in UI 2023-07-01 02:28:32 +08:00
d7ac99f603 Fix incorrect error message 2023-07-01 01:46:43 +08:00
1616daae6a Merge branch 'master' of https://github.com/binary-husky/chatgpt_academic into master 2023-07-01 00:17:30 +08:00
a1092d8f92 Add an option to auto-clear the input box 2023-07-01 00:17:26 +08:00
34ca9f138f Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-06-30 14:56:28 +08:00
df3f1aa3ca Correct the default token count for ChatGLM2 2023-06-30 14:56:22 +08:00
bf805cf477 Merge branch 'master' of https://github.com/binary-husky/chatgpt_academic into master 2023-06-30 13:09:51 +08:00
ecb08e69be remove find picture core functionality 2023-06-30 13:08:54 +08:00
28c1e3f11b Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-06-30 12:06:33 +08:00
403667aec1 upgrade chatglm to chatglm2 2023-06-30 12:06:28 +08:00
22f377e2fb fix multi user cwd shift 2023-06-30 11:05:47 +08:00
37172906ef Fix file export bug 2023-06-29 14:55:55 +08:00
3b78e0538b Fix image display issue in the plugin demo 2023-06-29 14:52:58 +08:00
d8f9ac71d0 Merge pull request #907 from Xminry/master
feat: web search via cn.bing.com, usable in mainland China
2023-06-29 12:44:32 +08:00
aced272d3c Fine-tune plugin prompts 2023-06-29 12:43:50 +08:00
aff77a086d Merge branch 'master' of https://github.com/Xminry/gpt_academic into Xminry-master 2023-06-29 12:38:43 +08:00
49253c4dc6 [arxiv trans] add html comparison to zip file 2023-06-29 12:29:49 +08:00
1a00093015 Fix prompts 2023-06-29 12:15:52 +08:00
64f76e7401 3.42 2023-06-29 11:32:19 +08:00
eb4c07997e Fix issues with LaTeX proofreading and local LaTeX paper translation 2023-06-29 11:30:42 +08:00
99cf7205c3 feat: web search via cn.bing.com, usable in mainland China 2023-06-28 10:30:08 +08:00
d684b4cdb3 Merge pull request #905 from Xminry/master
Update 理解PDF文档内容.py
2023-06-27 23:37:25 +08:00
601a95c948 Merge pull request #881 from OverKit/master
update latex_utils.py
2023-06-27 19:20:17 +08:00
e18bef2e9c add item breaker 2023-06-27 19:16:05 +08:00
f654c1af31 merge regex expressions 2023-06-27 18:59:56 +08:00
e90048a671 Merge branch 'master' of https://github.com/OverKit/gpt_academic into OverKit-master 2023-06-27 16:14:12 +08:00
ea624b1510 Merge pull request #889 from dackdawn/master
Declare the 0613 models
2023-06-27 15:03:15 +08:00
057e3dda3c Merge branch 'master' of https://github.com/dackdawn/gpt_academic into dackdawn-master 2023-06-27 15:02:22 +08:00
4290821a50 Update 理解PDF文档内容.py 2023-06-27 01:57:31 +08:00
280e14d7b7 Update docker-compose for the LaTeX module 2023-06-26 09:59:14 +08:00
9f0cf9fb2b arxiv PDF citations 2023-06-25 23:30:31 +08:00
b8560b7510 Fix bug that misidentified LaTeX template files 2023-06-25 22:46:16 +08:00
d841d13b04 add arxiv translation test samples 2023-06-25 22:12:44 +08:00
efda9e5193 Merge pull request #897 from Ranhuiryan/master
Add azure-gpt35 option
2023-06-24 17:59:51 +10:00
33d2e75aac add azure-gpt35 to model list 2023-06-21 16:19:49 +08:00
74941170aa update azure use instruction 2023-06-21 16:19:26 +08:00
cd38949903 Roll back to the original text when an error occurs 2023-06-21 11:53:57 +10:00
d87f1eb171 Update instructions for connecting to Azure 2023-06-21 11:38:59 +10:00
cd1e4e1ba7 Merge pull request #797 from XiaojianTang/master
Add support for the Azure OpenAI API
2023-06-21 11:23:41 +10:00
cf5f348d70 update test samples 2023-06-21 11:20:31 +10:00
0ee25f475e Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-06-20 23:07:51 +08:00
1fede6df7f temp 2023-06-20 23:05:17 +08:00
22a65cd163 Create build-with-latex.yml 2023-06-21 00:55:24 +10:00
538b041ea3 Merge pull request #890 from Mcskiller/master
Update README.md
2023-06-21 00:53:26 +10:00
d7b056576d add latex docker-compose 2023-06-21 00:52:58 +10:00
cb0bb6ab4a fix minor bugs 2023-06-21 00:41:33 +10:00
bf955aaf12 fix bugs 2023-06-20 23:12:30 +10:00
61eb0da861 fix encoding bug 2023-06-20 22:08:09 +10:00
5da633d94d Update README.md
Fix the error URL for the git clone.
2023-06-20 19:10:11 +08:00
f3e4e26e2f Declare the 0613 models
OpenAI's RPM limit for gpt-3.5-turbo is 3, while gpt-3.5-turbo-0613's RPM is 60; although the two models produce the same content, selecting the specific model grants higher RPM and TPM
2023-06-19 21:40:26 +08:00
af7734dd35 avoid file fusion 2023-06-19 16:57:11 +10:00
d5bab093f9 rename function names 2023-06-19 15:17:33 +10:00
f94b167dc2 Merge branch 'master' into overkit-master 2023-06-19 14:53:51 +10:00
951d5ec758 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-06-19 14:52:25 +10:00
016d8ee156 Merge remote-tracking branch 'origin/master' into OverKit-master 2023-06-19 14:51:59 +10:00
dca9ec4bae Merge branch 'master' of https://github.com/OverKit/gpt_academic into OverKit-master 2023-06-19 14:49:50 +10:00
a06e43c96b Update README.md 2023-06-18 16:15:37 +08:00
29c6bfb6cb Update README.md 2023-06-18 16:12:06 +08:00
8d7ee975a0 Update README.md 2023-06-18 16:10:45 +08:00
4bafbb3562 Update Latex输出PDF结果.py 2023-06-18 15:54:23 +08:00
7fdf0a8e51 Adjust the content-separation code 2023-06-18 15:51:29 +08:00
2bb13b4677 Update README.md 2023-06-18 15:44:42 +08:00
9a5a509dd9 Fix the search for abstract 2023-06-17 19:27:21 +08:00
cbcb98ef6a Merge pull request #872 from Skyzayre/master
Update README.md
2023-06-16 17:54:39 +08:00
bb864c6313 Add some hint text 2023-06-16 17:33:19 +08:00
6d849eeb12 Fix Langchain plugin bug 2023-06-16 17:33:03 +08:00
ef752838b0 Update README.md 2023-06-15 02:07:43 +08:00
73d4a1ff4b Update README.md 2023-06-14 10:15:47 +08:00
8c62f21aa6 3.41: add gpt-3.5-16k support 2023-06-14 09:57:09 +08:00
c40ebfc21f Add gpt-3.5-16k to the supported model list 2023-06-14 09:50:15 +08:00
c365ea9f57 Update README.md 2023-06-13 16:13:19 +08:00
12d66777cc Merge pull request #864 from OverKit/master
check letter % after removing spaces or tabs in the left
2023-06-12 15:21:35 +08:00
9ac3d0d65d check letter % after removing spaces or tabs in the left 2023-06-12 10:09:52 +08:00
9fd212652e Declare domain-specific terminology 2023-06-12 09:45:59 +08:00
790a1cf12a Add some hints 2023-06-11 20:12:25 +08:00
3ecf2977a8 Fix caption translation 2023-06-11 18:23:54 +08:00
aeddf6b461 Update Latex输出PDF结果.py 2023-06-11 10:20:49 +08:00
ce0d8b9dab Prototype of the Void Terminal plugin 2023-06-11 01:36:23 +08:00
3c00e7a143 file link in chatbot 2023-06-10 21:45:38 +08:00
ef1bfdd60f update pip install notice 2023-06-08 21:29:10 +08:00
e48d92e82e update translation 2023-06-08 18:34:06 +08:00
110510997f Update README.md 2023-06-08 12:48:52 +08:00
b52695845e Update README.md 2023-06-08 12:44:05 +08:00
f30c9c6d3b Update README.md 2023-06-08 12:43:13 +08:00
ff5403eac6 Update README.md 2023-06-08 12:42:24 +08:00
f9226d92be Update version 2023-06-08 12:24:14 +08:00
a0ea5d0e9e Update README.md 2023-06-08 12:22:03 +08:00
ce6f11d200 Update README.md 2023-06-08 12:20:49 +08:00
10b3001dba Update README.md 2023-06-08 12:19:11 +08:00
e2de1d76ea Update README.md 2023-06-08 12:18:31 +08:00
77cc141a82 Update README.md 2023-06-08 12:14:02 +08:00
526b4d8ecd Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-06-07 11:09:20 +08:00
149db621ec langchain check depends 2023-06-07 11:09:12 +08:00
2e1bb7311c Merge pull request #848 from MengDanzz/master
Split the Dockerfile COPY into two stages to cache dependency libraries, so rebuilds skip reinstallation
2023-06-07 10:44:09 +08:00
dae65fd2c2 Run pip install once more after COPY .. to pick up dependency changes 2023-06-07 10:43:45 +08:00
9aafb2ee47 Include non-PyPI packages in COPY 2023-06-07 09:18:57 +08:00
6bc91bd02e Merge branch 'binary-husky:master' into master 2023-06-07 09:15:44 +08:00
8ef7344101 fix subprocess bug in Windows 2023-06-06 18:57:52 +08:00
40da1b0afe Run the LaTeX decomposition routine in a subprocess 2023-06-06 18:44:00 +08:00
c65def90f3 Split the Dockerfile COPY into two stages to cache dependency libraries, so rebuilds skip reinstallation 2023-06-06 14:36:30 +08:00
ddeaf76422 check latex in PATH 2023-06-06 00:23:00 +08:00
f23b66dec2 update Dockerfile with Latex 2023-06-05 23:49:54 +08:00
a26b294817 Write Some Docstring 2023-06-05 23:44:59 +08:00
66018840da declare resp 2023-06-05 23:24:41 +08:00
cea2144f34 fix test samples 2023-06-05 23:11:21 +08:00
7f5be93c1d Fix some regex matching bugs 2023-06-05 22:57:39 +08:00
85b838b302 add Linux support 2023-06-04 23:06:35 +08:00
27f97ba92a remove previous results 2023-06-04 16:55:36 +08:00
14269eba98 Set up a local arxiv cache 2023-06-04 16:08:01 +08:00
d5c9bc9f0a Raise the search priority of iffalse 2023-06-04 14:15:59 +08:00
b0fed3edfc consider iffalse state 2023-06-04 14:06:02 +08:00
7296d054a2 patch latex segmentation 2023-06-04 13:56:15 +08:00
d57c7d352d improve quality 2023-06-03 23:54:30 +08:00
3fd2927ea3 Improvements 2023-06-03 23:33:45 +08:00
b745074160 avoid most compile failure 2023-06-03 23:33:32 +08:00
70ee810133 improve success rate 2023-06-03 19:39:19 +08:00
68fea9e79b fix test 2023-06-03 18:09:39 +08:00
f82bf91aa8 test example 2023-06-03 18:06:39 +08:00
dde9edcc0c fix a fatal mistake 2023-06-03 17:49:22 +08:00
66c78e459e Fix prompts 2023-06-03 17:18:38 +08:00
de54102303 Revise reminder 2023-06-03 16:43:26 +08:00
7c7d2d8a84 Patch for LaTeX minipage 2023-06-03 16:16:32 +08:00
834f989ed4 Handle \input paths given without the .tex extension 2023-06-03 15:42:22 +08:00
b658ee6e04 Fix some issues with arxiv translation 2023-06-03 15:36:55 +08:00
1a60280ea0 Add warning 2023-06-03 14:40:37 +08:00
991cb7d272 warning 2023-06-03 14:39:40 +08:00
463991cfb2 fix bug 2023-06-03 14:24:06 +08:00
06f10b5fdc fix zh cite bug 2023-06-03 14:17:58 +08:00
d275d012c6 Merge branch 'langchain' into master 2023-06-03 13:53:39 +08:00
c5d1ea3e21 update langchain version 2023-06-03 13:53:34 +08:00
0022b92404 update prompt 2023-06-03 13:50:39 +08:00
ef61221241 latex auto translation milestone 2023-06-03 13:46:40 +08:00
5a1831db98 Success! 2023-06-03 00:34:23 +08:00
a643f8b0db debug translation 2023-06-02 23:06:01 +08:00
601712fd0a latex toolchain 2023-06-02 21:44:11 +08:00
e769f831c7 latex 2023-06-02 14:07:04 +08:00
dcd952671f Update main.py 2023-06-01 15:56:52 +08:00
06564df038 Merge branch 'langchain' 2023-06-01 09:39:34 +08:00
2f037f30d5 Temporarily remove plugin locking 2023-06-01 09:39:00 +08:00
efedab186d Merge branch 'master' into langchain 2023-06-01 00:10:22 +08:00
f49cae5116 Update Langchain知识库.py 2023-06-01 00:09:07 +08:00
2b620ccf2e Update prompts 2023-06-01 00:07:19 +08:00
a1b7a4da56 Update test cases 2023-06-01 00:03:27 +08:00
61b0e49fed fix some bugs in linux 2023-05-31 23:49:25 +08:00
f60dc371db 12 2023-05-31 10:42:44 +08:00
0a3433b8ac Update README.md 2023-05-31 10:37:08 +08:00
31bce54abb Update README.md 2023-05-31 10:34:21 +08:00
5db1530717 Merge branch 'langchain' of github.com:binary-husky/chatgpt_academic into langchain 2023-05-30 20:08:47 +08:00
c32929fd11 Merge branch 'master' into langchain 2023-05-30 20:08:15 +08:00
3e4c2b056c knowledge base 2023-05-30 19:55:38 +08:00
e79e9d7d23 Merge branch 'master' into langchain 2023-05-30 18:31:39 +08:00
d175b93072 Update README.md.Italian.md 2023-05-30 17:27:41 +08:00
ed254687d2 Update README.md.Italian.md 2023-05-30 17:26:12 +08:00
c0392f7074 Update README.md.Korean.md 2023-05-30 17:25:32 +08:00
f437712af7 Update README.md.Portuguese.md 2023-05-30 17:22:46 +08:00
6d1ea643e9 langchain 2023-05-30 12:54:42 +08:00
9e84cfcd46 Update README.md 2023-05-29 19:48:34 +08:00
897695d29f Fix file blocking under secondary paths 2023-05-28 20:25:35 +08:00
1dcc2873d2 Fix Gradio config leak 2023-05-28 20:23:47 +08:00
42cf738a31 Fix copy button failing in some cases 2023-05-28 18:12:48 +08:00
e4646789af Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-05-28 16:07:29 +08:00
e6c3aabd45 docker-compose check 2023-05-28 16:07:24 +08:00
6789d1fab4 Update README.md 2023-05-28 11:21:50 +08:00
7a733f00a2 Update README.md 2023-05-28 00:19:23 +08:00
dd55888f0e Update README.md 2023-05-28 00:16:45 +08:00
0327df22eb Update README.md 2023-05-28 00:14:54 +08:00
e544f5e9d0 Update README.md 2023-05-27 23:45:15 +08:00
0fad4f44a4 fix dockerfile 2023-05-27 23:36:42 +08:00
1240dd6f26 local gradio 2023-05-27 23:29:22 +08:00
d6be947177 Fix gradio dependency installation issue 2023-05-27 23:10:44 +08:00
3cfbdce9f2 remove limitation for now 2023-05-27 22:25:50 +08:00
1ee471ff57 fix reminder 2023-05-27 22:20:46 +08:00
25ccecf8e3 Update README.md 2023-05-27 21:56:43 +08:00
9e991bfa3e Update requirements.txt 2023-05-27 21:56:16 +08:00
221efd0193 Update README.md 2023-05-27 21:11:25 +08:00
976b9bf65f Update README.md 2023-05-27 21:04:52 +08:00
ae5783e383 Fix gradio copy-button bug 2023-05-27 20:20:45 +08:00
30224af042 Merge pull request #798 from Bit0r/master
🐛 Regex for matching LaTeX comments
2023-05-27 14:03:07 +08:00
8ff7c15cd8 🐛 Regex for matching LaTeX comments 2023-05-27 11:19:48 +08:00
f3205994ea Add support for the Azure OpenAI API 2023-05-26 23:22:12 +08:00
ec8cc48a4d Add ProxyNetworkActivate 2023-05-25 23:48:18 +08:00
5d75c578b9 fix dependency 2023-05-25 15:28:27 +08:00
cd411c2eea newbing-free deps 2023-05-25 15:12:54 +08:00
13 changed files with 56 additions and 55 deletions

View File

@@ -1,15 +1,3 @@
----
-title: ChatImprovement
-emoji: 😻
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
----
 # ChatGPT 学术优化
 > **Note**
 >
 > 2023.5.27 对Gradio依赖进行了调整Fork并解决了官方Gradio的若干Bugs。请及时**更新代码**并重新更新pip依赖。安装依赖时请严格选择`requirements.txt`中**指定的版本**

View File

@@ -45,9 +45,10 @@ WEB_PORT = -1
 # 如果OpenAI不响应网络卡顿、代理失败、KEY失效重试的次数限制
 MAX_RETRY = 2
 
-# OpenAI模型选择是gpt4现在只对申请成功的人开放
-LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm"
-AVAIL_LLM_MODELS = ["newbing-free", "gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"]
+# 模型选择是 (注意: LLM_MODEL是默认选中的模型, 同时它必须被包含在AVAIL_LLM_MODELS切换列表中 )
+LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt35", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "newbing-free", "stack-claude"]
+# P.S. 其他可用的模型还包括 ["gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "newbing-free", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
 
 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"

@@ -55,6 +56,9 @@ LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
 # 设置gradio的并行线程数不需要修改
 CONCURRENT_COUNT = 100
 
+# 是否在提交时自动清空输入框
+AUTO_CLEAR_TXT = False
+
 # 加一个live2d装饰
 ADD_WAIFU = False
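The new comment states an invariant worth checking: LLM_MODEL is the default selection and must also appear in the AVAIL_LLM_MODELS switch list. A minimal sketch of that consistency check (the `validate_model_config` helper is hypothetical, not part of config.py; the values mirror the hunk above):

```python
# Hypothetical sanity check for the config.py invariant above:
# the default LLM_MODEL must be a member of the AVAIL_LLM_MODELS switch list.
LLM_MODEL = "gpt-3.5-turbo"
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt35",
                    "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm",
                    "moss", "newbing", "newbing-free", "stack-claude"]

def validate_model_config(default_model, avail_models):
    """Return the default model, failing fast if it cannot be switched to."""
    if default_model not in avail_models:
        raise ValueError(f"LLM_MODEL={default_model!r} must be listed in AVAIL_LLM_MODELS")
    return default_model
```

Failing fast at startup is cheaper than discovering the mismatch when the model dropdown renders.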

View File

@@ -63,6 +63,7 @@ def get_core_functions():
         "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL" +
                   r"然后请使用Markdown格式封装并且不要有反斜线不要用代码块。现在请按以下描述给我发送图片" + "\n\n",
         "Suffix": r"",
+        "Visible": False,
     },
     "解释代码": {
         "Prefix": r"请解释以下代码:" + "\n```\n",

@@ -73,6 +74,5 @@ def get_core_functions():
                   r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
                   r"Items need to be transformed:",
         "Suffix": r"",
-        "Visible": False,
     }
 }
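The "Visible" field moved here is consumed in main.py when registering function buttons (`if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue`). A sketch of that filtering logic, with a toy dict standing in for `get_core_functions()`:

```python
def visible_functions(functional):
    # Mirror main.py's registration filter: skip entries whose
    # "Visible" key is present and explicitly False.
    return [k for k in functional
            if not (("Visible" in functional[k]) and (not functional[k]["Visible"]))]

# Toy stand-in for get_core_functions(): the image-search entry is hidden,
# the code-explain entry stays visible.
demo_functional = {
    "找图片": {"Prefix": "我需要你找一张网络图片", "Suffix": "", "Visible": False},
    "解释代码": {"Prefix": "请解释以下代码:", "Suffix": ""},
}
```

Entries without a "Visible" key default to visible, so only explicit opt-outs disappear from the UI.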

View File

@@ -193,8 +193,9 @@ def test_Latex():
     # txt = r"https://arxiv.org/abs/2212.10156"
     # txt = r"https://arxiv.org/abs/2211.11559"
     # txt = r"https://arxiv.org/abs/2303.08774"
-    txt = r"https://arxiv.org/abs/2303.12712"
+    # txt = r"https://arxiv.org/abs/2303.12712"
     # txt = r"C:\Users\fuqingxu\arxiv_cache\2303.12712\workfolder"
+    txt = r"2306.17157" # 这个paper有个input命令文件名大小写错误
 
     for cookies, cb, hist, msg in (Latex翻译中文并重新编译PDF)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):

View File

@@ -189,6 +189,18 @@ def rm_comments(main_file):
     main_file = re.sub(r'(?<!\\)%.*', '', main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
     return main_file
 
+def find_tex_file_ignore_case(fp):
+    dir_name = os.path.dirname(fp)
+    base_name = os.path.basename(fp)
+    if not base_name.endswith('.tex'): base_name+='.tex'
+    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
+    # go case in-sensitive
+    import glob
+    for f in glob.glob(dir_name+'/*.tex'):
+        base_name_s = os.path.basename(fp)
+        if base_name_s.lower() == base_name.lower(): return f
+    return None
+
 def merge_tex_files_(project_foler, main_file, mode):
     """
     Merge Tex project recrusively
@@ -197,14 +209,11 @@ def merge_tex_files_(project_foler, main_file, mode):
     for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
         f = s.group(1)
         fp = os.path.join(project_foler, f)
-        if os.path.exists(fp):
-            # e.g., \input{srcs/07_appendix.tex}
-            with open(fp, 'r', encoding='utf-8', errors='replace') as fx:
-                c = fx.read()
+        fp = find_tex_file_ignore_case(fp)
+        if fp:
+            with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()
         else:
-            # e.g., \input{srcs/07_appendix}
-            with open(fp+'.tex', 'r', encoding='utf-8', errors='replace') as fx:
-                c = fx.read()
+            raise RuntimeError(f'找不到{fp}Tex源文件缺失')
         c = merge_tex_files_(project_foler, c, mode)
         main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
     return main_file
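The "Latex File Name Bug Patch" above makes `\input{...}` resolution tolerant of a missing `.tex` extension and of wrong filename case. The standalone sketch below reimplements that lookup for illustration; note that in the fallback loop it compares each glob candidate's own basename, whereas the hunk above recomputes the basename of the query `fp` on every pass — treat this as a sketch of the intended behavior, not the repository code:

```python
import glob
import os

def find_tex_file_ignore_case(fp):
    """Resolve an \\input{...} target: try the exact path (adding .tex if
    missing), then fall back to a case-insensitive scan of sibling .tex files.
    Returns the resolved path, or None if nothing matches."""
    dir_name = os.path.dirname(fp)
    base_name = os.path.basename(fp)
    if not base_name.endswith('.tex'):
        base_name += '.tex'
    exact = os.path.join(dir_name, base_name)
    if os.path.exists(exact):
        return exact
    # Case-insensitive fallback: compare each candidate's basename.
    for f in glob.glob(os.path.join(dir_name, '*.tex')):
        if os.path.basename(f).lower() == base_name.lower():
            return f
    return None
```

With this in place, a paper whose source says `\input{appendix}` still resolves even if the file on disk is named `Appendix.tex`.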

View File

@@ -27,8 +27,10 @@ def gen_image(llm_kwargs, prompt, resolution="256x256"):
     }
     response = requests.post(url, headers=headers, json=data, proxies=proxies)
     print(response.content)
-    image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
+    try:
+        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
+    except:
+        raise RuntimeError(response.content.decode())
 
     # 文件保存到本地
     r = requests.get(image_url, proxies=proxies)
     file_path = 'gpt_log/image_gen/'
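This is the "report image generation error in UI" change: instead of an opaque KeyError or JSONDecodeError, the raw response body (typically the API's own error message) is surfaced. A standalone sketch of that parsing step (`parse_image_url` is an illustrative helper; the payload shape follows the OpenAI image-generation response):

```python
import json

def parse_image_url(raw_bytes):
    # On success, return data[0].url from the JSON body. On any parsing
    # or shape mismatch, re-raise with the server's own response text so
    # the UI can show the real reason (quota, billing, bad request, ...).
    try:
        return json.loads(raw_bytes.decode('utf8'))['data'][0]['url']
    except Exception:
        raise RuntimeError(raw_bytes.decode())
```

The bare `except:` in the hunk above is kept broad here on purpose: any failure mode should fall back to showing the raw body.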

View File

@@ -1,5 +1,5 @@
 from toolbox import CatchException, report_execption, write_results_to_file
-from toolbox import update_ui
+from toolbox import update_ui, promote_file_to_downloadzone
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
 from .crazy_utils import read_and_clean_pdf_text

@@ -147,23 +147,14 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
         print('writing html result failed:', trimmed_format_exc())
 
     # 准备文件的下载
-    import shutil
     for pdf_path in generated_conclusion_files:
         # 重命名文件
-        rename_file = f'./gpt_log/翻译-{os.path.basename(pdf_path)}'
-        if os.path.exists(rename_file):
-            os.remove(rename_file)
-        shutil.copyfile(pdf_path, rename_file)
-        if os.path.exists(pdf_path):
-            os.remove(pdf_path)
+        rename_file = f'翻译-{os.path.basename(pdf_path)}'
+        promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
     for html_path in generated_html_files:
         # 重命名文件
-        rename_file = f'./gpt_log/翻译-{os.path.basename(html_path)}'
-        if os.path.exists(rename_file):
-            os.remove(rename_file)
-        shutil.copyfile(html_path, rename_file)
-        if os.path.exists(html_path):
-            os.remove(html_path)
+        rename_file = f'翻译-{os.path.basename(html_path)}'
+        promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
     chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
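`promote_file_to_downloadzone` replaces the manual remove/copy/remove dance above. A minimal sketch of what such a helper might do under the hood — copy the artifact into a download directory under its display name, overwriting stale copies. This is an assumption about the toolbox implementation: the real helper presumably also registers the file with the chatbot's cookie state, which the `chatbot` parameter here only mimics:

```python
import os
import shutil

def promote_file_to_downloadzone(file_path, rename_file=None, chatbot=None,
                                 download_dir='gpt_log'):
    """Sketch: copy file_path into download_dir as rename_file and return
    the new path. chatbot is accepted to mirror the call site but unused."""
    os.makedirs(download_dir, exist_ok=True)
    rename_file = rename_file or os.path.basename(file_path)
    dest = os.path.join(download_dir, rename_file)
    if os.path.exists(dest):
        os.remove(dest)  # overwrite any stale copy from a previous run
    shutil.copyfile(file_path, dest)
    return dest
```

Centralizing this removes six lines of boilerplate per artifact at each call site, which is exactly what the hunk above shows.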

View File

@@ -13,11 +13,11 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
         web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
-    chatbot.append((txt, "正在同时咨询gpt-3.5和gpt-4……"))
+    chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
 
     # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口用&符号分隔
-    llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # 支持任意数量的llm接口用&符号分隔
+    llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口用&符号分隔
     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
         inputs=txt, inputs_show_user=txt,
         llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
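As the comment in the hunk notes, any number of LLM backends can be queried at once by joining their names with `&`. A sketch of that fan-out convention (the backend callables here are stand-ins, not the project's request functions):

```python
def fan_out(llm_model, inputs, backends):
    """Dispatch one prompt to every backend named in an '&'-joined string,
    e.g. 'chatglm&gpt-3.5-turbo', and collect each reply by model name."""
    replies = {}
    for name in llm_model.split('&'):
        replies[name] = backends[name](inputs)
    return replies
```

This is why the plugin only needs to rewrite `llm_kwargs['llm_model']`: the downstream request layer splits on `&` and runs the models side by side.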

View File

@@ -104,7 +104,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         meta_paper_info_list = meta_paper_info_list[batchsize:]
 
     chatbot.append(["状态?",
-                    "已经全部完成您可以试试让AI写一个Related Works例如您可以继续输入Write an academic \"Related Works\" section about \"你搜索的研究领域\" for me."])
+                    "已经全部完成您可以试试让AI写一个Related Works例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
     msg = '正常'
     yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
     res = write_results_to_file(history)

Binary file not shown.

View File

@@ -1,15 +1,13 @@
 import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
 
 def main():
-    import subprocess, sys
-    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio-stable-fork'])
     import gradio as gr
-    if gr.__version__ not in ['3.28.3','3.32.3']: assert False, "用 pip install -r requirements.txt 安装依赖"
+    if gr.__version__ not in ['3.28.3','3.32.2']: assert False, "需要特殊依赖,请务必用 pip install -r requirements.txt 指令安装依赖详情信息见requirements.txt"
     from request_llm.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
     # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
-    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY, AVAIL_LLM_MODELS = \
-        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY', 'AVAIL_LLM_MODELS')
+    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = \
+        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT

@@ -56,7 +54,6 @@ def main():
     cancel_handles = []
     with gr.Blocks(title="ChatGPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
         gr.HTML(title_html)
-        gr.HTML('''<center><a href="https://huggingface.co/spaces/qingxu98/gpt-academic?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>请您打开此页面后务必点击上方的“复制空间”Duplicate Space按钮<font color="#FF00FF">使用时先在输入框填入API-KEY然后回车。</font><br/>切忌在“复制空间”Duplicate Space之前填入API_KEY或进行提问否则您的API_KEY将极可能被空间所有者攫取<br/>支持任意数量的OpenAI的密钥和API2D的密钥共存例如输入"OpenAI密钥1,API2D密钥2",然后提交,即可同时使用两种模型接口。</center>''')
         cookies = gr.State({'api_key': API_KEY, 'llm_model': LLM_MODEL})
         with gr_L1():
             with gr_L2(scale=2):

@@ -66,7 +63,7 @@ def main():
             with gr_L2(scale=1):
                 with gr.Accordion("输入区", open=True) as area_input_primary:
                     with gr.Row():
-                        txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥输入多个密钥时用英文逗号间隔。支持OpenAI密钥和API2D密钥共存。").style(container=False)
+                        txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
                     with gr.Row():
                         submitBtn = gr.Button("提交", variant="primary")
                     with gr.Row():

@@ -107,7 +104,7 @@ def main():
                     system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt)
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                     temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
-                    max_length_sl = gr.Slider(minimum=256, maximum=4096, value=512, step=1, interactive=True, label="Local LLM MaxLength",)
+                    max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
                     checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)

@@ -147,6 +144,11 @@ def main():
         resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
         clearBtn.click(lambda: ("",""), None, [txt, txt2])
         clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        if AUTO_CLEAR_TXT:
+            submitBtn.click(lambda: ("",""), None, [txt, txt2])
+            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
+            txt.submit(lambda: ("",""), None, [txt, txt2])
+            txt2.submit(lambda: ("",""), None, [txt, txt2])
         # 基础功能区的回调函数注册
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue

@@ -200,7 +202,10 @@ def main():
     threading.Thread(target=warm_up_modules, name="warm-up", daemon=True).start()
     auto_opentab_delay()
-    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
+    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
+        server_name="0.0.0.0", server_port=PORT,
+        favicon_path="docs/logo.png", auth=AUTHENTICATION,
+        blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
 
     # 如果需要在二级路径下运行
     # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
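The first hunk shows the cost of each new option: AUTO_CLEAR_TXT must be threaded through `get_conf`'s positional unpacking, whose return order matches the requested key names. A sketch of that pattern with a toy config namespace (`get_conf` here is a stand-in, not the toolbox implementation):

```python
# Toy config namespace standing in for config.py / config_private.py.
CONFIG = {'WEB_PORT': -1, 'LLM_MODEL': 'gpt-3.5-turbo', 'AUTO_CLEAR_TXT': False}

def get_conf(*names):
    # Stand-in for toolbox.get_conf: return one value per requested key,
    # in order, so callers unpack the tuple positionally as main.py does.
    return tuple(CONFIG[n] for n in names)

# Adding a key means extending BOTH the unpack target list and the name list.
WEB_PORT, AUTO_CLEAR_TXT = get_conf('WEB_PORT', 'AUTO_CLEAR_TXT')
```

Because the unpacking is positional, a key added on one side of the `= \` but not the other fails immediately with a ValueError, which is the safe failure mode.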

View File

@@ -1,3 +1,4 @@
+./docs/gradio-3.32.2-py3-none-any.whl
 tiktoken>=0.3.3
 requests[socks]
 transformers