Compare commits

...

27 Commits

Author SHA1 Message Date
ec1cfaadba pip 2023-07-28 12:28:04 +08:00
2747c23868 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-07-28 10:35:50 +08:00
f446dbb62d Update README.md 2023-07-28 09:54:03 +08:00
8d37d94e2c Update README.md 2023-07-28 09:53:17 +08:00
4216c5196e verify ignore history practice 2023-07-27 22:30:55 +08:00
2df660a718 Merge pull request #992 from yangchuansheng/master (Update README.md) 2023-07-26 22:46:43 +08:00
bb496a9c2c Update README.md 2023-07-26 22:46:21 +08:00
4e0737c0c2 Update README.md 2023-07-26 22:46:02 +08:00
4bb3cba5c8 Update README.md 2023-07-26 18:53:42 +08:00
08b9b0d140 improve audio assistant documents 2023-07-26 18:51:33 +08:00
3577a72a3b add audio assistant docker compose solution 2023-07-26 18:39:32 +08:00
0328d6f498 add ALIYUN ACCESSKEY SECRET 2023-07-26 18:28:15 +08:00
d437305a4f add audio assistant docker 2023-07-26 18:16:59 +08:00
c4899bcb20 long-term aliyun access 2023-07-26 18:09:28 +08:00
4295764f8c Update README.md (add Sealos deployment option) 2023-07-25 16:38:37 +08:00
e4e2430255 version 3.47 2023-07-24 19:58:47 +08:00
1732127a28 Merge pull request #979 from fenglui/master (add chatGLM INT4 config support so low-VRAM machines can also run chatGLM) 2023-07-24 19:52:27 +08:00
56bb8b6498 improve re efficiency 2023-07-24 18:50:29 +08:00
e93b6fa3a6 Add GLM INT8 2023-07-24 18:19:57 +08:00
dd4ba0ea22 Merge branch 'master' of https://github.com/fenglui/gpt_academic into fenglui-master 2023-07-24 18:06:15 +08:00
c2701c9ce5 Merge pull request #986 from one-pr/git-clone (clone only the latest commit by default to reduce git clone size) 2023-07-24 17:48:35 +08:00
2f019ce359 Optimize the remaining git clone commands in README.md 2023-07-24 15:14:48 +08:00
c5b147aeb7 Clone only the latest commit by default to reduce git clone size 2023-07-24 15:14:42 +08:00
5813d65e52 Add chatGLM INT4 config support so low-VRAM machines can also run chatGLM 2023-07-22 08:29:15 +08:00
a393edfaa4 ALLOW CUSTOM API KEY PATTERN 2023-07-21 22:49:07 +08:00
dd7a01cda5 Merge pull request #976 from fenglui/master (fix msg.data.split(DELIMITER) exception when msg.data is int) 2023-07-21 17:02:29 +08:00
00a3b91f95 fix msg.data.split(DELIMITER) exception when msg.data is int 2023-07-21 03:51:33 +08:00
26 changed files with 298 additions and 95 deletions

View File

@@ -0,0 +1,44 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-audio-assistant
on:
  push:
    branches:
      - 'master'

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_audio_assistant

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+NoLocal+AudioAssistant
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
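Once this workflow has run on master, the published image should be pullable directly. A minimal sketch of consuming it (assumes Docker is installed; the `:master` tag is the branch tag produced by docker/metadata-action and matches the tag used by docker-compose scheme 5 further below):

```python
# Hedged sketch: pull and run the image published by the workflow above.
import subprocess

image = "ghcr.io/binary-husky/gpt_academic_audio_assistant:master"
subprocess.run(["docker", "pull", image], check=True)
# --net=host mirrors the run instructions used elsewhere in this repository
subprocess.run(["docker", "run", "--rm", "--net=host", image], check=True)
```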

View File

@@ -93,7 +93,7 @@ One-click LaTeX paper proofreading | [Function plugin] Grammarly-style grammar and spelling correction for LaTeX papers
1. Download the project
```sh
-git clone https://github.com/binary-husky/gpt_academic.git
+git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```
@@ -116,7 +116,7 @@ python -m pip install -r requirements.txt # this step is the same as a regular pip install
```

-<details><summary>Click to expand if you need Tsinghua ChatGLM2 / Fudan MOSS as a backend</summary>
+<details><summary>Click to expand if you need Tsinghua ChatGLM2 / Fudan MOSS / RWKV as a backend</summary>
<p>

[Optional step] To use Tsinghua ChatGLM2 / Fudan MOSS as a backend, extra dependencies must be installed (prerequisites: comfortable with Python + have used PyTorch + a machine powerful enough):
@@ -126,9 +126,12 @@ python -m pip install -r request_llm/requirements_chatglm.txt

# [Optional step II] Support Fudan MOSS
python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss           # note: this must be run from the project root
+git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llm/moss # note: this must be run from the project root

-# [Optional step III] Make sure AVAIL_LLM_MODELS in config.py contains the expected models (all currently supported models are listed below; the jittorllms series currently only supports the docker solution)
+# [Optional step III] Support RWKV Runner
+See the wiki: https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner
+
+# [Optional step IV] Make sure AVAIL_LLM_MODELS in config.py contains the expected models (all currently supported models are listed below; the jittorllms series currently only supports the docker solution)
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```
@@ -147,9 +150,10 @@ python main.py
1. ChatGPT only (recommended for most people; equivalent to docker-compose scheme 1)
[![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
[![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
+[![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

``` sh
-git clone https://github.com/binary-husky/gpt_academic.git           # download the project
+git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # download the project
cd gpt_academic                                                      # enter the directory
nano config.py                                                       # edit config.py with any text editor: configure "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
docker build -t gpt-academic .                                       # build
@@ -195,10 +199,12 @@ docker-compose up
5. Remote deployment on a cloud server (requires cloud-server knowledge and experience).
Please visit [deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

-6. Using WSL2 (Windows Subsystem for Linux).
+6. Using Sealos for [one-click deployment](https://github.com/binary-husky/gpt_academic/issues/993).
+
+7. Using WSL2 (Windows Subsystem for Linux).
Please visit [deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

-7. How to run under a sub-path URL (e.g. `http://localhost/subpath`).
+8. How to run under a sub-path URL (e.g. `http://localhost/subpath`).
Please visit [FastAPI instructions](docs/WithFastapi.md)

View File

@@ -80,6 +80,7 @@ ChatGLM_PTUNING_CHECKPOINT = "" # e.g. "/home/hmp/ChatGLM2-6B/ptuning/output/6b

# Execution device for local LLMs such as ChatGLM: CPU/GPU
LOCAL_MODEL_DEVICE = "cpu" # can be "cuda"
+LOCAL_MODEL_QUANT = "FP16" # default "FP16"; "INT4" enables INT4 quantization, "INT8" enables INT8 quantization

# Number of parallel gradio threads (no need to change)
@@ -131,9 +132,14 @@ put your new bing cookies here
# Aliyun realtime speech recognition (configuration is fairly involved; recommended for advanced users only); see https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
ENABLE_AUDIO = False
-ALIYUN_TOKEN=""     # e.g. f37f30e0f9934c34a992f6f64f7eba4f
-ALIYUN_APPKEY=""    # e.g. RoPlZrM88DnAFkZK
+ALIYUN_TOKEN=""     # e.g. f37f30e0f9934c34a992f6f64f7eba4f
+ALIYUN_APPKEY=""    # e.g. RoPlZrM88DnAFkZK
+ALIYUN_ACCESSKEY="" # (optional; not required)
+ALIYUN_SECRET=""    # (optional; not required)

# Claude API KEY
-ANTHROPIC_API_KEY = ""
+ANTHROPIC_API_KEY = ""

+# Custom API-key format
+CUSTOM_API_KEY_PATTERN = ""
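These options are read through get_conf like any other. Per the toolbox.py note further below, secrets are best kept in a config_private.py that shadows config.py; a minimal sketch (all values hypothetical):

```python
# Hypothetical config_private.py — any name defined here overrides config.py.
LOCAL_MODEL_QUANT = "INT4"    # run ChatGLM2 quantized, for small-VRAM GPUs
ENABLE_AUDIO = True           # turn on the realtime audio assistant
ALIYUN_APPKEY = "..."         # from the Aliyun NLS console
ALIYUN_ACCESSKEY = "..."      # together with ALIYUN_SECRET, lets the app
ALIYUN_SECRET = "..."         # fetch ALIYUN_TOKEN automatically (see get_token below)
CUSTOM_API_KEY_PATTERN = ""   # empty keeps the default sk-... key check
```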

View File

@@ -1,7 +1,7 @@
# 'primary'   corresponds to primary_hue in theme.py
# 'secondary' corresponds to neutral_hue in theme.py
# 'stop'      corresponds to color_er in theme.py
# The default button color is secondary
import importlib
from toolbox import clear_line_break
@@ -14,7 +14,12 @@ def get_core_functions():
                    r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
        # Postscript
        "Suffix": r"",
-        "Color": r"secondary",    # button color
+        # Button color (default: secondary)
+        "Color": r"secondary",
+        # Whether the button is visible (default: True, i.e. visible)
+        "Visible": True,
+        # Whether to clear the history when triggered (default: False, i.e. keep the previous conversation history)
+        "AutoClearHistory": True
    },
    "中文学术润色": {
        "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
@@ -76,3 +81,13 @@ def get_core_functions():
            "Suffix": r"",
        }
    }
+
+
+def handle_core_functionality(additional_fn, inputs, history):
+    import core_functional
+    importlib.reload(core_functional)    # hot-reload the prompts
+    core_functional = core_functional.get_core_functions()
+    if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
+    inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+    history = [] if core_functional[additional_fn].get("AutoClearHistory", False) else history
+    return inputs, history
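To illustrate the contract of the new helper, here is a self-contained sketch using a made-up button entry rather than the real get_core_functions() table:

```python
# Hedged illustration: Prefix/Suffix wrap the input, and AutoClearHistory
# decides whether the previous conversation turns are dropped.
functions = {
    "DemoPolish": {
        "Prefix": "Please polish the following text:\n\n",
        "Suffix": "",
        "Color": "secondary",
        "Visible": True,
        "AutoClearHistory": True,   # drop prior turns when this button fires
    }
}

def apply(fn_name, inputs, history):
    fn = functions[fn_name]
    if "PreProcess" in fn:
        inputs = fn["PreProcess"](inputs)
    inputs = fn["Prefix"] + inputs + fn["Suffix"]
    history = [] if fn.get("AutoClearHistory", False) else history
    return inputs, history

print(apply("DemoPolish", "hello wrold", ["old turn", "old reply"]))
# -> ('Please polish the following text:\n\nhello wrold', [])
```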

View File

@@ -22,7 +22,8 @@ def split_subprocess(txt, project_folder, return_dict, opts):
    mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM
    # Absorb everything above the title and authors
-    text, mask = set_forbidden_text(text, mask, r"(.*?)\\maketitle", re.DOTALL)
+    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\maketitle", re.DOTALL)
+    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\begin{document}", re.DOTALL)
    # Absorb \iffalse comments
    text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
    # Absorb begin-end pairs spanning at most 42 lines
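To make the masking idea concrete, here is a minimal self-contained sketch; the PRESERVE/TRANSFORM values and the set_forbidden_text body are assumptions modeled on the hunk above, not the project's exact code:

```python
import re
import numpy as np

PRESERVE, TRANSFORM = 0, 1   # assumed flag values: PRESERVE = leave untouched

def set_forbidden_text(text, mask, pattern, flags=0):
    # Mark every span matched by `pattern` as PRESERVE so later stages skip it.
    for m in re.finditer(pattern, text, flags):
        mask[m.start():m.end()] = PRESERVE
    return text, mask

doc = r"\documentclass{article}\begin{document}\maketitle body text \end{document}"
mask = np.zeros(len(doc), dtype=np.uint8) + TRANSFORM
# The ^ anchor ties the match to the start of the file, as in the fix above.
doc, mask = set_forbidden_text(doc, mask, r"^(.*?)\\maketitle", re.DOTALL)
print(mask[:10], mask[-10:])   # preamble masked PRESERVE, body still TRANSFORM
```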

View File

@@ -19,7 +19,7 @@ class AliyunASR():
        pass

    def test_on_error(self, message, *args):
-        # print("on_error args=>{}".format(args))
+        print("on_error args=>{}".format(args))
        pass

    def test_on_close(self, *args):
@@ -50,6 +50,8 @@ class AliyunASR():
        rad.clean_up()
        temp_folder = tempfile.gettempdir()
        TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
+        if len(TOKEN) == 0:
+            TOKEN = self.get_token()
        self.aliyun_service_ok = True
        URL="wss://nls-gateway.aliyuncs.com/ws/v1"
        sr = nls.NlsSpeechTranscriber(
@@ -91,3 +93,38 @@ class AliyunASR():
                self.stop = True
                self.stop_msg = 'Aliyun audio service error. Please check whether ALIYUN_TOKEN and ALIYUN_APPKEY have expired.'
        r = sr.stop()

+    def get_token(self):
+        from toolbox import get_conf
+        import json
+        from aliyunsdkcore.request import CommonRequest
+        from aliyunsdkcore.client import AcsClient
+        AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')
+
+        # Create the AcsClient instance
+        client = AcsClient(
+            AccessKey_ID,
+            AccessKey_secret,
+            "cn-shanghai"
+        )
+
+        # Create the request and set its parameters
+        request = CommonRequest()
+        request.set_method('POST')
+        request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
+        request.set_version('2019-02-28')
+        request.set_action_name('CreateToken')
+
+        token = None   # avoid an unbound name if the request below fails
+        try:
+            response = client.do_action_with_exception(request)
+            print(response)
+            jss = json.loads(response)
+            if 'Token' in jss and 'Id' in jss['Token']:
+                token = jss['Token']['Id']
+                expireTime = jss['Token']['ExpireTime']
+                print("token = " + token)
+                print("expireTime = " + str(expireTime))
+        except Exception as e:
+            print(e)
+
+        return token
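Since CreateToken returns an ExpireTime alongside the token id, a caller could cache the token instead of re-requesting it each session. A hedged sketch (assuming ExpireTime is a unix timestamp, as the printout above suggests, and a fetch function of your own that returns both values):

```python
import time

_token_cache = {"token": None, "expire": 0}

def get_cached_token(fetch):
    # `fetch` is assumed to return (token, expire_time) — e.g. a variant of
    # AliyunASR.get_token above that also returns jss['Token']['ExpireTime'].
    if _token_cache["token"] is None or time.time() > _token_cache["expire"] - 60:
        _token_cache["token"], _token_cache["expire"] = fetch()
    return _token_cache["token"]
```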

View File

@@ -179,12 +179,12 @@ def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
        import nls
        from scipy import io
    except:
-        chatbot.append(["Failed to import dependencies", "This module requires extra dependencies; install with: ```pip install --upgrade pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
+        chatbot.append(["Failed to import dependencies", "This module requires extra dependencies; install with: ```pip install --upgrade aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

-    TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
-    if TOKEN == "" or APPKEY == "":
+    APPKEY = get_conf('ALIYUN_APPKEY')
+    if APPKEY == "":
        chatbot.append(["Failed to import dependencies", "No Aliyun speech recognition APPKEY/TOKEN; see https://help.aliyun.com/document_detail/450255.html for details"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

View File

@@ -115,3 +115,36 @@ services:
    command: >
      bash -c "python3 -u main.py"

+## ===================================================
+## [Scheme 5] ChatGPT + audio assistant (please read docs/use_audio.md first)
+## ===================================================
+version: '3'
+services:
+  gpt_academic_with_audio:
+    image: ghcr.io/binary-husky/gpt_academic_audio_assistant:master
+    environment:
+      # See `config.py` for all configuration options
+      API_KEY:            ' fk195831-IdP0Pb3W6DCMUIbQwVX6MsSiyxwqybyS '
+      USE_PROXY:          ' False '
+      proxies:            ' None '
+      LLM_MODEL:          ' gpt-3.5-turbo '
+      AVAIL_LLM_MODELS:   ' ["gpt-3.5-turbo", "gpt-4"] '
+      ENABLE_AUDIO:       ' True '
+      LOCAL_MODEL_DEVICE: ' cuda '
+      DEFAULT_WORKER_NUM: ' 20 '
+      WEB_PORT:           ' 12343 '
+      ADD_WAIFU:          ' True '
+      THEME:              ' Chuanhu-Small-and-Beautiful '
+      ALIYUN_APPKEY:      ' RoP1ZrM84DnAFkZK '
+      ALIYUN_TOKEN:       ' f37f30e0f9934c34a992f6f64f7eba4f '
+      # (optional) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
+      # (optional) ALIYUN_SECRET:    ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '
+
+    # Share the host's network
+    network_mode: "host"
+
+    # Pull the latest code without going through the proxy network
+    command: >
+      bash -c "python3 -u main.py"

View File

@@ -0,0 +1,22 @@
# This Dockerfile targets environments without local models. To use local models such as chatglm, see docs/Dockerfile+ChatGLM
# Build: edit `config.py` first, then: docker build -t gpt-academic-nolocal -f docs/Dockerfile+NoLocal .
# Run:   docker run --rm -it --net=host gpt-academic-nolocal
FROM python:3.11

# Working directory
WORKDIR /gpt

# Copy the project files
COPY . .

# Install dependencies
RUN pip3 install -r requirements.txt

# Install the extra dependencies for the audio plugin
RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git

# Optional step: warm up modules
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

# Start
CMD ["python3", "-u", "main.py"]

View File

@@ -28,6 +28,16 @@ ALIYUN_APPKEY = "RoPlZrM88DnAFkZK" # this appkey is no longer valid

See https://help.aliyun.com/document_detail/450255.html
You first need an Aliyun developer account. After logging in, enable the "Intelligent Speech Interaction" feature (a token is available for free), then create a project under "All Projects" to obtain an appkey.

- Advanced feature
Additionally fill in ALIYUN_ACCESSKEY and ALIYUN_SECRET to have ALIYUN_TOKEN fetched automatically:
```
ALIYUN_APPKEY = "RoP1ZrM84DnAFkZK"
ALIYUN_TOKEN = ""
ALIYUN_ACCESSKEY = "LTAI5q6BrFUzoRXVGUWnekh1"
ALIYUN_SECRET = "eHmI20AVWIaQZ0CiTD2bGQVsaP9i68"
```

## 3. Launch
Launch gpt-academic: `python main.py`
@@ -48,7 +58,7 @@ III `[Use VoiceMeeter to intercept the speaker output of specific software such as Tencent Meeting]`
VI When switching between the two audio-monitoring modes, the page must be refreshed to take effect.
VII Pitfall: when running on a non-localhost host without https, the recording feature cannot be enabled; see https://blog.csdn.net/weixin_39461487/article/details/109594434

## 5. Click "Realtime audio capture" in the function-plugin area, or another audio interaction feature

View File

@@ -37,15 +37,23 @@ class GetGLMHandle(Process):
        # Runs in the child process
        # On first run, load the parameters
        retry = 0
+        LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
+        if LOCAL_MODEL_QUANT == "INT4":         # INT4
+            _model_name_ = "THUDM/chatglm2-6b-int4"
+        elif LOCAL_MODEL_QUANT == "INT8":       # INT8
+            _model_name_ = "THUDM/chatglm2-6b-int8"
+        else:
+            _model_name_ = "THUDM/chatglm2-6b"  # FP16
+
        while True:
            try:
                if self.chatglm_model is None:
-                    self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
-                    device, = get_conf('LOCAL_MODEL_DEVICE')
+                    self.chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True)
                    if device=='cpu':
-                        self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).float()
+                        self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).float()
                    else:
-                        self.chatglm_model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
+                        self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).half().cuda()
                    self.chatglm_model = self.chatglm_model.eval()
                    break
                else:
@@ -136,11 +144,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -185,11 +185,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -129,11 +129,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    raw_input = inputs
    logging.info(f'[raw_input] {raw_input}')

View File

@@ -116,11 +116,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    raw_input = inputs
    logging.info(f'[raw_input] {raw_input}')

View File

@@ -290,11 +290,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -154,11 +154,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -154,11 +154,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -154,11 +154,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -224,11 +224,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        yield from update_ui(chatbot=chatbot, history=history)

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    # Process the history
    history_feedin = []

View File

@@ -224,11 +224,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    history_feedin = []
    for i in range(len(history)//2):

View File

@@ -248,14 +248,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
        return

    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]:
-            inputs = core_functional[additional_fn]["PreProcess"](
-                inputs)  # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + \
-            inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    history_feedin = []
    for i in range(len(history)//2):
for i in range(len(history)//2):

View File

@@ -96,11 +96,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt
    additional_fn indicates which button was clicked (see functional.py for the buttons)
    """
    if additional_fn is not None:
-        import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
-        core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)   # apply the pre-processing function (if any)
-        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history)

    raw_input = "What I would like to say is the following: " + inputs
    history.extend([inputs, ""])

View File

@@ -519,7 +519,11 @@ class _ChatHub:
        resp_txt_no_link = ""
        while not final:
            msg = await self.wss.receive()
-            objects = msg.data.split(DELIMITER)
+            try:
+                objects = msg.data.split(DELIMITER)
+            except:
+                # msg.data may be an int (e.g. a websocket close code), which has no .split
+                continue
            for obj in objects:
                if obj is None or not obj:
                    continue
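A tiny illustration of the failure mode being guarded against; the DELIMITER value is assumed here ("\x1e", the record separator EdgeGPT-style clients use), and the int stands in for a websocket close code:

```python
DELIMITER = "\x1e"   # assumed value, for illustration only

for data in ['{"type":1}' + DELIMITER, 1000]:  # a normal frame, then an int close code
    try:
        objects = data.split(DELIMITER)
    except AttributeError:                     # int has no .split -> skip the frame
        continue
    print(objects)
```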

setup.py (new file, 50 lines)
View File

@@ -0,0 +1,50 @@
import setuptools, glob, os, fnmatch

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

def _process_requirements():
    packages = open('requirements.txt').read().strip().split('\n')
    requires = []
    for pkg in packages:
        if pkg.startswith('git+ssh'):
            return_code = os.system('pip install {}'.format(pkg))
            assert return_code == 0, 'error, status_code is: {}, exit!'.format(return_code)
        if pkg.startswith('./docs'):
            continue
        else:
            requires.append(pkg)
    return requires

def package_files(directory):
    import subprocess
    list_of_files = subprocess.check_output("git ls-files", shell=True).splitlines()
    return [str(k) for k in list_of_files]

extra_files = package_files('./')

setuptools.setup(
    name="void-terminal",
    version="0.0.0",
    author="Qingxu",
    author_email="505030475@qq.com",
    description="LLM based APIs",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/binary-husky/gpt-academic",
    project_urls={
        "Bug Tracker": "https://github.com/binary-husky/gpt-academic/issues",
    },
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    package_dir={"": "."},
    package_data={"": extra_files},
    include_package_data=True,
    packages=setuptools.find_packages(where="."),
    python_requires=">=3.9",
    install_requires=_process_requirements(),
)

View File

@@ -538,7 +538,11 @@ def load_chat_cookies():
    return {'api_key': API_KEY, 'llm_model': LLM_MODEL}

def is_openai_api_key(key):
-    API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
+    CUSTOM_API_KEY_PATTERN, = get_conf('CUSTOM_API_KEY_PATTERN')
+    if len(CUSTOM_API_KEY_PATTERN) != 0:
+        API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
+    else:
+        API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
    return bool(API_MATCH_ORIGINAL)

def is_azure_api_key(key):
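To see what the change enables: with a non-empty CUSTOM_API_KEY_PATTERN, the default sk-... check is replaced wholesale. A standalone sketch of the same logic (the fk- pattern is a made-up example, not a recommendation):

```python
import re

def check_key(key, custom_pattern=""):
    # Mirrors is_openai_api_key above: a non-empty custom pattern wins.
    pattern = custom_pattern if custom_pattern else r"sk-[a-zA-Z0-9]{48}$"
    return bool(re.match(pattern, key))

print(check_key("sk-" + "a" * 48))                          # True: default format
print(check_key("fk-" + "a" * 44, r"fk-[a-zA-Z0-9]{44}$"))  # True: custom format
print(check_key("fk-" + "a" * 44))                          # False without the custom pattern
```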
@@ -594,7 +598,7 @@ def select_api_key(keys, llm_model):
        if is_azure_api_key(k): avail_key_list.append(k)

    if len(avail_key_list) == 0:
-        raise RuntimeError(f"The api-key(s) you provided do not meet the requirements: none of them is usable for {llm_model}. You may have chosen the wrong model or request source (the model menu at the bottom right can switch between the openai, azure and api2d request sources).")
+        raise RuntimeError(f"The api-key(s) you provided do not meet the requirements: none of them is usable for {llm_model}. You may have chosen the wrong model or request source (the model menu at the bottom right can switch between the openai, azure, claude and api2d request sources).")

    api_key = random.choice(avail_key_list)  # random load balancing
    return api_key
@@ -670,13 +674,14 @@ def read_single_conf_with_lru_cache(arg):
    # When reading API_KEY, check whether the user forgot to edit the config
    if arg == 'API_KEY':
-        print亮蓝(f"[API_KEY] This project now supports api-keys from OpenAI and API2D. Multiple api-keys can also be given at once, e.g. API_KEY=\"openai-key1,openai-key2,api2d-key3\"")
+        print亮蓝(f"[API_KEY] This project now supports api-keys from OpenAI and Azure. Multiple api-keys can also be given at once, e.g. API_KEY=\"openai-key1,openai-key2,azure-key3\"")
        print亮蓝(f"[API_KEY] You can either edit the api-key(s) in config.py, or type temporary api-key(s) into the question input box; they take effect after pressing Enter.")
        if is_any_api_key(r):
            print亮绿(f"[API_KEY] Your API_KEY is: {r[:15]}*** — API_KEY imported successfully")
        else:
-            print亮红( "[API_KEY] A correct API_KEY is a 51-character key starting with 'sk' (OpenAI) or a 41-character key starting with 'fk'. Please edit the API key in the config file before running again.")
+            print亮红( "[API_KEY] Your API_KEY does not match any known key format. Please edit the API key in the config file before running again.")
    if arg == 'proxies':
        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None  # check USE_PROXY so proxies cannot take effect on its own
        if r is None:
            print亮红('[PROXY] No network proxy configured. Without a proxy, the OpenAI family of models is very likely unreachable; check whether the USE_PROXY option has been changed.')
        else:
@@ -685,6 +690,7 @@ def read_single_conf_with_lru_cache(arg):
    return r

@lru_cache(maxsize=128)
def get_conf(*args):
+    # It is recommended to copy your secrets (API keys, proxy URLs) into a config_private.py, so they are not accidentally pushed to github and exposed
    res = []

View File

@@ -1,5 +1,5 @@
{
-  "version": 3.46,
+  "version": 3.47,
  "show_feature": true,
-  "new_feature": "Temporary fix for theme file loss <-> New realtime voice conversation plugin (automatic sentence segmentation, hands-free conversation) <-> Support loading custom fine-tuned ChatGLM2 models <-> Dynamic ChatBot window height <-> Fix Azure interface bug <-> Improve the multilingual module <-> Improve local LaTeX proofreading and translation <-> Add gpt-3.5-16k support <-> New state-of-the-art Arxiv paper translation plugin <-> Fix gradio copy-button bug <-> Fix PDF translation bug and add HTML side-by-side bilingual view <-> Add OpenAI image-generation plugin"
-}
+  "new_feature": "Optimize one-click update <-> Improve arxiv translation speed and success rate <-> Support custom API KEY formats <-> Temporary fix for theme file loss <-> New realtime voice conversation plugin (automatic sentence segmentation, hands-free conversation) <-> Support loading custom fine-tuned ChatGLM2 models <-> Dynamic ChatBot window height <-> Fix Azure interface bug <-> Improve the multilingual module <-> Improve local LaTeX proofreading and translation <-> Add gpt-3.5-16k support"
+}