pull dev

Showing changed files:

docs/启动API服务.md      0 → 100644 (new file)
img/docker_logs.png      0 → 100644 (new file), 69.0 KB
img/qr_code_32.jpg       100644 → 0 (deleted), 143.3 KB
img/qr_code_36.jpg       0 → 100644 (new file), 247.1 KB
img/qr_code_42.jpg       0 → 100644 (new file), 273.1 KB
@@ -23,9 +23,13 @@ openai
#accelerate~=0.18.0
#peft~=0.3.0
#bitsandbytes; platform_system != "Windows"
#llama-cpp-python==0.1.34; platform_system != "Windows"
#https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.34/llama_cpp_python-0.1.34-cp310-cp310-win_amd64.whl; platform_system == "Windows"
# To run llama-cpp models, such as the quantized vicuna-13b model, the llama-cpp-python library is required.
# Note: in practice a plain `pip install` does not work reliably; download the wheel manually from https://github.com/abetlen/llama-cpp-python/releases/ instead.
# Also note that ggml formats from different periods are NOT compatible with each other, so the required llama-cpp-python version varies and has to be determined by manual testing.
# In our tests, ggml-vicuna-13b-1.1 works correctly with llama-cpp-python 0.1.63.
# However, this project controls model loading fairly strictly and its compatibility with llama-cpp-python is poor; many parameter settings cannot be used.
# Unless it is really necessary, we recommend not using llama-cpp. (A minimal loading sketch follows after this hunk.)
torch~=2.0.0
pydantic~=1.10.7
starlette~=0.26.1
@@ -33,5 +37,4 @@ numpy~=1.23.5
tqdm~=4.65.0
requests~=2.28.2
tenacity~=8.2.2
# The charset_normalizer version installed by default is too new and throws `partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)`, so pin it:
charset_normalizer==2.1.0
\ No newline at end of file
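As referenced in the comments above, the sketch below shows one way to load a ggml Vicuna model directly with llama-cpp-python 0.1.63 after installing the wheel manually from the releases page. It is illustrative only and not part of this diff; the model path, thread count, prompt, and generation parameters are assumptions.

# Minimal illustrative sketch (not part of this diff): loading a ggml model with
# llama-cpp-python ~0.1.63 installed manually from the GitHub releases page.
# The model path and the generation parameters below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/ggml-vicuna-13b-1.1-q4_0.bin",  # assumed local path to the quantized model
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads; tune for your machine
)

result = llm(
    "Q: What is LangChain?\nA:",
    max_tokens=128,
    stop=["Q:"],
    echo=False,
)
print(result["choices"][0]["text"].strip())

As the comments warn, the project's own model-loading path exposes few of these parameters, so driving llama-cpp-python directly like this is mainly useful for checking that a given ggml file and library version are compatible before wiring the model into the project.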
test/textsplitter/test_zh_title_enhance.py    0 → 100644 (new file)
textsplitter/zh_title_enhance.py              0 → 100644 (new file)