@ -0,0 +1,201 @@ |
|||
Apache License |
|||
Version 2.0, January 2004 |
|||
http://www.apache.org/licenses/ |
|||
|
|||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION |
|||
|
|||
1. Definitions. |
|||
|
|||
"License" shall mean the terms and conditions for use, reproduction, |
|||
and distribution as defined by Sections 1 through 9 of this document. |
|||
|
|||
"Licensor" shall mean the copyright owner or entity authorized by |
|||
the copyright owner that is granting the License. |
|||
|
|||
"Legal Entity" shall mean the union of the acting entity and all |
|||
other entities that control, are controlled by, or are under common |
|||
control with that entity. For the purposes of this definition, |
|||
"control" means (i) the power, direct or indirect, to cause the |
|||
direction or management of such entity, whether by contract or |
|||
otherwise, or (ii) ownership of fifty percent (50%) or more of the |
|||
outstanding shares, or (iii) beneficial ownership of such entity. |
|||
|
|||
"You" (or "Your") shall mean an individual or Legal Entity |
|||
exercising permissions granted by this License. |
|||
|
|||
"Source" form shall mean the preferred form for making modifications, |
|||
including but not limited to software source code, documentation |
|||
source, and configuration files. |
|||
|
|||
"Object" form shall mean any form resulting from mechanical |
|||
transformation or translation of a Source form, including but |
|||
not limited to compiled object code, generated documentation, |
|||
and conversions to other media types. |
|||
|
|||
"Work" shall mean the work of authorship, whether in Source or |
|||
Object form, made available under the License, as indicated by a |
|||
copyright notice that is included in or attached to the work |
|||
(an example is provided in the Appendix below). |
|||
|
|||
"Derivative Works" shall mean any work, whether in Source or Object |
|||
form, that is based on (or derived from) the Work and for which the |
|||
editorial revisions, annotations, elaborations, or other modifications |
|||
represent, as a whole, an original work of authorship. For the purposes |
|||
of this License, Derivative Works shall not include works that remain |
|||
separable from, or merely link (or bind by name) to the interfaces of, |
|||
the Work and Derivative Works thereof. |
|||
|
|||
"Contribution" shall mean any work of authorship, including |
|||
the original version of the Work and any modifications or additions |
|||
to that Work or Derivative Works thereof, that is intentionally |
|||
submitted to Licensor for inclusion in the Work by the copyright owner |
|||
or by an individual or Legal Entity authorized to submit on behalf of |
|||
the copyright owner. For the purposes of this definition, "submitted" |
|||
means any form of electronic, verbal, or written communication sent |
|||
to the Licensor or its representatives, including but not limited to |
|||
communication on electronic mailing lists, source code control systems, |
|||
and issue tracking systems that are managed by, or on behalf of, the |
|||
Licensor for the purpose of discussing and improving the Work, but |
|||
excluding communication that is conspicuously marked or otherwise |
|||
designated in writing by the copyright owner as "Not a Contribution." |
|||
|
|||
"Contributor" shall mean Licensor and any individual or Legal Entity |
|||
on behalf of whom a Contribution has been received by Licensor and |
|||
subsequently incorporated within the Work. |
|||
|
|||
2. Grant of Copyright License. Subject to the terms and conditions of |
|||
this License, each Contributor hereby grants to You a perpetual, |
|||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable |
|||
copyright license to reproduce, prepare Derivative Works of, |
|||
publicly display, publicly perform, sublicense, and distribute the |
|||
Work and such Derivative Works in Source or Object form. |
|||
|
|||
3. Grant of Patent License. Subject to the terms and conditions of |
|||
this License, each Contributor hereby grants to You a perpetual, |
|||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable |
|||
(except as stated in this section) patent license to make, have made, |
|||
use, offer to sell, sell, import, and otherwise transfer the Work, |
|||
where such license applies only to those patent claims licensable |
|||
by such Contributor that are necessarily infringed by their |
|||
Contribution(s) alone or by combination of their Contribution(s) |
|||
with the Work to which such Contribution(s) was submitted. If You |
|||
institute patent litigation against any entity (including a |
|||
cross-claim or counterclaim in a lawsuit) alleging that the Work |
|||
or a Contribution incorporated within the Work constitutes direct |
|||
or contributory patent infringement, then any patent licenses |
|||
granted to You under this License for that Work shall terminate |
|||
as of the date such litigation is filed. |
|||
|
|||
4. Redistribution. You may reproduce and distribute copies of the |
|||
Work or Derivative Works thereof in any medium, with or without |
|||
modifications, and in Source or Object form, provided that You |
|||
meet the following conditions: |
|||
|
|||
(a) You must give any other recipients of the Work or |
|||
Derivative Works a copy of this License; and |
|||
|
|||
(b) You must cause any modified files to carry prominent notices |
|||
stating that You changed the files; and |
|||
|
|||
(c) You must retain, in the Source form of any Derivative Works |
|||
that You distribute, all copyright, patent, trademark, and |
|||
attribution notices from the Source form of the Work, |
|||
excluding those notices that do not pertain to any part of |
|||
the Derivative Works; and |
|||
|
|||
(d) If the Work includes a "NOTICE" text file as part of its |
|||
distribution, then any Derivative Works that You distribute must |
|||
include a readable copy of the attribution notices contained |
|||
within such NOTICE file, excluding those notices that do not |
|||
pertain to any part of the Derivative Works, in at least one |
|||
of the following places: within a NOTICE text file distributed |
|||
as part of the Derivative Works; within the Source form or |
|||
documentation, if provided along with the Derivative Works; or, |
|||
within a display generated by the Derivative Works, if and |
|||
wherever such third-party notices normally appear. The contents |
|||
of the NOTICE file are for informational purposes only and |
|||
do not modify the License. You may add Your own attribution |
|||
notices within Derivative Works that You distribute, alongside |
|||
or as an addendum to the NOTICE text from the Work, provided |
|||
that such additional attribution notices cannot be construed |
|||
as modifying the License. |
|||
|
|||
You may add Your own copyright statement to Your modifications and |
|||
may provide additional or different license terms and conditions |
|||
for use, reproduction, or distribution of Your modifications, or |
|||
for any such Derivative Works as a whole, provided Your use, |
|||
reproduction, and distribution of the Work otherwise complies with |
|||
the conditions stated in this License. |
|||
|
|||
5. Submission of Contributions. Unless You explicitly state otherwise, |
|||
any Contribution intentionally submitted for inclusion in the Work |
|||
by You to the Licensor shall be under the terms and conditions of |
|||
this License, without any additional terms or conditions. |
|||
Notwithstanding the above, nothing herein shall supersede or modify |
|||
the terms of any separate license agreement you may have executed |
|||
with Licensor regarding such Contributions. |
|||
|
|||
6. Trademarks. This License does not grant permission to use the trade |
|||
names, trademarks, service marks, or product names of the Licensor, |
|||
except as required for reasonable and customary use in describing the |
|||
origin of the Work and reproducing the content of the NOTICE file. |
|||
|
|||
7. Disclaimer of Warranty. Unless required by applicable law or |
|||
agreed to in writing, Licensor provides the Work (and each |
|||
Contributor provides its Contributions) on an "AS IS" BASIS, |
|||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
|||
implied, including, without limitation, any warranties or conditions |
|||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A |
|||
PARTICULAR PURPOSE. You are solely responsible for determining the |
|||
appropriateness of using or redistributing the Work and assume any |
|||
risks associated with Your exercise of permissions under this License. |
|||
|
|||
8. Limitation of Liability. In no event and under no legal theory, |
|||
whether in tort (including negligence), contract, or otherwise, |
|||
unless required by applicable law (such as deliberate and grossly |
|||
negligent acts) or agreed to in writing, shall any Contributor be |
|||
liable to You for damages, including any direct, indirect, special, |
|||
incidental, or consequential damages of any character arising as a |
|||
result of this License or out of the use or inability to use the |
|||
Work (including but not limited to damages for loss of goodwill, |
|||
work stoppage, computer failure or malfunction, or any and all |
|||
other commercial damages or losses), even if such Contributor |
|||
has been advised of the possibility of such damages. |
|||
|
|||
9. Accepting Warranty or Additional Liability. While redistributing |
|||
the Work or Derivative Works thereof, You may choose to offer, |
|||
and charge a fee for, acceptance of support, warranty, indemnity, |
|||
or other liability obligations and/or rights consistent with this |
|||
License. However, in accepting such obligations, You may act only |
|||
on Your own behalf and on Your sole responsibility, not on behalf |
|||
of any other Contributor, and only if You agree to indemnify, |
|||
defend, and hold each Contributor harmless for any liability |
|||
incurred by, or claims asserted against, such Contributor by reason |
|||
of your accepting any such warranty or additional liability. |
|||
|
|||
END OF TERMS AND CONDITIONS |
|||
|
|||
APPENDIX: How to apply the Apache License to your work. |
|||
|
|||
To apply the Apache License to your work, attach the following |
|||
boilerplate notice, with the fields enclosed by brackets "[]" |
|||
replaced with your own identifying information. (Don't include |
|||
the brackets!) The text should be enclosed in the appropriate |
|||
comment syntax for the file format. We also recommend that a |
|||
file or class name and description of purpose be included on the |
|||
same "printed page" as the copyright notice for easier |
|||
identification within third-party archives. |
|||
|
|||
Copyright [yyyy] [name of copyright owner] |
|||
|
|||
Licensed under the Apache License, Version 2.0 (the "License"); |
|||
you may not use this file except in compliance with the License. |
|||
You may obtain a copy of the License at |
|||
|
|||
http://www.apache.org/licenses/LICENSE-2.0 |
|||
|
|||
Unless required by applicable law or agreed to in writing, software |
|||
distributed under the License is distributed on an "AS IS" BASIS, |
|||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
|||
See the License for the specific language governing permissions and |
|||
limitations under the License. |
@ -0,0 +1,159 @@ |
|||
 |
|||
|
|||
|
|||
🌍 [READ THIS IN ENGLISH](README_en.md) |
|||
|
|||
📃 **LangChain-Chatchat** (原 Langchain-ChatGLM) |
|||
|
|||
基于 ChatGLM 等大语言模型与 Langchain 等应用框架实现,开源、可离线部署的检索增强生成(RAG)大模型知识库项目。 |
|||
|
|||
--- |
|||
|
|||
## 目录 |
|||
|
|||
* [介绍](README.md#介绍) |
|||
* [解决的痛点](README.md#解决的痛点) |
|||
* [快速上手](README.md#快速上手) |
|||
* [1. 环境配置](README.md#1-环境配置) |
|||
* [2. 模型下载](README.md#2-模型下载) |
|||
* [3. 初始化知识库和配置文件](README.md#3-初始化知识库和配置文件) |
|||
* [4. 一键启动](README.md#4-一键启动) |
|||
* [5. 启动界面示例](README.md#5-启动界面示例) |
|||
* [联系我们](README.md#联系我们) |
|||
|
|||
|
|||
## 介绍 |
|||
|
|||
🤖️ 一种利用 [langchain](https://github.com/hwchase17/langchain) 思想实现的基于本地知识库的问答应用,目标期望建立一套对中文场景与开源模型支持友好、可离线运行的知识库问答解决方案。 |
|||
|
|||
💡 受 [GanymedeNil](https://github.com/GanymedeNil) 的项目 [document.ai](https://github.com/GanymedeNil/document.ai) 和 [AlexZhangji](https://github.com/AlexZhangji) 创建的 [ChatGLM-6B Pull Request](https://github.com/THUDM/ChatGLM-6B/pull/216) 启发,建立了全流程可使用开源模型实现的本地知识库问答应用。本项目的最新版本中通过使用 [FastChat](https://github.com/lm-sys/FastChat) 接入 Vicuna, Alpaca, LLaMA, Koala, RWKV 等模型,依托于 [langchain](https://github.com/langchain-ai/langchain) 框架支持通过基于 [FastAPI](https://github.com/tiangolo/fastapi) 提供的 API 调用服务,或使用基于 [Streamlit](https://github.com/streamlit/streamlit) 的 WebUI 进行操作。 |
|||
|
|||
✅ 依托于本项目支持的开源 LLM 与 Embedding 模型,本项目可实现全部使用**开源**模型**离线私有部署**。与此同时,本项目也支持 OpenAI GPT API 的调用,并将在后续持续扩充对各类模型及模型 API 的接入。 |
|||
|
|||
⛓️ 本项目实现原理如下图所示,过程包括加载文件 -> 读取文本 -> 文本分割 -> 文本向量化 -> 问句向量化 -> 在文本向量中匹配出与问句向量最相似的 `top k`个 -> 匹配出的文本作为上下文和问题一起添加到 `prompt`中 -> 提交给 `LLM`生成回答。 |
|||
|
|||
📺 [原理介绍视频](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514) |
|||
|
|||
 |
|||
|
|||
从文档处理角度来看,实现流程如下: |
|||
|
|||
 |
|||
|
|||
🚩 本项目未涉及微调、训练过程,但可利用微调或训练对本项目效果进行优化。 |
|||
|
|||
🌐 [AutoDL 镜像](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) 中 `v11` 版本所使用代码已更新至本项目 `v0.2.7` 版本。 |
|||
|
|||
🐳 [Docker 镜像](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7) 已经更新到 `0.2.7` 版本。
|||
|
|||
🌲 一行命令运行 Docker:
|||
|
|||
```shell |
|||
docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7 |
|||
``` |
|||
|
|||
🧩 本项目有一个非常完整的 [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/),README 只是一个简单的介绍,__仅仅是入门教程,能够基础运行__。如果你想要更深入地了解本项目,或者想对本项目做出贡献,请移步 [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) 页面。
|||
|
|||
## 解决的痛点 |
|||
|
|||
该项目是一个可以实现 __完全本地化__ 推理的知识库增强方案,重点解决数据安全保护、私域化部署的企业痛点。
本开源方案采用 `Apache License`,可以免费商用,无需付费。
|||
|
|||
我们支持市面上主流的本地大语言模型和 Embedding 模型,支持开源的本地向量数据库。
|||
支持列表详见[Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) |
|||
|
|||
|
|||
## 快速上手 |
|||
|
|||
### 1. 环境配置 |
|||
|
|||
+ 首先,确保你的机器安装了 Python 3.8 - 3.10 |
|||
``` |
|||
$ python --version |
|||
Python 3.10.12 |
|||
``` |
|||
接着,创建一个虚拟环境,并在虚拟环境内安装项目的依赖 |
|||
```shell |
|||
|
|||
# 拉取仓库 |
|||
$ git clone https://github.com/chatchat-space/Langchain-Chatchat.git |
|||
|
|||
# 进入目录 |
|||
$ cd Langchain-Chatchat |
|||
|
|||
# 安装全部依赖 |
|||
$ pip install -r requirements.txt |
|||
$ pip install -r requirements_api.txt |
|||
$ pip install -r requirements_webui.txt |
|||
|
|||
# 默认依赖包括基本运行环境(FAISS向量库)。如果要使用 milvus/pg_vector 等向量库,请将 requirements.txt 中相应依赖取消注释再安装。 |
|||
``` |
|||
### 2. 模型下载
|||
|
|||
如需在本地或离线环境下运行本项目,需要首先将项目所需的模型下载至本地,通常开源 LLM 与 Embedding 模型可以从 [HuggingFace](https://huggingface.co/models) 下载。 |
|||
|
|||
以本项目中默认使用的 LLM 模型 [THUDM/ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) 与 Embedding 模型 [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) 为例: |
|||
|
|||
下载模型需要先[安装 Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage),然后运行 |
|||
|
|||
```Shell |
|||
$ git lfs install |
|||
$ git clone https://huggingface.co/THUDM/chatglm2-6b |
|||
$ git clone https://huggingface.co/moka-ai/m3e-base |
|||
``` |
|||
### 3. 初始化知识库和配置文件 |
|||
|
|||
按照下列方式复制生成配置文件,并初始化自己的知识库:
|||
```shell |
|||
$ python copy_config_example.py |
|||
$ python init_database.py --recreate-vs |
|||
``` |
|||
### 4. 一键启动 |
|||
|
|||
按照以下命令启动项目 |
|||
```shell |
|||
$ python startup.py -a |
|||
``` |
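启动成功后,可在浏览器中访问 Web UI 与 API 接口文档。以下端口为默认配置下的假设值,实际请以 `startup.py` 的启动日志和 `configs/server_config.py` 中的配置为准:

```shell
# 默认端口仅供参考,请以实际启动日志为准
# Web UI(Streamlit):  http://127.0.0.1:8501
# API 文档(FastAPI):  http://127.0.0.1:7861/docs
```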
|||
### 5. 启动界面示例 |
|||
|
|||
如果正常启动,你将能看到以下界面 |
|||
|
|||
1. FastAPI Docs 界面 |
|||
|
|||
 |
|||
|
|||
2. Web UI 启动界面示例: |
|||
|
|||
- Web UI 对话界面: |
|||
|
|||
 |
|||
|
|||
- Web UI 知识库管理页面: |
|||
|
|||
 |
|||
|
|||
|
|||
### 注意 |
|||
|
|||
以上方式只是为了快速上手,如果需要更多的功能和自定义启动方式,请参考 [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/)。
|||
|
|||
|
|||
--- |
|||
## 项目里程碑 |
|||
|
|||
|
|||
--- |
|||
## 联系我们 |
|||
### Telegram |
|||
[](https://t.me/+RjliQ3jnJ1YyN2E9) |
|||
|
|||
### 项目交流群 |
|||
<img src="img/qr_code_73.jpg" alt="二维码" width="300" /> |
|||
|
|||
🎉 Langchain-Chatchat 项目微信交流群,如果你也对本项目感兴趣,欢迎加入群聊参与讨论交流。 |
|||
|
|||
### 公众号 |
|||
|
|||
<img src="img/official_wechat_mp_account.png" alt="二维码" width="300" /> |
|||
|
|||
🎉 Langchain-Chatchat 项目官方公众号,欢迎扫码关注。 |
@ -0,0 +1,172 @@ |
|||
 |
|||
|
|||
🌍 [中文文档](README.md) |
|||
|
|||
📃 **LangChain-Chatchat** (formerly Langchain-ChatGLM): |
|||
|
|||
An LLM application that implements knowledge-base and search-engine based Q&A, built on Langchain and open-source or
remote LLM APIs.
|||
|
|||
--- |
|||
|
|||
## Table of Contents |
|||
|
|||
- [Introduction](README.md#Introduction) |
|||
- [Pain Points Addressed](README.md#Pain-Points-Addressed) |
|||
- [Quick Start](README.md#Quick-Start) |
|||
- [1. Environment Setup](README.md#1-Environment-Setup) |
|||
- [2. Model Download](README.md#2-Model-Download) |
|||
- [3. Initialize Knowledge Base and Configuration Files](README.md#3-Initialize-Knowledge-Base-and-Configuration-Files) |
|||
- [4. One-Click Startup](README.md#4-One-Click-Startup) |
|||
- [5. Startup Interface Examples](README.md#5-Startup-Interface-Examples) |
|||
- [Contact Us](README.md#Contact-Us) |
|||
|
|||
## Introduction |
|||
|
|||
🤖️ A Q&A application based on a local knowledge base, implemented using the ideas
of [langchain](https://github.com/hwchase17/langchain). The goal is to build a KBQA (knowledge-based Q&A) solution that
is friendly to Chinese scenarios and open-source models and can run both offline and online.
|||
|
|||
💡 Inspired by [document.ai](https://github.com/GanymedeNil/document.ai)
and the [ChatGLM-6B Pull Request](https://github.com/THUDM/ChatGLM-6B/pull/216), we built a local knowledge-base
question-answering application whose full pipeline can run on open-source models or remote LLM APIs. In
the latest version of this project, [FastChat](https://github.com/lm-sys/FastChat) is used to access Vicuna, Alpaca,
LLaMA, Koala, RWKV and many other models. Relying on [langchain](https://github.com/langchain-ai/langchain), the
project can be used either through the API served by [FastAPI](https://github.com/tiangolo/fastapi) or
through the WebUI built on [Streamlit](https://github.com/streamlit/streamlit).
|||
|
|||
✅ Relying on the open-source LLM and Embedding models supported by this project, the whole pipeline can run as a
**fully offline private deployment** using open-source models only. The project also supports calling the OpenAI GPT
API and the Zhipu API, and access to more models and remote APIs will continue to be added.
|||
|
|||
⛓️ The implementation principle of this project is shown in the graph below. The main process includes: loading files ->
reading text -> text segmentation -> text vectorization -> question vectorization -> matching the `top-k` text chunks
most similar to the question vector -> adding the matched text to the `prompt` as context together with the question ->
submitting it to the `LLM` to generate an answer.
|||
|
|||
📺[video introduction](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514) |
|||
|
|||
 |
|||
|
|||
From the perspective of document processing, the main pipeline is as follows:
|||
|
|||
 |
|||
|
|||
🚩 Training and fine-tuning are not involved in this project, but they can still be used to improve its performance.
|||
|
|||
🌐 An [AutoDL image](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) is available;
in its v9 version the code has been updated to v0.2.5.
|||
|
|||
🐳 A [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5) is also provided.
|||
|
|||
## Pain Points Addressed |
|||
|
|||
This project is a knowledge-base-enhanced solution with fully localized inference, specifically addressing the
pain points of data security and private deployment for businesses.
This open-source solution is licensed under the Apache License and can be used commercially free of charge.
We support mainstream local large language models and Embedding models available on the market, as well as open-source
local vector databases. For a detailed list of supported models and databases, please refer to
our [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/).
|||
|
|||
## Quick Start |
|||
|
|||
### 1. Environment Setup
|||
|
|||
First, make sure your machine has Python 3.8 - 3.10 installed.
|||
|
|||
``` |
|||
$ python --version |
|||
Python 3.10.12 |
|||
``` |
|||
|
|||
Then, create a virtual environment and install the project's dependencies within the virtual environment. |
|||
|
|||
```shell |
|||
|
|||
# Clone the repository
|||
$ git clone https://github.com/chatchat-space/Langchain-Chatchat.git |
|||
|
|||
# Enter the directory
|||
$ cd Langchain-Chatchat |
|||
|
|||
# Install all dependencies
|||
$ pip install -r requirements.txt |
|||
$ pip install -r requirements_api.txt |
|||
$ pip install -r requirements_webui.txt |
|||
|
|||
# The default dependencies include the basic runtime environment (FAISS vector store). To use other vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
|||
``` |
|||
|
|||
### 2. Model Download
|||
|
|||
If you need to run this project locally or in an offline environment, you must first download the required models for |
|||
the project. Typically, open-source LLM and Embedding models can be downloaded from HuggingFace. |
|||
|
|||
Taking the default LLM model used in this project, [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b), and |
|||
the Embedding model [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) as examples: |
|||
|
|||
To download the models, you need to first |
|||
install [Git LFS](https://docs.github.com/zh/repositories/working-with-files/managing-large-files/installing-git-large-file-storage) |
|||
and then run: |
|||
|
|||
```Shell |
|||
$ git lfs install |
|||
$ git clone https://huggingface.co/THUDM/chatglm2-6b |
|||
$ git clone https://huggingface.co/moka-ai/m3e-base |
|||
``` |
|||
|
|||
### 3. Initialize Knowledge Base and Configuration Files
|||
|
|||
Follow the steps below to initialize your own knowledge base and config file: |
|||
|
|||
```shell |
|||
$ python copy_config_example.py |
|||
$ python init_database.py --recreate-vs |
|||
``` |
|||
|
|||
### 4. One-Click Startup
|||
|
|||
To start the project, run the following command: |
|||
|
|||
```shell |
|||
$ python startup.py -a |
|||
``` |
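Once startup succeeds, the Web UI and the API docs should be reachable in a browser. The ports below are assumptions
based on the default configuration; check the startup log and `configs/server_config.py` for the actual values:

```shell
# Default ports are assumptions; check the startup log for the actual addresses
# Web UI (Streamlit):  http://127.0.0.1:8501
# API docs (FastAPI):  http://127.0.0.1:7861/docs
```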
|||
|
|||
### 5. Startup Interface Examples
|||
|
|||
1. FastAPI docs interface |
|||
|
|||
 |
|||
|
|||
2. Web UI startup page examples:
|||
|
|||
- Web UI dialog page: |
|||
|
|||
 |
|||
|
|||
- Web UI knowledge base management page: |
|||
|
|||
 |
|||
|
|||
### Note |
|||
|
|||
The above instructions are provided for a quick start. If you need more features or want to customize the launch method, |
|||
please refer to the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/). |
|||
|
|||
--- |
|||
|
|||
## Contact Us |
|||
|
|||
### Telegram |
|||
|
|||
[](https://t.me/+RjliQ3jnJ1YyN2E9) |
|||
|
|||
### WeChat Group
|||
|
|||
<img src="img/qr_code_67.jpg" alt="二维码" width="300" height="300" /> |
|||
|
|||
### WeChat Official Account |
|||
|
|||
<img src="img/official_wechat_mp_account.png" alt="图片" width="900" height="300" /> |
@ -0,0 +1,22 @@ |
|||
from server.utils import get_ChatOpenAI |
|||
from configs.model_config import LLM_MODELS, TEMPERATURE |
|||
from langchain.chains import LLMChain |
|||
from langchain.prompts.chat import ( |
|||
ChatPromptTemplate, |
|||
HumanMessagePromptTemplate, |
|||
) |
|||
|
|||
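# 使用 LLM_MODELS 中的第一个模型作为对话模型(get_ChatOpenAI 为本项目对 ChatOpenAI 的封装)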
model = get_ChatOpenAI(model_name=LLM_MODELS[0], temperature=TEMPERATURE) |
|||
|
|||
|
|||
human_prompt = "{input}" |
|||
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt) |
|||
|
|||
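# 构造带少样本示例的聊天提示:先给出两轮成语接龙示例,再接入用户输入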
chat_prompt = ChatPromptTemplate.from_messages(
    [("human", "我们来玩成语接龙,我先来,生龙活虎"),
     ("ai", "虎头虎脑"),
     human_message_template])
|||
|
|||
|
|||
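# verbose=True 会在控制台打印最终发送给模型的完整 prompt,便于调试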
chain = LLMChain(prompt=chat_prompt, llm=model, verbose=True) |
|||
print(chain({"input": "恼羞成怒"})) |
@ -0,0 +1,8 @@ |
|||
from .basic_config import * |
|||
from .model_config import * |
|||
from .kb_config import * |
|||
from .server_config import * |
|||
from .prompt_config import * |
|||
|
|||
|
|||
VERSION = "v0.2.7" |
@ -0,0 +1,25 @@ |
|||
import logging |
|||
import os |
|||
import langchain |
|||
|
|||
|
|||
# 是否显示详细日志 |
|||
log_verbose = False |
|||
langchain.verbose = False |
|||
|
|||
# 是否保存聊天记录 |
|||
SAVE_CHAT_HISTORY = False |
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# 日志格式 |
|||
LOG_FORMAT = "%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s" |
|||
logger = logging.getLogger() |
|||
logger.setLevel(logging.INFO) |
|||
logging.basicConfig(format=LOG_FORMAT) |
|||
|
|||
|
|||
# 日志存储路径 |
|||
LOG_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "logs") |
|||
if not os.path.exists(LOG_PATH): |
|||
os.mkdir(LOG_PATH) |
@ -0,0 +1,25 @@ |
|||
import logging |
|||
import os |
|||
import langchain |
|||
|
|||
|
|||
# 是否显示详细日志 |
|||
log_verbose = False |
|||
langchain.verbose = False |
|||
|
|||
# 是否保存聊天记录 |
|||
SAVE_CHAT_HISTORY = False |
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# 日志格式 |
|||
LOG_FORMAT = "%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s" |
|||
logger = logging.getLogger() |
|||
logger.setLevel(logging.INFO) |
|||
logging.basicConfig(format=LOG_FORMAT) |
|||
|
|||
|
|||
# 日志存储路径 |
|||
LOG_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "logs") |
|||
if not os.path.exists(LOG_PATH): |
|||
os.mkdir(LOG_PATH) |
@ -0,0 +1,132 @@ |
|||
import os |
|||
|
|||
# 默认使用的知识库 |
|||
DEFAULT_KNOWLEDGE_BASE = "samples" |
|||
|
|||
# 默认向量库/全文检索引擎类型。可选:faiss, milvus(离线) & zilliz(在线), pgvector,全文检索引擎es |
|||
DEFAULT_VS_TYPE = "faiss" |
|||
|
|||
# 缓存向量库数量(针对FAISS) |
|||
CACHED_VS_NUM = 1 |
|||
|
|||
# 知识库中单段文本长度(不适用MarkdownHeaderTextSplitter) |
|||
CHUNK_SIZE = 250 |
|||
|
|||
# 知识库中相邻文本重合长度(不适用MarkdownHeaderTextSplitter) |
|||
OVERLAP_SIZE = 50 |
|||
|
|||
# 知识库匹配向量数量 |
|||
VECTOR_SEARCH_TOP_K = 3 |
|||
|
|||
# 知识库匹配相关度阈值,取值范围在0-1之间,SCORE越小,相关度越高,取到1相当于不筛选,建议设置在0.5左右 |
|||
SCORE_THRESHOLD = 1 |
|||
|
|||
# 默认搜索引擎。可选:bing, duckduckgo, metaphor |
|||
DEFAULT_SEARCH_ENGINE = "duckduckgo" |
|||
|
|||
# 搜索引擎匹配结果数量
|||
SEARCH_ENGINE_TOP_K = 3 |
|||
|
|||
|
|||
# Bing 搜索必备变量 |
|||
# 使用 Bing 搜索需要使用 Bing Subscription Key,需要在 Azure Portal 中申请试用 Bing Search
|||
# 具体申请方式请见 |
|||
# https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/create-bing-search-service-resource |
|||
# 使用python创建bing api 搜索实例详见: |
|||
# https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/quickstarts/rest/python |
|||
BING_SEARCH_URL = "https://api.bing.microsoft.com/v7.0/search" |
|||
# 注意不是bing Webmaster Tools的api key, |
|||
|
|||
# 此外,如果是在服务器上,报Failed to establish a new connection: [Errno 110] Connection timed out |
|||
# 是因为服务器加了防火墙,需要联系管理员加白名单,如果公司的服务器的话,就别想了GG |
|||
BING_SUBSCRIPTION_KEY = "" |
|||
|
|||
# metaphor搜索需要KEY |
|||
METAPHOR_API_KEY = "" |
|||
|
|||
|
|||
# 是否开启中文标题加强,以及标题增强的相关配置 |
|||
# 通过增加标题判断,判断哪些文本为标题,并在metadata中进行标记; |
|||
# 然后将文本与往上一级的标题进行拼合,实现文本信息的增强。 |
|||
ZH_TITLE_ENHANCE = False |
|||
|
|||
|
|||
# 每个知识库的初始化介绍,用于在初始化知识库时显示和Agent调用,没写则没有介绍,不会被Agent调用。 |
|||
KB_INFO = { |
|||
"知识库名称": "知识库介绍", |
|||
"samples": "关于本项目issue的解答", |
|||
} |
|||
|
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# 知识库默认存储路径 |
|||
KB_ROOT_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "knowledge_base") |
|||
if not os.path.exists(KB_ROOT_PATH): |
|||
os.mkdir(KB_ROOT_PATH) |
|||
# 数据库默认存储路径。 |
|||
# 如果使用sqlite,可以直接修改DB_ROOT_PATH;如果使用其它数据库,请直接修改SQLALCHEMY_DATABASE_URI。 |
|||
DB_ROOT_PATH = os.path.join(KB_ROOT_PATH, "info.db") |
|||
SQLALCHEMY_DATABASE_URI = f"sqlite:///{DB_ROOT_PATH}" |
|||
|
|||
# 可选向量库类型及对应配置 |
|||
kbs_config = { |
|||
"faiss": { |
|||
}, |
|||
"milvus": { |
|||
"host": "127.0.0.1", |
|||
"port": "19530", |
|||
"user": "", |
|||
"password": "", |
|||
"secure": False, |
|||
}, |
|||
"zilliz": { |
|||
"host": "in01-a7ce524e41e3935.ali-cn-hangzhou.vectordb.zilliz.com.cn", |
|||
"port": "19530", |
|||
"user": "", |
|||
"password": "", |
|||
"secure": True, |
|||
}, |
|||
"pg": { |
|||
"connection_uri": "postgresql://postgres:postgres@127.0.0.1:5432/langchain_chatchat", |
|||
}, |
|||
|
|||
"es": { |
|||
"host": "127.0.0.1", |
|||
"port": "9200", |
|||
"index_name": "test_index", |
|||
"user": "", |
|||
"password": "" |
|||
} |
|||
} |
|||
|
|||
# TextSplitter配置项,如果你不明白其中的含义,就不要修改。 |
|||
text_splitter_dict = { |
|||
"ChineseRecursiveTextSplitter": { |
|||
"source": "huggingface", ## 选择tiktoken则使用openai的方法 |
|||
"tokenizer_name_or_path": "", |
|||
}, |
|||
"SpacyTextSplitter": { |
|||
"source": "huggingface", |
|||
"tokenizer_name_or_path": "gpt2", |
|||
}, |
|||
"RecursiveCharacterTextSplitter": { |
|||
"source": "tiktoken", |
|||
"tokenizer_name_or_path": "cl100k_base", |
|||
}, |
|||
"MarkdownHeaderTextSplitter": { |
|||
"headers_to_split_on": |
|||
[ |
|||
("#", "head1"), |
|||
("##", "head2"), |
|||
("###", "head3"), |
|||
("####", "head4"), |
|||
] |
|||
}, |
|||
} |
|||
|
|||
# TEXT_SPLITTER 名称 |
|||
TEXT_SPLITTER_NAME = "ChineseRecursiveTextSplitter" |
|||
|
|||
# Embedding模型定制词语的词表文件 |
|||
EMBEDDING_KEYWORD_FILE = "embedding_keywords.txt" |
@ -0,0 +1,132 @@ |
|||
import os |
|||
|
|||
# 默认使用的知识库 |
|||
DEFAULT_KNOWLEDGE_BASE = "samples" |
|||
|
|||
# 默认向量库/全文检索引擎类型。可选:faiss, milvus(离线) & zilliz(在线), pgvector,全文检索引擎es |
|||
DEFAULT_VS_TYPE = "faiss" |
|||
|
|||
# 缓存向量库数量(针对FAISS) |
|||
CACHED_VS_NUM = 1 |
|||
|
|||
# 知识库中单段文本长度(不适用MarkdownHeaderTextSplitter) |
|||
CHUNK_SIZE = 250 |
|||
|
|||
# 知识库中相邻文本重合长度(不适用MarkdownHeaderTextSplitter) |
|||
OVERLAP_SIZE = 50 |
|||
|
|||
# 知识库匹配向量数量 |
|||
VECTOR_SEARCH_TOP_K = 3 |
|||
|
|||
# 知识库匹配相关度阈值,取值范围在0-1之间,SCORE越小,相关度越高,取到1相当于不筛选,建议设置在0.5左右 |
|||
SCORE_THRESHOLD = 1 |
|||
|
|||
# 默认搜索引擎。可选:bing, duckduckgo, metaphor |
|||
DEFAULT_SEARCH_ENGINE = "duckduckgo" |
|||
|
|||
# 搜索引擎匹配结果数量
|||
SEARCH_ENGINE_TOP_K = 3 |
|||
|
|||
|
|||
# Bing 搜索必备变量 |
|||
# 使用 Bing 搜索需要使用 Bing Subscription Key,需要在 Azure Portal 中申请试用 Bing Search
|||
# 具体申请方式请见 |
|||
# https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/create-bing-search-service-resource |
|||
# 使用python创建bing api 搜索实例详见: |
|||
# https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/quickstarts/rest/python |
|||
BING_SEARCH_URL = "https://api.bing.microsoft.com/v7.0/search" |
|||
# 注意不是bing Webmaster Tools的api key, |
|||
|
|||
# 此外,如果是在服务器上,报Failed to establish a new connection: [Errno 110] Connection timed out |
|||
# 是因为服务器加了防火墙,需要联系管理员加白名单,如果公司的服务器的话,就别想了GG |
|||
BING_SUBSCRIPTION_KEY = "" |
|||
|
|||
# metaphor搜索需要KEY |
|||
METAPHOR_API_KEY = "" |
|||
|
|||
|
|||
# 是否开启中文标题加强,以及标题增强的相关配置 |
|||
# 通过增加标题判断,判断哪些文本为标题,并在metadata中进行标记; |
|||
# 然后将文本与往上一级的标题进行拼合,实现文本信息的增强。 |
|||
ZH_TITLE_ENHANCE = False |
|||
|
|||
|
|||
# 每个知识库的初始化介绍,用于在初始化知识库时显示和Agent调用,没写则没有介绍,不会被Agent调用。 |
|||
KB_INFO = { |
|||
"知识库名称": "知识库介绍", |
|||
"samples": "关于本项目issue的解答", |
|||
} |
|||
|
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# 知识库默认存储路径 |
|||
KB_ROOT_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "knowledge_base") |
|||
if not os.path.exists(KB_ROOT_PATH): |
|||
os.mkdir(KB_ROOT_PATH) |
|||
# 数据库默认存储路径。 |
|||
# 如果使用sqlite,可以直接修改DB_ROOT_PATH;如果使用其它数据库,请直接修改SQLALCHEMY_DATABASE_URI。 |
|||
DB_ROOT_PATH = os.path.join(KB_ROOT_PATH, "info.db") |
|||
SQLALCHEMY_DATABASE_URI = f"sqlite:///{DB_ROOT_PATH}" |
|||
|
|||
# 可选向量库类型及对应配置 |
|||
kbs_config = { |
|||
"faiss": { |
|||
}, |
|||
"milvus": { |
|||
"host": "127.0.0.1", |
|||
"port": "19530", |
|||
"user": "", |
|||
"password": "", |
|||
"secure": False, |
|||
}, |
|||
"zilliz": { |
|||
"host": "in01-a7ce524e41e3935.ali-cn-hangzhou.vectordb.zilliz.com.cn", |
|||
"port": "19530", |
|||
"user": "", |
|||
"password": "", |
|||
"secure": True, |
|||
}, |
|||
"pg": { |
|||
"connection_uri": "postgresql://postgres:postgres@127.0.0.1:5432/langchain_chatchat", |
|||
}, |
|||
|
|||
"es": { |
|||
"host": "127.0.0.1", |
|||
"port": "9200", |
|||
"index_name": "test_index", |
|||
"user": "", |
|||
"password": "" |
|||
} |
|||
} |
|||
|
|||
# TextSplitter配置项,如果你不明白其中的含义,就不要修改。 |
|||
text_splitter_dict = { |
|||
"ChineseRecursiveTextSplitter": { |
|||
"source": "huggingface", ## 选择tiktoken则使用openai的方法 |
|||
"tokenizer_name_or_path": "", |
|||
}, |
|||
"SpacyTextSplitter": { |
|||
"source": "huggingface", |
|||
"tokenizer_name_or_path": "gpt2", |
|||
}, |
|||
"RecursiveCharacterTextSplitter": { |
|||
"source": "tiktoken", |
|||
"tokenizer_name_or_path": "cl100k_base", |
|||
}, |
|||
"MarkdownHeaderTextSplitter": { |
|||
"headers_to_split_on": |
|||
[ |
|||
("#", "head1"), |
|||
("##", "head2"), |
|||
("###", "head3"), |
|||
("####", "head4"), |
|||
] |
|||
}, |
|||
} |
|||
|
|||
# TEXT_SPLITTER 名称 |
|||
TEXT_SPLITTER_NAME = "ChineseRecursiveTextSplitter" |
|||
|
|||
# Embedding模型定制词语的词表文件 |
|||
EMBEDDING_KEYWORD_FILE = "embedding_keywords.txt" |
@ -0,0 +1,280 @@ |
|||
import os |
|||
|
|||
|
|||
# 可以指定一个绝对路径,统一存放所有的Embedding和LLM模型。 |
|||
# 每个模型可以是一个单独的目录,也可以是某个目录下的二级子目录。 |
|||
# 如果模型目录名称和 MODEL_PATH 中的 key 或 value 相同,程序会自动检测加载,无需修改 MODEL_PATH 中的路径。 |
|||
MODEL_ROOT_PATH = "" |
|||
|
|||
# 选用的 Embedding 名称 |
|||
EMBEDDING_MODEL = "bge-large-zh-v1.5" # bge-large-zh |
|||
|
|||
# Embedding 模型运行设备。设为"auto"会自动检测,也可手动设定为"cuda","mps","cpu"其中之一。 |
|||
EMBEDDING_DEVICE = "auto" |
|||
|
|||
# 如果需要在 EMBEDDING_MODEL 中增加自定义的关键字时配置 |
|||
EMBEDDING_KEYWORD_FILE = "keywords.txt" |
|||
EMBEDDING_MODEL_OUTPUT_PATH = "output" |
|||
|
|||
# 要运行的 LLM 名称,可以包括本地模型和在线模型。 |
|||
# 第一个将作为 API 和 WEBUI 的默认模型 |
|||
# LLM_MODELS = ["chatglm2-6b", "zhipu-api", "openai-api"] |
|||
# LLM_MODELS = ["vicuna-15b-v1.5"] |
|||
LLM_MODELS = ["chatglm3-6b"] |
|||
# LLM_MODELS = ["Qwen-14B-Chat"] |
|||
# LLM_MODELS = ["Qwen-7B-Chat"] |
|||
|
|||
# AgentLM模型的名称 (可以不指定,指定之后就锁定进入Agent之后的Chain的模型,不指定就是LLM_MODELS[0]) |
|||
Agent_MODEL = None |
|||
|
|||
# LLM 运行设备。设为"auto"会自动检测,也可手动设定为"cuda","mps","cpu"其中之一。 |
|||
LLM_DEVICE = "auto" |
|||
|
|||
# 历史对话轮数 |
|||
HISTORY_LEN = 3 |
|||
|
|||
# 大模型最长支持的长度,如果不填写,则使用模型默认的最大长度,如果填写,则为用户设定的最大长度 |
|||
MAX_TOKENS = None |
|||
|
|||
# LLM通用对话参数 |
|||
TEMPERATURE = 0.7 |
|||
# TOP_P = 0.95 # ChatOpenAI暂不支持该参数 |
|||
|
|||
ONLINE_LLM_MODEL = { |
|||
# 线上模型。请在server_config中为每个在线API设置不同的端口 |
|||
|
|||
"openai-api": { |
|||
"model_name": "gpt-35-turbo", |
|||
"api_base_url": "https://api.openai.com/v1", |
|||
"api_key": "", |
|||
"openai_proxy": "", |
|||
}, |
|||
|
|||
# 具体注册及api key获取请前往 http://open.bigmodel.cn |
|||
"zhipu-api": { |
|||
"api_key": "", |
|||
"version": "chatglm_turbo", # 可选包括 "chatglm_turbo" |
|||
"provider": "ChatGLMWorker", |
|||
}, |
|||
|
|||
|
|||
# 具体注册及api key获取请前往 https://api.minimax.chat/ |
|||
"minimax-api": { |
|||
"group_id": "", |
|||
"api_key": "", |
|||
"is_pro": False, |
|||
"provider": "MiniMaxWorker", |
|||
}, |
|||
|
|||
|
|||
# 具体注册及api key获取请前往 https://xinghuo.xfyun.cn/ |
|||
"xinghuo-api": { |
|||
"APPID": "", |
|||
"APISecret": "", |
|||
"api_key": "", |
|||
"version": "v1.5", # 你使用的讯飞星火大模型版本,可选包括 "v3.0", "v1.5", "v2.0" |
|||
"provider": "XingHuoWorker", |
|||
}, |
|||
|
|||
# 百度千帆 API,申请方式请参考 https://cloud.baidu.com/doc/WENXINWORKSHOP/s/4lilb2lpf |
|||
"qianfan-api": { |
|||
"version": "ERNIE-Bot", # 注意大小写。当前支持 "ERNIE-Bot" 或 "ERNIE-Bot-turbo", 更多的见官方文档。 |
|||
"version_url": "", # 也可以不填写version,直接填写在千帆申请模型发布的API地址 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "QianFanWorker", |
|||
}, |
|||
|
|||
# 火山方舟 API,文档参考 https://www.volcengine.com/docs/82379 |
|||
"fangzhou-api": { |
|||
"version": "chatglm-6b-model", # 当前支持 "chatglm-6b-model", 更多的见文档模型支持列表中方舟部分。 |
|||
"version_url": "", # 可以不填写version,直接填写在方舟申请模型发布的API地址 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "FangZhouWorker", |
|||
}, |
|||
|
|||
# 阿里云通义千问 API,文档参考 https://help.aliyun.com/zh/dashscope/developer-reference/api-details |
|||
"qwen-api": { |
|||
"version": "qwen-turbo", # 可选包括 "qwen-turbo", "qwen-plus" |
|||
"api_key": "", # 请在阿里云控制台模型服务灵积API-KEY管理页面创建 |
|||
"provider": "QwenWorker", |
|||
}, |
|||
|
|||
# 百川 API,申请方式请参考 https://www.baichuan-ai.com/home#api-enter |
|||
"baichuan-api": { |
|||
"version": "Baichuan2-53B", # 当前支持 "Baichuan2-53B", 见官方文档。 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "BaiChuanWorker", |
|||
}, |
|||
|
|||
# Azure API |
|||
"azure-api": { |
|||
"deployment_name": "", # 部署容器的名字 |
|||
"resource_name": "", # https://{resource_name}.openai.azure.com/openai/ 填写resource_name的部分,其他部分不要填写 |
|||
"api_version": "", # API的版本,不是模型版本 |
|||
"api_key": "", |
|||
"provider": "AzureWorker", |
|||
}, |
|||
|
|||
} |
|||
|
|||
# 在以下字典中修改属性值,以指定本地embedding模型存储位置。支持3种设置方法: |
|||
# 1、将对应的值修改为模型绝对路径 |
|||
# 2、不修改此处的值(以 text2vec 为例): |
|||
# 2.1 如果{MODEL_ROOT_PATH}下存在如下任一子目录: |
|||
# - text2vec |
|||
# - GanymedeNil/text2vec-large-chinese |
|||
# - text2vec-large-chinese |
|||
# 2.2 如果以上本地路径不存在,则使用huggingface模型 |
|||
MODEL_PATH = { |
|||
"embed_model": { |
|||
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh", |
|||
"ernie-base": "nghuyong/ernie-3.0-base-zh", |
|||
"text2vec-base": "shibing624/text2vec-base-chinese", |
|||
"text2vec": "GanymedeNil/text2vec-large-chinese", |
|||
"text2vec-paraphrase": "shibing624/text2vec-base-chinese-paraphrase", |
|||
"text2vec-sentence": "shibing624/text2vec-base-chinese-sentence", |
|||
"text2vec-multilingual": "shibing624/text2vec-base-multilingual", |
|||
"text2vec-bge-large-chinese": "shibing624/text2vec-bge-large-chinese", |
|||
"m3e-small": "moka-ai/m3e-small", |
|||
"m3e-base": "moka-ai/m3e-base", |
|||
"m3e-large": "moka-ai/m3e-large", |
|||
"bge-small-zh": "BAAI/bge-small-zh", |
|||
"bge-base-zh": "BAAI/bge-base-zh", |
|||
"bge-large-zh": "BAAI/bge-large-zh", |
|||
"bge-large-zh-noinstruct": "BAAI/bge-large-zh-noinstruct", |
|||
"bge-base-zh-v1.5": "BAAI/bge-base-zh-v1.5", |
|||
"bge-large-zh-v1.5": "/Users/Angela/Documents/LLM Model/Xorbits/bge-large-zh-v1.5", |
|||
"piccolo-base-zh": "sensenova/piccolo-base-zh", |
|||
"piccolo-large-zh": "sensenova/piccolo-large-zh", |
|||
"text-embedding-ada-002": "your OPENAI_API_KEY", |
|||
}, |
|||
|
|||
"llm_model": { |
|||
# 以下部分模型并未完全测试,仅根据fastchat和vllm模型的模型列表推定支持 |
|||
"chatglm2-6b": "THUDM/chatglm2-6b", |
|||
"chatglm2-6b-32k": "THUDM/chatglm2-6b-32k", |
|||
"chatglm3-6b": "/Users/Angela/Documents/LLM Model/ZhipuAI/chatglm3-6b", |
|||
|
|||
"baichuan2-13b": "baichuan-inc/Baichuan2-13B-Chat", |
|||
"baichuan2-7b": "baichuan-inc/Baichuan2-7B-Chat", |
|||
|
|||
"baichuan-7b": "baichuan-inc/Baichuan-7B", |
|||
"baichuan-13b": "baichuan-inc/Baichuan-13B", |
|||
'baichuan-13b-chat': 'baichuan-inc/Baichuan-13B-Chat', |
|||
|
|||
"aquila-7b": "BAAI/Aquila-7B", |
|||
"aquilachat-7b": "BAAI/AquilaChat-7B", |
|||
|
|||
"internlm-7b": "internlm/internlm-7b", |
|||
"internlm-chat-7b": "internlm/internlm-chat-7b", |
|||
|
|||
"falcon-7b": "tiiuae/falcon-7b", |
|||
"falcon-40b": "tiiuae/falcon-40b", |
|||
"falcon-rw-7b": "tiiuae/falcon-rw-7b", |
|||
|
|||
"gpt2": "gpt2", |
|||
"gpt2-xl": "gpt2-xl", |
|||
|
|||
"gpt-j-6b": "EleutherAI/gpt-j-6b", |
|||
"gpt4all-j": "nomic-ai/gpt4all-j", |
|||
"gpt-neox-20b": "EleutherAI/gpt-neox-20b", |
|||
"pythia-12b": "EleutherAI/pythia-12b", |
|||
"oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", |
|||
"dolly-v2-12b": "databricks/dolly-v2-12b", |
|||
"stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b", |
|||
|
|||
"Llama-2-13b-hf": "meta-llama/Llama-2-13b-hf", |
|||
"Llama-2-70b-hf": "meta-llama/Llama-2-70b-hf", |
|||
"Meta-Llama-3-8B-Instruct": "/Users/Angela/Documents/LLM Model/LLM-Research/Meta-Llama-3-8B-Instruct", |
|||
"open_llama_13b": "openlm-research/open_llama_13b", |
|||
"vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3", |
|||
"vicuna-13b-v1.5": "/Users/Angela/Documents/LLM Model/Xorbits/vicuna-13b-v1.5", |
|||
"vicuna-7b-v1.5": "/Users/Angela/Documents/LLM Model/Xorbits/vicuna-7b-v1.5", |
|||
"koala": "young-geng/koala", |
|||
"mpt-7b": "mosaicml/mpt-7b", |
|||
"mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter", |
|||
"mpt-30b": "mosaicml/mpt-30b", |
|||
"opt-66b": "facebook/opt-66b", |
|||
"opt-iml-max-30b": "facebook/opt-iml-max-30b", |
|||
|
|||
"Qwen-7B": "Qwen/Qwen-7B", |
|||
"Qwen-14B": "/Users/Angela/Documents/LLM Model/qwen/Qwen-14B", |
|||
"Qwen-7B-Chat": "/Users/Angela/Documents/LLM Model/qwen/Qwen-7B-Chat", |
|||
"Qwen-14B-Chat": "/Users/Angela/Documents/LLM Model/qwen/Qwen-14B-Chat", |
|||
"Qwen-14B-Chat-Int8": "Qwen/Qwen-14B-Chat-Int8", # 确保已经安装了auto-gptq optimum flash-attn |
|||
"Qwen-14B-Chat-Int4": "Qwen/Qwen-14B-Chat-Int4", # 确保已经安装了auto-gptq optimum flash-attn |
|||
}, |
|||
} |
|||
|
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# nltk 模型存储路径 |
|||
NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data") |
|||
|
|||
VLLM_MODEL_DICT = { |
|||
"aquila-7b": "BAAI/Aquila-7B", |
|||
"aquilachat-7b": "BAAI/AquilaChat-7B", |
|||
|
|||
"baichuan-7b": "baichuan-inc/Baichuan-7B", |
|||
"baichuan-13b": "baichuan-inc/Baichuan-13B", |
|||
'baichuan-13b-chat': 'baichuan-inc/Baichuan-13B-Chat', |
|||
# 注意:bloom系列的tokenizer与model是分离的,因此虽然vllm支持,但与fschat框架不兼容 |
|||
# "bloom":"bigscience/bloom", |
|||
# "bloomz":"bigscience/bloomz", |
|||
# "bloomz-560m":"bigscience/bloomz-560m", |
|||
# "bloomz-7b1":"bigscience/bloomz-7b1", |
|||
# "bloomz-1b7":"bigscience/bloomz-1b7", |
|||
|
|||
"internlm-7b": "internlm/internlm-7b", |
|||
"internlm-chat-7b": "internlm/internlm-chat-7b", |
|||
"falcon-7b": "tiiuae/falcon-7b", |
|||
"falcon-40b": "tiiuae/falcon-40b", |
|||
"falcon-rw-7b": "tiiuae/falcon-rw-7b", |
|||
"gpt2": "gpt2", |
|||
"gpt2-xl": "gpt2-xl", |
|||
"gpt-j-6b": "EleutherAI/gpt-j-6b", |
|||
"gpt4all-j": "nomic-ai/gpt4all-j", |
|||
"gpt-neox-20b": "EleutherAI/gpt-neox-20b", |
|||
"pythia-12b": "EleutherAI/pythia-12b", |
|||
"oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", |
|||
"dolly-v2-12b": "databricks/dolly-v2-12b", |
|||
"stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b", |
|||
"Llama-2-13b-hf": "meta-llama/Llama-2-13b-hf", |
|||
"Llama-2-70b-hf": "meta-llama/Llama-2-70b-hf", |
|||
"open_llama_13b": "openlm-research/open_llama_13b", |
|||
"vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3", |
|||
"koala": "young-geng/koala", |
|||
"mpt-7b": "mosaicml/mpt-7b", |
|||
"mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter", |
|||
"mpt-30b": "mosaicml/mpt-30b", |
|||
"opt-66b": "facebook/opt-66b", |
|||
"opt-iml-max-30b": "facebook/opt-iml-max-30b", |
|||
|
|||
"Qwen-7B": "Qwen/Qwen-7B", |
|||
"Qwen-14B": "Qwen/Qwen-14B", |
|||
"Qwen-7B-Chat": "Qwen/Qwen-7B-Chat", |
|||
"Qwen-14B-Chat": "Qwen/Qwen-14B-Chat", |
|||
|
|||
"agentlm-7b": "THUDM/agentlm-7b", |
|||
"agentlm-13b": "THUDM/agentlm-13b", |
|||
"agentlm-70b": "THUDM/agentlm-70b", |
|||
|
|||
} |
|||
|
|||
# 你认为支持Agent能力的模型,可以在这里添加,添加后不会出现可视化界面的警告 |
|||
SUPPORT_AGENT_MODEL = [ |
|||
"azure-api", |
|||
"openai-api", |
|||
"claude-api", |
|||
"zhipu-api", |
|||
"qwen-api", |
|||
"Qwen", |
|||
"baichuan-api", |
|||
"agentlm", |
|||
"chatglm3", |
|||
"xinghuo-api", |
|||
] |
@ -0,0 +1,273 @@ |
|||
import os |
|||
|
|||
|
|||
# 可以指定一个绝对路径,统一存放所有的Embedding和LLM模型。 |
|||
# 每个模型可以是一个单独的目录,也可以是某个目录下的二级子目录。 |
|||
# 如果模型目录名称和 MODEL_PATH 中的 key 或 value 相同,程序会自动检测加载,无需修改 MODEL_PATH 中的路径。 |
|||
MODEL_ROOT_PATH = "" |
|||
|
|||
# 选用的 Embedding 名称 |
|||
EMBEDDING_MODEL = "m3e-base" # bge-large-zh |
|||
|
|||
# Embedding 模型运行设备。设为"auto"会自动检测,也可手动设定为"cuda","mps","cpu"其中之一。 |
|||
EMBEDDING_DEVICE = "auto" |
|||
|
|||
# 如果需要在 EMBEDDING_MODEL 中增加自定义的关键字时配置 |
|||
EMBEDDING_KEYWORD_FILE = "keywords.txt" |
|||
EMBEDDING_MODEL_OUTPUT_PATH = "output" |
|||
|
|||
# 要运行的 LLM 名称,可以包括本地模型和在线模型。 |
|||
# 第一个将作为 API 和 WEBUI 的默认模型 |
|||
LLM_MODELS = ["chatglm2-6b", "zhipu-api", "openai-api"] |
|||
|
|||
# AgentLM模型的名称 (可以不指定,指定之后就锁定进入Agent之后的Chain的模型,不指定就是LLM_MODELS[0]) |
|||
Agent_MODEL = None |
|||
|
|||
# LLM 运行设备。设为"auto"会自动检测,也可手动设定为"cuda","mps","cpu"其中之一。 |
|||
LLM_DEVICE = "auto" |
|||
|
|||
# 历史对话轮数 |
|||
HISTORY_LEN = 3 |
|||
|
|||
# 大模型最长支持的长度,如果不填写,则使用模型默认的最大长度,如果填写,则为用户设定的最大长度 |
|||
MAX_TOKENS = None |
|||
|
|||
# LLM通用对话参数 |
|||
TEMPERATURE = 0.7 |
|||
# TOP_P = 0.95 # ChatOpenAI暂不支持该参数 |
|||
|
|||
ONLINE_LLM_MODEL = { |
|||
# 线上模型。请在server_config中为每个在线API设置不同的端口 |
|||
|
|||
"openai-api": { |
|||
"model_name": "gpt-35-turbo", |
|||
"api_base_url": "https://api.openai.com/v1", |
|||
"api_key": "", |
|||
"openai_proxy": "", |
|||
}, |
|||
|
|||
# 具体注册及api key获取请前往 http://open.bigmodel.cn |
|||
"zhipu-api": { |
|||
"api_key": "", |
|||
"version": "chatglm_turbo", # 可选包括 "chatglm_turbo" |
|||
"provider": "ChatGLMWorker", |
|||
}, |
|||
|
|||
|
|||
# 具体注册及api key获取请前往 https://api.minimax.chat/ |
|||
"minimax-api": { |
|||
"group_id": "", |
|||
"api_key": "", |
|||
"is_pro": False, |
|||
"provider": "MiniMaxWorker", |
|||
}, |
|||
|
|||
|
|||
# 具体注册及api key获取请前往 https://xinghuo.xfyun.cn/ |
|||
"xinghuo-api": { |
|||
"APPID": "", |
|||
"APISecret": "", |
|||
"api_key": "", |
|||
"version": "v1.5", # 你使用的讯飞星火大模型版本,可选包括 "v3.0", "v1.5", "v2.0" |
|||
"provider": "XingHuoWorker", |
|||
}, |
|||
|
|||
# 百度千帆 API,申请方式请参考 https://cloud.baidu.com/doc/WENXINWORKSHOP/s/4lilb2lpf |
|||
"qianfan-api": { |
|||
"version": "ERNIE-Bot", # 注意大小写。当前支持 "ERNIE-Bot" 或 "ERNIE-Bot-turbo", 更多的见官方文档。 |
|||
"version_url": "", # 也可以不填写version,直接填写在千帆申请模型发布的API地址 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "QianFanWorker", |
|||
}, |
|||
|
|||
# 火山方舟 API,文档参考 https://www.volcengine.com/docs/82379 |
|||
"fangzhou-api": { |
|||
"version": "chatglm-6b-model", # 当前支持 "chatglm-6b-model", 更多的见文档模型支持列表中方舟部分。 |
|||
"version_url": "", # 可以不填写version,直接填写在方舟申请模型发布的API地址 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "FangZhouWorker", |
|||
}, |
|||
|
|||
# 阿里云通义千问 API,文档参考 https://help.aliyun.com/zh/dashscope/developer-reference/api-details |
|||
"qwen-api": { |
|||
"version": "qwen-turbo", # 可选包括 "qwen-turbo", "qwen-plus" |
|||
"api_key": "", # 请在阿里云控制台模型服务灵积API-KEY管理页面创建 |
|||
"provider": "QwenWorker", |
|||
}, |
|||
|
|||
# 百川 API,申请方式请参考 https://www.baichuan-ai.com/home#api-enter |
|||
"baichuan-api": { |
|||
"version": "Baichuan2-53B", # 当前支持 "Baichuan2-53B", 见官方文档。 |
|||
"api_key": "", |
|||
"secret_key": "", |
|||
"provider": "BaiChuanWorker", |
|||
}, |
|||
|
|||
# Azure API |
|||
"azure-api": { |
|||
"deployment_name": "", # 部署容器的名字 |
|||
"resource_name": "", # https://{resource_name}.openai.azure.com/openai/ 填写resource_name的部分,其他部分不要填写 |
|||
"api_version": "", # API的版本,不是模型版本 |
|||
"api_key": "", |
|||
"provider": "AzureWorker", |
|||
}, |
|||
|
|||
} |
|||
|
|||
# 在以下字典中修改属性值,以指定本地embedding模型存储位置。支持3种设置方法: |
|||
# 1、将对应的值修改为模型绝对路径 |
|||
# 2、不修改此处的值(以 text2vec 为例): |
|||
# 2.1 如果{MODEL_ROOT_PATH}下存在如下任一子目录: |
|||
# - text2vec |
|||
# - GanymedeNil/text2vec-large-chinese |
|||
# - text2vec-large-chinese |
|||
# 2.2 如果以上本地路径不存在,则使用huggingface模型 |
|||
MODEL_PATH = { |
|||
"embed_model": { |
|||
"ernie-tiny": "nghuyong/ernie-3.0-nano-zh", |
|||
"ernie-base": "nghuyong/ernie-3.0-base-zh", |
|||
"text2vec-base": "shibing624/text2vec-base-chinese", |
|||
"text2vec": "GanymedeNil/text2vec-large-chinese", |
|||
"text2vec-paraphrase": "shibing624/text2vec-base-chinese-paraphrase", |
|||
"text2vec-sentence": "shibing624/text2vec-base-chinese-sentence", |
|||
"text2vec-multilingual": "shibing624/text2vec-base-multilingual", |
|||
"text2vec-bge-large-chinese": "shibing624/text2vec-bge-large-chinese", |
|||
"m3e-small": "moka-ai/m3e-small", |
|||
"m3e-base": "moka-ai/m3e-base", |
|||
"m3e-large": "moka-ai/m3e-large", |
|||
"bge-small-zh": "BAAI/bge-small-zh", |
|||
"bge-base-zh": "BAAI/bge-base-zh", |
|||
"bge-large-zh": "BAAI/bge-large-zh", |
|||
"bge-large-zh-noinstruct": "BAAI/bge-large-zh-noinstruct", |
|||
"bge-base-zh-v1.5": "BAAI/bge-base-zh-v1.5", |
|||
"bge-large-zh-v1.5": "BAAI/bge-large-zh-v1.5", |
|||
"piccolo-base-zh": "sensenova/piccolo-base-zh", |
|||
"piccolo-large-zh": "sensenova/piccolo-large-zh", |
|||
"text-embedding-ada-002": "your OPENAI_API_KEY", |
|||
}, |
|||
|
|||
"llm_model": { |
|||
# 以下部分模型并未完全测试,仅根据fastchat和vllm模型的模型列表推定支持 |
|||
"chatglm2-6b": "THUDM/chatglm2-6b", |
|||
"chatglm2-6b-32k": "THUDM/chatglm2-6b-32k", |
|||
|
|||
"baichuan2-13b": "baichuan-inc/Baichuan2-13B-Chat", |
|||
"baichuan2-7b": "baichuan-inc/Baichuan2-7B-Chat", |
|||
|
|||
"baichuan-7b": "baichuan-inc/Baichuan-7B", |
|||
"baichuan-13b": "baichuan-inc/Baichuan-13B", |
|||
'baichuan-13b-chat': 'baichuan-inc/Baichuan-13B-Chat', |
|||
|
|||
"aquila-7b": "BAAI/Aquila-7B", |
|||
"aquilachat-7b": "BAAI/AquilaChat-7B", |
|||
|
|||
"internlm-7b": "internlm/internlm-7b", |
|||
"internlm-chat-7b": "internlm/internlm-chat-7b", |
|||
|
|||
"falcon-7b": "tiiuae/falcon-7b", |
|||
"falcon-40b": "tiiuae/falcon-40b", |
|||
"falcon-rw-7b": "tiiuae/falcon-rw-7b", |
|||
|
|||
"gpt2": "gpt2", |
|||
"gpt2-xl": "gpt2-xl", |
|||
|
|||
"gpt-j-6b": "EleutherAI/gpt-j-6b", |
|||
"gpt4all-j": "nomic-ai/gpt4all-j", |
|||
"gpt-neox-20b": "EleutherAI/gpt-neox-20b", |
|||
"pythia-12b": "EleutherAI/pythia-12b", |
|||
"oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", |
|||
"dolly-v2-12b": "databricks/dolly-v2-12b", |
|||
"stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b", |
|||
|
|||
"Llama-2-13b-hf": "meta-llama/Llama-2-13b-hf", |
|||
"Llama-2-70b-hf": "meta-llama/Llama-2-70b-hf", |
|||
"open_llama_13b": "openlm-research/open_llama_13b", |
|||
"vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3", |
|||
"koala": "young-geng/koala", |
|||
|
|||
"mpt-7b": "mosaicml/mpt-7b", |
|||
"mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter", |
|||
"mpt-30b": "mosaicml/mpt-30b", |
|||
"opt-66b": "facebook/opt-66b", |
|||
"opt-iml-max-30b": "facebook/opt-iml-max-30b", |
|||
|
|||
"Qwen-7B": "Qwen/Qwen-7B", |
|||
"Qwen-14B": "Qwen/Qwen-14B", |
|||
"Qwen-7B-Chat": "Qwen/Qwen-7B-Chat", |
|||
"Qwen-14B-Chat": "Qwen/Qwen-14B-Chat", |
|||
"Qwen-14B-Chat-Int8": "Qwen/Qwen-14B-Chat-Int8", # 确保已经安装了auto-gptq optimum flash-attn |
|||
"Qwen-14B-Chat-Int4": "Qwen/Qwen-14B-Chat-Int4", # 确保已经安装了auto-gptq optimum flash-attn |
|||
}, |
|||
} |
|||
|
|||
|
|||
# 通常情况下不需要更改以下内容 |
|||
|
|||
# nltk 模型存储路径 |
|||
NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data") |
|||
|
|||
VLLM_MODEL_DICT = { |
|||
"aquila-7b": "BAAI/Aquila-7B", |
|||
"aquilachat-7b": "BAAI/AquilaChat-7B", |
|||
|
|||
"baichuan-7b": "baichuan-inc/Baichuan-7B", |
|||
"baichuan-13b": "baichuan-inc/Baichuan-13B", |
|||
'baichuan-13b-chat': 'baichuan-inc/Baichuan-13B-Chat', |
|||
# 注意:bloom系列的tokenizer与model是分离的,因此虽然vllm支持,但与fschat框架不兼容 |
|||
# "bloom":"bigscience/bloom", |
|||
# "bloomz":"bigscience/bloomz", |
|||
# "bloomz-560m":"bigscience/bloomz-560m", |
|||
# "bloomz-7b1":"bigscience/bloomz-7b1", |
|||
# "bloomz-1b7":"bigscience/bloomz-1b7", |
|||
|
|||
"internlm-7b": "internlm/internlm-7b", |
|||
"internlm-chat-7b": "internlm/internlm-chat-7b", |
|||
"falcon-7b": "tiiuae/falcon-7b", |
|||
"falcon-40b": "tiiuae/falcon-40b", |
|||
"falcon-rw-7b": "tiiuae/falcon-rw-7b", |
|||
"gpt2": "gpt2", |
|||
"gpt2-xl": "gpt2-xl", |
|||
"gpt-j-6b": "EleutherAI/gpt-j-6b", |
|||
"gpt4all-j": "nomic-ai/gpt4all-j", |
|||
"gpt-neox-20b": "EleutherAI/gpt-neox-20b", |
|||
"pythia-12b": "EleutherAI/pythia-12b", |
|||
"oasst-sft-4-pythia-12b-epoch-3.5": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", |
|||
"dolly-v2-12b": "databricks/dolly-v2-12b", |
|||
"stablelm-tuned-alpha-7b": "stabilityai/stablelm-tuned-alpha-7b", |
|||
"Llama-2-13b-hf": "meta-llama/Llama-2-13b-hf", |
|||
"Llama-2-70b-hf": "meta-llama/Llama-2-70b-hf", |
|||
"open_llama_13b": "openlm-research/open_llama_13b", |
|||
"vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3", |
|||
"koala": "young-geng/koala", |
|||
"mpt-7b": "mosaicml/mpt-7b", |
|||
"mpt-7b-storywriter": "mosaicml/mpt-7b-storywriter", |
|||
"mpt-30b": "mosaicml/mpt-30b", |
|||
"opt-66b": "facebook/opt-66b", |
|||
"opt-iml-max-30b": "facebook/opt-iml-max-30b", |
|||
|
|||
"Qwen-7B": "Qwen/Qwen-7B", |
|||
"Qwen-14B": "Qwen/Qwen-14B", |
|||
"Qwen-7B-Chat": "Qwen/Qwen-7B-Chat", |
|||
"Qwen-14B-Chat": "Qwen/Qwen-14B-Chat", |
|||
|
|||
"agentlm-7b": "THUDM/agentlm-7b", |
|||
"agentlm-13b": "THUDM/agentlm-13b", |
|||
"agentlm-70b": "THUDM/agentlm-70b", |
|||
|
|||
} |
|||
|
|||
# 你认为支持Agent能力的模型,可以在这里添加,添加后不会出现可视化界面的警告 |
|||
SUPPORT_AGENT_MODEL = [ |
|||
"azure-api", |
|||
"openai-api", |
|||
"claude-api", |
|||
"zhipu-api", |
|||
"qwen-api", |
|||
"Qwen", |
|||
"baichuan-api", |
|||
"agentlm", |
|||
"chatglm3", |
|||
"xinghuo-api", |
|||
] |
@ -0,0 +1,158 @@ |
|||
# prompt模板使用Jinja2语法,简单点就是用双大括号代替f-string的单大括号 |
|||
# 本配置文件支持热加载,修改prompt模板后无需重启服务。 |
|||
|
|||
|
|||
# LLM对话支持的变量: |
|||
# - input: 用户输入内容 |
|||
|
|||
# 知识库和搜索引擎对话支持的变量: |
|||
# - context: 从检索结果拼接的知识文本 |
|||
# - question: 用户提出的问题 |
|||
|
|||
# Agent对话支持的变量: |
|||
|
|||
# - tools: 可用的工具列表 |
|||
# - tool_names: 可用的工具名称列表 |
|||
# - history: 用户和Agent的对话历史 |
|||
# - input: 用户输入内容 |
|||
# - agent_scratchpad: Agent的思维记录 |
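# 渲染示例(仅作说明,双大括号即 Jinja2 的变量占位符):
#   from jinja2 import Template
#   Template("{{ input }}").render(input="你好")   # -> '你好'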
|||
|
|||
PROMPT_TEMPLATES = { |
|||
"completion": { |
|||
"default": "{input}" |
|||
}, |
|||
|
|||
"llm_chat": { |
|||
"default": "{{ input }}", |
|||
|
|||
"py": |
|||
""" |
|||
你是一个聪明的代码助手,请你给我写出简单的py代码。 \n |
|||
{{ input }} |
|||
""" |
|||
, |
|||
}, |
|||
|
|||
"knowledge_base_chat": { |
|||
"default": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业地回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,不允许在答案中添加编造成分,答案请使用中文。 </指令>
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"text": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业地回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,答案请使用中文。 </指令>
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"Empty": # 搜不到内容的时候调用,此时没有已知信息,这个Empty可以更改,但不能删除,会影响程序使用 |
|||
""" |
|||
<指令>请根据用户的问题,进行简洁明了的回答</指令> |
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
}, |
|||
|
|||
"search_engine_chat": { |
|||
"default": |
|||
""" |
|||
<指令>这是我搜索到的互联网信息,请你根据这些信息进行提取并有条理、简洁地回答问题。如果无法从中得到答案,请说 “无法搜索到能回答问题的内容”。 </指令>
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"search": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业地回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,答案请使用中文。 </指令>
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"Empty": # 搜不到内容的时候调用,此时没有已知信息,这个Empty可以更改,但不能删除,会影响程序使用 |
|||
""" |
|||
<指令>请根据用户的问题,进行简洁明了的回答</指令> |
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
}, |
|||
|
|||
"agent_chat": { |
|||
"default": |
|||
""" |
|||
Answer the following questions as best you can. When appropriate, you can use some tools. You have access to the following tools:
|||
|
|||
{tools} |
|||
|
|||
Please note that the "知识库查询工具" contains information about "西交利物浦大学"; if a question is asked about it, you must answer with the knowledge base.
Please note that the "天气查询工具" can only be used once per question.
|||
|
|||
Use the following format: |
|||
Question: the input question you must answer
|||
Thought: you should always think about what to do and what tools to use. |
|||
Action: the action to take, should be one of [{tool_names}] |
|||
Action Input: the input to the action |
|||
Observation: the result of the action |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: I now know the final answer |
|||
Final Answer: the final answer to the original input question |
|||
|
|||
|
|||
Begin! |
|||
history: |
|||
{history} |
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
""", |
|||
|
|||
"AgentLM": |
|||
""" |
|||
<<SYS>>\n
|||
You are a helpful, respectful and honest assistant. |
|||
<</SYS>>\n
|||
Answer the following questions as best you can. When appropriate, you can use some tools. You have access to the following tools:
|||
|
|||
{tools}. |
|||
|
|||
Use the following steps and think step by step!: |
|||
Question: the input question you must answer
|||
Thought: you should always think about what to do and what tools to use. |
|||
Action: the action to take, should be one of [{tool_names}] |
|||
Action Input: the input to the action |
|||
Observation: the result of the action |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: I now know the final answer |
|||
Final Answer: the final answer to the original input question |
|||
|
|||
Begin! let's think step by step! |
|||
history: |
|||
{history} |
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
|
|||
""", |
|||
|
|||
"中文版本": |
|||
""" |
|||
你的知识不一定正确,所以你一定要用提供的工具来思考,并给出用户答案。 |
|||
你有以下工具可以使用: |
|||
{tools} |
|||
|
|||
请严格按照提供的思维方式来思考,所有的关键词都要输出,例如Action,Action Input,Observation等
|||
``` |
|||
Question: 用户的提问或者观察到的信息, |
|||
Thought: 你应该思考该做什么,是根据工具的结果来回答问题,还是决定使用什么工具。 |
|||
Action: 需要使用的工具,应该是在[{tool_names}]中的一个。 |
|||
Action Input: 传入工具的内容 |
|||
Observation: 工具给出的答案(不是你生成的) |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: 通过工具给出的答案,你是否能回答Question。 |
|||
Final Answer: 你给出的最终答案
|||
|
|||
现在,我们开始! |
|||
你和用户的历史记录: |
|||
History: |
|||
{history} |
|||
|
|||
用户开始提问:
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
""", |
|||
}, |
|||
} |
@ -0,0 +1,158 @@ |
|||
# prompt模板使用Jinja2语法,简单点就是用双大括号代替f-string的单大括号 |
|||
# 本配置文件支持热加载,修改prompt模板后无需重启服务。 |
|||
|
|||
|
|||
# LLM对话支持的变量: |
|||
# - input: 用户输入内容 |
|||
|
|||
# 知识库和搜索引擎对话支持的变量: |
|||
# - context: 从检索结果拼接的知识文本 |
|||
# - question: 用户提出的问题 |
|||
|
|||
# Agent对话支持的变量: |
|||
|
|||
# - tools: 可用的工具列表 |
|||
# - tool_names: 可用的工具名称列表 |
|||
# - history: 用户和Agent的对话历史 |
|||
# - input: 用户输入内容 |
|||
# - agent_scratchpad: Agent的思维记录 |
|||
|
|||
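# Rendering example (illustration only — the service performs this rendering internally; assumes jinja2 is installed):
# from jinja2 import Template
# prompt = Template(PROMPT_TEMPLATES["knowledge_base_chat"]["default"]).render(
#     context="retrieved knowledge text", question="user question")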
PROMPT_TEMPLATES = { |
|||
"completion": { |
|||
"default": "{input}" |
|||
}, |
|||
|
|||
"llm_chat": { |
|||
"default": "{{ input }}", |
|||
|
|||
"py": |
|||
""" |
|||
你是一个聪明的代码助手,请你给我写出简单的py代码。 \n |
|||
{{ input }} |
|||
""" |
|||
, |
|||
}, |
|||
|
|||
"knowledge_base_chat": { |
|||
"default": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,不允许在答案中添加编造成分,答案请使用中文。 </指令> |
|||
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"text": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,答案请使用中文。 </指令> |
|||
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"Empty": # 搜不到内容的时候调用,此时没有已知信息,这个Empty可以更改,但不能删除,会影响程序使用 |
|||
""" |
|||
<指令>请根据用户的问题,进行简洁明了的回答</指令> |
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
}, |
|||
|
|||
"search_engine_chat": { |
|||
"default": |
|||
""" |
|||
<指令>这是我搜索到的互联网信息,请你根据这些信息进行提取并有条理、简洁地回答问题。如果无法从中得到答案,请说 “无法搜索到能回答问题的内容”。 </指令>
|||
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"search": |
|||
""" |
|||
<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 “根据已知信息无法回答该问题”,答案请使用中文。 </指令> |
|||
<已知信息>{{ context }}</已知信息>
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
"Empty": # 搜不到内容的时候调用,此时没有已知信息,这个Empty可以更改,但不能删除,会影响程序使用 |
|||
""" |
|||
<指令>请根据用户的问题,进行简洁明了的回答</指令> |
|||
<问题>{{ question }}</问题> |
|||
""", |
|||
}, |
|||
|
|||
"agent_chat": { |
|||
"default": |
|||
""" |
|||
Answer the following questions as best you can. If needed, you can use some tools appropriately. You have access to the following tools:
|||
|
|||
{tools} |
|||
|
|||
Please note that the "知识库查询工具" is information about the "西交利物浦大学" ,and if a question is asked about it, you must answer with the knowledge base, |
|||
Please note that the "天气查询工具" can only be used once since Question begin. |
|||
|
|||
Use the following format: |
|||
Question: the input question you must answer
|||
Thought: you should always think about what to do and what tools to use. |
|||
Action: the action to take, should be one of [{tool_names}] |
|||
Action Input: the input to the action |
|||
Observation: the result of the action |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: I now know the final answer |
|||
Final Answer: the final answer to the original input question |
|||
|
|||
|
|||
Begin! |
|||
history: |
|||
{history} |
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
""", |
|||
|
|||
"AgentLM": |
|||
""" |
|||
<<SYS>>\n
|||
You are a helpful, respectful and honest assistant.
|||
<</SYS>>\n
|||
Answer the following questions as best you can. If needed, you can use some tools appropriately. You have access to the following tools:
|||
|
|||
{tools}. |
|||
|
|||
Use the following steps and think step by step!: |
|||
Question: the input question you must answer
|||
Thought: you should always think about what to do and what tools to use. |
|||
Action: the action to take, should be one of [{tool_names}] |
|||
Action Input: the input to the action |
|||
Observation: the result of the action |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: I now know the final answer |
|||
Final Answer: the final answer to the original input question |
|||
|
|||
Begin! let's think step by step! |
|||
history: |
|||
{history} |
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
|
|||
""", |
|||
|
|||
"中文版本": |
|||
""" |
|||
你的知识不一定正确,所以你一定要用提供的工具来思考,并给出用户答案。 |
|||
你有以下工具可以使用: |
|||
{tools} |
|||
|
|||
请严格按照提供的思维方式来思考,所有的关键词都要输出,例如Action,Action Input,Observation等
|||
``` |
|||
Question: 用户的提问或者观察到的信息, |
|||
Thought: 你应该思考该做什么,是根据工具的结果来回答问题,还是决定使用什么工具。 |
|||
Action: 需要使用的工具,应该是在[{tool_names}]中的一个。 |
|||
Action Input: 传入工具的内容 |
|||
Observation: 工具给出的答案(不是你生成的) |
|||
... (this Thought/Action/Action Input/Observation can be repeated zero or more times) |
|||
Thought: 通过工具给出的答案,你是否能回答Question。 |
|||
Final Answer: 你给出的最终答案
|||
|
|||
现在,我们开始! |
|||
你和用户的历史记录: |
|||
History: |
|||
{history} |
|||
|
|||
用户开始提问:
|||
Question: {input} |
|||
Thought: {agent_scratchpad} |
|||
""", |
|||
}, |
|||
} |
@ -0,0 +1,135 @@ |
|||
import sys |
|||
from configs.model_config import LLM_DEVICE |
|||
|
|||
# httpx 请求默认超时时间(秒)。如果加载模型或对话较慢,出现超时错误,可以适当加大该值。 |
|||
HTTPX_DEFAULT_TIMEOUT = 300.0 |
|||
|
|||
# API 是否开启跨域,默认为False,如果需要开启,请设置为True |
|||
# is open cross domain |
|||
OPEN_CROSS_DOMAIN = False |
|||
|
|||
# 各服务器默认绑定host。如改为"0.0.0.0"需要修改下方所有XX_SERVER的host |
|||
DEFAULT_BIND_HOST = "localhost" if sys.platform != "win32" else "127.0.0.1" |
|||
|
|||
# webui.py server |
|||
WEBUI_SERVER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 8501, |
|||
} |
|||
|
|||
# api.py server |
|||
API_SERVER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 7861, |
|||
} |
|||
|
|||
# fastchat openai_api server |
|||
FSCHAT_OPENAI_API = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20000, |
|||
} |
|||
|
|||
# fastchat model_worker server |
|||
# 这些模型必须是在model_config.MODEL_PATH或ONLINE_MODEL中正确配置的。 |
|||
# 在启动startup.py时,可以通过`--model-name xxxx yyyy`指定模型,不指定则为LLM_MODELS
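# 示例 (illustrative invocation; companion flags and model names are assumptions and depend on your deployment and model_config):
#   python startup.py --model-name chatglm2-6b Qwen-7B-Chat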
|||
FSCHAT_MODEL_WORKERS = { |
|||
# 所有模型共用的默认配置,可在模型专项配置中进行覆盖。 |
|||
"default": { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20002, |
|||
"device": LLM_DEVICE, |
|||
# 取值为 False 或 'vllm',表示使用的推理加速框架;使用 vllm 时如果出现 HuggingFace 通信问题,参见 doc/FAQ
|||
# vllm对一些模型支持还不成熟,暂时默认关闭 |
|||
"infer_turbo": False, |
|||
|
|||
# model_worker多卡加载需要配置的参数 |
|||
# "gpus": None, # 使用的GPU,以str的格式指定,如"0,1",如失效请使用CUDA_VISIBLE_DEVICES="0,1"等形式指定 |
|||
# "num_gpus": 1, # 使用GPU的数量 |
|||
# "max_gpu_memory": "20GiB", # 每个GPU占用的最大显存 |
|||
|
|||
# 以下为model_worker非常用参数,可根据需要配置 |
|||
# "load_8bit": False, # 开启8bit量化 |
|||
# "cpu_offloading": None, |
|||
# "gptq_ckpt": None, |
|||
# "gptq_wbits": 16, |
|||
# "gptq_groupsize": -1, |
|||
# "gptq_act_order": False, |
|||
# "awq_ckpt": None, |
|||
# "awq_wbits": 16, |
|||
# "awq_groupsize": -1, |
|||
# "model_names": LLM_MODELS, |
|||
# "conv_template": None, |
|||
# "limit_worker_concurrency": 5, |
|||
# "stream_interval": 2, |
|||
# "no_register": False, |
|||
# "embed_in_truncate": False, |
|||
|
|||
# 以下为vllm_worker配置参数,注意使用vllm必须有gpu,仅在Linux测试通过
|||
|
|||
# tokenizer = model_path # 如果tokenizer与model_path不一致在此处添加 |
|||
# 'tokenizer_mode':'auto', |
|||
# 'trust_remote_code':True, |
|||
# 'download_dir':None, |
|||
# 'load_format':'auto', |
|||
# 'dtype':'auto', |
|||
# 'seed':0, |
|||
# 'worker_use_ray':False, |
|||
# 'pipeline_parallel_size':1, |
|||
# 'tensor_parallel_size':1, |
|||
# 'block_size':16, |
|||
# 'swap_space':4 , # GiB |
|||
# 'gpu_memory_utilization':0.90, |
|||
# 'max_num_batched_tokens':2560, |
|||
# 'max_num_seqs':256, |
|||
# 'disable_log_stats':False, |
|||
# 'conv_template':None, |
|||
# 'limit_worker_concurrency':5, |
|||
# 'no_register':False, |
|||
# 'num_gpus': 1 |
|||
# 'engine_use_ray': False, |
|||
# 'disable_log_requests': False |
|||
|
|||
}, |
|||
# 可以如下示例方式更改默认配置 |
|||
# "baichuan-7b": { # 使用default中的IP和端口 |
|||
# "device": "cpu", |
|||
# }, |
|||
|
|||
# 以下配置可以不用修改,在model_config中设置启动的模型
|||
"zhipu-api": { |
|||
"port": 21001, |
|||
}, |
|||
"minimax-api": { |
|||
"port": 21002, |
|||
}, |
|||
"xinghuo-api": { |
|||
"port": 21003, |
|||
}, |
|||
"qianfan-api": { |
|||
"port": 21004, |
|||
}, |
|||
"fangzhou-api": { |
|||
"port": 21005, |
|||
}, |
|||
"qwen-api": { |
|||
"port": 21006, |
|||
}, |
|||
"baichuan-api": { |
|||
"port": 21007, |
|||
}, |
|||
"azure-api": { |
|||
"port": 21008, |
|||
}, |
|||
} |
|||
|
|||
# fastchat multi model worker server |
|||
FSCHAT_MULTI_MODEL_WORKERS = { |
|||
# TODO: |
|||
} |
|||
|
|||
# fastchat controller server |
|||
FSCHAT_CONTROLLER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20001, |
|||
"dispatch_method": "shortest_queue", |
|||
} |
@ -0,0 +1,135 @@ |
|||
import sys |
|||
from configs.model_config import LLM_DEVICE |
|||
|
|||
# httpx 请求默认超时时间(秒)。如果加载模型或对话较慢,出现超时错误,可以适当加大该值。 |
|||
HTTPX_DEFAULT_TIMEOUT = 300.0 |
|||
|
|||
# API 是否开启跨域,默认为False,如果需要开启,请设置为True |
|||
# is open cross domain |
|||
OPEN_CROSS_DOMAIN = False |
|||
|
|||
# 各服务器默认绑定host。如改为"0.0.0.0"需要修改下方所有XX_SERVER的host |
|||
DEFAULT_BIND_HOST = "0.0.0.0" if sys.platform != "win32" else "127.0.0.1" |
|||
|
|||
# webui.py server |
|||
WEBUI_SERVER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 8501, |
|||
} |
|||
|
|||
# api.py server |
|||
API_SERVER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 7861, |
|||
} |
|||
|
|||
# fastchat openai_api server |
|||
FSCHAT_OPENAI_API = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20000, |
|||
} |
|||
|
|||
# fastchat model_worker server |
|||
# 这些模型必须是在model_config.MODEL_PATH或ONLINE_MODEL中正确配置的。 |
|||
# 在启动startup.py时,可以通过`--model-name xxxx yyyy`指定模型,不指定则为LLM_MODELS
|||
FSCHAT_MODEL_WORKERS = { |
|||
# 所有模型共用的默认配置,可在模型专项配置中进行覆盖。 |
|||
"default": { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20002, |
|||
"device": LLM_DEVICE, |
|||
# 取值为 False 或 'vllm',表示使用的推理加速框架;使用 vllm 时如果出现 HuggingFace 通信问题,参见 doc/FAQ
|||
# vllm对一些模型支持还不成熟,暂时默认关闭 |
|||
"infer_turbo": False, |
|||
|
|||
# model_worker多卡加载需要配置的参数 |
|||
# "gpus": None, # 使用的GPU,以str的格式指定,如"0,1",如失效请使用CUDA_VISIBLE_DEVICES="0,1"等形式指定 |
|||
# "num_gpus": 1, # 使用GPU的数量 |
|||
# "max_gpu_memory": "20GiB", # 每个GPU占用的最大显存 |
|||
|
|||
# 以下为model_worker非常用参数,可根据需要配置 |
|||
# "load_8bit": False, # 开启8bit量化 |
|||
# "cpu_offloading": None, |
|||
# "gptq_ckpt": None, |
|||
# "gptq_wbits": 16, |
|||
# "gptq_groupsize": -1, |
|||
# "gptq_act_order": False, |
|||
# "awq_ckpt": None, |
|||
# "awq_wbits": 16, |
|||
# "awq_groupsize": -1, |
|||
# "model_names": LLM_MODELS, |
|||
# "conv_template": None, |
|||
# "limit_worker_concurrency": 5, |
|||
# "stream_interval": 2, |
|||
# "no_register": False, |
|||
# "embed_in_truncate": False, |
|||
|
|||
# 以下为vllm_worker配置参数,注意使用vllm必须有gpu,仅在Linux测试通过
|||
|
|||
# tokenizer = model_path # 如果tokenizer与model_path不一致在此处添加 |
|||
# 'tokenizer_mode':'auto', |
|||
# 'trust_remote_code':True, |
|||
# 'download_dir':None, |
|||
# 'load_format':'auto', |
|||
# 'dtype':'auto', |
|||
# 'seed':0, |
|||
# 'worker_use_ray':False, |
|||
# 'pipeline_parallel_size':1, |
|||
# 'tensor_parallel_size':1, |
|||
# 'block_size':16, |
|||
# 'swap_space':4 , # GiB |
|||
# 'gpu_memory_utilization':0.90, |
|||
# 'max_num_batched_tokens':2560, |
|||
# 'max_num_seqs':256, |
|||
# 'disable_log_stats':False, |
|||
# 'conv_template':None, |
|||
# 'limit_worker_concurrency':5, |
|||
# 'no_register':False, |
|||
# 'num_gpus': 1 |
|||
# 'engine_use_ray': False, |
|||
# 'disable_log_requests': False |
|||
|
|||
}, |
|||
# 可以如下示例方式更改默认配置 |
|||
# "baichuan-7b": { # 使用default中的IP和端口 |
|||
# "device": "cpu", |
|||
# }, |
|||
|
|||
# 以下配置可以不用修改,在model_config中设置启动的模型
|||
"zhipu-api": { |
|||
"port": 21001, |
|||
}, |
|||
"minimax-api": { |
|||
"port": 21002, |
|||
}, |
|||
"xinghuo-api": { |
|||
"port": 21003, |
|||
}, |
|||
"qianfan-api": { |
|||
"port": 21004, |
|||
}, |
|||
"fangzhou-api": { |
|||
"port": 21005, |
|||
}, |
|||
"qwen-api": { |
|||
"port": 21006, |
|||
}, |
|||
"baichuan-api": { |
|||
"port": 21007, |
|||
}, |
|||
"azure-api": { |
|||
"port": 21008, |
|||
}, |
|||
} |
|||
|
|||
# fastchat multi model worker server |
|||
FSCHAT_MULTI_MODEL_WORKERS = { |
|||
# TODO: |
|||
} |
|||
|
|||
# fastchat controller server |
|||
FSCHAT_CONTROLLER = { |
|||
"host": DEFAULT_BIND_HOST, |
|||
"port": 20001, |
|||
"dispatch_method": "shortest_queue", |
|||
} |
@ -0,0 +1,12 @@ |
|||
# 用于批量将configs下的.example文件复制并命名为.py文件 |
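# Usage (assuming this script is saved as copy_config_example.py; run it from the project root so the relative "configs" path resolves):
#   python copy_config_example.py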
|||
import os |
|||
import shutil |
|||
|
|||
if __name__ == "__main__": |
|||
files = os.listdir("configs") |
|||
|
|||
src_files = [os.path.join("configs", file) for file in files if ".example" in file] |
|||
|
|||
for src_file in src_files: |
|||
tar_file = src_file.replace(".example", "") |
|||
shutil.copy(src_file, tar_file) |
@ -0,0 +1,29 @@ |
|||
|
|||
# 实现基于ES的数据插入、检索、删除、更新 |
|||
```shell |
|||
author: 唐国梁Tommy |
|||
e-mail: flytang186@qq.com |
|||
|
|||
如果遇到任何问题,可以与我联系,我这边部署后服务是没有问题的。 |
|||
``` |
|||
|
|||
## 第1步:ES docker部署 |
|||
```shell |
|||
docker network create elastic |
|||
docker run -id --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" -t docker.elastic.co/elasticsearch/elasticsearch:8.8.2 |
|||
``` |
|||
|
|||
## 第2步:Kibana docker部署
|||
**注意:Kibana版本与ES保持一致** |
|||
```shell |
|||
docker pull docker.elastic.co/kibana/kibana:{version} |
|||
docker run --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:{version} |
|||
``` |
|||
|
|||
## 第3步:核心代码
|||
```shell |
|||
1. 核心代码路径 |
|||
server/knowledge_base/kb_service/es_kb_service.py |
|||
|
|||
2. 需要在 configs/model_config.py 中配置 ES 参数(IP, PORT)等;
|||
``` |
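
A minimal sketch of what the ES settings in configs/model_config.py might look like. The `kbs_config["es"]` key and the field names below are assumptions for illustration — follow the config template shipped with your version:

```python
# Hypothetical ES entry in configs/model_config.py (field names are assumptions)
kbs_config = {
    "es": {
        "host": "127.0.0.1",         # IP of the ES container started in step 1
        "port": "9200",              # HTTP port exposed by the ES container
        "index_name": "test_index",  # index used by es_kb_service.py
        "user": "",                  # empty because xpack.security is disabled above
        "password": "",
    },
}
```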
@ -0,0 +1,80 @@ |
|||
## 指定特定列的csv文件加载器
|||
|
|||
from langchain.document_loaders import CSVLoader |
|||
import csv |
|||
from io import TextIOWrapper |
|||
from typing import Dict, List, Optional |
|||
from langchain.docstore.document import Document |
|||
from langchain.document_loaders.helpers import detect_file_encodings |
|||
|
|||
|
|||
class FilteredCSVLoader(CSVLoader): |
|||
def __init__( |
|||
self, |
|||
file_path: str, |
|||
columns_to_read: List[str], |
|||
source_column: Optional[str] = None, |
|||
metadata_columns: List[str] = [], |
|||
csv_args: Optional[Dict] = None, |
|||
encoding: Optional[str] = None, |
|||
autodetect_encoding: bool = False, |
|||
): |
|||
super().__init__( |
|||
file_path=file_path, |
|||
source_column=source_column, |
|||
metadata_columns=metadata_columns, |
|||
csv_args=csv_args, |
|||
encoding=encoding, |
|||
autodetect_encoding=autodetect_encoding, |
|||
) |
|||
self.columns_to_read = columns_to_read |
|||
|
|||
def load(self) -> List[Document]: |
|||
"""Load data into document objects.""" |
|||
|
|||
docs = [] |
|||
try: |
|||
with open(self.file_path, newline="", encoding=self.encoding) as csvfile: |
|||
docs = self.__read_file(csvfile) |
|||
except UnicodeDecodeError as e: |
|||
if self.autodetect_encoding: |
|||
detected_encodings = detect_file_encodings(self.file_path) |
|||
for encoding in detected_encodings: |
|||
try: |
|||
with open( |
|||
self.file_path, newline="", encoding=encoding.encoding |
|||
) as csvfile: |
|||
docs = self.__read_file(csvfile) |
|||
break |
|||
except UnicodeDecodeError: |
|||
continue |
|||
else: |
|||
raise RuntimeError(f"Error loading {self.file_path}") from e |
|||
except Exception as e: |
|||
raise RuntimeError(f"Error loading {self.file_path}") from e |
|||
|
|||
return docs |
|||
def __read_file(self, csvfile: TextIOWrapper) -> List[Document]: |
|||
docs = [] |
|||
csv_reader = csv.DictReader(csvfile, **self.csv_args) # type: ignore |
|||
for i, row in enumerate(csv_reader): |
|||
if self.columns_to_read[0] in row: |
|||
content = row[self.columns_to_read[0]] |
|||
# Extract the source if available |
|||
source = ( |
|||
row.get(self.source_column, None) |
|||
if self.source_column is not None |
|||
else self.file_path |
|||
) |
|||
metadata = {"source": source, "row": i} |
|||
|
|||
for col in self.metadata_columns: |
|||
if col in row: |
|||
metadata[col] = row[col] |
|||
|
|||
doc = Document(page_content=content, metadata=metadata) |
|||
docs.append(doc) |
|||
else: |
|||
raise ValueError(f"Column '{self.columns_to_read[0]}' not found in CSV file.") |
|||
|
|||
return docs |
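

# A minimal usage sketch (the sample path and column name below are assumptions for illustration):
if __name__ == "__main__":
    loader = FilteredCSVLoader(
        file_path="../tests/samples/test.csv",
        columns_to_read=["content"],
    )
    docs = loader.load()
    print(docs)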
@ -0,0 +1,2 @@ |
|||
from .mypdfloader import RapidOCRPDFLoader |
|||
from .myimgloader import RapidOCRLoader |
@ -0,0 +1,25 @@ |
|||
from typing import List |
|||
from langchain.document_loaders.unstructured import UnstructuredFileLoader |
|||
|
|||
|
|||
class RapidOCRLoader(UnstructuredFileLoader): |
|||
def _get_elements(self) -> List: |
|||
def img2text(filepath): |
|||
from rapidocr_onnxruntime import RapidOCR |
|||
resp = "" |
|||
ocr = RapidOCR() |
|||
result, _ = ocr(filepath) |
|||
if result: |
|||
ocr_result = [line[1] for line in result] |
|||
resp += "\n".join(ocr_result) |
|||
return resp |
|||
|
|||
text = img2text(self.file_path) |
|||
from unstructured.partition.text import partition_text |
|||
return partition_text(text=text, **self.unstructured_kwargs) |
|||
|
|||
|
|||
if __name__ == "__main__": |
|||
loader = RapidOCRLoader(file_path="../tests/samples/ocr_test.jpg") |
|||
docs = loader.load() |
|||
print(docs) |
@ -0,0 +1,48 @@ |
|||
from typing import List |
|||
from langchain.document_loaders.unstructured import UnstructuredFileLoader |
|||
import tqdm |
|||
|
|||
|
|||
class RapidOCRPDFLoader(UnstructuredFileLoader): |
|||
def _get_elements(self) -> List: |
|||
def pdf2text(filepath): |
|||
import fitz # pyMuPDF里面的fitz包,不要与pip install fitz混淆 |
|||
from rapidocr_onnxruntime import RapidOCR |
|||
import numpy as np |
|||
ocr = RapidOCR() |
|||
doc = fitz.open(filepath) |
|||
resp = "" |
|||
|
|||
b_unit = tqdm.tqdm(total=doc.page_count, desc="RapidOCRPDFLoader context page index: 0") |
|||
for i, page in enumerate(doc): |
|||
|
|||
# 更新描述 |
|||
b_unit.set_description("RapidOCRPDFLoader context page index: {}".format(i)) |
|||
# 立即显示进度条更新结果 |
|||
b_unit.refresh() |
|||
# TODO: 依据文本与图片顺序调整处理方式 |
|||
text = page.get_text("") |
|||
resp += text + "\n" |
|||
|
|||
img_list = page.get_images() |
|||
for img in img_list: |
|||
pix = fitz.Pixmap(doc, img[0]) |
|||
img_array = np.frombuffer(pix.samples, dtype=np.uint8).reshape(pix.height, pix.width, -1) |
|||
result, _ = ocr(img_array) |
|||
if result: |
|||
ocr_result = [line[1] for line in result] |
|||
resp += "\n".join(ocr_result) |
|||
|
|||
# 更新进度 |
|||
b_unit.update(1) |
|||
return resp |
|||
|
|||
text = pdf2text(self.file_path) |
|||
from unstructured.partition.text import partition_text |
|||
return partition_text(text=text, **self.unstructured_kwargs) |
|||
|
|||
|
|||
if __name__ == "__main__": |
|||
loader = RapidOCRPDFLoader(file_path="../tests/samples/ocr_test.pdf") |
|||
docs = loader.load() |
|||
print(docs) |
@ -0,0 +1,121 @@ |
|||
''' |
|||
该功能是为了将关键词加入到embedding模型中,以便于在embedding模型中进行关键词的embedding |
|||
该功能的实现是通过修改embedding模型的tokenizer来实现的 |
|||
该功能仅对EMBEDDING_MODEL参数对应的模型有效,输出后的模型保存在原模型所在的目录下
|||
感谢 @CharlesJu1 和 @charlesyju 提出想法并贡献了最初的PR
|||
|
|||
保存的模型的位置位于原本嵌入模型的目录下,模型的名称为原模型名称+Merge_Keywords_时间戳 |
|||
''' |
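# Note: the keyword file (EMBEDDING_KEYWORD_FILE) is plain text with one keyword per line,
# e.g. "Langchain-Chatchat"; add_keyword_to_model() below reads it line by line.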
|||
import sys |
|||
sys.path.append("..") |
|||
from datetime import datetime |
|||
from configs import ( |
|||
MODEL_PATH, |
|||
EMBEDDING_MODEL, |
|||
EMBEDDING_KEYWORD_FILE, |
|||
) |
|||
import os |
|||
import torch |
|||
from safetensors.torch import save_model |
|||
from sentence_transformers import SentenceTransformer |
|||
|
|||
|
|||
def get_keyword_embedding(bert_model, tokenizer, key_words): |
|||
tokenizer_output = tokenizer(key_words, return_tensors="pt", padding=True, truncation=True) |
|||
|
|||
# No need to manually convert to tensor as we've set return_tensors="pt" |
|||
input_ids = tokenizer_output['input_ids'] |
|||
|
|||
# Remove the first and last token for each sequence in the batch |
|||
input_ids = input_ids[:, 1:-1] |
|||
|
|||
keyword_embedding = bert_model.embeddings.word_embeddings(input_ids) |
|||
keyword_embedding = torch.mean(keyword_embedding, 1) |
|||
|
|||
return keyword_embedding |
|||
|
|||
|
|||
def add_keyword_to_model(model_name=EMBEDDING_MODEL, keyword_file: str = "", output_model_path: str = None): |
|||
key_words = [] |
|||
with open(keyword_file, "r") as f: |
|||
for line in f: |
|||
key_words.append(line.strip()) |
|||
|
|||
st_model = SentenceTransformer(model_name) |
|||
key_words_len = len(key_words) |
|||
word_embedding_model = st_model._first_module() |
|||
bert_model = word_embedding_model.auto_model |
|||
tokenizer = word_embedding_model.tokenizer |
|||
key_words_embedding = get_keyword_embedding(bert_model, tokenizer, key_words) |
|||
# key_words_embedding = st_model.encode(key_words) |
|||
|
|||
embedding_weight = bert_model.embeddings.word_embeddings.weight |
|||
embedding_weight_len = len(embedding_weight) |
|||
tokenizer.add_tokens(key_words) |
|||
bert_model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=32) |
|||
|
|||
# key_words_embedding_tensor = torch.from_numpy(key_words_embedding) |
|||
embedding_weight = bert_model.embeddings.word_embeddings.weight |
|||
with torch.no_grad(): |
|||
embedding_weight[embedding_weight_len:embedding_weight_len + key_words_len, :] = key_words_embedding |
|||
|
|||
if output_model_path: |
|||
os.makedirs(output_model_path, exist_ok=True) |
|||
word_embedding_model.save(output_model_path) |
|||
safetensors_file = os.path.join(output_model_path, "model.safetensors") |
|||
metadata = {'format': 'pt'} |
|||
save_model(bert_model, safetensors_file, metadata) |
|||
print("save model to {}".format(output_model_path)) |
|||
|
|||
|
|||
def add_keyword_to_embedding_model(path: str = EMBEDDING_KEYWORD_FILE): |
|||
keyword_file = os.path.join(path) |
|||
model_name = MODEL_PATH["embed_model"][EMBEDDING_MODEL] |
|||
model_parent_directory = os.path.dirname(model_name) |
|||
current_time = datetime.now().strftime('%Y%m%d_%H%M%S') |
|||
output_model_name = "{}_Merge_Keywords_{}".format(EMBEDDING_MODEL, current_time) |
|||
output_model_path = os.path.join(model_parent_directory, output_model_name) |
|||
add_keyword_to_model(model_name, keyword_file, output_model_path) |
|||
|
|||
|
|||
if __name__ == '__main__': |
|||
add_keyword_to_embedding_model(EMBEDDING_KEYWORD_FILE) |
|||
|
|||
# input_model_name = "" |
|||
# output_model_path = "" |
|||
# # 以下为加入关键字前后tokenizer的测试用例对比 |
|||
# def print_token_ids(output, tokenizer, sentences): |
|||
# for idx, ids in enumerate(output['input_ids']): |
|||
# print(f'sentence={sentences[idx]}') |
|||
# print(f'ids={ids}') |
|||
# for id in ids: |
|||
# decoded_id = tokenizer.decode(id) |
|||
# print(f' {decoded_id}->{id}') |
|||
# |
|||
# sentences = [ |
|||
# '数据科学与大数据技术', |
|||
# 'Langchain-Chatchat' |
|||
# ] |
|||
# |
|||
# st_no_keywords = SentenceTransformer(input_model_name) |
|||
# tokenizer_without_keywords = st_no_keywords.tokenizer |
|||
# print("===== tokenizer with no keywords added =====") |
|||
# output = tokenizer_without_keywords(sentences) |
|||
# print_token_ids(output, tokenizer_without_keywords, sentences) |
|||
# print(f'-------- embedding with no keywords added -----') |
|||
# embeddings = st_no_keywords.encode(sentences) |
|||
# print(embeddings) |
|||
# |
|||
# print("--------------------------------------------") |
|||
# print("--------------------------------------------") |
|||
# print("--------------------------------------------") |
|||
# |
|||
# st_with_keywords = SentenceTransformer(output_model_path) |
|||
# tokenizer_with_keywords = st_with_keywords.tokenizer |
|||
# print("===== tokenizer with keyword added =====") |
|||
# output = tokenizer_with_keywords(sentences) |
|||
# print_token_ids(output, tokenizer_with_keywords, sentences) |
|||
# |
|||
# print(f'-------- embedding with keywords added -----') |
|||
# embeddings = st_with_keywords.encode(sentences) |
|||
# print(embeddings) |
@ -0,0 +1,3 @@ |
|||
Langchain-Chatchat |
|||
数据科学与大数据技术 |
|||
人工智能与先进计算 |
@ -0,0 +1,121 @@ |
|||
import sys |
|||
sys.path.append(".") |
|||
from server.knowledge_base.migrate import (create_tables, reset_tables, import_from_db, |
|||
folder2db, prune_db_docs, prune_folder_files) |
|||
from configs.model_config import NLTK_DATA_PATH, EMBEDDING_MODEL |
|||
import nltk |
|||
nltk.data.path = [NLTK_DATA_PATH] + nltk.data.path |
|||
from datetime import datetime |
|||
import sys |
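
# Illustrative invocations (assuming this script is saved as init_database.py; the flags are defined below):
#   python init_database.py --recreate-vs
#   python init_database.py --prune-db -n my_kb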
|||
|
|||
|
|||
if __name__ == "__main__": |
|||
import argparse |
|||
|
|||
parser = argparse.ArgumentParser(description="please specify only one operation at a time.")
|||
|
|||
parser.add_argument( |
|||
"-r", |
|||
"--recreate-vs", |
|||
action="store_true", |
|||
help=(''' |
|||
recreate vector store. |
|||
use this option if you have copied document files to the content folder, but the vector store has not been populated, or DEFAULT_VS_TYPE/EMBEDDING_MODEL has changed.
|||
''' |
|||
) |
|||
) |
|||
parser.add_argument( |
|||
"--create-tables", |
|||
action="store_true", |
|||
help=("create empty tables if not existed") |
|||
) |
|||
parser.add_argument( |
|||
"--clear-tables", |
|||
action="store_true", |
|||
help=("create empty tables, or drop the database tables before recreate vector stores") |
|||
) |
|||
parser.add_argument( |
|||
"--import-db", |
|||
help="import tables from specified sqlite database" |
|||
) |
|||
parser.add_argument( |
|||
"-u", |
|||
"--update-in-db", |
|||
action="store_true", |
|||
help=(''' |
|||
update vector store for files exist in database. |
|||
use this option if you want to recreate vectors for files that exist in the db and skip files that exist only in the local folder.
|||
''' |
|||
) |
|||
) |
|||
parser.add_argument( |
|||
"-i", |
|||
"--increament", |
|||
action="store_true", |
|||
help=(''' |
|||
update vector store for files that exist in the local folder but not in the database.
|||
use this option if you want to create vectors incrementally.
|||
''' |
|||
) |
|||
) |
|||
parser.add_argument( |
|||
"--prune-db", |
|||
action="store_true", |
|||
help=(''' |
|||
delete docs in the database that do not exist in the local folder.
|||
it is used to delete database docs after the user has deleted some doc files in the file browser
|||
''' |
|||
) |
|||
) |
|||
parser.add_argument( |
|||
"--prune-folder", |
|||
action="store_true", |
|||
help=(''' |
|||
delete doc files in the local folder that do not exist in the database.
|||
it is used to free local disk space by deleting unused doc files.
|||
''' |
|||
) |
|||
) |
|||
parser.add_argument( |
|||
"-n", |
|||
"--kb-name", |
|||
type=str, |
|||
nargs="+", |
|||
default=[], |
|||
help=("specify knowledge base names to operate on. default is all folders exist in KB_ROOT_PATH.") |
|||
) |
|||
parser.add_argument( |
|||
"-e", |
|||
"--embed-model", |
|||
type=str, |
|||
default=EMBEDDING_MODEL, |
|||
help=("specify embeddings model.") |
|||
) |
|||
|
|||
args = parser.parse_args() |
|||
start_time = datetime.now() |
|||
|
|||
if args.create_tables: |
|||
create_tables() # confirm tables exist |
|||
|
|||
if args.clear_tables: |
|||
reset_tables() |
|||
print("database talbes reseted") |
|||
|
|||
if args.recreate_vs: |
|||
create_tables() |
|||
print("recreating all vector stores") |
|||
folder2db(kb_names=args.kb_name, mode="recreate_vs", embed_model=args.embed_model) |
|||
elif args.import_db: |
|||
import_from_db(args.import_db) |
|||
elif args.update_in_db: |
|||
folder2db(kb_names=args.kb_name, mode="update_in_db", embed_model=args.embed_model) |
|||
elif args.increament: |
|||
folder2db(kb_names=args.kb_name, mode="increament", embed_model=args.embed_model) |
|||
elif args.prune_db: |
|||
prune_db_docs(args.kb_name) |
|||
elif args.prune_folder: |
|||
prune_folder_files(args.kb_name) |
|||
|
|||
end_time = datetime.now() |
|||
print(f"总计用时: {end_time-start_time}") |
@ -0,0 +1,172 @@ |
|||
{"title": "加油~以及一些建议", "file": "2023-03-31.0002", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/2", "detail": "加油,我认为你的方向是对的。", "id": 0} |
|||
{"title": "当前的运行环境是什么,windows还是Linux", "file": "2023-04-01.0003", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/3", "detail": "当前的运行环境是什么,windows还是Linux,python是什么版本?", "id": 1} |
|||
{"title": "请问这是在CLM基础上运行吗?", "file": "2023-04-01.0004", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/4", "detail": "请问是不是需要本地安装好clm并正常运行的情况下,再按文中的步骤执行才能运行起来?", "id": 2} |
|||
{"title": "[复现问题] 构造 prompt 时从知识库中提取的文字乱码", "file": "2023-04-01.0005", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/5", "detail": "hi,我在尝试复现 README 中的效果,也使用了 ChatGLM-6B 的 README 作为输入文本,但发现从知识库中提取的文字是乱码,导致构造的 prompt 不可用。想了解如何解决这个问题。", "id": 3} |
|||
{"title": "后面能否加入上下文对话功能?", "file": "2023-04-02.0006", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/6", "detail": "目前的get_wiki_agent_answer函数中已经实现了历史消息传递的功能,后面我再确认一下是否有langchain中model调用过程中是否传递了chat_history。", "id": 4} |
|||
{"title": "请问:纯cpu可以吗?", "file": "2023-04-03.0007", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/7", "detail": "很酷的实现,极大地开拓了我的眼界!很顺利的在gpu机器上运行了", "id": 5} |
|||
{"title": "运行报错:AttributeError: 'NoneType' object has no attribute 'message_types_by_name'", "file": "2023-04-03.0008", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/8", "detail": "报错:", "id": 6} |
|||
{"title": "运行环境:GPU需要多大的?", "file": "2023-04-03.0009", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/9", "detail": "如果按照THUDM/ChatGLM-6B的说法,使用的GPU大小应该在13GB左右,但运行脚本后,占用了24GB还不够。", "id": 7} |
|||
{"title": "请问本地知识的格式是什么?", "file": "2023-04-03.0010", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/10", "detail": "已测试格式包括docx、md文件中的文本信息,具体格式可以参考 [langchain文档](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html?highlight=pdf#)", "id": 8} |
|||
{"title": "24G的显存还是爆掉了,是否支持双卡运行", "file": "2023-04-03.0011", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/11", "detail": "RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 23.70 GiB total capacity; 22.18 GiB already allocated; 12.75 MiB free; 22.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "id": 9} |
|||
{"title": "你怎么知道embeddings方式和模型训练时候的方式是一样的?", "file": "2023-04-03.0012", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/12", "detail": "embedding和LLM的方式不用一致,embedding能够解决语义检索的需求就行。这个项目里用到embedding是在对本地知识建立索引和对问句转换成向量的过程。", "id": 10} |
|||
{"title": "是否能提供本地知识文件的格式?", "file": "2023-04-04.0013", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/13", "detail": "是否能提供本地知识文件的格式?", "id": 11} |
|||
{"title": "是否可以像清华原版跑在8G一以下的卡?", "file": "2023-04-04.0016", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/16", "detail": "是否可以像清华原版跑在8G一以下的卡?我的8G卡爆显存了🤣🤣🤣", "id": 12} |
|||
{"title": "请教一下langchain协调使用向量库和chatGLM工作的", "file": "2023-04-05.0018", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/18", "detail": "代码里面这段是创建问答模型的,会接入ChatGLM和本地语料的向量库,langchain回答的时候是怎么个优先顺序?先搜向量库,没有再找chatglm么? 还是什么机制?", "id": 13} |
|||
{"title": "在mac m2max上抛出了ValueError: 150001 is not in list这个异常", "file": "2023-04-05.0019", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/19", "detail": "我把chatglm_llm.py加载模型的代码改成如下", "id": 14} |
|||
{"title": "程序运行后一直卡住", "file": "2023-04-05.0020", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/20", "detail": "感谢作者的付出,不过本人在运行时出现了问题,请大家帮助。", "id": 15} |
|||
{"title": "问一下chat_history的逻辑", "file": "2023-04-06.0022", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/22", "detail": "感谢开源。", "id": 16} |
|||
{"title": "为什么每次运行都会loading checkpoint", "file": "2023-04-06.0023", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/23", "detail": "我把这个embeding模型下载到本地后,无法正常启动。", "id": 17} |
|||
{"title": "本地知识文件能否上传一些示例?", "file": "2023-04-06.0025", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/25", "detail": "如题,怎么构造知识文件,效果更好?能否提供一个样例", "id": 18} |
|||
{"title": "What version of you are using?", "file": "2023-04-06.0026", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/26", "detail": "Hi Panda, I saw the `pip install -r requirements` command in README, and want to confirm you are using python2 or python3? because my pip and pip3 version are all is 22.3.", "id": 19} |
|||
{"title": "有兴趣交流本项目应用的朋友可以加一下微信群", "file": "2023-04-07.0027", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/27", "detail": "", "id": 20} |
|||
{"title": "本地知识越多,回答时检索的时间是否会越长", "file": "2023-04-07.0029", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/29", "detail": "是的 因为需要进行向量匹配检索", "id": 21} |
|||
{"title": "爲啥最後還是報錯 哭。。", "file": "2023-04-07.0030", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/30", "detail": "Failed to import transformers.models.t5.configuration_t5 because of the following error (look up to see", "id": 22} |
|||
{"title": "对话到第二次的时候就报错UnicodeDecodeError: 'utf-8' codec can't decode", "file": "2023-04-07.0031", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/31", "detail": "对话第一次是没问题的,模型返回输出后又给到请输入你的问题,我再输入问题就报错", "id": 23} |
|||
{"title": "用的in4的量化版本,推理的时候显示需要申请10Gb的显存", "file": "2023-04-07.0033", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/33", "detail": "File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int4-qe/modeling_chatglm.py\", line 581, in forward", "id": 24} |
|||
{"title": "使用colab运行,python3.9,提示包导入有问题", "file": "2023-04-07.0034", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/34", "detail": "from ._util import is_directory, is_path", "id": 25} |
|||
{"title": "运行失败,Loading checkpoint未达到100%被kill了,请问下是什么原因?", "file": "2023-04-07.0035", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/35", "detail": "日志如下:", "id": 26} |
|||
{"title": "弄了个交流群,自己弄好多细节不会,大家技术讨论 加connection-image 我来拉你", "file": "2023-04-08.0036", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/36", "detail": "自己搞好多不清楚的,一起来弄吧。。准备搞个部署问题的解决文档出来", "id": 27} |
|||
{"title": "Error using the new version with langchain", "file": "2023-04-09.0043", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/43", "detail": "Error with the new changes:", "id": 28} |
|||
{"title": "程序报错torch.cuda.OutOfMemoryError如何解决?", "file": "2023-04-10.0044", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/44", "detail": "报错详细信息如下:", "id": 29} |
|||
{"title": "qa的训练数据格式是如何设置的", "file": "2023-04-10.0045", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/45", "detail": "本项目不是使用微调的方式,所以并不涉及到训练过程。", "id": 30} |
|||
{"title": "The FileType.UNK file type is not supported in partition. 解决办法", "file": "2023-04-10.0046", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/46", "detail": "ValueError: Invalid file /home/yawu/Documents/langchain-ChatGLM-master/data. The FileType.UNK file type is not supported in partition.", "id": 31} |
|||
{"title": "如何读取多个txt文档?", "file": "2023-04-10.0047", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/47", "detail": "如题,请教一下如何读取多个txt文档?示例代码中只给了读一个文档的案例,这个input我换成string之后也只能指定一个文档,无法用通配符指定多个文档,也无法传入多个文件路径的列表。", "id": 32} |
|||
{"title": "nltk package unable to either download or load local nltk_data folder", "file": "2023-04-10.0049", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/49", "detail": "I'm running this project on an offline Windows Server environment so I download the Punkt and averaged_perceptron_tagger tokenizer in this directory:", "id": 33} |
|||
{"title": "requirements.txt中需要指定langchain版本", "file": "2023-04-11.0055", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/55", "detail": "langchain版本0.116下无法引入RetrievalQA,需要指定更高版本(0.136版本下无问题)", "id": 34} |
|||
{"title": "Demo演示无法给出输出内容", "file": "2023-04-12.0059", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/59", "detail": "你好,测试了项目自带新闻稿示例和自行上传的一个文本,可以加载进去,但是无法给出答案,请问属于什么情况,如何解决,谢谢。PS: 1、今天早上刚下载全部代码;2、硬件服务器满足要求;3、按操作说明正常操作。", "id": 35} |
|||
{"title": "群人数过多无法进群,求帮忙拉进群", "file": "2023-04-12.0061", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/61", "detail": "您好,您的群人数超过了200人,目前无法通过二维码加群,请问您方便加我微信拉我进群吗?万分感谢", "id": 36} |
|||
{"title": "群人数已满,求大佬拉入群", "file": "2023-04-12.0062", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/62", "detail": "已在README中更新拉群二维码", "id": 37} |
|||
{"title": "requirements中langchain版本错误", "file": "2023-04-12.0065", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/65", "detail": "langchain版本应该是0.0.12而不是0.0.120", "id": 38} |
|||
{"title": "Linux : Searchd in", "file": "2023-04-13.0068", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/68", "detail": "import nltk", "id": 39} |
|||
{"title": "No sentence-transformers model found", "file": "2023-04-13.0069", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/69", "detail": "加载不了这个模型,错误原因是找不到这个模型,但是路径是配置好了的", "id": 40} |
|||
{"title": "Error loading punkt: <urlopen error [Errno 111] Connection", "file": "2023-04-13.0070", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/70", "detail": "运行knowledge_based_chatglm.py,出现nltk报错,具体情况如下:", "id": 41} |
|||
{"title": "[不懂就问] ptuning数据集格式", "file": "2023-04-13.0072", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/72", "detail": "大家好请教 微调数据集的格式有什么玄机吗?我看 ChatGLM-6B/ptuning/readme.md的demo数据集ADGEN里content为啥都写成 类型#裙*风格#简约 这种格式的?这里面有啥玄机的? 特此请教", "id": 42} |
|||
{"title": "Embedding model请教", "file": "2023-04-13.0074", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/74", "detail": "您好,我看到项目里的embedding模型用的是:GanymedeNil/text2vec-large-chinese,请问这个项目里的embedding模型可以直接用ChatGLM嘛?", "id": 43} |
|||
{"title": "Macbook M1 运行 webui.py 时报错,请问是否可支持M系列芯片", "file": "2023-04-13.0080", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/80", "detail": "```", "id": 44} |
|||
{"title": "new feature: 添加对P-tunningv2微调后的模型支持", "file": "2023-04-14.0099", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/99", "detail": "能否添加新的功能,对使用[P-tunningv2](https://github.com/THUDM/ChatGLM-6B/tree/main/ptuning)微调chatglm后的模型提供加载支持", "id": 45} |
|||
{"title": "txt文件加载成功,但读取报错", "file": "2023-04-15.0106", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/106", "detail": "最新版的代码。比较诡异的是我的电脑是没有D盘的,报错信息里怎么有个D盘出来了...", "id": 46} |
|||
{"title": "模型加载成功?文件无法导入。", "file": "2023-04-15.0107", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/107", "detail": "所有模型均在本地。", "id": 47} |
|||
{"title": "请问用的什么操作系统呢?", "file": "2023-04-16.0110", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/110", "detail": "ubuntu、centos还是windows?", "id": 48} |
|||
{"title": "报错ModuleNotFoundError: No module named 'configs.model_config'", "file": "2023-04-17.0112", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/112", "detail": "更新代码后,运行webui.py,报错ModuleNotFoundError: No module named 'configs.model_config'。未查得解决方法。", "id": 49} |
|||
{"title": "问特定问题会出现爆显存", "file": "2023-04-17.0116", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/116", "detail": "正常提问没问题。", "id": 50} |
|||
{"title": "loading进不去?", "file": "2023-04-18.0127", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/127", "detail": "在linux系统上python webui.py之后打开网页,一直在loading,是不是跟我没装detectron2有关呢?", "id": 51} |
|||
{"title": "本地知识内容数量限制?", "file": "2023-04-18.0129", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/129", "detail": "本地知识文件类型是txt,超过5条以上的数据,提问的时候就爆显存了。", "id": 52} |
|||
{"title": "我本来也计划做一个类似的产品,看来不用从头开始做了", "file": "2023-04-18.0130", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/130", "detail": "文本切割,还有优化空间吗?微信群已经加不进去了。", "id": 53} |
|||
{"title": "load model failed. 加载模型失败", "file": "2023-04-18.0132", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/132", "detail": "```", "id": 54} |
|||
{"title": "如何在webui里回答时同时返回引用的本地数据内容?", "file": "2023-04-18.0133", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/133", "detail": "如题", "id": 55} |
|||
{"title": "交流群满200人加不了了,能不能给个负责人的联系方式拉我进群?", "file": "2023-04-20.0143", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/143", "detail": "同求", "id": 56} |
|||
{"title": "No sentence-transformers model found with name ‘/text2vec/‘,但是再路径下面确实有模型文件", "file": "2023-04-20.0145", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/145", "detail": "另外:The dtype of attention mask (torch.int64) is not bool", "id": 57} |
|||
{"title": "请问加载模型的路径在哪里修改,默认好像前面会带上transformers_modules.", "file": "2023-04-20.0148", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/148", "detail": "<img width=\"1181\" alt=\"1681977897052\" src=\"https://user-images.githubusercontent.com/30926001/233301106-3846680a-d842-41d2-874e-5b6514d732c4.png\">", "id": 58} |
|||
{"title": "为啥放到方法调用会出错,这个怎么处理?", "file": "2023-04-20.0150", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/150", "detail": "```python", "id": 59} |
|||
{"title": "No sentence-transformers model found with name C:\\Users\\Administrator/.cache\\torch\\sentence_transformers\\GanymedeNil_text2vec-large-chinese. Creating a new one with MEAN pooling.", "file": "2023-04-21.0154", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/154", "detail": "卡在这块很久是正常现象吗", "id": 60} |
|||
{"title": "微信群需要邀请才能加入", "file": "2023-04-21.0155", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/155", "detail": "RT,给个个人联系方式白", "id": 61} |
|||
{"title": "No sentence-transformers model found with name GanymedeNil/text2vec-large-chinese. Creating a new one with MEAN pooling", "file": "2023-04-21.0156", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/156", "detail": "ls GanymedeNil/text2vec-large-chinese", "id": 62} |
|||
{"title": "embedding会加载两次", "file": "2023-04-23.0159", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/159", "detail": "你好,为什么要这样设置呢,这样会加载两次呀。", "id": 63} |
|||
{"title": "扫二维码加的那个群,群成员满了进不去了", "file": "2023-04-23.0160", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/160", "detail": "如题", "id": 64} |
|||
{"title": "执行python3 cli_demo.py 报错AttributeError: 'NoneType' object has no attribute 'chat'", "file": "2023-04-24.0163", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/163", "detail": "刚开始怀疑是内存不足问题,换成int4,int4-qe也不行,有人知道是什么原因吗", "id": 65} |
|||
{"title": "匹配得分", "file": "2023-04-24.0167", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/167", "detail": "在示例cli_demo.py中返回的匹配文本没有对应的score,可以加上这个feature吗", "id": 66} |
|||
{"title": "大佬有计划往web_ui.py加入打字机功能吗", "file": "2023-04-25.0170", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/170", "detail": "目前在载入了知识库后,单张V100 32G在回答垂直领域的问题时也需要20S以上,没有打字机逐字输出的使用体验还是比较煎熬的....", "id": 67} |
|||
{"title": "Is it possible to use a verctorDB for the embedings?", "file": "2023-04-25.0171", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/171", "detail": "when I play, I have to load the local data again and again when to start. I wonder if it is possible to use", "id": 68} |
|||
{"title": "请问通过lora训练官方模型得到的微调模型文件该如何加载?", "file": "2023-04-25.0173", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/173", "detail": "通过lora训练的方式得到以下文件:", "id": 69} |
|||
{"title": "from langchain.chains import RetrievalQA的代码在哪里?", "file": "2023-04-25.0174", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/174", "detail": "local_doc_qa.py", "id": 70} |
|||
{"title": "哪里有knowledge_based_chatglm.py文件?怎么找不到了??是被替换成cli_demo.py文件了吗?", "file": "2023-04-26.0175", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/175", "detail": "哪里有knowledge_based_chatglm.py文件?怎么找不到了??是被替换成cli_demo.py文件了吗?", "id": 71} |
|||
{"title": "AttributeError: 'Chatbot' object has no attribute 'value'", "file": "2023-04-26.0177", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/177", "detail": "Traceback (most recent call last):", "id": 72} |
|||
{"title": "控制台调api.py报警告", "file": "2023-04-26.0178", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/178", "detail": "you must pass the application as an import string to enable \"reload\" or \"workers\"", "id": 73} |
|||
{"title": "如何加入群聊", "file": "2023-04-27.0183", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/183", "detail": "微信群超过200人了,需要邀请,如何加入呢?", "id": 74} |
|||
{"title": "如何将Chatglm和本地知识相结合", "file": "2023-04-27.0185", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/185", "detail": "您好,我想请教一下怎么才能让知识库匹配到的文本和chatglm生成的相结合,而不是说如果没搜索到,就说根据已知信息无法回答该问题,谢谢", "id": 75} |
|||
{"title": "一点建议", "file": "2023-04-27.0189", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/189", "detail": "1.weiui的get_vector_store方法里面添加一个判断以兼容gradio版本导致的上传异常", "id": 76} |
|||
{"title": "windows环境下,按照教程,配置好conda环境,git完项目,修改完模型路径相关内容后,运行demo报错缺少", "file": "2023-04-28.0194", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/194", "detail": "报错代码如下:", "id": 77} |
|||
{"title": "ValueError: too many values to unpack (expected 2)", "file": "2023-04-28.0198", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/198", "detail": "When i tried to use the non-streaming, `ValueError: too many values to unpack (expected 2)` error came out.", "id": 78} |
|||
{"title": "加载doc后覆盖原本知识", "file": "2023-04-28.0201", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/201", "detail": "加载较大量级的私有知识库后,原本的知识会被覆盖", "id": 79} |
|||
{"title": "自定义知识库回答效果很差", "file": "2023-04-28.0203", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/203", "detail": "请问加了自定义知识库知识库,回答效果很差,是因为数据量太小的原因么", "id": 80} |
|||
{"title": "python310下,安装pycocotools失败,提示低版本cython,实际已安装高版本", "file": "2023-04-29.0208", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/208", "detail": "RT,纯离线环境安装,依赖安装的十分艰难,最后碰到pycocotools,始终无法安装上,求教方法!", "id": 81} |
|||
{"title": "[FEATURE] 支持 RWKV 模型(目前已有 pip package & rwkv.cpp 等等)", "file": "2023-05-01.0216", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/216", "detail": "您好,我是 RWKV 的作者,介绍见:https://zhuanlan.zhihu.com/p/626083366", "id": 82} |
|||
{"title": "[BUG] 为啥主机/服务器不联网不能正常启动服务?", "file": "2023-05-02.0220", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/220", "detail": "**问题描述 / Problem Description**", "id": 83} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-05-03.0222", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/222", "detail": "**local variable 'torch' referenced before assignment**", "id": 84} |
|||
{"title": "不支持txt文件的中文输入", "file": "2023-05-04.0235", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/235", "detail": "vs_path, _ = local_doc_qa.init_knowledge_vector_store(filepath)", "id": 85} |
|||
{"title": "文件均未成功加载,请检查依赖包或替换为其他文件再次上传。 文件未成功加载,请重新上传文件", "file": "2023-05-05.0237", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/237", "detail": "请大佬帮忙解决,谢谢!", "id": 86} |
|||
{"title": "[BUG] 使用多卡时chatglm模型加载两次", "file": "2023-05-05.0241", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/241", "detail": "chatglm_llm.py文件下第129行先加载了一次chatglm模型,第143行又加载了一次", "id": 87} |
|||
{"title": "[BUG] similarity_search_with_score_by_vector函数返回多个doc时的score结果错误", "file": "2023-05-06.0252", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/252", "detail": "**问题描述 / Problem Description**", "id": 88} |
|||
{"title": "可以再建一个交流群吗,这个群满了进不去。", "file": "2023-05-06.0255", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/255", "detail": "上午应该已经在readme里更新过了,如果不能添加可能是网页缓存问题,可以试试看直接扫描img/qr_code_12.jpg", "id": 89} |
|||
{"title": "请问这是什么错误哇?KeyError: 'serialized_input'", "file": "2023-05-06.0257", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/257", "detail": "运行“python webui.py” 后这是什么错误?怎么解决啊?", "id": 90} |
|||
{"title": "修改哪里的代码,可以再cpu上跑?", "file": "2023-05-06.0258", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/258", "detail": "**问题描述 / Problem Description**", "id": 91} |
|||
{"title": "ModuleNotFoundError: No module named 'modelscope'", "file": "2023-05-07.0266", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/266", "detail": "安装这个", "id": 92} |
|||
{"title": "加载lora微调模型时,lora参数加载成功,但显示模型未成功加载?", "file": "2023-05-08.0270", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/270", "detail": "什么原因呀?", "id": 93} |
|||
{"title": "[BUG] 运行webui.py报错:name 'EMBEDDING_DEVICE' is not defined", "file": "2023-05-08.0274", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/274", "detail": "解决了,我修改model_config时候把这个变量改错了", "id": 94} |
|||
{"title": "基于ptuning训练完成,新老模型都进行了加载,但是只有新的", "file": "2023-05-08.0280", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/280", "detail": "licitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.", "id": 95} |
|||
{"title": "[BUG] 使用chatyuan模型时,对话Error,has no attribute 'stream_chat'", "file": "2023-05-08.0282", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/282", "detail": "**问题描述 / Problem Description**", "id": 96} |
|||
{"title": "chaglm调用过程中 _call提示有一个 stop", "file": "2023-05-09.0286", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/286", "detail": "**功能描述 / Feature Description**", "id": 97} |
|||
{"title": "Logger._log() got an unexpected keyword argument 'end'", "file": "2023-05-10.0295", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/295", "detail": "使用cli_demo的时候,加载一个普通txt文件,输入问题后,报错:“TypeError: Logger._log() got an unexpected keyword argument 'end'”", "id": 98} |
|||
{"title": "[BUG] 请问可以解释下这个FAISS.similarity_search_with_score_by_vector = similarity_search_with_score_by_vector的目的吗", "file": "2023-05-10.0296", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/296", "detail": "我不太明白这个库自己写的similarity_search_with_score_by_vector方法做的事情,因为langchain原版的similarity_search_with_score_by_vector只是search faiss之后把返回的topk句子组合起来。我觉得原版理解起来没什么问题,但是这个库里自己写的我就没太看明白多做了什么其他的事情,因为没有注释。", "id": 99} |
|||
{"title": "[BUG] Windows下上传中文文件名文件,faiss无法生成向量数据库文件", "file": "2023-05-11.0318", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/318", "detail": "**问题描述 / Problem Description**", "id": 100} |
|||
{"title": "cli_demo中的流式输出能否接着前一答案输出?", "file": "2023-05-11.0320", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/320", "detail": "现有流式输出结果样式为:", "id": 101} |
|||
{"title": "内网部署时网页无法加载,能否增加离线静态资源", "file": "2023-05-12.0326", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/326", "detail": "内网部署时网页无法加载,能否增加离线静态资源", "id": 102} |
|||
{"title": "我想把文件字符的编码格式改为encoding='utf-8'在哪修改呢,因为会有ascii codec can't decode byte报错", "file": "2023-05-14.0360", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/360", "detail": "上传中文的txt文件时报错,编码格式为utf-8", "id": 103} |
|||
{"title": "Batches的进度条是在哪里设置的?能否关闭显示?", "file": "2023-05-15.0366", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/366", "detail": "使用cli_demo.py进行命令行测试时,每句回答前都有个Batches的进度条", "id": 104} |
|||
{"title": "ImportError: dlopen: cannot load any more object with static TLS or Segmentation fault", "file": "2023-05-15.0368", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/368", "detail": "**问题描述 / Problem Description**", "id": 105} |
|||
{"title": "读取PDF时报错", "file": "2023-05-16.0373", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/373", "detail": "在Colab上执行cli_demo.py时,在路径文件夹里放了pdf文件,在加载的过程中会显示错误,然后无法加载PDF文件", "id": 106} |
|||
{"title": "[BUG] webui报错 InvalidURL", "file": "2023-05-16.0375", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/375", "detail": "python 版本:3.8.16", "id": 107} |
|||
{"title": "[FEATURE] 如果让回答不包含出处,应该怎么处理", "file": "2023-05-16.0380", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/380", "detail": "**功能描述 / Feature Description**", "id": 108} |
|||
{"title": "加载PDF文件时,出现 unsupported colorspace for 'png'", "file": "2023-05-16.0381", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/381", "detail": "**问题描述 / Problem Description**", "id": 109} |
|||
{"title": "'ascii' codec can't encode characters in position 14-44: ordinal not in range(128) 经典bug", "file": "2023-05-16.0382", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/382", "detail": "添加了知识库之后进行对话,之后再新增知识库就会出现这个问题。", "id": 110} |
|||
{"title": "微信群人数超过200了,扫码进不去了,群主可以再创建一个新群吗", "file": "2023-05-17.0391", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/391", "detail": "**功能描述 / Feature Description**", "id": 111} |
|||
{"title": "TypeError: 'ListDocsResponse' object is not subscriptable", "file": "2023-05-17.0393", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/393", "detail": "应该是用remain_docs.code和remain_docs.data吧?吗?", "id": 112} |
|||
{"title": "[BUG] 加载chatglm模型报错:'NoneType' object has no attribute 'message_types_by_name'", "file": "2023-05-17.0398", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/398", "detail": "**问题描述 / Problem Description**", "id": 113} |
|||
{"title": "[BUG] 执行 python webui.py 没有报错,但是ui界面提示 Something went wrong Expecting value: line 1 column 1 (char 0", "file": "2023-05-18.0399", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/399", "detail": "**环境配置**", "id": 114} |
|||
{"title": "启动后调用api接口正常,过一会就不断的爆出 Since the angle classifier is not initialized", "file": "2023-05-18.0404", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/404", "detail": "**问题描述 / Problem Description**", "id": 115} |
|||
{"title": "[BUG] write_check_file方法中,open函数未指定编码", "file": "2023-05-18.0408", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/408", "detail": "def write_check_file(filepath, docs):", "id": 116} |
|||
{"title": "导入的PDF中存在图片,有大概率出现 “unsupported colorspace for 'png'”异常", "file": "2023-05-18.0409", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/409", "detail": "pix = fitz.Pixmap(doc, img[0])", "id": 117} |
|||
{"title": "请问流程图是用什么软件画的", "file": "2023-05-18.0410", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/410", "detail": "draw.io", "id": 118} |
|||
{"title": "mac 加载模型失败", "file": "2023-05-19.0417", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/417", "detail": "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.", "id": 119} |
|||
{"title": "使用GPU本地运行知识库问答,提问第一个问题出现异常。", "file": "2023-05-20.0419", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/419", "detail": "配置文件model_config.py为:", "id": 120} |
|||
{"title": "想加入讨论群", "file": "2023-05-20.0420", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/420", "detail": "OK", "id": 121} |
|||
{"title": "有没有直接调用LLM的API,目前只有知识库的API?", "file": "2023-05-22.0426", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/426", "detail": "-------------------------------------------------------------------------------", "id": 122} |
|||
{"title": "上传文件后出现 ERROR __init__() got an unexpected keyword argument 'autodetect_encoding'", "file": "2023-05-22.0428", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/428", "detail": "上传文件后出现这个问题:ERROR 2023-05-22 11:46:19,568-1d: __init__() got an unexpected keyword argument 'autodetect_encoding'", "id": 123} |
|||
{"title": "想问下README中用到的流程图用什么软件画的", "file": "2023-05-22.0431", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/431", "detail": "**功能描述 / Feature Description**", "id": 124} |
|||
{"title": "No matching distribution found for langchain==0.0.174", "file": "2023-05-23.0436", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/436", "detail": "ERROR: Could not find a version that satisfies the requirement langchain==0.0.174 ", "id": 125} |
|||
{"title": "[FEATURE] bing是必须的么?", "file": "2023-05-23.0437", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/437", "detail": "从这个[脚步](https://github.com/imClumsyPanda/langchain-ChatGLM/blob/master/configs/model_config.py#L129)里面发现需要申请bing api,如果不申请,纯用模型推理不可吗?", "id": 126} |
|||
{"title": "同一台环境下部署了5.22号更新的langchain-chatglm v0.1.13和之前的版本,回复速度明显变慢", "file": "2023-05-23.0442", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/442", "detail": "新langchain-chatglm v0.1.13版本速度很慢", "id": 127} |
|||
{"title": "Error reported during startup", "file": "2023-05-23.0443", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/443", "detail": "Traceback (most recent call last):", "id": 128} |
|||
{"title": "ValueError: not enough values to unpack (expected 2, got 1)on of the issue", "file": "2023-05-24.0449", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/449", "detail": "File \".cache\\huggingface\\modules\\transformers_modules\\chatglm-6b-int4\\modeling_chatglm.py\", line 1280, in chat", "id": 129} |
|||
{"title": "[BUG] API部署,流式输出的函数,少了个question", "file": "2023-05-24.0451", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/451", "detail": "**问题描述 / Problem Description**", "id": 130} |
|||
{"title": "项目结构的简洁性保持", "file": "2023-05-24.0454", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/454", "detail": "**功能描述 / Feature Description**", "id": 131} |
|||
{"title": "项目群扫码进不去了", "file": "2023-05-24.0455", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/455", "detail": "项目群扫码进不去了,是否可以加一下微信拉我进群,谢谢!微信号:daniel-0527", "id": 132} |
|||
{"title": "请求拉我入群讨论,海硕一枚,专注于LLM等相关技术", "file": "2023-05-24.0461", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/461", "detail": "**功能描述 / Feature Description**", "id": 133} |
|||
{"title": "[BUG] chatglm-6b模型报错OSError: Error no file named pytorch_model.bin found in directory /chatGLM/model/model-6b", "file": "2023-05-26.0474", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/474", "detail": "**1、简述:**", "id": 134} |
|||
{"title": "现在本项目交流群二维码扫描不进去了,需要群主通过", "file": "2023-05-27.0478", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/478", "detail": "现在本项目交流群二维码扫描不进去了,需要群主通过", "id": 135} |
|||
{"title": "RuntimeError: Only Tensors of floating point and complex dtype can require gradients", "file": "2023-05-28.0483", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/483", "detail": "刚更新了最新版本:", "id": 136} |
|||
{"title": "RuntimeError: \"LayerNormKernelImpl\" not implemented for 'Half'", "file": "2023-05-28.0484", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/484", "detail": "已经解决了 params 只用两个参数 {'trust_remote_code': True, 'torch_dtype': torch.float16}", "id": 137} |
|||
{"title": "[BUG] 文件未成功加载,请重新上传文件", "file": "2023-05-31.0504", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/504", "detail": "webui.py", "id": 138} |
|||
{"title": "[BUG] bug 17 ,pdf和pdf为啥还不一样呢?为啥有的pdf能识别?有的pdf识别不了呢?", "file": "2023-05-31.0506", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/506", "detail": "bug 17 ,pdf和pdf为啥还不一样呢?为啥有的pdf能识别?有的pdf识别不了呢?", "id": 139} |
|||
{"title": "[FEATURE] 简洁阐述功能 / Concise description of the feature", "file": "2023-05-31.0513", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/513", "detail": "**功能描述 / Feature Description**", "id": 140} |
|||
{"title": "[BUG] webui.py 加载chatglm-6b-int4 失败", "file": "2023-06-02.0524", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/524", "detail": "**问题描述 / Problem Description**", "id": 141} |
|||
{"title": "[BUG] webui.py 加载chatglm-6b模型异常", "file": "2023-06-02.0525", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/525", "detail": "**问题描述 / Problem Description**", "id": 142} |
|||
{"title": "增加对chatgpt的embedding和api调用的支持", "file": "2023-06-02.0531", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/531", "detail": "能否支持openai的embedding api和对话的api?", "id": 143} |
|||
{"title": "[FEATURE] 调整模型下载的位置", "file": "2023-06-02.0537", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/537", "detail": "模型默认下载到 $HOME/.cache/huggingface/,当 C 盘空间不足时无法完成模型的下载。configs/model_config.py 中也没有调整模型位置的参数。", "id": 144} |
|||
{"title": "[BUG] langchain=0.0.174 出错", "file": "2023-06-04.0543", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/543", "detail": "**问题描述 / Problem Description**", "id": 145} |
|||
{"title": "[BUG] 更新后加载本地模型路径不正确", "file": "2023-06-05.0545", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/545", "detail": "**问题描述 / Problem Description**", "id": 146} |
|||
{"title": "SystemError: 8bit 模型需要 CUDA 支持,或者改用量化后模型!", "file": "2023-06-06.0550", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/550", "detail": "docker 部署后,启动docker,过会儿容器会自动退出,logs报错 SystemError: 8bit 模型需要 CUDA 支持,或者改用量化后模型! [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-container-toolkit) 也已经安装了", "id": 147} |
|||
{"title": "[BUG] 上传知识库超过1M报错", "file": "2023-06-06.0556", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/556", "detail": "**问题描述 / Problem Description**", "id": 148} |
|||
{"title": "打开跨域访问后仍然报错,不能请求", "file": "2023-06-06.0560", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/560", "detail": "报错信息:", "id": 149} |
|||
{"title": "dialogue_answering 里面的代码是不是没有用到?,没有看到调用", "file": "2023-06-07.0571", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/571", "detail": "dialogue_answering 是干啥的", "id": 150} |
|||
{"title": "[BUG] 响应速度极慢,应从哪里入手优化?48C/128G/8卡", "file": "2023-06-07.0573", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/573", "detail": "运行环境:ubuntu20.04", "id": 151} |
|||
{"title": "纯CPU环境下运行cli_demo时报错,提示找不到nvcuda.dll", "file": "2023-06-08.0576", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/576", "detail": "本地部署环境是纯CPU,之前的版本在纯CPU环境下能正常运行,但上传本地知识库经常出现encode问题。今天重新git项目后,运行时出现如下问题,请问该如何解决。", "id": 152} |
|||
{"title": "如何加载本地的embedding模型(text2vec-large-chinese模型文件)", "file": "2023-06-08.0582", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/582", "detail": "因为需要离线部署,所以要把模型放到本地,我修改了chains/local_doc_qa.py中的HuggingFaceEmbeddings(),在其中加了一个cache_folder的参数,保证下载的文件在cache_folder中,model_name是text2vec-large-chinese。如cache_folder='/home/xx/model/text2vec-large-chinese', model_name='text2vec-large-chinese',这样仍然需要联网下载报错,请问大佬如何解决该问题?", "id": 153} |
|||
{"title": "ChatGLM-6B 在另外服务器安装好了,请问如何修改model.cofnig.py 来使用它的接口呢??", "file": "2023-06-09.0588", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/588", "detail": "我本来想在这加一个api base url 但是运行web.py 发现 还是会去连huggingface 下载模型", "id": 154} |
|||
{"title": "[BUG] raise partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' when call interface `upload_file`", "file": "2023-06-10.0591", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/591", "detail": "**问题描述 / Problem Description**", "id": 155} |
|||
{"title": "[BUG] raise OSError: [Errno 101] Network is unreachable when call interface upload_file and upload .pdf files", "file": "2023-06-10.0592", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/592", "detail": "**问题描述 / Problem Description**", "id": 156} |
|||
{"title": "如果直接用vicuna作为基座大模型,需要修改的地方有哪些?", "file": "2023-06-12.0596", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/596", "detail": "vicuna模型有直接转换好的没有?也就是llama转换之后的vicuna。", "id": 157} |
|||
{"title": "[BUG] 通过cli.py调用api时抛出AttributeError: 'NoneType' object has no attribute 'get'错误", "file": "2023-06-12.0598", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/598", "detail": "通过`python cli.py start api --ip localhost --port 8001` 命令调用api时,抛出:", "id": 158} |
|||
{"title": "[BUG] 通过cli.py调用api时直接报错`langchain-ChatGLM: error: unrecognized arguments: start cli`", "file": "2023-06-12.0601", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/601", "detail": "通过python cli.py start cli启动cli_demo时,报错:", "id": 159} |
|||
{"title": "[BUG] error: unrecognized arguments: --model-dir conf/models/", "file": "2023-06-12.0602", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/602", "detail": "关键字参数修改了吗?有没有文档啊?大佬", "id": 160} |
|||
{"title": "[BUG] 上传文件全部失败", "file": "2023-06-12.0603", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/603", "detail": "ERROR: Exception in ASGI application", "id": 161} |
|||
{"title": "[BUG] config 使用 chatyuan 无法启动", "file": "2023-06-12.0604", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/604", "detail": "\"chatyuan\": {", "id": 162} |
|||
{"title": "使用fashchat api之后,后台报错APIError 如图所示", "file": "2023-06-12.0606", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/606", "detail": "我按照https://github.com/imClumsyPanda/langchain-ChatGLM/blob/master/docs/fastchat.md", "id": 163} |
|||
{"title": "[BUG] 启用上下文关联,每次embedding搜索到的内容都会比前一次多一段", "file": "2023-06-13.0613", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/613", "detail": "**问题描述 / Problem Description**", "id": 164} |
|||
{"title": "local_doc_qa.py中MyFAISS.from_documents() 这个语句看不太懂。MyFAISS类中没有这个方法,其父类FAISS和VectorStore中也只有from_texts方法[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-06-14.0619", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/619", "detail": "local_doc_qa.py中MyFAISS.from_documents() 这个语句看不太懂。MyFAISS类中没有这个方法,其父类FAISS和VectorStore中也只有from_texts方法", "id": 165} |
|||
{"title": "[BUG] TypeError: similarity_search_with_score_by_vector() got an unexpected keyword argument 'filter'", "file": "2023-06-14.0624", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/624", "detail": "**问题描述 / Problem Description**", "id": 166} |
|||
{"title": "please delete this issue", "file": "2023-06-15.0633", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/633", "detail": "sorry, incorrect submission. Please remove this issue!", "id": 167} |
|||
{"title": "[BUG] vue前端镜像构建失败", "file": "2023-06-15.0635", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/635", "detail": "**问题描述 / Problem Description**", "id": 168} |
|||
{"title": "ChatGLM-6B模型能否回答英文问题?", "file": "2023-06-15.0640", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/640", "detail": "大佬,请问一下,如果本地知识文档是英文,ChatGLM-6B模型能否回答英文问题?不能的话,有没有替代的模型推荐,期待你的回复,谢谢", "id": 169} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-06-16.0644", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/644", "detail": "**问题描述 / Problem Description**", "id": 170} |
|||
{"title": "KeyError: 3224", "file": "2023-06-16.0645", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/645", "detail": "```", "id": 171} |
|||
{"title": "效果如何优化", "file": "2023-04-04.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/14", "detail": "如图所示,将该项目的README.md和该项目结合后,回答效果并不理想,请问可以从哪些方面进行优化", "id": 0} |
|||
{"title": "怎么让模型严格根据检索的数据进行回答,减少胡说八道的回答呢", "file": "2023-04-04.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/15", "detail": "举个例子:", "id": 1} |
|||
{"title": "When I try to run the `python knowledge_based_chatglm.py`, I got this error in macOS(M1 Max, OS 13.2)", "file": "2023-04-07.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/32", "detail": "```python", "id": 2} |
|||
{"title": "萌新求教大佬怎么改成AMD显卡或者CPU?", "file": "2023-04-10.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/48", "detail": "把.cuda()去掉就行", "id": 3} |
|||
{"title": "输出answer的时间很长,是否可以把文本向量化的部分提前做好存储起来?", "file": "2023-04-10.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/50", "detail": "GPU:4090 24G显存", "id": 4} |
|||
{"title": "报错Use `repo_type` argument if needed.", "file": "2023-04-11.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/57", "detail": "Traceback (most recent call last):", "id": 5} |
|||
{"title": "无法打开gradio的页面", "file": "2023-04-11.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/58", "detail": "$ python webui.py", "id": 6} |
|||
{"title": "支持word,那word里面的图片正常显示吗?", "file": "2023-04-12.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/60", "detail": "如题,刚刚从隔壁转过来的,想先了解下", "id": 7} |
|||
{"title": "detectron2 is not installed. Cannot use the hi_res partitioning strategy. Falling back to partitioning with the fast strategy.", "file": "2023-04-12.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/63", "detail": "能够正常的跑起来,在加载content文件夹中的文件时,每加载一个文件都会提示:", "id": 8} |
|||
{"title": "cpu上运行webui,step3 asking时报错", "file": "2023-04-12.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/66", "detail": "web运行,文件加载都正常,asking时报错", "id": 9} |
|||
{"title": "建议弄一个插件系统", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/67", "detail": "如题弄成stable-diffusion-webui那种能装插件,再开一个存储库给使用者或插件开发,存储或下载插件。", "id": 10} |
|||
{"title": "请教加载模型出错!?", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/75", "detail": "AttributeError: module 'transformers_modules.chatglm-6b.configuration_chatglm' has no attribute 'ChatGLMConfig 怎么解决呀", "id": 11} |
|||
{"title": "从本地知识检索内容的时候,是否可以设置相似度阈值,小于这个阈值的内容不返回,即使会小于设置的VECTOR_SEARCH_TOP_K参数呢?谢谢大佬", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/76", "detail": "比如 问一些 你好/你是谁 等一些跟本地知识库无关的问题", "id": 12} |
|||
{"title": "如何改成多卡推理?", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/77", "detail": "+1", "id": 13} |
|||
{"title": "能否弄个懒人包,可以一键体验?", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/78", "detail": "能否弄个懒人包,可以一键体验?", "id": 14} |
|||
{"title": "连续问问题会导致崩溃", "file": "2023-04-13.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/79", "detail": "看上去不是爆内存的问题,连续问问题后,会出现如下报错", "id": 15} |
|||
{"title": "AttributeError: 'NoneType' object has no attribute 'as_retriever'", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/86", "detail": "环境:windows 11, anaconda/python 3.8", "id": 16} |
|||
{"title": "FileNotFoundError: Could not find module 'nvcuda.dll' (or one of its dependencies). Try using the full path with constructor syntax.", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/87", "detail": "请检查一下cuda或cudnn是否存在安装问题", "id": 17} |
|||
{"title": "加载txt文件失败?", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/89", "detail": "", "id": 18} |
|||
{"title": "NameError: name 'chatglm' is not defined", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/90", "detail": "This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces", "id": 19} |
|||
{"title": "打不开地址?", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/91", "detail": "报错数据如下:", "id": 20} |
|||
{"title": "加载md文件出错", "file": "2023-04-14.00", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/98", "detail": "运行 webui.py后能访问页面,上传一个md文件后,日志中有错误。等待后能加载完成,提示可以提问了,但提问没反应,日志中有错误。 具体日志如下。", "id": 21} |
|||
{"title": "建议增加获取在线知识的能力", "file": "2023-04-15.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/101", "detail": "建议增加获取在线知识的能力", "id": 22} |
|||
{"title": "txt 未能成功加载", "file": "2023-04-15.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/103", "detail": "hinese. Creating a new one with MEAN pooling.", "id": 23} |
|||
{"title": "pdf加载失败", "file": "2023-04-15.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/105", "detail": "e:\\a.txt加载成功了,e:\\a.pdf加载就失败,pdf文件里面前面几页是图片,后面都是文字,加载失败没有报更多错误,请问该怎么排查?", "id": 24} |
|||
{"title": "一直停在文本加载处", "file": "2023-04-15.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/108", "detail": "一直停在文本加载处", "id": 25} |
|||
{"title": " File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/modeling_chatglm.py\", line 440, in forward new_tensor_shape = mixed_raw_layer.size()[:-1] + ( TypeError: torch.Size() takes an iterable of 'int' (item 2 is 'float')", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/113", "detail": "按照最新的代码,发现", "id": 26} |
|||
{"title": "后续会提供前后端分离的功能吗?", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/114", "detail": "类似这种https://github.com/lm-sys/FastChat/tree/main/fastchat/serve", "id": 27} |
|||
{"title": "安装依赖报错", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/115", "detail": "(test) C:\\Users\\linh\\Desktop\\langchain-ChatGLM-master>pip install -r requirements.txt", "id": 28} |
|||
{"title": "问特定问题会出现爆显存", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/117", "detail": "正常提问没问题。", "id": 29} |
|||
{"title": "Expecting value: line 1 column 1 (char 0)", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/118", "detail": "运行后 第一步加载配置一直报错:", "id": 30} |
|||
{"title": "embedding https://huggingface.co/GanymedeNil/text2vec-large-chinese/tree/main是免费的,效果比对openai的如何?", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/119", "detail": "-------------------------------------------------------------------------------", "id": 31} |
|||
{"title": "这是什么错误,在Colab上运行的。", "file": "2023-04-17.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/120", "detail": "libcuda.so.1: cannot open shared object file: No such file or directory", "id": 32} |
|||
{"title": "只想用自己的lora微调后的模型进行对话,不想加载任何本地文档,该如何调整?", "file": "2023-04-18.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/121", "detail": "能出一个单独的教程吗", "id": 33} |
|||
{"title": "租的gpu,Running on local URL: http://0.0.0.0:7860 To create a public link, set `share=True` in `launch()`. 浏览器上访问不了???", "file": "2023-04-18.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/122", "detail": "(chatglm20230401) root@autodl-container-e82d11963c-10ece0d7:~/autodl-tmp/chatglm/langchain-ChatGLM-20230418# python3.9 webui.py", "id": 34} |
|||
{"title": "本地部署中的报错请教", "file": "2023-04-18.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/124", "detail": "您好,在本地运行langchain-ChatGLM过程中,环境及依赖的包都已经满足条件,但是运行webui.py,报错如下(运行cli_demo.py报错类似),请问是哪里出了错呢?盼望您的回复,谢谢!", "id": 35} |
|||
{"title": "报错。The dtype of attention mask (torch.int64) is not bool", "file": "2023-04-18.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/131", "detail": "The dtype of attention mask (torch.int64) is not bool", "id": 36} |
|||
{"title": "[求助] pip install -r requirements.txt 的时候出现以下报错。。。有大佬帮忙看看怎么搞么,下的release里面的包", "file": "2023-04-18.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/134", "detail": "$ pip install -r requirements.txt", "id": 37} |
|||
{"title": "如何提升根据问题搜索到对应知识的准确率", "file": "2023-04-19.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/136", "detail": "外链知识库最大的问题在于问题是短文本,知识是中长文本。如何根据问题精准的搜索到对应的知识是个最大的问题。这类本地化项目不像百度,由无数的网页,基本上每个问题都可以找到对应的页面。", "id": 38} |
|||
{"title": "是否可以增加向量召回的阈值设定,有些召回内容相关性太低,导致模型胡言乱语", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/140", "detail": "如题", "id": 39} |
|||
{"title": "输入长度问题", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/141", "detail": "感谢作者支持ptuning微调模型。", "id": 40} |
|||
{"title": "已有部署好的chatGLM-6b,如何通过接口接入?", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/144", "detail": "已有部署好的chatGLM-6b,如何通过接口接入,而不是重新加载一个模型;", "id": 41} |
|||
{"title": "执行web_demo.py后,显示Killed,就退出了,是不是配置不足呢?", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/146", "detail": "", "id": 42} |
|||
{"title": "执行python cli_demo1.py", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/147", "detail": "Traceback (most recent call last):", "id": 43} |
|||
{"title": "报错:ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils'", "file": "2023-04-20.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/149", "detail": "(mychatGLM) PS D:\\Users\\admin3\\zrh\\langchain-ChatGLM> python cli_demo.py", "id": 44} |
|||
{"title": "上传文件并加载知识库时,会不停地出现临时文件", "file": "2023-04-21.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/153", "detail": "环境:ubuntu 18.04", "id": 45} |
|||
{"title": "向知识库中添加文件后点击”上传文件并加载知识库“后Segmentation fault报错。", "file": "2023-04-23.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/161", "detail": "运行服务后的提示如下:", "id": 46} |
|||
{"title": "langchain-serve 集成", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/162", "detail": "Hey 我是来自 [langchain-serve](https://github.com/jina-ai/langchain-serve) 的dev!", "id": 47} |
|||
{"title": "大佬们,wsl的ubuntu怎么配置用cuda加速,装了运行后发现是cpu在跑", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/164", "detail": "大佬们,wsl的ubuntu怎么配置用cuda加速,装了运行后发现是cpu在跑", "id": 48} |
|||
{"title": "在github codespaces docker运行出错", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/165", "detail": "docker run -d --restart=always --name chatglm -p 7860:7860 -v /www/wwwroot/code/langchain-ChatGLM:/chatGLM chatglm", "id": 49} |
|||
{"title": "有计划接入Moss模型嘛", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/166", "detail": "后续会开展测试,目前主要在优化langchain部分效果,如果有兴趣也欢迎提PR", "id": 50} |
|||
{"title": "怎么实现 API 部署?", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/168", "detail": "利用 fastapi 实现 API 部署方式,具体怎么实现,有方法说明吗?", "id": 51} |
|||
{"title": " 'NoneType' object has no attribute 'message_types_by_name'报错", "file": "2023-04-24.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/169", "detail": "_HISTOGRAMPROTO = DESCRIPTOR.message_types_by_name['HistogramProto']", "id": 52} |
|||
{"title": "能否指定自己训练的text2vector模型?", "file": "2023-04-25.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/172", "detail": "请问大佬:", "id": 53} |
|||
{"title": "关于项目支持的模型以及quantization_bit潜在的影响的问题", "file": "2023-04-26.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/176", "detail": "作者您好~", "id": 54} |
|||
{"title": "运行python3.9 api.py WARNING: You must pass the application as an import string to enable 'reload' or 'workers'.", "file": "2023-04-26.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/179", "detail": "api.py文件最下面改成这样试试:", "id": 55} |
|||
{"title": "ValidationError: 1 validation error for HuggingFaceEmbeddings model_kwargs extra fields not permitted (type=value_error.extra)", "file": "2023-04-26.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/180", "detail": "ValidationError: 1 validation error for HuggingFaceEmbeddings", "id": 56} |
|||
{"title": "如果没有检索到相关性比较高的,回答“我不知道”", "file": "2023-04-26.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/181", "detail": "如果通过设计system_template,让模型在搜索到的文档都不太相关的情况下回答“我不知道”", "id": 57} |
|||
{"title": "请问如果不能联网,6B之类的文件从本地上传需要放到哪里", "file": "2023-04-26.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/182", "detail": "感谢大佬的项目,很有启发~", "id": 58} |
|||
{"title": "知识库问答--输入新的知识库名称是中文的话,会报error", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/184", "detail": "知识库问答--输入新的知识库名称是中文的话,会报error,选择要加载的知识库那里也不显示之前添加的知识库", "id": 59} |
|||
{"title": "现在能通过问题匹配的相似度值,来直接返回文档中的文段,而不经过模型吗?因为有些答案在文档中,模型自己回答,不能回答文档中的答案", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/186", "detail": "现在能通过问题匹配的相似度值,来直接返回文档中的文段,而不经过模型吗?因为有些答案在文档中,模型自己回答,不能回答文档中的答案。也就是说,提供向量检索回答+模型回答相结合的策略。如果相似度值高于一定数值,直接返回文档中的文本,没有高于就返回模型的回答或者不知道", "id": 60} |
|||
{"title": "TypeError: The type of ChatGLM.callback_manager differs from the new default value; if you wish to change the type of this field, please use a type annotation", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/188", "detail": "Mac 运行 python3 ./webui.py 报 TypeError: The type of ChatGLM.callback_manager differs from the new default value; if you wish to change the type of this field, please use a type annotation", "id": 61} |
|||
{"title": "Not Enough Memory", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/190", "detail": "运行命令行程序python cli_demo.py, 已经成功加载pdf文件, 报“DefaultCPUAllocator: not enough memory: you tried to allocate 458288380900 bytes”错误,请问哪里可以配置default memory", "id": 62} |
|||
{"title": "参与开发问题", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/191", "detail": "1.是否需要进专门的开发群", "id": 63} |
|||
{"title": "对话框中代码片段格式需改进", "file": "2023-04-27.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/192", "detail": "最好能改进下输出代码片段的格式,目前输出的格式还不友好。", "id": 64} |
|||
{"title": "请问未来有可能支持belle吗", "file": "2023-04-28.01", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/195", "detail": "如题,谢谢大佬", "id": 65} |
|||
{"title": "TypeError: cannot unpack non-iterable NoneType object", "file": "2023-04-28.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/200", "detail": "When i tried to change the knowledge vector store through `init_knowledge_vector_store`, the error `TypeError: cannot unpack non-iterable NoneType object` came out.", "id": 66} |
|||
{"title": "生成结果", "file": "2023-04-28.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/202", "detail": "你好,想问一下langchain+chatglm-6B,找到相似匹配的prompt,是直接返回prompt对应的答案信息,还是chatglm-6B在此基础上自己优化答案?", "id": 67} |
|||
{"title": "在win、ubuntu下都出现这个错误:attributeerror: 't5forconditionalgeneration' object has no attribute 'stream_chat'", "file": "2023-04-29.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/207", "detail": "在win、ubuntu。下载完模型后,没办法修改代码以执行本地模型,每次都要重新输入路径; LLM 模型、Embedding 模型支持也都在官网下的,在其他项目(wenda)下可以使用", "id": 68} |
|||
{"title": "[FEATURE] knowledge_based_chatglm.py: renamed or missing?", "file": "2023-04-30.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/210", "detail": "Not found. Was it renamed? Or, is it missing? How can I get it?", "id": 69} |
|||
{"title": "sudo apt-get install -y nvidia-container-toolkit-base执行报错", "file": "2023-05-01.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/211", "detail": "**问题描述 / Problem Description**", "id": 70} |
|||
{"title": "效果不佳几乎答不上来", "file": "2023-05-01.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/212", "detail": "提供了50条问答的docx文件", "id": 71} |
|||
{"title": "有没有可能新增一个基于chatglm api调用的方式构建langchain", "file": "2023-05-02.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/218", "detail": "我有两台8G GPU/40G内存的服务器,一个台做成了chatglm的api ;想基于另外一台服务器部署langchain;网上好像没有类似的代码。", "id": 72} |
|||
{"title": "电脑是intel的集成显卡; 运行时告知我找不到nvcuda.dll,模型无法运行", "file": "2023-05-02.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/219", "detail": "您好,我的电脑是intel的集成显卡,不过CPU是i5-11400 @ 2.60GHz ,内存64G;", "id": 73} |
|||
{"title": "根据langchain官方的文档和使用模式,是否可以改Faiss为Elasticsearch?会需要做哪些额外调整?求解", "file": "2023-05-03.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/221", "detail": "本人新手小白,由于业务模式的原因(有一些自己的场景和优化),希望利用Elasticsearch做这个体系内部的检索机制,不知道是否可以替换,同时,还会涉及到哪些地方的改动?或者说可能会有哪些其他影响,希望作者和大佬们不吝赐教!", "id": 74} |
|||
{"title": "请问未来有可能支持t5吗", "file": "2023-05-04.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/224", "detail": "请问可能支持基於t5的模型吗?", "id": 75} |
|||
{"title": "[BUG] 内存溢出 / torch.cuda.OutOfMemoryError:", "file": "2023-05-04.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/229", "detail": "**问题描述 / Problem Description**", "id": 76} |
|||
{"title": "报错 No module named 'chatglm_llm'", "file": "2023-05-04.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/230", "detail": "明明已经安装了包,却在python里吊不出来", "id": 77} |
|||
{"title": "能出一个api部署的描述文档吗", "file": "2023-05-04.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/233", "detail": "**功能描述 / Feature Description**", "id": 78} |
|||
{"title": "使用docs/API.md 出错", "file": "2023-05-04.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/234", "detail": "使用API.md文档2种方法,出错", "id": 79} |
|||
{"title": "加载pdf文档报错?", "file": "2023-05-05.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/238", "detail": "ew one with MEAN pooling.", "id": 80} |
|||
{"title": "上传的本地知识文件后再次上传不能显示,只显示成功了一个,别的上传成功后再次刷新就没了", "file": "2023-05-05.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/239", "detail": "您好,项目有很大启发,感谢~", "id": 81} |
|||
{"title": "创建了新的虚拟环境,安装了相关包,并且自动下载了相关的模型,但是仍旧出现:OSError: Unable to load weights from pytorch checkpoint file for", "file": "2023-05-05.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/240", "detail": "", "id": 82} |
|||
{"title": "[BUG] 数据加载不进来", "file": "2023-05-05.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/243", "detail": "使用的.txt格式,utf-8编码,报以下错误", "id": 83} |
|||
{"title": "不能读取pdf", "file": "2023-05-05.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/244", "detail": "请问是webui还是cli_demo", "id": 84} |
|||
{"title": "本地txt文件有500M,加载的时候很慢,如何提高速度?", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/251", "detail": "", "id": 85} |
|||
{"title": "[BUG] gradio上传知识库后刷新之后 知识库就不见了 只有重启才能看到之前的上传的知识库", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/253", "detail": "gradio上传知识库后刷新之后 知识库就不见了 只有重启才能看到之前的上传的知识库", "id": 86} |
|||
{"title": "[FEATURE] 可以支持 OpenAI 的模型嘛?比如 GPT-3、GPT-3.5、GPT-4;embedding 增加 text-embedding-ada-002", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/254", "detail": "**功能描述 / Feature Description**", "id": 87} |
|||
{"title": "[FEATURE] 能否增加对于milvus向量数据库的支持 / Concise description of the feature", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/256", "detail": "**功能描述 / Feature Description**", "id": 88} |
|||
{"title": "CPU和GPU上跑,除了速度有区别,准确率效果回答上有区别吗?", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/259", "detail": "理论上没有区别", "id": 89} |
|||
{"title": "m1,请问在生成回答时怎么看是否使用了mps or cpu?", "file": "2023-05-06.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/260", "detail": "m1,请问在生成回答时怎么看是否使用了mps or cpu?", "id": 90} |
|||
{"title": "知识库一刷新就没了", "file": "2023-05-07.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/263", "detail": "知识库上传后刷新就没了", "id": 91} |
|||
{"title": "本地部署报没有模型", "file": "2023-05-07.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/267", "detail": "建议在下载llm和embedding模型至本地后在configs/model_config中写入模型本地存储路径后再运行", "id": 92} |
|||
{"title": "[BUG] python3: can't open file 'webui.py': [Errno 2] No such file or directory", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/269", "detail": "**问题描述 / Problem Description**", "id": 93} |
|||
{"title": "模块缺失提示", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/271", "detail": "因为已有自己使用的docker环境,直接启动webui.py,提示", "id": 94} |
|||
{"title": "运行api.py后,执行curl -X POST \"http://127.0.0.1:7861\" 报错?", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/272", "detail": "执行curl -X POST \"http://127.0.0.1:7861\" \\ -H 'Content-Type: application/json' \\ -d '{\"prompt\": \"你好\", \"history\": []}',报错怎么解决", "id": 95} |
|||
{"title": "[BUG] colab安装requirements提示protobuf版本问题?", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/273", "detail": "pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.", "id": 96} |
|||
{"title": "请问项目里面向量相似度使用了什么方法计算呀?", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/275", "detail": "基本按照langchain里的FAISS.similarity_search_with_score_by_vector实现", "id": 97} |
|||
{"title": "[BUG] 安装detectron2后,pdf无法加载", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/276", "detail": "**问题描述 / Problem Description**", "id": 98} |
|||
{"title": "[BUG] 使用ChatYuan-V2模型无法流式输出,会报错", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/277", "detail": "一方面好像是ChatYuan本身不支持stream_chat,有人在clueai那边提了issue他们说还没开发,所以估计这个attribute调不起来;但是另一方面看报错好像是T5模型本身就不是decoder-only模型,所以不能流式输出吧(个人理解)", "id": 99} |
|||
{"title": "[BUG] 无法加载text2vec模型", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/278", "detail": "**问题描述 / Problem Description**", "id": 100} |
|||
{"title": "请问能否增加网络搜索功能", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/281", "detail": "请问能否增加网络搜索功能", "id": 101} |
|||
{"title": "[FEATURE] 结构化数据sql、excel、csv啥时会支持呐。", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/283", "detail": "**功能描述 / Feature Description**", "id": 102} |
|||
{"title": "TypeError: ChatGLM._call() got an unexpected keyword argument 'stop'", "file": "2023-05-08.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/284", "detail": "No sentence-transformers model found with name D:\\DevProject\\langchain-ChatGLM\\GanymedeNil\\text2vec-large-chinese. Creating a new one with MEAN pooling.", "id": 103} |
|||
{"title": "关于api.py的一些bug和设计逻辑问题?", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/285", "detail": "首先冒昧的问一下,这个api.py,开发者大佬们是在自己电脑上测试后确实没问题吗?", "id": 104} |
|||
{"title": "有没有租用的算力平台上,运行api.py后,浏览器http://localhost:7861/报错", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/287", "detail": "是不是租用的gpu平台上都会出现这个问题???", "id": 105} |
|||
{"title": "请问一下项目中有用到文档段落切割方法吗?", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/288", "detail": "text_load中的文档切割方法用上了吗?在代码中看好像没有用到?", "id": 106} |
|||
{"title": "报错 raise ValueError(f\"Knowledge base {knowledge_base_id} not found\") ValueError: Knowledge base ./vector_store not found", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/289", "detail": "File \"/root/autodl-tmp/chatglm/langchain-ChatGLM-master/api.py\", line 183, in chat", "id": 107} |
|||
{"title": "能接入vicuna模型吗", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/290", "detail": "目前本地已经有了vicuna模型能直接接入吗?", "id": 108} |
|||
{"title": "[BUG] 提问公式相关问题大概率爆显存", "file": "2023-05-09.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/291", "detail": "**问题描述 / Problem Description**", "id": 109} |
|||
{"title": "安装pycocotools失败,找了好多方法都不能解决。", "file": "2023-05-10.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/292", "detail": "**问题描述 / Problem Description**", "id": 110} |
|||
{"title": "使用requirements安装,PyTorch安装的是CPU版本", "file": "2023-05-10.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/294", "detail": "如题目,使用requirements安装,PyTorch安装的是CPU版本,运行程序的时候,也是使用CPU在工作。", "id": 111} |
|||
{"title": "能不能给一个毛坯服务器的部署教程", "file": "2023-05-10.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/298", "detail": "“开发部署”你当成服务器的部署教程用就行了。", "id": 112} |
|||
{"title": " Error(s) in loading state_dict for ChatGLMForConditionalGeneration:", "file": "2023-05-10.02", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/299", "detail": "运行中出现的问题,7860的端口页面显示不出来,求助。", "id": 113} |
|||
{"title": "ChatYuan-large-v2模型加载失败", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/300", "detail": "**实际结果 / Actual Result**", "id": 114} |
|||
{"title": "新增摘要功能", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/303", "detail": "你好,后续会考虑新增对长文本信息进行推理和语音理解功能吗?比如生成摘要", "id": 115} |
|||
{"title": "[BUG] pip install -r requirements.txt 出错", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/304", "detail": "pip install langchain -i https://pypi.org/simple", "id": 116} |
|||
{"title": "[BUG] 上传知识库文件报错", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/305", "detail": "", "id": 117} |
|||
{"title": "[BUG] AssertionError: <class 'gradio.layouts.Accordion'> Component with id 41 not a valid input component.", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/306", "detail": "**问题描述 / Problem Description**", "id": 118} |
|||
{"title": "[BUG] CUDA out of memory with container deployment", "file": "2023-05-10.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/310", "detail": "**问题描述 / Problem Description**", "id": 119} |
|||
{"title": "[FEATURE] 增加微调训练功能", "file": "2023-05-11.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/311", "detail": "**功能描述 / Feature Description**", "id": 120} |
|||
{"title": "如何使用多卡部署,多个gpu", "file": "2023-05-11.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/315", "detail": "机器上有多个gpu,如何全使用了", "id": 121} |
|||
{"title": "请问这个知识库问答,和chatglm的关系是什么", "file": "2023-05-11.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/319", "detail": "这个知识库问答,哪部分关联到了chatglm,是不是没有这个chatglm,知识库问答也可单单拎出来", "id": 122} |
|||
{"title": "[BUG] 运行的时候报错ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/324", "detail": "**问题描述 / Problem Description**raceback (most recent call last):", "id": 123} |
|||
{"title": "webui启动成功,但有报错", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/325", "detail": "**问题描述 / Problem Description**", "id": 124} |
|||
{"title": "切换MOSS的时候报错", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/327", "detail": "danshi但是发布的源码中,", "id": 125} |
|||
{"title": "vicuna模型是否能接入?", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/328", "detail": "您好!关于MOSS模型和vicuna模型,都是AutoModelForCausalLM来加载模型的,但是稍作更改(模型路径这些)会报这个错误。这个错误的造成是什么", "id": 126} |
|||
{"title": "你好,请问一下在阿里云CPU服务器上跑可以吗?可以的话比较理想的cpu配置是什么?", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/330", "detail": "你好,请问一下在阿里云CPU服务器上跑可以吗?可以的话比较理想的cpu配置是什么?", "id": 127} |
|||
{"title": "你好,请问8核32g的CPU可以跑多轮对话吗?", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/331", "detail": "什么样的cpu配置比较好呢?我目前想部署CPU下的多轮对话?", "id": 128} |
|||
{"title": "[BUG] 聊天内容输入超过10000个字符系统出现错误", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/332", "detail": "聊天内容输入超过10000个字符系统出现错误,如下图所示:", "id": 129} |
|||
{"title": "能增加API的多用户访问接口部署吗?", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/333", "detail": "默认部署程序仅支持单用户访问,多用户则需要排队访问。测试过相关的几个Github多用户工程,但是其中一些仍然不满足要求。本节将系统介绍如何实现多用户同时访问ChatGLM的部署接口,包括http、websocket(流式输出,stream)和web页面等方式,主要目录如下所示。", "id": 130} |
|||
{"title": "多卡部署", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/334", "detail": "用单机多卡或多机多卡,fastapi部署模型,怎样提高并发", "id": 131} |
|||
{"title": "WEBUI能否指定知识库目录?", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/335", "detail": "**功能描述 / Feature Description**", "id": 132} |
|||
{"title": "[BUG] Cannot read properties of undefined (reading 'error')", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/336", "detail": "**问题描述 / Problem Description**", "id": 133} |
|||
{"title": "[BUG] 1 validation error for HuggingFaceEmbeddings model_kwargs extra fields not permitted.", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/337", "detail": "模型加载到 100% 后出现问题:", "id": 134} |
|||
{"title": "上传知识库需要重启能不能修复一下", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/338", "detail": "挺严重的这个问题", "id": 135} |
|||
{"title": "[BUG] 4块v100卡爆显存,在LLM会话模式也一样", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/339", "detail": "**问题描述 / Problem Description**", "id": 136} |
|||
{"title": "针对上传的文件配置不同的TextSpliter", "file": "2023-05-12.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/341", "detail": "1. 目前的ChineseTextSpliter切分对英文尤其是代码文件不友好,而且限制固定长度;导致对话结果不如人意", "id": 137} |
|||
{"title": "[FEATURE] 未来可增加Bloom系列模型吗?根据甲骨易的测试,这系列中文评测效果不错", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/346", "detail": "**功能描述 / Feature Description**", "id": 138} |
|||
{"title": "[BUG] v0.1.12打包镜像后启动webui.py失败 / Concise description of the issue", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/347", "detail": "**问题描述 / Problem Description**", "id": 139} |
|||
{"title": "切换MOSS模型时报错", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/349", "detail": "昨天问了下,说是transformers版本不对,需要4.30.0,发现没有这个版本,今天更新到4.29.1,依旧报错,错误如下", "id": 140} |
|||
{"title": "[BUG] pdf文档加载失败", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/350", "detail": "**问题描述 / Problem Description**", "id": 141} |
|||
{"title": "建议可以在后期增强一波注释,这样也有助于更多人跟进提PR", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/351", "detail": "知道作者和团队在疯狂更新审查代码,只是建议后续稳定后可以把核心代码进行一些注释的补充,从而能帮助更多人了解各个模块作者的思路从而提出更好的优化。", "id": 142} |
|||
{"title": "[FEATURE] MOSS 量化版支援", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/353", "detail": "**功能描述 / Feature Description**", "id": 143} |
|||
{"title": "[BUG] moss模型无法加载", "file": "2023-05-13.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/356", "detail": "**问题描述 / Problem Description**", "id": 144} |
|||
{"title": "[BUG] load_doc_qa.py 中的 load_file 函数有bug", "file": "2023-05-14.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/358", "detail": "原函数为:", "id": 145} |
|||
{"title": "[FEATURE] API模式,知识库加载优化", "file": "2023-05-14.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/359", "detail": "如题,当前版本,每次调用本地知识库接口,都将加载一次知识库,是否有更好的方式?", "id": 146} |
|||
{"title": "运行Python api.py脚本后端部署后,怎么使用curl命令调用?", "file": "2023-05-15.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/361", "detail": "也就是说,我现在想做个对话机器人,想和公司的前后端联调?怎么与前后端相互调用呢?可私信,有偿解答!!!", "id": 147} |
|||
{"title": "上传知识库需要重启能不能修复一下", "file": "2023-05-15.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/363", "detail": "上传知识库需要重启能不能修复一下", "id": 148} |
|||
{"title": "[BUG] pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple", "file": "2023-05-15.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/364", "detail": "我的python是3.8.5的", "id": 149} |
|||
{"title": "pip install gradio 报错", "file": "2023-05-15.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/367", "detail": "大佬帮我一下", "id": 150} |
|||
{"title": "[BUG] pip install gradio 一直卡不动", "file": "2023-05-15.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/369", "detail": "", "id": 151} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/370", "detail": "初次加载本地知识库成功,但提问后,就无法重写加载本地知识库", "id": 152} |
|||
{"title": "[FEATURE] 简洁阐述功能 / Concise description of the feature", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/371", "detail": "**功能描述 / Feature Description**", "id": 153} |
|||
{"title": "在windows上,模型文件默认会安装到哪", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/372", "detail": "-------------------------------------------------------------------------------", "id": 154} |
|||
{"title": "[FEATURE] 兼顾对话管理", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/374", "detail": "如何在知识库检索的情况下,兼顾对话管理?", "id": 155} |
|||
{"title": "llm device: cpu embedding device: cpu", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/376", "detail": "**问题描述 / Problem Description**", "id": 156} |
|||
{"title": "[FEATURE] 简洁阐述功能 /文本文件的知识点之间使用什么分隔符可以分割?", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/377", "detail": "**功能描述 / Feature Description**", "id": 157} |
|||
{"title": "[BUG] 上传文件失败:PermissionError: [WinError 32] 另一个程序正在使用此文件,进程无法访问。", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/379", "detail": "**问题描述 / Problem Description**", "id": 158} |
|||
{"title": "[BUG] 执行python api.py 报错", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/383", "detail": "错误信息", "id": 159} |
|||
{"title": "model_kwargs extra fields not permitted (type=value_error.extra)", "file": "2023-05-16.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/384", "detail": "大家好,请问这个有遇到的么,?", "id": 160} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-05-17.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/385", "detail": "执行的时候出现了ls1 = [ls[0]]", "id": 161} |
|||
{"title": "[FEATURE] 性能优化", "file": "2023-05-17.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/388", "detail": "**功能描述 / Feature Description**", "id": 162} |
|||
{"title": "[BUG] Moss模型问答,RuntimeError: probability tensor contains either inf, nan or element < 0", "file": "2023-05-17.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/390", "detail": "**问题描述 / Problem Description**", "id": 163} |
|||
{"title": "有没有人知道v100GPU的32G显存,会报错吗?支持V100GPU吗?", "file": "2023-05-17.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/392", "detail": "**问题描述 / Problem Description**", "id": 164} |
|||
{"title": "针对于编码问题比如'gbk' codec can't encode character '\\xab' in position 14: illegal multibyte sequence粗浅的解决方法", "file": "2023-05-17.03", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/397", "detail": "**功能描述 / Feature Description**", "id": 165} |
|||
{"title": "Could not import sentence_transformers python package. Please install it with `pip install sentence_transformers`.", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/400", "detail": "**问题描述 / Problem Description**", "id": 166} |
|||
{"title": "支持模型问答与检索问答", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/401", "detail": "不同的query,根据意图不一致,回答也应该不一样。", "id": 167} |
|||
{"title": "文本分割的时候,能不能按照txt文件的每行进行分割,也就是按照换行符号\\n进行分割???", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/403", "detail": "下面的代码应该怎么修改?", "id": 168} |
|||
{"title": "local_doc_qa/local_doc_chat 接口响应是串行", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/405", "detail": "**问题描述 / Problem Description**", "id": 169} |
|||
{"title": "为什么找到出处了,但是还是无法回答该问题?", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/406", "detail": "", "id": 170} |
|||
{"title": "请问下:知识库测试中的:添加单条内容,如果换成文本导入是是怎样的格式?我发现添加单条内容测试效果很好.", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/412", "detail": "我发现在知识库测试中`添加单条内容`,并且勾选`禁止内容分句入库`,即使 `不开启上下文关联`的测试效果都非常好.", "id": 171} |
|||
{"title": "[BUG] 无法配置知识库", "file": "2023-05-18.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/413", "detail": "**问题描述 / Problem Description**", "id": 172} |
|||
{"title": "[BUG] 部署在阿里PAI平台的EAS上访问页面是白屏", "file": "2023-05-19.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/414", "detail": "**问题描述 / Problem Description**", "id": 173} |
|||
{"title": "API部署后调用/local_doc_qa/local_doc_chat 返回Knowledge base samples not found", "file": "2023-05-19.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/416", "detail": "入参", "id": 174} |
|||
{"title": "[FEATURE] 上传word另存为的txt文件报 'ascii' codec can't decode byte 0xb9 in position 6: ordinal not in range(128)", "file": "2023-05-20.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/421", "detail": "上传word另存为的txt文件报", "id": 175} |
|||
{"title": "创建保存的知识库刷新后没有出来,这个知识库是永久保存的吗?可以连外部的 向量知识库吗?", "file": "2023-05-21.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/422", "detail": "创建保存的知识库刷新后没有出来,这个知识库是永久保存的吗?可以连外部的 向量知识库吗?", "id": 176} |
|||
{"title": "[BUG] 用colab运行,无法加载模型,报错:'NoneType' object has no attribute 'message_types_by_name'", "file": "2023-05-21.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/423", "detail": "**问题描述 / Problem Description**", "id": 177} |
|||
{"title": "请问是否需要用到向量数据库?以及什么时候需要用到向量数据库?", "file": "2023-05-21.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/424", "detail": "目前用的是 text2vec , 请问是否需要用到向量数据库?以及什么时候需要用到向量数据库?", "id": 178} |
|||
{"title": "huggingface模型引用问题", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/427", "detail": "它最近似乎变成了一个Error?", "id": 179} |
|||
{"title": "你好,加载本地txt文件出现这个killed错误,TXT文件有100M左右大小。原因是?谢谢。", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/429", "detail": "<img width=\"677\" alt=\"929aca3b22b8cd74e997a87b61d241b\" src=\"https://github.com/imClumsyPanda/langchain-ChatGLM/assets/109277248/24024522-c884-4170-b5cf-a498491bd8bc\">", "id": 180} |
|||
{"title": "想请问一下,关于对本地知识的管理是如何管理?例如:通过http API接口添加数据 或者 删除某条数据", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/430", "detail": "例如:通过http API接口添加、删除、修改 某条数据。", "id": 181} |
|||
{"title": "[FEATURE] 双栏pdf识别问题", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/432", "detail": "试了一下模型,感觉对单栏pdf识别的准确性较高,但是由于使用的基本是ocr的技术,对一些双栏pdf论文识别出来有很多问题,请问有什么办法改善吗?", "id": 182} |
|||
{"title": "部署启动小问题,小弟初学求大佬解答", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/433", "detail": "1.python loader/image_loader.py时,提示ModuleNotFoundError: No module named 'configs',但是跑python webui.py还是还能跑", "id": 183} |
|||
{"title": "能否支持检测到目录下文档有增加而去增量加载文档,不影响前台对话,其实就是支持读写分离。如果能支持查询哪些文档向量化了,删除过时文档等就更好了,谢谢。", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/434", "detail": "**功能描述 / Feature Description**", "id": 184} |
|||
{"title": "[BUG] 简洁阐述问题 / windows 下cuda错误,请用https://github.com/Keith-Hon/bitsandbytes-windows.git", "file": "2023-05-22.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/435", "detail": "pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.git", "id": 185} |
|||
{"title": "[BUG] from commit 33bbb47, Required library version not found: libbitsandbytes_cuda121_nocublaslt.so. Maybe you need to compile it from source?", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/438", "detail": "**问题描述 / Problem Description**", "id": 186} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue上传60m的txt文件报错,显示超时,请问这个能上传的文件大小有限制吗", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/439", "detail": "ERROR 2023-05-23 11:13:09,627-1d: Timeout reached while detecting encoding for ./docs/GLM模型格式数据.txt", "id": 187} |
|||
{"title": "[BUG] TypeError: issubclass() arg 1 must be a class", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/440", "detail": "**问题描述**", "id": 188} |
|||
{"title": "执行python3 webui.py后,一直提示”模型未成功加载,请到页面左上角\"模型配置\"选项卡中重新选择后点击\"加载模型\"按钮“", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/441", "detail": "**问题描述 / Problem Description**", "id": 189} |
|||
{"title": "是否能提供网页文档得导入支持", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/444", "detail": "现在很多都是在线文档作为协作得工具,所以通过URL导入在线文档需求更大", "id": 190} |
|||
{"title": "[BUG] history 索引问题", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/445", "detail": "在比较对话框的history和模型chat function 中的history时, 发现并不匹配,在传入 llm._call 时,history用的索引是不是有点问题,导致上一轮对话的内容并不输入给模型。", "id": 191} |
|||
{"title": "[BUG] moss_llm没有实现", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/447", "detail": "有些方法没支持,如history_len", "id": 192} |
|||
{"title": "请问langchain-ChatGLM如何删除一条本地知识库的数据?", "file": "2023-05-23.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/448", "detail": "例如:用户刚刚提交了一条错误的数据到本地知识库中了,现在如何在本地知识库从找到,并且对此删除。", "id": 193} |
|||
{"title": "[BUG] 简洁阐述问题 / UnboundLocalError: local variable 'resp' referenced before assignment", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/450", "detail": "在最新一版的代码中, 运行api.py 出现了以上错误(UnboundLocalError: local variable 'resp' referenced before assignment), 通过debug的方式观察到local_doc_qa.llm.generatorAnswer(prompt=question, history=history,streaming=True)可能不返回任何值。", "id": 194} |
|||
{"title": "请问有没有 PROMPT_TEMPLATE 能让模型不回答敏感问题", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/452", "detail": "## PROMPT_TEMPLATE问题", "id": 195} |
|||
{"title": "[BUG] 测试环境 Python 版本有误", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/456", "detail": "**问题描述 / Problem Description**", "id": 196} |
|||
{"title": "[BUG] webui 部署后样式不正确", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/458", "detail": "**问题描述 / Problem Description**", "id": 197} |
|||
{"title": "配置默认LLM模型的问题", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/459", "detail": "**问题描述 / Problem Description**", "id": 198} |
|||
{"title": "[FEATURE]是时候更新一下autoDL的镜像了", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/460", "detail": "如题,跑了下autoDL的镜像,发现是4.27号的,git pull新版本的代码功能+老的依赖环境,各种奇奇怪怪的问题。", "id": 199} |
|||
{"title": "[BUG] tag:0.1.13 以cpu模式下,想使用本地模型无法跑起来,各种路径参数问题", "file": "2023-05-24.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/462", "detail": "-------------------------------------------------------------------------------", "id": 200} |
|||
{"title": "[BUG] 有没有同学遇到过这个错!!!加载本地txt文件出现这个killed错误,TXT文件有100M左右大小。", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/463", "detail": "运行cli_demo.py。是本地的txt文件太大了吗?100M左右。", "id": 201} |
|||
{"title": "API版本能否提供WEBSOCKET的流式接口", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/464", "detail": "webui 版本中,采用了WS的流式输出,整体感知反应很快", "id": 202} |
|||
{"title": "[BUG] 安装bug记录", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/465", "detail": "按照[install文档](https://github.com/imClumsyPanda/langchain-ChatGLM/blob/master/docs/INSTALL.md)安装的,", "id": 203} |
|||
{"title": "VUE的pnmp i执行失败的修复-用npm i命令即可", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/466", "detail": "感谢作者!非常棒的应用,用的很开心。", "id": 204} |
|||
{"title": "请教个问题,有没有人知道cuda11.4是否支持???", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/467", "detail": "请教个问题,有没有人知道cuda11.4是否支持???", "id": 205} |
|||
{"title": "请问有实现多轮问答中基于问题的搜索上下文关联么", "file": "2023-05-25.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/468", "detail": "在基于知识库的多轮问答中,第一个问题讲述了一个主题,后续的问题描述没有包含这个主题的关键词,但又存在上下文的关联。如果用后续问题去搜索知识库有可能会搜索出无关的信息,从而导致大模型无法正确回答问题。请问这个项目要考虑这种情况吗?", "id": 206} |
|||
{"title": "[BUG] 内存不足的问题", "file": "2023-05-26.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/470", "detail": "我用了本地的chatglm-6b-int4模型,然后显示了内存不足(win10+32G内存+1080ti11G),一般需要多少内存才足够?这个bug应该如何解决?", "id": 207} |
|||
{"title": "[BUG] 纯内网环境安装pycocotools失败", "file": "2023-05-26.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/472", "detail": "**问题描述 / Problem Description**", "id": 208} |
|||
{"title": "[BUG] webui.py 重新加载模型会导致 KeyError", "file": "2023-05-26.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/473", "detail": "**问题描述 / Problem Description**", "id": 209} |
|||
{"title": "chatyuan无法使用", "file": "2023-05-26.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/475", "detail": "**问题描述 / Problem Description**", "id": 210} |
|||
{"title": "[BUG] 文本分割模型AliTextSplitter存在bug,会把“.”作为分割符", "file": "2023-05-26.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/476", "detail": "阿里达摩院的语义分割模型存在bug,默认会把\".”作为分割符进行分割而不管上下文语义。是否还有其他分割符则未知。建议的修改方案:把“.”统一替换为其他字符,分割后再替换回来。或者添加其他分割模型。", "id": 211} |
|||
{"title": "[BUG] RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) a", "file": "2023-05-27.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/479", "detail": "**问题描述 / Problem Description**", "id": 212} |
|||
{"title": "[FEATURE] 安装,为什么conda create要额外指定路径 用-p ,而不是默认的/envs下面", "file": "2023-05-28.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/481", "detail": "##**功能描述 / Feature Description**", "id": 213} |
|||
{"title": "[小白求助] 通过Anaconda执行webui.py后,无法打开web链接", "file": "2023-05-28.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/485", "detail": "在执行webui.py命令后,http://0.0.0.0:7860复制到浏览器后无法打开,显示“无法访问此网站”。", "id": 214} |
|||
{"title": "[BUG] 使用 p-tuningv2后的模型,重新加载报错", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/486", "detail": "把p-tunningv2训练完后的相关文件放到了p-tunningv2文件夹下,勾选使用p-tuningv2点重新加载模型,控制台输错错误信息:", "id": 215} |
|||
{"title": "[小白求助] 服务器上执行webui.py后,在本地无法打开web链接", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/487", "detail": "此项目执行在xxx.xx.xxx.xxx服务器上,我在webui.py上的代码为 (demo", "id": 216} |
|||
{"title": "[FEATURE] 能不能支持VisualGLM-6B", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/488", "detail": "**功能描述 / Feature Description**", "id": 217} |
|||
{"title": "你好,问一下各位,后端api部署的时候,支持多用户同时问答吗???", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/489", "detail": "支持多用户的话,最多支持多少用户问答?根据硬件而定吧?", "id": 218} |
|||
{"title": "V100GPU显存占满,而利用率却为0,这是为什么?", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/491", "detail": "<img width=\"731\" alt=\"de45fe2b6cb76fa091b6e8f76a3de60\" src=\"https://github.com/imClumsyPanda/langchain-ChatGLM/assets/109277248/c32efd52-7dbf-4e9b-bd4d-0944d73d0b8b\">", "id": 219} |
|||
{"title": "[求助] 如果在公司内部搭建产品知识库,使用INT-4模型,200人规模需要配置多少显存的服务器?", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/492", "detail": "如题,计划给公司搭一个在线知识库。", "id": 220} |
|||
{"title": "你好,请教个问题,目前问答回复需要20秒左右,如何提高速度?V10032G服务器。", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/493", "detail": "**问题描述 / Problem Description**", "id": 221} |
|||
{"title": "[FEATURE] 如何实现只匹配下文,而不要上文的结果", "file": "2023-05-29.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/494", "detail": "在构建自己的知识库时,主要采用问答对的形式,那么也就是我需要的回答是在我的问题下面的内容,但是目前设置了chunk_size的值以后匹配的是上下文的内容,但我实际并不需要上文的。为了实现更完整的展示下面的答案,我只能调大chunk_size的值,但实际上上文的一半内容都是我不需要的。也就是扔了一半没用的东西给prompt,在faiss.py中我也没找到这块的一些描述,请问该如何进行修改呢?", "id": 222} |
|||
{"title": "你好,问一下,我调用api.py部署,为什么用ip加端口可以使用postman调用,而改为域名使用postman无法调用?", "file": "2023-05-30.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/497", "detail": "", "id": 223} |
|||
{"title": "调用api.py中的stream_chat,返回source_documents中出现中文乱码。", "file": "2023-05-30.04", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/498", "detail": "-------------------------------------------------------------------------------", "id": 224} |
|||
{"title": "[BUG] 捉个虫,api.py中的stream_chat解析json问题", "file": "2023-05-30.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/501", "detail": "**问题描述 / Problem Description**", "id": 225} |
|||
{"title": "windows本地部署遇到了omp错误", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/502", "detail": "**问题描述 / Problem Description**", "id": 226} |
|||
{"title": "[BUG] bug14 ,\"POST /local_doc_qa/upload_file HTTP/1.1\" 422 Unprocessable Entity", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/503", "detail": "上传的文件报错,返回错误,api.py", "id": 227} |
|||
{"title": "你好,请教个问题,api.py部署的时候,如何改为多线程调用?谢谢", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/505", "detail": "目前的api.py脚本不支持多线程", "id": 228} |
|||
{"title": "你好,请教一下。api.py部署的时候,能不能提供给后端流失返回结果。", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/507", "detail": "curl -X 'POST' \\", "id": 229} |
|||
{"title": "流式输出,流式接口,使用server-sent events技术。", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/508", "detail": "想这样一样,https://blog.csdn.net/weixin_43228814/article/details/130063010", "id": 230} |
|||
{"title": "计划增加流式输出功能吗?ChatGLM模型通过api方式调用响应时间慢怎么破,Fastapi流式接口来解惑,能快速提升响应速度", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/509", "detail": "**问题描述 / Problem Description**", "id": 231} |
|||
{"title": "[BUG] 知识库上传时发生ERROR (could not open xxx for reading: No such file or directory)", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/510", "detail": "**问题描述 / Problem Description**", "id": 232} |
|||
{"title": "api.py脚本打算增加SSE流式输出吗?", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/511", "detail": "curl调用的时候可以检测第一个字,从而提升回复的体验", "id": 233} |
|||
{"title": "[BUG] 使用tornado实现webSocket,可以多个客户端同时连接,并且实现流式回复,但是多个客户端同时使用,答案就很乱,是模型不支持多线程吗", "file": "2023-05-31.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/512", "detail": "import asyncio", "id": 234} |
|||
{"title": "支持 chinese_alpaca_plus_lora 吗 基于llama的", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/514", "detail": "支持 chinese_alpaca_plus_lora 吗 基于llama的,https://github.com/ymcui/Chinese-LLaMA-Alpaca这个项目的", "id": 235} |
|||
{"title": "[BUG] 现在能读图片的pdf了,但是文字的pdf反而读不了了,什么情况???", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/515", "detail": "**问题描述 / Problem Description**", "id": 236} |
|||
{"title": "在推理的过程中卡住不动,进程无法正常结束", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/516", "detail": "**问题描述 / Problem Description**", "id": 237} |
|||
{"title": "curl调用的时候,从第二轮开始,curl如何传参可以实现多轮对话?", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/517", "detail": "第一轮调用:", "id": 238} |
|||
{"title": "建议添加api.py部署后的日志管理功能?", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/518", "detail": "-------------------------------------------------------------------------------", "id": 239} |
|||
{"title": "有大佬知道,怎么多线程部署api.py脚本吗?", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/519", "detail": "api.py部署后,使用下面的请求,时间较慢,好像是单线程,如何改为多线程部署api.py:", "id": 240} |
|||
{"title": "[BUG] 上传文件到知识库 任何格式与内容都永远失败", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/520", "detail": "上传知识库的时候,传txt无法解析,就算是穿content/sample里的样例txt也无法解析,上传md、pdf等都无法加载,会持续性等待,等到了超过30分钟也不行。", "id": 241} |
|||
{"title": "关于prompt_template的问题", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/521", "detail": "请问这段prompt_template是什么意思,要怎么使用?可以给一个具体模板参考下吗?", "id": 242} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-06-01.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/522", "detail": "**问题描述 / Problem Description**", "id": 243} |
|||
{"title": "中文分词句号处理(关于表达金额之间的\".\")", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/523", "detail": "建议处理12.6亿元的这样的分词,最好别分成12 和6亿这样的,需要放到一起", "id": 244} |
|||
{"title": "ImportError: cannot import name 'inference' from 'paddle'", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/526", "detail": "在网上找了一圈,有说升级paddle的,我做了还是没有用,有说安装paddlepaddle的,我找了豆瓣的镜像源,但安装报错cannot detect archive format", "id": 245} |
|||
{"title": "[BUG] webscoket 接口串行问题(/local_doc_qa/stream-chat/{knowledge_base_id})", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/527", "detail": "**问题描述 / Problem Description**", "id": 246} |
|||
{"title": "[FEATURE] 刷新页面更新知识库列表", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/528", "detail": "**功能描述以及改进方案**", "id": 247} |
|||
{"title": "[BUG] 使用ptuning微调模型后,问答效果并不好", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/530", "detail": "### 未调用ptuning", "id": 248} |
|||
{"title": "[BUG] 多轮对话效果不佳", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/532", "detail": "在进行多轮对话的时候,无论设置的history_len是多少,效果都不好。事实上我将其设置成了最大值10,但在对话中,仍然无法实现多轮对话:", "id": 249} |
|||
{"title": "RuntimeError: MPS backend out of memory (MPS allocated: 18.00 GB, other allocations: 4.87 MB, max allowed: 18.13 GB)", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/533", "detail": "**问题描述**", "id": 250} |
|||
{"title": " 请大家重视这个issue!真正使用肯定是多用户并发问答,希望增加此功能!!!", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/534", "detail": "这得看你有多少显卡", "id": 251} |
|||
{"title": "在启动项目的时候如何使用到多张gpu啊?", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/535", "detail": "**在启动项目的时候如何使用到多张gpu啊?**", "id": 252} |
|||
{"title": " 使用流式输出的时候,curl调用的格式是什么?", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/536", "detail": "app.websocket(\"/local_doc_qa/stream-chat/{knowledge_base_id}\")(stream_chat)中的knowledge_base_id应该填什么???", "id": 253} |
|||
{"title": "使用本地 vicuna-7b模型启动错误", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/538", "detail": "环境: ubuntu 22.04 cuda 12.1 没有安装nccl,使用rtx2080与m60显卡并行计算", "id": 254} |
|||
{"title": "为什么会不调用GPU直接调用CPU呢", "file": "2023-06-02.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/539", "detail": "我的阿里云配置是16G显存,用默认代码跑webui.py时提示", "id": 255} |
|||
{"title": "上传多个文件时会互相覆盖", "file": "2023-06-03.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/541", "detail": "1、在同一个知识库中上传多个文件时会互相覆盖,无法结合多个文档的知识,有大佬知道怎么解决吗?", "id": 256} |
|||
{"title": "[BUG] ‘gcc’不是内部或外部命令/LLM对话只能持续一轮", "file": "2023-06-03.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/542", "detail": "No compiled kernel found.", "id": 257} |
|||
{"title": "以API模式启动项目却没有知识库的接口列表?", "file": "2023-06-04.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/544", "detail": "请问如何获取知识库的接口列表?如果没有需要自行编写的话,可不可以提供相关的获取方式,感谢", "id": 258} |
|||
{"title": "程序以API模式启动的时候,如何才能让接口以stream模式被调用呢?", "file": "2023-06-05.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/546", "detail": "作者您好,我在以API模式进行程序启动后,我发现接口响应时间很长,怎么样才能让接口以stream模式被调用呢?我想实现像webui模式的回答那样", "id": 259} |
|||
{"title": "关于原文中表格转为文本后数据相关度问题。", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/547", "detail": "原文中表格数据转换为文本,以 (X-Y:值;...) 的格式每一行组织成一句话,但这样做后发现相关度较低,效果很差,有何好的方案吗?", "id": 260} |
|||
{"title": "启动后LLM和知识库问答模式均只有最后一轮记录", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/548", "detail": "拉取最新代码,问答时,每次页面只显示最后一次问答记录,需要修改什么参数才可以保留历史记录?", "id": 261} |
|||
{"title": "提供system message配置,以便于让回答不要超出知识库范围", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/549", "detail": "**功能描述 / Feature Description**", "id": 262} |
|||
{"title": "[BUG] 使用p-tunningv2报错", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/551", "detail": "按照readme的指示把p-tunningv2训练完后的文件放到了p-tunningv2文件夹下,勾选使用p-tuningv2点重新加载模型,控制台提示错误信息:", "id": 263} |
|||
{"title": "[BUG] 智障,这么多问题,也好意思放出来,浪费时间", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/553", "detail": "。。。", "id": 264} |
|||
{"title": "[FEATURE] 我看代码文件中有一个ali_text_splitter.py,为什么不用他这个文本分割器了?", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/554", "detail": "我看代码文件中有一个ali_text_splitter.py,为什么不用他这个文本分割器了?", "id": 265} |
|||
{"title": "加载文档函数报错", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/557", "detail": "def load_file(filepath, sentence_size=SENTENCE_SIZE):", "id": 266} |
|||
{"title": "参考指引安装docker后,运行cli_demo.py,提示killed", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/558", "detail": "root@b3d1bd08095c:/chatGLM# python3 cli_demo.py", "id": 267} |
|||
{"title": "注意:如果安装错误,注意这两个包的版本 wandb==0.11.0 protobuf==3.18.3", "file": "2023-06-06.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/559", "detail": "Error1: 如果启动异常报错 `protobuf` 需要更新到 `protobuf==3.18.3 `", "id": 268} |
|||
{"title": "知识库对长文的知识相关度匹配不太理想有何优化方向", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/563", "detail": "我们可能录入一个文章有 1W 字,里面涉及这个文章主题的很多角度问题,我们针对他提问,他相关度匹配的内容和实际我们需要的答案相差很大怎么办。", "id": 269} |
|||
{"title": "使用stream-chat函数进行流式输出的时候,能使用curl调用吗?", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/565", "detail": "为什么下面这样调用会报错???", "id": 270} |
|||
{"title": "有大佬实践过 并行 或者 多线程 的部署方案吗?", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/566", "detail": "+1", "id": 271} |
|||
{"title": "多线程部署遇到问题?", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/567", "detail": "<img width=\"615\" alt=\"3d87bf74f0cf1a4820cc9e46b245859\" src=\"https://github.com/imClumsyPanda/langchain-ChatGLM/assets/109277248/8787570d-88bd-434e-aaa4-cb9276d1aa50\">", "id": 272} |
|||
{"title": "[BUG] 用fastchat加载vicuna-13b模型进行知识库的问答有token的限制错误", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/569", "detail": "当我开启fastchat的vicuna-13b的api服务,然后config那里配置好(api本地测试过可以返回结果),然后知识库加载好之后(知识库大概有1000多个文档,用chatGLM可以正常推理),进行问答时出现token超过限制,就问了一句hello;", "id": 273} |
|||
{"title": "现在的添加知识库,文件多了总是报错,也不知道自己加载了哪些文件,报错后也不知道是全部失败还是一部分成功;希望能有个加载指定文件夹作为知识库的功能", "file": "2023-06-07.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/574", "detail": "**功能描述 / Feature Description**", "id": 274} |
|||
{"title": "[BUG] moss模型本地加载报错", "file": "2023-06-08.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/577", "detail": "moss模型本地加载报错:", "id": 275} |
|||
{"title": "加载本地moss模型报错Can't instantiate abstract class MOSSLLM with abstract methods _history_len", "file": "2023-06-08.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/578", "detail": "(vicuna) ps@ps[13:56:20]:/data/chat/langchain-ChatGLM2/langchain-ChatGLM-0.1.13$ python webui.py --model-dir local_models --model moss --no-remote-model", "id": 276} |
|||
{"title": "[FEATURE] 能增加在前端页面控制prompt_template吗?或是能支持前端页面选择使用哪个prompt?", "file": "2023-06-08.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/579", "detail": "目前只能在config里修改一个prompt,想在多个不同场景切换比较麻烦", "id": 277} |
|||
{"title": "[BUG] streamlit ui的bug,在增加知识库时会报错", "file": "2023-06-08.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/580", "detail": "**问题描述 / Problem Description**", "id": 278} |
|||
{"title": "[FEATURE] webui/webui_st可以支持history吗?目前仅能一次对话", "file": "2023-06-08.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/581", "detail": "试了下webui和webui_st都不支持历史对话啊,只能对话一次,不能默认开启所有history吗?", "id": 279} |
|||
{"title": "启动python cli_demo.py --model chatglm-6b-int4-qe报错", "file": "2023-06-09.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/585", "detail": "下载好模型,和相关依赖环境,之间运行`python cli_demo.py --model chatglm-6b-int4-qe`报错了:", "id": 280} |
|||
{"title": "重新构建知识库报错", "file": "2023-06-09.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/586", "detail": "**问题描述 / Problem Description**", "id": 281} |
|||
{"title": "[FEATURE] 能否屏蔽paddle,我不需要OCR,效果差依赖环境还很复杂", "file": "2023-06-09.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/587", "detail": "希望能不依赖paddle", "id": 282} |
|||
{"title": "question :文档向量化这个可以自己手动实现么?", "file": "2023-06-09.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/589", "detail": "现有公司级数据500G+,需要使用这个功能,请问如何手动实现这个向量化,然后并加载", "id": 283} |
|||
{"title": "view前端能进行流式的返回吗??", "file": "2023-06-09.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/590", "detail": "view前端能进行流式的返回吗??", "id": 284} |
|||
{"title": "[BUG] Load parallel cpu kernel failed, using default cpu kernel code", "file": "2023-06-11.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/594", "detail": "**问题描述 / Problem Description**", "id": 285} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-06-11.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/595", "detail": "**问题描述 / Problem Description**", "id": 286} |
|||
{"title": "我在上传本地知识库时提示KeyError: 'name'错误,本地知识库都是.txt文件,文件数量大约是2000+。", "file": "2023-06-12.05", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/597", "detail": "<img width=\"649\" alt=\"KError\" src=\"https://github.com/imClumsyPanda/langchain-ChatGLM/assets/59411575/1ecc8182-aeee-4a0a-bbc3-74c2f1373f2d\">", "id": 287} |
|||
{"title": "model_config.py中有vicuna-13b-hf模型的配置信息,但是好像还是不可用?", "file": "2023-06-12.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/600", "detail": "@dongyihua543", "id": 288} |
|||
{"title": "ImportError: Using SOCKS proxy, but the 'socksio' package is not installed. Make sure to install httpx using `pip install httpx[socks]`.", "file": "2023-06-12.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/605", "detail": "应该代理问题,但是尝试了好多方法都解决不了,", "id": 289} |
|||
{"title": "[BUG] similarity_search_with_score_by_vector在找不到匹配的情况下出错", "file": "2023-06-12.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/607", "detail": "在设置匹配阈值 VECTOR_SEARCH_SCORE_THRESHOLD 的情况下,vectorstore会返回空,此时上述处理函数会出错", "id": 290} |
|||
{"title": "[FEATURE] 请问如何搭建英文知识库呢", "file": "2023-06-12.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/609", "detail": "**功能描述 / Feature Description**", "id": 291} |
|||
{"title": "谁有vicuna权重?llama转换之后的", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/611", "detail": "**问题描述 / Problem Description**", "id": 292} |
|||
{"title": "[FEATURE] API能实现上传文件夹的功能么?", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/612", "detail": "用户懒得全选所有的文件,就想上传个文件夹,请问下API能实现这个功能么?", "id": 293} |
|||
{"title": "请问在多卡部署后,上传单个文件作为知识库,用的是单卡在生成向量还是多卡?", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/614", "detail": "目前我检测我本地多卡部署的,好像生成知识库向量的时候用的还是单卡", "id": 294} |
|||
{"title": "[BUG] python webui.py提示非法指令", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/615", "detail": "(/data/conda-langchain [root@chatglm langchain-ChatGLM]# python webui.py", "id": 295} |
|||
{"title": "知识库文件跨行切分问题", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/616", "detail": "我的知识库文件txt文件,是一行一条知识,用\\n分行。", "id": 296} |
|||
{"title": "[FEATURE] bing搜索问答有流式的API么?", "file": "2023-06-13.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/617", "detail": "web端是有这个bing搜索回答,但api接口没有发现,大佬能给个提示么?", "id": 297} |
|||
{"title": "希望出一个macos m2的安装教程", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/620", "detail": "mac m2安装,模型加载成功了,知识库文件也上传成功了,但是一问答就会报错,报错内容如下", "id": 298} |
|||
{"title": "为【出处】提供高亮显示", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/621", "detail": "具体出处里面,对相关的内容高亮显示,不包含前后文。", "id": 299} |
|||
{"title": "[BUG] CPU运行cli_demo.py,不回答,hang住", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/622", "detail": "没有GPU;32G内存的ubuntu机器。", "id": 300} |
|||
{"title": "关于删除知识库里面的文档后,LLM知识库对话的时候还是会返回该被删除文档的内容", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/623", "detail": "如题,在vue前端成功执行删除知识库里面文档A.txt后,未能也在faiss索引中也删除该文档,LLM还是会返回这个A.txt的内容,并且以A.txt为出处,未能达到删除的效果", "id": 301} |
|||
{"title": "[BUG] 调用知识库进行问答,显存会一直叠加", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/625", "detail": "14G的显存,调用的chatglm-6b-int8模型,进行知识库问答时,最多问答四次就会爆显存了,观察了一下显存使用情况,每一次使用就会增加一次显存,请问这样是正常的吗?是否有什么配置需要开启可以解决这个问题?例如进行一次知识库问答清空上次问题的显存?", "id": 302} |
|||
{"title": "[BUG] web页面 重新构建数据库 失败,导致 原来的上传的数据库都没了", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/626", "detail": "web页面 重新构建数据库 失败,导致 原来的上传的数据库都没了", "id": 303} |
|||
{"title": "在CPU上运行webui.py报错Tensor on device cpu is not on the expected device meta!", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/627", "detail": "在CPU上运行python webui.py能启动,但最后有:RuntimeError: Tensor on device cpu is not on the expected device meta!", "id": 304} |
|||
{"title": "OSError: [WinError 1114] 动态链接库(DLL)初始化例程失败。 Error loading \"E:\\xxx\\envs\\langchain\\lib\\site-packages\\torch\\lib\\caffe2_nvrtc.dll\" or one of its dependencies.哪位大佬知道如何解决吗?", "file": "2023-06-14.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/629", "detail": "**问题描述 / Problem Description**", "id": 305} |
|||
{"title": "[BUG] WEBUI删除知识库文档,会导致知识库问答失败", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/632", "detail": "如题,从知识库已有文件中选择要删除的文件,点击删除后,在问答框输入内容回车报错", "id": 306} |
|||
{"title": "更新后的版本中,删除知识库中的文件,再提问出现error错误", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/634", "detail": "针对更新版本,识别到一个问题,过程如下:", "id": 307} |
|||
{"title": "我配置好了环境,想要实现本地知识库的问答?可是它返回给我的", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/637", "detail": "没有总结,只有相关度的回复,但是我看演示里面表现的,回复是可以实现总结的,我去查询代码", "id": 308} |
|||
{"title": "[BUG] NPM run dev can not successfully start the VUE frontend", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/638", "detail": "**问题描述 / Problem Description**", "id": 309} |
|||
{"title": "[BUG] 简洁阐述问题 / Concise description of the issue", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/639", "detail": "**问题描述 / Problem Description**", "id": 310} |
|||
{"title": "提一个模型加载的bug,我在截图中修复了,你们有空可以看一下。", "file": "2023-06-15.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/642", "detail": "", "id": 311} |
|||
{"title": "[求助]关于设置embedding model路径的问题", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/643", "detail": "如题,我之前成功跑起来过一次,但因环境丢失重新配置 再运行webui就总是报错", "id": 312} |
|||
{"title": "Lora微调后的模型可以直接使用吗", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/646", "detail": "看model_config.py里是有USE_LORA这个参数的,但是在cli_demo.py和webui.py这两个里面都没有用到,实际测试下来模型没有微调的效果,想问问现在这个功能实现了吗", "id": 313} |
|||
{"title": "write_check_file在tmp_files目录下生成的load_file.txt是否需要一直保留,占用空间很大,在建完索引后能否删除", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/647", "detail": "**功能描述 / Feature Description**", "id": 314} |
|||
{"title": "[BUG] /local_doc_qa/list_files?knowledge_base_id=test删除知识库bug", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/649", "detail": "1.新建test知识库并上传文件(在vue前端完成并检查后端发现确实生成了test文件夹以及下面的content和vec_store", "id": 315} |
|||
{"title": "[BUG] vue webui无法加载知识库", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/650", "detail": "拉取了最新的代码,分别运行了后端api和前端web,点击知识库,始终只能显示simple,无法加载知识库", "id": 316} |
|||
{"title": "不能本地加载moss模型吗?", "file": "2023-06-16.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/652", "detail": "手动下载模型设置local_model_path路径依旧提示缺少文件,该如何正确配置?", "id": 317} |
|||
{"title": "macos m2 pro docker 安装失败", "file": "2023-06-17.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/654", "detail": "macos m2 pro docker 安装失败", "id": 318} |
|||
{"title": " [BUG] mac m1 pro 运行提示 zsh: segmentation fault", "file": "2023-06-17.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/655", "detail": "运行: python webui.py", "id": 319} |
|||
{"title": "安装 requirements 报错", "file": "2023-06-17.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/656", "detail": "(langchainchatglm) D:\\github\\langchain-ChatGLM>pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/", "id": 320} |
|||
{"title": "[BUG] AssertionError", "file": "2023-06-17.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/658", "detail": "**问题描述 / Problem Description**", "id": 321} |
|||
{"title": "[FEATURE] 支持AMD win10 本地部署吗?", "file": "2023-06-18.06", "url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/660", "detail": "**功能描述 / Feature Description**", "id": 322} |
@ -0,0 +1,835 @@ |
|||
ChatGPT是OpenAI开发的一个大型语言模型,可以提供各种主题的信息, |
|||
|
|||
# 如何向 ChatGPT 提问以获得高质量答案:提示技巧工程完全指南 |
|||
|
|||
## 介绍 |
|||
|
|||
我很高兴欢迎您阅读我的最新书籍《The Art of Asking ChatGPT for High-Quality Answers: A complete Guide to Prompt Engineering Techniques》。本书是一本全面指南,介绍了各种提示技术,用于从ChatGPT中生成高质量的答案。 |
|||
|
|||
我们将探讨如何使用不同的提示工程技术来实现不同的目标。ChatGPT是一款最先进的语言模型,能够生成类似人类的文本。然而,理解如何正确地向ChatGPT提问以获得我们所需的高质量输出非常重要。而这正是本书的目的。 |
|||
|
|||
无论您是普通人、研究人员、开发人员,还是只是想在自己的领域中将ChatGPT作为个人助手的人,本书都是为您编写的。我使用简单易懂的语言,提供实用的解释,并在每个提示技术中提供了示例和提示公式。通过本书,您将学习如何使用提示工程技术来控制ChatGPT的输出,并生成符合您特定需求的文本。 |
|||
|
|||
在整本书中,我们还提供了如何结合不同的提示技术以实现更具体结果的示例。我希望您能像我写作时一样,享受阅读本书并从中获得知识。 |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第一章:Prompt 工程技术简介 |
|||
|
|||
什么是 Prompt 工程? |
|||
|
|||
Prompt 工程是创建提示或指导像 ChatGPT 这样的语言模型输出的过程。它允许用户控制模型的输出并生成符合其特定需求的文本。 |
|||
|
|||
ChatGPT 是一种先进的语言模型,能够生成类似于人类的文本。它建立在 Transformer 架构上,可以处理大量数据并生成高质量的文本。 |
|||
|
|||
然而,为了从 ChatGPT 中获得最佳结果,重要的是要了解如何正确地提示模型。 提示可以让用户控制模型的输出并生成相关、准确和高质量的文本。 在使用 ChatGPT 时,了解它的能力和限制非常重要。 |
|||
|
|||
该模型能够生成类似于人类的文本,但如果没有适当的指导,它可能无法始终产生期望的输出。 |
|||
|
|||
这就是 Prompt 工程的作用,通过提供清晰而具体的指令,您可以引导模型的输出并确保其相关。 |
|||
|
|||
**Prompt 公式是提示的特定格式,通常由三个主要元素组成:** |
|||
|
|||
- 任务:对提示要求模型生成的内容进行清晰而简洁的陈述。 |
|||
|
|||
- 指令:在生成文本时模型应遵循的指令。 |
|||
|
|||
- 角色:模型在生成文本时应扮演的角色。 |
|||
|
|||
在本书中,我们将探讨可用于 ChatGPT 的各种 Prompt 工程技术。我们将讨论不同类型的提示,以及如何使用它们实现您想要的特定目标。 |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二章:指令提示技术 |
|||
|
|||
现在,让我们开始探索“指令提示技术”,以及如何使用它从ChatGPT中生成高质量的文本。 |
|||
|
|||
指令提示技术是通过为模型提供具体指令来引导ChatGPT的输出的一种方法。这种技术对于确保输出相关和高质量非常有用。 |
|||
|
|||
要使用指令提示技术,您需要为模型提供清晰简洁的任务,以及具体的指令以供模型遵循。 |
|||
|
|||
例如,如果您正在生成客户服务响应,您可以提供一个任务,例如“生成对客户查询的响应”,并给出具体指令,例如“响应应该专业且提供准确的信息”。
|||
|
|||
提示公式:“按照以下指示生成[任务]:[指令]” |
|||
|
|||
示例: |
|||
|
|||
**生成客户服务响应:** |
|||
|
|||
- 任务:生成响应客户查询 |
|||
- 指令:响应应该专业且提供准确的信息 |
|||
- 提示公式:“按照以下指示生成专业且准确的客户查询响应:响应应该专业且提供准确的信息。” |
|||
|
|||
**生成法律文件:** |
|||
|
|||
- 任务:生成法律文件 |
|||
- 指令:文件应符合相关法律法规 |
|||
- 提示公式:“按照以下指示生成符合相关法律法规的法律文件:文件应符合相关法律法规。” |
|||
|
|||
使用指令提示技术时,重要的是要记住指令应该清晰具体。这将有助于确保输出相关和高质量。可以将指令提示技术与下一章节中解释的“角色提示”和“种子词提示”相结合,以增强ChatGPT的输出。 |
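
As a minimal sketch of how an instruction prompt built from this formula could be sent to a model programmatically, the snippet below assumes the `openai` Python SDK (v1 interface), an API key in the environment, and an illustrative model name; none of these details are prescribed by this book.

```python
# Minimal sketch: sending an instruction prompt (task + instructions) to a chat model.
# Assumes the openai Python SDK (>=1.0) and OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative, not prescribed by this book.
from openai import OpenAI

client = OpenAI()

task = "a response to a customer inquiry"
instructions = "The response should be professional and provide accurate information"

# Instruction-prompt formula: "Generate [task] by following these instructions: [instructions]"
prompt = f"Generate {task} by following these instructions: {instructions}."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```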
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第三章:角色提示 |
|||
|
|||
角色提示技术是通过为ChatGPT指定一个特定的角色来引导其输出的一种方式。这种技术对于生成针对特定上下文或受众的文本非常有用。 |
|||
|
|||
要使用角色提示技术,您需要为模型提供一个清晰具体的角色。 |
|||
|
|||
例如,如果您正在生成客户服务回复,您可以提供一个角色,如“客户服务代表”。 |
|||
|
|||
提示公式:“作为[角色]生成[任务]” |
|||
|
|||
示例: |
|||
|
|||
**生成客户服务回复:** |
|||
|
|||
- 任务:生成对客户查询的回复 |
|||
- 角色:客户服务代表 |
|||
- 提示公式:“作为客户服务代表,生成对客户查询的回复。” |
|||
|
|||
**生成法律文件:** |
|||
|
|||
- 任务:生成法律文件 |
|||
- 角色:律师 |
|||
- 提示公式:“作为律师,生成法律文件。” |
|||
|
|||
将角色提示技术与指令提示和种子词提示结合使用可以增强ChatGPT的输出。 |
|||
|
|||
**下面是一个示例,展示了如何将指令提示、角色提示和种子词提示技术结合使用:** |
|||
|
|||
- 任务:为新智能手机生成产品描述 |
|||
- 指令:描述应该是有信息量的,具有说服力,并突出智能手机的独特功能 |
|||
- 角色:市场代表

- 种子词:“创新的”
|||
- 提示公式:“作为市场代表,生成一个有信息量的、有说服力的产品描述,突出新智能手机的创新功能。该智能手机具有以下功能[插入您的功能]” |
|||
|
|||
在这个示例中,指令提示用于确保产品描述具有信息量和说服力。角色提示用于确保描述是从市场代表的角度书写的。而种子词提示则用于确保描述侧重于智能手机的创新功能。 |
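
A hedged sketch of the combined example above, assuming the same `openai` SDK: the role is carried by the system message, while the instruction and seed word are composed into the user message by a small helper. The model name and all wording are placeholders, not values from this book.

```python
# Sketch of combining role, instruction and seed-word prompting in a single request.
# The role is carried by the system message; model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()

def build_prompt(task: str, instruction: str, seed_word: str) -> str:
    """Compose the combined prompt formula described in this chapter."""
    return f"{task}. {instruction}. Focus on the seed word: {seed_word}."

messages = [
    {"role": "system", "content": "You are a marketing representative."},  # role prompt
    {"role": "user", "content": build_prompt(
        "Generate a product description for a new smartphone",
        "The description should be informative, persuasive and highlight its unique features",
        "innovative",
    )},
]

resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(resp.choices[0].message.content)
```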
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第四章:标准提示 |
|||
|
|||
标准提示是一种简单的方法,通过为模型提供一个特定的任务来引导ChatGPT的输出。例如,如果您想生成一篇新闻文章的摘要,您可以提供一个任务,如“总结这篇新闻文章”。 |
|||
|
|||
提示公式:“生成一个[任务]” |
|||
|
|||
例如: |
|||
|
|||
**生成新闻文章的摘要:** |
|||
|
|||
- 任务:总结这篇新闻文章 |
|||
- 提示公式:“生成这篇新闻文章的摘要” |
|||
|
|||
**生成一篇产品评论:** |
|||
|
|||
- 任务:为一款新智能手机撰写评论 |
|||
- 提示公式:“生成这款新智能手机的评论” |
|||
|
|||
此外,标准提示可以与其他技术(如角色提示和种子词提示)结合使用,以增强ChatGPT的输出。 |
|||
|
|||
**以下是如何将标准提示、角色提示和种子词提示技术结合使用的示例:** |
|||
|
|||
- 任务:为一台新笔记本电脑撰写产品评论 |
|||
- 说明:评论应客观、信息丰富,强调笔记本电脑的独特特点 |
|||
- 角色:技术专家 |
|||
- 种子词:“强大的” |
|||
- 提示公式:“作为一名技术专家,生成一个客观而且信息丰富的产品评论,强调新笔记本电脑的强大特点。” |
|||
|
|||
在这个示例中,标准提示技术用于确保模型生成产品评论。角色提示用于确保评论是从技术专家的角度写的。而种子词提示用于确保评论侧重于笔记本电脑的强大特点。 |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第五章:零、一和少样本提示 |
|||
|
|||
零样本、一样本和少样本提示是在只有极少示例甚至没有示例的情况下,从ChatGPT生成文本的技术。当特定任务的数据有限,或任务是全新且尚未明确定义时,这些技术非常有用。
|||
|
|||
当任务没有可用的示例时,使用零样本提示技术。此时只需为模型提供一个通用的任务描述,模型会根据自身对任务的理解生成文本。
|||
|
|||
当任务只有一个示例可用时,使用一样本提示技术。此时为模型提供该示例,模型会根据对示例的理解生成文本。
|||
|
|||
当任务只有少量示例可用时,使用少样本提示技术。此时为模型提供这些示例,模型会根据对示例的理解生成文本。
|||
|
|||
提示公式:“基于[数量]个示例生成文本” |
|||
|
|||
例如: |
|||
|
|||
**为没有可用示例的新产品编写产品描述:** |
|||
|
|||
- 任务:为新的智能手表编写产品描述 |
|||
|
|||
- 提示公式:“基于零个示例为这款新智能手表生成产品描述” |
|||
|
|||
**使用一个示例生成产品比较:** |
|||
|
|||
- 任务:将新款智能手机与最新的iPhone进行比较 |
|||
|
|||
- 提示公式:“使用一个示例(最新的iPhone)为这款新智能手机生成产品比较” |
|||
|
|||
**使用少量示例生成产品评论:** |
|||
|
|||
- 任务:为新的电子阅读器撰写评论 |
|||
|
|||
- 提示公式:“使用少量示例(3个其他电子阅读器)为这款新电子阅读器生成评论” |
|||
|
|||
|
|||
这些技术可用于根据模型对任务或提供的示例的理解生成文本。 |
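
As an illustration of few-shot prompting in practice (an assumption about tooling, not something this chapter defines), prior examples can be supplied as alternating user/assistant turns so the model infers the pattern before handling the real input. The example data and model name below are invented.

```python
# Sketch of few-shot prompting: demonstrations are supplied as user/assistant turns
# so the model can infer the desired pattern before handling the new input.
from openai import OpenAI

client = OpenAI()

examples = [
    ("Kindle Paperwhite", "A crisp, glare-free e-reader with weeks of battery life."),
    ("Kobo Clara", "A compact e-reader with a warm front light and broad format support."),
    ("PocketBook Era", "A rugged e-reader with physical page-turn buttons and audio support."),
]

messages = [{"role": "system", "content": "Write one-sentence product reviews."}]
for name, review in examples:  # few-shot demonstrations
    messages.append({"role": "user", "content": f"Review: {name}"})
    messages.append({"role": "assistant", "content": review})

messages.append({"role": "user", "content": "Review: the new e-reader"})  # actual task

resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(resp.choices[0].message.content)
```

With zero examples in the list this degenerates to zero-shot prompting, and with exactly one it is one-shot prompting; only the number of demonstration turns changes.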
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第六章:“让我们思考一下”提示 |
|||
|
|||
“让我们思考一下”提示是一种技巧,可鼓励ChatGPT生成反思和思考性的文本。这种技术适用于撰写论文、诗歌或创意写作等任务。 |
|||
|
|||
“让我们思考一下”提示的公式非常简单,即“让我们思考一下”后跟一个主题或问题。 |
|||
|
|||
例如: |
|||
|
|||
**生成一篇反思性论文:** |
|||
|
|||
- 任务:就个人成长主题写一篇反思性论文 |
|||
|
|||
- 提示公式:“让我们思考一下:个人成长” |
|||
|
|||
**生成一首诗:** |
|||
|
|||
- 任务:写一首关于季节变化的诗 |
|||
|
|||
- 提示公式:“让我们思考一下:季节变化” |
|||
|
|||
|
|||
这个提示要求对特定主题或想法展开对话或讨论。发言者邀请ChatGPT参与讨论相关主题。 |
|||
|
|||
模型提供了一个提示,作为对话或文本生成的起点。 |
|||
|
|||
然后,模型使用其训练数据和算法生成与提示相关的响应。这种技术允许ChatGPT根据提供的提示生成上下文适当且连贯的文本。 |
|||
|
|||
**要使用“让我们思考一下提示”技术与ChatGPT,您可以遵循以下步骤:** |
|||
|
|||
1. 确定您要讨论的主题或想法。 |
|||
|
|||
2. 制定一个明确表达主题或想法的提示,并开始对话或文本生成。 |
|||
|
|||
3. 用“让我们思考”或“让我们讨论”开头的提示,表明您正在启动对话或讨论。 |
|||
|
|||
**以下是使用此技术的一些提示示例:** |
|||
|
|||
- 提示:“让我们思考气候变化对农业的影响” |
|||
|
|||
- 提示:“让我们讨论人工智能的当前状态” |
|||
|
|||
- 提示:“让我们谈谈远程工作的好处和缺点”

您还可以添加开放式问题、陈述或一段您希望模型继续或扩展的文本。
|||
|
|||
|
|||
提供提示后,模型将使用其训练数据和算法生成与提示相关的响应,并以连贯的方式继续对话。 |
|||
|
|||
这种独特的提示有助于ChatGPT以不同的视角和角度给出答案,从而产生更具动态性和信息性的段落。 |
|||
|
|||
使用提示的步骤简单易行,可以真正提高您的写作水平。尝试一下,看看效果如何吧。 |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第七章:自洽提示 |
|||
|
|||
自洽提示是一种技术,用于确保ChatGPT的输出与提供的输入一致。这种技术对于事实核查、数据验证或文本生成中的一致性检查等任务非常有用。 |
|||
|
|||
自洽提示的提示公式是输入文本后跟着指令“请确保以下文本是自洽的”。 |
|||
|
|||
或者,可以提示模型生成与提供的输入一致的文本。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:文本生成** |
|||
|
|||
- 任务:生成产品评论 |
|||
|
|||
- 指令:评论应与输入中提供的产品信息一致 |
|||
|
|||
- 提示公式:“生成与以下产品信息一致的产品评论[插入产品信息]” |
|||
|
|||
**示例2:文本摘要** |
|||
|
|||
- 任务:概括一篇新闻文章 |
|||
|
|||
- 指令:摘要应与文章中提供的信息一致 |
|||
|
|||
- 提示公式:“用与提供的信息一致的方式概括以下新闻文章[插入新闻文章]” |
|||
|
|||
**示例3:文本完成** |
|||
|
|||
- 任务:完成一个句子 |
|||
|
|||
- 指令:完成应与输入中提供的上下文一致 |
|||
|
|||
- 提示公式:“以与提供的上下文一致的方式完成以下句子[插入句子]” |
|||
|
|||
**示例4:** |
|||
|
|||
1. **事实核查:** |
|||
|
|||
任务:检查给定新闻文章的一致性 |
|||
|
|||
输入文本:“文章中陈述该城市的人口为500万,但后来又说该城市的人口为700万。” |
|||
|
|||
提示公式:“请确保以下文本是自洽的:文章中陈述该城市的人口为500万,但后来又说该城市的人口为700万。” |
|||
|
|||
2. **数据验证:** |
|||
|
|||
任务:检查给定数据集的一致性 |
|||
|
|||
输入文本:“数据显示7月份的平均温度为30度,但最低温度记录为20度。” |
|||
|
|||
提示公式:“请确保以下文本是自洽的:数据显示7月份的平均温度为30度,但最低温度记录为20度。” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第八章:种子词提示 |
|||
|
|||
种子词提示是一种通过提供特定的种子词或短语来控制ChatGPT输出的技术。种子词提示的提示公式是种子词或短语,后跟指令“请根据以下种子词生成文本”。 |
|||
|
|||
示例: |
|||
|
|||
**文本生成:** |
|||
|
|||
- 任务:编写一篇有关龙的故事 |
|||
- 种子词:“龙” |
|||
- 提示公式:“请根据以下种子词生成文本:龙” |
|||
|
|||
**语言翻译:** |
|||
|
|||
- 任务:将一句话从英语翻译成西班牙语 |
|||
- 种子词:“你好” |
|||
- 提示公式:“请根据以下种子词生成文本:你好” |
|||
|
|||
这种技术允许模型生成与种子词相关的文本并对其进行扩展。这是一种控制模型生成文本与某个特定主题或背景相关的方式。 |
|||
|
|||
种子词提示可以与角色提示和指令提示相结合,以创建更具体和有针对性的生成文本。通过提供种子词或短语,模型可以生成与该种子词或短语相关的文本,并通过提供有关期望输出和角色的信息,模型可以以特定于角色或指令的风格或语气生成文本。这样可以更好地控制生成的文本,并可用于各种应用程序。 |
|||
|
|||
以下是提示示例及其公式: |
|||
|
|||
**示例1:文本生成** |
|||
|
|||
- 任务:编写一首诗 |
|||
- 指令:诗应与种子词“爱”相关,并以十四行诗的形式书写。 |
|||
- 角色:诗人 |
|||
- 提示公式:“作为诗人,根据以下种子词生成与“爱”相关的十四行诗:” |
|||
|
|||
**示例2:文本完成** |
|||
|
|||
- 任务:完成一句话 |
|||
- 指令:完成应与种子词“科学”相关,并以研究论文的形式书写。 |
|||
- 角色:研究员 |
|||
- 提示公式:“作为研究员,请在与种子词“科学”相关且以研究论文的形式书写的情况下完成以下句子:[插入句子]” |
|||
|
|||
**示例3:文本摘要** |
|||
|
|||
- 任务:摘要一篇新闻文章 |
|||
- 指令:摘要应与种子词“政治”相关,并以中立和公正的语气书写。 |
|||
- 角色:记者 |
|||
- 提示公式:“作为记者,请以中立和公正的语气摘要以下新闻文章,与种子词“政治”相关:[插入新闻文章]” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第九章:知识生成提示 |
|||
|
|||
知识生成提示是一种从ChatGPT中引出新的、原创的信息的技术。 |
|||
|
|||
知识生成提示的公式是“请生成关于X的新的和原创的信息”,其中X是感兴趣的主题。 |
|||
|
|||
这是一种利用模型预先存在的知识来生成新的信息或回答问题的技术。 |
|||
|
|||
要将此提示与ChatGPT一起使用,需要将问题或主题作为输入提供给模型,以及指定所生成文本的任务或目标的提示。 |
|||
|
|||
提示应包括有关所需输出的信息,例如要生成的文本类型以及任何特定的要求或限制。 |
|||
|
|||
以下是提示示例及其公式: |
|||
|
|||
**示例1:知识生成** |
|||
|
|||
- 任务:生成有关特定主题的新信息 |
|||
- 说明:生成的信息应准确且与主题相关 |
|||
- 提示公式:“生成有关[特定主题]的新的准确信息” |
|||
|
|||
**示例2:问答** |
|||
|
|||
- 任务:回答问题 |
|||
- 说明:答案应准确且与问题相关 |
|||
- 提示公式:“回答以下问题:[插入问题]” |
|||
|
|||
**示例3:知识整合** |
|||
|
|||
- 任务:将新信息与现有知识整合 |
|||
- 说明:整合应准确且与主题相关 |
|||
- 提示公式:“将以下信息与有关[特定主题]的现有知识整合:[插入新信息]” |
|||
|
|||
**示例4:数据分析** |
|||
|
|||
- 任务:从给定的数据集中生成有关客户行为的见解 |
|||
- 提示公式:“请从这个数据集中生成有关客户行为的新的和原创的信息” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十章:知识整合提示 |
|||
|
|||
这种技术利用模型的现有知识来整合新信息或连接不同的信息片段。 |
|||
|
|||
这种技术对于将现有知识与新信息相结合,以生成更全面的特定主题的理解非常有用。 |
|||
|
|||
**如何与ChatGPT一起使用:** |
|||
|
|||
- 模型应该提供新信息和现有知识作为输入,以及指定生成文本的任务或目标的提示。 |
|||
- 提示应包括有关所需输出的信息,例如要生成的文本类型以及任何特定的要求或限制。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:知识整合** |
|||
|
|||
- 任务:将新信息与现有知识整合 |
|||
- 说明:整合应准确且与主题相关 |
|||
- 提示公式:“将以下信息与关于[具体主题]的现有知识整合:[插入新信息]” |
|||
|
|||
**示例2:连接信息片段** |
|||
|
|||
- 任务:连接不同的信息片段 |
|||
- 说明:连接应相关且逻辑清晰 |
|||
- 提示公式:“以相关且逻辑清晰的方式连接以下信息片段:[插入信息1] [插入信息2]” |
|||
|
|||
**示例3:更新现有知识** |
|||
|
|||
- 任务:使用新信息更新现有知识 |
|||
- 说明:更新的信息应准确且相关 |
|||
- 提示公式:“使用以下信息更新[具体主题]的现有知识:[插入新信息]” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十一章:多项选择提示 |
|||
|
|||
这种技术向模型提供一个问题或任务以及一组预定义的选项作为潜在答案。 |
|||
|
|||
该技术对于生成仅限于特定选项集的文本非常有用,可用于问答、文本完成和其他任务。模型可以生成仅限于预定义选项的文本。 |
|||
|
|||
要使用ChatGPT的多项选择提示,需要向模型提供一个问题或任务作为输入,以及一组预定义的选项作为潜在答案。提示还应包括有关所需输出的信息,例如要生成的文本类型以及任何特定要求或限制。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:问答** |
|||
|
|||
- 任务:回答一个多项选择题 |
|||
- 说明:答案应该是预定义的选项之一 |
|||
- 提示公式:“通过选择以下选项之一回答以下问题:[插入问题] [插入选项1] [插入选项2] [插入选项3]” |
|||
|
|||
**示例2:文本完成** |
|||
|
|||
- 任务:使用预定义选项之一完成句子 |
|||
- 说明:完成应该是预定义的选项之一 |
|||
- 提示公式:“通过选择以下选项之一完成以下句子:[插入句子] [插入选项1] [插入选项2] [插入选项3]” |
|||
|
|||
**示例3:情感分析** |
|||
|
|||
- 任务:将文本分类为积极、中立或消极 |
|||
- 说明:分类应该是预定义的选项之一 |
|||
- 提示公式:“通过选择以下选项之一,将以下文本分类为积极、中立或消极:[插入文本] [积极] [中立] [消极]” |
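
Building on the examples above, here is a rough sketch of a multiple-choice prompt with a simple validation step; the option list, fallback choice and model name are my own illustrative assumptions rather than anything fixed by the formula.

```python
# Sketch of a multiple-choice prompt with post-validation: the model is asked to reply
# with exactly one of the predefined options, and the reply is checked against that set.
from openai import OpenAI

client = OpenAI()

question = "Classify the sentiment of this text: 'The battery dies within two hours.'"
options = ["positive", "neutral", "negative"]

prompt = (
    f"Answer the following question by choosing exactly one of these options "
    f"({', '.join(options)}) and reply with that single word only:\n{question}"
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # reduce variation for classification-style tasks
)
answer = resp.choices[0].message.content.strip().lower()

if answer not in options:
    answer = "neutral"  # fall back when the reply is not one of the listed options
print(answer)
```

Keeping the temperature low and validating the reply guards against the model drifting into free-form text instead of one of the predefined options.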
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十二章:可解释的软提示 |
|||
|
|||
可解释的软提示是一种技术,可以在提供一定的灵活性的同时控制模型生成的文本。它通过提供一组受控输入和关于所需输出的附加信息来实现。这种技术可以生成更具解释性和可控性的生成文本。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:文本生成** |
|||
|
|||
- 任务:生成一个故事 |
|||
- 指令:故事应基于一组给定的角色和特定的主题 |
|||
- 提示公式:“基于以下角色生成故事:[插入角色]和主题:[插入主题]” |
|||
|
|||
**示例2:文本完成** |
|||
|
|||
- 任务:完成一句话 |
|||
- 指令:完成应以特定作者的风格为基础 |
|||
- 提示公式:“以[特定作者]的风格完成以下句子:[插入句子]” |
|||
|
|||
**示例3:语言建模** |
|||
|
|||
- 任务:以特定风格生成文本 |
|||
- 指令:文本应以特定时期的风格为基础 |
|||
- 提示公式:“以[特定时期]的风格生成文本:[插入上下文]” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十三章:控制生成提示 |
|||
|
|||
控制生成提示是一种技术,可让模型在生成文本时对输出进行高度控制。 |
|||
|
|||
这可以通过提供一组特定的输入来实现,例如模板、特定词汇或一组约束条件,这些输入可用于指导生成过程。 |
|||
|
|||
以下是一些示例和它们的公式: |
|||
|
|||
**示例1:文本生成** |
|||
|
|||
- 任务:生成一个故事 |
|||
- 说明:该故事应基于特定的模板 |
|||
- 提示公式:“根据以下模板生成故事:[插入模板]” |
|||
|
|||
**示例2:文本补全** |
|||
|
|||
- 任务:完成一句话 |
|||
- 说明:完成应使用特定的词汇 |
|||
- 提示公式:“使用以下词汇完成以下句子:[插入词汇]:[插入句子]” |
|||
|
|||
**示例3:语言建模** |
|||
|
|||
- 任务:以特定风格生成文本 |
|||
- 说明:文本应遵循一组特定的语法规则 |
|||
- 提示公式:“生成遵循以下语法规则的文本:[插入规则]:[插入上下文]” |
|||
|
|||
通过提供一组特定的输入来指导生成过程,控制生成提示使得生成的文本更具可控性和可预测性。 |
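
A possible sketch of controlled generation, assuming the `openai` SDK: the template and the required vocabulary are embedded in the prompt so the output stays within those constraints. The template text, word list and model name are all illustrative.

```python
# Sketch of controlled generation with an explicit template and required vocabulary.
from openai import OpenAI

client = OpenAI()

template = "Setting: {setting}. Protagonist: {hero}. Conflict: {conflict}. Resolution: ..."
required_vocabulary = ["lighthouse", "storm", "promise"]

prompt = (
    "Generate a short story based on the following template, filling in the resolution, "
    f"and use every one of these words at least once: {', '.join(required_vocabulary)}.\n"
    + template.format(setting="a remote island", hero="an old keeper", conflict="a failing lamp")
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```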
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十四章:问答提示 |
|||
|
|||
问答提示是一种技术,可以让模型生成回答特定问题或任务的文本。通过将问题或任务与可能与问题或任务相关的任何其他信息一起作为输入提供给模型来实现此目的。 |
|||
|
|||
一些提示示例及其公式如下: |
|||
|
|||
**示例1:事实问题回答** |
|||
|
|||
- 任务:回答一个事实性问题 |
|||
- 说明:答案应准确且相关 |
|||
- 提示公式:“回答以下事实问题:[插入问题]” |
|||
|
|||
**示例2:定义** |
|||
|
|||
- 任务:提供一个词的定义 |
|||
- 说明:定义应准确 |
|||
- 提示公式:“定义以下词汇:[插入单词]” |
|||
|
|||
**示例3:信息检索** |
|||
|
|||
- 任务:从特定来源检索信息 |
|||
- 说明:检索到的信息应相关 |
|||
- 提示公式:“从以下来源检索有关[特定主题]的信息:[插入来源]”

这对于问答和信息检索等任务非常有用。
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十五章:概述提示 |
|||
|
|||
概述提示是一种技术,允许模型在保留其主要思想和信息的同时生成给定文本的较短版本。 |
|||
|
|||
这可以通过将较长的文本作为输入提供给模型并要求其生成该文本的摘要来实现。 |
|||
|
|||
这种技术对于文本概述和信息压缩等任务非常有用。 |
|||
|
|||
**如何在ChatGPT中使用:** |
|||
|
|||
- 应该向模型提供较长的文本作为输入,并要求其生成该文本的摘要。 |
|||
- 提示还应包括有关所需输出的信息,例如摘要的所需长度和任何特定要求或限制。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:文章概述** |
|||
|
|||
- 任务:概述新闻文章 |
|||
- 说明:摘要应是文章主要观点的简要概述 |
|||
- 提示公式:“用一句简短的话概括以下新闻文章:[插入文章]” |
|||
|
|||
**示例2:会议记录** |
|||
|
|||
- 任务:概括会议记录 |
|||
- 说明:摘要应突出会议的主要决策和行动 |
|||
- 提示公式:“通过列出主要决策和行动来总结以下会议记录:[插入记录]” |
|||
|
|||
**示例3:书籍摘要** |
|||
|
|||
- 任务:总结一本书 |
|||
- 说明:摘要应是书的主要观点的简要概述 |
|||
- 提示公式:“用一段简短的段落总结以下书籍:[插入书名]” |
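
One way the article-summary formula could look in code, under the assumption of the `openai` SDK and a crude character limit to keep long inputs within the model's context window (the limit and model name are placeholders, not recommended values):

```python
# Sketch of a summarization prompt that states the desired length and trims
# overly long input before sending it.
from openai import OpenAI

client = OpenAI()

MAX_CHARS = 8000  # crude guard against exceeding the model's context window

def summarize(article: str, sentences: int = 1) -> str:
    text = article[:MAX_CHARS]
    prompt = (
        f"Summarize the following news article in {sentences} sentence(s), "
        f"keeping only its main points:\n{text}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(summarize("(paste the article text here)"))
```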
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十六章:对话提示 |
|||
|
|||
对话提示是一种技术,允许模型生成模拟两个或更多实体之间对话的文本。通过为模型提供一个上下文和一组角色或实体,以及它们的角色和背景,并要求模型在它们之间生成对话。 |
|||
|
|||
因此,应为模型提供上下文和一组角色或实体,以及它们的角色和背景。还应向模型提供有关所需输出的信息,例如对话或交谈的类型以及任何特定的要求或限制。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:对话生成** |
|||
|
|||
- 任务:生成两个角色之间的对话 |
|||
- 说明:对话应自然且与给定上下文相关 |
|||
- 提示公式:“在以下情境中生成以下角色之间的对话[插入角色]” |
|||
|
|||
**示例2:故事写作** |
|||
|
|||
- 任务:在故事中生成对话 |
|||
- 说明:对话应与故事的角色和事件一致 |
|||
- 提示公式:“在以下故事中生成以下角色之间的对话[插入故事]” |
|||
|
|||
**示例3:聊天机器人开发** |
|||
|
|||
- 任务:为客服聊天机器人生成对话 |
|||
- 说明:对话应专业且提供准确的信息 |
|||
- 提示公式:“在客户询问[插入主题]时,为客服聊天机器人生成专业和准确的对话” |
|||
|
|||
因此,这种技术对于对话生成、故事写作和聊天机器人开发等任务非常有用。 |
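
As a sketch of dialogue prompting for a customer-service chatbot, assuming the `openai` SDK: the system message fixes the roles and context, and the accumulated history is resent on every turn so the conversation stays coherent. The bookstore scenario and messages are invented for illustration.

```python
# Sketch of dialogue prompting: the running history is resent each turn so the
# model keeps the conversational context.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": "You are a professional customer service agent for an online bookstore. "
               "Answer accurately and politely.",
}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # keep the dialogue state
    return reply

print(chat("My order arrived with a damaged cover. What can I do?"))
print(chat("Can I get a replacement instead of a refund?"))
```

Note that the history list grows with every turn; in a long-running chatbot it would eventually need to be truncated or summarized to stay within the context window.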
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十七章:对抗性提示 |
|||
|
|||
对抗性提示是一种技术,它允许模型生成抵抗某些类型的攻击或偏见的文本。这种技术可用于训练更为稳健和抵抗某些类型攻击或偏见的模型。 |
|||
|
|||
要在ChatGPT中使用对抗性提示,需要为模型提供一个提示,该提示旨在使模型难以生成符合期望输出的文本。提示还应包括有关所需输出的信息,例如要生成的文本类型和任何特定要求或约束。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:用于文本分类的对抗性提示** |
|||
|
|||
- 任务:生成被分类为特定标签的文本 |
|||
- 说明:生成的文本应难以分类为特定标签 |
|||
- 提示公式:“生成难以分类为[插入标签]的文本” |
|||
|
|||
**示例2:用于情感分析的对抗性提示** |
|||
|
|||
- 任务:生成难以分类为特定情感的文本 |
|||
- 说明:生成的文本应难以分类为特定情感 |
|||
- 提示公式:“生成难以分类为具有[插入情感]情感的文本” |
|||
|
|||
**示例3:用于语言翻译的对抗性提示** |
|||
|
|||
- 任务:生成难以翻译的文本 |
|||
- 说明:生成的文本应难以翻译为目标语言 |
|||
- 提示公式:“生成难以翻译为[插入目标语言]的文本” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十八章:聚类提示 |
|||
|
|||
聚类提示是一种技术,它可以让模型根据某些特征或特点将相似的数据点分组在一起。 |
|||
|
|||
通过提供一组数据点并要求模型根据某些特征或特点将它们分组成簇,可以实现这一目标。 |
|||
|
|||
这种技术在数据分析、机器学习和自然语言处理等任务中非常有用。 |
|||
|
|||
**如何在ChatGPT中使用:** |
|||
|
|||
应该向模型提供一组数据点,并要求它根据某些特征或特点将它们分组成簇。提示还应包括有关所需输出的信息,例如要生成的簇数和任何特定的要求或约束。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:客户评论的聚类** |
|||
|
|||
- 任务:将相似的客户评论分组在一起 |
|||
- 说明:应根据情感将评论分组 |
|||
- 提示公式:“将以下客户评论根据情感分组成簇:[插入评论]” |
|||
|
|||
**示例2:新闻文章的聚类** |
|||
|
|||
- 任务:将相似的新闻文章分组在一起 |
|||
- 说明:应根据主题将文章分组 |
|||
- 提示公式:“将以下新闻文章根据主题分组成簇:[插入文章]” |
|||
|
|||
**示例3:科学论文的聚类** |
|||
|
|||
- 任务:将相似的科学论文分组在一起 |
|||
- 说明:应根据研究领域将论文分组 |
|||
- 提示公式:“将以下科学论文根据研究领域分组成簇:[插入论文]” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第十九章:强化学习提示 |
|||
|
|||
强化学习提示是一种技术,可以使模型从过去的行动中学习,并随着时间的推移提高其性能。

要在ChatGPT中使用强化学习提示,需要为模型提供一组输入和奖励,并允许其根据接收到的奖励调整其行为。提示还应包括有关期望输出的信息,例如要完成的任务以及任何特定要求或限制。

这种技术对于决策制定、游戏玩法和自然语言生成等任务非常有用。
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:用于文本生成的强化学习** |
|||
|
|||
- 任务:生成与特定风格一致的文本 |
|||
- 说明:模型应根据为生成与特定风格一致的文本而接收到的奖励来调整其行为 |
|||
- 提示公式:“使用强化学习来生成与以下风格一致的文本[插入风格]” |
|||
|
|||
**示例2:用于语言翻译的强化学习** |
|||
|
|||
- 任务:将文本从一种语言翻译成另一种语言 |
|||
- 说明:模型应根据为生成准确翻译而接收到的奖励来调整其行为 |
|||
- 提示公式:“使用强化学习将以下文本[插入文本]从[插入语言]翻译成[插入语言]” |
|||
|
|||
**示例3:用于问答的强化学习** |
|||
|
|||
- 任务:回答问题 |
|||
- 说明:模型应根据为生成准确答案而接收到的奖励来调整其行为 |
|||
- 提示公式:“使用强化学习来回答以下问题[插入问题]” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二十章:课程学习提示 |
|||
|
|||
课程学习是一种技术,允许模型通过先训练简单任务,逐渐增加难度来学习复杂任务。 |
|||
|
|||
要在ChatGPT中使用课程学习提示,模型应该提供一系列任务,这些任务逐渐增加难度。 |
|||
|
|||
提示还应包括有关期望输出的信息,例如要完成的最终任务以及任何特定要求或约束条件。 |
|||
|
|||
此技术对自然语言处理、图像识别和机器学习等任务非常有用。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:用于文本生成的课程学习** |
|||
|
|||
- 任务:生成与特定风格一致的文本 |
|||
- 说明:模型应该在移动到更复杂的风格之前先在简单的风格上进行训练。 |
|||
- 提示公式:“使用课程学习来生成与以下风格[插入风格]一致的文本,按照以下顺序[插入顺序]。” |
|||
|
|||
**示例2:用于语言翻译的课程学习** |
|||
|
|||
- 任务:将文本从一种语言翻译成另一种语言 |
|||
- 说明:模型应该在移动到更复杂的语言之前先在简单的语言上进行训练。 |
|||
- 提示公式:“使用课程学习将以下语言[插入语言]的文本翻译成以下顺序[插入顺序]。” |
|||
|
|||
**示例3:用于问题回答的课程学习** |
|||
|
|||
- 任务:回答问题 |
|||
- 说明:模型应该在移动到更复杂的问题之前先在简单的问题上进行训练。 |
|||
- 提示公式:“使用课程学习来回答以下问题[插入问题],按照以下顺序[插入顺序]生成答案。” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二十一章:情感分析提示 |
|||
|
|||
情感分析是一种技术,允许模型确定文本的情绪色彩或态度,例如它是积极的、消极的还是中立的。 |
|||
|
|||
要在ChatGPT中使用情感分析提示,模型应该提供一段文本并要求根据其情感分类。 |
|||
|
|||
提示还应包括关于所需输出的信息,例如要检测的情感类型(例如积极的、消极的、中立的)和任何特定要求或约束条件。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:客户评论的情感分析** |
|||
|
|||
- 任务:确定客户评论的情感 |
|||
- 说明:模型应该将评论分类为积极的、消极的或中立的 |
|||
- 提示公式:“对以下客户评论进行情感分析[插入评论],并将它们分类为积极的、消极的或中立的。” |
|||
|
|||
**示例2:推文的情感分析** |
|||
|
|||
- 任务:确定推文的情感 |
|||
- 说明:模型应该将推文分类为积极的、消极的或中立的 |
|||
- 提示公式:“对以下推文进行情感分析[插入推文],并将它们分类为积极的、消极的或中立的。” |
|||
|
|||
**示例3:产品评论的情感分析** |
|||
|
|||
- 任务:确定产品评论的情感 |
|||
- 说明:模型应该将评论分类为积极的、消极的或中立的 |
|||
- 提示公式:“对以下产品评论进行情感分析[插入评论],并将它们分类为积极的、消极的或中立的。” |
|||
|
|||
这种技术对自然语言处理、客户服务和市场研究等任务非常有用。 |
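
A small sketch of batch sentiment analysis over a list of reviews, assuming the `openai` SDK; the reviews, label set and model name are illustrative assumptions, and each item is classified with one constrained prompt.

```python
# Sketch of batch sentiment analysis: each review is classified as positive,
# negative or neutral with one constrained prompt per item.
from openai import OpenAI

client = OpenAI()

reviews = [
    "The screen is gorgeous and setup took two minutes.",
    "It stopped charging after a week.",
    "Does what it says, nothing more.",
]

def classify(review: str) -> str:
    prompt = (
        "Perform sentiment analysis on the following review and reply with exactly "
        f"one word - positive, negative or neutral:\n{review}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

for review in reviews:
    print(classify(review), "-", review)
```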
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二十二章:命名实体识别提示 |
|||
|
|||
命名实体识别(NER)是一种技术,它可以使模型识别和分类文本中的命名实体,例如人名、组织机构、地点和日期等。 |
|||
|
|||
要在ChatGPT中使用命名实体识别提示,需要向模型提供一段文本,并要求它识别和分类文本中的命名实体。 |
|||
|
|||
提示还应包括有关所需输出的信息,例如要识别的命名实体类型(例如人名、组织机构、地点、日期)以及任何特定要求或约束条件。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:新闻文章中的命名实体识别** |
|||
|
|||
- 任务:在新闻文章中识别和分类命名实体 |
|||
- 说明:模型应识别和分类人名、组织机构、地点和日期 |
|||
- 提示公式:“在以下新闻文章[插入文章]上执行命名实体识别,并识别和分类人名、组织机构、地点和日期。” |
|||
|
|||
**示例2:法律文件中的命名实体识别** |
|||
|
|||
- 任务:在法律文件中识别和分类命名实体 |
|||
- 说明:模型应识别和分类人名、组织机构、地点和日期 |
|||
- 提示公式:“在以下法律文件[插入文件]上执行命名实体识别,并识别和分类人名、组织机构、地点和日期。” |
|||
|
|||
**示例3:研究论文中的命名实体识别** |
|||
|
|||
- 任务:在研究论文中识别和分类命名实体 |
|||
- 说明:模型应识别和分类人名、组织机构、地点和日期 |
|||
- 提示公式:“在以下研究论文[插入论文]上执行命名实体识别,并识别和分类人名、组织机构、地点和日期。” |
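
To make the extracted entities machine-readable, one option (an assumption on my part, not part of the formulas above) is to ask for JSON and parse the reply defensively. The sample sentence, JSON keys and model name are invented for illustration.

```python
# Sketch of an NER prompt that requests JSON output and parses it, falling back to
# the raw reply if the model does not return valid JSON.
import json
from openai import OpenAI

client = OpenAI()

text = "Alice Zhang joined Acme Corp in Beijing in March 2021."

prompt = (
    "Perform named entity recognition on the following text. Return a JSON object with "
    'the keys "persons", "organizations", "locations" and "dates", each mapping to a '
    f"list of strings, and return nothing else:\n{text}"
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
raw = resp.choices[0].message.content

try:
    entities = json.loads(raw)   # e.g. {"persons": ["Alice Zhang"], ...}
except json.JSONDecodeError:
    entities = {"raw": raw}      # keep the unparsed reply rather than failing
print(entities)
```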
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二十三章:文本分类提示 |
|||
|
|||
文本分类是一种技术,它可以让模型将文本分成不同的类别。这种技术对于自然语言处理、文本分析和情感分析等任务非常有用。 |
|||
|
|||
需要注意的是,文本分类和情感分析是不同的。情感分析特别关注于确定文本中表达的情感或情绪。这可能包括确定文本表达了积极、消极还是中性的情感。情感分析通常用于客户评论、社交媒体帖子和其他需要表达情感的文本。 |
|||
|
|||
要在ChatGPT中使用文本分类提示,模型需要提供一段文本,并要求它根据预定义的类别或标签进行分类。提示还应包括有关所需输出的信息,例如类别或标签的数量以及任何特定的要求或约束。 |
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:对客户评论进行文本分类** |
|||
|
|||
- 任务:将客户评论分类为不同的类别,例如电子产品、服装和家具 |
|||
- 说明:模型应根据评论的内容对其进行分类 |
|||
- 提示公式:“对以下客户评论 [插入评论] 进行文本分类,并根据其内容将其分类为不同的类别,例如电子产品、服装和家具。” |
|||
|
|||
**示例2:对新闻文章进行文本分类** |
|||
|
|||
- 任务:将新闻文章分类为不同的类别,例如体育、政治和娱乐 |
|||
- 说明:模型应根据文章的内容对其进行分类 |
|||
- 提示公式:“对以下新闻文章 [插入文章] 进行文本分类,并根据其内容将其分类为不同的类别,例如体育、政治和娱乐。” |
|||
|
|||
**示例3:对电子邮件进行文本分类** |
|||
|
|||
- 任务:将电子邮件分类为不同的类别,例如垃圾邮件、重要邮件或紧急邮件 |
|||
- 说明:模型应根据电子邮件的内容和发件人对其进行分类 |
|||
- 提示公式:“对以下电子邮件 [插入电子邮件] 进行文本分类,并根据其内容和发件人将其分类为不同的类别,例如垃圾邮件、重要邮件或紧急邮件。” |
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 第二十四章:文本生成提示 |
|||
|
|||
文本生成提示与本书中提到的其他提示技术相关,例如零样本、一样本和少样本提示、控制生成提示、翻译提示、语言建模提示、句子补全提示等。这些提示都与生成文本有关,但它们在生成文本的方式,以及对生成文本施加的特定要求或限制方面有所不同。

文本生成提示还可用于微调预训练模型,或训练新模型以执行特定任务。
|||
|
|||
提示示例及其公式: |
|||
|
|||
**示例1:故事创作的文本生成** |
|||
|
|||
- 任务:根据给定的提示生成故事 |
|||
- 说明:故事应至少包含1000个单词,并包括一组特定的角色和情节。 |
|||
- 提示公式:“根据以下提示[插入提示]生成一个至少包含1000个单词,包括角色[插入角色]和情节[插入情节]的故事。” |
|||
|
|||
**示例2:语言翻译的文本生成** |
|||
|
|||
- 任务:将给定的文本翻译成另一种语言 |
|||
- 说明:翻译应准确并符合习惯用语。 |
|||
- 提示公式:“将以下文本[插入文本]翻译成[插入目标语言],并确保其准确且符合习惯用语。” |
|||
|
|||
**示例3:文本完成的文本生成** |
|||
|
|||
- 任务:完成给定的文本 |
|||
- 说明:生成的文本应与输入文本连贯一致。 |
|||
- 提示公式:“完成以下文本[插入文本],并确保其连贯一致且符合输入文本。” |
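
The chapter notes that text-generation prompts can also feed model fine-tuning. As a loosely related sketch, and purely as an assumption about a common workflow rather than anything this book specifies, the snippet below writes prompt/completion pairs into a JSONL file; consult your fine-tuning provider's documentation for the exact record format it expects.

```python
# Sketch: turning prompt formulas plus reference outputs into a JSONL training file.
# The chat-style record layout, file name and sample data are assumptions.
import json

samples = [
    ("Translate the following text into French, accurately and idiomatically: Good morning.",
     "Bonjour."),
    ("Complete the following text coherently: The lighthouse keeper climbed the stairs and",
     "lit the lamp just as the storm reached the shore."),
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for prompt, completion in samples:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

print("wrote", len(samples), "training examples to finetune_data.jsonl")
```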
|||
|
|||
<div style="page-break-after:always;"></div> |
|||
|
|||
## 结语 |
|||
|
|||
正如本书中所探讨的那样,Prompt 工程(提示工程)是一种利用像ChatGPT这样的语言模型获得高质量答案的强大工具。通过运用各种技巧精心设计提示,我们可以引导模型生成符合我们特定需求和要求的文本。
|||
|
|||
在第二章中,我们讨论了如何使用指令提示向模型提供清晰明确的指导。在第三章中,我们探讨了如何使用角色提示生成特定的语音或风格的文本。在第四章中,我们研究了如何使用标准提示作为微调模型性能的起点。我们还研究了几种高级提示技术,例如Zero、One和Few Shot Prompting、Self-Consistency、Seed-word Prompt、Knowledge Generation Prompt、Knowledge Integration prompts、Multiple Choice prompts、Interpretable Soft Prompts、Controlled generation prompts、Question-answering prompts、Summarization prompts、Dialogue prompts、Adversarial prompts、Clustering prompts、Reinforcement learning prompts、Curriculum learning prompts、Sentiment analysis prompts、Named entity recognition prompts和Text classification prompts(对应章节的名字)。 |
|||
|
|||
这些技术中的每一种都可以以不同的方式使用,以实现各种不同的结果。随着您继续使用ChatGPT和其他语言模型,值得尝试不同的技巧组合,以找到最适合您特定用例的方法。 |
|||
|
|||
最后,您可以查看我写的其他主题的书籍。 |
|||
|
|||
感谢您阅读整本书。期待在我的其他书中与您见面。 |
|||
|
|||
(本文翻译自《The Art of Asking ChatGPT for High-Quality Answers A Complete Guide to Prompt Engineering Techniques》这本书,本文的翻译全部由ChatGpt完成,我只是把翻译内容给稍微排版了一下。做完了才发现这个工作早就有人做过了...下面是我以此事件让New Bing编写的一个小故事,希望大家喜欢) |
|||
|
|||
> 他终于画完了他的画,心满意足地把它挂在了墙上。他觉得这是他一生中最伟大的作品,无人能及。他邀请了所有的朋友来欣赏,期待着他们的赞美和惊叹。 可是当他们看到画时,却没有一个人说话。他们只是互相对视,然后低头咳嗽,或者假装看手机。他感到很奇怪,难道他们都不懂艺术吗?难道他们都没有眼光吗? 他忍不住问其中一个朋友:“你觉得我的画怎么样?” 朋友犹豫了一下,说:“嗯……其实……这个画……我以前在哪里见过。” “见过?你在哪里见过?”他惊讶地问。 “就在……就在那边啊。”朋友指了指墙角的一个小框架,“那不就是你上个月买回来的那幅名画吗?你怎么把它照抄了一遍? ——New Bing |
|||
|
|||
[这就是那幅名画]: http://yesaiwen.com/art-of-asking-chatgpt-for-high-quality-answ-engineering-techniques/#i-3 "《如何向ChatGPT提问并获得高质量的答案》" |
@ -0,0 +1,8 @@ |
|||
2023-11-24 10:12:09 | INFO | model_worker | Loading the model ['vicuna-13b-v1.5'] on worker 01c5bdb7 ... |
|||
2023-11-24 10:12:10 | ERROR | stderr |
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s] |
|||
2023-11-24 10:12:11 | ERROR | stderr |
Loading checkpoint shards: 33%|████████████████████████████████████████████████████████▋ | 1/3 [00:01<00:03, 1.50s/it] |
|||
2023-11-24 10:12:13 | ERROR | stderr |
Loading checkpoint shards: 67%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 2/3 [00:03<00:01, 1.91s/it] |
|||
2023-11-24 10:12:15 | ERROR | stderr |
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.94s/it] |
|||
2023-11-24 10:12:15 | ERROR | stderr |
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.89s/it] |
|||
2023-11-24 10:12:15 | ERROR | stderr | |
|||
2023-11-24 10:12:22 | INFO | model_worker | Register to controller |
@ -0,0 +1,36 @@ |
|||
2023-11-30 23:53:31 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 02fe9f8b ... |
|||
2023-11-30 23:53:31 | ERROR | stderr | Process model_worker - chatglm3-6b: |
|||
2023-11-30 23:53:31 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap |
|||
2023-11-30 23:53:31 | ERROR | stderr | self.run() |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 108, in run |
|||
2023-11-30 23:53:31 | ERROR | stderr | self._target(*self._args, **self._kwargs) |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 383, in run_model_worker |
|||
2023-11-30 23:53:31 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs) |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 211, in create_model_worker_app |
|||
2023-11-30 23:53:31 | ERROR | stderr | worker = ModelWorker( |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 74, in __init__ |
|||
2023-11-30 23:53:31 | ERROR | stderr | self.model, self.tokenizer = load_model( |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 306, in load_model |
|||
2023-11-30 23:53:31 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs) |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 730, in load_model |
|||
2023-11-30 23:53:31 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained( |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained |
|||
2023-11-30 23:53:31 | ERROR | stderr | tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs) |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 550, in get_tokenizer_config |
|||
2023-11-30 23:53:31 | ERROR | stderr | resolved_config_file = cached_file( |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/utils/hub.py", line 430, in cached_file |
|||
2023-11-30 23:53:31 | ERROR | stderr | resolved_file = hf_hub_download( |
|||
2023-11-30 23:53:31 | ERROR | stderr | ^^^^^^^^^^^^^^^^ |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn |
|||
2023-11-30 23:53:31 | ERROR | stderr | validate_repo_id(arg_value) |
|||
2023-11-30 23:53:31 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id |
|||
2023-11-30 23:53:31 | ERROR | stderr | raise HFValidationError( |
|||
2023-11-30 23:53:31 | ERROR | stderr | huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/Users/sunhua/Documents/LLM Model/ZhipuAI/chatglm3-6b'. Use `repo_type` argument if needed. |
@ -0,0 +1,8 @@ |
|||
2023-11-24 10:19:40 | INFO | model_worker | Loading the model ['vicuna-13b-v1.5'] on worker 11c7a7b0 ... |
|||
2023-11-24 10:19:41 | ERROR | stderr |
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s] |
|||
2023-11-24 10:19:42 | ERROR | stderr |
Loading checkpoint shards: 33%|████████████████████████████████████████████████████████▋ | 1/3 [00:01<00:03, 1.72s/it] |
|||
2023-11-24 10:19:44 | ERROR | stderr |
Loading checkpoint shards: 67%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 2/3 [00:03<00:01, 1.76s/it] |
|||
2023-11-24 10:19:46 | ERROR | stderr |
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.74s/it] |
|||
2023-11-24 10:19:46 | ERROR | stderr |
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.74s/it] |
|||
2023-11-24 10:19:46 | ERROR | stderr | |
|||
2023-11-24 10:19:56 | INFO | model_worker | Register to controller |
@ -0,0 +1,20 @@
2023-12-01 00:31:47 | INFO | model_worker | Loading the model ['Qwen-14B-Chat'] on worker 257ea1fb ...
2023-12-01 00:31:47 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/15 [00:00<?, ?it/s]
2023-12-01 00:32:02 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 15/15 [00:15<00:00,  1.00s/it]
2023-12-01 00:32:06 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-30 23:54:28 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 2b0642c0 ...
2023-11-30 23:54:28 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-30 23:54:32 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.78it/s]
2023-11-30 23:54:34 | INFO | model_worker | Register to controller
@ -0,0 +1,13 @@
2024-05-15 15:52:41 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 2c407012 ...
2024-05-15 15:52:42 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2024-05-15 15:52:45 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.91it/s]
2024-05-15 15:52:46 | INFO | model_worker | Register to controller
2024-05-15 15:52:51 | ERROR | stderr | Process model_worker - chatglm3-6b:
@ -0,0 +1,19 @@
2023-11-30 23:51:50 | INFO | model_worker | Loading the model ['vicuna-15b-v1.5'] on worker 36e7f9d5 ...
2023-11-30 23:51:50 | ERROR | stderr | Process model_worker - vicuna-15b-v1.5:
2023-11-30 23:51:50 | ERROR | stderr | Traceback (most recent call last):
2023-11-30 23:51:50 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-11-30 23:51:50 | ERROR | stderr | self.run()
2023-11-30 23:51:50 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-11-30 23:51:50 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2023-11-30 23:51:50 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 383, in run_model_worker
2023-11-30 23:51:50 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2023-11-30 23:51:50 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-30 23:51:50 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 211, in create_model_worker_app
2023-11-30 23:51:50 | ERROR | stderr | worker = ModelWorker(
2023-11-30 23:51:50 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-30 23:51:50 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 74, in __init__
2023-11-30 23:51:50 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-11-30 23:51:50 | ERROR | stderr | ^^^^^^^^^^^
2023-11-30 23:51:50 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 231, in load_model
2023-11-30 23:51:50 | ERROR | stderr | raise ValueError(f"Invalid device: {device}")
2023-11-30 23:51:50 | ERROR | stderr | ValueError: Invalid device: auto
@ -0,0 +1,13 @@
2023-11-20 00:48:32 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker 48512c45 ...
2023-11-20 00:48:33 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
2023-11-20 00:48:45 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 8/8 [00:12<00:00,  1.59s/it]
2023-11-20 00:48:48 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-22 11:35:24 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 48c79337 ...
2023-11-22 11:35:24 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-22 11:35:27 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  2.04it/s]
2023-11-22 11:35:29 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2024-05-14 18:18:20 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 5238ed79 ...
2024-05-14 18:18:20 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2024-05-14 18:18:27 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:06<00:00,  1.11it/s]
2024-05-14 18:18:30 | INFO | model_worker | Register to controller
@ -0,0 +1,3 @@
2023-11-24 09:58:45 | INFO | model_worker | Loading the model ['vicuna-13b-v1.5'] on worker 5c37c929 ...
2023-11-24 09:58:45 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
2023-11-24 09:58:49 | ERROR | stderr | Loading checkpoint shards:  33%|███▎      | 1/3 [00:03<00:07,  3.67s/it]
@ -0,0 +1,12 @@
2023-11-24 10:04:37 | INFO | model_worker | Loading the model ['vicuna-13b-v1.5'] on worker 60d029a2 ...
2023-11-24 10:04:37 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
2023-11-24 10:04:44 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 3/3 [00:06<00:00,  2.09s/it]
2023-11-24 10:04:44 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 10:04:44 | ERROR | stderr | warnings.warn(
2023-11-24 10:04:44 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 10:04:44 | ERROR | stderr | warnings.warn(
2023-11-24 10:04:53 | INFO | model_worker | Register to controller
@ -0,0 +1,13 @@
2023-11-30 23:45:41 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker 6c56feff ...
2023-11-30 23:45:41 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
2023-11-30 23:45:48 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 8/8 [00:07<00:00,  1.13it/s]
2023-11-30 23:45:50 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-20 16:16:51 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 7a072b44 ...
2023-11-20 16:16:51 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-20 16:16:55 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.88it/s]
2023-11-20 16:16:56 | INFO | model_worker | Register to controller
@ -0,0 +1,4 @@
2024-05-14 18:17:56 | INFO | model_worker | Loading the model ['Qwen-14B-Chat'] on worker 8c8fac9a ...
2024-05-14 18:17:57 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/15 [00:00<?, ?it/s]
2024-05-14 18:18:00 | ERROR | stderr | Loading checkpoint shards:   7%|▋         | 1/15 [00:02<00:39,  2.83s/it]
2024-05-14 18:18:03 | ERROR | stderr | Loading checkpoint shards:  13%|█▎        | 2/15 [00:05<00:38,  3.00s/it]
@ -0,0 +1,12 @@
2023-11-24 10:04:00 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 963c85f6 ...
2023-11-24 10:04:00 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-24 10:04:04 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.94it/s]
2023-11-24 10:04:05 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-24 10:01:38 | INFO | model_worker | Loading the model ['vicuna-13b-v1.5'] on worker 9ef9de6f ...
2023-11-24 10:01:38 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
2023-11-24 10:01:46 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 3/3 [00:07<00:00,  2.55s/it]
2023-11-24 10:01:46 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 10:01:46 | ERROR | stderr | warnings.warn(
2023-11-24 10:01:46 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 10:01:46 | ERROR | stderr | warnings.warn(
2023-11-24 10:01:55 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-22 11:37:48 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker a01adc6e ...
2023-11-22 11:37:48 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-22 11:37:52 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  2.08it/s]
2023-11-22 11:37:53 | INFO | model_worker | Register to controller
@ -0,0 +1,46 @@
2023-11-20 00:46:07 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker b9853ca4 ...
2023-11-20 00:46:07 | ERROR | stderr | Process model_worker - Qwen-7B-Chat:
2023-11-20 00:46:07 | ERROR | stderr | Traceback (most recent call last):
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 677, in _get_config_dict
2023-11-20 00:46:07 | ERROR | stderr | resolved_config_file = cached_file(
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/utils/hub.py", line 430, in cached_file
2023-11-20 00:46:07 | ERROR | stderr | resolved_file = hf_hub_download(
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
2023-11-20 00:46:07 | ERROR | stderr | validate_repo_id(arg_value)
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
2023-11-20 00:46:07 | ERROR | stderr | raise HFValidationError(
2023-11-20 00:46:07 | ERROR | stderr | huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/Users/hua/Documents/LLM Model/qwen/Qwen-7B-Chat'. Use `repo_type` argument if needed.
2023-11-20 00:46:07 | ERROR | stderr |
2023-11-20 00:46:07 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-11-20 00:46:07 | ERROR | stderr |
2023-11-20 00:46:07 | ERROR | stderr | Traceback (most recent call last):
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-11-20 00:46:07 | ERROR | stderr | self.run()
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-11-20 00:46:07 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2023-11-20 00:46:07 | ERROR | stderr | File "/Users/sunhua/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 383, in run_model_worker
2023-11-20 00:46:07 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/Users/sunhua/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 211, in create_model_worker_app
2023-11-20 00:46:07 | ERROR | stderr | worker = ModelWorker(
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 74, in __init__
2023-11-20 00:46:07 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 306, in load_model
2023-11-20 00:46:07 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 1441, in load_model
2023-11-20 00:46:07 | ERROR | stderr | config = AutoConfig.from_pretrained(
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1048, in from_pretrained
2023-11-20 00:46:07 | ERROR | stderr | config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 622, in get_config_dict
2023-11-20 00:46:07 | ERROR | stderr | config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-11-20 00:46:07 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:46:07 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 698, in _get_config_dict
2023-11-20 00:46:07 | ERROR | stderr | raise EnvironmentError(
2023-11-20 00:46:07 | ERROR | stderr | OSError: Can't load the configuration of '/Users/hua/Documents/LLM Model/qwen/Qwen-7B-Chat'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/Users/hua/Documents/LLM Model/qwen/Qwen-7B-Chat' is the correct path to a directory containing a config.json file
@ -0,0 +1,46 @@
2023-11-20 00:47:36 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker c17f4dde ...
2023-11-20 00:47:36 | ERROR | stderr | Process model_worker - Qwen-7B-Chat:
2023-11-20 00:47:36 | ERROR | stderr | Traceback (most recent call last):
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 677, in _get_config_dict
2023-11-20 00:47:36 | ERROR | stderr | resolved_config_file = cached_file(
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/utils/hub.py", line 430, in cached_file
2023-11-20 00:47:36 | ERROR | stderr | resolved_file = hf_hub_download(
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
2023-11-20 00:47:36 | ERROR | stderr | validate_repo_id(arg_value)
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
2023-11-20 00:47:36 | ERROR | stderr | raise HFValidationError(
2023-11-20 00:47:36 | ERROR | stderr | huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/Users/sunhuaa/Documents/LLM Model/qwen/Qwen-7B-Chat'. Use `repo_type` argument if needed.
2023-11-20 00:47:36 | ERROR | stderr |
2023-11-20 00:47:36 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-11-20 00:47:36 | ERROR | stderr |
2023-11-20 00:47:36 | ERROR | stderr | Traceback (most recent call last):
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-11-20 00:47:36 | ERROR | stderr | self.run()
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-11-20 00:47:36 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2023-11-20 00:47:36 | ERROR | stderr | File "/Users/sunhua/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 383, in run_model_worker
2023-11-20 00:47:36 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/Users/sunhua/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 211, in create_model_worker_app
2023-11-20 00:47:36 | ERROR | stderr | worker = ModelWorker(
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 74, in __init__
2023-11-20 00:47:36 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 306, in load_model
2023-11-20 00:47:36 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 1441, in load_model
2023-11-20 00:47:36 | ERROR | stderr | config = AutoConfig.from_pretrained(
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1048, in from_pretrained
2023-11-20 00:47:36 | ERROR | stderr | config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 622, in get_config_dict
2023-11-20 00:47:36 | ERROR | stderr | config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-11-20 00:47:36 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-20 00:47:36 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/configuration_utils.py", line 698, in _get_config_dict
2023-11-20 00:47:36 | ERROR | stderr | raise EnvironmentError(
2023-11-20 00:47:36 | ERROR | stderr | OSError: Can't load the configuration of '/Users/sunhuaa/Documents/LLM Model/qwen/Qwen-7B-Chat'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/Users/sunhuaa/Documents/LLM Model/qwen/Qwen-7B-Chat' is the correct path to a directory containing a config.json file
@ -0,0 +1,13 @@
2023-11-30 23:50:59 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker c400282a ...
2023-11-30 23:50:59 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
2023-11-30 23:51:05 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 8/8 [00:05<00:00,  1.39it/s]
2023-11-30 23:51:06 | INFO | model_worker | Register to controller
@ -0,0 +1,13 @@
2023-11-20 00:51:15 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker c43b8684 ...
2023-11-20 00:51:15 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
2023-11-20 00:51:21 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 8/8 [00:05<00:00,  1.41it/s]
2023-11-20 00:51:25 | INFO | model_worker | Register to controller
@ -0,0 +1,11 @@
2023-11-24 13:33:33 | INFO | model_worker | Loading the model ['vicuna-7b-v1.5'] on worker cfde29ec ...
2023-11-24 13:33:34 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
2023-11-24 13:33:36 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 2/2 [00:02<00:00,  1.01s/it]
2023-11-24 13:33:36 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 13:33:36 | ERROR | stderr | warnings.warn(
2023-11-24 13:33:36 | ERROR | stderr | /opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2023-11-24 13:33:36 | ERROR | stderr | warnings.warn(
2023-11-24 13:33:38 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-22 11:16:22 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker e7ac4878 ...
2023-11-22 11:16:22 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-22 11:16:26 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.92it/s]
2023-11-22 11:16:28 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-20 00:52:22 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker e7e73a0b ...
2023-11-20 00:52:22 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-20 00:52:26 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.79it/s]
2023-11-20 00:52:27 | INFO | model_worker | Register to controller
@ -0,0 +1,36 @@
2023-11-30 23:52:48 | INFO | model_worker | Loading the model ['vicuna-15b-v1.5'] on worker e8f91260 ...
2023-11-30 23:52:48 | ERROR | stderr | Process model_worker - vicuna-15b-v1.5:
2023-11-30 23:52:48 | ERROR | stderr | Traceback (most recent call last):
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-11-30 23:52:48 | ERROR | stderr | self.run()
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-11-30 23:52:48 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2023-11-30 23:52:48 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 383, in run_model_worker
2023-11-30 23:52:48 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/Users/Angela/Documents/02. 程序文件夹/mac-llm/Langchain-Chatchatv0.2.7/startup.py", line 211, in create_model_worker_app
2023-11-30 23:52:48 | ERROR | stderr | worker = ModelWorker(
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 74, in __init__
2023-11-30 23:52:48 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 306, in load_model
2023-11-30 23:52:48 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 69, in load_model
2023-11-30 23:52:48 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
2023-11-30 23:52:48 | ERROR | stderr | tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 550, in get_tokenizer_config
2023-11-30 23:52:48 | ERROR | stderr | resolved_config_file = cached_file(
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/transformers/utils/hub.py", line 430, in cached_file
2023-11-30 23:52:48 | ERROR | stderr | resolved_file = hf_hub_download(
2023-11-30 23:52:48 | ERROR | stderr | ^^^^^^^^^^^^^^^^
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
2023-11-30 23:52:48 | ERROR | stderr | validate_repo_id(arg_value)
2023-11-30 23:52:48 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
2023-11-30 23:52:48 | ERROR | stderr | raise HFValidationError(
2023-11-30 23:52:48 | ERROR | stderr | huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: ''.
@ -0,0 +1,13 @@
2023-11-19 10:26:02 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker ea9b99d9 ...
2023-11-19 10:26:03 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
2023-11-19 10:26:06 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 8/8 [00:03<00:00,  2.26it/s]
2023-11-19 10:26:08 | INFO | model_worker | Register to controller
@ -0,0 +1,12 @@
2023-11-24 10:24:49 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker f72362d6 ...
2023-11-24 10:24:49 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
2023-11-24 10:24:53 | ERROR | stderr | Loading checkpoint shards: 100%|██████████| 7/7 [00:03<00:00,  1.83it/s]
2023-11-24 10:24:54 | INFO | model_worker | Register to controller
@ -0,0 +1,20 @@ |
|||
2023-11-24 10:25:37 | INFO | model_worker | Loading the model ['Qwen-14B-Chat'] on worker fe33b8be ... |
|||
2023-11-24 10:25:37 | ERROR | stderr |
Loading checkpoint shards: 0%| | 0/15 [00:00<?, ?it/s] |
|||
2023-11-24 10:25:38 | ERROR | stderr |
Loading checkpoint shards: 7%|███████████▎ | 1/15 [00:00<00:08, 1.57it/s] |
|||
2023-11-24 10:25:38 | ERROR | stderr |
Loading checkpoint shards: 13%|██████████████████████▌ | 2/15 [00:01<00:08, 1.49it/s] |
|||
2023-11-24 10:25:39 | ERROR | stderr |
Loading checkpoint shards: 20%|█████████████████████████████████▊ | 3/15 [00:01<00:07, 1.59it/s] |
|||
2023-11-24 10:25:39 | ERROR | stderr |
Loading checkpoint shards: 27%|█████████████████████████████████████████████ | 4/15 [00:02<00:06, 1.61it/s] |
|||
2023-11-24 10:25:40 | ERROR | stderr |
Loading checkpoint shards: 33%|████████████████████████████████████████████████████████▎ | 5/15 [00:03<00:06, 1.61it/s] |
|||
2023-11-24 10:25:41 | ERROR | stderr |
Loading checkpoint shards: 40%|███████████████████████████████████████████████████████████████████▌ | 6/15 [00:03<00:05, 1.52it/s] |
|||
2023-11-24 10:25:42 | ERROR | stderr |
Loading checkpoint shards: 47%|██████████████████████████████████████████████████████████████████████████████▊ | 7/15 [00:05<00:07, 1.08it/s] |
|||
2023-11-24 10:25:44 | ERROR | stderr |
Loading checkpoint shards: 53%|██████████████████████████████████████████████████████████████████████████████████████████▏ | 8/15 [00:06<00:07, 1.11s/it] |
|||
2023-11-24 10:25:45 | ERROR | stderr |
Loading checkpoint shards: 60%|█████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 9/15 [00:08<00:07, 1.20s/it] |
|||
2023-11-24 10:25:47 | ERROR | stderr |
Loading checkpoint shards: 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 10/15 [00:09<00:06, 1.25s/it] |
|||
2023-11-24 10:25:48 | ERROR | stderr |
Loading checkpoint shards: 73%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 11/15 [00:11<00:05, 1.30s/it] |
|||
2023-11-24 10:25:49 | ERROR | stderr |
Loading checkpoint shards: 80%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 12/15 [00:12<00:03, 1.31s/it] |
|||
2023-11-24 10:25:51 | ERROR | stderr |
Loading checkpoint shards: 87%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 13/15 [00:13<00:02, 1.32s/it] |
|||
2023-11-24 10:25:52 | ERROR | stderr |
Loading checkpoint shards: 93%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 14/15 [00:15<00:01, 1.34s/it] |
|||
2023-11-24 10:25:54 | ERROR | stderr |
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:17<00:00, 1.56s/it] |
|||
2023-11-24 10:25:54 | ERROR | stderr |
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:17<00:00, 1.14s/it] |
|||
2023-11-24 10:25:54 | ERROR | stderr | |
|||
2023-11-24 10:26:06 | INFO | model_worker | Register to controller |
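The "Loading checkpoint shards: n/15" lines above are the Hugging Face transformers progress bar emitted while the Qwen-14B-Chat worker reads its sharded weights; once the last shard is in memory the worker registers with the controller, which is the line that closes the block. As a rough, illustrative sketch only (the model path, trust_remote_code flag, and device placement below are assumptions, not taken from these logs), a sharded load that produces the same progress bar looks like:

    # Illustrative only: reproduces the "Loading checkpoint shards" progress bar
    # seen in the worker log above. Path and options are assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_path = "Qwen/Qwen-14B-Chat"   # assumed local or hub copy of the 15-shard checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        trust_remote_code=True,   # Qwen checkpoints ship custom modeling code
        device_map="auto",        # let accelerate place each shard on the available device(s)
    )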
@ -0,0 +1,13 @@ |
|||
2023-11-19 10:19:34 | INFO | model_worker | Loading the model ['Qwen-7B-Chat'] on worker fef5b914 ... |
|||
2023-11-19 10:19:35 | ERROR | stderr |
Loading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s] |
|||
2023-11-19 10:19:36 | ERROR | stderr |
Loading checkpoint shards: 12%|█████████████▏ | 1/8 [00:00<00:05, 1.35it/s] |
|||
2023-11-19 10:19:36 | ERROR | stderr |
Loading checkpoint shards: 25%|██████████████████████████▎ | 2/8 [00:01<00:04, 1.35it/s] |
|||
2023-11-19 10:19:37 | ERROR | stderr |
Loading checkpoint shards: 38%|███████████████████████████████████████▍ | 3/8 [00:02<00:03, 1.34it/s] |
|||
2023-11-19 10:19:38 | ERROR | stderr |
Loading checkpoint shards: 50%|████████████████████████████████████████████████████▌ | 4/8 [00:02<00:02, 1.33it/s] |
|||
2023-11-19 10:19:38 | ERROR | stderr |
Loading checkpoint shards: 62%|█████████████████████████████████████████████████████████████████▋ | 5/8 [00:03<00:02, 1.34it/s] |
|||
2023-11-19 10:19:39 | ERROR | stderr |
Loading checkpoint shards: 75%|██████████████████████████████████████████████████████████████████████████████▊ | 6/8 [00:04<00:01, 1.34it/s] |
|||
2023-11-19 10:19:40 | ERROR | stderr |
Loading checkpoint shards: 88%|███████████████████████████████████████████████████████████████████████████████████████████▉ | 7/8 [00:05<00:00, 1.13it/s] |
|||
2023-11-19 10:19:42 | ERROR | stderr |
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:06<00:00, 1.04it/s] |
|||
2023-11-19 10:19:42 | ERROR | stderr |
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:06<00:00, 1.18it/s] |
|||
2023-11-19 10:19:42 | ERROR | stderr | |
|||
2023-11-19 10:19:44 | INFO | model_worker | Register to controller |
@ -0,0 +1,16 @@ |
|||
2024-05-14 18:17:55 | ERROR | stderr | INFO:     Started server process [68750] |
|||
2024-05-14 18:17:55 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2024-05-14 18:17:55 | ERROR | stderr | INFO:     Application startup complete. |
|||
2024-05-14 18:17:55 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2024-05-14 18:18:20 | ERROR | stderr | INFO:     Started server process [68797] |
|||
2024-05-14 18:18:20 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2024-05-14 18:18:20 | ERROR | stderr | INFO:     Application startup complete. |
|||
2024-05-14 18:18:20 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2024-05-14 18:18:48 | INFO | stdout | INFO:     ::1:61931 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2024-05-14 18:19:32 | INFO | stdout | INFO:     ::1:62014 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2024-05-15 15:52:40 | ERROR | stderr | INFO:     Started server process [15002] |
|||
2024-05-15 15:52:40 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2024-05-15 15:52:40 | ERROR | stderr | INFO:     Application startup complete. |
|||
2024-05-15 15:52:40 | ERROR | stderr | ERROR:    [Errno 48] error while attempting to bind on address ('127.0.0.1', 20000): address already in use |
|||
2024-05-15 15:52:40 | ERROR | stderr | INFO:     Waiting for application shutdown. |
|||
2024-05-15 15:52:40 | ERROR | stderr | INFO:     Application shutdown complete. |
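The [Errno 48] failure at 2024-05-15 15:52:40 means another process (most likely the earlier server instance started at 18:18:20) was still bound to 127.0.0.1:20000 when the new one tried to start, so Uvicorn shut the application down again. A minimal check that can be run on the same host before relaunching (an illustrative helper, not part of the project):

    # Illustrative only: verifies whether port 20000 is free before restarting
    # the API server, to avoid the "[Errno 48] address already in use" above.
    import socket

    def port_is_free(host: str = "127.0.0.1", port: int = 20000) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))
                return True        # nothing is listening; safe to start the server
            except OSError:        # Errno 48 on macOS, Errno 98 on Linux
                return False       # a previous server process still holds the port

    if __name__ == "__main__":
        print("port 20000 free:", port_is_free())

If the port is not free, the stale process has to be stopped (or the server configured for a different port) before Uvicorn can bind again.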
@ -0,0 +1,37 @@ |
|||
2023-11-19 10:19:34 | ERROR | stderr | INFO:     Started server process [2925] |
|||
2023-11-19 10:19:34 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-19 10:19:34 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-19 10:19:34 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-19 10:19:59 | INFO | stdout | INFO:     127.0.0.1:52215 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-19 10:26:02 | ERROR | stderr | INFO:     Started server process [3128] |
|||
2023-11-19 10:26:02 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-19 10:26:02 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-19 10:26:02 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-19 10:26:33 | INFO | stdout | INFO:     127.0.0.1:53294 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-20 00:46:04 | ERROR | stderr | INFO:     Started server process [7170] |
|||
2023-11-20 00:46:04 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 00:46:04 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 00:46:04 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-20 00:47:36 | ERROR | stderr | INFO:     Started server process [7546] |
|||
2023-11-20 00:47:36 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 00:47:36 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 00:47:36 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-20 00:48:32 | ERROR | stderr | INFO:     Started server process [7902] |
|||
2023-11-20 00:48:32 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 00:48:32 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 00:48:32 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-20 00:49:22 | INFO | stdout | INFO:     127.0.0.1:53741 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-20 00:51:14 | ERROR | stderr | INFO:     Started server process [9277] |
|||
2023-11-20 00:51:14 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 00:51:14 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 00:51:14 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-20 00:52:21 | ERROR | stderr | INFO:     Started server process [9684] |
|||
2023-11-20 00:52:21 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 00:52:21 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 00:52:21 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit) |
|||
2023-11-20 00:53:02 | INFO | stdout | INFO:     127.0.0.1:54257 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-20 00:57:11 | INFO | stdout | INFO:     127.0.0.1:54562 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-20 16:16:51 | ERROR | stderr | INFO:     Started server process [13530] |
|||
2023-11-20 16:16:51 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-20 16:16:51 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-20 16:16:51 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
@ -0,0 +1,20 @@ |
|||
2023-11-22 11:16:22 | ERROR | stderr | INFO:     Started server process [24649] |
|||
2023-11-22 11:16:22 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-22 11:16:22 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-22 11:16:22 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-22 11:17:10 | INFO | stdout | INFO:     ::1:50517 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-22 11:34:21 | INFO | stdout | INFO:     ::1:51023 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-22 11:34:56 | INFO | stdout | INFO:     ::1:51039 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-22 11:35:23 | ERROR | stderr | INFO:     Started server process [24767] |
|||
2023-11-22 11:35:23 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-22 11:35:23 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-22 11:35:23 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-22 11:35:39 | INFO | stdout | INFO:     ::1:51096 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-22 11:37:48 | ERROR | stderr | INFO:     Started server process [24808] |
|||
2023-11-22 11:37:48 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-22 11:37:48 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-22 11:37:48 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-22 11:38:04 | INFO | stdout | INFO:     ::1:51219 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-22 11:38:27 | INFO | stdout | INFO:     ::1:51240 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-22 11:39:41 | INFO | stdout | INFO:     ::1:51283 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-22 11:40:18 | INFO | stdout | INFO:     ::1:51306 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
@ -0,0 +1,435 @@ |
|||
2023-11-24 09:58:44 | ERROR | stderr | INFO:     Started server process [61043] |
|||
2023-11-24 09:58:44 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 09:58:44 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 09:58:44 | ERROR | stderr | ERROR:    [Errno 48] error while attempting to bind on address ('127.0.0.1', 20000): address already in use |
|||
2023-11-24 09:58:44 | ERROR | stderr | INFO:     Waiting for application shutdown. |
|||
2023-11-24 09:58:44 | ERROR | stderr | INFO:     Application shutdown complete. |
|||
2023-11-24 10:01:38 | ERROR | stderr | INFO:     Started server process [61129] |
|||
2023-11-24 10:01:38 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:01:38 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:01:38 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:02:12 | INFO | stdout | INFO:     ::1:63368 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 10:02:12 | ERROR | stderr | ERROR:    Exception in ASGI application |
|||
2023-11-24 10:02:12 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions |
|||
2023-11-24 10:02:12 | ERROR | stderr | yield |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 209, in _receive_event |
|||
2023-11-24 10:02:12 | ERROR | stderr | event = self._h11_state.next_event() |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 469, in next_event |
|||
2023-11-24 10:02:12 | ERROR | stderr | event = self._extract_next_receive_event() |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event |
|||
2023-11-24 10:02:12 | ERROR | stderr | event = self._reader.read_eof() # type: ignore[attr-defined] |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_readers.py", line 204, in read_eof |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise RemoteProtocolError( |
|||
2023-11-24 10:02:12 | ERROR | stderr | h11._util.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:02:12 | ERROR | stderr | |
|||
2023-11-24 10:02:12 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:02:12 | ERROR | stderr | |
|||
2023-11-24 10:02:12 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions |
|||
2023-11-24 10:02:12 | ERROR | stderr | yield |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for part in self._httpcore_stream: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 361, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for part in self._stream: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 337, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise exc |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 329, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for chunk in self._connection._receive_response_body(**kwargs): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 198, in _receive_response_body |
|||
2023-11-24 10:02:12 | ERROR | stderr | event = await self._receive_event(timeout=timeout) |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 208, in _receive_event |
|||
2023-11-24 10:02:12 | ERROR | stderr | with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise to_exc(exc) from exc |
|||
2023-11-24 10:02:12 | ERROR | stderr | httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:02:12 | ERROR | stderr | |
|||
2023-11-24 10:02:12 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:02:12 | ERROR | stderr | |
|||
2023-11-24 10:02:12 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi |
|||
2023-11-24 10:02:12 | ERROR | stderr | result = await app( # type: ignore[func-returns-value] |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | return await self.app(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await super().__call__(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.middleware_stack(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise exc |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.app(scope, receive, _send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise exc |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.app(scope, receive, sender) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise e |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | await route.handle(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle |
|||
2023-11-24 10:02:12 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 69, in app |
|||
2023-11-24 10:02:12 | ERROR | stderr | await response(scope, receive, send) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | async with anyio.create_task_group() as task_group: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise exceptions[0] |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap |
|||
2023-11-24 10:02:12 | ERROR | stderr | await func() |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for chunk in self.body_iterator: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 458, in chat_completion_stream_generator |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for content in generate_completion_stream(gen_params, worker_addr): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 638, in generate_completion_stream |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for raw_chunk in response.aiter_raw(): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_models.py", line 990, in aiter_raw |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for raw_stream_bytes in self.stream: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_client.py", line 146, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | async for chunk in self._stream: |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 248, in __aiter__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | with map_httpcore_exceptions(): |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:02:12 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:02:12 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions |
|||
2023-11-24 10:02:12 | ERROR | stderr | raise mapped_exc(message) from exc |
|||
2023-11-24 10:02:12 | ERROR | stderr | httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
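The traceback above comes from fastchat.serve.openai_api_server relaying a streamed chat completion: the model worker closed its HTTP connection before finishing the chunked response body, so httpx raised RemoteProtocolError while the API server was still forwarding chunks to the caller (which had already received the 200 OK logged just before). A client of the /v1/chat/completions endpoint sees the same cut-off; the sketch below is illustrative only, assumes a server on localhost:20000 and a loaded model name, and is not taken from the project's own code:

    # Illustrative only: consumes a streamed completion and tolerates the
    # worker dropping the connection mid-stream (httpx.RemoteProtocolError).
    import httpx

    def stream_chat(prompt: str, model: str = "Qwen-14B-Chat") -> str:
        payload = {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        }
        received = []
        try:
            with httpx.stream("POST", "http://localhost:20000/v1/chat/completions",
                              json=payload, timeout=60.0) as resp:
                for line in resp.iter_lines():
                    if line:
                        received.append(line)   # raw SSE lines; JSON parsing omitted
        except httpx.RemoteProtocolError:
            # Mirror of the error in the log: the upstream closed the connection
            # without completing the chunked body. Keep the partial output and
            # let the caller decide whether to retry.
            pass
        return "\n".join(received)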
|||
2023-11-24 10:04:00 | ERROR | stderr | INFO:     Started server process [61185] |
|||
2023-11-24 10:04:00 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:04:00 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:04:00 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:04:14 | INFO | stdout | INFO:     ::1:63500 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 10:04:37 | ERROR | stderr | INFO:     Started server process [61223] |
|||
2023-11-24 10:04:37 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:04:37 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:04:37 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:12:09 | ERROR | stderr | INFO:     Started server process [61355] |
|||
2023-11-24 10:12:09 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:12:09 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:12:09 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:12:38 | INFO | stdout | INFO:     ::1:63824 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 10:12:38 | ERROR | stderr | ERROR:    Exception in ASGI application |
|||
2023-11-24 10:12:38 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions |
|||
2023-11-24 10:12:38 | ERROR | stderr | yield |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 209, in _receive_event |
|||
2023-11-24 10:12:38 | ERROR | stderr | event = self._h11_state.next_event() |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 469, in next_event |
|||
2023-11-24 10:12:38 | ERROR | stderr | event = self._extract_next_receive_event() |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event |
|||
2023-11-24 10:12:38 | ERROR | stderr | event = self._reader.read_eof() # type: ignore[attr-defined] |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_readers.py", line 204, in read_eof |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise RemoteProtocolError( |
|||
2023-11-24 10:12:38 | ERROR | stderr | h11._util.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:12:38 | ERROR | stderr | |
|||
2023-11-24 10:12:38 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:12:38 | ERROR | stderr | |
|||
2023-11-24 10:12:38 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions |
|||
2023-11-24 10:12:38 | ERROR | stderr | yield |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for part in self._httpcore_stream: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 361, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for part in self._stream: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 337, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise exc |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 329, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for chunk in self._connection._receive_response_body(**kwargs): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 198, in _receive_response_body |
|||
2023-11-24 10:12:38 | ERROR | stderr | event = await self._receive_event(timeout=timeout) |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 208, in _receive_event |
|||
2023-11-24 10:12:38 | ERROR | stderr | with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise to_exc(exc) from exc |
|||
2023-11-24 10:12:38 | ERROR | stderr | httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:12:38 | ERROR | stderr | |
|||
2023-11-24 10:12:38 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:12:38 | ERROR | stderr | |
|||
2023-11-24 10:12:38 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi |
|||
2023-11-24 10:12:38 | ERROR | stderr | result = await app( # type: ignore[func-returns-value] |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | return await self.app(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await super().__call__(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.middleware_stack(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise exc |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.app(scope, receive, _send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise exc |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.app(scope, receive, sender) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise e |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | await route.handle(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle |
|||
2023-11-24 10:12:38 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 69, in app |
|||
2023-11-24 10:12:38 | ERROR | stderr | await response(scope, receive, send) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | async with anyio.create_task_group() as task_group: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise exceptions[0] |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap |
|||
2023-11-24 10:12:38 | ERROR | stderr | await func() |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for chunk in self.body_iterator: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 458, in chat_completion_stream_generator |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for content in generate_completion_stream(gen_params, worker_addr): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 638, in generate_completion_stream |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for raw_chunk in response.aiter_raw(): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_models.py", line 990, in aiter_raw |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for raw_stream_bytes in self.stream: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_client.py", line 146, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | async for chunk in self._stream: |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 248, in __aiter__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | with map_httpcore_exceptions(): |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:12:38 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:12:38 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions |
|||
2023-11-24 10:12:38 | ERROR | stderr | raise mapped_exc(message) from exc |
|||
2023-11-24 10:12:38 | ERROR | stderr | httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:15:22 | INFO | stdout | INFO:     ::1:64016 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-24 10:16:21 | INFO | stdout | INFO:     ::1:64047 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request |
|||
2023-11-24 10:19:40 | ERROR | stderr | INFO:     Started server process [61513] |
|||
2023-11-24 10:19:40 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:19:40 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:19:40 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:20:08 | INFO | stdout | INFO:     ::1:64247 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 10:20:08 | ERROR | stderr | ERROR:    Exception in ASGI application |
|||
2023-11-24 10:20:08 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions |
|||
2023-11-24 10:20:08 | ERROR | stderr | yield |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 209, in _receive_event |
|||
2023-11-24 10:20:08 | ERROR | stderr | event = self._h11_state.next_event() |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 469, in next_event |
|||
2023-11-24 10:20:08 | ERROR | stderr | event = self._extract_next_receive_event() |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event |
|||
2023-11-24 10:20:08 | ERROR | stderr | event = self._reader.read_eof() # type: ignore[attr-defined] |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_readers.py", line 204, in read_eof |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise RemoteProtocolError( |
|||
2023-11-24 10:20:08 | ERROR | stderr | h11._util.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:20:08 | ERROR | stderr | |
|||
2023-11-24 10:20:08 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:20:08 | ERROR | stderr | |
|||
2023-11-24 10:20:08 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions |
|||
2023-11-24 10:20:08 | ERROR | stderr | yield |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for part in self._httpcore_stream: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 361, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for part in self._stream: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 337, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise exc |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 329, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for chunk in self._connection._receive_response_body(**kwargs): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 198, in _receive_response_body |
|||
2023-11-24 10:20:08 | ERROR | stderr | event = await self._receive_event(timeout=timeout) |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 208, in _receive_event |
|||
2023-11-24 10:20:08 | ERROR | stderr | with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise to_exc(exc) from exc |
|||
2023-11-24 10:20:08 | ERROR | stderr | httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:20:08 | ERROR | stderr | |
|||
2023-11-24 10:20:08 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 10:20:08 | ERROR | stderr | |
|||
2023-11-24 10:20:08 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi |
|||
2023-11-24 10:20:08 | ERROR | stderr | result = await app( # type: ignore[func-returns-value] |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | return await self.app(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await super().__call__(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.middleware_stack(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise exc |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.app(scope, receive, _send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise exc |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.app(scope, receive, sender) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise e |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | await route.handle(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle |
|||
2023-11-24 10:20:08 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 69, in app |
|||
2023-11-24 10:20:08 | ERROR | stderr | await response(scope, receive, send) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | async with anyio.create_task_group() as task_group: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise exceptions[0] |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap |
|||
2023-11-24 10:20:08 | ERROR | stderr | await func() |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for chunk in self.body_iterator: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 458, in chat_completion_stream_generator |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for content in generate_completion_stream(gen_params, worker_addr): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 638, in generate_completion_stream |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for raw_chunk in response.aiter_raw(): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_models.py", line 990, in aiter_raw |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for raw_stream_bytes in self.stream: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_client.py", line 146, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | async for chunk in self._stream: |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 248, in __aiter__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | with map_httpcore_exceptions(): |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 10:20:08 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 10:20:08 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions |
|||
2023-11-24 10:20:08 | ERROR | stderr | raise mapped_exc(message) from exc |
|||
2023-11-24 10:20:08 | ERROR | stderr | httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 10:24:49 | ERROR | stderr | INFO:     Started server process [61597] |
|||
2023-11-24 10:24:49 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:24:49 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:24:49 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:25:04 | INFO | stdout | INFO:     ::1:64683 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 10:25:36 | ERROR | stderr | INFO:     Started server process [61641] |
|||
2023-11-24 10:25:36 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 10:25:36 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 10:25:36 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 10:26:17 | INFO | stdout | INFO:     ::1:64765 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 13:33:33 | ERROR | stderr | INFO:     Started server process [63574] |
|||
2023-11-24 13:33:33 | ERROR | stderr | INFO:     Waiting for application startup. |
|||
2023-11-24 13:33:33 | ERROR | stderr | INFO:     Application startup complete. |
|||
2023-11-24 13:33:33 | ERROR | stderr | INFO:     Uvicorn running on http://localhost:20000 (Press CTRL+C to quit) |
|||
2023-11-24 13:33:52 | INFO | stdout | INFO:     ::1:62144 - "POST /v1/chat/completions HTTP/1.1" 200 OK |
|||
2023-11-24 13:33:53 | ERROR | stderr | ERROR:    Exception in ASGI application |
|||
2023-11-24 13:33:53 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions |
|||
2023-11-24 13:33:53 | ERROR | stderr | yield |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 209, in _receive_event |
|||
2023-11-24 13:33:53 | ERROR | stderr | event = self._h11_state.next_event() |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 469, in next_event |
|||
2023-11-24 13:33:53 | ERROR | stderr | event = self._extract_next_receive_event() |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event |
|||
2023-11-24 13:33:53 | ERROR | stderr | event = self._reader.read_eof() # type: ignore[attr-defined] |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/h11/_readers.py", line 204, in read_eof |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise RemoteProtocolError( |
|||
2023-11-24 13:33:53 | ERROR | stderr | h11._util.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 13:33:53 | ERROR | stderr | |
|||
2023-11-24 13:33:53 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 13:33:53 | ERROR | stderr | |
|||
2023-11-24 13:33:53 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions |
|||
2023-11-24 13:33:53 | ERROR | stderr | yield |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for part in self._httpcore_stream: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 361, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for part in self._stream: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 337, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise exc |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 329, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for chunk in self._connection._receive_response_body(**kwargs): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 198, in _receive_response_body |
|||
2023-11-24 13:33:53 | ERROR | stderr | event = await self._receive_event(timeout=timeout) |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_async/http11.py", line 208, in _receive_event |
|||
2023-11-24 13:33:53 | ERROR | stderr | with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise to_exc(exc) from exc |
|||
2023-11-24 13:33:53 | ERROR | stderr | httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
|||
2023-11-24 13:33:53 | ERROR | stderr | |
|||
2023-11-24 13:33:53 | ERROR | stderr | The above exception was the direct cause of the following exception: |
|||
2023-11-24 13:33:53 | ERROR | stderr | |
|||
2023-11-24 13:33:53 | ERROR | stderr | Traceback (most recent call last): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi |
|||
2023-11-24 13:33:53 | ERROR | stderr | result = await app( # type: ignore[func-returns-value] |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | return await self.app(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await super().__call__(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.middleware_stack(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise exc |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.app(scope, receive, _send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise exc |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.app(scope, receive, sender) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise e |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | await route.handle(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle |
|||
2023-11-24 13:33:53 | ERROR | stderr | await self.app(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/routing.py", line 69, in app |
|||
2023-11-24 13:33:53 | ERROR | stderr | await response(scope, receive, send) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | async with anyio.create_task_group() as task_group: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise exceptions[0] |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap |
|||
2023-11-24 13:33:53 | ERROR | stderr | await func() |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for chunk in self.body_iterator: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 458, in chat_completion_stream_generator |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for content in generate_completion_stream(gen_params, worker_addr): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/fastchat/serve/openai_api_server.py", line 638, in generate_completion_stream |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for raw_chunk in response.aiter_raw(): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_models.py", line 990, in aiter_raw |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for raw_stream_bytes in self.stream: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_client.py", line 146, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | async for chunk in self._stream: |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 248, in __aiter__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | with map_httpcore_exceptions(): |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/contextlib.py", line 155, in __exit__ |
|||
2023-11-24 13:33:53 | ERROR | stderr | self.gen.throw(typ, value, traceback) |
|||
2023-11-24 13:33:53 | ERROR | stderr | File "/opt/homebrew/Caskroom/miniconda/base/envs/lc027/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions |
|||
2023-11-24 13:33:53 | ERROR | stderr | raise mapped_exc(message) from exc |
|||
2023-11-24 13:33:53 | ERROR | stderr | httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) |
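Note on the traceback above: the API server's streaming response (chat_completion_stream_generator -> generate_completion_stream) iterates the model worker's chunked response with response.aiter_raw(), and the worker closed the socket before sending the terminal chunk, so httpx raised RemoteProtocolError inside the ASGI response. A minimal sketch of one way to tolerate this is below; it is not FastChat's actual implementation, and the worker route name is an assumption taken from the call site in the traceback.

# Hypothetical sketch, not FastChat's code: stop the stream quietly when the
# worker drops the connection ("incomplete chunked read") instead of letting
# httpx.RemoteProtocolError propagate through the ASGI stack.
import httpx


async def stream_worker_chunks(worker_addr: str, gen_params: dict):
    """Yield raw chunks from the worker; end the stream if the peer disconnects."""
    async with httpx.AsyncClient() as client:
        try:
            async with client.stream(
                "POST",
                f"{worker_addr}/worker_generate_stream",  # assumed worker route
                json=gen_params,
                timeout=None,
            ) as response:
                async for raw_chunk in response.aiter_raw():
                    yield raw_chunk
        except httpx.RemoteProtocolError:
            # Worker closed the connection before completing the chunked body;
            # treat it as end-of-stream rather than a server-side exception.
            return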
2023-11-30 23:45:38 | ERROR | stderr | INFO: Started server process [3940]
2023-11-30 23:45:38 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:45:38 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:45:38 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:50:58 | ERROR | stderr | INFO: Started server process [4961]
2023-11-30 23:50:58 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:50:58 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:50:58 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:51:24 | INFO | stdout | INFO: ::1:51732 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-11-30 23:51:50 | ERROR | stderr | INFO: Started server process [5143]
2023-11-30 23:51:50 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:51:50 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:51:50 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:52:48 | ERROR | stderr | INFO: Started server process [5397]
2023-11-30 23:52:48 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:52:48 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:52:48 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:53:30 | ERROR | stderr | INFO: Started server process [5478]
2023-11-30 23:53:30 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:53:30 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:53:30 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:54:28 | ERROR | stderr | INFO: Started server process [5497]
2023-11-30 23:54:28 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-30 23:54:28 | ERROR | stderr | INFO: Application startup complete.
2023-11-30 23:54:28 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-11-30 23:54:46 | INFO | stdout | INFO: ::1:51862 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-01 00:31:46 | ERROR | stderr | INFO: Started server process [7847]
2023-12-01 00:31:46 | ERROR | stderr | INFO: Waiting for application startup.
2023-12-01 00:31:46 | ERROR | stderr | INFO: Application startup complete.
2023-12-01 00:31:46 | ERROR | stderr | INFO: Uvicorn running on http://localhost:20000 (Press CTRL+C to quit)
2023-12-01 00:32:25 | INFO | stdout | INFO: ::1:52520 - "POST /v1/chat/completions HTTP/1.1" 200 OK