Compare commits

28 Commits

| Author | SHA1 | Date |
| --- | --- | --- |
|  | 2f6fca1a6d |  |
|  | 9ac8908e25 |  |
|  | 38f180070e |  |
|  | 085a522d70 |  |
|  | b65cd4de55 |  |
|  | 239dd5b48a |  |
|  | b00dc933f5 |  |
|  | 2a209d43af |  |
|  | 9e100957eb |  |
|  | 54defd8a3c |  |
|  | 9e0379382f |  |
|  | c3d55d5c8f |  |
|  | 383fe66cd7 |  |
|  | 0b392073b6 |  |
|  | b79d7e4015 |  |
|  | 7407912bb8 |  |
|  | c8e2e0283b |  |
|  | 4ada33e7e6 |  |
|  | 3683ac4003 |  |
|  | 10b3e16b8b |  |
|  | 51fac7431f |  |
|  | 2015bbe9a9 |  |
|  | e48df175d7 |  |
|  | f2be9ae32d |  |
|  | 28cbe3e24e |  |
|  | 5a04d015f9 |  |
|  | e4b85ffada |  |
|  | 12c6ecb987 |  |
2  .github/workflows/docs-deploy-vercel.yml  vendored

@@ -58,7 +58,7 @@ jobs:

# Step 4 - Builds the site using Hugo
- name: Build
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e” && hugo -v --minify

# Step 5 - Push our generated site to Vercel
- name: Deploy to Vercel

2  .github/workflows/docs-preview.yml  vendored

@@ -58,7 +58,7 @@ jobs:

# Step 4 - Builds the site using Hugo
- name: Build
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e” && hugo -v --minify

# Step 5 - Push our generated site to Vercel
- name: Deploy to Vercel
@@ -83,7 +83,6 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
- [x] 统一查阅对话记录,并对数据进行标注

`6` 其他
- [x] 可视化模型配置。
- [x] 支持语音输入和输出 (可配置语音输入语音回答)
- [x] 模糊输入提示
- [x] 模板市场

@@ -3,7 +3,7 @@ FROM hugomods/hugo:0.117.0 AS builder
WORKDIR /app

ADD ./docSite hugo
RUN cd /app/hugo && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
RUN cd /app/hugo && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e” && hugo -v --minify

FROM fholzer/nginx-brotli:latest

Binary file not shown.
Before Width: | Height: | Size: 332 KiB
@@ -13,8 +13,8 @@ weight: 707

下面配置文件示例中包含了系统参数和各个模型配置:

## 4.8.20+ 版本新配置文件示例
> 从4.8.20版本开始,模型在页面中进行配置。
## 4.6.8+ 版本新配置文件示例

```json
{
  "feConfigs": {
@@ -27,4 +27,4 @@ weight: 707
    "pgHNSWEfSearch": 100 // 向量搜索参数。越大,搜索越精确,但是速度越慢。设置为100,有99%+精度。
  }
}
```
```

@@ -11,7 +11,7 @@ weight: 707

1. 基础的网络知识:端口,防火墙……
2. Docker 和 Docker Compose 基础知识
3. 大模型相关接口和参数
3. 大模型相关接口和参数
4. RAG 相关知识:向量模型,向量数据库,向量检索

## 部署架构图

@@ -211,8 +211,6 @@ docker restart oneapi

### 6. 配置模型

务必先配置至少一组模型,否则系统无法正常使用。

[点击查看模型配置教程](/docs/development/modelConfig/intro/)

## FAQ
@@ -9,31 +9,17 @@ images: []

## 一、错误排查方式

可以先找找[Issue](https://github.com/labring/FastGPT/issues),或新提 Issue,私有部署错误,务必提供详细的操作步骤、日志、截图,否则很难排查。

### 获取后端错误
遇到问题先按下面方式排查。

1. `docker ps -a` 查看所有容器运行状态,检查是否全部 running,如有异常,尝试`docker logs 容器名`查看对应日志。
2. 容器都运行正常的,`docker logs 容器名` 查看报错日志
3. 带有`requestId`的,都是 OneAPI 提示错误,大部分都是因为模型接口报错。
4. 无法解决时,可以找找[Issue](https://github.com/labring/FastGPT/issues),或新提 Issue,私有部署错误,务必提供详细的日志,否则很难排查。

### 前端错误

前端报错时,页面会出现崩溃,并提示检查控制台日志。可以打开浏览器控制台,并查看`console`中的 log 日志。还可以点击对应 log 的超链接,会提示到具体错误文件,可以把这些详细错误信息提供,方便排查。

### OneAPI 错误

带有`requestId`的,都是 OneAPI 提示错误,大部分都是因为模型接口报错。可以参考 [OneAPI 常见错误](/docs/development/faq/#三常见的-oneapi-错误)

## 二、通用问题

### 前端页面崩溃

1. 90% 情况是模型配置不正确:确保每类模型都至少有一个启用;检查模型中一些`对象`参数是否异常(数组和对象),如果为空,可以尝试给个空数组或空对象。
2. 少部分是由于浏览器兼容问题,由于项目中包含一些高阶语法,可能低版本浏览器不兼容,可以将具体操作步骤和控制台中错误信息提供 issue。
3. 关闭浏览器翻译功能,如果浏览器开启了翻译,可能会导致页面崩溃。

### 通过sealos部署的话,是否没有本地部署的一些限制?


这是索引模型的长度限制,通过任何方式部署都一样的,但不同索引模型的配置不一样,可以在后台修改参数。

@@ -142,13 +128,9 @@ OneAPI 的 API Key 配置错误,需要修改`OPENAI_API_KEY`环境变量,并
3. ....

### Tiktoken 下载失败

由于 OneAPI 会在启动时从网络下载一个 tiktoken 的依赖,如果网络异常,就会导致启动失败。可以参考[OneAPI 离线部署](https://blog.csdn.net/wanh/article/details/139039216)解决。

## 四、常见模型问题

### 如何检查模型可用性问题
### 如何检查模型问题

1. 私有部署模型,先确认部署的模型是否正常。
2. 通过 CURL 请求,直接测试上游模型是否正常运行(云端模型或私有模型均进行测试)

@@ -421,7 +403,3 @@ curl --location --request POST 'https://oneapi.xxxx/v1/chat/completions' \
"tool_choice": "auto"
}'
```

### 向量检索得分大于 1

由于模型没有归一化导致的。目前仅支持归一化的模型。
@@ -15,8 +15,8 @@ weight: 705

- [Git](http://git-scm.com/)
- [Docker](https://www.docker.com/)(构建镜像)
- [Node.js v20.14.0](http://nodejs.org)(版本尽量一样,可以使用nvm管理node版本)
- [pnpm](https://pnpm.io/) 推荐版本 9.4.0 (目前官方的开发环境)
- [Node.js v18.17 / v20.x](http://nodejs.org)(版本尽量一样,可以使用nvm管理node版本)
- [pnpm](https://pnpm.io/) 版本 8.6.0 (目前官方的开发环境)
- make命令: 根据不同平台,百度安装 (官方是GNU Make 4.3)

## 开始本地开发

@@ -77,6 +77,8 @@ Mongo 数据库需要注意,需要注意在连接地址中增加 `directConnec

可参考项目根目录下的 `dev.md`,第一次编译运行可能会有点慢,需要点耐心哦

```bash
# 给自动化脚本代码执行权限(非 linux 系统, 可以手动执行里面的 postinstall.sh 文件内容)
chmod -R +x ./scripts/
# 代码根目录下执行,会安装根 package、projects 和 packages 内所有依赖
# 如果提示 isolate-vm 安装失败,可以参考:https://github.com/laverdet/isolated-vm?tab=readme-ov-file#requirements
pnpm i
@@ -11,9 +11,7 @@ weight: 744

从 4.8.20 版本开始,你可以直接在 FastGPT 页面中进行模型配置,并且系统内置了大量模型,无需从 0 开始配置。下面介绍模型配置的基本流程:

## 配置模型

### 1. 使用 OneAPI 对接模型提供商
## 1. 使用 OneAPI 对接模型提供商

可以使用 [OneAPI 接入教程](/docs/development/modelconfig/one-api) 来进行模型聚合,从而可以对接更多模型提供商。你需要先在各服务商申请好 API 接入 OneAPI 后,才能在 FastGPT 中使用这些模型。示例流程如下:

@@ -28,46 +26,44 @@ weight: 744

在 OneAPI 配置好模型后,你就可以打开 FastGPT 页面,启用对应模型了。

### 2. 登录 root 用户
## 2. 登录 root 用户

仅 root 用户可以进行模型配置。

### 3. 进入模型配置页面
## 3. 进入模型配置页面

登录 root 用户后,在`账号-模型提供商-模型配置`中,你可以看到所有内置的模型和自定义模型,以及哪些模型启用了。



### 4. 配置介绍
## 4. 配置介绍

{{% alert icon="🤖 " context="success" %}}
注意:
1. 目前语音识别模型和重排模型仅会生效一个,所以配置时候,只需要配置一个即可。
2. 系统至少需要一个语言模型和一个索引模型才能正常使用。
注意:目前语音识别模型和重排模型仅会生效一个,所以配置时候,只需要配置一个即可。
{{% /alert %}}

#### 核心配置
### 核心配置

- 模型 ID:接口请求时候,Body 中`model`字段的值,全局唯一。
- 自定义请求地址/Key:如果需要绕过`OneAPI`,可以设置自定义请求地址和 Token。一般情况下不需要,如果 OneAPI 不支持某些模型,可以使用该特性。

#### 模型类型
### 模型类型

1. 语言模型 - 进行文本对话,多模态模型支持图片识别。
2. 索引模型 - 对文本块进行索引,用于相关文本检索。
3. 重排模型 - 对检索结果进行重排,用于优化检索排名。
4. 语音合成 - 将文本转换为语音。
5. 语音识别 - 将语音转换为文本。
3. 语音合成 - 将文本转换为语音。
4. 语音识别 - 将语音转换为文本。
5. 重排模型 - 对文本进行重排,用于优化文本质量。

#### 启用模型
### 启用模型

系统内置了目前主流厂商的模型,如果你不熟悉配置,直接点击`启用`即可,需要注意的是,`模型 ID`需要和 OneAPI 中渠道的`模型`一致。
系统内置了目前主流厂商的模型,如果你不熟悉配置,直接点击`启用`即可,需要注意到是,模型 ID 需要和 OneAPI 中渠道的`模型`一致。

| | |
| --- | --- |
|  |  |

#### 修改模型配置
### 修改模型配置

点击模型右侧的齿轮即可进行模型配置,不同类型模型的配置有区别。

@@ -75,7 +71,7 @@ weight: 744
| --- | --- |
|  |  |

## 新增自定义模型
### 新增自定义模型

如果系统内置的模型无法满足你的需求,你可以添加自定义模型。自定义模型中,如果`模型 ID`与系统内置的模型 ID 一致,则会被认为是修改系统模型。

@@ -83,7 +79,7 @@ weight: 744
| --- | --- |
|  |  |

#### 通过配置文件配置
### 通过配置文件配置

如果你觉得通过页面配置模型比较麻烦,你也可以通过配置文件来配置模型。或者希望快速将一个系统的配置,复制到另一个系统,也可以通过配置文件来实现。

@@ -262,30 +258,6 @@ OneAPI 的语言识别接口,无法正确的识别其他模型(会始终识
由于 OpenAI 没有提供 ReRank 模型,遵循的是 Cohere 的格式。[点击查看接口请求示例](/docs/development/faq/#如何检查模型问题)


### 模型价格配置

商业版用户可以通过配置模型价格,来进行账号计费。系统包含两种计费模式:按总 tokens 计费和输入输出 Tokens 分开计费。

如果需要配置`输入输出 Tokens 分开计费模式`,则填写`模型输入价格`和`模型输出价格`两个值。
如果需要配置`按总 tokens 计费模式`,则填写`模型综合价格`一个值。

## 如何提交内置模型

由于模型更新非常频繁,官方不一定及时更新,如果未能找到你期望的内置模型,你可以[提交 Issue](https://github.com/labring/FastGPT/issues),提供模型的名字和对应官网。或者直接[提交 PR](https://github.com/labring/FastGPT/pulls),提供模型配置。


### 添加模型提供商

如果你需要添加模型提供商,需要修改以下代码:

1. FastGPT/packages/web/components/common/Icon/icons/model - 在此目录下,添加模型提供商的 svg 头像地址。
2. 在 FastGPT 根目录下,运行`pnpm initIcon`,将图片加载到配置文件中。
3. FastGPT/packages/global/core/ai/provider.ts - 在此文件中,追加模型提供商的配置。

### 添加模型

你可以在`FastGPT/packages/service/core/ai/config/provider`目录下,找对应模型提供商的配置文件,并追加模型配置。请自行全文检查,`model`字段,必须在所有模型中唯一。具体配置字段说明,参考[模型配置字段说明](/docs/development/modelconfig/intro/#通过配置文件配置)

## 旧版模型配置说明

配置好 OneAPI 后,需要在`config.json`文件中,手动的增加模型配置,并重启。
@@ -1,6 +1,6 @@
---
title: 'OpenAPI 介绍'
description: 'FastGPT OpenAPI 介绍'
title: 'Api Key 使用与鉴权'
description: 'FastGPT Api Key 使用与鉴权'
icon: 'key'
draft: false
toc: true

@@ -27,7 +27,6 @@ FastGPT 的 API Key **有 2 类**,一类是全局通用的 key (无法直接
| --------------------- | --------------------- |
|  |  |


## 基本配置

OpenAPI 中,所有的接口都通过 Header.Authorization 进行鉴权。
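下面是一个最小化的鉴权调用示意(仅作示意:接口路径取自上文的对话接口,`{{apikey}}` 为占位的 API Key,实际请求体以各接口文档为准):

```bash
# 所有 OpenAPI 接口都通过 Authorization 请求头携带 API Key 进行鉴权(示意,非完整示例)
curl --location --request POST 'http://localhost:3000/api/v1/chat/completions' \
  --header 'Authorization: Bearer {{apikey}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{"messages": [{"role": "user", "content": "你好"}]}'
```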
@@ -7,12 +7,6 @@ toc: true
weight: 852
---

# 如何获取 AppId

可在应用详情的路径里获取 AppId。



# 发起对话

{{% alert icon="🤖 " context="success" %}}

@@ -108,8 +102,8 @@ curl --location --request POST 'http://localhost:3000/api/v1/chat/completions' \

{{% alert context="info" %}}
- headers.Authorization: Bearer {{apikey}}
- chatId: string | undefined 。
  - 为 `undefined` 时(不传入),不使用 FastGpt 提供的上下文功能,完全通过传入的 messages 构建上下文。
  - 为`非空字符串`时,意味着使用 chatId 进行对话,自动从 FastGpt 数据库取历史记录,并使用 messages 数组最后一个内容作为用户问题,其余 message 会被忽略。请自行确保 chatId 唯一,长度小于250,通常可以是自己系统的对话框ID。
  - 为 `undefined` 时(不传入),不使用 FastGpt 提供的上下文功能,完全通过传入的 messages 构建上下文。 不会将你的记录存储到数据库中,你也无法在记录汇总中查阅到。
  - 为`非空字符串`时,意味着使用 chatId 进行对话,自动从 FastGpt 数据库取历史记录,并使用 messages 数组最后一个内容作为用户问题。请自行确保 chatId 唯一,长度小于250,通常可以是自己系统的对话框ID。
- messages: 结构与 [GPT接口](https://platform.openai.com/docs/api-reference/chat/object) chat模式一致。
- responseChatItemId: string | undefined 。如果传入,则会将该值作为本次对话的响应消息的 ID,FastGPT 会自动将该 ID 存入数据库。请确保,在当前`chatId`下,`responseChatItemId`是唯一的。
- detail: 是否返回中间值(模块状态,响应的完整结果等),`stream模式`下会通过`event`进行区分,`非stream模式`结果保存在`responseData`中。
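结合上面的参数说明,下面给出一个最小化的请求示意(chatId、responseChatItemId 等取值均为占位,仅用于说明各字段的含义,完整示例以上文 curl 示例为准):

```bash
# 最小请求示意:使用 chatId 维持上下文,detail=false 表示不返回中间值(字段取值均为占位)
curl --location --request POST 'http://localhost:3000/api/v1/chat/completions' \
  --header 'Authorization: Bearer {{apikey}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "chatId": "my-chat-111",
    "responseChatItemId": "my-response-111",
    "stream": false,
    "detail": false,
    "messages": [{ "role": "user", "content": "你好" }]
  }'
```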
@@ -678,7 +672,7 @@ curl --location --request POST 'http://localhost:3000/api/core/chat/getHistories
    "appId": "appId",
    "offset": 0,
    "pageSize": 20,
    "source": "api"
    "source: "api"
}'
```


@@ -735,7 +735,7 @@ data 为集合的 ID。

**4.8.19+**
```bash
curl --location --request POST 'http://localhost:3000/api/core/dataset/collection/listV2' \
curl --location --request POST 'http://localhost:3000/api/core/dataset/collection/listv2' \
--header 'Authorization: Bearer {{authorization}}' \
--header 'Content-Type: application/json' \
--data-raw '{
@@ -11,7 +11,7 @@ weight: 860

在 FastGPT V4.6.4 中,我们修改了分享链接的数据读取方式,为每个用户生成一个 localId,用于标识用户,从云端拉取对话记录。但是这种方式仅能保障用户在同一设备同一浏览器中使用,如果切换设备或者清空浏览器缓存则会丢失这些记录。这种方式存在一定的风险,因此我们仅允许用户拉取近`30天`的`20条`记录。

分享链接身份鉴权设计的目的在于,将 FastGPT 的对话框快速、安全的接入到你现有的系统中,仅需 2 个接口即可实现。该功能目前只在商业版中提供。
分享链接身份鉴权设计的目的在于,将 FastGPT 的对话框快速、安全的接入到你现有的系统中,仅需 2 个接口即可实现。

## 使用说明


@@ -60,10 +60,6 @@ FastGPT 使用了 one-api 项目来管理模型池,其可以兼容 OpenAI 、A

### 3. 配置模型

### 4. 配置模型

务必先配置至少一组模型,否则系统无法正常使用。

[点击查看模型配置教程](/docs/development/modelConfig/intro/)

## 收费
@@ -1,5 +1,5 @@
---
title: 'V4.8.18(包含升级脚本)'
title: 'V4.8.18'
description: 'FastGPT V4.8.18 更新说明'
icon: 'upgrade'
draft: false

@@ -1,5 +1,5 @@
---
title: 'V4.8.19(包含升级脚本)'
title: 'V4.8.19(进行中)'
description: 'FastGPT V4.8.19 更新说明'
icon: 'upgrade'
draft: false

@@ -1,5 +1,5 @@
---
title: 'V4.8.20(包含升级脚本)'
title: 'V4.8.20(进行中)'
description: 'FastGPT V4.8.20 更新说明'
icon: 'upgrade'
draft: false

@@ -17,8 +17,8 @@ weight: 804

### 3. 更新镜像:

- 更新 fastgpt 镜像 tag: v4.8.20-fix2
- 更新 fastgpt-pro 商业版镜像 tag: v4.8.20-fix2
- 更新 fastgpt 镜像 tag: v4.8.20
- 更新 fastgpt-pro 商业版镜像 tag: v4.8.20
- Sandbox 镜像无需更新

### 4. 运行升级脚本

@@ -35,7 +35,7 @@ curl --location --request POST 'https://{{host}}/api/admin/initv4820' \

## 完整更新内容

1. 新增 - 可视化模型参数配置,取代原配置文件配置模型。预设超过 100 个模型配置。同时支持所有类型模型的一键测试。(预计下个版本会完全支持在页面上配置渠道)。
1. 新增 - 可视化模型参数配置。预设超过 100 个模型配置。同时支持所有类型模型的一键测试。(预计下个版本会完全支持在页面上配置渠道)。
2. 新增 - DeepSeek resoner 模型支持输出思考过程。
3. 新增 - 使用记录导出和仪表盘。
4. 新增 - markdown 语法扩展,支持音视频(代码块 audio 和 video)。
@@ -1,39 +0,0 @@
---
title: 'V4.8.21'
description: 'FastGPT V4.8.21 更新说明'
icon: 'upgrade'
draft: false
toc: true
weight: 803
---

## 更新指南

### 1. 做好数据库备份

### 2. 更新镜像:

- 更新 fastgpt 镜像 tag: v4.8.21-fix
- 更新 fastgpt-pro 商业版镜像 tag: v4.8.21-fix
- Sandbox 镜像无需更新

## 完整更新内容

1. 新增 - 弃用/已删除的插件提示。
2. 新增 - 对话日志按来源分类、标题检索、导出功能。
3. 新增 - 全局变量支持拖拽排序。
4. 新增 - LLM 模型支持 top_p, response_format, json_schema 参数。
5. 新增 - Doubao1.5 模型预设。阿里 embedding3 预设。
6. 新增 - 向量模型支持归一化配置,以便适配未归一化的向量模型,例如 Doubao 的 embedding 模型。
6. 新增 - AI 对话节点,支持输出思考过程结果,可用于其他节点引用。
7. 优化 - 网站嵌入式聊天窗口,增加窗口位置适配。
8. 优化 - 模型未配置时错误提示。
9. 优化 - 适配非 Stream 模式思考输出。
10. 优化 - 增加 TTS voice 未配置时的空指针保护。
11. 优化 - Markdown 链接解析分割规则,改成严格匹配模式,牺牲兼容多种情况,减少误解析。
12. 优化 - 减少未登录用户的数据获取范围,提高系统隐私性。
13. 修复 - 简易模式,切换到其他非视觉模型时候,会强制关闭图片识别。
14. 修复 - o1,o3 模型,在测试时候字段映射未生效导致报错。
15. 修复 - 公众号对话空指针异常。
16. 修复 - 多个音频/视频文件展示异常。
17. 修复 - 分享链接鉴权报错后无限循环。
@@ -1,61 +0,0 @@
---
title: 'V4.8.22(进行中)'
description: 'FastGPT V4.8.22 更新说明'
icon: 'upgrade'
draft: false
toc: true
weight: 802
---

## 🌟更新指南

### 1. 做好数据库备份

### 2. 更新镜像:

- 更新 fastgpt 镜像 tag: v4.8.22-alpha
- 更新 fastgpt-pro 商业版镜像 tag: v4.8.22-alpha
- Sandbox 镜像无需更新

### 3. 运行升级脚本

仅商业版,并提供 Saas 服务的用户需要运行该升级脚本。

从任意终端,发起 1 个 HTTP 请求。其中 {{rootkey}} 替换成环境变量里的 `rootkey`;{{host}} 替换成**FastGPT 域名**。

```bash
curl --location --request POST 'https://{{host}}/api/admin/initv4822' \
--header 'rootkey: {{rootkey}}' \
--header 'Content-Type: application/json'
```

会迁移联系方式到对应用户表中。

## 🚀 新增内容

1. AI 对话节点解析 `<think></think>` 标签内容作为思考链,便于各类模型进行思考链输出。需主动开启模型输出思考。
2. 对话 API 优化,无论是否传递 chatId,都会保存对话日志。未传递 chatId,则随机生成一个 chatId 来进行存储。
3. ppio 模型提供商

## ⚙️ 优化

1. 模型未配置时提示,减少冲突提示。
2. 使用记录代码。
3. 内容提取节点,字段描述过长时换行。同时修改其输出名用 key,而不是 description。
4. 团队管理交互。
5. 对话接口,非流响应,增加报错字段。

## 🐛 修复

1. 思考内容未进入到输出 Tokens.
2. 思考链流输出时,有时与正文顺序偏差。
3. API 调用工作流,如果传递的图片不支持 Head 检测时,图片会被过滤。已增加该类错误检测,避免被错误过滤。
4. 模板市场部分模板错误。
5. 免登录窗口无法正常判断语言识别是否开启。
6. 对话日志导出,未兼容 sub path。
7. 切换团队时未刷新成员列表
8. list 接口在联查 member 时,存在空指针可能性。
9. 工作流基础节点无法升级。
10. 向量检索结果未去重。
11. 用户选择节点无法正常连线。
12. 对话记录保存时,source 未正常记录。
@@ -7,7 +7,7 @@ toc: true
weight: 234
---

知识库搜索具体参数说明,以及内部逻辑请移步:[FastGPT知识库搜索方案](/docs/guide/knowledge_base/rag/)
知识库搜索具体参数说明,以及内部逻辑请移步:[FastGPT知识库搜索方案](/docs/course/data_search/)

## 特点

@@ -27,7 +27,7 @@ weight: 234

### 输入 - 搜索参数

[点击查看参数介绍](/docs/guide/knowledge_base/dataset_engine/#搜索参数)
[点击查看参数介绍](/docs/course/data_search/#搜索参数)

### 输出 - 引用内容


@@ -20,7 +20,7 @@ weight: 502

{{% alert icon="🍅" context="success" %}}
Tips: 安全起见,你可以设置一个额度或者过期时间,防止 key 被滥用。
Tips: 安全起见,你可以设置一个额度或者过期时间,放置 key 被滥用。
{{% /alert %}}
@@ -114,15 +114,15 @@ services:
|
||||
# fastgpt
|
||||
sandbox:
|
||||
container_name: sandbox
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20 # 阿里云
|
||||
networks:
|
||||
- fastgpt
|
||||
restart: always
|
||||
fastgpt:
|
||||
container_name: fastgpt
|
||||
image: ghcr.io/labring/fastgpt:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20 # 阿里云
|
||||
ports:
|
||||
- 3000:3000
|
||||
networks:
|
||||
|
||||
@@ -72,15 +72,15 @@ services:
|
||||
# fastgpt
|
||||
sandbox:
|
||||
container_name: sandbox
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20 # 阿里云
|
||||
networks:
|
||||
- fastgpt
|
||||
restart: always
|
||||
fastgpt:
|
||||
container_name: fastgpt
|
||||
image: ghcr.io/labring/fastgpt:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20 # 阿里云
|
||||
ports:
|
||||
- 3000:3000
|
||||
networks:
|
||||
|
||||
@@ -53,15 +53,15 @@ services:
|
||||
wait $$!
|
||||
sandbox:
|
||||
container_name: sandbox
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20 # 阿里云
|
||||
networks:
|
||||
- fastgpt
|
||||
restart: always
|
||||
fastgpt:
|
||||
container_name: fastgpt
|
||||
image: ghcr.io/labring/fastgpt:v4.8.21-fix # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.21-fix # 阿里云
|
||||
image: ghcr.io/labring/fastgpt:v4.8.20 # git
|
||||
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20 # 阿里云
|
||||
ports:
|
||||
- 3000:3000
|
||||
networks:
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
"format-code": "prettier --config \"./.prettierrc.js\" --write \"./**/src/**/*.{ts,tsx,scss}\"",
|
||||
"format-doc": "zhlint --dir ./docSite *.md --fix",
|
||||
"gen:theme-typings": "chakra-cli tokens packages/web/styles/theme.ts --out node_modules/.pnpm/node_modules/@chakra-ui/styled-system/dist/theming.types.d.ts",
|
||||
"postinstall": "pnpm gen:theme-typings",
|
||||
"postinstall": "sh ./scripts/postinstall.sh",
|
||||
"initIcon": "node ./scripts/icon/init.js",
|
||||
"previewIcon": "node ./scripts/icon/index.js",
|
||||
"api:gen": "tsc ./scripts/openapi/index.ts && node ./scripts/openapi/index.js && npx @redocly/cli build-docs ./scripts/openapi/openapi.json -o ./projects/app/public/openapi/index.html",
|
||||
|
||||
@@ -16,8 +16,8 @@ export const bucketNameMap = {
|
||||
}
|
||||
};
|
||||
|
||||
export const ReadFileBaseUrl = `${process.env.FILE_DOMAIN || process.env.FE_DOMAIN || ''}${process.env.NEXT_PUBLIC_BASE_URL || ''}/api/common/file/read`;
|
||||
export const ReadFileBaseUrl = `${process.env.FE_DOMAIN || ''}${process.env.NEXT_PUBLIC_BASE_URL || ''}/api/common/file/read`;
|
||||
|
||||
export const documentFileType = '.txt, .docx, .csv, .xlsx, .pdf, .md, .html, .pptx';
|
||||
export const imageFileType =
|
||||
'.jpg, .jpeg, .png, .gif, .bmp, .webp, .svg, .tiff, .tif, .ico, .heic, .heif, .avif, .raw, .cr2, .nef, .arw, .dng, .psd, .ai, .eps, .emf, .wmf, .jfif, .exif, .pgm, .ppm, .pbm, .jp2, .j2k, .jpf, .jpx, .jpm, .mj2, .xbm, .pcx';
|
||||
'.jpg, .jpeg, .png, .gif, .bmp, .webp, .svg, .tiff, .tif, .ico, .heic, .heif, .avif';
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
import { detect } from 'jschardet';
|
||||
import { documentFileType } from './constants';
|
||||
import { documentFileType, imageFileType } from './constants';
|
||||
import { ChatFileTypeEnum } from '../../core/chat/constants';
|
||||
import { UserChatItemValueItemType } from '../../core/chat/type';
|
||||
import * as fs from 'fs';
|
||||
@@ -25,7 +25,6 @@ export const detectFileEncodingByPath = async (path: string) => {
|
||||
const fd = await fs.promises.open(path, 'r');
|
||||
try {
|
||||
// Read file head
|
||||
// @ts-ignore
|
||||
const { bytesRead } = await fd.read(buffer, 0, MAX_BYTES, 0);
|
||||
const actualBuffer = buffer.slice(0, bytesRead);
|
||||
|
||||
@@ -38,49 +37,40 @@ export const detectFileEncodingByPath = async (path: string) => {
|
||||
// Url => user upload file type
|
||||
export const parseUrlToFileType = (url: string): UserChatItemValueItemType['file'] | undefined => {
|
||||
if (typeof url !== 'string') return;
|
||||
const parseUrl = new URL(url, 'https://locaohost:3000');
|
||||
|
||||
// Handle base64 image
|
||||
if (url.startsWith('data:')) {
|
||||
const matches = url.match(/^data:([^;]+);base64,/);
|
||||
if (!matches) return;
|
||||
const filename = (() => {
|
||||
// Check base64 image
|
||||
if (url.startsWith('data:image/')) {
|
||||
const mime = url.split(',')[0].split(':')[1].split(';')[0];
|
||||
return `image.${mime.split('/')[1]}`;
|
||||
}
|
||||
// Old version file url: https://xxx.com/file/read?filename=xxx.pdf
|
||||
const filenameQuery = parseUrl.searchParams.get('filename');
|
||||
if (filenameQuery) return filenameQuery;
|
||||
|
||||
const mimeType = matches[1].toLowerCase();
|
||||
if (!mimeType.startsWith('image/')) return;
|
||||
// Common file: https://xxx.com/xxx.pdf?xxxx=xxx
|
||||
const pathname = parseUrl.pathname;
|
||||
if (pathname) return pathname.split('/').pop();
|
||||
})();
|
||||
|
||||
const extension = mimeType.split('/')[1];
|
||||
if (!filename) return;
|
||||
|
||||
const extension = filename.split('.').pop()?.toLowerCase() || '';
|
||||
|
||||
if (!extension) return;
|
||||
|
||||
if (documentFileType.includes(extension)) {
|
||||
return {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: `image.${extension}`,
|
||||
type: ChatFileTypeEnum.file,
|
||||
name: filename,
|
||||
url
|
||||
};
|
||||
}
|
||||
|
||||
try {
|
||||
const parseUrl = new URL(url, 'https://localhost:3000');
|
||||
|
||||
// Get filename from URL
|
||||
const filename = parseUrl.searchParams.get('filename') || parseUrl.pathname.split('/').pop();
|
||||
const extension = filename?.split('.').pop()?.toLowerCase() || '';
|
||||
|
||||
// If it's a document type, return as file, otherwise treat as image
|
||||
if (extension && documentFileType.includes(extension)) {
|
||||
return {
|
||||
type: ChatFileTypeEnum.file,
|
||||
name: filename || 'null',
|
||||
url
|
||||
};
|
||||
}
|
||||
|
||||
// Default to image type for non-document files
|
||||
if (imageFileType.includes(extension)) {
|
||||
return {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: filename || 'null.png',
|
||||
url
|
||||
};
|
||||
} catch (error) {
|
||||
return {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: 'invalid.png',
|
||||
name: filename,
|
||||
url
|
||||
};
|
||||
}
|
||||
|
||||
@@ -26,7 +26,7 @@ export const simpleText = (text = '') => {
|
||||
};
|
||||
|
||||
export const valToStr = (val: any) => {
|
||||
if (val === undefined) return '';
|
||||
if (val === undefined) return 'undefined';
|
||||
if (val === null) return 'null';
|
||||
|
||||
if (typeof val === 'object') return JSON.stringify(val);
|
||||
|
||||
6
packages/global/core/ai/model.d.ts
vendored
6
packages/global/core/ai/model.d.ts
vendored
@@ -26,16 +26,11 @@ type BaseModelItemType = {
|
||||
export type LLMModelItemType = PriceType &
|
||||
BaseModelItemType & {
|
||||
type: ModelTypeEnum.llm;
|
||||
// Model params
|
||||
maxContext: number;
|
||||
maxResponse: number;
|
||||
quoteMaxToken: number;
|
||||
maxTemperature?: number;
|
||||
|
||||
showTopP?: boolean;
|
||||
responseFormatList?: string[];
|
||||
showStopSign?: boolean;
|
||||
|
||||
censor?: boolean;
|
||||
vision?: boolean;
|
||||
reasoning?: boolean;
|
||||
@@ -64,7 +59,6 @@ export type EmbeddingModelItemType = PriceType &
|
||||
maxToken: number; // model max token
|
||||
weight: number; // training weight
|
||||
hidden?: boolean; // Disallow creation
|
||||
normalization?: boolean; // normalization processing
|
||||
defaultConfig?: Record<string, any>; // post request config
|
||||
dbConfig?: Record<string, any>; // Custom parameters for storage
|
||||
queryConfig?: Record<string, any>; // Custom parameters for query
|
||||
|
||||
@@ -61,9 +61,6 @@ export const getModelFromList = (
|
||||
model: string
|
||||
) => {
|
||||
const modelData = modelList.find((item) => item.model === model) ?? modelList[0];
|
||||
if (!modelData) {
|
||||
throw new Error('No Key model is configured');
|
||||
}
|
||||
const provider = getModelProvider(modelData.provider);
|
||||
return {
|
||||
...modelData,
|
||||
|
||||
@@ -22,7 +22,6 @@ export type ModelProviderIdType =
|
||||
| 'StepFun'
|
||||
| 'Yi'
|
||||
| 'Siliconflow'
|
||||
| 'PPIO'
|
||||
| 'Ollama'
|
||||
| 'BAAI'
|
||||
| 'FishAudio'
|
||||
@@ -72,6 +71,11 @@ export const ModelProviderList: ModelProviderType[] = [
|
||||
name: 'Groq',
|
||||
avatar: 'model/groq'
|
||||
},
|
||||
{
|
||||
id: 'AliCloud',
|
||||
name: i18nT('common:model_alicloud'),
|
||||
avatar: 'model/alicloud'
|
||||
},
|
||||
{
|
||||
id: 'Qwen',
|
||||
name: i18nT('common:model_qwen'),
|
||||
@@ -82,11 +86,6 @@ export const ModelProviderList: ModelProviderType[] = [
|
||||
name: i18nT('common:model_doubao'),
|
||||
avatar: 'model/doubao'
|
||||
},
|
||||
{
|
||||
id: 'DeepSeek',
|
||||
name: 'DeepSeek',
|
||||
avatar: 'model/deepseek'
|
||||
},
|
||||
{
|
||||
id: 'ChatGLM',
|
||||
name: i18nT('common:model_chatglm'),
|
||||
@@ -97,6 +96,11 @@ export const ModelProviderList: ModelProviderType[] = [
|
||||
name: i18nT('common:model_ernie'),
|
||||
avatar: 'model/ernie'
|
||||
},
|
||||
{
|
||||
id: 'DeepSeek',
|
||||
name: 'DeepSeek',
|
||||
avatar: 'model/deepseek'
|
||||
},
|
||||
{
|
||||
id: 'Moonshot',
|
||||
name: i18nT('common:model_moonshot'),
|
||||
@@ -158,21 +162,11 @@ export const ModelProviderList: ModelProviderType[] = [
|
||||
name: i18nT('common:model_moka'),
|
||||
avatar: 'model/moka'
|
||||
},
|
||||
{
|
||||
id: 'AliCloud',
|
||||
name: i18nT('common:model_alicloud'),
|
||||
avatar: 'model/alicloud'
|
||||
},
|
||||
{
|
||||
id: 'Siliconflow',
|
||||
name: i18nT('common:model_siliconflow'),
|
||||
avatar: 'model/siliconflow'
|
||||
},
|
||||
{
|
||||
id: 'PPIO',
|
||||
name: i18nT('common:model_ppio'),
|
||||
avatar: 'model/ppio'
|
||||
},
|
||||
{
|
||||
id: 'Other',
|
||||
name: i18nT('common:model_other'),
|
||||
|
||||
8
packages/global/core/ai/type.d.ts
vendored
8
packages/global/core/ai/type.d.ts
vendored
@@ -1,12 +1,14 @@
|
||||
import openai from 'openai';
|
||||
import type {
|
||||
ChatCompletionMessageToolCall,
|
||||
ChatCompletionChunk,
|
||||
ChatCompletionMessageParam as SdkChatCompletionMessageParam,
|
||||
ChatCompletionToolMessageParam,
|
||||
ChatCompletionContentPart as SdkChatCompletionContentPart,
|
||||
ChatCompletionUserMessageParam as SdkChatCompletionUserMessageParam,
|
||||
ChatCompletionToolMessageParam as SdkChatCompletionToolMessageParam,
|
||||
ChatCompletionAssistantMessageParam as SdkChatCompletionAssistantMessageParam
|
||||
ChatCompletionAssistantMessageParam as SdkChatCompletionAssistantMessageParam,
|
||||
ChatCompletionContentPartText
|
||||
} from 'openai/resources';
|
||||
import { ChatMessageTypeEnum } from './constants';
|
||||
import { WorkflowInteractiveResponseType } from '../workflow/template/system/interactive/type';
|
||||
@@ -46,7 +48,6 @@ export type ChatCompletionMessageParam = (
|
||||
| CustomChatCompletionToolMessageParam
|
||||
| CustomChatCompletionAssistantMessageParam
|
||||
) & {
|
||||
reasoning_text?: string;
|
||||
dataId?: string;
|
||||
hideInUI?: boolean;
|
||||
};
|
||||
@@ -70,8 +71,7 @@ export type ChatCompletionMessageFunctionCall =
|
||||
};
|
||||
|
||||
// Stream response
|
||||
export type StreamChatType = Stream<openai.Chat.Completions.ChatCompletionChunk>;
|
||||
export type UnStreamChatType = openai.Chat.Completions.ChatCompletion;
|
||||
export type StreamChatType = Stream<ChatCompletionChunk>;
|
||||
|
||||
export default openai;
|
||||
export * from 'openai';
|
||||
|
||||
20
packages/global/core/app/type.d.ts
vendored
20
packages/global/core/app/type.d.ts
vendored
@@ -74,17 +74,13 @@ export type AppDetailType = AppSchema & {
|
||||
export type AppSimpleEditFormType = {
|
||||
// templateId: string;
|
||||
aiSettings: {
|
||||
[NodeInputKeyEnum.aiModel]: string;
|
||||
[NodeInputKeyEnum.aiSystemPrompt]?: string | undefined;
|
||||
[NodeInputKeyEnum.aiChatTemperature]?: number;
|
||||
[NodeInputKeyEnum.aiChatMaxToken]?: number;
|
||||
[NodeInputKeyEnum.aiChatIsResponseText]: boolean;
|
||||
model: string;
|
||||
systemPrompt?: string | undefined;
|
||||
temperature?: number;
|
||||
maxToken?: number;
|
||||
isResponseAnswerText: boolean;
|
||||
maxHistories: number;
|
||||
[NodeInputKeyEnum.aiChatReasoning]?: boolean; // Is open reasoning mode
|
||||
[NodeInputKeyEnum.aiChatTopP]?: number;
|
||||
[NodeInputKeyEnum.aiChatStopSign]?: string;
|
||||
[NodeInputKeyEnum.aiChatResponseFormat]?: string;
|
||||
[NodeInputKeyEnum.aiChatJsonSchema]?: string;
|
||||
[NodeInputKeyEnum.aiChatReasoning]?: boolean;
|
||||
};
|
||||
dataset: {
|
||||
datasets: SelectedDatasetType;
|
||||
@@ -123,10 +119,6 @@ export type SettingAIDataType = {
|
||||
maxHistories?: number;
|
||||
[NodeInputKeyEnum.aiChatVision]?: boolean; // Is open vision mode
|
||||
[NodeInputKeyEnum.aiChatReasoning]?: boolean; // Is open reasoning mode
|
||||
[NodeInputKeyEnum.aiChatTopP]?: number;
|
||||
[NodeInputKeyEnum.aiChatStopSign]?: string;
|
||||
[NodeInputKeyEnum.aiChatResponseFormat]?: string;
|
||||
[NodeInputKeyEnum.aiChatJsonSchema]?: string;
|
||||
};
|
||||
|
||||
// variable
|
||||
|
||||
@@ -7,8 +7,6 @@ import { StoreNodeItemType } from '../workflow/type/node';
|
||||
import { DatasetSearchModeEnum } from '../dataset/constants';
|
||||
import { WorkflowTemplateBasicType } from '../workflow/type';
|
||||
import { AppTypeEnum } from './constants';
|
||||
import { AppErrEnum } from '../../common/error/code/app';
|
||||
import { PluginErrEnum } from '../../common/error/code/plugin';
|
||||
|
||||
export const getDefaultAppForm = (): AppSimpleEditFormType => {
|
||||
return {
|
||||
@@ -119,8 +117,7 @@ export const appWorkflow2Form = ({
|
||||
version: node.version,
|
||||
inputs: node.inputs,
|
||||
outputs: node.outputs,
|
||||
templateType: FlowNodeTemplateTypeEnum.other,
|
||||
pluginData: node.pluginData
|
||||
templateType: FlowNodeTemplateTypeEnum.other
|
||||
});
|
||||
} else if (node.flowNodeType === FlowNodeTypeEnum.systemConfig) {
|
||||
defaultAppForm.chatConfig = getAppChatConfig({
|
||||
@@ -150,18 +147,3 @@ export const getAppType = (config?: WorkflowTemplateBasicType | AppSimpleEditFor
|
||||
}
|
||||
return '';
|
||||
};
|
||||
|
||||
export const checkAppUnExistError = (error?: string) => {
|
||||
const unExistError: Array<string> = [
|
||||
AppErrEnum.unAuthApp,
|
||||
AppErrEnum.unExist,
|
||||
PluginErrEnum.unAuth,
|
||||
PluginErrEnum.unExist
|
||||
];
|
||||
|
||||
if (!!error && unExistError.includes(error)) {
|
||||
return error;
|
||||
} else {
|
||||
return undefined;
|
||||
}
|
||||
};
|
||||
|
||||
@@ -46,16 +46,7 @@ export const chats2GPTMessages = ({
|
||||
|
||||
messages.forEach((item) => {
|
||||
const dataId = reserveId ? item.dataId : undefined;
|
||||
if (item.obj === ChatRoleEnum.System) {
|
||||
const content = item.value?.[0]?.text?.content;
|
||||
if (content) {
|
||||
results.push({
|
||||
dataId,
|
||||
role: ChatCompletionRequestMessageRoleEnum.System,
|
||||
content
|
||||
});
|
||||
}
|
||||
} else if (item.obj === ChatRoleEnum.Human) {
|
||||
if (item.obj === ChatRoleEnum.Human) {
|
||||
const value = item.value
|
||||
.map((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text) {
|
||||
@@ -89,6 +80,15 @@ export const chats2GPTMessages = ({
|
||||
role: ChatCompletionRequestMessageRoleEnum.User,
|
||||
content: simpleUserContentPart(value)
|
||||
});
|
||||
} else if (item.obj === ChatRoleEnum.System) {
|
||||
const content = item.value?.[0]?.text?.content;
|
||||
if (content) {
|
||||
results.push({
|
||||
dataId,
|
||||
role: ChatCompletionRequestMessageRoleEnum.System,
|
||||
content
|
||||
});
|
||||
}
|
||||
} else {
|
||||
const aiResults: ChatCompletionMessageParam[] = [];
|
||||
|
||||
@@ -349,7 +349,7 @@ export const chatValue2RuntimePrompt = (value: ChatItemValueItemType[]): Runtime
|
||||
};
|
||||
value.forEach((item) => {
|
||||
if (item.type === 'file' && item.file) {
|
||||
prompt.files.push(item.file);
|
||||
prompt.files?.push(item.file);
|
||||
} else if (item.text) {
|
||||
prompt.text += item.text.content;
|
||||
}
|
||||
|
||||
@@ -33,10 +33,8 @@ export enum WorkflowIOValueTypeEnum {
|
||||
dynamic = 'dynamic',
|
||||
|
||||
// plugin special type
|
||||
selectDataset = 'selectDataset',
|
||||
|
||||
// abandon
|
||||
selectApp = 'selectApp'
|
||||
selectApp = 'selectApp',
|
||||
selectDataset = 'selectDataset'
|
||||
}
|
||||
|
||||
export const toolValueTypeList = [
|
||||
@@ -144,10 +142,6 @@ export enum NodeInputKeyEnum {
|
||||
aiChatVision = 'aiChatVision',
|
||||
stringQuoteText = 'stringQuoteText',
|
||||
aiChatReasoning = 'aiChatReasoning',
|
||||
aiChatTopP = 'aiChatTopP',
|
||||
aiChatStopSign = 'aiChatStopSign',
|
||||
aiChatResponseFormat = 'aiChatResponseFormat',
|
||||
aiChatJsonSchema = 'aiChatJsonSchema',
|
||||
|
||||
// dataset
|
||||
datasetSelectList = 'datasets',
|
||||
@@ -160,10 +154,6 @@ export enum NodeInputKeyEnum {
|
||||
datasetSearchExtensionBg = 'datasetSearchExtensionBg',
|
||||
collectionFilterMatch = 'collectionFilterMatch',
|
||||
authTmbId = 'authTmbId',
|
||||
datasetDeepSearch = 'datasetDeepSearch',
|
||||
datasetDeepSearchModel = 'datasetDeepSearchModel',
|
||||
datasetDeepSearchMaxTimes = 'datasetDeepSearchMaxTimes',
|
||||
datasetDeepSearchBg = 'datasetDeepSearchBg',
|
||||
|
||||
// concat dataset
|
||||
datasetQuoteList = 'system_datasetQuoteList',
|
||||
|
||||
@@ -140,14 +140,7 @@ export enum FlowNodeTypeEnum {
|
||||
}
|
||||
|
||||
// node IO value type
|
||||
export const FlowValueTypeMap: Record<
|
||||
WorkflowIOValueTypeEnum,
|
||||
{
|
||||
label: string;
|
||||
value: WorkflowIOValueTypeEnum;
|
||||
abandon?: boolean;
|
||||
}
|
||||
> = {
|
||||
export const FlowValueTypeMap = {
|
||||
[WorkflowIOValueTypeEnum.string]: {
|
||||
label: 'String',
|
||||
value: WorkflowIOValueTypeEnum.string
|
||||
@@ -196,6 +189,10 @@ export const FlowValueTypeMap: Record<
|
||||
label: i18nT('common:core.workflow.Dataset quote'),
|
||||
value: WorkflowIOValueTypeEnum.datasetQuote
|
||||
},
|
||||
[WorkflowIOValueTypeEnum.selectApp]: {
|
||||
label: i18nT('common:plugin.App'),
|
||||
value: WorkflowIOValueTypeEnum.selectApp
|
||||
},
|
||||
[WorkflowIOValueTypeEnum.selectDataset]: {
|
||||
label: i18nT('common:core.chat.Select dataset'),
|
||||
value: WorkflowIOValueTypeEnum.selectDataset
|
||||
@@ -203,11 +200,6 @@ export const FlowValueTypeMap: Record<
|
||||
[WorkflowIOValueTypeEnum.dynamic]: {
|
||||
label: i18nT('common:core.workflow.dynamic_input'),
|
||||
value: WorkflowIOValueTypeEnum.dynamic
|
||||
},
|
||||
[WorkflowIOValueTypeEnum.selectApp]: {
|
||||
label: 'selectApp',
|
||||
value: WorkflowIOValueTypeEnum.selectApp,
|
||||
abandon: true
|
||||
}
|
||||
};
|
||||
|
||||
@@ -227,6 +219,3 @@ export const datasetQuoteValueDesc = `{
|
||||
q: string;
|
||||
a: string
|
||||
}[]`;
|
||||
export const datasetSelectValueDesc = `{
|
||||
datasetId: string;
|
||||
}[]`;
|
||||
|
||||
24
packages/global/core/workflow/runtime/type.d.ts
vendored
24
packages/global/core/workflow/runtime/type.d.ts
vendored
@@ -123,7 +123,6 @@ export type DispatchNodeResponseType = {
|
||||
temperature?: number;
|
||||
maxToken?: number;
|
||||
quoteList?: SearchDataResponseItemType[];
|
||||
reasoningText?: string;
|
||||
historyPreview?: {
|
||||
obj: `${ChatRoleEnum}`;
|
||||
value: string;
|
||||
@@ -134,17 +133,9 @@ export type DispatchNodeResponseType = {
|
||||
limit?: number;
|
||||
searchMode?: `${DatasetSearchModeEnum}`;
|
||||
searchUsingReRank?: boolean;
|
||||
queryExtensionResult?: {
|
||||
model: string;
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
query: string;
|
||||
};
|
||||
deepSearchResult?: {
|
||||
model: string;
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
};
|
||||
extensionModel?: string;
|
||||
extensionResult?: string;
|
||||
extensionTokens?: number;
|
||||
|
||||
// dataset concat
|
||||
concatLength?: number;
|
||||
@@ -207,11 +198,6 @@ export type DispatchNodeResponseType = {
|
||||
|
||||
// tool params
|
||||
toolParamsResult?: Record<string, any>;
|
||||
|
||||
// abandon
|
||||
extensionModel?: string;
|
||||
extensionResult?: string;
|
||||
extensionTokens?: number;
|
||||
};
|
||||
|
||||
export type DispatchNodeResultType<T = {}> = {
|
||||
@@ -235,10 +221,6 @@ export type AIChatNodeProps = {
|
||||
[NodeInputKeyEnum.aiChatIsResponseText]: boolean;
|
||||
[NodeInputKeyEnum.aiChatVision]?: boolean;
|
||||
[NodeInputKeyEnum.aiChatReasoning]?: boolean;
|
||||
[NodeInputKeyEnum.aiChatTopP]?: number;
|
||||
[NodeInputKeyEnum.aiChatStopSign]?: string;
|
||||
[NodeInputKeyEnum.aiChatResponseFormat]?: string;
|
||||
[NodeInputKeyEnum.aiChatJsonSchema]?: string;
|
||||
|
||||
[NodeInputKeyEnum.aiChatQuoteRole]?: AiChatQuoteRoleType;
|
||||
[NodeInputKeyEnum.aiChatQuoteTemplate]?: string;
|
||||
|
||||
@@ -10,7 +10,6 @@ import { FlowNodeOutputItemType, ReferenceValueType } from '../type/io';
|
||||
import { ChatItemType, NodeOutputItemType } from '../../../core/chat/type';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '../../../core/chat/constants';
|
||||
import { replaceVariable, valToStr } from '../../../common/string/tools';
|
||||
import { ChatCompletionChunk } from 'openai/resources';
|
||||
|
||||
export const getMaxHistoryLimitFromNodes = (nodes: StoreNodeItemType[]): number => {
|
||||
let limit = 10;
|
||||
@@ -293,12 +292,13 @@ export const getReferenceVariableValue = ({
|
||||
|
||||
export const formatVariableValByType = (val: any, valueType?: WorkflowIOValueTypeEnum) => {
|
||||
if (!valueType) return val;
|
||||
if (val === undefined || val === null) return;
|
||||
// Value type check, If valueType invalid, return undefined
|
||||
if (valueType.startsWith('array') && !Array.isArray(val)) return undefined;
|
||||
if (valueType === WorkflowIOValueTypeEnum.boolean) return Boolean(val);
|
||||
if (valueType === WorkflowIOValueTypeEnum.number) return Number(val);
|
||||
if (valueType === WorkflowIOValueTypeEnum.string) {
|
||||
if (val === undefined) return 'undefined';
|
||||
if (val === null) return 'null';
|
||||
return typeof val === 'object' ? JSON.stringify(val) : String(val);
|
||||
}
|
||||
if (
|
||||
@@ -420,137 +420,3 @@ export function rewriteNodeOutputByHistories(
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
// Parse <think></think> tags to think and answer - unstream response
|
||||
export const parseReasoningContent = (text: string): [string, string] => {
|
||||
const regex = /<think>([\s\S]*?)<\/think>/;
|
||||
const match = text.match(regex);
|
||||
|
||||
if (!match) {
|
||||
return ['', text];
|
||||
}
|
||||
|
||||
const thinkContent = match[1].trim();
|
||||
|
||||
// Add answer (remaining text after think tag)
|
||||
const answerContent = text.slice(match.index! + match[0].length);
|
||||
|
||||
return [thinkContent, answerContent];
|
||||
};
|
||||
|
||||
// Parse <think></think> tags to think and answer - stream response
|
||||
export const parseReasoningStreamContent = () => {
|
||||
let isInThinkTag: boolean | undefined;
|
||||
|
||||
const startTag = '<think>';
|
||||
let startTagBuffer = '';
|
||||
|
||||
const endTag = '</think>';
|
||||
let endTagBuffer = '';
|
||||
|
||||
/*
|
||||
parseReasoning - 只控制是否主动解析 <think></think>,如果接口已经解析了,仍然会返回 think 内容。
|
||||
*/
|
||||
const parsePart = (
|
||||
part: {
|
||||
choices: {
|
||||
delta: {
|
||||
content?: string;
|
||||
reasoning_content?: string;
|
||||
};
|
||||
}[];
|
||||
},
|
||||
parseReasoning = false
|
||||
): [string, string] => {
|
||||
const content = part.choices?.[0]?.delta?.content || '';
|
||||
|
||||
// @ts-ignore
|
||||
const reasoningContent = part.choices?.[0]?.delta?.reasoning_content || '';
|
||||
if (reasoningContent || !parseReasoning) {
|
||||
isInThinkTag = false;
|
||||
return [reasoningContent, content];
|
||||
}
|
||||
|
||||
if (!content) {
|
||||
return ['', ''];
|
||||
}
|
||||
|
||||
// 如果不在 think 标签中,或者有 reasoningContent(接口已解析),则返回 reasoningContent 和 content
|
||||
if (isInThinkTag === false) {
|
||||
return ['', content];
|
||||
}
|
||||
|
||||
// 检测是否为 think 标签开头的数据
|
||||
if (isInThinkTag === undefined) {
|
||||
// Parse content think and answer
|
||||
startTagBuffer += content;
|
||||
// 太少内容时候,暂时不解析
|
||||
if (startTagBuffer.length < startTag.length) {
|
||||
return ['', ''];
|
||||
}
|
||||
|
||||
if (startTagBuffer.startsWith(startTag)) {
|
||||
isInThinkTag = true;
|
||||
return [startTagBuffer.slice(startTag.length), ''];
|
||||
}
|
||||
|
||||
// 如果未命中 think 标签,则认为不在 think 标签中,返回 buffer 内容作为 content
|
||||
isInThinkTag = false;
|
||||
return ['', startTagBuffer];
|
||||
}
|
||||
|
||||
// 确认是 think 标签内容,开始返回 think 内容,并实时检测 </think>
|
||||
/*
|
||||
检测 </think> 方案。
|
||||
存储所有疑似 </think> 的内容,直到检测到完整的 </think> 标签或超出 </think> 长度。
|
||||
content 返回值包含以下几种情况:
|
||||
abc - 完全未命中尾标签
|
||||
abc<th - 命中一部分尾标签
|
||||
abc</think> - 完全命中尾标签
|
||||
abc</think>abc - 完全命中尾标签
|
||||
</think>abc - 完全命中尾标签
|
||||
k>abc - 命中一部分尾标签
|
||||
*/
|
||||
// endTagBuffer 专门用来记录疑似尾标签的内容
|
||||
if (endTagBuffer) {
|
||||
endTagBuffer += content;
|
||||
if (endTagBuffer.includes(endTag)) {
|
||||
isInThinkTag = false;
|
||||
const answer = endTagBuffer.slice(endTag.length);
|
||||
return ['', answer];
|
||||
} else if (endTagBuffer.length >= endTag.length) {
|
||||
// 缓存内容超出尾标签长度,且仍未命中 </think>,则认为本次猜测 </think> 失败,仍处于 think 阶段。
|
||||
const tmp = endTagBuffer;
|
||||
endTagBuffer = '';
|
||||
return [tmp, ''];
|
||||
}
|
||||
return ['', ''];
|
||||
} else if (content.includes(endTag)) {
|
||||
// 返回内容,完整命中</think>,直接结束
|
||||
isInThinkTag = false;
|
||||
const [think, answer] = content.split(endTag);
|
||||
return [think, answer];
|
||||
} else {
|
||||
// 无 buffer,且未命中 </think>,开始疑似 </think> 检测。
|
||||
for (let i = 1; i < endTag.length; i++) {
|
||||
const partialEndTag = endTag.slice(0, i);
|
||||
// 命中一部分尾标签
|
||||
if (content.endsWith(partialEndTag)) {
|
||||
const think = content.slice(0, -partialEndTag.length);
|
||||
endTagBuffer += partialEndTag;
|
||||
return [think, ''];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// 完全未命中尾标签,还是 think 阶段。
|
||||
return [content, ''];
|
||||
};
|
||||
|
||||
const getStartTagBuffer = () => startTagBuffer;
|
||||
|
||||
return {
|
||||
parsePart,
|
||||
getStartTagBuffer
|
||||
};
|
||||
};
|
||||
|
||||
@@ -63,12 +63,14 @@ export const AiChatModule: FlowNodeTemplateType = {
|
||||
key: NodeInputKeyEnum.aiChatTemperature,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
|
||||
label: '',
|
||||
value: undefined,
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatMaxToken,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
|
||||
label: '',
|
||||
value: undefined,
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
|
||||
@@ -96,30 +98,6 @@ export const AiChatModule: FlowNodeTemplateType = {
|
||||
valueType: WorkflowIOValueTypeEnum.boolean,
|
||||
value: true
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatTopP,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatStopSign,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatResponseFormat,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatJsonSchema,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
// settings modal ---
|
||||
{
|
||||
...Input_Template_System_Prompt,
|
||||
@@ -130,6 +108,7 @@ export const AiChatModule: FlowNodeTemplateType = {
|
||||
Input_Template_History,
|
||||
Input_Template_Dataset_Quote,
|
||||
Input_Template_File_Link_Prompt,
|
||||
|
||||
{ ...Input_Template_UserChatInput, toolDescription: i18nT('workflow:user_question') }
|
||||
],
|
||||
outputs: [
|
||||
@@ -151,20 +130,6 @@ export const AiChatModule: FlowNodeTemplateType = {
|
||||
description: i18nT('common:core.module.output.description.Ai response content'),
|
||||
valueType: WorkflowIOValueTypeEnum.string,
|
||||
type: FlowNodeOutputTypeEnum.static
|
||||
},
|
||||
{
|
||||
id: NodeOutputKeyEnum.reasoningText,
|
||||
key: NodeOutputKeyEnum.reasoningText,
|
||||
required: false,
|
||||
label: i18nT('workflow:reasoning_text'),
|
||||
valueType: WorkflowIOValueTypeEnum.string,
|
||||
type: FlowNodeOutputTypeEnum.static,
|
||||
invalid: true,
|
||||
invalidCondition: ({ inputs, llmModelList }) => {
|
||||
const model = inputs.find((item) => item.key === NodeInputKeyEnum.aiModel)?.value;
|
||||
const modelItem = llmModelList.find((item) => item.model === model);
|
||||
return modelItem?.reasoning !== true;
|
||||
}
|
||||
}
|
||||
]
|
||||
};
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
import {
|
||||
datasetQuoteValueDesc,
|
||||
datasetSelectValueDesc,
|
||||
FlowNodeInputTypeEnum,
|
||||
FlowNodeOutputTypeEnum,
|
||||
FlowNodeTypeEnum
|
||||
@@ -39,8 +38,7 @@ export const DatasetSearchModule: FlowNodeTemplateType = {
|
||||
label: i18nT('common:core.module.input.label.Select dataset'),
|
||||
value: [],
|
||||
valueType: WorkflowIOValueTypeEnum.selectDataset,
|
||||
required: true,
|
||||
valueDesc: datasetSelectValueDesc
|
||||
required: true
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.datasetSimilarity,
|
||||
|
||||
@@ -43,12 +43,14 @@ export const ToolModule: FlowNodeTemplateType = {
|
||||
key: NodeInputKeyEnum.aiChatTemperature,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
|
||||
label: '',
|
||||
value: undefined,
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatMaxToken,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
|
||||
label: '',
|
||||
value: undefined,
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
{
|
||||
@@ -58,30 +60,6 @@ export const ToolModule: FlowNodeTemplateType = {
|
||||
valueType: WorkflowIOValueTypeEnum.boolean,
|
||||
value: true
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatTopP,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.number
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatStopSign,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatResponseFormat,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
{
|
||||
key: NodeInputKeyEnum.aiChatJsonSchema,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden],
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
|
||||
{
|
||||
...Input_Template_System_Prompt,
|
||||
|
||||
7
packages/global/core/workflow/type/io.d.ts
vendored
7
packages/global/core/workflow/type/io.d.ts
vendored
@@ -1,4 +1,3 @@
|
||||
import { LLMModelItemType } from '../../ai/model.d';
|
||||
import { LLMModelTypeEnum } from '../../ai/constants';
|
||||
import { WorkflowIOValueTypeEnum, NodeInputKeyEnum, NodeOutputKeyEnum } from '../constants';
|
||||
import { FlowNodeInputTypeEnum, FlowNodeOutputTypeEnum } from '../node/constant';
|
||||
@@ -78,12 +77,6 @@ export type FlowNodeOutputItemType = {
|
||||
defaultValue?: any;
|
||||
required?: boolean;
|
||||
|
||||
invalid?: boolean;
|
||||
invalidCondition?: (e: {
|
||||
inputs: FlowNodeInputItemType[];
|
||||
llmModelList: LLMModelItemType[];
|
||||
}) => boolean;
|
||||
|
||||
// component params
|
||||
customFieldConfig?: CustomFieldConfigType;
|
||||
};
|
||||
|
||||
11
packages/global/core/workflow/type/node.d.ts
vendored
11
packages/global/core/workflow/type/node.d.ts
vendored
@@ -43,17 +43,6 @@ export type FlowNodeCommonType = {
|
||||
pluginId?: string;
|
||||
isFolder?: boolean;
|
||||
// pluginType?: AppTypeEnum;
|
||||
pluginData?: PluginDataType;
|
||||
};
|
||||
|
||||
export type PluginDataType = {
|
||||
version: string;
|
||||
diagram?: string;
|
||||
userGuide?: string;
|
||||
courseUrl?: string;
|
||||
name?: string;
|
||||
avatar?: string;
|
||||
error?: string;
|
||||
};
|
||||
|
||||
type HandleType = {
|
||||
|
||||
10
packages/global/support/user/api.d.ts
vendored
10
packages/global/support/user/api.d.ts
vendored
@@ -1,9 +1,5 @@
|
||||
import { MemberGroupSchemaType, MemberGroupType } from 'support/permission/memberGroup/type';
|
||||
import { OAuthEnum } from './constant';
|
||||
import { TrackRegisterParams } from './login/api';
|
||||
import { TeamMemberStatusEnum } from './team/constant';
|
||||
import { OrgType } from './team/org/type';
|
||||
import { TeamMemberItemType } from './team/type';
|
||||
|
||||
export type PostLoginProps = {
|
||||
username: string;
|
||||
@@ -25,9 +21,3 @@ export type FastLoginProps = {
|
||||
token: string;
|
||||
code: string;
|
||||
};
|
||||
|
||||
export type SearchResult = {
|
||||
members: Omit<TeamMemberItemType, 'teamId' | 'permission'>[];
|
||||
orgs: Omit<OrgType, 'permission' | 'members'>[];
|
||||
groups: MemberGroupSchemaType[];
|
||||
};
|
||||
|
||||
@@ -13,7 +13,6 @@ export type CreateTeamProps = {
|
||||
defaultTeam?: boolean;
|
||||
memberName?: string;
|
||||
memberAvatar?: string;
|
||||
notificationAccount?: string;
|
||||
};
|
||||
export type UpdateTeamProps = Omit<ThirdPartyAccountType, 'externalWorkflowVariable'> & {
|
||||
name?: string;
|
||||
@@ -40,12 +39,6 @@ export type UpdateInviteProps = {
|
||||
tmbId: string;
|
||||
status: TeamMemberSchema['status'];
|
||||
};
|
||||
|
||||
export type UpdateStatusProps = {
|
||||
tmbId: string;
|
||||
status: TeamMemberSchema['status'];
|
||||
};
|
||||
|
||||
export type InviteMemberResponse = Record<
|
||||
'invite' | 'inValid' | 'inTeam',
|
||||
{ username: string; userId: string }[]
|
||||
|
||||
5
packages/global/support/user/team/type.d.ts
vendored
5
packages/global/support/user/team/type.d.ts
vendored
@@ -34,7 +34,6 @@ export type TeamTagSchema = TeamTagItemType & {
|
||||
_id: string;
|
||||
teamId: string;
|
||||
createTime: Date;
|
||||
updateTime?: Date;
|
||||
};
|
||||
|
||||
export type TeamMemberSchema = {
|
||||
@@ -42,7 +41,6 @@ export type TeamMemberSchema = {
|
||||
teamId: string;
|
||||
userId: string;
|
||||
createTime: Date;
|
||||
updateTime?: Date;
|
||||
name: string;
|
||||
role: `${TeamMemberRoleEnum}`;
|
||||
status: `${TeamMemberStatusEnum}`;
|
||||
@@ -81,9 +79,6 @@ export type TeamMemberItemType = {
|
||||
role: `${TeamMemberRoleEnum}`;
|
||||
status: `${TeamMemberStatusEnum}`;
|
||||
permission: TeamPermission;
|
||||
contact?: string;
|
||||
createTime: Date;
|
||||
updateTime?: Date;
|
||||
};
|
||||
|
||||
export type TeamTagItemType = {
|
||||
|
||||
2
packages/global/support/user/type.d.ts
vendored
2
packages/global/support/user/type.d.ts
vendored
@@ -17,7 +17,6 @@ export type UserModelSchema = {
|
||||
fastgpt_sem?: {
|
||||
keyword: string;
|
||||
};
|
||||
contact?: string;
|
||||
};
|
||||
|
||||
export type UserType = {
|
||||
@@ -30,7 +29,6 @@ export type UserType = {
|
||||
standardInfo?: standardInfoType;
|
||||
notificationAccount?: string;
|
||||
permission: TeamPermission;
|
||||
contact?: string;
|
||||
};
|
||||
|
||||
export type SourceMemberType = {
|
||||
|
||||
@@ -1,4 +0,0 @@
|
||||
export const generateCsv = (headers: string[], data: string[][]) => {
|
||||
const csv = [headers.join(','), ...data.map((row) => row.join(','))].join('\n');
|
||||
return csv;
|
||||
};
|
||||
@@ -5,7 +5,6 @@ import { ClientSession, Types } from '../../../common/mongo';
|
||||
import { guessBase64ImageType } from '../utils';
|
||||
import { readFromSecondary } from '../../mongo/utils';
|
||||
import { addHours } from 'date-fns';
|
||||
import { imageFileType } from '@fastgpt/global/common/file/constants';
|
||||
|
||||
export const maxImgSize = 1024 * 1024 * 12;
|
||||
const base64MimeRegex = /data:image\/([^\)]+);base64/;
|
||||
@@ -26,19 +25,12 @@ export async function uploadMongoImg({
|
||||
const [base64Mime, base64Data] = base64Img.split(',');
|
||||
// Check if mime type is valid
|
||||
if (!base64MimeRegex.test(base64Mime)) {
|
||||
return Promise.reject('Invalid image base64');
|
||||
return Promise.reject('Invalid image mime type');
|
||||
}
|
||||
|
||||
const mime = `image/${base64Mime.match(base64MimeRegex)?.[1] ?? 'image/jpeg'}`;
|
||||
const binary = Buffer.from(base64Data, 'base64');
|
||||
let extension = mime.split('/')[1];
|
||||
if (extension.startsWith('x-')) {
|
||||
extension = extension.substring(2); // Remove 'x-' prefix
|
||||
}
|
||||
|
||||
if (!extension || !imageFileType.includes(`.${extension}`)) {
|
||||
return Promise.reject(`Invalid image file type: ${mime}`);
|
||||
}
|
||||
const extension = mime.split('/')[1];
|
||||
|
||||
const { _id } = await MongoImage.create({
|
||||
teamId,
|
||||
|
||||
@@ -63,13 +63,6 @@ export const getMongoModel = <T>(name: string, schema: mongoose.Schema) => {
|
||||
|
||||
const model = connectionMongo.model<T>(name, schema);
|
||||
|
||||
// Sync index
|
||||
syncMongoIndex(model);
|
||||
|
||||
return model;
|
||||
};
|
||||
|
||||
const syncMongoIndex = async (model: Model<any>) => {
|
||||
if (process.env.SYNC_INDEX !== '0' && process.env.NODE_ENV !== 'test') {
|
||||
try {
|
||||
model.syncIndexes({ background: true });
|
||||
@@ -77,6 +70,8 @@ const syncMongoIndex = async (model: Model<any>) => {
|
||||
addLog.error('Create index error', error);
|
||||
}
|
||||
}
|
||||
|
||||
return model;
|
||||
};
|
||||
|
||||
export const ReadPreference = connectionMongo.mongo.ReadPreference;
|
||||
|
||||
@@ -25,7 +25,7 @@ export const countGptMessagesTokens = async (
|
||||
number
|
||||
>({
|
||||
name: WorkerNameEnum.countGptMessagesTokens,
|
||||
maxReservedThreads: global.systemEnv?.tokenWorkers || 30
|
||||
maxReservedThreads: global.systemEnv?.tokenWorkers || 50
|
||||
});
|
||||
|
||||
const total = await workerController.run({ messages, tools, functionCall });
|
||||
|
||||
@@ -24,7 +24,7 @@ export const aiTranscriptions = async ({
|
||||
? { url: modelData.requestUrl }
|
||||
: {
|
||||
baseURL: aiAxiosConfig.baseUrl,
|
||||
url: '/audio/transcriptions'
|
||||
url: modelData.requestUrl || '/audio/transcriptions'
|
||||
}),
|
||||
headers: {
|
||||
Authorization: modelData.requestAuth
|
||||
|
||||
@@ -1,9 +1,7 @@
import OpenAI from '@fastgpt/global/core/ai';
import {
  ChatCompletionCreateParamsNonStreaming,
  ChatCompletionCreateParamsStreaming,
  StreamChatType,
  UnStreamChatType
  ChatCompletionCreateParamsStreaming
} from '@fastgpt/global/core/ai/type';
import { getErrText } from '@fastgpt/global/common/error/utils';
import { addLog } from '../../common/system/log';
@@ -40,30 +38,29 @@ export const getAxiosConfig = (props?: { userKey?: OpenaiAccountType }) => {
  };
};

export const createChatCompletion = async ({
type CompletionsBodyType =
  | ChatCompletionCreateParamsNonStreaming
  | ChatCompletionCreateParamsStreaming;
type InferResponseType<T extends CompletionsBodyType> =
  T extends ChatCompletionCreateParamsStreaming
    ? OpenAI.Chat.Completions.ChatCompletionChunk
    : OpenAI.Chat.Completions.ChatCompletion;

export const createChatCompletion = async <T extends CompletionsBodyType>({
  body,
  userKey,
  timeout,
  options
}: {
  body: ChatCompletionCreateParamsNonStreaming | ChatCompletionCreateParamsStreaming;
  body: T;
  userKey?: OpenaiAccountType;
  timeout?: number;
  options?: OpenAI.RequestOptions;
}): Promise<
  {
    getEmptyResponseTip: () => string;
  } & (
    | {
        response: StreamChatType;
        isStreamResponse: true;
      }
    | {
        response: UnStreamChatType;
        isStreamResponse: false;
      }
  )
> => {
}): Promise<{
  response: InferResponseType<T>;
  isStreamResponse: boolean;
  getEmptyResponseTip: () => string;
}> => {
  try {
    const modelConstantsData = getLLMModel(body.model);

@@ -99,17 +96,9 @@ export const createChatCompletion = async ({
    return i18nT('chat:LLM_model_response_empty');
  };

  if (isStreamResponse) {
    return {
      response,
      isStreamResponse: true,
      getEmptyResponseTip
    };
  }

  return {
    response,
    isStreamResponse: false,
    response: response as InferResponseType<T>,
    isStreamResponse,
    getEmptyResponseTip
  };
} catch (error) {
@@ -8,12 +8,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 120000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -36,12 +30,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 120000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -64,12 +52,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 900000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": false,
|
||||
"functionCall": false,
|
||||
@@ -92,12 +74,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 120000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -120,8 +96,6 @@
|
||||
"maxResponse": 1000,
|
||||
"quoteMaxToken": 6000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": false,
|
||||
"functionCall": false,
|
||||
@@ -144,8 +118,6 @@
|
||||
"maxResponse": 1000,
|
||||
"quoteMaxToken": 6000,
|
||||
"maxTemperature": 0.99,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": false,
|
||||
"functionCall": false,
|
||||
|
||||
@@ -8,8 +8,6 @@
|
||||
"maxResponse": 8000,
|
||||
"quoteMaxToken": 100000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -32,8 +30,6 @@
|
||||
"maxResponse": 8000,
|
||||
"quoteMaxToken": 100000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -56,8 +52,6 @@
|
||||
"maxResponse": 8000,
|
||||
"quoteMaxToken": 100000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -80,8 +74,6 @@
|
||||
"maxResponse": 4096,
|
||||
"quoteMaxToken": 100000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
|
||||
@@ -5,12 +5,9 @@
|
||||
"model": "deepseek-chat",
|
||||
"name": "Deepseek-chat",
|
||||
"maxContext": 64000,
|
||||
"maxResponse": 8000,
|
||||
"maxResponse": 4096,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"responseFormatList": ["text", "json_object"],
|
||||
"showStopSign": true,
|
||||
"maxTemperature": 1.5,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -28,7 +25,7 @@
|
||||
"model": "deepseek-reasoner",
|
||||
"name": "Deepseek-reasoner",
|
||||
"maxContext": 64000,
|
||||
"maxResponse": 8000,
|
||||
"maxResponse": 4096,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": null,
|
||||
"vision": false,
|
||||
@@ -45,9 +42,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -1,102 +1,6 @@
|
||||
{
|
||||
"provider": "Doubao",
|
||||
"list": [
|
||||
{
|
||||
"model": "Doubao-1.5-lite-32k",
|
||||
"name": "Doubao-1.5-lite-32k",
|
||||
"maxContext": 32000,
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 32000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-1.5-pro-32k",
|
||||
"name": "Doubao-1.5-pro-32k",
|
||||
"maxContext": 32000,
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 32000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-1.5-pro-256k",
|
||||
"name": "Doubao-1.5-pro-256k",
|
||||
"maxContext": 256000,
|
||||
"maxResponse": 12000,
|
||||
"quoteMaxToken": 256000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-1.5-vision-pro-32k",
|
||||
"name": "Doubao-1.5-vision-pro-32k",
|
||||
"maxContext": 32000,
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 32000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-lite-4k",
|
||||
"name": "Doubao-lite-4k",
|
||||
@@ -104,8 +8,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 4000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -128,8 +30,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 32000,
|
||||
"maxTemperature": 1,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
@@ -165,9 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-vision-lite-32k",
|
||||
@@ -189,9 +87,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-pro-4k",
|
||||
@@ -213,9 +109,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-pro-32k",
|
||||
@@ -237,9 +131,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-pro-128k",
|
||||
@@ -261,9 +153,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-vision-pro-32k",
|
||||
@@ -285,25 +175,21 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-embedding-large",
|
||||
"name": "Doubao-embedding-large",
|
||||
"defaultToken": 512,
|
||||
"maxToken": 4096,
|
||||
"type": "embedding",
|
||||
"normalization": true
|
||||
"type": "embedding"
|
||||
},
|
||||
{
|
||||
"model": "Doubao-embedding",
|
||||
"name": "Doubao-embedding",
|
||||
"defaultToken": 512,
|
||||
"maxToken": 4096,
|
||||
"type": "embedding",
|
||||
"normalization": true
|
||||
"type": "embedding"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "ERNIE-4.0-Turbo-8K",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "ERNIE-Lite-8K",
|
||||
@@ -69,9 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "ERNIE-Speed-128K",
|
||||
@@ -93,9 +87,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Embedding-V1",
|
||||
|
||||
@@ -1,54 +1,6 @@
|
||||
{
|
||||
"provider": "Gemini",
|
||||
"list": [
|
||||
{
|
||||
"model": "gemini-2.0-flash",
|
||||
"name": "gemini-2.0-flash",
|
||||
"maxContext": 1000000,
|
||||
"maxResponse": 8000,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": 1,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
},
|
||||
{
|
||||
"model": "gemini-2.0-pro-exp",
|
||||
"name": "gemini-2.0-pro-exp",
|
||||
"maxContext": 2000000,
|
||||
"maxResponse": 8000,
|
||||
"quoteMaxToken": 100000,
|
||||
"maxTemperature": 1,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": false,
|
||||
"defaultSystemChatPrompt": "",
|
||||
"datasetProcess": true,
|
||||
"usedInClassify": true,
|
||||
"customCQPrompt": "",
|
||||
"usedInExtractFields": true,
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
},
|
||||
{
|
||||
"model": "gemini-1.5-flash",
|
||||
"name": "gemini-1.5-flash",
|
||||
@@ -69,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gemini-1.5-pro",
|
||||
@@ -93,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gemini-2.0-flash-exp",
|
||||
@@ -117,9 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gemini-2.0-flash-thinking-exp-1219",
|
||||
@@ -141,9 +87,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gemini-2.0-flash-thinking-exp-01-21",
|
||||
@@ -165,9 +109,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gemini-exp-1206",
|
||||
@@ -189,9 +131,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "text-embedding-004",
|
||||
|
||||
@@ -20,9 +20,7 @@
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "llama-3.3-70b-versatile",
|
||||
@@ -43,9 +41,7 @@
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-lite",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-pro",
|
||||
@@ -69,9 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-standard",
|
||||
@@ -93,9 +87,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-turbo-vision",
|
||||
@@ -117,9 +109,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-turbo",
|
||||
@@ -141,9 +131,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-vision",
|
||||
@@ -165,9 +153,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "hunyuan-embedding",
|
||||
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "internlm3-8b-instruct",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "abab6.5s-chat",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "speech-01-turbo",
|
||||
@@ -241,4 +237,4 @@
|
||||
"type": "tts"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "ministral-8b-latest",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "mistral-large-latest",
|
||||
@@ -69,9 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "mistral-small-latest",
|
||||
@@ -93,9 +87,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -21,10 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "moonshot-v1-32k",
|
||||
@@ -46,10 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "moonshot-v1-128k",
|
||||
@@ -71,10 +65,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -8,13 +8,6 @@
|
||||
"maxResponse": 16000,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": 1.2,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object",
|
||||
"json_schema"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": true,
|
||||
@@ -36,13 +29,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": 1.2,
|
||||
"showTopP": true,
|
||||
"responseFormatList": [
|
||||
"text",
|
||||
"json_object",
|
||||
"json_schema"
|
||||
],
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": true,
|
||||
@@ -82,9 +68,7 @@
|
||||
"fieldMap": {
|
||||
"max_tokens": "max_completion_tokens"
|
||||
},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "o1-mini",
|
||||
@@ -110,9 +94,7 @@
|
||||
"fieldMap": {
|
||||
"max_tokens": "max_completion_tokens"
|
||||
},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "o1",
|
||||
@@ -138,9 +120,7 @@
|
||||
"fieldMap": {
|
||||
"max_tokens": "max_completion_tokens"
|
||||
},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "o1-preview",
|
||||
@@ -166,9 +146,7 @@
|
||||
"fieldMap": {
|
||||
"max_tokens": "max_completion_tokens"
|
||||
},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "gpt-3.5-turbo",
|
||||
@@ -177,8 +155,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 13000,
|
||||
"maxTemperature": 1.2,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": false,
|
||||
"toolChoice": true,
|
||||
"functionCall": true,
|
||||
@@ -199,8 +175,6 @@
|
||||
"maxResponse": 4000,
|
||||
"quoteMaxToken": 60000,
|
||||
"maxTemperature": 1.2,
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"vision": true,
|
||||
"toolChoice": true,
|
||||
"functionCall": true,
|
||||
@@ -275,4 +249,4 @@
|
||||
"type": "stt"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,4 +0,0 @@
{
  "provider": "PPIO",
  "list": []
}
@@ -21,10 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen-plus",
|
||||
@@ -46,10 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen-vl-plus",
|
||||
@@ -69,9 +63,7 @@
|
||||
"usedInQueryExtension": true,
|
||||
"customExtractPrompt": "",
|
||||
"usedInToolCall": true,
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen-max",
|
||||
@@ -93,10 +85,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen-vl-max",
|
||||
@@ -118,9 +107,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen-coder-turbo",
|
||||
@@ -142,9 +129,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen2.5-7b-instruct",
|
||||
@@ -166,10 +151,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen2.5-14b-instruct",
|
||||
@@ -191,10 +173,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen2.5-32b-instruct",
|
||||
@@ -216,10 +195,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "qwen2.5-72b-instruct",
|
||||
@@ -241,17 +217,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true,
|
||||
"responseFormatList": ["text", "json_object"]
|
||||
},
|
||||
{
|
||||
"model": "text-embedding-v3",
|
||||
"name": "text-embedding-v3",
|
||||
"defaultToken": 512,
|
||||
"maxToken": 8000,
|
||||
"type": "embedding"
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "Qwen/Qwen2-VL-72B-Instruct",
|
||||
@@ -44,9 +42,7 @@
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"defaultConfig": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "deepseek-ai/DeepSeek-V2.5",
|
||||
@@ -68,9 +64,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "BAAI/bge-m3",
|
||||
@@ -207,4 +201,4 @@
|
||||
"type": "rerank"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -19,9 +19,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "generalv3",
|
||||
@@ -41,9 +39,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "pro-128k",
|
||||
@@ -63,9 +59,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "generalv3.5",
|
||||
@@ -85,9 +79,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "max-32k",
|
||||
@@ -109,9 +101,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "4.0Ultra",
|
||||
@@ -133,9 +123,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -19,9 +19,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1-8k",
|
||||
@@ -41,9 +39,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1-32k",
|
||||
@@ -63,9 +59,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1-128k",
|
||||
@@ -85,9 +79,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1-256k",
|
||||
@@ -107,9 +99,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1o-vision-32k",
|
||||
@@ -129,9 +119,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1v-8k",
|
||||
@@ -151,9 +139,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-1v-32k",
|
||||
@@ -173,9 +159,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-2-mini",
|
||||
@@ -195,9 +179,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-2-16k",
|
||||
@@ -217,9 +199,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-2-16k-exp",
|
||||
@@ -239,9 +219,7 @@
|
||||
"customCQPrompt": "",
|
||||
"customExtractPrompt": "",
|
||||
"defaultSystemChatPrompt": "",
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "step-tts-mini",
|
||||
@@ -327,4 +305,4 @@
|
||||
"type": "tts"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
@@ -21,9 +21,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
},
|
||||
{
|
||||
"model": "yi-vision-v2",
|
||||
@@ -45,9 +43,7 @@
|
||||
"usedInToolCall": true,
|
||||
"defaultConfig": {},
|
||||
"fieldMap": {},
|
||||
"type": "llm",
|
||||
"showTopP": true,
|
||||
"showStopSign": true
|
||||
"type": "llm"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -11,11 +11,7 @@ import {
  ReRankModelItemType
} from '@fastgpt/global/core/ai/model.d';
import { debounce } from 'lodash';
import {
  getModelProvider,
  ModelProviderIdType,
  ModelProviderType
} from '@fastgpt/global/core/ai/provider';
import { ModelProviderType } from '@fastgpt/global/core/ai/provider';
import { findModelFromAlldata } from '../model';
import {
  reloadFastGPTConfigBuffer,
@@ -31,12 +27,7 @@ import { delay } from '@fastgpt/global/common/system/utils';
export const loadSystemModels = async (init = false) => {
  const getProviderList = () => {
    const currentFileUrl = new URL(import.meta.url);
    const filePath = decodeURIComponent(
      process.platform === 'win32'
        ? currentFileUrl.pathname.substring(1) // Remove leading slash on Windows
        : currentFileUrl.pathname
    );
    const modelsPath = path.join(path.dirname(filePath), 'provider');
    const modelsPath = path.join(path.dirname(currentFileUrl.pathname), 'provider');

    return fs.readdirSync(modelsPath) as string[];
  };
@@ -100,7 +91,7 @@ export const loadSystemModels = async (init = false) => {
    await Promise.all(
      providerList.map(async (name) => {
        const fileContent = (await import(`./provider/${name}`))?.default as {
          provider: ModelProviderIdType;
          provider: ModelProviderType;
          list: SystemModelItemType[];
        };

@@ -110,7 +101,7 @@ export const loadSystemModels = async (init = false) => {
        const modelData: any = {
          ...fileModel,
          ...dbModel?.metadata,
          provider: getModelProvider(dbModel?.metadata?.provider || fileContent.provider).id,
          provider: dbModel?.metadata?.provider || fileContent.provider,
          type: dbModel?.metadata?.type || fileModel.type,
          isCustom: false
        };
@@ -152,7 +143,6 @@ export const loadSystemModels = async (init = false) => {
    console.error('Load models error', error);
    // @ts-ignore
    global.systemModelList = undefined;
    return Promise.reject(error);
  }
};
@@ -32,14 +32,12 @@ export async function getVectorsByText({ model, input, type }: GetVectorProps) {
      model: model.model,
      input: [input]
    },
    model.requestUrl
    model.requestUrl && model.requestAuth
      ? {
          path: model.requestUrl,
          headers: model.requestAuth
            ? {
                Authorization: `Bearer ${model.requestAuth}`
              }
            : undefined
          headers: {
            Authorization: `Bearer ${model.requestAuth}`
          }
        }
      : {}
  )
@@ -56,14 +54,7 @@ export async function getVectorsByText({ model, input, type }: GetVectorProps) {

  const [tokens, vectors] = await Promise.all([
    countPromptTokens(input),
    Promise.all(
      res.data
        .map((item) => unityDimensional(item.embedding))
        .map((item) => {
          if (model.normalization) return normalization(item);
          return item;
        })
    )
    Promise.all(res.data.map((item) => unityDimensional(item.embedding)))
  ]);

  return {
@@ -94,15 +85,3 @@ function unityDimensional(vector: number[]) {

  return resultVector.concat(zeroVector);
}
// normalization processing
function normalization(vector: number[]) {
  if (vector.some((item) => item > 1)) {
    // Calculate the Euclidean norm (L2 norm)
    const norm = Math.sqrt(vector.reduce((sum, val) => sum + val * val, 0));

    // Normalize the vector by dividing each component by the norm
    return vector.map((val) => val / norm);
  }

  return vector;
}
@@ -25,11 +25,8 @@ export function reRankRecall({
  if (!model) {
    return Promise.reject('no rerank model');
  }
  if (documents.length === 0) {
    return Promise.resolve([]);
  }

  const { baseUrl, authorization } = getAxiosConfig();
  const { baseUrl, authorization } = getAxiosConfig({});

  let start = Date.now();
  return POST<PostReRankResponse>(
@@ -41,7 +38,7 @@ export function reRankRecall({
    },
    {
      headers: {
        Authorization: model.requestAuth ? `Bearer ${model.requestAuth}` : authorization
        Authorization: model.requestAuth ? model.requestAuth : authorization
      },
      timeout: 30000
    }
@@ -42,27 +42,17 @@ type CompletionsBodyType =
|
||||
| ChatCompletionCreateParamsStreaming;
|
||||
type InferCompletionsBody<T> = T extends { stream: true }
|
||||
? ChatCompletionCreateParamsStreaming
|
||||
: T extends { stream: false }
|
||||
? ChatCompletionCreateParamsNonStreaming
|
||||
: ChatCompletionCreateParamsNonStreaming | ChatCompletionCreateParamsStreaming;
|
||||
: ChatCompletionCreateParamsNonStreaming;
|
||||
|
||||
export const llmCompletionsBodyFormat = <T extends CompletionsBodyType>(
|
||||
body: T & {
|
||||
response_format?: any;
|
||||
json_schema?: string;
|
||||
stop?: string;
|
||||
},
|
||||
body: T,
|
||||
model: string | LLMModelItemType
|
||||
): InferCompletionsBody<T> => {
|
||||
const modelData = typeof model === 'string' ? getLLMModel(model) : model;
|
||||
if (!modelData) {
|
||||
return body as unknown as InferCompletionsBody<T>;
|
||||
return body as InferCompletionsBody<T>;
|
||||
}
|
||||
|
||||
const response_format = body.response_format;
|
||||
const json_schema = body.json_schema ?? undefined;
|
||||
const stop = body.stop ?? undefined;
|
||||
|
||||
const requestBody: T = {
|
||||
...body,
|
||||
temperature:
|
||||
@@ -72,14 +62,7 @@ export const llmCompletionsBodyFormat = <T extends CompletionsBodyType>(
|
||||
temperature: body.temperature
|
||||
})
|
||||
: undefined,
|
||||
...modelData?.defaultConfig,
|
||||
response_format: response_format
|
||||
? {
|
||||
type: response_format,
|
||||
json_schema
|
||||
}
|
||||
: undefined,
|
||||
stop: stop?.split('|')
|
||||
...modelData?.defaultConfig
|
||||
};
|
||||
|
||||
// field map
|
||||
@@ -92,7 +75,9 @@ export const llmCompletionsBodyFormat = <T extends CompletionsBodyType>(
|
||||
});
|
||||
}
|
||||
|
||||
return requestBody as unknown as InferCompletionsBody<T>;
|
||||
// console.log(requestBody);
|
||||
|
||||
return requestBody as InferCompletionsBody<T>;
|
||||
};
|
||||
|
||||
export const llmStreamResponseToText = async (response: StreamChatType) => {
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
import { connectionMongo, getMongoModel } from '../../common/mongo';
|
||||
const { Schema } = connectionMongo;
|
||||
import { ChatSchema as ChatType } from '@fastgpt/global/core/chat/type.d';
|
||||
import { ChatSourceEnum, ChatSourceMap } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatSourceMap } from '@fastgpt/global/core/chat/constants';
|
||||
import {
|
||||
TeamCollectionName,
|
||||
TeamMemberCollectionName
|
||||
@@ -52,10 +52,8 @@ const ChatSchema = new Schema({
|
||||
},
|
||||
source: {
|
||||
type: String,
|
||||
required: true,
|
||||
enum: Object.values(ChatSourceEnum)
|
||||
required: true
|
||||
},
|
||||
sourceName: String,
|
||||
shareId: {
|
||||
type: String
|
||||
},
|
||||
@@ -90,7 +88,7 @@ try {
|
||||
ChatSchema.index({ appId: 1, chatId: 1 });
|
||||
|
||||
// get chat logs;
|
||||
ChatSchema.index({ teamId: 1, appId: 1, updateTime: -1, sources: 1 });
|
||||
ChatSchema.index({ teamId: 1, appId: 1, updateTime: -1 });
|
||||
// get share chat history
|
||||
ChatSchema.index({ shareId: 1, outLinkUid: 1, updateTime: -1 });
|
||||
|
||||
|
||||
@@ -1,10 +1,6 @@
|
||||
import type { AIChatItemType, UserChatItemType } from '@fastgpt/global/core/chat/type.d';
|
||||
import { MongoApp } from '../app/schema';
|
||||
import {
|
||||
ChatItemValueTypeEnum,
|
||||
ChatRoleEnum,
|
||||
ChatSourceEnum
|
||||
} from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { MongoChatItem } from './chatItemSchema';
|
||||
import { MongoChat } from './chatSchema';
|
||||
import { addLog } from '../../common/system/log';
|
||||
@@ -26,8 +22,7 @@ type Props = {
|
||||
variables?: Record<string, any>;
|
||||
isUpdateUseTime: boolean;
|
||||
newTitle: string;
|
||||
source: `${ChatSourceEnum}`;
|
||||
sourceName?: string;
|
||||
source: string;
|
||||
shareId?: string;
|
||||
outLinkUid?: string;
|
||||
content: [UserChatItemType & { dataId?: string }, AIChatItemType & { dataId?: string }];
|
||||
@@ -45,7 +40,6 @@ export async function saveChat({
|
||||
isUpdateUseTime,
|
||||
newTitle,
|
||||
source,
|
||||
sourceName,
|
||||
shareId,
|
||||
outLinkUid,
|
||||
content,
|
||||
@@ -102,7 +96,6 @@ export async function saveChat({
|
||||
pluginInputs,
|
||||
title: newTitle,
|
||||
source,
|
||||
sourceName,
|
||||
shareId,
|
||||
outLinkUid,
|
||||
metadata: metadataUpdate,
|
||||
|
||||
@@ -92,10 +92,7 @@ export const loadRequestMessages = async ({
  const baseURL = process.env.FE_DOMAIN;
  if (!baseURL) return text;
  // Match image links of the form /api/system/img/xxx.xx and prepend the baseURL
  return text.replace(
    /(?<!https?:\/\/[^\s]*)(?:\/api\/system\/img\/[^\s.]*\.[^\s]*)/g,
    (match) => `${baseURL}${match}`
  );
  return text.replace(/(\/api\/system\/img\/[^\s.]*\.[^\s]*)/g, (match, p1) => `${baseURL}${p1}`);
};
const parseSystemMessage = (
|
||||
content: string | ChatCompletionContentPartText[]
|
||||
@@ -197,11 +194,7 @@ export const loadRequestMessages = async ({
|
||||
addLog.info(`Filter invalid image: ${imgUrl}`);
|
||||
return;
|
||||
}
|
||||
} catch (error: any) {
|
||||
if (error?.response?.status === 405) {
|
||||
return item;
|
||||
}
|
||||
addLog.warn(`Filter invalid image: ${imgUrl}`, { error });
|
||||
} catch (error) {
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -37,7 +37,12 @@ try {
  { teamId: 1, datasetId: 1, fullTextToken: 'text' },
  {
    name: 'teamId_1_datasetId_1_fullTextToken_text',
    default_language: 'none'
    default_language: 'none',
    collation: {
      locale: 'simple', // use simple matching rules
      strength: 2, // ignore case
      caseLevel: false // further ensure case-insensitive matching
    }
  }
);
DatasetDataTextSchema.index({ dataId: 1 }, { unique: true });
@@ -5,7 +5,7 @@ import {
|
||||
} from '@fastgpt/global/core/dataset/constants';
|
||||
import { recallFromVectorStore } from '../../../common/vectorStore/controller';
|
||||
import { getVectorsByText } from '../../ai/embedding';
|
||||
import { getEmbeddingModel, getDefaultRerankModel, getLLMModel } from '../../ai/model';
|
||||
import { getEmbeddingModel, getDefaultRerankModel } from '../../ai/model';
|
||||
import { MongoDatasetData } from '../data/schema';
|
||||
import {
|
||||
DatasetDataTextSchemaType,
|
||||
@@ -23,24 +23,18 @@ import json5 from 'json5';
|
||||
import { MongoDatasetCollectionTags } from '../tag/schema';
|
||||
import { readFromSecondary } from '../../../common/mongo/utils';
|
||||
import { MongoDatasetDataText } from '../data/dataTextSchema';
|
||||
import { ChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { POST } from '../../../common/api/plusRequest';
|
||||
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { datasetSearchQueryExtension } from './utils';
|
||||
|
||||
export type SearchDatasetDataProps = {
|
||||
histories: ChatItemType[];
|
||||
type SearchDatasetDataProps = {
|
||||
teamId: string;
|
||||
model: string;
|
||||
similarity?: number; // min distance
|
||||
limit: number; // max Token limit
|
||||
datasetIds: string[];
|
||||
searchMode?: `${DatasetSearchModeEnum}`;
|
||||
usingReRank?: boolean;
|
||||
reRankQuery: string;
|
||||
queries: string[];
|
||||
|
||||
[NodeInputKeyEnum.datasetSimilarity]?: number; // min distance
|
||||
[NodeInputKeyEnum.datasetMaxTokens]: number; // max Token limit
|
||||
[NodeInputKeyEnum.datasetSearchMode]?: `${DatasetSearchModeEnum}`;
|
||||
[NodeInputKeyEnum.datasetSearchUsingReRank]?: boolean;
|
||||
|
||||
/*
|
||||
{
|
||||
tags: {
|
||||
@@ -56,96 +50,7 @@ export type SearchDatasetDataProps = {
|
||||
collectionFilterMatch?: string;
|
||||
};
|
||||
|
||||
export type SearchDatasetDataResponse = {
|
||||
searchRes: SearchDataResponseItemType[];
|
||||
tokens: number;
|
||||
searchMode: `${DatasetSearchModeEnum}`;
|
||||
limit: number;
|
||||
similarity: number;
|
||||
usingReRank: boolean;
|
||||
usingSimilarityFilter: boolean;
|
||||
|
||||
queryExtensionResult?: {
|
||||
model: string;
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
query: string;
|
||||
};
|
||||
deepSearchResult?: { model: string; inputTokens: number; outputTokens: number };
|
||||
};
|
||||
|
||||
export const datasetDataReRank = async ({
|
||||
data,
|
||||
query
|
||||
}: {
|
||||
data: SearchDataResponseItemType[];
|
||||
query: string;
|
||||
}): Promise<SearchDataResponseItemType[]> => {
|
||||
const results = await reRankRecall({
|
||||
query,
|
||||
documents: data.map((item) => ({
|
||||
id: item.id,
|
||||
text: `${item.q}\n${item.a}`
|
||||
}))
|
||||
});
|
||||
|
||||
if (results.length === 0) {
|
||||
return Promise.reject('Rerank error');
|
||||
}
|
||||
|
||||
// add new score to data
|
||||
const mergeResult = results
|
||||
.map((item, index) => {
|
||||
const target = data.find((dataItem) => dataItem.id === item.id);
|
||||
if (!target) return null;
|
||||
const score = item.score || 0;
|
||||
|
||||
return {
|
||||
...target,
|
||||
score: [{ type: SearchScoreTypeEnum.reRank, value: score, index }]
|
||||
};
|
||||
})
|
||||
.filter(Boolean) as SearchDataResponseItemType[];
|
||||
|
||||
return mergeResult;
|
||||
};
|
||||
export const filterDatasetDataByMaxTokens = async (
|
||||
data: SearchDataResponseItemType[],
|
||||
maxTokens: number
|
||||
) => {
|
||||
const filterMaxTokensResult = await (async () => {
|
||||
// Count tokens
|
||||
const tokensScoreFilter = await Promise.all(
|
||||
data.map(async (item) => ({
|
||||
...item,
|
||||
tokens: await countPromptTokens(item.q + item.a)
|
||||
}))
|
||||
);
|
||||
|
||||
const results: SearchDataResponseItemType[] = [];
|
||||
let totalTokens = 0;
|
||||
|
||||
for await (const item of tokensScoreFilter) {
|
||||
totalTokens += item.tokens;
|
||||
|
||||
if (totalTokens > maxTokens + 500) {
|
||||
break;
|
||||
}
|
||||
results.push(item);
|
||||
if (totalTokens > maxTokens) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return results.length === 0 ? data.slice(0, 1) : results;
|
||||
})();
|
||||
|
||||
return filterMaxTokensResult;
|
||||
};
|
||||
|
||||
export async function searchDatasetData(
|
||||
props: SearchDatasetDataProps
|
||||
): Promise<SearchDatasetDataResponse> {
|
||||
export async function searchDatasetData(props: SearchDatasetDataProps) {
|
||||
let {
|
||||
teamId,
|
||||
reRankQuery,
|
||||
@@ -383,7 +288,6 @@ export async function searchDatasetData(
|
||||
).lean()
|
||||
]);
|
||||
|
||||
const set = new Map<string, number>();
|
||||
const formatResult = results
|
||||
.map((item, index) => {
|
||||
const collection = collections.find((col) => String(col._id) === String(item.collectionId));
|
||||
@@ -399,6 +303,8 @@ export async function searchDatasetData(
|
||||
return;
|
||||
}
|
||||
|
||||
const score = item?.score || 0;
|
||||
|
||||
const result: SearchDataResponseItemType = {
|
||||
id: String(data._id),
|
||||
updateTime: data.updateTime,
|
||||
@@ -408,24 +314,12 @@ export async function searchDatasetData(
|
||||
datasetId: String(data.datasetId),
|
||||
collectionId: String(data.collectionId),
|
||||
...getCollectionSourceData(collection),
|
||||
score: [{ type: SearchScoreTypeEnum.embedding, value: item?.score || 0, index }]
|
||||
score: [{ type: SearchScoreTypeEnum.embedding, value: score, index }]
|
||||
};
|
||||
|
||||
return result;
|
||||
})
|
||||
.filter((item) => {
|
||||
if (!item) return false;
|
||||
if (set.has(item.id)) return false;
|
||||
set.set(item.id, 1);
|
||||
return true;
|
||||
})
|
||||
.map((item, index) => {
|
||||
if (!item) return;
|
||||
return {
|
||||
...item,
|
||||
score: item.score.map((item) => ({ ...item, index }))
|
||||
};
|
||||
}) as SearchDataResponseItemType[];
|
||||
.filter(Boolean) as SearchDataResponseItemType[];
|
||||
|
||||
return {
|
||||
embeddingRecallResults: formatResult,
|
||||
@@ -561,6 +455,47 @@ export async function searchDatasetData(
|
||||
tokenLen: 0
|
||||
};
|
||||
};
|
||||
const reRankSearchResult = async ({
|
||||
data,
|
||||
query
|
||||
}: {
|
||||
data: SearchDataResponseItemType[];
|
||||
query: string;
|
||||
}): Promise<SearchDataResponseItemType[]> => {
|
||||
try {
|
||||
const results = await reRankRecall({
|
||||
query,
|
||||
documents: data.map((item) => ({
|
||||
id: item.id,
|
||||
text: `${item.q}\n${item.a}`
|
||||
}))
|
||||
});
|
||||
|
||||
if (results.length === 0) {
|
||||
usingReRank = false;
|
||||
return [];
|
||||
}
|
||||
|
||||
// add new score to data
|
||||
const mergeResult = results
|
||||
.map((item, index) => {
|
||||
const target = data.find((dataItem) => dataItem.id === item.id);
|
||||
if (!target) return null;
|
||||
const score = item.score || 0;
|
||||
|
||||
return {
|
||||
...target,
|
||||
score: [{ type: SearchScoreTypeEnum.reRank, value: score, index }]
|
||||
};
|
||||
})
|
||||
.filter(Boolean) as SearchDataResponseItemType[];
|
||||
|
||||
return mergeResult;
|
||||
} catch (error) {
|
||||
usingReRank = false;
|
||||
return [];
|
||||
}
|
||||
};
|
||||
const multiQueryRecall = async ({
|
||||
embeddingLimit,
|
||||
fullTextLimit
|
||||
@@ -645,15 +580,10 @@ export async function searchDatasetData(
|
||||
set.add(str);
|
||||
return true;
|
||||
});
|
||||
try {
|
||||
return await datasetDataReRank({
|
||||
query: reRankQuery,
|
||||
data: filterSameDataResults
|
||||
});
|
||||
} catch (error) {
|
||||
usingReRank = false;
|
||||
return [];
|
||||
}
|
||||
return reRankSearchResult({
|
||||
query: reRankQuery,
|
||||
data: filterSameDataResults
|
||||
});
|
||||
})();
|
||||
|
||||
// embedding recall and fullText recall rrf concat
|
||||
@@ -698,7 +628,31 @@ export async function searchDatasetData(
|
||||
})();
|
||||
|
||||
// token filter
|
||||
const filterMaxTokensResult = await filterDatasetDataByMaxTokens(scoreFilter, maxTokens);
|
||||
const filterMaxTokensResult = await (async () => {
|
||||
const tokensScoreFilter = await Promise.all(
|
||||
scoreFilter.map(async (item) => ({
|
||||
...item,
|
||||
tokens: await countPromptTokens(item.q + item.a)
|
||||
}))
|
||||
);
|
||||
|
||||
const results: SearchDataResponseItemType[] = [];
|
||||
let totalTokens = 0;
|
||||
|
||||
for await (const item of tokensScoreFilter) {
|
||||
totalTokens += item.tokens;
|
||||
|
||||
if (totalTokens > maxTokens + 500) {
|
||||
break;
|
||||
}
|
||||
results.push(item);
|
||||
if (totalTokens > maxTokens) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return results.length === 0 ? scoreFilter.slice(0, 1) : results;
|
||||
})();
|
||||
|
||||
return {
|
||||
searchRes: filterMaxTokensResult,
|
||||
@@ -710,54 +664,3 @@ export async function searchDatasetData(
|
||||
usingSimilarityFilter
|
||||
};
|
||||
}
|
||||
|
||||
export type DefaultSearchDatasetDataProps = SearchDatasetDataProps & {
|
||||
[NodeInputKeyEnum.datasetSearchUsingExtensionQuery]?: boolean;
|
||||
[NodeInputKeyEnum.datasetSearchExtensionModel]?: string;
|
||||
[NodeInputKeyEnum.datasetSearchExtensionBg]?: string;
|
||||
};
|
||||
export const defaultSearchDatasetData = async ({
|
||||
datasetSearchUsingExtensionQuery,
|
||||
datasetSearchExtensionModel,
|
||||
datasetSearchExtensionBg,
|
||||
...props
|
||||
}: DefaultSearchDatasetDataProps): Promise<SearchDatasetDataResponse> => {
|
||||
const query = props.queries[0];
|
||||
|
||||
const extensionModel = datasetSearchUsingExtensionQuery
|
||||
? getLLMModel(datasetSearchExtensionModel)
|
||||
: undefined;
|
||||
|
||||
const { concatQueries, extensionQueries, rewriteQuery, aiExtensionResult } =
|
||||
await datasetSearchQueryExtension({
|
||||
query,
|
||||
extensionModel,
|
||||
extensionBg: datasetSearchExtensionBg
|
||||
});
|
||||
|
||||
const result = await searchDatasetData({
|
||||
...props,
|
||||
reRankQuery: rewriteQuery,
|
||||
queries: concatQueries
|
||||
});
|
||||
|
||||
return {
|
||||
...result,
|
||||
queryExtensionResult: aiExtensionResult
|
||||
? {
|
||||
model: aiExtensionResult.model,
|
||||
inputTokens: aiExtensionResult.inputTokens,
|
||||
outputTokens: aiExtensionResult.outputTokens,
|
||||
query: extensionQueries.join('\n')
|
||||
}
|
||||
: undefined
|
||||
};
|
||||
};
|
||||
|
||||
export type DeepRagSearchProps = SearchDatasetDataProps & {
|
||||
[NodeInputKeyEnum.datasetDeepSearchModel]?: string;
|
||||
[NodeInputKeyEnum.datasetDeepSearchMaxTimes]?: number;
|
||||
[NodeInputKeyEnum.datasetDeepSearchBg]?: string;
|
||||
};
|
||||
export const deepRagSearch = (data: DeepRagSearchProps) =>
|
||||
POST<SearchDatasetDataResponse>('/core/dataset/deepRag', data);
|
||||
|
||||
@@ -72,15 +72,12 @@ Human: ${query}
|
||||
if (result.extensionQueries?.length === 0) return;
|
||||
return result;
|
||||
})();
|
||||
|
||||
const extensionQueries = filterSamQuery(aiExtensionResult?.extensionQueries || []);
|
||||
if (aiExtensionResult) {
|
||||
queries = filterSamQuery(queries.concat(extensionQueries));
|
||||
queries = filterSamQuery(queries.concat(aiExtensionResult.extensionQueries));
|
||||
rewriteQuery = queries.join('\n');
|
||||
}
|
||||
|
||||
return {
|
||||
extensionQueries,
|
||||
concatQueries: queries,
|
||||
rewriteQuery,
|
||||
aiExtensionResult
|
||||
|
||||
@@ -1,5 +1,45 @@
|
||||
import { DatasetTrainingSchemaType } from '@fastgpt/global/core/dataset/type';
|
||||
import { addLog } from '../../../common/system/log';
|
||||
import { getErrText } from '@fastgpt/global/common/error/utils';
|
||||
import { MongoDatasetTraining } from './schema';
|
||||
import Papa from 'papaparse';
|
||||
|
||||
export const checkInvalidChunkAndLock = async ({
|
||||
err,
|
||||
errText,
|
||||
data
|
||||
}: {
|
||||
err: any;
|
||||
errText: string;
|
||||
data: DatasetTrainingSchemaType;
|
||||
}) => {
|
||||
if (err?.response) {
|
||||
addLog.error(`openai error: ${errText}`, {
|
||||
status: err.response?.status,
|
||||
statusText: err.response?.statusText,
|
||||
data: err.response?.data
|
||||
});
|
||||
} else {
|
||||
addLog.error(getErrText(err, errText), err);
|
||||
}
|
||||
|
||||
if (
|
||||
err?.message === 'invalid message format' ||
|
||||
err?.type === 'invalid_request_error' ||
|
||||
err?.code === 500
|
||||
) {
|
||||
addLog.error('Lock training data', err);
|
||||
|
||||
try {
|
||||
await MongoDatasetTraining.findByIdAndUpdate(data._id, {
|
||||
lockTime: new Date('2998/5/5')
|
||||
});
|
||||
} catch (error) {}
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
};
|
||||
|
||||
export const parseCsvTable2Chunks = (rawText: string) => {
|
||||
const csvArr = Papa.parse(rawText).data as string[][];
|
||||
|
||||
|
||||
@@ -46,15 +46,7 @@ export const runToolWithFunctionCall = async (
externalProvider,
stream,
workflowStreamResponse,
params: {
temperature,
maxToken,
aiChatVision,
aiChatTopP,
aiChatStopSign,
aiChatResponseFormat,
aiChatJsonSchema
}
params: { temperature, maxToken, aiChatVision }
} = workflowProps;

// Interactive
@@ -212,18 +204,12 @@ export const runToolWithFunctionCall = async (
const requestBody = llmCompletionsBodyFormat(
{
model: toolModel.model,

temperature,
max_tokens,
stream,
messages: requestMessages,
functions,
function_call: 'auto',

temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: aiChatResponseFormat,
json_schema: aiChatJsonSchema
function_call: 'auto'
},
toolModel
);

@@ -334,7 +334,7 @@ const getMultiInput = async ({

return {
documentQuoteText: text,
userFiles: fileLinks.map((url) => parseUrlToFileType(url)).filter(Boolean)
userFiles: fileLinks.map((url) => parseUrlToFileType(url))
};
};
@@ -54,15 +54,7 @@ export const runToolWithPromptCall = async (
externalProvider,
stream,
workflowStreamResponse,
params: {
temperature,
maxToken,
aiChatVision,
aiChatTopP,
aiChatStopSign,
aiChatResponseFormat,
aiChatJsonSchema
}
params: { temperature, maxToken, aiChatVision }
} = workflowProps;

if (interactiveEntryToolParams) {
@@ -223,14 +215,10 @@ export const runToolWithPromptCall = async (
const requestBody = llmCompletionsBodyFormat(
{
model: toolModel.model,
stream,
messages: requestMessages,
temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: aiChatResponseFormat,
json_schema: aiChatJsonSchema
stream,
messages: requestMessages
},
toolModel
);
@@ -93,15 +93,7 @@ export const runToolWithToolChoice = async (
stream,
externalProvider,
workflowStreamResponse,
params: {
temperature,
maxToken,
aiChatVision,
aiChatTopP,
aiChatStopSign,
aiChatResponseFormat,
aiChatJsonSchema
}
params: { temperature, maxToken, aiChatVision }
} = workflowProps;

if (maxRunToolTimes <= 0 && response) {
@@ -271,16 +263,12 @@ export const runToolWithToolChoice = async (
const requestBody = llmCompletionsBodyFormat(
{
model: toolModel.model,
temperature,
max_tokens,
stream,
messages: requestMessages,
tools,
tool_choice: 'auto',
temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: aiChatResponseFormat,
json_schema: aiChatJsonSchema
tool_choice: 'auto'
},
toolModel
);
@@ -16,16 +16,12 @@ export type DispatchToolModuleProps = ModuleDispatchProps<{
[NodeInputKeyEnum.history]?: ChatItemType[];
[NodeInputKeyEnum.userChatInput]: string;

[NodeInputKeyEnum.fileUrlList]?: string[];
[NodeInputKeyEnum.aiModel]: string;
[NodeInputKeyEnum.aiSystemPrompt]: string;
[NodeInputKeyEnum.aiChatTemperature]: number;
[NodeInputKeyEnum.aiChatMaxToken]: number;
[NodeInputKeyEnum.aiChatVision]?: boolean;
[NodeInputKeyEnum.aiChatTopP]?: number;
[NodeInputKeyEnum.aiChatStopSign]?: string;
[NodeInputKeyEnum.aiChatResponseFormat]?: string;
[NodeInputKeyEnum.aiChatJsonSchema]?: string;
[NodeInputKeyEnum.fileUrlList]?: string[];
}> & {
messages: ChatCompletionMessageParam[];
toolNodes: ToolNodeItemType[];
@@ -3,13 +3,13 @@ import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../chat
import type { ChatItemType, UserChatItemValueItemType } from '@fastgpt/global/core/chat/type.d';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
parseReasoningContent,
parseReasoningStreamContent,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
import { createChatCompletion } from '../../../ai/config';
import type { ChatCompletionMessageParam, StreamChatType } from '@fastgpt/global/core/ai/type.d';
import type {
ChatCompletion,
ChatCompletionMessageParam,
StreamChatType
} from '@fastgpt/global/core/ai/type.d';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import type { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { postTextCensor } from '../../../../common/api/requestPlusApi';
@@ -51,7 +51,7 @@ import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';

export type ChatProps = ModuleDispatchProps<
AIChatNodeProps & {
[NodeInputKeyEnum.userChatInput]?: string;
[NodeInputKeyEnum.userChatInput]: string;
[NodeInputKeyEnum.history]?: ChatItemType[] | number;
[NodeInputKeyEnum.aiChatDatasetQuote]?: SearchDataResponseItemType[];
}
@@ -81,7 +81,7 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
maxToken,
history = 6,
quoteQA,
userChatInput = '',
userChatInput,
isResponseAnswerText = true,
systemPrompt = '',
aiChatQuoteRole = 'system',
@@ -89,11 +89,6 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
quotePrompt,
aiChatVision,
aiChatReasoning = true,
aiChatTopP,
aiChatStopSign,
aiChatResponseFormat,
aiChatJsonSchema,

fileUrlList: fileLinks, // node quote file links
stringQuoteText //abandon
}
@@ -105,7 +100,7 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
return Promise.reject('The chat model is undefined, you need to select a chat model.');
}

aiChatVision = modelConstantsData.vision && aiChatVision;
stream = stream && isResponseAnswerText;
aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning;

const chatHistories = getHistories(history, histories);
@@ -165,21 +160,17 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp

const requestMessages = await loadRequestMessages({
messages: filterMessages,
useVision: aiChatVision,
useVision: modelConstantsData.vision && aiChatVision,
origin: requestOrigin
});

const requestBody = llmCompletionsBodyFormat(
{
model: modelConstantsData.model,
stream,
messages: requestMessages,
temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: aiChatResponseFormat as any,
json_schema: aiChatJsonSchema
stream,
messages: requestMessages
},
modelConstantsData
);
@@ -195,19 +186,12 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
});

const { answerText, reasoningText } = await (async () => {
if (isStreamResponse) {
if (!res) {
return {
answerText: '',
reasoningText: ''
};
}
if (res && isStreamResponse) {
// sse response
const { answer, reasoning } = await streamResponse({
res,
stream: response,
aiChatReasoning,
isResponseAnswerText,
workflowStreamResponse
});

@@ -216,49 +200,26 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
reasoningText: reasoning
};
} else {
const { content, reasoningContent } = (() => {
const content = response.choices?.[0]?.message?.content || '';
// @ts-ignore
const reasoningContent: string = response.choices?.[0]?.message?.reasoning_content || '';

// API already parse reasoning content
if (reasoningContent || !aiChatReasoning) {
return {
content,
reasoningContent
};
}

const [think, answer] = parseReasoningContent(content);
return {
content: answer,
reasoningContent: think
};
})();

// Some models do not support streaming
const unStreamResponse = response as ChatCompletion;
const answer = unStreamResponse.choices?.[0]?.message?.content || '';
const reasoning = aiChatReasoning
? // @ts-ignore
unStreamResponse.choices?.[0]?.message?.reasoning_content || ''
: '';
if (stream) {
if (aiChatReasoning && reasoningContent) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
reasoning_content: reasoningContent
})
});
}
if (isResponseAnswerText && content) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
text: content
})
});
}
// Some models do not support streaming
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
text: answer,
reasoning_content: reasoning
})
});
}

return {
answerText: content,
reasoningText: reasoningContent
answerText: answer,
reasoningText: reasoning
};
}
})();
@@ -270,8 +231,7 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
const AIMessages: ChatCompletionMessageParam[] = [
{
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: answerText,
reasoning_text: reasoningText // reasoning_text is only recorded for response, but not for request
content: answerText
}
];

@@ -289,7 +249,7 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
});

return {
answerText: answerText.trim(),
answerText,
reasoningText,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
@@ -299,8 +259,11 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
outputTokens: outputTokens,
query: `${userChatInput}`,
maxToken: max_tokens,
reasoningText,
historyPreview: getHistoryPreview(chatCompleteMessages, 10000, aiChatVision),
historyPreview: getHistoryPreview(
chatCompleteMessages,
10000,
modelConstantsData.vision && aiChatVision
),
contextTotalLen: completeMessages.length
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
@@ -408,7 +371,7 @@ async function getMultiInput({

return {
documentQuoteText: text,
userFiles: fileLinks.map((url) => parseUrlToFileType(url)).filter(Boolean)
userFiles: fileLinks.map((url) => parseUrlToFileType(url))
};
}

@@ -507,14 +470,12 @@ async function streamResponse({
res,
stream,
workflowStreamResponse,
aiChatReasoning,
isResponseAnswerText
aiChatReasoning
}: {
res: NextApiResponse;
stream: StreamChatType;
workflowStreamResponse?: WorkflowResponseType;
aiChatReasoning?: boolean;
isResponseAnswerText?: boolean;
}) {
const write = responseWriteController({
res,
@@ -522,42 +483,28 @@ async function streamResponse({
});
let answer = '';
let reasoning = '';
const { parsePart, getStartTagBuffer } = parseReasoningStreamContent();

for await (const part of stream) {
if (res.closed) {
stream.controller?.abort();
break;
}

const [reasoningContent, content] = parsePart(part, aiChatReasoning);
const content = part.choices?.[0]?.delta?.content || '';
answer += content;

const reasoningContent = aiChatReasoning
? part.choices?.[0]?.delta?.reasoning_content || ''
: '';
reasoning += reasoningContent;

if (aiChatReasoning && reasoningContent) {
workflowStreamResponse?.({
write,
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
reasoning_content: reasoningContent
})
});
}

if (isResponseAnswerText && content) {
workflowStreamResponse?.({
write,
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: content
})
});
}
}

// if answer is empty, try to get value from startTagBuffer. (Cause: The response content is too short to exceed the minimum parse length)
if (answer === '') {
answer = getStartTagBuffer();
workflowStreamResponse?.({
write,
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: content,
reasoning_content: reasoningContent
})
});
}

return { answer, reasoning };
@@ -6,11 +6,13 @@ import { formatModelChars2Points } from '../../../../support/wallet/usage/utils'
import type { SelectedDatasetType } from '@fastgpt/global/core/workflow/api.d';
import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { getEmbeddingModel } from '../../../ai/model';
import { deepRagSearch, defaultSearchDatasetData } from '../../../dataset/search/controller';
import { getLLMModel, getEmbeddingModel } from '../../../ai/model';
import { searchDatasetData } from '../../../dataset/search/controller';
import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { DatasetSearchModeEnum } from '@fastgpt/global/core/dataset/constants';
import { getHistories } from '../utils';
import { datasetSearchQueryExtension } from '../../../dataset/search/utils';
import { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
import { checkTeamReRankPermission } from '../../../../support/permission/teamLimit';
import { MongoDataset } from '../../../dataset/schema';
@@ -23,19 +25,13 @@ type DatasetSearchProps = ModuleDispatchProps<{
[NodeInputKeyEnum.datasetSimilarity]: number;
[NodeInputKeyEnum.datasetMaxTokens]: number;
[NodeInputKeyEnum.datasetSearchMode]: `${DatasetSearchModeEnum}`;
[NodeInputKeyEnum.userChatInput]?: string;
[NodeInputKeyEnum.userChatInput]: string;
[NodeInputKeyEnum.datasetSearchUsingReRank]: boolean;
[NodeInputKeyEnum.collectionFilterMatch]: string;
[NodeInputKeyEnum.authTmbId]?: boolean;

[NodeInputKeyEnum.datasetSearchUsingExtensionQuery]: boolean;
[NodeInputKeyEnum.datasetSearchExtensionModel]: string;
[NodeInputKeyEnum.datasetSearchExtensionBg]: string;

[NodeInputKeyEnum.datasetDeepSearch]?: boolean;
[NodeInputKeyEnum.datasetDeepSearchModel]?: string;
[NodeInputKeyEnum.datasetDeepSearchMaxTimes]?: number;
[NodeInputKeyEnum.datasetDeepSearchBg]?: string;
[NodeInputKeyEnum.collectionFilterMatch]: string;
[NodeInputKeyEnum.authTmbId]: boolean;
}>;
export type DatasetSearchResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.datasetQuoteQA]: SearchDataResponseItemType[];
@@ -55,18 +51,13 @@ export async function dispatchDatasetSearch(
limit = 1500,
usingReRank,
searchMode,
userChatInput = '',
authTmbId = false,
collectionFilterMatch,
userChatInput,

datasetSearchUsingExtensionQuery,
datasetSearchExtensionModel,
datasetSearchExtensionBg,

datasetDeepSearch,
datasetDeepSearchModel,
datasetDeepSearchMaxTimes,
datasetDeepSearchBg
collectionFilterMatch,
authTmbId = false
}
} = props as DatasetSearchProps;
@@ -94,12 +85,25 @@ export async function dispatchDatasetSearch(
return emptyResult;
}

const datasetIds = authTmbId
? await filterDatasetsByTmbId({
datasetIds: datasets.map((item) => item.datasetId),
tmbId
})
: await Promise.resolve(datasets.map((item) => item.datasetId));
// query extension
const extensionModel = datasetSearchUsingExtensionQuery
? getLLMModel(datasetSearchExtensionModel)
: undefined;

const [{ concatQueries, rewriteQuery, aiExtensionResult }, datasetIds] = await Promise.all([
datasetSearchQueryExtension({
query: userChatInput,
extensionModel,
extensionBg: datasetSearchExtensionBg,
histories: getHistories(6, histories)
}),
authTmbId
? filterDatasetsByTmbId({
datasetIds: datasets.map((item) => item.datasetId),
tmbId
})
: Promise.resolve(datasets.map((item) => item.datasetId))
]);

if (datasetIds.length === 0) {
return emptyResult;
@@ -112,11 +116,15 @@ export async function dispatchDatasetSearch(
);

// start search
const searchData = {
histories,
const {
searchRes,
tokens,
usingSimilarityFilter,
usingReRank: searchUsingReRank
} = await searchDatasetData({
teamId,
reRankQuery: userChatInput,
queries: [userChatInput],
reRankQuery: `${rewriteQuery}`,
queries: concatQueries,
model: vectorModel.model,
similarity,
limit,
@@ -124,106 +132,59 @@ export async function dispatchDatasetSearch(
searchMode,
usingReRank: usingReRank && (await checkTeamReRankPermission(teamId)),
collectionFilterMatch
};
const {
searchRes,
tokens,
usingSimilarityFilter,
usingReRank: searchUsingReRank,
queryExtensionResult,
deepSearchResult
} = datasetDeepSearch
? await deepRagSearch({
...searchData,
datasetDeepSearchModel,
datasetDeepSearchMaxTimes,
datasetDeepSearchBg
})
: await defaultSearchDatasetData({
...searchData,
datasetSearchUsingExtensionQuery,
datasetSearchExtensionModel,
datasetSearchExtensionBg
});
});

// count bill results
const nodeDispatchUsages: ChatNodeUsageType[] = [];
// vector
const { totalPoints: embeddingTotalPoints, modelName: embeddingModelName } =
formatModelChars2Points({
model: vectorModel.model,
inputTokens: tokens,
modelType: ModelTypeEnum.embedding
});
nodeDispatchUsages.push({
totalPoints: embeddingTotalPoints,
moduleName: node.name,
model: embeddingModelName,
inputTokens: tokens
const { totalPoints, modelName } = formatModelChars2Points({
model: vectorModel.model,
inputTokens: tokens,
modelType: ModelTypeEnum.embedding
});
// Query extension
const { totalPoints: queryExtensionTotalPoints } = (() => {
if (queryExtensionResult) {
const { totalPoints, modelName } = formatModelChars2Points({
model: queryExtensionResult.model,
inputTokens: queryExtensionResult.inputTokens,
outputTokens: queryExtensionResult.outputTokens,
modelType: ModelTypeEnum.llm
});
nodeDispatchUsages.push({
totalPoints,
moduleName: i18nT('common:core.module.template.Query extension'),
model: modelName,
inputTokens: queryExtensionResult.inputTokens,
outputTokens: queryExtensionResult.outputTokens
});
return {
totalPoints
};
}
return {
totalPoints: 0
};
})();
// Deep search
const { totalPoints: deepSearchTotalPoints } = (() => {
if (deepSearchResult) {
const { totalPoints, modelName } = formatModelChars2Points({
model: deepSearchResult.model,
inputTokens: deepSearchResult.inputTokens,
outputTokens: deepSearchResult.outputTokens,
modelType: ModelTypeEnum.llm
});
nodeDispatchUsages.push({
totalPoints,
moduleName: i18nT('common:deep_rag_search'),
model: modelName,
inputTokens: deepSearchResult.inputTokens,
outputTokens: deepSearchResult.outputTokens
});
return {
totalPoints
};
}
return {
totalPoints: 0
};
})();
const totalPoints = embeddingTotalPoints + queryExtensionTotalPoints + deepSearchTotalPoints;

const responseData: DispatchNodeResponseType & { totalPoints: number } = {
totalPoints,
query: userChatInput,
model: vectorModel.model,
query: concatQueries.join('\n'),
model: modelName,
inputTokens: tokens,
similarity: usingSimilarityFilter ? similarity : undefined,
limit,
searchMode,
searchUsingReRank: searchUsingReRank,
quoteList: searchRes,
queryExtensionResult,
deepSearchResult
quoteList: searchRes
};
const nodeDispatchUsages: ChatNodeUsageType[] = [
{
totalPoints,
moduleName: node.name,
model: modelName,
inputTokens: tokens
}
];

if (aiExtensionResult) {
const { totalPoints, modelName } = formatModelChars2Points({
model: aiExtensionResult.model,
inputTokens: aiExtensionResult.inputTokens,
outputTokens: aiExtensionResult.outputTokens,
modelType: ModelTypeEnum.llm
});

responseData.totalPoints += totalPoints;
responseData.inputTokens = aiExtensionResult.inputTokens;
responseData.outputTokens = aiExtensionResult.outputTokens;
responseData.extensionModel = modelName;
responseData.extensionResult =
aiExtensionResult.extensionQueries?.join('\n') ||
JSON.stringify(aiExtensionResult.extensionQueries);

nodeDispatchUsages.push({
totalPoints,
moduleName: 'core.module.template.Query extension',
model: modelName,
inputTokens: aiExtensionResult.inputTokens,
outputTokens: aiExtensionResult.outputTokens
});
}

return {
quoteQA: searchRes,
|
||||
chatNodeUsages = chatNodeUsages.concat(nodeDispatchUsages);
|
||||
}
|
||||
|
||||
if (toolResponses !== undefined && toolResponses !== null) {
|
||||
if (toolResponses !== undefined) {
|
||||
if (Array.isArray(toolResponses) && toolResponses.length === 0) return;
|
||||
if (
|
||||
!Array.isArray(toolResponses) &&
|
||||
typeof toolResponses === 'object' &&
|
||||
Object.keys(toolResponses).length === 0
|
||||
)
|
||||
return;
|
||||
if (typeof toolResponses === 'object' && Object.keys(toolResponses).length === 0) return;
|
||||
toolRunResponse = toolResponses;
|
||||
}
|
||||
|
||||
@@ -248,17 +243,12 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
|
||||
chatAssistantResponse = chatAssistantResponse.concat(assistantResponses);
|
||||
} else {
|
||||
if (reasoningText) {
|
||||
const isResponseReasoningText = inputs.find(
|
||||
(item) => item.key === NodeInputKeyEnum.aiChatReasoning
|
||||
)?.value;
|
||||
if (isResponseReasoningText) {
|
||||
chatAssistantResponse.push({
|
||||
type: ChatItemValueTypeEnum.reasoning,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
}
|
||||
});
|
||||
}
|
||||
chatAssistantResponse.push({
|
||||
type: ChatItemValueTypeEnum.reasoning,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
}
|
||||
});
|
||||
}
|
||||
if (answerText) {
|
||||
// save assistant text response
|
||||
|
||||
@@ -53,7 +53,7 @@ export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
|
||||
|
||||
const userInputFiles = (() => {
|
||||
if (fileUrlList) {
|
||||
return fileUrlList.map((url) => parseUrlToFileType(url)).filter(Boolean);
|
||||
return fileUrlList.map((url) => parseUrlToFileType(url));
|
||||
}
|
||||
// Adapt version 4.8.13 upgrade
|
||||
return files;
|
||||
|
||||
@@ -38,10 +38,10 @@ type HttpRequestProps = ModuleDispatchProps<{
|
||||
[NodeInputKeyEnum.abandon_httpUrl]: string;
|
||||
[NodeInputKeyEnum.httpMethod]: string;
|
||||
[NodeInputKeyEnum.httpReqUrl]: string;
|
||||
[NodeInputKeyEnum.httpHeaders]?: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpParams]?: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpJsonBody]?: string;
|
||||
[NodeInputKeyEnum.httpFormBody]?: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpHeaders]: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpParams]: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpJsonBody]: string;
|
||||
[NodeInputKeyEnum.httpFormBody]: PropsArrType[];
|
||||
[NodeInputKeyEnum.httpContentType]: ContentTypes;
|
||||
[NodeInputKeyEnum.addInputParam]: Record<string, any>;
|
||||
[NodeInputKeyEnum.httpTimeout]?: number;
|
||||
@@ -76,10 +76,10 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
|
||||
params: {
|
||||
system_httpMethod: httpMethod = 'POST',
|
||||
system_httpReqUrl: httpReqUrl,
|
||||
system_httpHeader: httpHeader = [],
|
||||
system_httpHeader: httpHeader,
|
||||
system_httpParams: httpParams = [],
|
||||
system_httpJsonBody: httpJsonBody = '',
|
||||
system_httpFormBody: httpFormBody = [],
|
||||
system_httpJsonBody: httpJsonBody,
|
||||
system_httpFormBody: httpFormBody,
|
||||
system_httpContentType: httpContentType = ContentTypes.json,
|
||||
system_httpTimeout: httpTimeout = 60,
|
||||
[NodeInputKeyEnum.addInputParam]: dynamicInput,
|
||||
@@ -398,6 +398,41 @@ async function fetchData({
|
||||
};
|
||||
}
|
||||
|
||||
// function replaceVariable(text: string, obj: Record<string, any>) {
|
||||
// for (const [key, value] of Object.entries(obj)) {
|
||||
// if (value === undefined) {
|
||||
// text = text.replace(new RegExp(`{{(${key})}}`, 'g'), UNDEFINED_SIGN);
|
||||
// } else {
|
||||
// const replacement = JSON.stringify(value);
|
||||
// const unquotedReplacement =
|
||||
// replacement.startsWith('"') && replacement.endsWith('"')
|
||||
// ? replacement.slice(1, -1)
|
||||
// : replacement;
|
||||
// text = text.replace(new RegExp(`{{(${key})}}`, 'g'), () => unquotedReplacement);
|
||||
// }
|
||||
// }
|
||||
// return text || '';
|
||||
// }
|
||||
// function removeUndefinedSign(obj: Record<string, any>) {
|
||||
// for (const key in obj) {
|
||||
// if (obj[key] === UNDEFINED_SIGN) {
|
||||
// obj[key] = undefined;
|
||||
// } else if (Array.isArray(obj[key])) {
|
||||
// obj[key] = obj[key].map((item: any) => {
|
||||
// if (item === UNDEFINED_SIGN) {
|
||||
// return undefined;
|
||||
// } else if (typeof item === 'object') {
|
||||
// removeUndefinedSign(item);
|
||||
// }
|
||||
// return item;
|
||||
// });
|
||||
// } else if (typeof obj[key] === 'object') {
|
||||
// removeUndefinedSign(obj[key]);
|
||||
// }
|
||||
// }
|
||||
// return obj;
|
||||
// }
|
||||
|
||||
// Replace some special response from system plugin
|
||||
async function replaceSystemPluginResponse({
|
||||
response,
|
||||
|
||||
@@ -142,7 +142,7 @@ export const checkQuoteQAValue = (quoteQA?: SearchDataResponseItemType[]) => {
if (quoteQA.length === 0) {
return [];
}
if (quoteQA.some((item) => typeof item !== 'object' || !item.q)) {
if (quoteQA.some((item) => !item.q)) {
return undefined;
}
return quoteQA;
@@ -46,7 +46,6 @@ export async function getUserDetail({
promotionRate: user.promotionRate,
team: tmb,
notificationAccount: tmb.notificationAccount,
permission: tmb.permission,
contact: user.contact
permission: tmb.permission
};
}
@@ -57,7 +57,6 @@ const UserSchema = new Schema({
},
fastgpt_sem: Object,
sourceDomain: String,
contact: String,

/** @deprecated */
avatar: String
@@ -36,9 +36,6 @@ const TeamMemberSchema = new Schema({
type: Date,
default: () => new Date()
},
updateTime: {
type: Date
},
defaultTeam: {
type: Boolean,
default: false
@@ -86,12 +86,9 @@ export async function addSourceMember<T extends { tmbId: string }>({
}): Promise<Array<T & { sourceMember: SourceMemberType }>> {
if (!Array.isArray(list)) return [];

const tmbIdList = list
.map((item) => (item.tmbId ? String(item.tmbId) : undefined))
.filter(Boolean);
const tmbList = await MongoTeamMember.find(
{
_id: { $in: tmbIdList }
_id: { $in: list.map((item) => String(item.tmbId)) }
},
'tmbId name avatar status',
{
Some files were not shown because too many files have changed in this diff.