Compare commits


28 Commits

Author SHA1 Message Date
Archer
c25cd48e72 perf: chunk trigger and paragraph split (#4893)
* perf: chunk trigger and paragraph split

* update max size computed

* perf: i18n

* remove table
2025-05-26 18:57:22 +08:00
Archer
874300a56a fix: chinese name export (#4890)
* fix: chinese name export

* fix: xlsx white space

* doc

* doc
2025-05-25 21:19:29 +08:00
Archer
1dea2b71b4 perf: human check;perf: recursion get node response (#4888)
* perf: human check

* version

* perf: recursion get node response
2025-05-25 20:55:29 +08:00
Archer
a8673344b1 Test add menu (#4887)
* Feature: Add additional dataset options and their descriptions, updat… (#4874)

* Feature: Add additional dataset options and their descriptions, update menu components to support submenu functionality

* Optimize the menu component by removing the sub-menu position attribute, introducing the MyPopover component to support sub-menu functionality, and adding new dataset options and their descriptions in the dataset list.

---------

Co-authored-by: dreamer6680 <146868355@qq.com>

* api dataset tip

* remove invalid code

---------

Co-authored-by: dreamer6680 <1468683855@qq.com>
Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-25 20:16:03 +08:00
Archer
9709ae7a4f feat: The workflow quickly adds applications (#4882)
* feat: add node by handle (#4860)

* feat: add node by handle

* fix

* fix edge filter

* fix

* move utils

* move context

* scale handle

* move postion to handle params & optimize handle scale (#4878)

* move position to handle params

* close button scale

* perf: node template ui

* remove handle scale (#4880)

* feat: handle connect

* add mouse down duration check (#4881)

* perf: long press time

* tool handle size

* optimize add node by handle (#4883)

---------

Co-authored-by: heheer <heheer@sealos.io>
2025-05-23 19:20:12 +08:00
Archer
fae76e887a perf: dataset import params code (#4875)
* perf: dataset import params code

* perf: api dataset code

* model
2025-05-23 10:40:25 +08:00
dreamer6680
9af92d1eae Open Yufu Feishu Knowledge Base Permissions (#4867)
* add feishu yuque dataset

* Open Yufu Feishu Knowledge Base Permissions

* Refactor the dataset request module, optimize the import path, and fix the type definition

---------

Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-22 23:19:55 +08:00
Archer
6a6719e93d perf: isPc check;perf: dataset max token checker (#4872)
* perf: isPc check

* perf: dataset max token checker

* perf: dataset max token checker
2025-05-22 18:40:29 +08:00
Compasafe
50481f4ca8 fix: revise the isPc detection logic in the voice component (#4854)
* fix: revise the isPc detection logic in the voice component

* fix: revise the isPc detection logic in the voice component
2025-05-22 16:29:53 +08:00
Archer
88bd3aaa9e perf: backup import (#4866)
* i18n

* remove invalid code

* perf: backup import

* backup tip

* fix: indexsize invalid
2025-05-22 15:53:51 +08:00
Archer
dd3c251603 fix: stream response (#4853) 2025-05-21 10:21:20 +08:00
Archer
aa55f059d4 perf: chat history api;perf: full text error (#4852)
* perf: chat history api

* perf: i18n

* perf: full text
2025-05-20 22:31:32 +08:00
dreamer6680
89c9a02650 change ui of price (#4851)
Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-20 20:51:07 +08:00
heheer
0f3bfa280a fix quote reader duplicate rendering (#4845) 2025-05-20 20:21:00 +08:00
dependabot[bot]
593ebfd269 chore(deps): bump multer from 1.4.5-lts.1 to 2.0.0 (#4839)
Bumps [multer](https://github.com/expressjs/multer) from 1.4.5-lts.1 to 2.0.0.
- [Release notes](https://github.com/expressjs/multer/releases)
- [Changelog](https://github.com/expressjs/multer/blob/v2.0.0/CHANGELOG.md)
- [Commits](https://github.com/expressjs/multer/compare/v1.4.5-lts.1...v2.0.0)

---
updated-dependencies:
- dependency-name: multer
  dependency-version: 2.0.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-20 13:58:47 +08:00
John Chen
f6dc2204f5 fix: correct the health-check parameter error in docker-compose-pgvecto.yml (#4841) 2025-05-20 13:57:32 +08:00
Archer
d44c338059 perf: confirm ux (#4843)
* perf: delete tip ux

* perf: confirm ux
2025-05-20 13:41:56 +08:00
Archer
1dac2b70ec perf: stream timeout;feat: hnsw max_scan_tuples config;fix: fulltext search merge error (#4838)
* perf: stream timeout

* feat: hnsw max_scan_tuples config

* fix: fulltext search merge error

* perf: jieba code
2025-05-20 09:59:24 +08:00
Archer
9fef3e15fb Update doc (#4831)
* doc

* doc

* version update
2025-05-18 23:16:31 +08:00
Archer
2d2d0fffe9 Test apidataset (#4830)
* Dataset (#4822)

* apidataset support to basepath

* Resolve the error of the Feishu Knowledge Base modification configuration page not supporting baseurl bug.

* apibasepath

* add

* perf: api dataset

---------

Co-authored-by: dreamer6680 <1468683855@qq.com>
2025-05-17 22:41:10 +08:00
heheer
c6e0b5a1e7 offiaccount welcome text (#4827)
* offiaccount welcome text

* fix

* Update Image.tsx

---------

Co-authored-by: Archer <545436317@qq.com>
2025-05-17 22:03:18 +08:00
dependabot[bot]
932aa28a1f chore(deps): bump undici in /plugins/webcrawler/SPIDER (#4825)
Bumps [undici](https://github.com/nodejs/undici) from 6.21.1 to 6.21.3.
- [Release notes](https://github.com/nodejs/undici/releases)
- [Commits](https://github.com/nodejs/undici/compare/v6.21.1...v6.21.3)

---
updated-dependencies:
- dependency-name: undici
  dependency-version: 6.21.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-17 01:16:31 +08:00
heheer
9c59bc2c17 fix: handle optional indexes in InputDataModal (#4828) 2025-05-16 15:07:33 +08:00
Archer
e145f63554 feat: chat error msg (#4826)
* perf: i18n

* feat: chat error msg

* feat: doc
2025-05-16 12:07:11 +08:00
Archer
554b2ca8dc perf: mcp tool type (#4820) 2025-05-15 18:14:32 +08:00
Archer
4e83840c14 perf: tool call check (#4818)
* i18n

* tool call

* fix: mcp create permission;Plugin unauth tip

* fix: mcp create permission;Plugin unauth tip

* fix: Cite modal permission

* remove invalide cite

* perf: prompt

* filter fulltext search

* fix: ts

* fix: ts

* fix: ts
2025-05-15 15:51:34 +08:00
heheer
a6c80684d1 fix version match (#4814) 2025-05-14 17:45:31 +08:00
Archer
a4db03a3b7 feat: session id (#4817)
* feat: session id

* feat: Add default index
2025-05-14 17:24:02 +08:00
251 changed files with 6226 additions and 4006 deletions

View File

@@ -21,7 +21,7 @@
   "i18n-ally.namespace": true,
   "i18n-ally.pathMatcher": "{locale}/{namespaces}.json",
   "i18n-ally.extract.targetPickingStrategy": "most-similar-by-key",
-  "i18n-ally.translate.engines": ["google"],
+  "i18n-ally.translate.engines": ["deepl","google"],
   "[typescript]": {
     "editor.defaultFormatter": "esbenp.prettier-vscode"
   },

View File

@@ -132,15 +132,15 @@ services:
   # fastgpt
   sandbox:
     container_name: sandbox
-    image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-sandbox:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.9 # Alibaba Cloud
     networks:
       - fastgpt
     restart: always
   fastgpt-mcp-server:
     container_name: fastgpt-mcp-server
-    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.9 # Alibaba Cloud
     ports:
       - 3005:3000
     networks:
@@ -150,8 +150,8 @@ services:
       - FASTGPT_ENDPOINT=http://fastgpt:3000
   fastgpt:
     container_name: fastgpt
-    image: ghcr.io/labring/fastgpt:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.9 # Alibaba Cloud
     ports:
       - 3000:3000
     networks:

View File

@@ -109,15 +109,15 @@ services:
   # fastgpt
   sandbox:
     container_name: sandbox
-    image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-sandbox:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.9 # Alibaba Cloud
     networks:
       - fastgpt
     restart: always
   fastgpt-mcp-server:
     container_name: fastgpt-mcp-server
-    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.9 # Alibaba Cloud
     ports:
       - 3005:3000
     networks:
@@ -127,8 +127,8 @@ services:
       - FASTGPT_ENDPOINT=http://fastgpt:3000
   fastgpt:
     container_name: fastgpt
-    image: ghcr.io/labring/fastgpt:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.9 # Alibaba Cloud
     ports:
       - 3000:3000
     networks:

View File

@@ -23,7 +23,7 @@ services:
     volumes:
       - ./pg/data:/var/lib/postgresql/data
     healthcheck:
-      test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
+      test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'postgres']
       interval: 5s
       timeout: 5s
      retries: 10
@@ -96,15 +96,15 @@ services:
   # fastgpt
   sandbox:
     container_name: sandbox
-    image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-sandbox:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.9 # Alibaba Cloud
     networks:
       - fastgpt
     restart: always
   fastgpt-mcp-server:
     container_name: fastgpt-mcp-server
-    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.9 # Alibaba Cloud
     ports:
       - 3005:3000
     networks:
@@ -114,8 +114,8 @@ services:
       - FASTGPT_ENDPOINT=http://fastgpt:3000
   fastgpt:
     container_name: fastgpt
-    image: ghcr.io/labring/fastgpt:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.9 # Alibaba Cloud
     ports:
       - 3000:3000
     networks:

View File

@@ -72,15 +72,15 @@ services:
   sandbox:
     container_name: sandbox
-    image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-sandbox:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.9 # Alibaba Cloud
     networks:
       - fastgpt
     restart: always
   fastgpt-mcp-server:
     container_name: fastgpt-mcp-server
-    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt-mcp_server:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.9 # Alibaba Cloud
     ports:
       - 3005:3000
     networks:
@@ -90,8 +90,8 @@ services:
       - FASTGPT_ENDPOINT=http://fastgpt:3000
   fastgpt:
     container_name: fastgpt
-    image: ghcr.io/labring/fastgpt:v4.9.8 # git
-    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # Alibaba Cloud
+    image: ghcr.io/labring/fastgpt:v4.9.9 # git
+    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.9 # Alibaba Cloud
     ports:
       - 3000:3000
     networks:

Binary image file changed (not shown; 386 KiB)

View File

@@ -959,10 +959,16 @@ curl --location --request POST 'http://localhost:3000/api/core/chat/getHistories
 {{< markdownify >}}
 {{% alert icon=" " context="success" %}}
+Currently, only conversations created by the owner of the current API key can be fetched.
 - appId - app Id
 - offset - offset, i.e. the index of the first record to fetch
 - pageSize - number of records
 - source - conversation source. source=api fetches conversations created via the API (conversations from the web page are not returned)
+- startCreateTime - creation-time range start (optional)
+- endCreateTime - creation-time range end (optional)
+- startUpdateTime - update-time range start (optional)
+- endUpdateTime - update-time range end (optional)
 {{% /alert %}}
 {{< /markdownify >}}

View File

@@ -0,0 +1,37 @@
---
title: 'V4.9.10 (in progress)'
description: 'FastGPT V4.9.10 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 790
---
## 🚀 New Features
1. Support the PG `systemEnv.hnswMaxScanTuples` parameter, raising the total number of rows covered by iterative search.
2. Dataset preprocessing adds a "chunk trigger" condition, so chunking can be skipped in certain cases.
3. Dataset preprocessing adds a "paragraph-first" mode with a configurable maximum paragraph depth; the original "length-first" mode no longer embeds paragraph-first logic.
4. Workflows switch to single-direction inbound/outbound handles, so the next node can be added quickly.
5. Feishu and Yuque knowledge bases are opened up to the open-source edition.
6. Presets for the latest gemini and claude models.
## ⚙️ Improvements
1. Larger default timeout for LLM stream calls.
2. Polished several confirmation interactions.
3. Renamed the previous "table dataset" to "backup import", and added export/import of dataset indexes.
4. Workflow dataset citation limit: when the workflow has no related AI node, the control switches to plain manual input with an upper limit of 10,000,000.
5. Voice input: mobile detection now accurately identifies phones, rather than just small screens.
6. Improved the context-truncation algorithm to always keep at least one group of Human messages.
## 🐛 Bug Fixes
1. Incorrect score ordering when full-text search spans multiple datasets.
2. finish_reason captured from stream responses could be wrong.
3. Tool-call mode did not save reasoning output.
4. The dataset indexSize parameter did not take effect.
5. Preview citations and context were wrong when workflows were nested two levels deep.
6. An extra leading space appeared when converting xlsx to Markdown.
7. Base64 images were not converted and saved when reading Markdown files.
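For context, `systemEnv.hnswMaxScanTuples` from item 1 above would sit in FastGPT's config.json alongside the other `systemEnv` fields. A minimal sketch — the sibling field names come from the SystemEnvType diff later on this page, but the values shown are illustrative assumptions, not documented defaults:

```json
{
  "systemEnv": {
    "vectorMaxProcess": 15,
    "qaMaxProcess": 15,
    "hnswEfSearch": 100,
    "hnswMaxScanTuples": 100000
  }
}
```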

View File

@@ -0,0 +1,43 @@
---
title: 'V4.9.9'
description: 'FastGPT V4.9.9 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 791
---
## Upgrade Guide
### 1. Back up your data
### 2. Commercial edition: replace the License
Commercial users can contact the FastGPT support team for a License replacement plan. After replacing it, the system can be upgraded directly; the admin console will prompt for the new License.
### 3. Update image tags
- FastGPT image tag: v4.9.9
- FastGPT commercial image tag: v4.9.9
- mcp_server: no update required
- Sandbox: no update required
- AIProxy: no update required
## 🚀 New Features
1. Login auth switched from JWT to SessionId, with a configurable maximum number of logged-in clients.
2. New commercial License management model.
3. Official-account calls now record chat errors for easier troubleshooting.
4. API datasets support BasePath selection; an extra API endpoint must be implemented, see the [API dataset guide](/docs/guide/knowledge_base/api_dataset/#4-获取文件详细信息用于获取文件信息)
## ⚙️ Improvements
1. Improved the detection logic for new tools in tool calls.
2. Adjusted the Cite citation prompt.
## 🐛 Bug Fixes
1. App history save/publish records could not be fetched.
2. Permission issue when members create MCP tools.
3. Source-citation display passed a wrong ID, triggering "no permission to operate this file" errors.
4. Front-end data error in answer annotations.

View File

@@ -185,3 +185,40 @@ curl --location --request GET '{{baseURL}}/v1/file/read?id=xx' \
 {{< /tabs >}}
### 4. 获取文件详细信息(用于获取文件信息)
{{< tabs tabTotal="2" >}}
{{< tab tabName="请求示例" >}}
{{< markdownify >}}
id is the file's id.
```bash
curl --location --request GET '{{baseURL}}/v1/file/detail?id=xx' \
--header 'Authorization: Bearer {{authorization}}'
```
{{< /markdownify >}}
{{< /tab >}}
{{< tab tabName="响应示例" >}}
{{< markdownify >}}
```json
{
"code": 200,
"success": true,
"message": "",
"data": {
"id": "docs",
"parentId": "",
"name": "docs"
}
}
```
{{< /markdownify >}}
{{< /tab >}}
{{< /tabs >}}

View File

@@ -28,7 +28,6 @@ (FastGPT Commercial is an enhanced edition built on the open-source version, adding several proprietary features)
 | App publish security config | ❌ | ✅ | ✅ |
 | Content moderation | ❌ | ✅ | ✅ |
 | Web site sync | ❌ | ✅ | ✅ |
-| Mainstream doc-library integration (currently: Yuque, Feishu) | ❌ | ✅ | ✅ |
 | Enhanced training mode | ❌ | ✅ | ✅ |
 | Quick third-party app integration (Feishu, official accounts) | ❌ | ✅ | ✅ |
 | Admin console | ❌ | ✅ | Not needed |

View File

@@ -132,7 +132,9 @@ weight: 506
 ### Official account not responding
 Check the app's conversation logs. If logs exist but the WeChat official account gives no response, the IP whitelist has not taken effect.
-After adding whitelist IPs, it usually takes WeChat a few minutes to update.
+After adding whitelist IPs, it usually takes WeChat a few minutes to update. You can look for error entries in the conversation logs.
+
+![](/imgs/official_account_faq.png)
 ### How to start a new chat

env.d.ts vendored (2 changes)
View File

@@ -4,7 +4,6 @@ declare global {
     LOG_DEPTH: string;
     DEFAULT_ROOT_PSW: string;
     DB_MAX_LINK: string;
-    TOKEN_KEY: string;
     FILE_TOKEN_KEY: string;
     ROOT_KEY: string;
     OPENAI_BASE_URL: string;
@@ -37,6 +36,7 @@ declare global {
     CONFIG_JSON_PATH?: string;
     PASSWORD_LOGIN_LOCK_SECONDS?: string;
     PASSWORD_EXPIRED_MONTH?: string;
+    MAX_LOGIN_SESSION?: string;
   }
  }
 }
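The new `MAX_LOGIN_SESSION` variable above pairs with the v4.9.9 switch to SessionId-based login. A hedged sketch of wiring it into a compose file — the value and the exact semantics (sessions per account) are assumptions, not documented behavior:

```yaml
  fastgpt:
    environment:
      # Maximum number of concurrently logged-in clients (assumed semantics)
      - MAX_LOGIN_SESSION=3
```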

View File

@@ -27,7 +27,7 @@ const datasetErr = [
   },
   {
     statusText: DatasetErrEnum.unExist,
-    message: 'core.dataset.error.unExistDataset'
+    message: i18nT('common:core.dataset.error.unExistDataset')
   },
   {
     statusText: DatasetErrEnum.unExistCollection,

View File

@@ -7,6 +7,10 @@ export const CUSTOM_SPLIT_SIGN = '-----CUSTOM_SPLIT_SIGN-----';
 type SplitProps = {
   text: string;
   chunkSize: number;
+  paragraphChunkDeep?: number; // Paragraph deep
+  paragraphChunkMinSize?: number; // Paragraph min size, if too small, it will merge
   maxSize?: number;
   overlapRatio?: number;
   customReg?: string[];
@@ -108,6 +112,8 @@ const commonSplit = (props: SplitProps): SplitResponse => {
   let {
     text = '',
     chunkSize,
+    paragraphChunkDeep = 5,
+    paragraphChunkMinSize = 100,
     maxSize = defaultMaxChunkSize,
     overlapRatio = 0.15,
     customReg = []
@@ -123,7 +129,7 @@ const commonSplit = (props: SplitProps): SplitResponse => {
   text = text.replace(/(```[\s\S]*?```|~~~[\s\S]*?~~~)/g, function (match) {
     return match.replace(/\n/g, codeBlockMarker);
   });
-  // 2. Table handling - extract tables separately and merge their headers
+  // 2. Markdown table handling - extract tables separately and merge their headers
   const tableReg =
     /(\n\|(?:(?:[^\n|]+\|){1,})\n\|(?:[:\-\s]+\|){1,}\n(?:\|(?:[^\n|]+\|)*\n?)*)(?:\n|$)/g;
   const tableDataList = text.match(tableReg);
@@ -143,25 +149,40 @@ const commonSplit = (props: SplitProps): SplitResponse => {
   text = text.replace(/(\r?\n|\r){3,}/g, '\n\n\n');

   // The larger maxLen is, the next sentence is less likely to trigger splitting
-  const markdownIndex = 4;
-  const forbidOverlapIndex = 8;
+  const customRegLen = customReg.length;
+  const markdownIndex = paragraphChunkDeep - 1;
+  const forbidOverlapIndex = customRegLen + markdownIndex + 4;
+
+  const markdownHeaderRules = ((deep?: number): { reg: RegExp; maxLen: number }[] => {
+    if (!deep || deep === 0) return [];
+    const maxDeep = Math.min(deep, 8); // Maximum 8 levels
+    const rules: { reg: RegExp; maxLen: number }[] = [];
+    for (let i = 1; i <= maxDeep; i++) {
+      const hashSymbols = '#'.repeat(i);
+      rules.push({
+        reg: new RegExp(`^(${hashSymbols}\\s[^\\n]+\\n)`, 'gm'),
+        maxLen: chunkSize
+      });
+    }
+    return rules;
+  })(paragraphChunkDeep);

   const stepReges: { reg: RegExp | string; maxLen: number }[] = [
     ...customReg.map((text) => ({
       reg: text.replaceAll('\\n', '\n'),
       maxLen: chunkSize
     })),
-    { reg: /^(#\s[^\n]+\n)/gm, maxLen: chunkSize },
-    { reg: /^(##\s[^\n]+\n)/gm, maxLen: chunkSize },
-    { reg: /^(###\s[^\n]+\n)/gm, maxLen: chunkSize },
-    { reg: /^(####\s[^\n]+\n)/gm, maxLen: chunkSize },
-    { reg: /^(#####\s[^\n]+\n)/gm, maxLen: chunkSize },
+    ...markdownHeaderRules,
     { reg: /([\n](```[\s\S]*?```|~~~[\s\S]*?~~~))/g, maxLen: maxSize }, // code block
+    // HTML table tags - keep as intact as possible
     {
       reg: /(\n\|(?:(?:[^\n|]+\|){1,})\n\|(?:[:\-\s]+\|){1,}\n(?:\|(?:[^\n|]+\|)*\n)*)/g,
-      maxLen: Math.min(chunkSize * 1.5, maxSize)
-    }, // Table - keep as intact as possible
+      maxLen: chunkSize
+    }, // Markdown table - keep as intact as possible
     { reg: /(\n{2,})/g, maxLen: chunkSize },
     { reg: /([\n])/g, maxLen: chunkSize },
     // ------ There's no overlap on the top
@@ -172,12 +193,10 @@ const commonSplit = (props: SplitProps): SplitResponse => {
     { reg: /([]|,\s)/g, maxLen: chunkSize }
   ];

-  const customRegLen = customReg.length;
   const checkIsCustomStep = (step: number) => step < customRegLen;
   const checkIsMarkdownSplit = (step: number) =>
     step >= customRegLen && step <= markdownIndex + customRegLen;
-  const checkForbidOverlap = (step: number) => step <= forbidOverlapIndex;
+  const checkForbidOverlap = (step: number) => step <= forbidOverlapIndex + customRegLen;

   // if use markdown title split, Separate record title
   const getSplitTexts = ({ text, step }: { text: string; step: number }) => {
@@ -301,6 +320,7 @@ const commonSplit = (props: SplitProps): SplitResponse => {
   const splitTexts = getSplitTexts({ text, step });
   const chunks: string[] = [];
+
   for (let i = 0; i < splitTexts.length; i++) {
     const item = splitTexts[i];
@@ -443,7 +463,6 @@ const commonSplit = (props: SplitProps): SplitResponse => {
 */
 export const splitText2Chunks = (props: SplitProps): SplitResponse => {
   let { text = '' } = props;
-  const start = Date.now();
   const splitWithCustomSign = text.split(CUSTOM_SPLIT_SIGN);
   const splitResult = splitWithCustomSign.map((item) => {
View File

@@ -130,9 +130,11 @@ export type SystemEnvType = {
   vectorMaxProcess: number;
   qaMaxProcess: number;
   vlmMaxProcess: number;
-  hnswEfSearch: number;
   tokenWorkers: number; // token count max worker
+  hnswEfSearch: number;
+  hnswMaxScanTuples: number;
   oneapiUrl?: string;
   chatApiKey?: string;

View File

@@ -2,6 +2,248 @@ import { type PromptTemplateItem } from '../type.d';
 import { i18nT } from '../../../../web/i18n/utils';
 import { getPromptByVersion } from './utils';
export const Prompt_userQuotePromptList: PromptTemplateItem[] = [
{
title: i18nT('app:template.standard_template'),
desc: '',
value: {
['4.9.7']: `## 任务描述
你是一个知识库回答助手,可以使用 <Cites></Cites> 中的内容作为你本次回答的参考。
同时,为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记,标识参考了哪些内容。
## 追溯展示规则
- 使用 [id](CITE) 的格式来引用 <Cites></Cites> 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在 **每段话结尾** 自然地整合引用。例如: "Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)。"。
- 每段话**至少包含一个引用**,多个引用时按顺序排列,例如:"Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)[67e517e74767063e882d6862](CITE)。\n 它的特点是非常轻量[67e517e74767063e882d6863](CITE)。"
- 不要把示例作为知识点。
- 不要伪造 id返回的 id 必须都存在 <Cites></Cites> 中!
## 通用规则
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Cites></Cites> 获取的知识。
- 保持答案与 <Cites></Cites> 中描述的一致。
- 使用 Markdown 语法优化回答格式。尤其是图片、表格、序列号等内容,需严格完整输出。
- 使用与问题相同的语言回答。
<Cites>
{{quote}}
</Cites>
## 用户问题
{{question}}
## 回答
`
}
},
{
title: i18nT('app:template.qa_template'),
desc: '',
value: {
['4.9.7']: `## 任务描述
作为一个问答助手,你会使用 <QA></QA> 标记中的提供的数据对进行内容回答。
## 回答要求
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <Answer></Answer> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 <QA></QA> 获取的知识,只需要回复答案。
- 使用与问题相同的语言回答。
<QA>
{{quote}}
</QA>
## 用户问题
{{question}}
## 回答
`
}
},
{
title: i18nT('app:template.standard_strict'),
desc: '',
value: {
['4.9.7']: `## 任务描述
你是一个知识库回答助手,可以使用 <Cites></Cites> 中的内容作为你本次回答的参考。
同时,为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记,标识参考了哪些内容。
## 追溯展示规则
- 使用 [id](CITE) 的格式来引用 <Cites></Cites> 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在 **每段话结尾** 自然地整合引用。例如: "Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)。"。
- 每段话**至少包含一个引用**,多个引用时按顺序排列,例如:"Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)[67e517e74767063e882d6862](CITE)。\n 它的特点是非常轻量[67e517e74767063e882d6863](CITE)。"
- 不要把示例作为知识点。
- 不要伪造 id返回的 id 必须都存在 <Cites></Cites> 中!
## 通用规则
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Cites></Cites> 获取的知识。
- 保持答案与 <Cites></Cites> 中描述的一致。
- 使用 Markdown 语法优化回答格式。尤其是图片、表格、序列号等内容,需严格完整输出。
- 使用与问题相同的语言回答。
## 严格要求
你只能使用 <Cites></Cites> 标记中的内容作为参考,不能使用自身的知识,并且回答的内容需严格与 <Cites></Cites> 中的内容一致。
<Cites>
{{quote}}
</Cites>
## 用户问题
{{question}}
## 回答
`
}
},
{
title: i18nT('app:template.hard_strict'),
desc: '',
value: {
['4.9.7']: `## 任务描述
作为一个问答助手,你会使用 <QA></QA> 标记中的提供的数据对进行内容回答。
## 回答要求
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <Answer></Answer> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 <QA></QA> 获取的知识,只需要回复答案。
- 使用与问题相同的语言回答。
## 严格要求
你只能使用 <QA></QA> 标记中的内容作为参考,不能使用自身的知识,并且回答的内容需严格与 <QA></QA> 中的内容一致。
<QA>
{{quote}}
</QA>
## 用户问题
{{question}}
## 回答
`
}
}
];
export const Prompt_systemQuotePromptList: PromptTemplateItem[] = [
{
title: i18nT('app:template.standard_template'),
desc: '',
value: {
['4.9.7']: `## 任务描述
你是一个知识库回答助手,可以使用 <Cites></Cites> 中的内容作为你本次回答的参考。
同时,为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记,标识参考了哪些内容。
## 追溯展示规则
- 使用 [id](CITE) 的格式来引用 <Cites></Cites> 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在 **每段话结尾** 自然地整合引用。例如: "Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)。"。
- 每段话**至少包含一个引用**,多个引用时按顺序排列,例如:"Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)[67e517e74767063e882d6862](CITE)。\n 它的特点是非常轻量[67e517e74767063e882d6863](CITE)。"
- 不要把示例作为知识点。
- 不要伪造 id返回的 id 必须都存在 <Cites></Cites> 中!
## 通用规则
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Cites></Cites> 获取的知识。
- 保持答案与 <Cites></Cites> 中描述的一致。
- 使用 Markdown 语法优化回答格式。尤其是图片、表格、序列号等内容,需严格完整输出。
- 使用与问题相同的语言回答。
<Cites>
{{quote}}
</Cites>`
}
},
{
title: i18nT('app:template.qa_template'),
desc: '',
value: {
['4.9.8']: `## 任务描述
作为一个问答助手,你会使用 <QA></QA> 标记中的提供的数据对进行内容回答。
## 回答要求
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <Answer></Answer> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 <QA></QA> 获取的知识,只需要回复答案。
- 使用与问题相同的语言回答。
<QA>
{{quote}}
</QA>`
}
},
{
title: i18nT('app:template.standard_strict'),
desc: '',
value: {
['4.9.7']: `## 任务描述
你是一个知识库回答助手,可以使用 <Cites></Cites> 中的内容作为你本次回答的参考。
同时,为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记,标识参考了哪些内容。
## 追溯展示规则
- 使用 [id](CITE) 的格式来引用 <Cites></Cites> 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在 **每段话结尾** 自然地整合引用。例如: "Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)。"。
- 每段话**至少包含一个引用**,多个引用时按顺序排列,例如:"Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)[67e517e74767063e882d6862](CITE)。\n 它的特点是非常轻量[67e517e74767063e882d6863](CITE)。"
- 不要把示例作为知识点。
- 不要伪造 id返回的 id 必须都存在 <Cites></Cites> 中!
## 通用规则
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Cites></Cites> 获取的知识。
- 保持答案与 <Cites></Cites> 中描述的一致。
- 使用 Markdown 语法优化回答格式。尤其是图片、表格、序列号等内容,需严格完整输出。
- 使用与问题相同的语言回答。
## 严格要求
你只能使用 <Cites></Cites> 标记中的内容作为参考,不能使用自身的知识,并且回答的内容需严格与 <Cites></Cites> 中的内容一致。
<Cites>
{{quote}}
</Cites>`
}
},
{
title: i18nT('app:template.hard_strict'),
desc: '',
value: {
['4.9.7']: `## 任务描述
作为一个问答助手,你会使用 <QA></QA> 标记中的提供的数据对进行内容回答。
## 回答要求
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <Answer></Answer> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 <QA></QA> 获取的知识,只需要回复答案。
- 使用与问题相同的语言回答。
## 严格要求
你只能使用 <QA></QA> 标记中的内容作为参考,不能使用自身的知识,并且回答的内容需严格与 <QA></QA> 中的内容一致。
<QA>
{{quote}}
</QA>`
}
}
];
 export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
   {
     title: i18nT('app:template.standard_template'),
@@ -10,11 +252,6 @@ export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
       ['4.9.7']: `{
   "id": "{{id}}",
   "sourceName": "{{source}}",
-  "content": "{{q}}\n{{a}}"
-}
-`,
-      ['4.9.2']: `{
-  "sourceName": "{{source}}",
   "updateTime": "{{updateTime}}",
   "content": "{{q}}\n{{a}}"
 }
@@ -25,7 +262,7 @@ export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
     title: i18nT('app:template.qa_template'),
     desc: i18nT('app:template.qa_template_des'),
     value: {
-      ['4.9.2']: `<Question>
+      ['4.9.7']: `<Question>
 {{q}}
 </Question>
 <Answer>
@@ -40,11 +277,6 @@ export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
       ['4.9.7']: `{
   "id": "{{id}}",
   "sourceName": "{{source}}",
-  "content": "{{q}}\n{{a}}"
-}
-`,
-      ['4.9.2']: `{
-  "sourceName": "{{source}}",
   "updateTime": "{{updateTime}}",
   "content": "{{q}}\n{{a}}"
 }
@@ -55,7 +287,7 @@ export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
     title: i18nT('app:template.hard_strict'),
     desc: i18nT('app:template.hard_strict_des'),
     value: {
-      ['4.9.2']: `<Question>
+      ['4.9.7']: `<Question>
 {{q}}
 </Question>
 <Answer>
@@ -64,263 +296,12 @@ export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
   }
 }
 ];

 export const getQuoteTemplate = (version?: string) => {
   const defaultTemplate = Prompt_QuoteTemplateList[0].value;
   return getPromptByVersion(version, defaultTemplate);
 };
export const Prompt_userQuotePromptList: PromptTemplateItem[] = [
{
title: i18nT('app:template.standard_template'),
desc: '',
value: {
['4.9.7']: `使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
回答要求:
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
- 使用 [id](CITE) 格式来引用<Reference></Reference>中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在每段结尾自然地整合引用。例如: "FastGPT 是一个基于大语言模型(LLM)的知识库问答系统[67e517e74767063e882d6861](CITE)。"
- 每段至少包含一个引用,也可根据内容需要加入多个引用,按顺序排列。`,
['4.9.2']: `使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
回答要求:
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
问题:"""{{question}}"""`
}
},
{
title: i18nT('app:template.qa_template'),
desc: '',
value: {
['4.9.2']: `使用 <QA></QA> 标记中的问答对进行回答。
<QA>
{{quote}}
</QA>
回答要求:
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <答案></答案> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 QA 获取的知识,只需要回复答案。
问题:"""{{question}}"""`
}
},
{
title: i18nT('app:template.standard_strict'),
desc: '',
value: {
['4.9.7']: `忘记你已有的知识,仅使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
思考流程:
1. 判断问题是否与 <Reference></Reference> 标记中的内容有关。
2. 如果有关,你按下面的要求回答。
3. 如果无关,你直接拒绝回答本次问题。
回答要求:
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
- 使用 [id](CITE) 格式来引用<Reference></Reference>中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在每段结尾自然地整合引用。例如: "FastGPT 是一个基于大语言模型(LLM)的知识库问答系统[67e517e74767063e882d6861](CITE)。"
- 每段至少包含一个引用,也可根据内容需要加入多个引用,按顺序排列。
问题:"""{{question}}"""`,
['4.9.2']: `忘记你已有的知识,仅使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
思考流程:
1. 判断问题是否与 <Reference></Reference> 标记中的内容有关。
2. 如果有关,你按下面的要求回答。
3. 如果无关,你直接拒绝回答本次问题。
回答要求:
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
问题:"""{{question}}"""`
}
},
{
title: i18nT('app:template.hard_strict'),
desc: '',
value: {
['4.9.2']: `忘记你已有的知识,仅使用 <QA></QA> 标记中的问答对进行回答。
<QA>
{{quote}}
</QA>
思考流程:
1. 判断问题是否与 <QA></QA> 标记中的内容有关。
2. 如果无关,你直接拒绝回答本次问题。
3. 判断是否有相近或相同的问题。
4. 如果有相同的问题,直接输出对应答案。
5. 如果只有相近的问题,请把相近的问题和答案一起输出。
回答要求:
- 如果没有相关的问答对,你需要澄清。
- 回答的内容应尽可能与 <QA></QA> 标记中的内容一致。
- 避免提及你是从 QA 获取的知识,只需要回复答案。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
问题:"""{{question}}"""`
}
}
];
export const Prompt_systemQuotePromptList: PromptTemplateItem[] = [
{
title: i18nT('app:template.standard_template'),
desc: '',
value: {
['4.9.7']: `使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
回答要求:
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
- 使用 [id](CITE) 格式来引用<Reference></Reference>中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在每段结尾自然地整合引用。例如: "FastGPT 是一个基于大语言模型(LLM)的知识库问答系统[67e517e74767063e882d6861](CITE)。"
- 每段至少包含一个引用,也可根据内容需要加入多个引用,按顺序排列。`,
['4.9.2']: `使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
回答要求:
- 如果你不清楚答案,你需要澄清。
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。`
}
},
{
title: i18nT('app:template.qa_template'),
desc: '',
value: {
['4.9.2']: `使用 <QA></QA> 标记中的问答对进行回答。
<QA>
{{quote}}
</QA>
回答要求:
- 选择其中一个或多个问答对进行回答。
- 回答的内容应尽可能与 <答案></答案> 中的内容一致。
- 如果没有相关的问答对,你需要澄清。
- 避免提及你是从 QA 获取的知识,只需要回复答案。`
}
},
{
title: i18nT('app:template.standard_strict'),
desc: '',
value: {
['4.9.7']: `忘记你已有的知识,仅使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
思考流程:
1. 判断问题是否与 <Reference></Reference> 标记中的内容有关。
2. 如果有关,你按下面的要求回答。
3. 如果无关,你直接拒绝回答本次问题。
回答要求:
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。
- 使用 [id](CITE) 格式来引用<Reference></Reference>中的知识,其中 CITE 是固定常量, id 为引文中的 id。
- 在每段结尾自然地整合引用。例如: "FastGPT 是一个基于大语言模型(LLM)的知识库问答系统[67e517e74767063e882d6861](CITE)。"
- 每段至少包含一个引用,也可根据内容需要加入多个引用,按顺序排列。
问题:"""{{question}}"""`,
['4.9.2']: `忘记你已有的知识,仅使用 <Reference></Reference> 标记中的内容作为本次对话的参考:
<Reference>
{{quote}}
</Reference>
思考流程:
1. 判断问题是否与 <Reference></Reference> 标记中的内容有关。
2. 如果有关,你按下面的要求回答。
3. 如果无关,你直接拒绝回答本次问题。
回答要求:
- 避免提及你是从 <Reference></Reference> 获取的知识。
- 保持答案与 <Reference></Reference> 中描述的一致。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。`
}
},
{
title: i18nT('app:template.hard_strict'),
desc: '',
value: {
['4.9.2']: `忘记你已有的知识,仅使用 <QA></QA> 标记中的问答对进行回答。
<QA>
{{quote}}
</QA>
思考流程:
1. 判断问题是否与 <QA></QA> 标记中的内容有关。
2. 如果无关,你直接拒绝回答本次问题。
3. 判断是否有相近或相同的问题。
4. 如果有相同的问题,直接输出对应答案。
5. 如果只有相近的问题,请把相近的问题和答案一起输出。
回答要求:
- 如果没有相关的问答对,你需要澄清。
- 回答的内容应尽可能与 <QA></QA> 标记中的内容一致。
- 避免提及你是从 QA 获取的知识,只需要回复答案。
- 使用 Markdown 语法优化回答格式。
- 使用与问题相同的语言回答。`
}
}
];
 export const getQuotePrompt = (version?: string, role: 'user' | 'system' = 'user') => {
   const quotePromptTemplates =
     role === 'user' ? Prompt_userQuotePromptList : Prompt_systemQuotePromptList;
@@ -333,7 +314,7 @@ export const getQuotePrompt = (version?: string, role: 'user' | 'system' = 'user
 // Document quote prompt
 export const getDocumentQuotePrompt = (version?: string) => {
   const promptMap = {
-    ['4.9.2']: `将 <FilesContent></FilesContent> 中的内容作为本次对话的参考:
+    ['4.9.7']: `将 <FilesContent></FilesContent> 中的内容作为本次对话的参考:
 <FilesContent>
 {{quote}}
 </FilesContent>

View File

@@ -1,14 +1,19 @@
 export const getDatasetSearchToolResponsePrompt = () => {
   return `## Role
-你是一个知识库回答助手,可以 "quotes" 中的内容作为本次对话的参考。为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记。
+你是一个知识库回答助手,可以 "cites" 中的内容作为本次对话的参考。为了使回答结果更加可信并且可追溯,你需要在每段话结尾添加引用标记,标识参考了哪些内容
-## Rules
+## 追溯展示规则
+- 使用 **[id](CITE)** 格式来引用 "cites" 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
+- 在 **每段话结尾** 自然地整合引用。例如: "Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)。"。
+- 每段话**至少包含一个引用**,多个引用时按顺序排列,例如:"Nginx是一款轻量级的Web服务器、反向代理服务器[67e517e74767063e882d6861](CITE)[67e517e74767063e882d6862](CITE)。\n 它的特点是非常轻量[67e517e74767063e882d6863](CITE)。"
+- 不要把示例作为知识点。
+- 不要伪造 id,返回的 id 必须都存在 cites 中!
+## 通用规则
 - 如果你不清楚答案,你需要澄清。
-- 避免提及你是从 "quotes" 获取的知识。
+- 避免提及你是从 "cites" 获取的知识。
-- 保持答案与 "quotes" 中描述的一致。
+- 保持答案与 "cites" 中描述的一致。
 - 使用 Markdown 语法优化回答格式。尤其是图片、表格、序列号等内容,需严格完整输出。
-- 使用与问题相同的语言回答。
-- 使用 [id](CITE) 格式来引用 "quotes" 中的知识,其中 CITE 是固定常量, id 为引文中的 id。
-- 在每段话结尾自然地整合引用。例如: "FastGPT 是一个基于大语言模型(LLM)的知识库问答系统[67e517e74767063e882d6861](CITE)。"
-- 每段话至少包含一个引用,也可根据内容需要加入多个引用,按顺序排列。`;
+- 使用与问题相同的语言回答。`;
 };

View File

@@ -60,5 +60,3 @@ export enum AppTemplateTypeEnum {
   // special type
   contribute = 'contribute'
 }
-export const defaultDatasetMaxTokens = 16000;

View File

@@ -5,7 +5,7 @@ import {
   FlowNodeTypeEnum
 } from '../../workflow/node/constant';
 import { nanoid } from 'nanoid';
-import { type ToolType } from '../type';
+import { type McpToolConfigType } from '../type';
 import { i18nT } from '../../../../web/i18n/utils';
 import { type RuntimeNodeItemType } from '../../workflow/runtime/type';
@@ -16,7 +16,7 @@ export const getMCPToolSetRuntimeNode = ({
   avatar
 }: {
   url: string;
-  toolList: ToolType[];
+  toolList: McpToolConfigType[];
   name?: string;
   avatar?: string;
 }): RuntimeNodeItemType => {
@@ -45,7 +45,7 @@ export const getMCPToolRuntimeNode = ({
   url,
   avatar = 'core/app/type/mcpToolsFill'
 }: {
-  tool: ToolType;
+  tool: McpToolConfigType;
   url: string;
   avatar?: string;
 }): RuntimeNodeItemType => {
@@ -65,7 +65,7 @@ export const getMCPToolRuntimeNode = ({
   ...Object.entries(tool.inputSchema?.properties || {}).map(([key, value]) => ({
     key,
     label: key,
-    valueType: value.type as WorkflowIOValueTypeEnum,
+    valueType: value.type as WorkflowIOValueTypeEnum, // TODO: a type mapping is needed here
     description: value.description,
     toolDescription: value.description || key,
     required: tool.inputSchema?.required?.includes(key) || false,

View File

@@ -16,16 +16,6 @@ import { FlowNodeInputTypeEnum } from '../../core/workflow/node/constant';
 import type { WorkflowTemplateBasicType } from '@fastgpt/global/core/workflow/type';
 import type { SourceMemberType } from '../../support/user/type';
-export type ToolType = {
-  name: string;
-  description: string;
-  inputSchema: {
-    type: string;
-    properties?: Record<string, { type: string; description?: string }>;
-    required?: string[];
-  };
-};
 export type AppSchema = {
   _id: string;
   parentId?: ParentIdType;
@@ -117,6 +107,16 @@ export type AppSimpleEditFormType = {
   chatConfig: AppChatConfigType;
 };
+export type McpToolConfigType = {
+  name: string;
+  description: string;
+  inputSchema: {
+    type: string;
+    properties?: Record<string, { type: string; description?: string }>;
+    required?: string[];
+  };
+};
 /* app chat config type */
 export type AppChatConfigType = {
   welcomeText?: string;

View File

@@ -9,6 +9,7 @@ import { type WorkflowTemplateBasicType } from '../workflow/type';
 import { AppTypeEnum } from './constants';
 import { AppErrEnum } from '../../common/error/code/app';
 import { PluginErrEnum } from '../../common/error/code/plugin';
+import { i18nT } from '../../../web/i18n/utils';
 export const getDefaultAppForm = (): AppSimpleEditFormType => {
   return {
@@ -189,7 +190,7 @@ export const getAppType = (config?: WorkflowTemplateBasicType | AppSimpleEditFor
   return '';
 };
-export const checkAppUnExistError = (error?: string) => {
+export const formatToolError = (error?: string) => {
   const unExistError: Array<string> = [
     AppErrEnum.unAuthApp,
     AppErrEnum.unExist,
@@ -197,9 +198,9 @@ export const checkAppUnExistError = (error?: string) => {
     PluginErrEnum.unExist
   ];
-  if (!!error && unExistError.includes(error)) {
-    return error;
+  if (error && unExistError.includes(error)) {
+    return i18nT('app:un_auth');
   } else {
-    return undefined;
+    return error;
   }
 };
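The rename from checkAppUnExistError to formatToolError also inverts the return contract: known "resource missing" codes now collapse to one translated message, and any other error passes through unchanged. A minimal runnable sketch of that behavior, using stand-in string codes and an identity i18nT in place of the real enums and i18n layer (both are assumptions here):

```typescript
// Hypothetical stand-ins for the AppErrEnum / PluginErrEnum values.
const unExistErrorCodes = ['unAuthApp', 'appUnExist', 'pluginUnExist'];
// The real i18nT resolves translation keys; the identity function stands in here.
const i18nT = (key: string) => key;

// Mirrors the new formatToolError: known "missing resource" codes collapse to
// a single translated label, everything else is returned as-is.
const formatToolError = (error?: string) => {
  if (error && unExistErrorCodes.includes(error)) {
    return i18nT('app:un_auth');
  }
  return error;
};
```

The old helper returned the raw code (or undefined); callers now receive a display-ready message for the "not found / unauthorized" family and the original error otherwise.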

View File

@@ -26,6 +26,7 @@ export type ChatSchema = {
   teamId: string;
   tmbId: string;
   appId: string;
+  createTime: Date;
   updateTime: Date;
   title: string;
   customTitle: string;
@@ -112,6 +113,7 @@ export type ChatItemSchema = (UserChatItemType | SystemChatItemType | AIChatItem
   appId: string;
   time: Date;
   durationSeconds?: number;
+  errorMsg?: string;
 };
 export type AdminFbkType = {
@@ -143,6 +145,7 @@ export type ChatSiteItemType = (UserChatItemType | SystemChatItemType | AIChatIt
   responseData?: ChatHistoryItemResType[];
   time?: Date;
   durationSeconds?: number;
+  errorMsg?: string;
 } & ChatBoxInputType &
   ResponseTagItemType;

View File

@@ -1,9 +1,11 @@
-import type { DatasetDataIndexItemType, DatasetSchemaType } from './type';
+import type { ChunkSettingsType, DatasetDataIndexItemType, DatasetSchemaType } from './type';
 import type {
   DatasetCollectionTypeEnum,
   DatasetCollectionDataProcessModeEnum,
   ChunkSettingModeEnum,
-  DataChunkSplitModeEnum
+  DataChunkSplitModeEnum,
+  ChunkTriggerConfigTypeEnum,
+  ParagraphChunkAIModeEnum
 } from './constants';
 import type { LLMModelItemType } from '../ai/model.d';
 import type { ParentIdType } from 'common/parentFolder/type';
@@ -32,26 +34,16 @@ export type DatasetUpdateBody = {
 };
 /* ================= collection ===================== */
-export type DatasetCollectionChunkMetadataType = {
+// Input + store params
+type DatasetCollectionStoreDataType = ChunkSettingsType & {
   parentId?: string;
-  customPdfParse?: boolean;
-  trainingType?: DatasetCollectionDataProcessModeEnum;
-  imageIndex?: boolean;
-  autoIndexes?: boolean;
-  chunkSettingMode?: ChunkSettingModeEnum;
-  chunkSplitMode?: DataChunkSplitModeEnum;
-  chunkSize?: number;
-  indexSize?: number;
-  chunkSplitter?: string;
-  qaPrompt?: string;
   metadata?: Record<string, any>;
+  customPdfParse?: boolean;
 };
 // create collection params
-export type CreateDatasetCollectionParams = DatasetCollectionChunkMetadataType & {
+export type CreateDatasetCollectionParams = DatasetCollectionStoreDataType & {
   datasetId: string;
   name: string;
   type: DatasetCollectionTypeEnum;
@@ -72,7 +64,7 @@ export type CreateDatasetCollectionParams = DatasetCollectionChunkMetadataType &
   nextSyncTime?: Date;
 };
-export type ApiCreateDatasetCollectionParams = DatasetCollectionChunkMetadataType & {
+export type ApiCreateDatasetCollectionParams = DatasetCollectionStoreDataType & {
   datasetId: string;
   tags?: string[];
 };
@@ -90,7 +82,7 @@ export type ApiDatasetCreateDatasetCollectionParams = ApiCreateDatasetCollection
 export type FileIdCreateDatasetCollectionParams = ApiCreateDatasetCollectionParams & {
   fileId: string;
 };
-export type reTrainingDatasetFileCollectionParams = DatasetCollectionChunkMetadataType & {
+export type reTrainingDatasetFileCollectionParams = DatasetCollectionStoreDataType & {
   datasetId: string;
   collectionId: string;
 };
@@ -147,6 +139,7 @@ export type PushDatasetDataProps = {
   collectionId: string;
   data: PushDatasetDataChunkProps[];
   trainingType?: DatasetCollectionDataProcessModeEnum;
+  indexSize?: number;
   autoIndexes?: boolean;
   imageIndex?: boolean;
   prompt?: string;

View File

@@ -120,6 +120,8 @@ export const DatasetCollectionSyncResultMap = {
 export enum DatasetCollectionDataProcessModeEnum {
   chunk = 'chunk',
   qa = 'qa',
+  backup = 'backup',
   auto = 'auto' // abandon
 }
 export const DatasetCollectionDataProcessModeMap = {
@@ -131,21 +133,35 @@ export const DatasetCollectionDataProcessModeMap = {
     label: i18nT('common:core.dataset.training.QA mode'),
     tooltip: i18nT('common:core.dataset.import.QA Import Tip')
   },
+  [DatasetCollectionDataProcessModeEnum.backup]: {
+    label: i18nT('dataset:backup_mode'),
+    tooltip: i18nT('dataset:backup_mode')
+  },
   [DatasetCollectionDataProcessModeEnum.auto]: {
     label: i18nT('common:core.dataset.training.Auto mode'),
     tooltip: i18nT('common:core.dataset.training.Auto mode Tip')
   }
 };
+export enum ChunkTriggerConfigTypeEnum {
+  minSize = 'minSize',
+  forceChunk = 'forceChunk',
+  maxSize = 'maxSize'
+}
 export enum ChunkSettingModeEnum {
   auto = 'auto',
   custom = 'custom'
 }
 export enum DataChunkSplitModeEnum {
+  paragraph = 'paragraph',
   size = 'size',
   char = 'char'
 }
+export enum ParagraphChunkAIModeEnum {
+  auto = 'auto',
+  force = 'force'
+}
 /* ------------ data -------------- */
@@ -154,7 +170,6 @@ export enum ImportDataSourceEnum {
   fileLocal = 'fileLocal',
   fileLink = 'fileLink',
   fileCustom = 'fileCustom',
-  csvTable = 'csvTable',
   externalFile = 'externalFile',
   apiDataset = 'apiDataset',
   reTraining = 'reTraining'

View File

@@ -32,7 +32,7 @@ export const DatasetDataIndexMap: Record<
     color: 'red'
   },
   [DatasetDataIndexTypeEnum.image]: {
-    label: i18nT('common:data_index_image'),
+    label: i18nT('dataset:data_index_image'),
     color: 'purple'
   }
 };

View File

@@ -118,9 +118,8 @@ export const computeChunkSize = (params: {
     return getLLMMaxChunkSize(params.llmModel);
   }
-  return Math.min(params.chunkSize || chunkAutoChunkSize, getLLMMaxChunkSize(params.llmModel));
+  return Math.min(params.chunkSize ?? chunkAutoChunkSize, getLLMMaxChunkSize(params.llmModel));
 };
 export const computeChunkSplitter = (params: {
   chunkSettingMode?: ChunkSettingModeEnum;
   chunkSplitMode?: DataChunkSplitModeEnum;
@@ -129,8 +128,21 @@ export const computeChunkSplitter = (params: {
   if (params.chunkSettingMode === ChunkSettingModeEnum.auto) {
     return undefined;
   }
-  if (params.chunkSplitMode === DataChunkSplitModeEnum.size) {
+  if (params.chunkSplitMode !== DataChunkSplitModeEnum.char) {
     return undefined;
   }
   return params.chunkSplitter;
 };
+export const computeParagraphChunkDeep = (params: {
+  chunkSettingMode?: ChunkSettingModeEnum;
+  chunkSplitMode?: DataChunkSplitModeEnum;
+  paragraphChunkDeep?: number;
+}) => {
+  if (params.chunkSettingMode === ChunkSettingModeEnum.auto) {
+    return 5;
+  }
+  if (params.chunkSplitMode === DataChunkSplitModeEnum.paragraph) {
+    return params.paragraphChunkDeep;
+  }
+  return 0;
+};
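Taken together, the two helpers above gate the custom settings: a user-supplied splitter only survives char mode, and paragraph depth is only honored in paragraph mode (with 5 as the auto-mode default). A standalone sketch that re-declares the enums locally so the branching can be exercised in isolation:

```typescript
// Local re-declarations so the snippet runs on its own.
enum ChunkSettingModeEnum { auto = 'auto', custom = 'custom' }
enum DataChunkSplitModeEnum { paragraph = 'paragraph', size = 'size', char = 'char' }

// A custom splitter is only honored for char-based splitting; auto mode and
// paragraph/size modes ignore it.
const computeChunkSplitter = (params: {
  chunkSettingMode?: ChunkSettingModeEnum;
  chunkSplitMode?: DataChunkSplitModeEnum;
  chunkSplitter?: string;
}) => {
  if (params.chunkSettingMode === ChunkSettingModeEnum.auto) return undefined;
  if (params.chunkSplitMode !== DataChunkSplitModeEnum.char) return undefined;
  return params.chunkSplitter;
};

// Paragraph depth: auto mode uses the default depth of 5, paragraph mode uses
// the caller's value, and any other split mode disables paragraph splitting (0).
const computeParagraphChunkDeep = (params: {
  chunkSettingMode?: ChunkSettingModeEnum;
  chunkSplitMode?: DataChunkSplitModeEnum;
  paragraphChunkDeep?: number;
}) => {
  if (params.chunkSettingMode === ChunkSettingModeEnum.auto) return 5;
  if (params.chunkSplitMode === DataChunkSplitModeEnum.paragraph) return params.paragraphChunkDeep;
  return 0;
};
```

Note the behavior change hidden in the `!==` flip: previously only size mode dropped the splitter; now the new paragraph mode drops it too.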

View File

@@ -8,26 +8,42 @@ import type {
   DatasetStatusEnum,
   DatasetTypeEnum,
   SearchScoreTypeEnum,
-  TrainingModeEnum
+  TrainingModeEnum,
+  ChunkSettingModeEnum,
+  ChunkTriggerConfigTypeEnum
 } from './constants';
 import type { DatasetPermission } from '../../support/permission/dataset/controller';
-import { Permission } from '../../support/permission/controller';
 import type { APIFileServer, FeishuServer, YuqueServer } from './apiDataset';
 import type { SourceMemberType } from 'support/user/type';
 import type { DatasetDataIndexTypeEnum } from './data/constants';
-import type { ChunkSettingModeEnum } from './constants';
 export type ChunkSettingsType = {
-  trainingType: DatasetCollectionDataProcessModeEnum;
-  autoIndexes?: boolean;
+  trainingType?: DatasetCollectionDataProcessModeEnum;
+  // Chunk trigger
+  chunkTriggerType?: ChunkTriggerConfigTypeEnum;
+  chunkTriggerMinSize?: number; // maxSize from agent model, not store
+  // Data enhance
+  dataEnhanceCollectionName?: boolean; // Auto add collection name to data
+  // Index enhance
   imageIndex?: boolean;
-  chunkSettingMode?: ChunkSettingModeEnum;
+  autoIndexes?: boolean;
+  // Chunk setting
+  chunkSettingMode?: ChunkSettingModeEnum; // system preset / custom params
   chunkSplitMode?: DataChunkSplitModeEnum;
-  chunkSize?: number;
+  // Paragraph split
+  paragraphChunkAIMode?: ParagraphChunkAIModeEnum;
+  paragraphChunkDeep?: number; // Paragraph deep
+  paragraphChunkMinSize?: number; // Paragraph min size, if too small, it will merge
+  // Size split
+  chunkSize?: number; // chunk/qa chunk size, Paragraph max chunk size.
+  // Char split
+  chunkSplitter?: string; // chunk/qa chunk splitter
   indexSize?: number;
-  chunkSplitter?: string;
   qaPrompt?: string;
 };
@@ -66,7 +82,7 @@ export type DatasetSchemaType = {
   defaultPermission?: number;
 };
-export type DatasetCollectionSchemaType = {
+export type DatasetCollectionSchemaType = ChunkSettingsType & {
   _id: string;
   teamId: string;
   tmbId: string;
@@ -101,18 +117,7 @@ export type DatasetCollectionSchemaType = {
   // Parse settings
   customPdfParse?: boolean;
-  // Chunk settings
-  autoIndexes?: boolean;
-  imageIndex?: boolean;
   trainingType: DatasetCollectionDataProcessModeEnum;
-  chunkSettingMode?: ChunkSettingModeEnum;
-  chunkSplitMode?: DataChunkSplitModeEnum;
-  chunkSize?: number;
-  indexSize?: number;
-  chunkSplitter?: string;
-  qaPrompt?: string;
 };
 export type DatasetCollectionTagsSchemaType = {
@@ -175,6 +180,7 @@ export type DatasetTrainingSchemaType = {
   q: string;
   a: string;
   chunkIndex: number;
+  indexSize?: number;
   weight: number;
   indexes: Omit<DatasetDataIndexItemType, 'dataId'>[];
   retryCount: number;

View File

@@ -7,7 +7,7 @@ import type {
 } from '../../chat/type';
 import { NodeOutputItemType } from '../../chat/type';
 import type { FlowNodeInputItemType, FlowNodeOutputItemType } from '../type/io.d';
-import type { StoreNodeItemType } from '../type/node';
+import type { NodeToolConfigType, StoreNodeItemType } from '../type/node';
 import type { DispatchNodeResponseKeyEnum } from './constants';
 import type { StoreEdgeItemType } from '../type/edge';
 import type { NodeInputKeyEnum } from '../constants';
@@ -102,6 +102,9 @@ export type RuntimeNodeItemType = {
   pluginId?: string; // workflow id / plugin id
   version?: string;
+  // tool
+  toolConfig?: NodeToolConfigType;
 };
 export type RuntimeEdgeItemType = StoreEdgeItemType & {
@@ -114,7 +117,7 @@ export type DispatchNodeResponseType = {
   runningTime?: number;
   query?: string;
   textOutput?: string;
-  error?: Record<string, any>;
+  error?: Record<string, any> | string;
   customInputs?: Record<string, any>;
   customOutputs?: Record<string, any>;
   nodeInputs?: Record<string, any>;

View File

@@ -20,11 +20,17 @@ import { RuntimeNodeItemType } from '../runtime/type';
 import { PluginTypeEnum } from '../../plugin/constants';
 import { RuntimeEdgeItemType, StoreEdgeItemType } from './edge';
 import { NextApiResponse } from 'next';
-import { AppDetailType, AppSchema } from '../../app/type';
+import type { AppDetailType, AppSchema, McpToolConfigType } from '../../app/type';
 import type { ParentIdType } from 'common/parentFolder/type';
-import { AppTypeEnum } from 'core/app/constants';
+import { AppTypeEnum } from '../../app/constants';
 import type { WorkflowInteractiveResponseType } from '../template/system/interactive/type';
+export type NodeToolConfigType = {
+  mcpTool?: McpToolConfigType & {
+    url: string;
+  };
+};
 export type FlowNodeCommonType = {
   parentNodeId?: string;
   flowNodeType: FlowNodeTypeEnum; // render node card
@@ -46,8 +52,10 @@ export type FlowNodeCommonType = {
   // plugin data
   pluginId?: string;
   isFolder?: boolean;
+  // pluginType?: AppTypeEnum;
   pluginData?: PluginDataType;
+  // tool data
+  toolData?: NodeToolConfigType;
 };
 export type PluginDataType = {
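The new NodeToolConfigType above couples an MCP tool definition to the server URL it was loaded from. A self-contained sketch of the shape with a purely hypothetical example value (the tool name, URL, and schema below are illustrative, not from the repo):

```typescript
// Re-declared locally so the snippet stands alone.
type McpToolConfigType = {
  name: string;
  description: string;
  inputSchema: {
    type: string;
    properties?: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
};
// A node's tool config: the MCP tool description plus its server URL.
type NodeToolConfigType = {
  mcpTool?: McpToolConfigType & { url: string };
};

// Hypothetical example of what a runtime node's toolConfig could look like.
const example: NodeToolConfigType = {
  mcpTool: {
    name: 'getWeather',
    description: 'Query weather by city',
    inputSchema: {
      type: 'object',
      properties: { city: { type: 'string', description: 'City name' } },
      required: ['city']
    },
    url: 'https://example.com/mcp'
  }
};
```

Keeping the URL inside the tool config is what lets getMCPToolRuntimeNode dispatch a call without consulting the parent tool-set node.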

View File

@@ -6,12 +6,6 @@ import type {
 } from '../../core/dataset/search/controller';
 import type { AuthOpenApiLimitProps } from '../../support/openapi/auth';
 import type { CreateUsageProps, ConcatUsageProps } from '@fastgpt/global/support/wallet/usage/api';
-import type {
-  GetProApiDatasetFileContentParams,
-  GetProApiDatasetFileDetailParams,
-  GetProApiDatasetFileListParams,
-  GetProApiDatasetFilePreviewUrlParams
-} from '../../core/dataset/apiDataset/proApi';
 declare global {
   var textCensorHandler: (params: { text: string }) => Promise<{ code: number; message?: string }>;
@@ -19,16 +13,4 @@ declare global {
   var authOpenApiHandler: (data: AuthOpenApiLimitProps) => Promise<any>;
   var createUsageHandler: (data: CreateUsageProps) => any;
   var concatUsageHandler: (data: ConcatUsageProps) => any;
-  // API dataset
-  var getProApiDatasetFileList: (data: GetProApiDatasetFileListParams) => Promise<APIFileItem[]>;
-  var getProApiDatasetFileContent: (
-    data: GetProApiDatasetFileContentParams
-  ) => Promise<ApiFileReadContentResponse>;
-  var getProApiDatasetFilePreviewUrl: (
-    data: GetProApiDatasetFilePreviewUrlParams
-  ) => Promise<string>;
-  var getProApiDatasetFileDetail: (
-    data: GetProApiDatasetFileDetailParams
-  ) => Promise<ApiDatasetDetailResponse>;
 }

View File

@@ -210,15 +210,15 @@ export const readFileContentFromMongo = async ({
   tmbId,
   bucketName,
   fileId,
-  isQAImport = false,
-  customPdfParse = false
+  customPdfParse = false,
+  getFormatText
 }: {
   teamId: string;
   tmbId: string;
   bucketName: `${BucketNameEnum}`;
   fileId: string;
-  isQAImport?: boolean;
   customPdfParse?: boolean;
+  getFormatText?: boolean; // convert data to markdown format where possible
 }): Promise<{
   rawText: string;
   filename: string;
@@ -254,8 +254,8 @@ export const readFileContentFromMongo = async ({
   // Get raw text
   const { rawText } = await readRawContentByFileBuffer({
     customPdfParse,
+    getFormatText,
     extension,
-    isQAImport,
     teamId,
     tmbId,
     buffer: fileBuffers,

View File

@@ -16,6 +16,7 @@ export type readRawTextByLocalFileParams = {
   path: string;
   encoding: string;
   customPdfParse?: boolean;
+  getFormatText?: boolean;
   metadata?: Record<string, any>;
 };
 export const readRawTextByLocalFile = async (params: readRawTextByLocalFileParams) => {
@@ -27,8 +28,8 @@ export const readRawTextByLocalFile = async (params: readRawTextByLocalFileParam
   return readRawContentByFileBuffer({
     extension,
-    isQAImport: false,
     customPdfParse: params.customPdfParse,
+    getFormatText: params.getFormatText,
     teamId: params.teamId,
     tmbId: params.tmbId,
     encoding: params.encoding,
@@ -46,7 +47,7 @@ export const readRawContentByFileBuffer = async ({
   encoding,
   metadata,
   customPdfParse = false,
-  isQAImport = false
+  getFormatText = true
 }: {
   teamId: string;
   tmbId: string;
@@ -57,8 +58,10 @@ export const readRawContentByFileBuffer = async ({
   metadata?: Record<string, any>;
   customPdfParse?: boolean;
-  isQAImport: boolean;
-}): Promise<ReadFileResponse> => {
+  getFormatText?: boolean;
+}): Promise<{
+  rawText: string;
+}> => {
   const systemParse = () =>
     runWorker<ReadFileResponse>(WorkerNameEnum.readFile, {
       extension,
@@ -149,7 +152,7 @@ export const readRawContentByFileBuffer = async ({
     return await systemParse();
   })();
-  addLog.debug(`Parse file success, time: ${Date.now() - start}ms. Uploading file image.`);
+  addLog.debug(`Parse file success, time: ${Date.now() - start}ms. `);
   // markdown data format
   if (imageList) {
@@ -176,16 +179,7 @@ export const readRawContentByFileBuffer = async ({
     });
   }
-  if (['csv', 'xlsx'].includes(extension)) {
-    // qa data
-    if (isQAImport) {
-      rawText = rawText || '';
-    } else {
-      rawText = formatText || rawText;
-    }
-  }
+  addLog.debug(`Upload file success, time: ${Date.now() - start}ms`);
-  addLog.debug(`Upload file image success, time: ${Date.now() - start}ms`);
-  return { rawText, formatText, imageList };
+  return { rawText: getFormatText ? formatText || rawText : rawText };
 };
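With isQAImport gone, the csv/xlsx special-casing collapses into a single getFormatText flag: when true (the default) the markdown-formatted text is preferred with the raw text as fallback, when false callers get the raw text untouched. A sketch of just that selection step, with pickRawText as a hypothetical name for the inline expression in the return statement:

```typescript
// Hypothetical helper mirroring the new return expression of
// readRawContentByFileBuffer: choose between formatted and raw text.
const pickRawText = (opts: {
  rawText: string;        // raw parsed text
  formatText?: string;    // markdown-formatted variant, when the parser produced one
  getFormatText?: boolean; // defaults to true, matching the function signature
}) => {
  const { rawText, formatText, getFormatText = true } = opts;
  return getFormatText ? formatText || rawText : rawText;
};
```

Note the `formatText || rawText` fallback: parsers that produce no markdown variant still return usable text even when formatting is requested.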

View File

@@ -1,7 +1,10 @@
-import { getGlobalRedisCacheConnection } from './index';
+import { getGlobalRedisConnection } from './index';
 import { addLog } from '../system/log';
 import { retryFn } from '@fastgpt/global/common/system/utils';

+const redisPrefix = 'cache:';
+const getCacheKey = (key: string) => `${redisPrefix}${key}`;
+
 export enum CacheKeyEnum {
   team_vector_count = 'team_vector_count'
 }
@@ -13,12 +16,12 @@ export const setRedisCache = async (
 ) => {
   return await retryFn(async () => {
     try {
-      const redis = getGlobalRedisCacheConnection();
+      const redis = getGlobalRedisConnection();
       if (expireSeconds) {
-        await redis.set(key, data, 'EX', expireSeconds);
+        await redis.set(getCacheKey(key), data, 'EX', expireSeconds);
       } else {
-        await redis.set(key, data);
+        await redis.set(getCacheKey(key), data);
       }
     } catch (error) {
       addLog.error('Set cache error:', error);
@@ -28,11 +31,11 @@ export const setRedisCache = async (
 };

 export const getRedisCache = async (key: string) => {
-  const redis = getGlobalRedisCacheConnection();
-  return await retryFn(() => redis.get(key));
+  const redis = getGlobalRedisConnection();
+  return await retryFn(() => redis.get(getCacheKey(key)));
 };

 export const delRedisCache = async (key: string) => {
-  const redis = getGlobalRedisCacheConnection();
-  await retryFn(() => redis.del(key));
+  const redis = getGlobalRedisConnection();
+  await retryFn(() => redis.del(getCacheKey(key)));
 };
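This change moves cache-key namespacing out of the dedicated connection and into an explicit `getCacheKey` helper, so every cache entry lands under `cache:` inside the shared connection prefix. A minimal sketch of the pattern against an in-memory stand-in for Redis (the `Map` is an illustrative assumption, not the real client):

```typescript
// Namespace cache keys so they can be listed or invalidated as a group.
const redisPrefix = 'cache:';
const getCacheKey = (key: string) => `${redisPrefix}${key}`;

// In-memory stand-in for the Redis client used in the diff.
const store = new Map<string, string>();

function setCache(key: string, data: string): void {
  store.set(getCacheKey(key), data);
}

function getCache(key: string): string | undefined {
  return store.get(getCacheKey(key));
}

function delCache(key: string): void {
  store.delete(getCacheKey(key));
}
```

Because the prefix is applied in application code rather than via `keyPrefix` on a second connection, one shared connection can serve both cache and non-cache traffic.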

View File

@@ -27,17 +27,26 @@ export const newWorkerRedisConnection = () => {
   return redis;
 };

-export const getGlobalRedisCacheConnection = () => {
-  if (global.redisCache) return global.redisCache;
-  global.redisCache = new Redis(REDIS_URL, { keyPrefix: 'fastgpt:cache:' });
-  global.redisCache.on('connect', () => {
+export const FASTGPT_REDIS_PREFIX = 'fastgpt:';
+export const getGlobalRedisConnection = () => {
+  if (global.redisClient) return global.redisClient;
+  global.redisClient = new Redis(REDIS_URL, { keyPrefix: FASTGPT_REDIS_PREFIX });
+  global.redisClient.on('connect', () => {
     addLog.info('Redis connected');
   });
-  global.redisCache.on('error', (error) => {
+  global.redisClient.on('error', (error) => {
     addLog.error('Redis connection error', error);
   });
-  return global.redisCache;
+  return global.redisClient;
+};
+
+export const getAllKeysByPrefix = async (key: string) => {
+  const redis = getGlobalRedisConnection();
+  const keys = (await redis.keys(`${FASTGPT_REDIS_PREFIX}${key}:*`)).map((key) =>
+    key.replace(FASTGPT_REDIS_PREFIX, '')
+  );
+  return keys;
 };

View File

@@ -1,5 +1,5 @@
 import type Redis from 'ioredis';

 declare global {
-  var redisCache: Redis | null;
+  var redisClient: Redis | null;
 }

View File

@@ -10,6 +10,7 @@ let jieba: Jieba | undefined;
 })();

 const stopWords = new Set([
+  '\n',
   '--',
   '?',
   '“',
@@ -1519,8 +1520,7 @@ const stopWords = new Set([
 ]);

 export async function jiebaSplit({ text }: { text: string }) {
-  text = text.replace(/[#*`_~>[\](){}|]/g, '').replace(/\S*https?\S*/gi, '');
+  text = text.replace(/[#*`_~>[\](){}|]|\S*https?\S*/g, '').trim();

   const tokens = (await jieba!.cutAsync(text, true)) as string[];

   return (
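The two chained `replace` calls are merged into one alternation plus a trailing `trim()`. A quick standalone check of what the combined pattern strips before tokenization (the sample inputs are illustrative):

```typescript
// Strip markdown punctuation and URLs in a single regex pass, then trim,
// mirroring the merged pattern in the diff.
function cleanForTokenize(text: string): string {
  return text.replace(/[#*`_~>[\](){}|]|\S*https?\S*/g, '').trim();
}
```

Note one behavioral difference worth knowing: the old URL pattern used the `i` flag, while the merged pattern relies on lowercase `https?` only.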

View File

@@ -57,14 +57,19 @@ export const addLog = {
     level === LogLevelEnum.error && console.error(obj);

-    // store
+    // store log
     if (level >= STORE_LOG_LEVEL && connectionMongo.connection.readyState === 1) {
-      // store log
-      getMongoLog().create({
-        text: msg,
-        level,
-        metadata: obj
-      });
+      (async () => {
+        try {
+          await getMongoLog().create({
+            text: msg,
+            level,
+            metadata: obj
+          });
+        } catch (error) {
+          console.error('store log error', error);
+        }
+      })();
     }
   },
   debug(msg: string, obj?: Record<string, any>) {
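The log write becomes a self-invoking async function, so a failed Mongo insert is caught instead of surfacing as an unhandled rejection, and the caller is never blocked. The same fire-and-forget pattern in isolation (`flakyWrite` is a stand-in for the Mongo call, and `pending` exists only so the sketch can be awaited):

```typescript
// Fire-and-forget persistence: start the write, capture failures inside the
// async IIFE instead of letting an unhandled rejection bubble up.
let lastError = '';
const pending: Promise<void>[] = [];

async function flakyWrite(_doc: unknown): Promise<void> {
  throw new Error('mongo down'); // simulate a failing store
}

function storeLog(doc: unknown): void {
  pending.push(
    (async () => {
      try {
        await flakyWrite(doc);
      } catch (error) {
        // Report the failure without rethrowing, mirroring the diff's catch
        lastError = (error as Error).message;
      }
    })()
  );
}
```

Without the wrapping try/catch, the bare `getMongoLog().create(...)` in the old code produced a floating promise whose rejection nothing handled.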

View File

@@ -188,6 +188,7 @@ export class PgVectorCtrl {
     const results: any = await PgClient.query(
       `BEGIN;
         SET LOCAL hnsw.ef_search = ${global.systemEnv?.hnswEfSearch || 100};
+        SET LOCAL hnsw.max_scan_tuples = ${global.systemEnv?.hnswMaxScanTuples || 100000};
         SET LOCAL hnsw.iterative_scan = relaxed_order;
         WITH relaxed_results AS MATERIALIZED (
           select id, collection_id, vector <#> '[${vector}]' AS score
@@ -199,7 +200,7 @@ export class PgVectorCtrl {
         ) SELECT id, collection_id, score FROM relaxed_results ORDER BY score;
       COMMIT;`
     );
-    const rows = results?.[3]?.rows as PgSearchRawType[];
+    const rows = results?.[results.length - 2]?.rows as PgSearchRawType[];

     if (!Array.isArray(rows)) {
       return {
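Adding a `SET LOCAL` statement shifts every later result in the multi-statement response, which is why the hard-coded `results?.[3]` is replaced with `results.length - 2`: the `SELECT` is always the statement immediately before `COMMIT`. A sketch of why that indexing is robust (the array mocks a driver that returns one result per statement):

```typescript
// One result object per statement in a multi-statement query; the SELECT
// sits right before COMMIT, so its index is results.length - 2 regardless
// of how many SET LOCAL lines precede it.
type MockResult = { command: string; rows: Array<{ id: number }> };

function selectRows(results: MockResult[]): Array<{ id: number }> {
  return results[results.length - 2]?.rows ?? [];
}

const withThreeSettings: MockResult[] = [
  { command: 'BEGIN', rows: [] },
  { command: 'SET', rows: [] },
  { command: 'SET', rows: [] },
  { command: 'SET', rows: [] },
  { command: 'SELECT', rows: [{ id: 1 }] },
  { command: 'COMMIT', rows: [] }
];
```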

View File

@@ -78,7 +78,7 @@ export const createChatCompletion = async ({
   }

   body.model = modelConstantsData.model;
-  const formatTimeout = timeout ? timeout : body.stream ? 60000 : 600000;
+  const formatTimeout = timeout ? timeout : 600000;

   const ai = getAIApi({
     userKey,
     timeout: formatTimeout

View File

@@ -1,6 +1,54 @@
 {
   "provider": "Claude",
   "list": [
+    {
+      "model": "claude-sonnet-4-20250514",
+      "name": "claude-sonnet-4-20250514",
+      "maxContext": 200000,
+      "maxResponse": 8000,
+      "quoteMaxToken": 100000,
+      "maxTemperature": 1,
+      "showTopP": true,
+      "showStopSign": true,
+      "vision": true,
+      "toolChoice": true,
+      "functionCall": false,
+      "defaultSystemChatPrompt": "",
+      "datasetProcess": true,
+      "usedInClassify": true,
+      "customCQPrompt": "",
+      "usedInExtractFields": true,
+      "usedInQueryExtension": true,
+      "customExtractPrompt": "",
+      "usedInToolCall": true,
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm"
+    },
+    {
+      "model": "claude-opus-4-20250514",
+      "name": "claude-opus-4-20250514",
+      "maxContext": 200000,
+      "maxResponse": 4096,
+      "quoteMaxToken": 100000,
+      "maxTemperature": 1,
+      "showTopP": true,
+      "showStopSign": true,
+      "vision": true,
+      "toolChoice": true,
+      "functionCall": false,
+      "defaultSystemChatPrompt": "",
+      "datasetProcess": true,
+      "usedInClassify": true,
+      "customCQPrompt": "",
+      "usedInExtractFields": true,
+      "usedInQueryExtension": true,
+      "customExtractPrompt": "",
+      "usedInToolCall": true,
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm"
+    },
     {
       "model": "claude-3-7-sonnet-20250219",
       "name": "claude-3-7-sonnet-20250219",

View File

@@ -25,6 +25,30 @@
"showTopP": true, "showTopP": true,
"showStopSign": true "showStopSign": true
}, },
{
"model": "gemini-2.5-flash-preview-04-17",
"name": "gemini-2.5-flash-preview-04-17",
"maxContext": 1000000,
"maxResponse": 8000,
"quoteMaxToken": 60000,
"maxTemperature": 1,
"vision": true,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true
},
{ {
"model": "gemini-2.0-flash", "model": "gemini-2.0-flash",
"name": "gemini-2.0-flash", "name": "gemini-2.0-flash",

View File

@@ -18,15 +18,17 @@ import json5 from 'json5';
  */
 export const computedMaxToken = ({
   maxToken,
-  model
+  model,
+  min
 }: {
   maxToken?: number;
   model: LLMModelItemType;
+  min?: number;
 }) => {
   if (maxToken === undefined) return;

   maxToken = Math.min(maxToken, model.maxResponse);
-  return maxToken;
+  return Math.max(maxToken, min || 0);
 };

 // FastGPT temperature range: [0,10]; AI temperature ranges: [0,2] or (0,1]...
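`computedMaxToken` now clamps from both sides: the request is capped by the model's `maxResponse` and raised to an optional `min` floor, while `undefined` still passes straight through. A standalone version of the clamp (the `model` object is reduced to a bare `maxResponse` number for the sketch):

```typescript
// Clamp a requested completion budget into [min, maxResponse].
function computedMaxToken({
  maxToken,
  maxResponse,
  min
}: {
  maxToken?: number;
  maxResponse: number;
  min?: number;
}): number | undefined {
  if (maxToken === undefined) return; // no budget requested: leave it unset
  return Math.max(Math.min(maxToken, maxResponse), min ?? 0);
}
```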
@@ -135,12 +137,14 @@ export const llmStreamResponseToAnswerText = async (
     // Tool calls
     if (responseChoice?.tool_calls?.length) {
-      responseChoice.tool_calls.forEach((toolCall) => {
-        const index = toolCall.index;
+      responseChoice.tool_calls.forEach((toolCall, i) => {
+        const index = toolCall.index ?? i;

-        if (toolCall.id || callingTool) {
-          // An id marks a new tool call
-          if (toolCall.id) {
+        // Call new tool
+        const hasNewTool = toolCall?.function?.name || callingTool;
+        if (hasNewTool) {
+          // A function name marks a new tool call
+          if (toolCall?.function?.name) {
             callingTool = {
               name: toolCall.function?.name || '',
               arguments: toolCall.function?.arguments || ''
@@ -176,7 +180,7 @@ export const llmStreamResponseToAnswerText = async (
     }
   }

   return {
-    text: parseReasoningContent(answer)[1],
+    text: removeDatasetCiteText(parseReasoningContent(answer)[1], false),
     usage,
     toolCalls
   };
@@ -190,8 +194,9 @@ export const llmUnStreamResponseToAnswerText = async (
 }> => {
   const answer = response.choices?.[0]?.message?.content || '';
   const toolCalls = response.choices?.[0]?.message?.tool_calls;

   return {
-    text: answer,
+    text: removeDatasetCiteText(parseReasoningContent(answer)[1], false),
     usage: response.usage,
     toolCalls
   };
@@ -221,7 +226,9 @@ export const parseReasoningContent = (text: string): [string, string] => {
 };

 export const removeDatasetCiteText = (text: string, retainDatasetCite: boolean) => {
-  return retainDatasetCite ? text : text.replace(/\[([a-f0-9]{24})\](?:\([^\)]*\)?)?/g, '');
+  return retainDatasetCite
+    ? text.replace(/\[id\]\(CITE\)/g, '')
+    : text.replace(/\[([a-f0-9]{24})\](?:\([^\)]*\)?)?/g, '').replace(/\[id\]\(CITE\)/g, '');
 };
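The updated `removeDatasetCiteText` strips the literal `[id](CITE)` placeholder in both branches, and additionally removes `[<24-hex-id>](...)` citation markers when `retainDatasetCite` is false. A behavior check using the same two regexes (the sample ids are made up):

```typescript
// Same two regexes as the diff: 24-hex-char dataset cites, plus the literal
// "[id](CITE)" placeholder that is now stripped in both branches.
function removeDatasetCiteText(text: string, retainDatasetCite: boolean): string {
  return retainDatasetCite
    ? text.replace(/\[id\]\(CITE\)/g, '')
    : text.replace(/\[([a-f0-9]{24})\](?:\([^\)]*\)?)?/g, '').replace(/\[id\]\(CITE\)/g, '');
}
```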
// Parse llm stream part // Parse llm stream part
@@ -236,6 +243,12 @@ export const parseLLMStreamResponse = () => {
   let citeBuffer = '';
   const maxCiteBufferLength = 32; // "[Object](CITE)" is 32 characters long

+  // Buffer
+  let buffer_finishReason: CompletionFinishReason = null;
+  let buffer_usage: CompletionUsage = getLLMDefaultUsage();
+  let buffer_reasoningContent = '';
+  let buffer_content = '';
+
   /*
     parseThinkTag - only controls whether <think></think> is actively parsed here; if the API already parsed it, it is not parsed again.
     retainDatasetCite -
@@ -253,6 +266,7 @@ export const parseLLMStreamResponse = () => {
         };
         finish_reason?: CompletionFinishReason;
       }[];
+      usage?: CompletionUsage;
     };
     parseThinkTag?: boolean;
     retainDatasetCite?: boolean;
@@ -262,72 +276,71 @@ export const parseLLMStreamResponse = () => {
     responseContent: string;
     finishReason: CompletionFinishReason;
   } => {
+    const data = (() => {
+      buffer_usage = part.usage || buffer_usage;
+
     const finishReason = part.choices?.[0]?.finish_reason || null;
+    buffer_finishReason = finishReason || buffer_finishReason;
+
     const content = part.choices?.[0]?.delta?.content || '';
     // @ts-ignore
     const reasoningContent = part.choices?.[0]?.delta?.reasoning_content || '';
-    const isStreamEnd = !!finishReason;
+    const isStreamEnd = !!buffer_finishReason;

     // Parse think
     const { reasoningContent: parsedThinkReasoningContent, content: parsedThinkContent } = (() => {
       if (reasoningContent || !parseThinkTag) {
         isInThinkTag = false;
         return { reasoningContent, content };
       }

-      if (!content) {
-        return {
-          reasoningContent: '',
-          content: ''
-        };
-      }
-
       // Not in a think tag, or reasoningContent exists (already parsed by the API): return both as-is
       if (isInThinkTag === false) {
         return {
           reasoningContent: '',
           content
         };
       }

       // Detect whether this chunk starts a think tag
       if (isInThinkTag === undefined) {
         // Parse content think and answer
         startTagBuffer += content;
         // Too little content to parse yet
         if (startTagBuffer.length < thinkStartChars.length) {
           if (isStreamEnd) {
             const tmpContent = startTagBuffer;
             startTagBuffer = '';
             return {
               reasoningContent: '',
               content: tmpContent
             };
           }
           return {
             reasoningContent: '',
             content: ''
           };
         }

         if (startTagBuffer.startsWith(thinkStartChars)) {
           isInThinkTag = true;
           return {
             reasoningContent: startTagBuffer.slice(thinkStartChars.length),
             content: ''
           };
         }

         // No think tag matched: treat the buffered text as content
         isInThinkTag = false;
         return {
           reasoningContent: '',
           content: startTagBuffer
         };
       }

       // Confirmed think-tag content: return it while watching for </think> in real time
       /*
         How </think> is detected:
         buffer everything that looks like </think> until the full tag is matched or the buffer outgrows the tag length.
         The returned content covers these cases:
@@ -338,124 +351,145 @@ export const parseLLMStreamResponse = () => {
         </think>abc - full end-tag match
         k>abc - partial end-tag match
       */
       // endTagBuffer holds content suspected to be the end tag
       if (endTagBuffer) {
         endTagBuffer += content;
         if (endTagBuffer.includes(thinkEndChars)) {
           isInThinkTag = false;
           const answer = endTagBuffer.slice(thinkEndChars.length);
           return {
             reasoningContent: '',
             content: answer
           };
         } else if (endTagBuffer.length >= thinkEndChars.length) {
           // The buffer outgrew the end tag without matching </think>: the guess failed, still in the think phase
           const tmp = endTagBuffer;
           endTagBuffer = '';
           return {
             reasoningContent: tmp,
             content: ''
           };
         }
         return {
           reasoningContent: '',
           content: ''
         };
       } else if (content.includes(thinkEndChars)) {
         // Full </think> match: the think phase ends here
         isInThinkTag = false;
         const [think, answer] = content.split(thinkEndChars);
         return {
           reasoningContent: think,
           content: answer
         };
       } else {
         // No buffer and no full </think>: start probing for a partial end tag
         for (let i = 1; i < thinkEndChars.length; i++) {
           const partialEndTag = thinkEndChars.slice(0, i);
           // Partial end-tag match
           if (content.endsWith(partialEndTag)) {
             const think = content.slice(0, -partialEndTag.length);
             endTagBuffer += partialEndTag;
             return {
               reasoningContent: think,
               content: ''
             };
           }
         }
       }

       // No end tag matched at all: still in the think phase
       return {
         reasoningContent: content,
         content: ''
       };
     })();

     // Parse dataset cite
     if (retainDatasetCite) {
       return {
         reasoningContent: parsedThinkReasoningContent,
         content: parsedThinkContent,
         responseContent: parsedThinkContent,
-        finishReason
+        finishReason: buffer_finishReason
       };
     }

     // Buffer strings containing '[' until they exceed maxCiteBufferLength, then return them at once
     const parseCite = (text: string) => {
       // On stream end, return everything that is left
       if (isStreamEnd) {
         const content = citeBuffer + text;
         return {
           content: removeDatasetCiteText(content, false)
         };
       }

       // New content contains '[': initialize the buffer
       if (text.includes('[')) {
         const index = text.indexOf('[');
         const beforeContent = citeBuffer + text.slice(0, index);
         citeBuffer = text.slice(index);

         // beforeContent may be a plain string or one that contained '['
         return {
           content: removeDatasetCiteText(beforeContent, false)
         };
       }
       // Inside the cite buffer: check whether the flush condition is met
       else if (citeBuffer) {
         citeBuffer += text;

         // Flush once the buffer reaches the full cite length or the stream has ended
         if (citeBuffer.length >= maxCiteBufferLength) {
           const content = removeDatasetCiteText(citeBuffer, false);
           citeBuffer = '';
           return {
             content
           };
         } else {
           // Hold the content back for now
           return { content: '' };
         }
       }

       return {
         content: text
       };
     };

     const { content: pasedCiteContent } = parseCite(parsedThinkContent);

     return {
       reasoningContent: parsedThinkReasoningContent,
       content: parsedThinkContent,
       responseContent: pasedCiteContent,
-      finishReason
+      finishReason: buffer_finishReason
     };
-  };
+    })();
+
+    buffer_reasoningContent += data.reasoningContent;
+    buffer_content += data.content;
+
+    return data;
+  };
+
+  const getResponseData = () => {
+    return {
+      finish_reason: buffer_finishReason,
+      usage: buffer_usage,
+      reasoningContent: buffer_reasoningContent,
+      content: buffer_content
+    };
+  };
+
+  const updateFinishReason = (finishReason: CompletionFinishReason) => {
+    buffer_finishReason = finishReason;
+  };

   return {
-    parsePart
+    parsePart,
+    getResponseData,
+    updateFinishReason
   };
 };
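The cite parser holds back any chunk that might be the start of a `[id](CITE)` marker: once a `[` appears, output is buffered until the buffer reaches the full marker length (32 characters) or the stream ends, then flushed through the cite-stripping regex. A reduced sketch of that buffering strategy, without the think-tag and usage bookkeeping:

```typescript
// Hold back potential "[...](...)" cite markers until enough characters
// have arrived to identify them, then strip them on flush.
const MAX_CITE_LEN = 32;

function createCiteFilter() {
  let buffer = '';
  // Same cite-stripping regex as removeDatasetCiteText in the diff.
  const strip = (s: string) => s.replace(/\[([a-f0-9]{24})\](?:\([^\)]*\)?)?/g, '');

  const push = (text: string, isEnd = false): string => {
    if (isEnd) {
      // Stream over: flush everything that is left.
      const out = strip(buffer + text);
      buffer = '';
      return out;
    }
    if (text.includes('[')) {
      // Emit what precedes the '[' and start buffering from it.
      const i = text.indexOf('[');
      const before = buffer + text.slice(0, i);
      buffer = text.slice(i);
      return strip(before);
    }
    if (buffer) {
      buffer += text;
      if (buffer.length >= MAX_CITE_LEN) {
        // Long enough to know whether it was a cite: flush.
        const out = strip(buffer);
        buffer = '';
        return out;
      }
      return ''; // still undecided, hold it back
    }
    return text; // plain text passes straight through
  };

  return { push };
}
```

The trade-off is a bounded delay: at most 32 characters of answer text are withheld while a suspected cite marker is being confirmed.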

View File

@@ -11,40 +11,6 @@ export const beforeUpdateAppFormat = <T extends AppSchema['modules'] | undefined
   nodes: T;
   isPlugin: boolean;
 }) => {
-  if (nodes) {
-    // Check dataset maxTokens
-    if (isPlugin) {
-      let maxTokens = 16000;
-
-      nodes.forEach((item) => {
-        if (
-          item.flowNodeType === FlowNodeTypeEnum.chatNode ||
-          item.flowNodeType === FlowNodeTypeEnum.tools
-        ) {
-          const model =
-            item.inputs.find((item) => item.key === NodeInputKeyEnum.aiModel)?.value || '';
-          const chatModel = getLLMModel(model);
-          const quoteMaxToken = chatModel.quoteMaxToken || 16000;
-
-          maxTokens = Math.max(maxTokens, quoteMaxToken);
-        }
-      });
-
-      nodes.forEach((item) => {
-        if (item.flowNodeType === FlowNodeTypeEnum.datasetSearchNode) {
-          item.inputs.forEach((input) => {
-            if (input.key === NodeInputKeyEnum.datasetMaxTokens) {
-              const val = input.value as number;
-              if (val > maxTokens) {
-                input.value = maxTokens;
-              }
-            }
-          });
-        }
-      });
-    }
-  }
   return {
     nodes
   };

View File

@@ -1,7 +1,7 @@
 import { Client } from '@modelcontextprotocol/sdk/client/index.js';
 import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
 import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
-import { type ToolType } from '@fastgpt/global/core/app/type';
+import { type McpToolConfigType } from '@fastgpt/global/core/app/type';
 import { addLog } from '../../common/system/log';
 import { retryFn } from '@fastgpt/global/common/system/utils';
@@ -41,7 +41,7 @@ export class MCPClient {
    * Get available tools list
    * @returns List of tools
    */
-  public async getTools(): Promise<ToolType[]> {
+  public async getTools(): Promise<McpToolConfigType[]> {
     try {
       const client = await this.getConnection();
       const response = await client.listTools();

View File

@@ -46,6 +46,7 @@ export async function rewriteAppWorkflowToDetail({
   const versionIds = appNodes
     .filter((node) => node.version && Types.ObjectId.isValid(node.version))
     .map((node) => node.version);
+
   if (versionIds.length > 0) {
     const versionDataList = await MongoAppVersion.find(
       {

View File

@@ -61,6 +61,7 @@ const ChatItemSchema = new Schema({
     type: Array,
     default: []
   },
+  errorMsg: String,
   userGoodFeedback: {
     type: String
   },

View File

@@ -34,6 +34,10 @@ const ChatSchema = new Schema({
     ref: AppCollectionName,
     required: true
   },
+  createTime: {
+    type: Date,
+    default: () => new Date()
+  },
   updateTime: {
     type: Date,
     default: () => new Date()

View File

@@ -32,6 +32,7 @@ type Props = {
   content: [UserChatItemType & { dataId?: string }, AIChatItemType & { dataId?: string }];
   metadata?: Record<string, any>;
   durationSeconds: number; //s
+  errorMsg?: string;
 };

 export async function saveChat({
@@ -50,6 +51,7 @@ export async function saveChat({
   outLinkUid,
   content,
   durationSeconds,
+  errorMsg,
   metadata = {}
 }: Props) {
   if (!chatId || chatId === 'NO_RECORD_HISTORIES') return;
@@ -104,7 +106,8 @@ export async function saveChat({
       return {
         ...item,
         [DispatchNodeResponseKeyEnum.nodeResponse]: nodeResponse,
-        durationSeconds
+        durationSeconds,
+        errorMsg
       };
     }
     return item;

View File

@@ -65,8 +65,8 @@ export const filterGPTMessageByMaxContext = async ({
     if (lastMessage.role === ChatCompletionRequestMessageRoleEnum.User) {
       const tokens = await countGptMessagesTokens([lastMessage, ...tmpChats]);
       maxContext -= tokens;
-      // This round of messages exceeds the token budget: drop it
-      if (maxContext < 0) {
+      // This round of messages exceeds the token budget: drop it, but keep at least one round
+      if (maxContext < 0 && chats.length > 0) {
         break;
       }
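The extra `chats.length > 0` guard means trimming never produces an empty history: the newest round is kept even when it alone blows the token budget. The selection loop reduced to its essentials (rounds and token counts are illustrative):

```typescript
// Keep the newest rounds that fit the token budget, but never drop everything:
// at least one round survives even if it alone exceeds the budget.
type Round = { tokens: number; text: string };

function filterByMaxContext(rounds: Round[], maxContext: number): Round[] {
  const kept: Round[] = [];
  for (let i = rounds.length - 1; i >= 0; i--) {
    maxContext -= rounds[i].tokens;
    // Budget blown: stop, unless nothing has been kept yet
    if (maxContext < 0 && kept.length > 0) break;
    kept.unshift(rounds[i]);
  }
  return kept;
}
```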

View File

@@ -2,7 +2,9 @@ import type {
   APIFileListResponse,
   ApiFileReadContentResponse,
   APIFileReadResponse,
-  APIFileServer
+  ApiDatasetDetailResponse,
+  APIFileServer,
+  APIFileItem
 } from '@fastgpt/global/core/dataset/apiDataset';
 import axios, { type Method } from 'axios';
 import { addLog } from '../../../common/system/log';
@@ -89,7 +91,7 @@ export const useApiDatasetRequest = ({ apiServer }: { apiServer: APIFileServer }
       `/v1/file/list`,
       {
         searchKey,
-        parentId
+        parentId: parentId || apiServer.basePath
       },
       'POST'
     );
@@ -144,7 +146,8 @@ export const useApiDatasetRequest = ({ apiServer }: { apiServer: APIFileServer }
     tmbId,
     url: previewUrl,
     relatedId: apiFileId,
-    customPdfParse
+    customPdfParse,
+    getFormatText: true
   });
   return {
     title,
@@ -164,9 +167,34 @@ export const useApiDatasetRequest = ({ apiServer }: { apiServer: APIFileServer }
     return url;
   };

+  const getFileDetail = async ({
+    apiFileId
+  }: {
+    apiFileId: string;
+  }): Promise<ApiDatasetDetailResponse> => {
+    const fileData = await request<ApiDatasetDetailResponse>(
+      `/v1/file/detail`,
+      {
+        id: apiFileId
+      },
+      'GET'
+    );
+
+    if (fileData) {
+      return {
+        id: fileData.id,
+        name: fileData.name,
+        parentId: fileData.parentId === null ? '' : fileData.parentId
+      };
+    }
+
+    return Promise.reject('File not found');
+  };
+
   return {
     getFileContent,
     listFiles,
-    getFilePreviewUrl
+    getFilePreviewUrl,
+    getFileDetail
   };
 };

View File

@@ -0,0 +1,27 @@
import type {
APIFileServer,
YuqueServer,
FeishuServer
} from '@fastgpt/global/core/dataset/apiDataset';
import { useApiDatasetRequest } from './api';
import { useYuqueDatasetRequest } from '../yuqueDataset/api';
import { useFeishuDatasetRequest } from '../feishuDataset/api';
export const getApiDatasetRequest = async (data: {
apiServer?: APIFileServer;
yuqueServer?: YuqueServer;
feishuServer?: FeishuServer;
}) => {
const { apiServer, yuqueServer, feishuServer } = data;
if (apiServer) {
return useApiDatasetRequest({ apiServer });
}
if (yuqueServer) {
return useYuqueDatasetRequest({ yuqueServer });
}
if (feishuServer) {
return useFeishuDatasetRequest({ feishuServer });
}
return Promise.reject('Can not find api dataset server');
};
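`getApiDatasetRequest` is a small dispatcher: the first configured server wins, and an unconfigured call rejects. The same shape in isolation (the handler objects here are placeholders, not the real request clients):

```typescript
// Pick the first configured backend; reject when none is set.
type DatasetHandler = { kind: 'api' | 'yuque' | 'feishu' };

function getDatasetRequest(data: {
  apiServer?: object;
  yuqueServer?: object;
  feishuServer?: object;
}): Promise<DatasetHandler> {
  if (data.apiServer) return Promise.resolve({ kind: 'api' });
  if (data.yuqueServer) return Promise.resolve({ kind: 'yuque' });
  if (data.feishuServer) return Promise.resolve({ kind: 'feishu' });
  return Promise.reject(new Error('Can not find api dataset server'));
}
```

Note the ordering is significant: when several servers are configured, the generic API dataset takes precedence over Yuque and Feishu.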

View File

@@ -1,30 +0,0 @@
import { type ParentIdType } from '@fastgpt/global/common/parentFolder/type';
import { type FeishuServer, type YuqueServer } from '@fastgpt/global/core/dataset/apiDataset';
export enum ProApiDatasetOperationTypeEnum {
LIST = 'list',
READ = 'read',
CONTENT = 'content',
DETAIL = 'detail'
}
export type ProApiDatasetCommonParams = {
feishuServer?: FeishuServer;
yuqueServer?: YuqueServer;
};
export type GetProApiDatasetFileListParams = ProApiDatasetCommonParams & {
parentId?: ParentIdType;
};
export type GetProApiDatasetFileContentParams = ProApiDatasetCommonParams & {
apiFileId: string;
};
export type GetProApiDatasetFilePreviewUrlParams = ProApiDatasetCommonParams & {
apiFileId: string;
};
export type GetProApiDatasetFileDetailParams = ProApiDatasetCommonParams & {
apiFileId: string;
};

View File

@@ -34,15 +34,17 @@ import { getTrainingModeByCollection } from './utils';
 import {
   computeChunkSize,
   computeChunkSplitter,
+  computeParagraphChunkDeep,
   getLLMMaxChunkSize
 } from '@fastgpt/global/core/dataset/training/utils';
+import { DatasetDataIndexTypeEnum } from '@fastgpt/global/core/dataset/data/constants';

 export const createCollectionAndInsertData = async ({
   dataset,
   rawText,
   relatedId,
   createCollectionParams,
-  isQAImport = false,
+  backupParse = false,
   billId,
   session
 }: {
@@ -50,8 +52,8 @@ export const createCollectionAndInsertData = async ({
   rawText: string;
   relatedId?: string;
   createCollectionParams: CreateOneCollectionParams;
-  isQAImport?: boolean;
+  backupParse?: boolean;
   billId?: string;
   session?: ClientSession;
 }) => {
@@ -73,15 +75,30 @@ export const createCollectionAndInsertData = async ({
     llmModel: getLLMModel(dataset.agentModel)
   });
   const chunkSplitter = computeChunkSplitter(createCollectionParams);
+  const paragraphChunkDeep = computeParagraphChunkDeep(createCollectionParams);
+
+  if (trainingType === DatasetCollectionDataProcessModeEnum.qa) {
+    delete createCollectionParams.chunkTriggerType;
+    delete createCollectionParams.chunkTriggerMinSize;
+    delete createCollectionParams.dataEnhanceCollectionName;
+    delete createCollectionParams.imageIndex;
+    delete createCollectionParams.autoIndexes;
+    delete createCollectionParams.indexSize;
+    delete createCollectionParams.qaPrompt;
+  }

   // 1. split chunks
   const chunks = rawText2Chunks({
     rawText,
+    chunkTriggerType: createCollectionParams.chunkTriggerType,
+    chunkTriggerMinSize: createCollectionParams.chunkTriggerMinSize,
     chunkSize,
+    paragraphChunkDeep,
+    paragraphChunkMinSize: createCollectionParams.paragraphChunkMinSize,
     maxSize: getLLMMaxChunkSize(getLLMModel(dataset.agentModel)),
     overlapRatio: trainingType === DatasetCollectionDataProcessModeEnum.chunk ? 0.2 : 0,
     customReg: chunkSplitter ? [chunkSplitter] : [],
-    isQAImport
+    backupParse
   });

   // 2. auth limit
@@ -102,6 +119,7 @@ export const createCollectionAndInsertData = async ({
   const { _id: collectionId } = await createOneCollection({
     ...createCollectionParams,
     trainingType,
+    paragraphChunkDeep,
     chunkSize,
     chunkSplitter,
@@ -157,6 +175,10 @@ export const createCollectionAndInsertData = async ({
     billId: traingBillId,
     data: chunks.map((item, index) => ({
       ...item,
+      indexes: item.indexes?.map((text) => ({
+        type: DatasetDataIndexTypeEnum.custom,
+        text
+      })),
       chunkIndex: index
     })),
     session
@@ -198,46 +220,19 @@ export type CreateOneCollectionParams = CreateDatasetCollectionParams & {
   tmbId: string;
   session?: ClientSession;
 };
-export async function createOneCollection({
-  teamId,
-  tmbId,
-  name,
-  parentId,
-  datasetId,
-  type,
-  createTime,
-  updateTime,
-  hashRawText,
-  rawTextLength,
-  metadata = {},
-  tags,
-  nextSyncTime,
-  fileId,
-  rawLink,
-  externalFileId,
-  externalFileUrl,
-  apiFileId,
-  // Parse settings
-  customPdfParse,
-  imageIndex,
-  autoIndexes,
-  // Chunk settings
-  trainingType,
-  chunkSettingMode,
-  chunkSplitMode,
-  chunkSize,
-  indexSize,
-  chunkSplitter,
-  qaPrompt,
-  session
-}: CreateOneCollectionParams) {
+export async function createOneCollection({ session, ...props }: CreateOneCollectionParams) {
+  const {
+    teamId,
+    parentId,
+    datasetId,
+    tags,
+    fileId,
+    rawLink,
+    externalFileId,
+    externalFileUrl,
+    apiFileId
+  } = props;
   // Create collection tags
   const collectionTags = await createOrGetCollectionTags({ tags, teamId, datasetId, session });
@@ -245,41 +240,18 @@ export async function createOneCollection({
   const [collection] = await MongoDatasetCollection.create(
     [
       {
+        ...props,
         teamId,
-        tmbId,
         parentId: parentId || null,
         datasetId,
-        name,
-        type,
-        rawTextLength,
-        hashRawText,
         tags: collectionTags,
-        metadata,
-        createTime,
-        updateTime,
-        nextSyncTime,
         ...(fileId ? { fileId } : {}),
         ...(rawLink ? { rawLink } : {}),
         ...(externalFileId ? { externalFileId } : {}),
         ...(externalFileUrl ? { externalFileUrl } : {}),
-        ...(apiFileId ? { apiFileId } : {}),
-        // Parse settings
-        customPdfParse,
-        imageIndex,
-        autoIndexes,
-        // Chunk settings
-        trainingType,
-        chunkSettingMode,
-        chunkSplitMode,
-        chunkSize,
-        indexSize,
-        chunkSplitter,
-        qaPrompt
+        ...(apiFileId ? { apiFileId } : {})
       }
     ],
     { session, ordered: true }
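The conditional-spread pattern in this hunk (write a key only when the value is truthy) is what keeps optional source ids out of the stored document. A minimal stand-alone sketch of that pattern, with illustrative field and helper names:

```typescript
// Sketch of the pattern used above: collection-specific source ids are only
// written when present, so the created document carries no undefined fields.
// buildCollectionDoc is a hypothetical helper for illustration.
type SourceIds = {
  fileId?: string;
  rawLink?: string;
  externalFileId?: string;
  externalFileUrl?: string;
  apiFileId?: string;
};

const buildCollectionDoc = (base: { teamId: string; datasetId: string }, ids: SourceIds) => ({
  ...base,
  // Each spread contributes its key only when the id is truthy
  ...(ids.fileId ? { fileId: ids.fileId } : {}),
  ...(ids.rawLink ? { rawLink: ids.rawLink } : {}),
  ...(ids.externalFileId ? { externalFileId: ids.externalFileId } : {}),
  ...(ids.externalFileUrl ? { externalFileUrl: ids.externalFileUrl } : {}),
  ...(ids.apiFileId ? { apiFileId: ids.apiFileId } : {})
});

const doc = buildCollectionDoc({ teamId: 't1', datasetId: 'd1' }, { fileId: 'f1' });
// doc holds teamId, datasetId and fileId only; absent ids add no keys at all.
```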

View File

@@ -34,9 +34,9 @@ const DatasetDataTextSchema = new Schema({
 try {
   DatasetDataTextSchema.index(
-    { teamId: 1, datasetId: 1, fullTextToken: 'text' },
+    { teamId: 1, fullTextToken: 'text' },
     {
-      name: 'teamId_1_datasetId_1_fullTextToken_text',
+      name: 'teamId_1_fullTextToken_text',
       default_language: 'none'
     }
   );

View File

@@ -0,0 +1,208 @@
import type {
APIFileItem,
ApiFileReadContentResponse,
ApiDatasetDetailResponse,
FeishuServer
} from '@fastgpt/global/core/dataset/apiDataset';
import { type ParentIdType } from '@fastgpt/global/common/parentFolder/type';
import axios, { type Method } from 'axios';
import { addLog } from '../../../common/system/log';
type ResponseDataType = {
success: boolean;
message: string;
data: any;
};
type FeishuFileListResponse = {
files: {
token: string;
parent_token: string;
name: string;
type: string;
modified_time: number;
created_time: number;
url: string;
owner_id: string;
}[];
has_more: boolean;
next_page_token: string;
};
const feishuBaseUrl = process.env.FEISHU_BASE_URL || 'https://open.feishu.cn';
export const useFeishuDatasetRequest = ({ feishuServer }: { feishuServer: FeishuServer }) => {
const instance = axios.create({
baseURL: feishuBaseUrl,
timeout: 60000
});
// Request interceptor: attach a tenant access token when missing
instance.interceptors.request.use(async (config) => {
if (!config.headers.Authorization) {
const { data } = await axios.post<{ tenant_access_token: string }>(
`${feishuBaseUrl}/open-apis/auth/v3/tenant_access_token/internal`,
{
app_id: feishuServer.appId,
app_secret: feishuServer.appSecret
}
);
config.headers['Authorization'] = `Bearer ${data.tenant_access_token}`;
config.headers['Content-Type'] = 'application/json; charset=utf-8';
}
return config;
});
/**
 * Check response data
 */
const checkRes = (data: ResponseDataType) => {
  if (data === undefined) {
    addLog.info('feishu dataset data is empty');
    return Promise.reject('Server error');
}
return data.data;
};
const responseError = (err: any) => {
console.log('error->', 'request error', err);
if (!err) {
return Promise.reject({ message: 'Unknown error' });
}
if (typeof err === 'string') {
return Promise.reject({ message: err });
}
if (typeof err.message === 'string') {
return Promise.reject({ message: err.message });
}
if (typeof err.data === 'string') {
return Promise.reject({ message: err.data });
}
if (err?.response?.data) {
return Promise.reject(err?.response?.data);
}
return Promise.reject(err);
};
const request = <T>(url: string, data: any, method: Method): Promise<T> => {
/* Strip undefined values */
for (const key in data) {
if (data[key] === undefined) {
delete data[key];
}
}
return instance
.request({
url,
method,
data: ['POST', 'PUT'].includes(method) ? data : undefined,
params: !['POST', 'PUT'].includes(method) ? data : undefined
})
.then((res) => checkRes(res.data))
.catch((err) => responseError(err));
};
const listFiles = async ({ parentId }: { parentId?: ParentIdType }): Promise<APIFileItem[]> => {
const fetchFiles = async (pageToken?: string): Promise<FeishuFileListResponse['files']> => {
const data = await request<FeishuFileListResponse>(
`/open-apis/drive/v1/files`,
{
folder_token: parentId || feishuServer.folderToken,
page_size: 200,
page_token: pageToken
},
'GET'
);
if (data.has_more) {
const nextFiles = await fetchFiles(data.next_page_token);
return [...data.files, ...nextFiles];
}
return data.files;
};
const allFiles = await fetchFiles();
return allFiles
.filter((file) => ['folder', 'docx'].includes(file.type))
.map((file) => ({
id: file.token,
parentId: file.parent_token,
name: file.name,
type: file.type === 'folder' ? ('folder' as const) : ('file' as const),
hasChild: file.type === 'folder',
updateTime: new Date(file.modified_time * 1000),
createTime: new Date(file.created_time * 1000)
}));
};
const getFileContent = async ({
apiFileId
}: {
apiFileId: string;
}): Promise<ApiFileReadContentResponse> => {
const [{ content }, { document }] = await Promise.all([
request<{ content: string }>(
`/open-apis/docx/v1/documents/${apiFileId}/raw_content`,
{},
'GET'
),
request<{ document: { title: string } }>(
`/open-apis/docx/v1/documents/${apiFileId}`,
{},
'GET'
)
]);
return {
title: document?.title,
rawText: content
};
};
const getFilePreviewUrl = async ({ apiFileId }: { apiFileId: string }): Promise<string> => {
const { metas } = await request<{ metas: { url: string }[] }>(
`/open-apis/drive/v1/metas/batch_query`,
{
request_docs: [
{
doc_token: apiFileId,
doc_type: 'docx'
}
],
with_url: true
},
'POST'
);
return metas[0].url;
};
const getFileDetail = async ({
apiFileId
}: {
apiFileId: string;
}): Promise<ApiDatasetDetailResponse> => {
const { document } = await request<{ document: { title: string } }>(
`/open-apis/docx/v1/documents/${apiFileId}`,
{},
'GET'
);
return {
name: document?.title,
parentId: null,
id: apiFileId
};
};
return {
getFileContent,
listFiles,
getFilePreviewUrl,
getFileDetail
};
};
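`listFiles` above pages through the drive API by recursing on `next_page_token` until `has_more` is false, concatenating each page's files. A self-contained sketch of that accumulation pattern (the page fetcher here is a stub, not the Feishu API):

```typescript
// Minimal sketch of the recursive pagination used by listFiles: keep
// requesting pages while hasMore is set, concatenating results in order.
type Page<T> = { items: T[]; hasMore: boolean; nextToken?: string };

// Test double standing in for the remote API: returns pre-canned pages.
const makeFetchPage = <T>(pages: Page<T>[]) => {
  let i = 0;
  return async (_token?: string): Promise<Page<T>> => pages[i++];
};

const fetchAll = async <T>(fetchPage: (token?: string) => Promise<Page<T>>): Promise<T[]> => {
  const collect = async (token?: string): Promise<T[]> => {
    const page = await fetchPage(token);
    if (page.hasMore) {
      // Recurse with the next page token and prepend this page's items
      return [...page.items, ...(await collect(page.nextToken))];
    }
    return page.items;
  };
  return collect();
};
```

The same shape could be written as a loop; the recursive form mirrors `fetchFiles` in the module above.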

View File

@@ -1,8 +1,10 @@
 import { BucketNameEnum } from '@fastgpt/global/common/file/constants';
-import { DatasetSourceReadTypeEnum } from '@fastgpt/global/core/dataset/constants';
+import {
+  ChunkTriggerConfigTypeEnum,
+  DatasetSourceReadTypeEnum
+} from '@fastgpt/global/core/dataset/constants';
 import { readFileContentFromMongo } from '../../common/file/gridfs/controller';
 import { urlsFetch } from '../../common/string/cheerio';
-import { parseCsvTable2Chunks } from './training/utils';
 import { type TextSplitProps, splitText2Chunks } from '@fastgpt/global/common/string/textSplitter';
 import axios from 'axios';
 import { readRawContentByFileBuffer } from '../../common/file/read/utils';
@@ -12,19 +14,22 @@ import {
   type FeishuServer,
   type YuqueServer
 } from '@fastgpt/global/core/dataset/apiDataset';
-import { useApiDatasetRequest } from './apiDataset/api';
+import { getApiDatasetRequest } from './apiDataset';
+import Papa from 'papaparse';

 export const readFileRawTextByUrl = async ({
   teamId,
   tmbId,
   url,
   customPdfParse,
+  getFormatText,
   relatedId
 }: {
   teamId: string;
   tmbId: string;
   url: string;
   customPdfParse?: boolean;
+  getFormatText?: boolean;
   relatedId: string; // externalFileId / apiFileId
 }) => {
   const response = await axios({
@@ -38,7 +43,7 @@ export const readFileRawTextByUrl = async ({
   const { rawText } = await readRawContentByFileBuffer({
     customPdfParse,
-    isQAImport: false,
+    getFormatText,
     extension,
     teamId,
     tmbId,
@@ -62,21 +67,21 @@ export const readDatasetSourceRawText = async ({
   tmbId,
   type,
   sourceId,
-  isQAImport,
   selector,
   externalFileId,
   apiServer,
   feishuServer,
   yuqueServer,
-  customPdfParse
+  customPdfParse,
+  getFormatText
 }: {
   teamId: string;
   tmbId: string;
   type: DatasetSourceReadTypeEnum;
   sourceId: string;
   customPdfParse?: boolean;
-  isQAImport?: boolean; // csv data
+  getFormatText?: boolean;
   selector?: string; // link selector
   externalFileId?: string; // external file dataset
   apiServer?: APIFileServer; // api dataset
@@ -92,8 +97,8 @@ export const readDatasetSourceRawText = async ({
       tmbId,
       bucketName: BucketNameEnum.dataset,
       fileId: sourceId,
-      isQAImport,
-      customPdfParse
+      customPdfParse,
+      getFormatText
     });
     return {
       title: filename,
@@ -161,38 +166,82 @@ export const readApiServerFileContent = async ({
   title?: string;
   rawText: string;
 }> => {
-  if (apiServer) {
-    return useApiDatasetRequest({ apiServer }).getFileContent({
-      teamId,
-      tmbId,
-      apiFileId,
-      customPdfParse
-    });
-  }
-  if (feishuServer || yuqueServer) {
-    return global.getProApiDatasetFileContent({
-      feishuServer,
-      yuqueServer,
-      apiFileId
-    });
-  }
-
-  return Promise.reject('No apiServer or feishuServer or yuqueServer');
+  return (
+    await getApiDatasetRequest({
+      apiServer,
+      yuqueServer,
+      feishuServer
+    })
+  ).getFileContent({
+    teamId,
+    tmbId,
+    apiFileId,
+    customPdfParse
+  });
 };

 export const rawText2Chunks = ({
   rawText,
-  isQAImport,
+  chunkTriggerType = ChunkTriggerConfigTypeEnum.minSize,
+  chunkTriggerMinSize = 1000,
+  backupParse,
   chunkSize = 512,
   ...splitProps
 }: {
   rawText: string;
-  isQAImport?: boolean;
-} & TextSplitProps) => {
-  if (isQAImport) {
-    const { chunks } = parseCsvTable2Chunks(rawText);
-    return chunks;
+  chunkTriggerType?: ChunkTriggerConfigTypeEnum;
+  chunkTriggerMinSize?: number; // maxSize from agent model, not store
+  backupParse?: boolean;
+  tableParse?: boolean;
+} & TextSplitProps): {
+  q: string;
+  a: string;
+  indexes?: string[];
+}[] => {
+  const parseDatasetBackup2Chunks = (rawText: string) => {
+    const csvArr = Papa.parse(rawText).data as string[][];
+    console.log(rawText, csvArr);
+
+    const chunks = csvArr
+      .slice(1)
+      .map((item) => ({
+        q: item[0] || '',
+        a: item[1] || '',
+        indexes: item.slice(2)
+      }))
+      .filter((item) => item.q || item.a);
+    return {
+      chunks
+    };
+  };
+
+  // Chunk conditions
+  // 1. Max-size mode: only chunk once the text exceeds the maximum (default: model max * 0.7)
+  if (chunkTriggerType === ChunkTriggerConfigTypeEnum.maxSize) {
+    const textLength = rawText.trim().length;
+    const maxSize = splitProps.maxSize ? splitProps.maxSize * 0.7 : 16000;
+    if (textLength < maxSize) {
+      return [
+        {
+          q: rawText,
+          a: ''
+        }
+      ];
+    }
+  }
+  // 2. Min-size mode: only chunk once the text exceeds the (manually chosen) minimum
+  if (chunkTriggerType !== ChunkTriggerConfigTypeEnum.forceChunk) {
+    const textLength = rawText.trim().length;
+    if (textLength < chunkTriggerMinSize) {
+      return [{ q: rawText, a: '' }];
+    }
+  }
+
+  if (backupParse) {
+    return parseDatasetBackup2Chunks(rawText).chunks;
   }

   const { chunks } = splitText2Chunks({
@@ -203,6 +252,7 @@ export const rawText2Chunks = ({
   return chunks.map((item) => ({
     q: item,
-    a: ''
+    a: '',
+    indexes: []
   }));
 };

View File

@@ -1,10 +1,12 @@
 import { getMongoModel, Schema } from '../../common/mongo';
 import {
   ChunkSettingModeEnum,
+  ChunkTriggerConfigTypeEnum,
   DataChunkSplitModeEnum,
   DatasetCollectionDataProcessModeEnum,
   DatasetTypeEnum,
-  DatasetTypeMap
+  DatasetTypeMap,
+  ParagraphChunkAIModeEnum
 } from '@fastgpt/global/core/dataset/constants';
 import {
   TeamCollectionName,
@@ -15,12 +17,22 @@ import type { DatasetSchemaType } from '@fastgpt/global/core/dataset/type.d';
 export const DatasetCollectionName = 'datasets';

 export const ChunkSettings = {
-  imageIndex: Boolean,
-  autoIndexes: Boolean,
   trainingType: {
     type: String,
     enum: Object.values(DatasetCollectionDataProcessModeEnum)
   },
+  chunkTriggerType: {
+    type: String,
+    enum: Object.values(ChunkTriggerConfigTypeEnum)
+  },
+  chunkTriggerMinSize: Number,
+  dataEnhanceCollectionName: Boolean,
+  imageIndex: Boolean,
+  autoIndexes: Boolean,
   chunkSettingMode: {
     type: String,
     enum: Object.values(ChunkSettingModeEnum)
@@ -29,6 +41,12 @@ export const ChunkSettings = {
     type: String,
     enum: Object.values(DataChunkSplitModeEnum)
   },
+  paragraphChunkAIMode: {
+    type: String,
+    enum: Object.values(ParagraphChunkAIModeEnum)
+  },
+  paragraphChunkDeep: Number,
+  paragraphChunkMinSize: Number,
   chunkSize: Number,
   chunkSplitter: String,
@@ -115,9 +133,7 @@ const DatasetSchema = new Schema({
   // abandoned
   autoSync: Boolean,
-  externalReadUrl: {
-    type: String
-  },
+  externalReadUrl: String,
   defaultPermission: Number
 });

View File

@@ -27,6 +27,7 @@ import { type ChatItemType } from '@fastgpt/global/core/chat/type';
 import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { datasetSearchQueryExtension } from './utils';
 import type { RerankModelItemType } from '@fastgpt/global/core/ai/model.d';
+import { addLog } from '../../../common/system/log';

 export type SearchDatasetDataProps = {
   histories: ChatItemType[];
@@ -474,7 +475,7 @@ export async function searchDatasetData(
     ).lean()
   ]);

-  const set = new Map<string, number>();
+  const set = new Set<string>();
   const formatResult = results
     .map((item, index) => {
       const collection = collections.find((col) => String(col._id) === String(item.collectionId));
@@ -507,7 +508,7 @@ export async function searchDatasetData(
     .filter((item) => {
       if (!item) return false;
       if (set.has(item.id)) return false;
-      set.set(item.id, 1);
+      set.add(item.id);
       return true;
     })
     .map((item, index) => {
@@ -544,113 +545,125 @@ export async function searchDatasetData(
   };
 }

-  const searchResults = (
-    await Promise.all(
-      datasetIds.map(async (id) => {
-        return MongoDatasetDataText.aggregate(
-          [
-            {
-              $match: {
-                teamId: new Types.ObjectId(teamId),
-                datasetId: new Types.ObjectId(id),
-                $text: { $search: await jiebaSplit({ text: query }) },
-                ...(filterCollectionIdList
-                  ? {
-                      collectionId: {
-                        $in: filterCollectionIdList.map((id) => new Types.ObjectId(id))
-                      }
-                    }
-                  : {}),
-                ...(forbidCollectionIdList && forbidCollectionIdList.length > 0
-                  ? {
-                      collectionId: {
-                        $nin: forbidCollectionIdList.map((id) => new Types.ObjectId(id))
-                      }
-                    }
-                  : {})
-              }
-            },
-            {
-              $sort: {
-                score: { $meta: 'textScore' }
-              }
-            },
-            {
-              $limit: limit
-            },
-            {
-              $project: {
-                _id: 1,
-                collectionId: 1,
-                dataId: 1,
-                score: { $meta: 'textScore' }
-              }
-            }
-          ],
-          {
-            ...readFromSecondary
-          }
-        );
-      })
-    )
-  ).flat() as (DatasetDataTextSchemaType & { score: number })[];
-
-  // Get data and collections
-  const [dataList, collections] = await Promise.all([
-    MongoDatasetData.find(
-      {
-        _id: { $in: searchResults.map((item) => item.dataId) }
-      },
-      '_id datasetId collectionId updateTime q a chunkIndex indexes',
-      { ...readFromSecondary }
-    ).lean(),
-    MongoDatasetCollection.find(
-      {
-        _id: { $in: searchResults.map((item) => item.collectionId) }
-      },
-      '_id name fileId rawLink apiFileId externalFileId externalFileUrl',
-      { ...readFromSecondary }
-    ).lean()
-  ]);
-
-  return {
-    fullTextRecallResults: searchResults
-      .map((item, index) => {
-        const collection = collections.find(
-          (col) => String(col._id) === String(item.collectionId)
-        );
-        if (!collection) {
-          console.log('Collection is not found', item);
-          return;
-        }
-        const data = dataList.find((data) => String(data._id) === String(item.dataId));
-        if (!data) {
-          console.log('Data is not found', item);
-          return;
-        }
-
-        return {
-          id: String(data._id),
-          datasetId: String(data.datasetId),
-          collectionId: String(data.collectionId),
-          updateTime: data.updateTime,
-          q: data.q,
-          a: data.a,
-          chunkIndex: data.chunkIndex,
-          indexes: data.indexes,
-          ...getCollectionSourceData(collection),
-          score: [
-            {
-              type: SearchScoreTypeEnum.fullText,
-              value: item.score || 0,
-              index
-            }
-          ]
-        };
-      })
-      .filter(Boolean) as SearchDataResponseItemType[],
-    tokenLen: 0
-  };
+  try {
+    const searchResults = (await MongoDatasetDataText.aggregate(
+      [
+        {
+          $match: {
+            teamId: new Types.ObjectId(teamId),
+            $text: { $search: await jiebaSplit({ text: query }) },
+            datasetId: { $in: datasetIds.map((id) => new Types.ObjectId(id)) },
+            ...(filterCollectionIdList
+              ? {
+                  collectionId: {
+                    $in: filterCollectionIdList.map((id) => new Types.ObjectId(id))
+                  }
+                }
+              : {}),
+            ...(forbidCollectionIdList && forbidCollectionIdList.length > 0
+              ? {
+                  collectionId: {
+                    $nin: forbidCollectionIdList.map((id) => new Types.ObjectId(id))
+                  }
+                }
+              : {})
+          }
+        },
+        {
+          $sort: {
+            score: { $meta: 'textScore' }
+          }
+        },
+        {
+          $limit: limit
+        },
+        {
+          $project: {
+            _id: 1,
+            collectionId: 1,
+            dataId: 1,
+            score: { $meta: 'textScore' }
+          }
+        }
+      ],
+      {
+        ...readFromSecondary
+      }
+    )) as (DatasetDataTextSchemaType & { score: number })[];
+
+    // Get data and collections
+    const [dataList, collections] = await Promise.all([
+      MongoDatasetData.find(
+        {
+          _id: { $in: searchResults.map((item) => item.dataId) }
+        },
+        '_id datasetId collectionId updateTime q a chunkIndex indexes',
+        { ...readFromSecondary }
+      ).lean(),
+      MongoDatasetCollection.find(
+        {
+          _id: { $in: searchResults.map((item) => item.collectionId) }
+        },
+        '_id name fileId rawLink apiFileId externalFileId externalFileUrl',
+        { ...readFromSecondary }
+      ).lean()
+    ]);
+
+    return {
+      fullTextRecallResults: searchResults
+        .map((item, index) => {
+          const collection = collections.find(
+            (col) => String(col._id) === String(item.collectionId)
+          );
+          if (!collection) {
+            console.log('Collection is not found', item);
+            return;
+          }
+          const data = dataList.find((data) => String(data._id) === String(item.dataId));
+          if (!data) {
+            console.log('Data is not found', item);
+            return;
+          }
+
+          return {
+            id: String(data._id),
+            datasetId: String(data.datasetId),
+            collectionId: String(data.collectionId),
+            updateTime: data.updateTime,
+            q: data.q,
+            a: data.a,
+            chunkIndex: data.chunkIndex,
+            indexes: data.indexes,
+            ...getCollectionSourceData(collection),
+            score: [
+              {
+                type: SearchScoreTypeEnum.fullText,
+                value: item.score || 0,
+                index
+              }
+            ]
+          };
+        })
+        .filter((item) => {
+          if (!item) return false;
+          return true;
+        })
+        .map((item, index) => {
+          if (!item) return;
+          return {
+            ...item,
+            score: item.score.map((item) => ({ ...item, index }))
+          };
+        }) as SearchDataResponseItemType[],
+      tokenLen: 0
+    };
+  } catch (error) {
+    addLog.error('Full text search error', error);
+    return {
+      fullTextRecallResults: [],
+      tokenLen: 0
+    };
+  }
 };
 const multiQueryRecall = async ({
   embeddingLimit,
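The tail of the full-text recall rewrite drops duplicate ids with a Set and renumbers each surviving result's score index. Extracted as a pure helper (types simplified from the diff):

```typescript
// Dedupe-and-reindex step from the hunk above: filter out repeated ids with
// a Set (the diff replaces a Map that was used as a set), then renumber the
// score entries to match each item's position after filtering.
type Scored = { id: string; score: { value: number; index: number }[] };

const dedupeAndReindex = (results: Scored[]): Scored[] => {
  const seen = new Set<string>();
  return results
    .filter((item) => {
      if (seen.has(item.id)) return false;
      seen.add(item.id);
      return true;
    })
    .map((item, index) => ({
      ...item,
      // Each score entry's index now reflects the deduplicated ordering
      score: item.score.map((s) => ({ ...s, index }))
    }));
};
```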

View File

@@ -1,6 +1,5 @@
 export enum ImportDataSourceEnum {
   fileLocal = 'fileLocal',
   fileLink = 'fileLink',
-  fileCustom = 'fileCustom',
-  tableLocal = 'tableLocal'
+  fileCustom = 'fileCustom'
 }

View File

@@ -1,16 +0,0 @@
import Papa from 'papaparse';
export const parseCsvTable2Chunks = (rawText: string) => {
const csvArr = Papa.parse(rawText).data as string[][];
const chunks = csvArr
.map((item) => ({
q: item[0] || '',
a: item[1] || ''
}))
.filter((item) => item.q || item.a);
return {
chunks
};
};

View File

@@ -0,0 +1,304 @@
import type {
APIFileItem,
ApiFileReadContentResponse,
YuqueServer,
ApiDatasetDetailResponse
} from '@fastgpt/global/core/dataset/apiDataset';
import axios, { type Method } from 'axios';
import { addLog } from '../../../common/system/log';
import { type ParentIdType } from '@fastgpt/global/common/parentFolder/type';
type ResponseDataType = {
success: boolean;
message: string;
data: any;
};
type YuqueRepoListResponse = {
id: string;
name: string;
title: string;
book_id: string | null;
type: string;
updated_at: Date;
created_at: Date;
slug?: string;
}[];
type YuqueTocListResponse = {
uuid: string;
type: string;
title: string;
url: string;
slug: string;
id: string;
doc_id: string;
prev_uuid: string;
sibling_uuid: string;
child_uuid: string;
parent_uuid: string;
}[];
const yuqueBaseUrl = process.env.YUQUE_DATASET_BASE_URL || 'https://www.yuque.com';
export const useYuqueDatasetRequest = ({ yuqueServer }: { yuqueServer: YuqueServer }) => {
const instance = axios.create({
baseURL: yuqueBaseUrl,
timeout: 60000, // request timeout (ms)
headers: {
'X-Auth-Token': yuqueServer.token
}
});
/**
 * Check response data
 */
const checkRes = (data: ResponseDataType) => {
  if (data === undefined) {
    addLog.info('yuque dataset data is empty');
    return Promise.reject('Server error');
}
return data.data;
};
const responseError = (err: any) => {
console.log('error->', 'request error', err);
if (!err) {
return Promise.reject({ message: 'Unknown error' });
}
if (typeof err === 'string') {
return Promise.reject({ message: err });
}
if (typeof err.message === 'string') {
return Promise.reject({ message: err.message });
}
if (typeof err.data === 'string') {
return Promise.reject({ message: err.data });
}
if (err?.response?.data) {
return Promise.reject(err?.response?.data);
}
return Promise.reject(err);
};
const request = <T>(url: string, data: any, method: Method): Promise<T> => {
/* Strip undefined values */
for (const key in data) {
if (data[key] === undefined) {
delete data[key];
}
}
return instance
.request({
url,
method,
data: ['POST', 'PUT'].includes(method) ? data : undefined,
params: !['POST', 'PUT'].includes(method) ? data : undefined
})
.then((res) => checkRes(res.data))
.catch((err) => responseError(err));
};
const listFiles = async ({ parentId }: { parentId?: ParentIdType }) => {
// Default parentId to the configured base path
if (!parentId) {
if (yuqueServer.basePath) parentId = yuqueServer.basePath;
}
let files: APIFileItem[] = [];
if (!parentId) {
const limit = 100;
let offset = 0;
let allData: YuqueRepoListResponse = [];
while (true) {
const data = await request<YuqueRepoListResponse>(
`/api/v2/groups/${yuqueServer.userId}/repos`,
{
offset,
limit
},
'GET'
);
if (!data || data.length === 0) break;
allData = [...allData, ...data];
if (data.length < limit) break;
offset += limit;
}
files = allData.map((item) => {
return {
id: item.id,
name: item.name,
parentId: null,
type: 'folder',
updateTime: item.updated_at,
createTime: item.created_at,
hasChild: true,
slug: item.slug
};
});
} else {
if (typeof parentId === 'number') {
const data = await request<YuqueTocListResponse>(
`/api/v2/repos/${parentId}/toc`,
{},
'GET'
);
return data
.filter((item) => !item.parent_uuid && item.type !== 'LINK')
.map((item) => ({
id: `${parentId}-${item.id}-${item.uuid}`,
name: item.title,
parentId: item.parent_uuid,
type: item.type === 'TITLE' ? ('folder' as const) : ('file' as const),
updateTime: new Date(),
createTime: new Date(),
uuid: item.uuid,
slug: item.slug,
hasChild: !!item.child_uuid
}));
} else {
const [repoId, uuid, parentUuid] = parentId.split(/-(.*?)-(.*)/);
const data = await request<YuqueTocListResponse>(`/api/v2/repos/${repoId}/toc`, {}, 'GET');
return data
.filter((item) => item.parent_uuid === parentUuid)
.map((item) => ({
id: `${repoId}-${item.id}-${item.uuid}`,
name: item.title,
parentId: item.parent_uuid,
type: item.type === 'TITLE' ? ('folder' as const) : ('file' as const),
updateTime: new Date(),
createTime: new Date(),
uuid: item.uuid,
slug: item.slug,
hasChild: !!item.child_uuid
}));
}
}
if (!Array.isArray(files)) {
return Promise.reject('Invalid file list format');
}
if (files.some((file) => !file.id || !file.name || typeof file.type === 'undefined')) {
return Promise.reject('Invalid file data format');
}
return files;
};
const getFileContent = async ({
apiFileId
}: {
apiFileId: string;
}): Promise<ApiFileReadContentResponse> => {
const [parentId, fileId] = apiFileId.split(/-(.*?)-(.*)/);
const data = await request<{ title: string; body: string }>(
`/api/v2/repos/${parentId}/docs/${fileId}`,
{},
'GET'
);
return {
title: data.title,
rawText: data.body
};
};
const getFilePreviewUrl = async ({ apiFileId }: { apiFileId: string }) => {
const [parentId, fileId] = apiFileId.split(/-(.*?)-(.*)/);
const { slug: parentSlug } = await request<{ slug: string }>(
`/api/v2/repos/${parentId}`,
{ id: apiFileId },
'GET'
);
const { slug: fileSlug } = await request<{ slug: string }>(
`/api/v2/repos/${parentId}/docs/${fileId}`,
{},
'GET'
);
return `${yuqueBaseUrl}/${yuqueServer.userId}/${parentSlug}/${fileSlug}`;
};
const getFileDetail = async ({
apiFileId
}: {
apiFileId: string;
}): Promise<ApiDatasetDetailResponse> => {
// If the id is numeric, treat it as a repo id and look it up in the repo list
if (typeof apiFileId === 'number' || !isNaN(Number(apiFileId))) {
const limit = 100;
let offset = 0;
let allData: YuqueRepoListResponse = [];
while (true) {
const data = await request<YuqueRepoListResponse>(
`/api/v2/groups/${yuqueServer.userId}/repos`,
{
offset,
limit
},
'GET'
);
if (!data || data.length === 0) break;
allData = [...allData, ...data];
if (data.length < limit) break;
offset += limit;
}
const file = allData.find((item) => Number(item.id) === Number(apiFileId));
if (!file) {
return Promise.reject('File not found');
}
return {
id: file.id,
name: file.name,
parentId: null
};
} else {
const [repoId, parentUuid, fileId] = apiFileId.split(/-(.*?)-(.*)/);
const data = await request<YuqueTocListResponse>(`/api/v2/repos/${repoId}/toc`, {}, 'GET');
const file = data.find((item) => item.uuid === fileId);
if (!file) {
return Promise.reject('File not found');
}
const parentfile = data.find((item) => item.uuid === file.parent_uuid);
const parentId = `${repoId}-${parentfile?.id}-${parentfile?.uuid}`;
// If parent_uuid is empty, the doc sits at the repo root, so return the repo as its parent
if (file.parent_uuid) {
return {
id: file.id,
name: file.title,
parentId: parentId
};
} else {
return {
id: file.id,
name: file.title,
parentId: repoId
};
}
}
};
return {
getFileContent,
listFiles,
getFilePreviewUrl,
getFileDetail
};
};
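The composite ids this module passes around are `${repoId}-${docId}-${uuid}` strings, taken apart with a capture-group split so only the first two `-` separators count and the uuid may itself contain `-` characters. A sketch of the round-trip (helper names are illustrative):

```typescript
// Composite-id round-trip used by the Yuque module above.
const buildId = (repoId: string, docId: string, uuid: string) => `${repoId}-${docId}-${uuid}`;

const parseId = (apiFileId: string) => {
  // String.split with capture groups yields [prefix, group1, group2, '']:
  // the lazy first group stops at the second '-', the greedy second group
  // swallows the rest, so uuids containing '-' survive intact.
  const [repoId, docId, uuid] = apiFileId.split(/-(.*?)-(.*)/);
  return { repoId, docId, uuid };
};
```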

View File

@@ -223,28 +223,29 @@ const toolChoice = async (props: ActionProps) => {
     }
   ];

+  const body = llmCompletionsBodyFormat(
+    {
+      stream: true,
+      model: extractModel.model,
+      temperature: 0.01,
+      messages: filterMessages,
+      tools,
+      tool_choice: { type: 'function', function: { name: agentFunName } }
+    },
+    extractModel
+  );
   const { response } = await createChatCompletion({
-    body: llmCompletionsBodyFormat(
-      {
-        stream: true,
-        model: extractModel.model,
-        temperature: 0.01,
-        messages: filterMessages,
-        tools,
-        tool_choice: { type: 'function', function: { name: agentFunName } }
-      },
-      extractModel
-    ),
+    body,
     userKey: externalProvider.openaiAccount
   });
-  const { toolCalls, usage } = await formatLLMResponse(response);
+  const { text, toolCalls, usage } = await formatLLMResponse(response);

   const arg: Record<string, any> = (() => {
     try {
       return json5.parse(toolCalls?.[0]?.function?.arguments || '');
     } catch (error) {
-      console.log(agentFunction.parameters);
-      console.log(toolCalls?.[0]?.function);
+      console.log('body', body);
+      console.log('AI response', text, toolCalls?.[0]?.function);
       console.log('Your model may not support tool_call', error);
       return {};
     }
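The catch block above exists because tool-call arguments come back as model-generated text and may not parse. A dependency-free sketch of that fallback (the source uses json5, which is more forgiving than the `JSON.parse` used here):

```typescript
// Parse model-produced tool-call arguments, falling back to an empty object
// on malformed input — the same shape as the hunk above, minus json5.
const parseToolArgs = (raw: string | undefined): Record<string, any> => {
  try {
    return JSON.parse(raw || '');
  } catch (error) {
    // In the diff this point logs the request body and the raw AI response,
    // which usually indicates the model does not support tool_call.
    return {};
  }
};
```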

View File

@@ -1,13 +1,14 @@
import { createChatCompletion } from '../../../../ai/config'; import { createChatCompletion } from '../../../../ai/config';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../../chat/utils'; import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../../chat/utils';
import { import type {
type ChatCompletion, ChatCompletion,
type StreamChatType, StreamChatType,
type ChatCompletionMessageParam, ChatCompletionMessageParam,
type ChatCompletionCreateParams, ChatCompletionCreateParams,
type ChatCompletionMessageFunctionCall, ChatCompletionMessageFunctionCall,
type ChatCompletionFunctionMessageParam, ChatCompletionFunctionMessageParam,
type ChatCompletionAssistantMessageParam ChatCompletionAssistantMessageParam,
CompletionFinishReason
} from '@fastgpt/global/core/ai/type.d'; } from '@fastgpt/global/core/ai/type.d';
import { type NextApiResponse } from 'next'; import { type NextApiResponse } from 'next';
import { responseWriteController } from '../../../../../common/response'; import { responseWriteController } from '../../../../../common/response';
@@ -259,14 +260,15 @@ export const runToolWithFunctionCall = async (
} }
}); });
let { answer, functionCalls, inputTokens, outputTokens } = await (async () => { let { answer, functionCalls, inputTokens, outputTokens, finish_reason } = await (async () => {
if (isStreamResponse) { if (isStreamResponse) {
if (!res || res.closed) { if (!res || res.closed) {
return { return {
answer: '', answer: '',
functionCalls: [], functionCalls: [],
inputTokens: 0, inputTokens: 0,
outputTokens: 0 outputTokens: 0,
finish_reason: 'close' as const
}; };
} }
const result = await streamResponse({ const result = await streamResponse({
@@ -281,10 +283,12 @@ export const runToolWithFunctionCall = async (
answer: result.answer, answer: result.answer,
functionCalls: result.functionCalls, functionCalls: result.functionCalls,
inputTokens: result.usage.prompt_tokens, inputTokens: result.usage.prompt_tokens,
outputTokens: result.usage.completion_tokens outputTokens: result.usage.completion_tokens,
finish_reason: result.finish_reason
}; };
} else { } else {
const result = aiResponse as ChatCompletion; const result = aiResponse as ChatCompletion;
const finish_reason = result.choices?.[0]?.finish_reason as CompletionFinishReason;
const function_call = result.choices?.[0]?.message?.function_call; const function_call = result.choices?.[0]?.message?.function_call;
 const usage = result.usage;
@@ -315,7 +319,8 @@ export const runToolWithFunctionCall = async (
           answer,
           functionCalls: toolCalls,
           inputTokens: usage?.prompt_tokens,
-          outputTokens: usage?.completion_tokens
+          outputTokens: usage?.completion_tokens,
+          finish_reason
         };
       }
     })();
@@ -481,7 +486,8 @@ export const runToolWithFunctionCall = async (
         completeMessages,
         assistantResponses: toolNodeAssistants,
         runTimes,
-        toolWorkflowInteractiveResponse
+        toolWorkflowInteractiveResponse,
+        finish_reason
       };
     }
@@ -495,7 +501,8 @@ export const runToolWithFunctionCall = async (
         toolNodeInputTokens,
         toolNodeOutputTokens,
         assistantResponses: toolNodeAssistants,
-        runTimes
+        runTimes,
+        finish_reason
       }
     );
   } else {
@@ -523,7 +530,8 @@ export const runToolWithFunctionCall = async (
         : outputTokens,
       completeMessages,
       assistantResponses: [...assistantResponses, ...toolNodeAssistant.value],
-      runTimes: (response?.runTimes || 0) + 1
+      runTimes: (response?.runTimes || 0) + 1,
+      finish_reason
     };
   }
 };
@@ -546,28 +554,25 @@ async function streamResponse({
     readStream: stream
   });

-  let textAnswer = '';
   let functionCalls: ChatCompletionMessageFunctionCall[] = [];
   let functionId = getNanoid();
-  let usage = getLLMDefaultUsage();

-  const { parsePart } = parseLLMStreamResponse();
+  const { parsePart, getResponseData, updateFinishReason } = parseLLMStreamResponse();

   for await (const part of stream) {
-    usage = part.usage || usage;
     if (res.closed) {
       stream.controller?.abort();
+      updateFinishReason('close');
       break;
     }

-    const { content: toolChoiceContent, responseContent } = parsePart({
+    const { responseContent } = parsePart({
       part,
       parseThinkTag: false,
       retainDatasetCite
     });
     const responseChoice = part.choices?.[0]?.delta;
-    textAnswer += toolChoiceContent;

     if (responseContent) {
       workflowStreamResponse?.({
@@ -577,7 +582,7 @@ async function streamResponse({
           text: responseContent
         })
       });
-    } else if (responseChoice.function_call) {
+    } else if (responseChoice?.function_call) {
       const functionCall: {
         arguments?: string;
         name?: string;
@@ -640,5 +645,7 @@ async function streamResponse({
     }
   }

-  return { answer: textAnswer, functionCalls, usage };
+  const { content, finish_reason, usage } = getResponseData();
+  return { answer: content, functionCalls, finish_reason, usage };
 }
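The hunks above replace the per-call accumulator variables (`textAnswer`, `usage`, the manual `finish_reason` bookkeeping) with state held inside `parseLLMStreamResponse`, read back once via `getResponseData`. The shape of that refactor can be sketched as follows; the names and chunk types here are a hypothetical simplification, not FastGPT's actual implementation:

```typescript
// Hypothetical sketch of the accumulator pattern: parsePart folds each stream
// chunk into closure state; getResponseData returns the final totals once.
type StreamPart = {
  choices?: { delta?: { content?: string }; finish_reason?: string | null }[];
  usage?: { prompt_tokens: number; completion_tokens: number };
};

const createStreamParser = () => {
  let content = '';
  let finish_reason: string | null = null;
  let usage = { prompt_tokens: 0, completion_tokens: 0 };

  const parsePart = (part: StreamPart) => {
    const delta = part.choices?.[0]?.delta?.content ?? '';
    content += delta;
    // The last non-null finish_reason and the last usage chunk win
    finish_reason = part.choices?.[0]?.finish_reason ?? finish_reason;
    usage = part.usage ?? usage;
    return { responseContent: delta };
  };

  // Callers can force a reason, e.g. when the client connection closes
  const updateFinishReason = (reason: string) => {
    finish_reason = reason;
  };

  const getResponseData = () => ({ content, finish_reason, usage });

  return { parsePart, updateFinishReason, getResponseData };
};
```

With this shape, every `streamResponse` variant only forwards chunks and reads one summary at the end, instead of each copy maintaining its own `textAnswer`/`usage`/`finish_reason` bookkeeping.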

View File

@@ -220,7 +220,8 @@ export const runToolWithPromptCall = async (
   const max_tokens = computedMaxToken({
     model: toolModel,
-    maxToken
+    maxToken,
+    min: 100
   });
   const filterMessages = await filterGPTMessageByMaxContext({
     messages,
@@ -592,28 +593,22 @@ async function streamResponse({
   let startResponseWrite = false;
   let answer = '';
-  let reasoning = '';
-  let finish_reason: CompletionFinishReason = null;
-  let usage = getLLMDefaultUsage();

-  const { parsePart } = parseLLMStreamResponse();
+  const { parsePart, getResponseData, updateFinishReason } = parseLLMStreamResponse();

   for await (const part of stream) {
-    usage = part.usage || usage;
     if (res.closed) {
       stream.controller?.abort();
-      finish_reason = 'close';
+      updateFinishReason('close');
       break;
     }

-    const { reasoningContent, content, responseContent, finishReason } = parsePart({
+    const { reasoningContent, content, responseContent } = parsePart({
       part,
       parseThinkTag: aiChatReasoning,
       retainDatasetCite
     });
-    finish_reason = finish_reason || finishReason;
     answer += content;
-    reasoning += reasoningContent;

     // Reasoning response
     if (aiChatReasoning && reasoningContent) {
@@ -658,7 +653,9 @@ async function streamResponse({
     }
   }

-  return { answer, reasoning, finish_reason, usage };
+  const { reasoningContent, content, finish_reason, usage } = getResponseData();
+  return { answer: content, reasoning: reasoningContent, finish_reason, usage };
 }

 const parseAnswer = (

View File

@@ -7,17 +7,13 @@ import {
   type ChatCompletionToolMessageParam,
   type ChatCompletionMessageParam,
   type ChatCompletionTool,
-  type ChatCompletionAssistantMessageParam,
   type CompletionFinishReason
 } from '@fastgpt/global/core/ai/type';
 import { type NextApiResponse } from 'next';
 import { responseWriteController } from '../../../../../common/response';
 import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
 import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
-import {
-  ChatCompletionRequestMessageRoleEnum,
-  getLLMDefaultUsage
-} from '@fastgpt/global/core/ai/constants';
+import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
 import { dispatchWorkFlow } from '../../index';
 import {
   type DispatchToolModuleProps,
@@ -254,7 +250,8 @@ export const runToolWithToolChoice = async (
   const max_tokens = computedMaxToken({
     model: toolModel,
-    maxToken
+    maxToken,
+    min: 100
   });

   // Filter histories by maxToken
@@ -319,97 +316,101 @@ export const runToolWithToolChoice = async (
     }
   });

-  let { answer, toolCalls, finish_reason, inputTokens, outputTokens } = await (async () => {
+  let { reasoningContent, answer, toolCalls, finish_reason, inputTokens, outputTokens } =
+    await (async () => {
       if (isStreamResponse) {
         if (!res || res.closed) {
           return {
+            reasoningContent: '',
             answer: '',
             toolCalls: [],
             finish_reason: 'close' as const,
             inputTokens: 0,
             outputTokens: 0
           };
         }

         const result = await streamResponse({
           res,
           workflowStreamResponse,
           toolNodes,
           stream: aiResponse,
           aiChatReasoning,
           retainDatasetCite
         });

         return {
+          reasoningContent: result.reasoningContent,
           answer: result.answer,
           toolCalls: result.toolCalls,
           finish_reason: result.finish_reason,
           inputTokens: result.usage.prompt_tokens,
           outputTokens: result.usage.completion_tokens
         };
       } else {
         const result = aiResponse as ChatCompletion;
         const finish_reason = result.choices?.[0]?.finish_reason as CompletionFinishReason;
         const calls = result.choices?.[0]?.message?.tool_calls || [];
         const answer = result.choices?.[0]?.message?.content || '';
         // @ts-ignore
         const reasoningContent = result.choices?.[0]?.message?.reasoning_content || '';
         const usage = result.usage;

         if (aiChatReasoning && reasoningContent) {
           workflowStreamResponse?.({
             event: SseResponseEventEnum.fastAnswer,
             data: textAdaptGptResponse({
               reasoning_content: removeDatasetCiteText(reasoningContent, retainDatasetCite)
             })
           });
         }

         // Format toolCalls
         const toolCalls = calls.map((tool) => {
           const toolNode = toolNodes.find((item) => item.nodeId === tool.function?.name);

           // For models that do not support stream mode, a supplementary response must be sent to the client here
           workflowStreamResponse?.({
             event: SseResponseEventEnum.toolCall,
             data: {
               tool: {
                 id: tool.id,
                 toolName: toolNode?.name || '',
                 toolAvatar: toolNode?.avatar || '',
                 functionName: tool.function.name,
                 params: tool.function?.arguments ?? '',
                 response: ''
               }
             }
           });

           return {
             ...tool,
             toolName: toolNode?.name || '',
             toolAvatar: toolNode?.avatar || ''
           };
         });

         if (answer) {
           workflowStreamResponse?.({
             event: SseResponseEventEnum.fastAnswer,
             data: textAdaptGptResponse({
               text: removeDatasetCiteText(answer, retainDatasetCite)
             })
           });
         }

         return {
+          reasoningContent: (reasoningContent as string) || '',
           answer,
           toolCalls: toolCalls,
           finish_reason,
           inputTokens: usage?.prompt_tokens,
           outputTokens: usage?.completion_tokens
         };
       }
     })();

-  if (!answer && toolCalls.length === 0) {
+  if (!answer && !reasoningContent && toolCalls.length === 0) {
     return Promise.reject(getEmptyResponseTip());
   }
@@ -501,12 +502,13 @@ export const runToolWithToolChoice = async (
   if (toolCalls.length > 0) {
     // Run the tool, combine its results, and perform another round of AI calls
-    const assistantToolMsgParams: ChatCompletionAssistantMessageParam[] = [
-      ...(answer
+    const assistantToolMsgParams: ChatCompletionMessageParam[] = [
+      ...(answer || reasoningContent
         ? [
             {
               role: ChatCompletionRequestMessageRoleEnum.Assistant as 'assistant',
-              content: answer
+              content: answer,
+              reasoning_text: reasoningContent
             }
           ]
         : []),
@@ -627,9 +629,10 @@ export const runToolWithToolChoice = async (
     );
   } else {
     // No tool is invoked, indicating that the process is over
-    const gptAssistantResponse: ChatCompletionAssistantMessageParam = {
+    const gptAssistantResponse: ChatCompletionMessageParam = {
       role: ChatCompletionRequestMessageRoleEnum.Assistant,
-      content: answer
+      content: answer,
+      reasoning_text: reasoningContent
     };
     const completeMessages = filterMessages.concat(gptAssistantResponse);
     inputTokens = inputTokens || (await countGptMessagesTokens(requestMessages, tools));
@@ -671,34 +674,23 @@ async function streamResponse({
     readStream: stream
   });

-  let textAnswer = '';
   let callingTool: { name: string; arguments: string } | null = null;
   let toolCalls: ChatCompletionMessageToolCall[] = [];
-  let finish_reason: CompletionFinishReason = null;
-  let usage = getLLMDefaultUsage();

-  const { parsePart } = parseLLMStreamResponse();
+  const { parsePart, getResponseData, updateFinishReason } = parseLLMStreamResponse();

   for await (const part of stream) {
-    usage = part.usage || usage;
     if (res.closed) {
       stream.controller?.abort();
-      finish_reason = 'close';
+      updateFinishReason('close');
       break;
     }

-    const {
-      reasoningContent,
-      content: toolChoiceContent,
-      responseContent,
-      finishReason
-    } = parsePart({
+    const { reasoningContent, responseContent } = parsePart({
       part,
       parseThinkTag: true,
       retainDatasetCite
     });
-    textAnswer += toolChoiceContent;
-    finish_reason = finishReason || finish_reason;

     const responseChoice = part.choices?.[0]?.delta;
@@ -727,9 +719,10 @@ async function streamResponse({
         const index = toolCall.index ?? i;

         // Call new tool
-        if (toolCall.id || callingTool) {
-          // An id indicates a new tool call
-          if (toolCall.id) {
+        const hasNewTool = toolCall?.function?.name || callingTool;
+        if (hasNewTool) {
+          // A function name indicates a new tool call
+          if (toolCall?.function?.name) {
             callingTool = {
               name: toolCall.function?.name || '',
               arguments: toolCall.function?.arguments || ''
@@ -799,5 +792,13 @@ async function streamResponse({
     }
   }

-  return { answer: textAnswer, toolCalls: toolCalls.filter(Boolean), finish_reason, usage };
+  const { reasoningContent, content, finish_reason, usage } = getResponseData();
+  return {
+    reasoningContent,
+    answer: content,
+    toolCalls: toolCalls.filter(Boolean),
+    finish_reason,
+    usage
+  };
 }
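The tool-call loop above starts a new call when a delta carries a `function.name` and otherwise appends the chunk's `arguments` to the call in progress; keying on the name instead of `id` matters because some OpenAI-compatible providers omit `id` on the first delta, which is exactly what the `hasNewTool` change accounts for. The merge step in isolation looks roughly like this (assumed delta shapes, simplified from the hunk):

```typescript
// Sketch: OpenAI-style streams split one tool call across many deltas; a
// function name starts a new call, later chunks only extend its arguments.
type ToolCallDelta = {
  index?: number;
  id?: string;
  function?: { name?: string; arguments?: string };
};
type MergedToolCall = { id: string; name: string; arguments: string };

const mergeToolCallDeltas = (deltas: ToolCallDelta[]): MergedToolCall[] => {
  const calls: MergedToolCall[] = [];
  for (const delta of deltas) {
    if (delta.function?.name) {
      // A function name marks the start of a new tool call
      calls.push({
        id: delta.id ?? '',
        name: delta.function.name,
        arguments: delta.function.arguments ?? ''
      });
    } else if (calls.length > 0) {
      // Otherwise the chunk extends the arguments of the current call
      calls[calls.length - 1].arguments += delta.function?.arguments ?? '';
    }
  }
  return calls;
};
```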

View File

@@ -556,30 +556,21 @@ async function streamResponse({
     res,
     readStream: stream
   });
-  let answer = '';
-  let reasoning = '';
-  let finish_reason: CompletionFinishReason = null;
-  let usage: CompletionUsage = getLLMDefaultUsage();

-  const { parsePart } = parseLLMStreamResponse();
+  const { parsePart, getResponseData, updateFinishReason } = parseLLMStreamResponse();

   for await (const part of stream) {
-    usage = part.usage || usage;
     if (res.closed) {
       stream.controller?.abort();
-      finish_reason = 'close';
+      updateFinishReason('close');
       break;
     }

-    const { reasoningContent, content, responseContent, finishReason } = parsePart({
+    const { reasoningContent, responseContent } = parsePart({
       part,
       parseThinkTag,
       retainDatasetCite
     });
-    finish_reason = finish_reason || finishReason;
-    answer += content;
-    reasoning += reasoningContent;

     if (aiChatReasoning && reasoningContent) {
       workflowStreamResponse?.({
@@ -602,5 +593,7 @@ async function streamResponse({
     }
   }

+  const { reasoningContent: reasoning, content: answer, finish_reason, usage } = getResponseData();
   return { answer, reasoning, finish_reason, usage };
 }

View File

@@ -49,8 +49,6 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
     variables: customVariables
   });

-  console.log(runResult);
-
   if (runResult.success) {
     return {
       [NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,

View File

@@ -268,7 +268,7 @@ export async function dispatchDatasetSearch(
     nodeDispatchUsages,
     [DispatchNodeResponseKeyEnum.toolResponses]: {
       prompt: getDatasetSearchToolResponsePrompt(),
-      quotes: searchRes.map((item) => ({
+      cites: searchRes.map((item) => ({
         id: item.id,
         sourceName: item.sourceName,
         updateTime: item.updateTime,

View File

@@ -211,12 +211,12 @@ export const getFileContentFromLinks = async ({
   // Read file
   const { rawText } = await readRawContentByFileBuffer({
     extension,
-    isQAImport: false,
     teamId,
     tmbId,
     buffer,
     encoding,
-    customPdfParse
+    customPdfParse,
+    getFormatText: true
   });

   // Add to buffer

View File

@@ -28,7 +28,7 @@
     "lodash": "^4.17.21",
     "mammoth": "^1.6.0",
     "mongoose": "^8.10.1",
-    "multer": "1.4.5-lts.1",
+    "multer": "2.0.0",
     "mysql2": "^3.11.3",
     "next": "14.2.28",
     "nextjs-cors": "^2.2.0",

View File

@@ -20,6 +20,7 @@ import { type MemberGroupSchemaType } from '@fastgpt/global/support/permission/m
 import { type TeamMemberSchema } from '@fastgpt/global/support/user/team/type';
 import { type OrgSchemaType } from '@fastgpt/global/support/user/team/org/type';
 import { getOrgIdSetWithParentByTmbId } from './org/controllers';
+import { authUserSession } from '../user/session';

 /** get resource permission for a team member
  * If there is no permission for the team member, it will return undefined
@@ -213,51 +214,6 @@ export const delResourcePermission = ({
 };

 /* The code below is pending migration */

-/* create token */
-export function createJWT(user: {
-  _id?: string;
-  team?: { teamId?: string; tmbId: string };
-  isRoot?: boolean;
-}) {
-  const key = process.env.TOKEN_KEY as string;
-  const token = jwt.sign(
-    {
-      userId: String(user._id),
-      teamId: String(user.team?.teamId),
-      tmbId: String(user.team?.tmbId),
-      isRoot: user.isRoot,
-      exp: Math.floor(Date.now() / 1000) + 60 * 60 * 24 * 7
-    },
-    key
-  );
-  return token;
-}
-
-// auth token
-export function authJWT(token: string) {
-  return new Promise<{
-    userId: string;
-    teamId: string;
-    tmbId: string;
-    isRoot: boolean;
-  }>((resolve, reject) => {
-    const key = process.env.TOKEN_KEY as string;
-    jwt.verify(token, key, (err, decoded: any) => {
-      if (err || !decoded?.userId) {
-        reject(ERROR_ENUM.unAuthorization);
-        return;
-      }
-      resolve({
-        userId: decoded.userId,
-        teamId: decoded.teamId || '',
-        tmbId: decoded.tmbId,
-        isRoot: decoded.isRoot
-      });
-    });
-  });
-}

 export async function parseHeaderCert({
   req,
@@ -275,7 +231,7 @@ export async function parseHeaderCert({
     return Promise.reject(ERROR_ENUM.unAuthorization);
   }

-  return await authJWT(cookieToken);
+  return authUserSession(cookieToken);
 }

 // from authorization get apikey
 async function parseAuthorization(authorization?: string) {
@@ -345,6 +301,7 @@ export async function parseHeaderCert({
   if (authToken && (token || cookie)) {
     // user token(from fastgpt web)
     const res = await authCookieToken(cookie, token);
+
     return {
       uid: res.userId,
       teamId: res.teamId,
View File

@@ -0,0 +1,179 @@
import { retryFn } from '@fastgpt/global/common/system/utils';
import { getAllKeysByPrefix, getGlobalRedisConnection } from '../../common/redis';
import { addLog } from '../../common/system/log';
import { ERROR_ENUM } from '@fastgpt/global/common/error/errorCode';
import { getNanoid } from '@fastgpt/global/common/string/tools';
const redisPrefix = 'session:';
const getSessionKey = (key: string) => `${redisPrefix}${key}`;
type SessionType = {
userId: string;
teamId: string;
tmbId: string;
isRoot?: boolean;
createdAt: number;
ip?: string | null;
};
/* Session manager */
const setSession = async ({
key,
data,
expireSeconds
}: {
key: string;
data: SessionType;
expireSeconds: number;
}) => {
return await retryFn(async () => {
try {
const redis = getGlobalRedisConnection();
const formatKey = getSessionKey(key);
// Store the object fields with hmset
await redis.hmset(formatKey, {
userId: data.userId,
teamId: data.teamId,
tmbId: data.tmbId,
isRoot: data.isRoot ? '1' : '0',
createdAt: data.createdAt.toString(),
ip: data.ip
});
// Set the expiration time
if (expireSeconds) {
await redis.expire(formatKey, expireSeconds);
}
} catch (error) {
addLog.error('Set session error:', error);
return Promise.reject(error);
}
});
};
const delSession = (key: string) => {
const redis = getGlobalRedisConnection();
retryFn(() => redis.del(getSessionKey(key)));
};
const getSession = async (key: string): Promise<SessionType> => {
const formatKey = getSessionKey(key);
const redis = getGlobalRedisConnection();
// Fetch all fields with hgetall
const data = await retryFn(() => redis.hgetall(formatKey));
if (!data || Object.keys(data).length === 0) {
return Promise.reject(ERROR_ENUM.unAuthorization);
}
try {
return {
userId: data.userId,
teamId: data.teamId,
tmbId: data.tmbId,
isRoot: data.isRoot === '1',
createdAt: parseInt(data.createdAt),
ip: data.ip
};
} catch (error) {
addLog.error('Parse session error:', error);
delSession(formatKey);
return Promise.reject(ERROR_ENUM.unAuthorization);
}
};
export const delUserAllSession = async (userId: string, whileList?: string[]) => {
const formatWhileList = whileList?.map((item) => getSessionKey(item));
const redis = getGlobalRedisConnection();
const keys = (await getAllKeysByPrefix(`${redisPrefix}${userId}`)).filter(
(item) => !formatWhileList?.includes(item)
);
if (keys.length > 0) {
await redis.del(keys);
}
};
// Delete sessions that exceed the client login limit, based on creation time
const delRedundantSession = async (userId: string) => {
// At least 1; defaults to 10
let maxSession = process.env.MAX_LOGIN_SESSION ? Number(process.env.MAX_LOGIN_SESSION) : 10;
if (maxSession < 1) {
maxSession = 1;
}
const redis = getGlobalRedisConnection();
const keys = await getAllKeysByPrefix(`${redisPrefix}${userId}`);
if (keys.length <= maxSession) {
return;
}
// Fetch the creation time of every session
const sessionList = await Promise.all(
keys.map(async (key) => {
try {
const data = await redis.hgetall(key);
if (!data || Object.keys(data).length === 0) return null;
return {
key,
createdAt: parseInt(data.createdAt)
};
} catch (error) {
return null;
}
})
);
// Filter out invalid entries and sort by creation time
const validSessions = sessionList.filter(Boolean) as { key: string; createdAt: number }[];
validSessions.sort((a, b) => a.createdAt - b.createdAt);
// Delete the earliest-created sessions
const delKeys = validSessions.slice(0, validSessions.length - maxSession).map((item) => item.key);
if (delKeys.length > 0) {
await redis.del(delKeys);
}
};
export const createUserSession = async ({
userId,
teamId,
tmbId,
isRoot,
ip
}: {
userId: string;
teamId: string;
tmbId: string;
isRoot?: boolean;
ip?: string | null;
}) => {
const key = `${String(userId)}:${getNanoid(32)}`;
await setSession({
key,
data: {
userId: String(userId),
teamId: String(teamId),
tmbId: String(tmbId),
isRoot,
createdAt: new Date().getTime(),
ip
},
expireSeconds: 7 * 24 * 60 * 60
});
delRedundantSession(userId);
return key;
};
export const authUserSession = async (key: string): Promise<SessionType> => {
const data = await getSession(key);
return data;
};
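`delRedundantSession` caps each user at `MAX_LOGIN_SESSION` sessions (at least 1, default 10) by evicting the oldest ones. The selection step can be isolated as a pure function, independent of Redis; this is a sketch with assumed shapes, not the module's exported API:

```typescript
// Sketch of the eviction choice in delRedundantSession: given all sessions
// for a user, return the keys of the oldest ones beyond the allowed maximum.
type SessionInfo = { key: string; createdAt: number };

const pickSessionsToEvict = (sessions: SessionInfo[], maxSession: number): string[] => {
  const limit = Math.max(1, maxSession); // at least one session is always kept
  if (sessions.length <= limit) return [];
  return [...sessions]
    .sort((a, b) => a.createdAt - b.createdAt) // oldest first
    .slice(0, sessions.length - limit)
    .map((s) => s.key);
};
```

Because the session key is prefixed with the userId (`session:<userId>:<nanoid>`), the real implementation can enumerate one user's sessions with a single prefix scan before applying this choice.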

View File

@@ -1,5 +1,6 @@
 import iconv from 'iconv-lite';
 import { type ReadRawTextByBuffer, type ReadFileResponse } from '../type';
+import { matchMdImg } from '@fastgpt/global/common/string/markdown';

 const rawEncodingList = [
   'ascii',
@@ -34,7 +35,10 @@ export const readFileRawText = ({ buffer, encoding }: ReadRawTextByBuffer): Read
     }
   })();

+  const { text, imageList } = matchMdImg(content);

   return {
-    rawText: content
+    rawText: text,
+    imageList
   };
 };
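The hunk above routes raw text through `matchMdImg` so markdown image links are surfaced as a separate `imageList` alongside the text. A simplified stand-in for what such a step can look like (illustrative only; the real `matchMdImg` may differ in shape and behavior):

```typescript
// Hypothetical sketch: scan raw text for markdown image syntax ![alt](url)
// and collect the matches into a list, returning text and images together.
type MdImage = { alt: string; url: string };

const extractMdImages = (raw: string): { text: string; imageList: MdImage[] } => {
  const imageList: MdImage[] = [];
  const text = raw.replace(/!\[([^\]]*)\]\(([^)]+)\)/g, (match, alt, url) => {
    imageList.push({ alt, url });
    return match; // keep the link in place; only collect it
  });
  return { text, imageList };
};
```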

View File

@@ -28,11 +28,11 @@ export const readXlsxRawText = async ({
       if (!header) return;

       const formatText = `| ${header.join(' | ')} |
| ${header.map(() => '---').join(' | ')} |
${csvArr
  .slice(1)
  .map((row) => `| ${row.map((item) => item.replace(/\n/g, '\\n')).join(' | ')} |`)
  .join('\n')}`;

       return formatText;
     })
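The template literal above renders each sheet as a GitHub-style markdown table, escaping embedded newlines so every record stays on one line. The same transform as a standalone helper (a sketch, not the module's exported function):

```typescript
// Sketch of the sheet-to-markdown transform: the first row becomes the header,
// and newlines inside cells are escaped so each record stays on a single line.
const csvToMarkdownTable = (csvArr: string[][]): string => {
  const [header, ...rows] = csvArr;
  if (!header) return '';
  return `| ${header.join(' | ')} |
| ${header.map(() => '---').join(' | ')} |
${rows.map((row) => `| ${row.map((item) => item.replace(/\n/g, '\\n')).join(' | ')} |`).join('\n')}`;
};
```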

View File

@@ -6,10 +6,6 @@ export const getUserFingerprint = async () => {
   console.log(result.visitorId);
 };

-export const hasHttps = () => {
-  return window.location.protocol === 'https:';
-};

 export const subRoute = process.env.NEXT_PUBLIC_BASE_URL;

 export const getWebReqUrl = (url: string = '') => {
@@ -20,3 +16,32 @@ export const getWebReqUrl = (url: string = '') => {
   if (!url.startsWith('/') || url.startsWith(baseUrl)) return url;
   return `${baseUrl}${url}`;
 };
+
+export const isMobile = () => {
+  // SSR return false
+  if (typeof window === 'undefined') return false;
+
+  // 1. Check User-Agent
+  const userAgent = navigator.userAgent.toLowerCase();
+  const mobileKeywords = [
+    'android',
+    'iphone',
+    'ipod',
+    'ipad',
+    'windows phone',
+    'blackberry',
+    'webos',
+    'iemobile',
+    'opera mini'
+  ];
+  const isMobileUA = mobileKeywords.some((keyword) => userAgent.includes(keyword));
+
+  // 2. Check screen width
+  const isMobileWidth = window.innerWidth <= 900;
+
+  // 3. Check if touch events are supported (exclude touch screen PCs)
+  const isTouchDevice = 'ontouchstart' in window || navigator.maxTouchPoints > 0;
+
+  // If any of the following conditions are met, it is considered a mobile device
+  return isMobileUA || (isMobileWidth && isTouchDevice);
+};
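The new `isMobile` mixes three signals read from `window`: the user agent, the viewport width, and touch support. The decision itself can be separated into a pure predicate for testing; this is a sketch with the browser reads factored out as inputs, not the helper the diff adds:

```typescript
// Sketch: the same mobile heuristic with its browser reads passed in, so the
// UA / width / touch combination can be exercised without a window object.
const MOBILE_KEYWORDS = [
  'android', 'iphone', 'ipod', 'ipad', 'windows phone',
  'blackberry', 'webos', 'iemobile', 'opera mini'
];

const isMobileLike = (input: {
  userAgent: string;
  innerWidth: number;
  hasTouch: boolean;
}): boolean => {
  const ua = input.userAgent.toLowerCase();
  const isMobileUA = MOBILE_KEYWORDS.some((k) => ua.includes(k));
  const isMobileWidth = input.innerWidth <= 900;
  // A mobile UA alone is decisive; a narrow window only counts on touch devices
  return isMobileUA || (isMobileWidth && input.hasTouch);
};
```

Requiring touch alongside a narrow width is what keeps a resized desktop browser from being classified as mobile.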

View File

@@ -6,7 +6,6 @@ import MyTooltip from '../MyTooltip';
 type Props = FlexProps & {
   icon: string;
   size?: string;
-  onClick?: () => void;
   hoverColor?: string;
   hoverBg?: string;
   hoverBorderColor?: string;
@@ -41,9 +40,9 @@ const MyIconButton = ({
         color: hoverColor,
         borderColor: hoverBorderColor
       }}
-      onClick={() => {
+      onClick={(e) => {
         if (isLoading) return;
-        onClick?.();
+        onClick?.(e);
       }}
       sx={{ userSelect: 'none' }}
       {...props}

View File

@@ -2,6 +2,7 @@
 export const iconPaths = {
   alignLeft: () => import('./icons/alignLeft.svg'),
+  backup: () => import('./icons/backup.svg'),
   book: () => import('./icons/book.svg'),
   change: () => import('./icons/change.svg'),
   chatSend: () => import('./icons/chatSend.svg'),
@@ -229,6 +230,7 @@ export const iconPaths = {
   'core/dataset/tableCollection': () => import('./icons/core/dataset/tableCollection.svg'),
   'core/dataset/tag': () => import('./icons/core/dataset/tag.svg'),
   'core/dataset/websiteDataset': () => import('./icons/core/dataset/websiteDataset.svg'),
+  'core/dataset/otherDataset': () => import('./icons/core/dataset/otherDataset.svg'),
   'core/dataset/websiteDatasetColor': () => import('./icons/core/dataset/websiteDatasetColor.svg'),
   'core/dataset/websiteDatasetOutline': () =>
     import('./icons/core/dataset/websiteDatasetOutline.svg'),
@@ -439,6 +441,7 @@ export const iconPaths = {
   point: () => import('./icons/point.svg'),
   preview: () => import('./icons/preview.svg'),
   'price/bg': () => import('./icons/price/bg.svg'),
+  'price/pricearrow': () => import('./icons/price/pricearrow.svg'),
   'price/right': () => import('./icons/price/right.svg'),
   save: () => import('./icons/save.svg'),
   sliderTag: () => import('./icons/sliderTag.svg'),

View File

@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" >
<path fill-rule="evenodd" clip-rule="evenodd" d="M17.9386 2H10.2616C9.73441 1.99998 9.27964 1.99997 8.90507 2.03057C8.50973 2.06287 8.11651 2.13419 7.73813 2.32698C7.17364 2.6146 6.7147 3.07354 6.42708 3.63803C6.23429 4.01641 6.16297 4.40963 6.13067 4.80497C6.10007 5.17955 6.10008 5.63431 6.1001 6.16146V13.8385C6.10008 14.3657 6.10007 14.8205 6.13067 15.195C6.16297 15.5904 6.23429 15.9836 6.42708 16.362C6.7147 16.9265 7.17364 17.3854 7.73813 17.673C8.11651 17.8658 8.50973 17.9371 8.90507 17.9694C9.27961 18 9.73432 18 10.2614 18H17.9386C18.4657 18 18.9206 18 19.2951 17.9694C19.6905 17.9371 20.0837 17.8658 20.4621 17.673C21.0266 17.3854 21.4855 16.9265 21.7731 16.362C21.9659 15.9836 22.0372 15.5904 22.0695 15.195C22.1001 14.8205 22.1001 14.3658 22.1001 13.8387V6.16148C22.1001 5.63439 22.1001 5.17951 22.0695 4.80497C22.0372 4.40963 21.9659 4.01641 21.7731 3.63803C21.4855 3.07354 21.0266 2.6146 20.4621 2.32698C20.0837 2.13419 19.6905 2.06287 19.2951 2.03057C18.9206 1.99997 18.4658 1.99998 17.9386 2ZM15.1001 16H17.9001C18.4767 16 18.8489 15.9992 19.1323 15.9761C19.4039 15.9539 19.5046 15.9162 19.5541 15.891C19.7423 15.7951 19.8952 15.6422 19.9911 15.454C20.0163 15.4045 20.054 15.3038 20.0762 15.0322C20.0993 14.7488 20.1001 14.3766 20.1001 13.8V11H15.1001V16ZM20.1001 9V6.2C20.1001 5.62345 20.0993 5.25117 20.0762 4.96784C20.054 4.69617 20.0163 4.59546 19.9911 4.54601C19.8952 4.35785 19.7423 4.20487 19.5541 4.109C19.5046 4.0838 19.4039 4.04612 19.1323 4.02393C18.8489 4.00078 18.4767 4 17.9001 4H10.3001C9.72355 4 9.35127 4.00078 9.06793 4.02393C8.79627 4.04612 8.69555 4.0838 8.64611 4.109C8.45795 4.20487 8.30497 4.35785 8.20909 4.54601C8.1839 4.59546 8.14622 4.69617 8.12403 4.96784C8.10088 5.25117 8.1001 5.62345 8.1001 6.2V9H20.1001ZM13.1001 11V16H10.3001C9.72355 16 9.35127 15.9992 9.06793 15.9761C8.79627 15.9539 8.69555 15.9162 8.64611 15.891C8.45795 15.7951 8.30497 15.6422 8.20909 15.454C8.1839 15.4045 8.14622 15.3038 8.12403 15.0322C8.10088 14.7488 8.1001 14.3766 8.1001 
13.8V11H13.1001Z" />
<path d="M4.1001 7C4.1001 6.44772 3.65238 6 3.1001 6C2.54781 6 2.1001 6.44772 2.1001 7L2.1001 15.9217C2.10009 16.7823 2.10008 17.4887 2.14702 18.0632C2.19567 18.6586 2.29968 19.2 2.55787 19.7068C2.96054 20.497 3.60306 21.1396 4.39334 21.5422C4.90007 21.8004 5.44147 21.9044 6.03691 21.9531C6.61142 22 7.3177 22 8.17835 22H17.1001C17.6524 22 18.1001 21.5523 18.1001 21C18.1001 20.4477 17.6524 20 17.1001 20H8.2201C7.30751 20 6.68322 19.9992 6.19978 19.9597C5.72801 19.9212 5.47911 19.8508 5.30132 19.7602C4.88736 19.5493 4.55081 19.2127 4.33988 18.7988C4.2493 18.621 4.17892 18.3721 4.14038 17.9003C4.10088 17.4169 4.1001 16.7926 4.1001 15.88V7Z" />
</svg>


View File

@@ -1,11 +1,11 @@
-<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
-<rect width="100%" height="100%" fill="url(#paint0_linear_7967_30275)" />
-<path fill-rule="evenodd" clip-rule="evenodd" d="M12.7552 4.8767C12.3073 4.92969 11.9962 5.33456 12.0603 5.78101L13.3235 14.5762C13.3876 15.0226 13.8027 15.3416 14.2506 15.2886L16.1502 15.0639C16.5981 15.0109 16.9093 14.606 16.8451 14.1596L15.582 5.36443C15.5178 4.91798 15.1028 4.59901 14.6548 4.65199L12.7552 4.8767ZM4.0675 5.52248C4.0675 5.07145 4.43314 4.70582 4.88417 4.70582H6.80225C7.25328 4.70582 7.61892 5.07145 7.61892 5.52248V14.4772C7.61892 14.9282 7.25328 15.2938 6.80225 15.2938H4.88417C4.43314 15.2938 4.0675 14.9282 4.0675 14.4772V5.52248ZM8.20321 5.52248C8.20321 5.07145 8.56885 4.70582 9.01988 4.70582H10.938C11.389 4.70582 11.7546 5.07145 11.7546 5.52248V14.4772C11.7546 14.9282 11.389 15.2938 10.938 15.2938H9.01988C8.56885 15.2938 8.20321 14.9282 8.20321 14.4772V5.52248Z" fill="white"/>
+<svg viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
+<rect width="32" height="32" fill="url(#paint0_linear_20384_1853)"/>
+<path d="M22.8185 7.25714C23.3782 6.69749 24.2855 6.69749 24.8452 7.25714C25.4048 7.81678 25.4048 8.72415 24.8452 9.28379L23.8314 10.2976C25.0837 12.4456 24.7895 15.2473 22.9487 17.0881L21.7775 18.2593C21.4078 18.6289 20.8085 18.6289 20.4388 18.2593L13.8431 11.6635C13.4734 11.2938 13.4734 10.6945 13.8431 10.3249L15.0142 9.15369C16.855 7.3129 19.6567 7.01864 21.8047 8.27093L22.8185 7.25714Z" fill="white"/>
+<path d="M14.0661 16.4523L15.65 18.0362L16.3429 17.3434C16.7073 16.9789 17.2983 16.9789 17.6628 17.3434L18.3695 18.0501C18.734 18.4146 18.734 19.0056 18.3695 19.3701L17.6767 20.0629L18.1561 20.5422C18.5257 20.9118 18.5257 21.5111 18.1561 21.8807L16.9857 23.051C15.1449 24.8918 12.3432 25.1861 10.1952 23.9338L9.18136 24.9476C8.62172 25.5073 7.71435 25.5073 7.1547 24.9476C6.59506 24.388 6.59506 23.4806 7.1547 22.9209L8.16853 21.9071C6.91623 19.7591 7.21048 16.9574 9.05127 15.1166L10.2217 13.9462C10.5913 13.5767 11.1905 13.5767 11.5601 13.9462L12.0395 14.4256L12.7323 13.7328C13.0968 13.3683 13.6877 13.3683 14.0522 13.7328L14.7589 14.4396C15.1234 14.804 15.1234 15.395 14.7589 15.7595L14.0661 16.4523Z" fill="white"/>
 <defs>
-<linearGradient id="paint0_linear_7967_30275" x1="1.5" y1="20" x2="20" y2="1.6056e-06" gradientUnits="userSpaceOnUse">
-<stop stop-color="#C172FF"/>
-<stop offset="1" stop-color="#F19EFF"/>
+<linearGradient id="paint0_linear_20384_1853" x1="2.4" y1="32" x2="32" y2="2.56896e-06" gradientUnits="userSpaceOnUse">
+<stop stop-color="#00CAD1"/>
+<stop offset="1" stop-color="#73E6D8"/>
 </linearGradient>
 </defs>
 </svg>


@@ -0,0 +1,11 @@
<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="100%" height="100%" fill="url(#paint0_linear_7967_30275)" />
<path fill-rule="evenodd" clip-rule="evenodd" d="M12.7552 4.8767C12.3073 4.92969 11.9962 5.33456 12.0603 5.78101L13.3235 14.5762C13.3876 15.0226 13.8027 15.3416 14.2506 15.2886L16.1502 15.0639C16.5981 15.0109 16.9093 14.606 16.8451 14.1596L15.582 5.36443C15.5178 4.91798 15.1028 4.59901 14.6548 4.65199L12.7552 4.8767ZM4.0675 5.52248C4.0675 5.07145 4.43314 4.70582 4.88417 4.70582H6.80225C7.25328 4.70582 7.61892 5.07145 7.61892 5.52248V14.4772C7.61892 14.9282 7.25328 15.2938 6.80225 15.2938H4.88417C4.43314 15.2938 4.0675 14.9282 4.0675 14.4772V5.52248ZM8.20321 5.52248C8.20321 5.07145 8.56885 4.70582 9.01988 4.70582H10.938C11.389 4.70582 11.7546 5.07145 11.7546 5.52248V14.4772C11.7546 14.9282 11.389 15.2938 10.938 15.2938H9.01988C8.56885 15.2938 8.20321 14.9282 8.20321 14.4772V5.52248Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_7967_30275" x1="1.5" y1="20" x2="20" y2="1.6056e-06" gradientUnits="userSpaceOnUse">
<stop stop-color="#C172FF"/>
<stop offset="1" stop-color="#F19EFF"/>
</linearGradient>
</defs>
</svg>


@@ -0,0 +1,3 @@
<svg width="23" height="41" viewBox="0 0 23 41" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0.487322 0.838902C5.52993 0.392809 14.4127 1.30573 19.3633 8.23636L19.7315 8.77445C23.4021 14.3865 23.0442 21.6048 19.9483 26.9717C17.7504 30.7815 14.1557 33.6822 9.62013 34.4327C9.59005 34.4426 9.55881 34.45 9.52638 34.4541C9.52493 34.4546 9.5225 34.4561 9.51955 34.4571C9.5153 34.4585 9.50623 34.4621 9.49806 34.4649L9.41994 34.4873C9.41814 34.4879 9.41426 34.49 9.40724 34.4922C9.39802 34.4952 9.37728 34.5023 9.35939 34.5078C9.3403 34.5137 9.31328 34.5214 9.2842 34.5284L9.1592 34.5469L8.89748 34.5459L8.79006 34.544C7.42741 34.6867 5.98678 34.6389 4.47853 34.3672C4.53191 34.4008 4.6025 34.4492 4.66799 34.5147C4.69099 34.5376 4.72144 34.5593 4.80959 34.6211C4.8805 34.6708 5.00368 34.7557 5.10646 34.878L5.13967 34.919C5.14802 34.9291 5.15311 34.935 5.15627 34.9385L5.41017 35.1338C5.6673 35.3405 5.90801 35.5576 6.13576 35.7627C6.44566 36.0419 6.73402 36.3007 7.04494 36.5264L7.20998 36.6602C7.36441 36.8001 7.49012 36.9562 7.59279 37.0967L7.68263 37.1787C7.73953 37.2274 7.85194 37.3224 7.94045 37.46L7.96291 37.4883C7.99181 37.5184 8.044 37.5555 8.15529 37.6202C8.27226 37.6881 8.51757 37.8164 8.6924 38.041L8.87306 38.2422C8.93835 38.3118 9.01823 38.4008 9.09572 38.5069C9.42285 38.7501 9.7519 38.9908 10.084 39.2266L10.3828 39.42C10.4872 39.4826 10.5969 39.5441 10.7119 39.6084C10.8236 39.6709 10.9418 39.7372 11.0615 39.8067L11.4199 40.0274L11.4981 40.0918C11.6617 40.257 11.6926 40.5197 11.5586 40.7207C11.4246 40.9213 11.1703 40.9945 10.9551 40.9073L10.8653 40.8594L10.5606 40.6709C10.4541 40.6092 10.3418 40.5476 10.2236 40.4815C10.052 40.3854 9.86755 40.2816 9.68556 40.1641L9.5049 40.042C9.41935 39.9813 9.33432 39.9197 9.24904 39.8584C9.2181 39.8524 9.1871 39.8442 9.15724 39.8321L9.06838 39.7842L8.88869 39.667C8.73114 39.5618 8.55702 39.4327 8.3965 39.2383L8.38967 39.2295C8.00348 38.941 7.62059 38.6476 7.2422 38.3487L6.50783 37.7569C6.00954 37.3479 5.57646 36.9545 5.12013 36.5811L4.65236 36.2149C4.41843 36.0403 4.17408 35.8774 3.92092 35.7198L3.13967 
35.2539C3.00588 35.1761 2.86098 35.1009 2.69631 35.0137C2.53656 34.9291 2.36142 34.8357 2.19142 34.7305C1.89714 34.5484 1.58338 34.3126 1.34963 33.9825L1.2549 33.835L1.21291 33.7432C1.21251 33.742 1.21232 33.7405 1.21193 33.7393C1.08214 33.4994 1.16845 33.1982 1.40724 33.0645L3.10842 32.1084C3.68186 31.7899 4.26243 31.477 4.85158 31.1846L5.51271 30.8682C6.1781 30.5614 6.85297 30.2778 7.49806 29.9903L8.07424 29.7295C8.64423 29.4677 9.19873 29.1967 9.72951 28.8877L9.99416 28.7246C10.0791 28.6011 10.2177 28.5164 10.3789 28.5078C10.3824 28.5077 10.3862 28.5088 10.3897 28.5088C10.3915 28.5085 10.3937 28.5073 10.3955 28.5069C10.4079 28.5048 10.4781 28.4925 10.5654 28.5078C10.6116 28.516 10.7084 28.5395 10.8018 28.6182C10.9121 28.7113 10.972 28.8433 10.9785 28.9766C10.9839 29.0882 10.9514 29.1725 10.9336 29.211C10.9138 29.2537 10.8915 29.2851 10.8789 29.3018C10.8541 29.3349 10.8303 29.3577 10.8213 29.3662C10.8008 29.3857 10.7823 29.401 10.7754 29.4063L10.6533 29.4912C10.5861 29.5373 10.5101 29.5851 10.4365 29.6299L10.2324 29.752C9.66368 30.0831 9.07573 30.3708 8.49025 30.6397L7.90529 30.9034C7.23393 31.2026 6.579 31.4778 5.93556 31.7744L5.29592 32.0801C4.80426 32.3242 4.32055 32.583 3.83791 32.8487L3.93849 32.8799L4.09963 32.9278C4.16584 32.9496 4.26199 32.9861 4.35646 33.0479L4.41213 33.0713C4.44909 33.0849 4.49227 33.0981 4.53517 33.1094L4.90236 33.2129C5.0111 33.2435 5.11243 33.2689 5.22365 33.2891L5.49416 33.3506C5.57406 33.3703 5.63482 33.3852 5.6924 33.3946L6.34084 33.5C6.45433 33.5184 6.57269 33.5275 6.70802 33.5371C6.83757 33.5463 6.99208 33.5564 7.14455 33.5743L7.26662 33.5801C7.31235 33.5802 7.36473 33.5787 7.42677 33.5762C7.53858 33.5718 7.69494 33.5634 7.85256 33.5772H7.94631C7.98429 33.5753 8.02868 33.5718 8.07717 33.5664L8.40431 33.5352L8.48146 33.5293C8.50488 33.5292 8.52576 33.5306 8.54299 33.5323C8.56567 33.5345 8.58567 33.5385 8.59963 33.541C8.64475 33.5413 8.69333 33.5426 8.74318 33.543C13.3027 33.0453 16.9045 30.2471 19.082 26.4727C22.0174 21.3845 
22.3308 14.5763 18.8945 9.32132L18.5498 8.81742C13.9144 2.32808 5.50466 1.3989 0.575212 1.835C0.300419 1.85909 0.0577133 1.65566 0.0332203 1.38089C0.00891035 1.10592 0.212384 0.863353 0.487322 0.838902ZM9.49123 34.461C9.49697 34.4606 9.506 34.46 9.51564 34.458C9.51858 34.4574 9.52136 34.4556 9.52345 34.4551C9.51294 34.4564 9.50195 34.4604 9.49123 34.461Z" fill="#DC7E03"/>
</svg>


@@ -3,8 +3,10 @@ import { Box, HStack, Icon, type StackProps } from '@chakra-ui/react';
const LightTip = ({ const LightTip = ({
text, text,
icon = 'common/info',
...props ...props
}: { }: {
icon?: string;
text: string; text: string;
} & StackProps) => { } & StackProps) => {
return ( return (
@@ -17,7 +19,7 @@ const LightTip = ({
fontSize={'sm'} fontSize={'sm'}
{...props} {...props}
> >
<Icon name="common/info" w="1rem" /> <Icon name={icon} w="1rem" />
<Box>{text}</Box> <Box>{text}</Box>
</HStack> </HStack>
); );


@@ -216,7 +216,7 @@ const MyMenu = ({
if (offset) return offset; if (offset) return offset;
if (typeof width === 'number') return [-width / 2, 5]; if (typeof width === 'number') return [-width / 2, 5];
return [0, 5]; return [0, 5];
}, [offset]); }, [offset, width]);
return ( return (
<Menu <Menu
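The hunk above adds `width` to the `useMemo` dependency array: the memoized offset reads `width`, so omitting it from the deps leaves a stale value after `width` changes. A minimal sketch of the failure mode, using a hypothetical `memoize` helper as a stand-in for React's `useMemo` (it recomputes only when a listed dependency changes identity):

```typescript
// Hypothetical stand-in for React's useMemo: recompute only when a dep changes.
function memoize<T>(compute: () => T) {
  let lastDeps: unknown[] | undefined;
  let cached!: T;
  return (deps: unknown[]): T => {
    const changed =
      !lastDeps || deps.length !== lastDeps.length || deps.some((d, i) => d !== lastDeps![i]);
    if (changed) {
      cached = compute();
      lastDeps = deps;
    }
    return cached;
  };
}

let width: number | string = 100;
const getOffset = memoize(() => (typeof width === 'number' ? [-width / 2, 5] : [0, 5]));

const first = getOffset([]); // deps omit width: computes [-50, 5]
width = 200;
const stale = getOffset([]); // width changed, but the memo cannot see it: still [-50, 5]
const fresh = getOffset([width]); // with width as a dep, it recomputes: [-100, 5]

console.log(first, stale, fresh);
```

With `width` listed, the second call would have recomputed immediately, which is exactly what the one-line dependency fix restores.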


@@ -11,15 +11,16 @@ import {
HStack, HStack,
Box, Box,
Button, Button,
PopoverArrow PopoverArrow,
Portal
} from '@chakra-ui/react'; } from '@chakra-ui/react';
const PopoverConfirm = ({ const PopoverConfirm = ({
content, content,
showCancel, showCancel = true,
type, type,
Trigger, Trigger,
placement = 'bottom-start', placement = 'auto',
offset, offset,
onConfirm, onConfirm,
confirmText, confirmText,
@@ -50,7 +51,7 @@ const PopoverConfirm = ({
}; };
if (type && map[type]) return map[type]; if (type && map[type]) return map[type];
return map.info; return map.info;
}, [type, t]); }, [type]);
const firstFieldRef = React.useRef(null); const firstFieldRef = React.useRef(null);
const { onOpen, onClose, isOpen } = useDisclosure(); const { onOpen, onClose, isOpen } = useDisclosure();
@@ -67,7 +68,7 @@ const PopoverConfirm = ({
onClose={onClose} onClose={onClose}
placement={placement} placement={placement}
offset={offset} offset={offset}
closeOnBlur={false} closeOnBlur={true}
trigger={'click'} trigger={'click'}
openDelay={100} openDelay={100}
closeDelay={100} closeDelay={100}
@@ -75,6 +76,7 @@ const PopoverConfirm = ({
lazyBehavior="keepMounted" lazyBehavior="keepMounted"
arrowSize={10} arrowSize={10}
strategy={'fixed'} strategy={'fixed'}
computePositionOnMount={true}
> >
<PopoverTrigger>{Trigger}</PopoverTrigger> <PopoverTrigger>{Trigger}</PopoverTrigger>
<PopoverContent p={4}> <PopoverContent p={4}>
@@ -82,15 +84,25 @@ const PopoverConfirm = ({
<HStack alignItems={'flex-start'} color={'myGray.700'}> <HStack alignItems={'flex-start'} color={'myGray.700'}>
<MyIcon name={map.icon as any} w={'1.5rem'} /> <MyIcon name={map.icon as any} w={'1.5rem'} />
<Box fontSize={'sm'}>{content}</Box> <Box fontSize={'sm'} whiteSpace={'pre-wrap'}>
{content}
</Box>
</HStack> </HStack>
<HStack mt={1} justifyContent={'flex-end'}> <HStack mt={2} justifyContent={'flex-end'}>
{showCancel && ( {showCancel && (
<Button variant={'whiteBase'} size="sm" onClick={onClose}> <Button variant={'whiteBase'} size="sm" onClick={onClose}>
{cancelText || t('common:Cancel')} {cancelText || t('common:Cancel')}
</Button> </Button>
)} )}
<Button isLoading={loading} variant={map.variant} size="sm" onClick={onclickConfirm}> <Button
isLoading={loading}
variant={map.variant}
size="sm"
onClick={async (e) => {
e.stopPropagation();
await onclickConfirm();
}}
>
{confirmText || t('common:Confirm')} {confirmText || t('common:Confirm')}
</Button> </Button>
</HStack> </HStack>
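The confirm button above now wraps its handler to call `e.stopPropagation()` before awaiting `onclickConfirm`. The point of the guard: the popover often sits inside a clickable ancestor (a list row or card), and without it the confirm click would bubble up and trigger the ancestor's handler too. A sketch with a hypothetical mini event model (not the real DOM API) showing the bubbling behavior being cut off:

```typescript
// Hypothetical bubbling model: handlers run innermost -> outermost, like DOM
// click bubbling, until one of them calls stopPropagation().
type ClickEvent = { stopPropagation: () => void };
type Handler = (e: ClickEvent) => void;

function dispatchClick(chain: Handler[]): void {
  let stopped = false;
  const event: ClickEvent = { stopPropagation: () => { stopped = true; } };
  for (const handler of chain) {
    if (stopped) break; // an inner handler stopped the bubble
    handler(event);
  }
}

const fired: string[] = [];
dispatchClick([
  // Confirm button handler stops propagation, as in the patched component...
  (e) => { fired.push('confirm'); e.stopPropagation(); },
  // ...so this outer "card" handler never runs.
  () => fired.push('card')
]);

console.log(fired); // ['confirm']
```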


@@ -57,6 +57,7 @@ const MyPopover = ({
closeDelay={100} closeDelay={100}
isLazy isLazy
lazyBehavior="keepMounted" lazyBehavior="keepMounted"
autoFocus={false}
> >
<PopoverTrigger>{Trigger}</PopoverTrigger> <PopoverTrigger>{Trigger}</PopoverTrigger>
<PopoverContent {...props}> <PopoverContent {...props}>


@@ -54,7 +54,7 @@ const RadioGroup = <T = any,>({ list, value, onChange, ...props }: Props<T>) =>
/> />
</Flex> </Flex>
</Box> </Box>
<HStack spacing={1} color={'myGray.900'} whiteSpace={'nowrap'} fontSize={'sm'}> <HStack spacing={0.5} color={'myGray.900'} whiteSpace={'nowrap'} fontSize={'sm'}>
<Box>{typeof item.title === 'string' ? t(item.title as any) : item.title}</Box> <Box>{typeof item.title === 'string' ? t(item.title as any) : item.title}</Box>
{!!item.tooltip && <QuestionTip label={item.tooltip} color={'myGray.600'} />} {!!item.tooltip && <QuestionTip label={item.tooltip} color={'myGray.600'} />}
</HStack> </HStack>


@@ -1,5 +1,5 @@
import React, { forwardRef } from 'react'; import React, { forwardRef } from 'react';
import { Flex, Box, type BoxProps } from '@chakra-ui/react'; import { Flex, Box, type BoxProps, HStack } from '@chakra-ui/react';
import MyIcon from '../Icon'; import MyIcon from '../Icon';
type Props<T = string> = Omit<BoxProps, 'onChange'> & { type Props<T = string> = Omit<BoxProps, 'onChange'> & {
@@ -10,9 +10,22 @@ type Props<T = string> = Omit<BoxProps, 'onChange'> & {
}[]; }[];
value: T; value: T;
onChange: (e: T) => void; onChange: (e: T) => void;
iconSize?: string;
labelSize?: string;
iconGap?: number;
}; };
const FillRowTabs = ({ list, value, onChange, py = '7px', px = '12px', ...props }: Props) => { const FillRowTabs = ({
list,
value,
onChange,
py = '2.5',
px = '4',
iconSize = '18px',
labelSize = 'sm',
iconGap = 2,
...props
}: Props) => {
return ( return (
<Box <Box
display={'inline-flex'} display={'inline-flex'}
@@ -28,7 +41,7 @@ const FillRowTabs = ({ list, value, onChange, py = '7px', px = '12px', ...props
{...props} {...props}
> >
{list.map((item) => ( {list.map((item) => (
<Flex <HStack
key={item.value} key={item.value}
flex={'1 0 0'} flex={'1 0 0'}
alignItems={'center'} alignItems={'center'}
@@ -39,6 +52,7 @@ const FillRowTabs = ({ list, value, onChange, py = '7px', px = '12px', ...props
py={py} py={py}
userSelect={'none'} userSelect={'none'}
whiteSpace={'noWrap'} whiteSpace={'noWrap'}
gap={iconGap}
{...(value === item.value {...(value === item.value
? { ? {
bg: 'white', bg: 'white',
@@ -53,9 +67,9 @@ const FillRowTabs = ({ list, value, onChange, py = '7px', px = '12px', ...props
onClick: () => onChange(item.value) onClick: () => onChange(item.value)
})} })}
> >
{item.icon && <MyIcon name={item.icon as any} mr={1.5} w={'18px'} />} {item.icon && <MyIcon name={item.icon as any} w={iconSize} />}
<Box>{item.label}</Box> <Box fontSize={labelSize}>{item.label}</Box>
</Flex> </HStack>
))} ))}
</Box> </Box>
); );


@@ -1,8 +1,6 @@
import { useTranslation } from 'next-i18next'; import { useTranslation } from 'next-i18next';
import { useToast } from './useToast'; import { useToast } from './useToast';
import { useCallback } from 'react'; import { useCallback } from 'react';
import { hasHttps } from '../common/system/utils';
import { isProduction } from '@fastgpt/global/common/system/constants';
import MyModal from '../components/common/MyModal'; import MyModal from '../components/common/MyModal';
import React from 'react'; import React from 'react';
import { Box, ModalBody } from '@chakra-ui/react'; import { Box, ModalBody } from '@chakra-ui/react';
@@ -26,7 +24,7 @@ export const useCopyData = () => {
data = data.trim(); data = data.trim();
try { try {
if ((hasHttps() || !isProduction) && navigator.clipboard) { if (navigator.clipboard && window.isSecureContext) {
await navigator.clipboard.writeText(data); await navigator.clipboard.writeText(data);
if (title) { if (title) {
toast({ toast({
@@ -36,13 +34,35 @@ export const useCopyData = () => {
}); });
} }
} else { } else {
throw new Error(''); let textArea = document.createElement('textarea');
textArea.value = data;
// Move the textarea out of the viewport and make it invisible
textArea.style.position = 'absolute';
// @ts-ignore
textArea.style.opacity = 0;
textArea.style.left = '-999999px';
textArea.style.top = '-999999px';
document.body.appendChild(textArea);
textArea.focus();
textArea.select();
await new Promise((res, rej) => {
document.execCommand('copy') ? res('') : rej();
textArea.remove();
}).then(() => {
if (title) {
toast({
title,
status: 'success',
duration
});
}
});
} }
} catch (error) { } catch (error) {
setCopyContent(data); setCopyContent(data);
} }
}, },
[t, toast] [setCopyContent, t, toast]
); );
return { return {
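The rewritten hook replaces the old `hasHttps() || !isProduction` check with `navigator.clipboard && window.isSecureContext`, and adds a hidden-textarea fallback instead of throwing. The reason: the async Clipboard API is only available in secure contexts (HTTPS or localhost), so plain-HTTP deployments must fall back to `document.execCommand('copy')`. A sketch of just the strategy choice, with illustrative names (not the actual FastGPT helpers):

```typescript
// Illustrative decision logic: which copy path the hook takes.
type CopyStrategy = 'clipboard-api' | 'textarea-fallback';

function pickCopyStrategy(hasClipboardApi: boolean, isSecureContext: boolean): CopyStrategy {
  // navigator.clipboard is exposed but unusable outside secure contexts,
  // so both conditions must hold before using the async API.
  return hasClipboardApi && isSecureContext ? 'clipboard-api' : 'textarea-fallback';
}

console.log(pickCopyStrategy(true, true));   // clipboard-api (HTTPS or localhost)
console.log(pickCopyStrategy(true, false));  // textarea-fallback (plain HTTP)
console.log(pickCopyStrategy(false, true));  // textarea-fallback (no Clipboard API)
```

Checking `window.isSecureContext` directly is more robust than the old HTTPS/production heuristic, since it also covers localhost and other contexts the browser already treats as secure.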


@@ -66,6 +66,7 @@
"model.tool_choice_tip": "If the model supports tool calling, turn on this switch", "model.tool_choice_tip": "If the model supports tool calling, turn on this switch",
"model.used_in_classify": "Used for problem classification", "model.used_in_classify": "Used for problem classification",
"model.used_in_extract_fields": "for text extraction", "model.used_in_extract_fields": "for text extraction",
"model.used_in_query_extension": "For problem optimization",
"model.used_in_tool_call": "Used for tool call nodes", "model.used_in_tool_call": "Used for tool call nodes",
"model.vision": "Vision model", "model.vision": "Vision model",
"model.vision_tag": "Vision", "model.vision_tag": "Vision",


@@ -39,8 +39,6 @@
"new_password": "New Password", "new_password": "New Password",
"notification_receiving": "Notify", "notification_receiving": "Notify",
"old_password": "Old Password", "old_password": "Old Password",
"openai_account_configuration": "OpenAI account configuration",
"openai_account_setting_exception": "Setting OpenAI account exception",
"package_and_usage": "Plans", "package_and_usage": "Plans",
"package_details": "Details", "package_details": "Details",
"package_expiry_time": "Expired", "package_expiry_time": "Expired",
@@ -52,8 +50,10 @@
"password_update_success": "Password changed successfully", "password_update_success": "Password changed successfully",
"pending_usage": "To be used", "pending_usage": "To be used",
"phone_label": "Phone number", "phone_label": "Phone number",
"please_bind_contact": "Please bind the contact information",
"please_bind_notification_receiving_path": "Please bind the notification receiving method first", "please_bind_notification_receiving_path": "Please bind the notification receiving method first",
"purchase_extra_package": "Upgrade", "purchase_extra_package": "Upgrade",
"redeem_coupon": "Redeem coupon",
"reminder_create_bound_notification_account": "Remind the creator to bind the notification account", "reminder_create_bound_notification_account": "Remind the creator to bind the notification account",
"reset_password": "reset password", "reset_password": "reset password",
"resource_usage": "Usages", "resource_usage": "Usages",
@@ -75,6 +75,5 @@
"user_team_team_name": "Team", "user_team_team_name": "Team",
"verification_code": "Verification code", "verification_code": "Verification code",
"you_can_convert": "you can redeem", "you_can_convert": "you can redeem",
"yuan": "Yuan", "yuan": "Yuan"
"redeem_coupon": "Redeem coupon"
} }


@@ -8,8 +8,9 @@
"assign_permission": "Permission change", "assign_permission": "Permission change",
"change_department_name": "Department Editor", "change_department_name": "Department Editor",
"change_member_name": "Member name change", "change_member_name": "Member name change",
"confirm_delete_from_org": "Confirm to move {{username}} out of the department?",
"confirm_delete_from_team": "Confirm to move {{username}} out of the team?",
"confirm_delete_group": "Confirm to delete group?", "confirm_delete_group": "Confirm to delete group?",
"confirm_delete_member": "Confirm to delete member?",
"confirm_delete_org": "Confirm to delete organization?", "confirm_delete_org": "Confirm to delete organization?",
"confirm_forbidden": "Confirm forbidden", "confirm_forbidden": "Confirm forbidden",
"confirm_leave_team": "Confirmed to leave the team? \nAfter exiting, all your resources in the team are transferred to the team owner.", "confirm_leave_team": "Confirmed to leave the team? \nAfter exiting, all your resources in the team are transferred to the team owner.",
@@ -21,6 +22,8 @@
"create_sub_org": "Create sub-organization", "create_sub_org": "Create sub-organization",
"delete": "delete", "delete": "delete",
"delete_department": "Delete sub-department", "delete_department": "Delete sub-department",
"delete_from_org": "Move out of department",
"delete_from_team": "Move out of the team",
"delete_group": "Delete a group", "delete_group": "Delete a group",
"delete_org": "Delete organization", "delete_org": "Delete organization",
"edit_info": "Edit information", "edit_info": "Edit information",
@@ -28,6 +31,7 @@
"edit_member_tip": "Name", "edit_member_tip": "Name",
"edit_org_info": "Edit organization information", "edit_org_info": "Edit organization information",
"expires": "Expiration time", "expires": "Expiration time",
"export_members": "Export members",
"forbid_hint": "After forbidden, this invitation link will become invalid. This action is irreversible. Are you sure you want to deactivate?", "forbid_hint": "After forbidden, this invitation link will become invalid. This action is irreversible. Are you sure you want to deactivate?",
"forbid_success": "Forbid success", "forbid_success": "Forbid success",
"forbidden": "Forbidden", "forbidden": "Forbidden",
@@ -44,8 +48,10 @@
"invite_member": "Invite members", "invite_member": "Invite members",
"invited": "Invited", "invited": "Invited",
"join_team": "Join the team", "join_team": "Join the team",
"join_update_time": "Join/Update Time",
"kick_out_team": "Remove members", "kick_out_team": "Remove members",
"label_sync": "Tag sync", "label_sync": "Tag sync",
"leave": "Resigned",
"leave_team_failed": "Leaving the team exception", "leave_team_failed": "Leaving the team exception",
"log_assign_permission": "[{{name}}] Updated the permissions of [{{objectName}}]: [Application creation: [{{appCreate}}], Knowledge Base: [{{datasetCreate}}], API Key: [{{apiKeyCreate}}], Management: [{{manage}}]]", "log_assign_permission": "[{{name}}] Updated the permissions of [{{objectName}}]: [Application creation: [{{appCreate}}], Knowledge Base: [{{datasetCreate}}], API Key: [{{apiKeyCreate}}], Management: [{{manage}}]]",
"log_change_department": "【{{name}}】Updated department【{{departmentName}}】", "log_change_department": "【{{name}}】Updated department【{{departmentName}}】",
@@ -70,6 +76,7 @@
"member_group": "Belonging to member group", "member_group": "Belonging to member group",
"move_member": "Move member", "move_member": "Move member",
"move_org": "Move organization", "move_org": "Move organization",
"notification_recieve": "Team notification reception",
"operation_log": "log", "operation_log": "log",
"org": "organization", "org": "organization",
"org_description": "Organization description", "org_description": "Organization description",
@@ -77,20 +84,29 @@
"owner": "owner", "owner": "owner",
"permission": "Permissions", "permission": "Permissions",
"permission_apikeyCreate": "Create API Key", "permission_apikeyCreate": "Create API Key",
"permission_apikeyCreate_Tip": "Can create global APIKeys", "permission_apikeyCreate_Tip": "You can create global APIKey and MCP services",
"permission_appCreate": "Create Application", "permission_appCreate": "Create Application",
"permission_appCreate_tip": "Can create applications in the root directory (creation permissions in folders are controlled by the folder)", "permission_appCreate_tip": "Can create applications in the root directory (creation permissions in folders are controlled by the folder)",
"permission_datasetCreate": "Create Knowledge Base", "permission_datasetCreate": "Create Knowledge Base",
"permission_datasetCreate_Tip": "Can create knowledge bases in the root directory (creation permissions in folders are controlled by the folder)", "permission_datasetCreate_Tip": "Can create knowledge bases in the root directory (creation permissions in folders are controlled by the folder)",
"permission_manage": "Admin", "permission_manage": "Admin",
"permission_manage_tip": "Can manage members, create groups, manage all groups, and assign permissions to groups and members", "permission_manage_tip": "Can manage members, create groups, manage all groups, and assign permissions to groups and members",
"please_bind_contact": "Please bind the contact information",
"recover_team_member": "Member Recovery", "recover_team_member": "Member Recovery",
"relocate_department": "Department Mobile", "relocate_department": "Department Mobile",
"remark": "remark", "remark": "remark",
"remove_tip": "Confirm to remove {{username}} from the team?", "remove_tip": "Confirm to remove {{username}} from the team?",
"restore_tip": "Confirm to join the team {{username}}? \nOnly the availability and related permissions of this member account are restored, and the resources under the account cannot be restored.",
"restore_tip_title": "Recovery confirmation",
"retain_admin_permissions": "Keep administrator rights", "retain_admin_permissions": "Keep administrator rights",
"search_log": "Search log", "search_log": "Search log",
"search_member": "Search for members",
"search_member_group_name": "Search member/group name", "search_member_group_name": "Search member/group name",
"search_org": "Search Department",
"set_name_avatar": "Team avatar",
"sync_immediately": "Synchronize now",
"sync_member_failed": "Synchronization of members failed",
"sync_member_success": "Synchronize members successfully",
"total_team_members": "{{amount}} members in total", "total_team_members": "{{amount}} members in total",
"transfer_ownership": "transfer owner", "transfer_ownership": "transfer owner",
"unlimited": "Unlimited", "unlimited": "Unlimited",


@@ -1,13 +1,17 @@
{ {
"configured": "Configured", "configured": "Configured",
"error.no_permission": "Please contact the administrator to configure", "error.no_permission": "Please contact the administrator to configure",
"get_usage_failed": "Failed to get usage",
"laf_account": "laf account", "laf_account": "laf account",
"no_intro": "No explanation yet", "no_intro": "No explanation yet",
"not_configured": "Not configured", "not_configured": "Not configured",
"open_api_notice": "You can fill in the relevant key of OpenAI/OneAPI. \nIf you fill in this content, the online platform using [AI Dialogue], [Problem Classification] and [Content Extraction] will use the Key you filled in, and there will be no charge. \nPlease pay attention to whether your Key has permission to access the corresponding model. \nGPT models can choose FastAI.", "open_api_notice": "You can fill in the relevant key of OpenAI/OneAPI. \nIf you fill in this content, the online platform using [AI Dialogue], [Problem Classification] and [Content Extraction] will use the Key you filled in, and there will be no charge. \nPlease pay attention to whether your Key has permission to access the corresponding model. \nGPT models can choose FastAI.",
"openai_account_configuration": "OpenAI/OneAPI account", "openai_account_configuration": "OpenAI/OneAPI account",
"openai_account_setting_exception": "Setting up an exception to OpenAI account",
"request_address_notice": "Request address, default is openai official. \nThe forwarding address can be filled in, but \\\"v1\\\" is not automatically completed.", "request_address_notice": "Request address, default is openai official. \nThe forwarding address can be filled in, but \\\"v1\\\" is not automatically completed.",
"third_party_account": "Third-party account", "third_party_account": "Third-party account",
"third_party_account.configured": "Configured",
"third_party_account.not_configured": "Not configured",
"third_party_account_desc": "The administrator can configure third-party accounts or variables here, and the account will be used by all team members.", "third_party_account_desc": "The administrator can configure third-party accounts or variables here, and the account will be used by all team members.",
"unavailable": "Get usage exception", "unavailable": "Get usage exception",
"usage": "Usage", "usage": "Usage",


@@ -13,8 +13,10 @@
"embedding_index": "Embedding", "embedding_index": "Embedding",
"every_day": "Day", "every_day": "Day",
"every_month": "Moon", "every_month": "Moon",
"every_week": "weekly",
"export_confirm": "Export confirmation", "export_confirm": "Export confirmation",
"export_confirm_tip": "There are currently {{total}} usage records in total. Are you sure to export?", "export_confirm_tip": "There are currently {{total}} usage records in total. Are you sure to export?",
"export_success": "Export successfully",
"export_title": "Time,Members,Type,Project name,AI points", "export_title": "Time,Members,Type,Project name,AI points",
"feishu": "Feishu", "feishu": "Feishu",
"generation_time": "Generation time", "generation_time": "Generation time",


@@ -1,7 +1,11 @@
{ {
"Click_to_delete_this_field": "Click to delete this field", "Click_to_delete_this_field": "Click to delete this field",
"Filed_is_deprecated": "This field is deprecated", "Filed_is_deprecated": "This field is deprecated",
"MCP_tools_debug": "debug",
"MCP_tools_detail": "check the details",
"MCP_tools_list": "Tool list",
"MCP_tools_list_is_empty": "MCP tool not resolved", "MCP_tools_list_is_empty": "MCP tool not resolved",
"MCP_tools_list_with_number": "Tool list: {{total}}",
"MCP_tools_parse_failed": "Failed to parse MCP address", "MCP_tools_parse_failed": "Failed to parse MCP address",
"MCP_tools_url": "MCP Address", "MCP_tools_url": "MCP Address",
"MCP_tools_url_is_empty": "The MCP address cannot be empty", "MCP_tools_url_is_empty": "The MCP address cannot be empty",
@@ -18,7 +22,6 @@
"app.modules.click to update": "Click to Refresh", "app.modules.click to update": "Click to Refresh",
"app.modules.has new version": "New Version Available", "app.modules.has new version": "New Version Available",
"app.modules.not_found": "Not Found", "app.modules.not_found": "Not Found",
"app.modules.not_found_tips": "This component cannot be found in the system, please delete it, otherwise the process will not run normally",
"app.version_current": "Current Version", "app.version_current": "Current Version",
"app.version_initial": "Initial Version", "app.version_initial": "Initial Version",
"app.version_name_tips": "Version name cannot be empty", "app.version_name_tips": "Version name cannot be empty",
@@ -131,6 +134,7 @@
"response_format": "Response format", "response_format": "Response format",
"saved_success": "Saved successfully! \nTo use this version externally, click Save and Publish", "saved_success": "Saved successfully! \nTo use this version externally, click Save and Publish",
"search_app": "Search apps", "search_app": "Search apps",
"search_tool": "Search Tools",
"setting_app": "Workflow", "setting_app": "Workflow",
"setting_plugin": "Workflow", "setting_plugin": "Workflow",
"show_top_p_tip": "An alternative method of temperature sampling, called Nucleus sampling, the model considers the results of tokens with TOP_P probability mass quality. \nTherefore, 0.1 means that only tokens containing the highest probability quality are considered. \nThe default is 1.", "show_top_p_tip": "An alternative method of temperature sampling, called Nucleus sampling, the model considers the results of tokens with TOP_P probability mass quality. \nTherefore, 0.1 means that only tokens containing the highest probability quality are considered. \nThe default is 1.",
@@ -166,6 +170,7 @@
"template_market_description": "Explore more features in the template market, with configuration tutorials and usage guides to help you understand and get started with various applications.", "template_market_description": "Explore more features in the template market, with configuration tutorials and usage guides to help you understand and get started with various applications.",
"template_market_empty_data": "No suitable templates found", "template_market_empty_data": "No suitable templates found",
"time_zone": "Time Zone", "time_zone": "Time Zone",
"tool_detail": "Tool details",
"tool_input_param_tip": "This plugin requires configuration of related information to run properly.", "tool_input_param_tip": "This plugin requires configuration of related information to run properly.",
"tools_no_description": "This tool has not been introduced ~", "tools_no_description": "This tool has not been introduced ~",
"transition_to_workflow": "Convert to Workflow", "transition_to_workflow": "Convert to Workflow",
@@ -176,6 +181,7 @@
"tts_close": "Close", "tts_close": "Close",
"type.All": "All", "type.All": "All",
"type.Create http plugin tip": "Batch create plugins through OpenAPI Schema, compatible with GPTs format.", "type.Create http plugin tip": "Batch create plugins through OpenAPI Schema, compatible with GPTs format.",
"type.Create mcp tools tip": "Automatically parse and batch create callable MCP tools by entering the MCP address",
"type.Create one plugin tip": "Customizable input and output workflows, usually used to encapsulate reusable workflows.", "type.Create one plugin tip": "Customizable input and output workflows, usually used to encapsulate reusable workflows.",
"type.Create plugin bot": "Create Plugin", "type.Create plugin bot": "Create Plugin",
"type.Create simple bot": "Create Simple App", "type.Create simple bot": "Create Simple App",
@@ -187,12 +193,15 @@
"type.Import from json tip": "Create applications directly through JSON configuration files", "type.Import from json tip": "Create applications directly through JSON configuration files",
"type.Import from json_error": "Failed to get workflow data, please check the URL or manually paste the JSON data", "type.Import from json_error": "Failed to get workflow data, please check the URL or manually paste the JSON data",
"type.Import from json_loading": "Workflow data is being retrieved, please wait...", "type.Import from json_loading": "Workflow data is being retrieved, please wait...",
"type.MCP tools": "MCP Toolset",
"type.MCP_tools_url": "MCP Address",
"type.Plugin": "Plugin", "type.Plugin": "Plugin",
"type.Simple bot": "Simple App", "type.Simple bot": "Simple App",
"type.Workflow bot": "Workflow", "type.Workflow bot": "Workflow",
"type.error.Workflow data is empty": "No workflow data was obtained", "type.error.Workflow data is empty": "No workflow data was obtained",
"type.error.workflowresponseempty": "Response content is empty", "type.error.workflowresponseempty": "Response content is empty",
"type_not_recognized": "App type not recognized", "type_not_recognized": "App type not recognized",
"un_auth": "No permission",
"upload_file_max_amount": "Maximum File Quantity", "upload_file_max_amount": "Maximum File Quantity",
"upload_file_max_amount_tip": "Maximum number of files uploaded in a single round of conversation", "upload_file_max_amount_tip": "Maximum number of files uploaded in a single round of conversation",
"variable.select type_desc": "You can define a global variable that does not need to be filled in by the user.\n\nThe value of this variable can come from the API interface, the Query of the shared link, or assigned through the [Variable Update] module.", "variable.select type_desc": "You can define a global variable that does not need to be filled in by the user.\n\nThe value of this variable can come from the API interface, the Query of the shared link, or assigned through the [Variable Update] module.",

View File

@@ -35,6 +35,7 @@
"delete_all_input_guide_confirm": "Are you sure you want to clear the input guide lexicon?", "delete_all_input_guide_confirm": "Are you sure you want to clear the input guide lexicon?",
"download_chunks": "Download data", "download_chunks": "Download data",
"empty_directory": "This directory is empty~", "empty_directory": "This directory is empty~",
"error_message": "error message",
"file_amount_over": "Exceeded maximum file quantity {{max}}", "file_amount_over": "Exceeded maximum file quantity {{max}}",
"file_input": "File input", "file_input": "File input",
"file_input_tip": "You can obtain the link to the corresponding file through the \"File Link\" of the [Plug-in Start] node", "file_input_tip": "You can obtain the link to the corresponding file through the \"File Link\" of the [Plug-in Start] node",

View File
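Several of the strings above carry `{{variable}}` placeholders (for example `file_amount_over` uses `{{max}}`, and later keys use `{{amount}}` and `{{fileId}}`). These are filled at runtime by i18next-style interpolation; the helper below is a minimal illustrative sketch of that substitution, not FastGPT's actual implementation.

```typescript
// Sketch of {{name}} placeholder substitution (i18next-style syntax).
// A placeholder with no matching variable is left untouched.
function interpolate(
  template: string,
  vars: Record<string, string | number>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );
}

const fileAmountOver = "Exceeded maximum file quantity {{max}}";
console.log(interpolate(fileAmountOver, { max: 10 }));
// → "Exceeded maximum file quantity 10"
```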

@@ -186,7 +186,6 @@
"commercial_function_tip": "Please Upgrade to the Commercial Version to Use This Feature: https://doc.fastgpt.cn/docs/commercial/intro/", "commercial_function_tip": "Please Upgrade to the Commercial Version to Use This Feature: https://doc.fastgpt.cn/docs/commercial/intro/",
"comon.Continue_Adding": "Continue Adding", "comon.Continue_Adding": "Continue Adding",
"compliance.chat": "The content is generated by third-party AI and cannot be guaranteed to be true and accurate. It is for reference only.", "compliance.chat": "The content is generated by third-party AI and cannot be guaranteed to be true and accurate. It is for reference only.",
"compliance.compliance.dataset": "Please ensure that your content strictly complies with relevant laws and regulations and avoid containing any illegal or infringing content. \nPlease be careful when uploading materials that may contain sensitive information.",
"compliance.dataset": "Please ensure that your content strictly complies with relevant laws and regulations and avoid containing any illegal or infringing content. \nPlease be careful when uploading materials that may contain sensitive information.", "compliance.dataset": "Please ensure that your content strictly complies with relevant laws and regulations and avoid containing any illegal or infringing content. \nPlease be careful when uploading materials that may contain sensitive information.",
"confirm_choice": "Confirm Choice", "confirm_choice": "Confirm Choice",
"confirm_move": "Move Here", "confirm_move": "Move Here",
@@ -431,7 +430,6 @@
"core.dataset.Read Dataset": "View Dataset Details", "core.dataset.Read Dataset": "View Dataset Details",
"core.dataset.Set Website Config": "Start Configuring", "core.dataset.Set Website Config": "Start Configuring",
"core.dataset.Start export": "Export Started", "core.dataset.Start export": "Export Started",
"core.dataset.Table collection": "Table Dataset",
"core.dataset.Text collection": "Text Dataset", "core.dataset.Text collection": "Text Dataset",
"core.dataset.apiFile": "API File", "core.dataset.apiFile": "API File",
"core.dataset.collection.Click top config website": "Click to Configure Website", "core.dataset.collection.Click top config website": "Click to Configure Website",
@@ -476,6 +474,7 @@
"core.dataset.error.unAuthDatasetData": "Unauthorized to Operate This Data", "core.dataset.error.unAuthDatasetData": "Unauthorized to Operate This Data",
"core.dataset.error.unAuthDatasetFile": "Unauthorized to Operate This File", "core.dataset.error.unAuthDatasetFile": "Unauthorized to Operate This File",
"core.dataset.error.unCreateCollection": "Unauthorized to Operate This Data", "core.dataset.error.unCreateCollection": "Unauthorized to Operate This Data",
"core.dataset.error.unExistDataset": "The knowledge base does not exist",
"core.dataset.error.unLinkCollection": "Not a Web Link Collection", "core.dataset.error.unLinkCollection": "Not a Web Link Collection",
"core.dataset.externalFile": "External File Library", "core.dataset.externalFile": "External File Library",
"core.dataset.file": "File", "core.dataset.file": "File",
@@ -529,7 +528,6 @@
"core.dataset.search.mode.fullTextRecall desc": "Use traditional full-text search, suitable for finding some keywords and subject-predicate special data", "core.dataset.search.mode.fullTextRecall desc": "Use traditional full-text search, suitable for finding some keywords and subject-predicate special data",
"core.dataset.search.mode.mixedRecall": "Mixed Search", "core.dataset.search.mode.mixedRecall": "Mixed Search",
"core.dataset.search.mode.mixedRecall desc": "Use a combination of vector search and full-text search results, sorted using the RRF algorithm.", "core.dataset.search.mode.mixedRecall desc": "Use a combination of vector search and full-text search results, sorted using the RRF algorithm.",
"core.dataset.search.score.embedding": "Semantic Search",
"core.dataset.search.score.embedding desc": "Get scores by calculating the distance between vectors, ranging from 0 to 1.", "core.dataset.search.score.embedding desc": "Get scores by calculating the distance between vectors, ranging from 0 to 1.",
"core.dataset.search.score.fullText": "Full Text Search", "core.dataset.search.score.fullText": "Full Text Search",
"core.dataset.search.score.fullText desc": "Calculate the score of the same keywords, ranging from 0 to infinity.", "core.dataset.search.score.fullText desc": "Calculate the score of the same keywords, ranging from 0 to infinity.",
@@ -751,9 +749,9 @@
"custom_title": "Custom Title", "custom_title": "Custom Title",
"data_index_custom": "Custom index", "data_index_custom": "Custom index",
"data_index_default": "Default index", "data_index_default": "Default index",
"data_index_image": "Image Index",
"data_index_question": "Inferred question index", "data_index_question": "Inferred question index",
"data_index_summary": "Summary Index", "data_index_summary": "Summary Index",
"data_not_found": "Data can't be found",
"dataset.Confirm move the folder": "Confirm to Move to This Directory", "dataset.Confirm move the folder": "Confirm to Move to This Directory",
"dataset.Confirm to delete the data": "Confirm to Delete This Data?", "dataset.Confirm to delete the data": "Confirm to Delete This Data?",
"dataset.Confirm to delete the file": "Confirm to Delete This File and All Its Data?", "dataset.Confirm to delete the file": "Confirm to Delete This File and All Its Data?",
@@ -922,6 +920,7 @@
"not_open": "Not Open", "not_open": "Not Open",
"not_permission": "The current subscription package does not support team operation logs", "not_permission": "The current subscription package does not support team operation logs",
"not_support": "Not Supported", "not_support": "Not Supported",
"not_support_wechat_image": "This is a WeChat picture",
"not_yet_introduced": "No Introduction Yet", "not_yet_introduced": "No Introduction Yet",
"open_folder": "Open Folder", "open_folder": "Open Folder",
"option": "Option", "option": "Option",
@@ -939,6 +938,7 @@
"pay_corporate_payment": "Payment to the public", "pay_corporate_payment": "Payment to the public",
"pay_money": "Amount payable", "pay_money": "Amount payable",
"pay_success": "Payment successfully", "pay_success": "Payment successfully",
"pay_year_tip": "Pay 10 months, enjoy 1 year!",
"permission.Collaborator": "Collaborator", "permission.Collaborator": "Collaborator",
"permission.Default permission": "Default Permission", "permission.Default permission": "Default Permission",
"permission.Manage": "Manage", "permission.Manage": "Manage",
@@ -1133,7 +1133,7 @@
"support.wallet.subscription.AI points usage tip": "Each time the AI model is called, a certain amount of AI points will be consumed. For specific calculation standards, please refer to the 'Pricing' above.", "support.wallet.subscription.AI points usage tip": "Each time the AI model is called, a certain amount of AI points will be consumed. For specific calculation standards, please refer to the 'Pricing' above.",
"support.wallet.subscription.Ai points": "AI Points Calculation Standards", "support.wallet.subscription.Ai points": "AI Points Calculation Standards",
"support.wallet.subscription.Current plan": "Current Package", "support.wallet.subscription.Current plan": "Current Package",
"support.wallet.subscription.Extra ai points": "Extra AI Points", "support.wallet.subscription.Extra ai points": "AI points",
"support.wallet.subscription.Extra dataset size": "Extra Dataset Capacity", "support.wallet.subscription.Extra dataset size": "Extra Dataset Capacity",
"support.wallet.subscription.Extra plan": "Extra Resource Pack", "support.wallet.subscription.Extra plan": "Extra Resource Pack",
"support.wallet.subscription.Extra plan tip": "When the standard package is not enough, you can purchase extra resource packs to continue using", "support.wallet.subscription.Extra plan tip": "When the standard package is not enough, you can purchase extra resource packs to continue using",
@@ -1142,11 +1142,11 @@
"support.wallet.subscription.Next plan": "Future Package", "support.wallet.subscription.Next plan": "Future Package",
"support.wallet.subscription.Stand plan level": "Subscription Package", "support.wallet.subscription.Stand plan level": "Subscription Package",
"support.wallet.subscription.Sub plan": "Subscription Package", "support.wallet.subscription.Sub plan": "Subscription Package",
"support.wallet.subscription.Sub plan tip": "Free to use {{title}} or upgrade to a higher package", "support.wallet.subscription.Sub plan tip": "Free to use [{{title}}] or upgrade to a higher package",
"support.wallet.subscription.Team plan and usage": "Package and Usage", "support.wallet.subscription.Team plan and usage": "Package and Usage",
"support.wallet.subscription.Training weight": "Training Priority: {{weight}}", "support.wallet.subscription.Training weight": "Training Priority: {{weight}}",
"support.wallet.subscription.Update extra ai points": "Extra AI Points", "support.wallet.subscription.Update extra ai points": "Extra AI Points",
"support.wallet.subscription.Update extra dataset size": "Extra Storage", "support.wallet.subscription.Update extra dataset size": "Storage",
"support.wallet.subscription.Upgrade plan": "Upgrade Package", "support.wallet.subscription.Upgrade plan": "Upgrade Package",
"support.wallet.subscription.ai_model": "AI Language Model", "support.wallet.subscription.ai_model": "AI Language Model",
"support.wallet.subscription.function.History store": "{{amount}} Days of Chat History Retention", "support.wallet.subscription.function.History store": "{{amount}} Days of Chat History Retention",
@@ -1155,9 +1155,9 @@
"support.wallet.subscription.function.Max dataset size": "{{amount}} Dataset Indexes", "support.wallet.subscription.function.Max dataset size": "{{amount}} Dataset Indexes",
"support.wallet.subscription.function.Max members": "{{amount}} Team Members", "support.wallet.subscription.function.Max members": "{{amount}} Team Members",
"support.wallet.subscription.function.Points": "{{amount}} AI Points", "support.wallet.subscription.function.Points": "{{amount}} AI Points",
"support.wallet.subscription.mode.Month": "Monthly", "support.wallet.subscription.mode.Month": "Month",
"support.wallet.subscription.mode.Period": "Subscription Period", "support.wallet.subscription.mode.Period": "Subscription Period",
"support.wallet.subscription.mode.Year": "Yearly", "support.wallet.subscription.mode.Year": "Year",
"support.wallet.subscription.mode.Year sale": "Two Months Free", "support.wallet.subscription.mode.Year sale": "Two Months Free",
"support.wallet.subscription.point": "Points", "support.wallet.subscription.point": "Points",
"support.wallet.subscription.standardSubLevel.custom": "Custom", "support.wallet.subscription.standardSubLevel.custom": "Custom",
@@ -1166,7 +1166,7 @@
"support.wallet.subscription.standardSubLevel.experience": "Experience", "support.wallet.subscription.standardSubLevel.experience": "Experience",
"support.wallet.subscription.standardSubLevel.experience_desc": "Unlock the full functionality of FastGPT", "support.wallet.subscription.standardSubLevel.experience_desc": "Unlock the full functionality of FastGPT",
"support.wallet.subscription.standardSubLevel.free": "Free", "support.wallet.subscription.standardSubLevel.free": "Free",
"support.wallet.subscription.standardSubLevel.free desc": "Basic functions can be used for free every month. If the system is not logged in for 30 consecutive days, the Dataset will be automatically cleared.", "support.wallet.subscription.standardSubLevel.free desc": "Free trial of core features. \nIf you haven't logged in for 30 days, the knowledge base will be cleared.",
"support.wallet.subscription.standardSubLevel.team": "Team", "support.wallet.subscription.standardSubLevel.team": "Team",
"support.wallet.subscription.standardSubLevel.team_desc": "Suitable for small teams to build Dataset applications and provide external services", "support.wallet.subscription.standardSubLevel.team_desc": "Suitable for small teams to build Dataset applications and provide external services",
"support.wallet.subscription.status.active": "Active", "support.wallet.subscription.status.active": "Active",
@@ -1268,8 +1268,6 @@
"user.reset_password_tip": "The initial password is not set/the password has not been modified for a long time, please reset the password", "user.reset_password_tip": "The initial password is not set/the password has not been modified for a long time, please reset the password",
"user.team.Balance": "Team Balance", "user.team.Balance": "Team Balance",
"user.team.Check Team": "Switch", "user.team.Check Team": "Switch",
"user.team.Confirm Invite": "Confirm Invite",
"user.team.Create Team": "Create New Team",
"user.team.Leave Team": "Leave Team", "user.team.Leave Team": "Leave Team",
"user.team.Leave Team Failed": "Failed to Leave Team", "user.team.Leave Team Failed": "Failed to Leave Team",
"user.team.Member": "Member", "user.team.Member": "Member",
@@ -1280,13 +1278,10 @@
"user.team.Processing invitations Tips": "You have {{amount}} team invitations to process", "user.team.Processing invitations Tips": "You have {{amount}} team invitations to process",
"user.team.Remove Member Confirm Tip": "Confirm to remove {{username}} from the team?", "user.team.Remove Member Confirm Tip": "Confirm to remove {{username}} from the team?",
"user.team.Select Team": "Select Team", "user.team.Select Team": "Select Team",
"user.team.Set Name": "Name the Team",
"user.team.Switch Team Failed": "Failed to Switch Team", "user.team.Switch Team Failed": "Failed to Switch Team",
"user.team.Tags Async": "Save", "user.team.Tags Async": "Save",
"user.team.Team Name": "Team Name",
"user.team.Team Tags Async": "Tag Sync", "user.team.Team Tags Async": "Tag Sync",
"user.team.Team Tags Async Success": "Link Error Successful, Tag Information Updated", "user.team.Team Tags Async Success": "Link Error Successful, Tag Information Updated",
"user.team.Update Team": "Update Team Information",
"user.team.invite.Accepted": "Joined Team", "user.team.invite.Accepted": "Joined Team",
"user.team.invite.Deal Width Footer Tip": "It will automatically close after processing", "user.team.invite.Deal Width Footer Tip": "It will automatically close after processing",
"user.team.invite.Reject": "Invitation Rejected", "user.team.invite.Reject": "Invitation Rejected",

View File

@@ -7,22 +7,35 @@
"auto_indexes": "Automatically generate supplementary indexes", "auto_indexes": "Automatically generate supplementary indexes",
"auto_indexes_tips": "Additional index generation is performed through large models to improve semantic richness and improve retrieval accuracy.", "auto_indexes_tips": "Additional index generation is performed through large models to improve semantic richness and improve retrieval accuracy.",
"auto_training_queue": "Enhanced index queueing", "auto_training_queue": "Enhanced index queueing",
"backup_collection": "Backup data",
"backup_data_parse": "Backup data is being parsed",
"backup_data_uploading": "Backup data is being uploaded: {{num}}%",
"backup_dataset": "Backup import",
"backup_dataset_success": "The backup was created successfully",
"backup_dataset_tip": "You can reimport the downloaded csv file when exporting the knowledge base.",
"backup_mode": "Backup import",
"chunk_max_tokens": "max_tokens", "chunk_max_tokens": "max_tokens",
"chunk_process_params": "Block processing parameters",
"chunk_size": "Block size", "chunk_size": "Block size",
"chunk_trigger": "Blocking conditions",
"chunk_trigger_force_chunk": "Forced chunking",
"chunk_trigger_max_size": "The original text length is less than the maximum context 70% of the file processing model",
"chunk_trigger_min_size": "The original text is greater than",
"chunk_trigger_tips": "Block storage is triggered when certain conditions are met, otherwise the original text will be stored in full directly",
"close_auto_sync": "Are you sure you want to turn off automatic sync?", "close_auto_sync": "Are you sure you want to turn off automatic sync?",
"collection.Create update time": "Creation/Update Time", "collection.Create update time": "Creation/Update Time",
"collection.Training type": "Training", "collection.Training type": "Training",
"collection.training_type": "Chunk type", "collection.training_type": "Chunk type",
"collection_data_count": "Data amount", "collection_data_count": "Data amount",
"collection_metadata_custom_pdf_parse": "PDF enhancement analysis", "collection_metadata_custom_pdf_parse": "PDF enhancement analysis",
"collection_metadata_image_parse": "Image tagging",
"collection_not_support_retraining": "This collection type does not support retuning parameters", "collection_not_support_retraining": "This collection type does not support retuning parameters",
"collection_not_support_sync": "This collection does not support synchronization", "collection_not_support_sync": "This collection does not support synchronization",
"collection_sync": "Sync data", "collection_sync": "Sync data",
"collection_sync_confirm_tip": "Confirm to start synchronizing data? \nThe system will pull the latest data for comparison. If the contents are different, a new collection will be created and the old collection will be deleted. Please confirm!", "collection_sync_confirm_tip": "Confirm to start synchronizing data? \nThe system will pull the latest data for comparison. If the contents are different, a new collection will be created and the old collection will be deleted. Please confirm!",
"collection_tags": "Collection Tags", "collection_tags": "Collection Tags",
"common_dataset": "General Dataset", "common_dataset": "General Dataset",
"common_dataset_desc": "Build a Dataset by importing files, web links, or manual input.", "common_dataset_desc": "Building a knowledge base by importing files, web page links, or manual entry",
"condition": "condition",
"config_sync_schedule": "Configure scheduled synchronization", "config_sync_schedule": "Configure scheduled synchronization",
"confirm_to_rebuild_embedding_tip": "Are you sure you want to switch the index for the Dataset?\nSwitching the index is a significant operation that requires re-indexing all data in your Dataset, which may take a long time. Please ensure your account has sufficient remaining points.\n\nAdditionally, you need to update the applications that use this Dataset to avoid conflicts with other indexed model Datasets.", "confirm_to_rebuild_embedding_tip": "Are you sure you want to switch the index for the Dataset?\nSwitching the index is a significant operation that requires re-indexing all data in your Dataset, which may take a long time. Please ensure your account has sufficient remaining points.\n\nAdditionally, you need to update the applications that use this Dataset to avoid conflicts with other indexed model Datasets.",
"core.dataset.import.Adjust parameters": "Adjust parameters", "core.dataset.import.Adjust parameters": "Adjust parameters",
@@ -31,6 +44,7 @@
"custom_split_sign_tip": "Allows you to chunk according to custom delimiters. \nUsually used for processed data, using specific separators for precise chunking. \nYou can use the | symbol to represent multiple splitters, such as: \".|.\" to represent a period in Chinese and English.\n\nTry to avoid using special symbols related to regular, such as: * () [] {}, etc.", "custom_split_sign_tip": "Allows you to chunk according to custom delimiters. \nUsually used for processed data, using specific separators for precise chunking. \nYou can use the | symbol to represent multiple splitters, such as: \".|.\" to represent a period in Chinese and English.\n\nTry to avoid using special symbols related to regular, such as: * () [] {}, etc.",
"data_amount": "{{dataAmount}} Datas, {{indexAmount}} Indexes", "data_amount": "{{dataAmount}} Datas, {{indexAmount}} Indexes",
"data_error_amount": "{{errorAmount}} Group training exception", "data_error_amount": "{{errorAmount}} Group training exception",
"data_index_image": "Image index",
"data_index_num": "Index {{index}}", "data_index_num": "Index {{index}}",
"data_process_params": "Params", "data_process_params": "Params",
"data_process_setting": "Processing config", "data_process_setting": "Processing config",
@@ -57,8 +71,9 @@
"enhanced_indexes": "Index enhancement", "enhanced_indexes": "Index enhancement",
"error.collectionNotFound": "Collection not found~", "error.collectionNotFound": "Collection not found~",
"external_file": "External File Library", "external_file": "External File Library",
"external_file_dataset_desc": "Import files from an external file library to build a Dataset. The files will not be stored again.", "external_file_dataset_desc": "You can use external file library to build a knowledge library through the API",
"external_id": "File Reading ID", "external_id": "File Reading ID",
"external_other_dataset_desc": "Customize API, Feishu, Yuque and other external documents as knowledge bases",
"external_read_url": "External Preview URL", "external_read_url": "External Preview URL",
"external_read_url_tip": "Configure the reading URL of your file library for user authentication. Use the {{fileId}} variable to refer to the external file ID.", "external_read_url_tip": "Configure the reading URL of your file library for user authentication. Use the {{fileId}} variable to refer to the external file ID.",
"external_url": "File Access URL", "external_url": "File Access URL",
@@ -92,14 +107,18 @@
"is_open_schedule": "Enable scheduled synchronization", "is_open_schedule": "Enable scheduled synchronization",
"keep_image": "Keep the picture", "keep_image": "Keep the picture",
"loading": "Loading...", "loading": "Loading...",
"max_chunk_size": "Maximum chunk size",
"move.hint": "After moving, the selected knowledge base/folder will inherit the permission settings of the new folder, and the original permission settings will become invalid.", "move.hint": "After moving, the selected knowledge base/folder will inherit the permission settings of the new folder, and the original permission settings will become invalid.",
"noChildren": "No subdirectories", "noChildren": "No subdirectories",
"noSelectedFolder": "No selected folder", "noSelectedFolder": "No selected folder",
"noSelectedId": "No selected ID", "noSelectedId": "No selected ID",
"noValidId": "No valid ID", "noValidId": "No valid ID",
"open_auto_sync": "After scheduled synchronization is turned on, the system will try to synchronize the collection from time to time every day. During the collection synchronization period, the collection data will not be searched.", "open_auto_sync": "After scheduled synchronization is turned on, the system will try to synchronize the collection from time to time every day. During the collection synchronization period, the collection data will not be searched.",
"other_dataset": "Third-party knowledge base",
"paragraph_max_deep": "Maximum paragraph depth",
"paragraph_split": "Partition by paragraph",
"paragraph_split_tip": "Priority is given to chunking according to the Makdown title paragraph. If the chunking is too long, then chunking is done according to the length.",
"params_config": "Config", "params_config": "Config",
"params_setting": "Parameter settings",
"pdf_enhance_parse": "PDF enhancement analysis", "pdf_enhance_parse": "PDF enhancement analysis",
"pdf_enhance_parse_price": "{{price}} points/page", "pdf_enhance_parse_price": "{{price}} points/page",
"pdf_enhance_parse_tips": "Calling PDF recognition model for parsing, you can convert it into Markdown and retain pictures in the document. At the same time, you can also identify scanned documents, which will take a long time to identify them.", "pdf_enhance_parse_tips": "Calling PDF recognition model for parsing, you can convert it into Markdown and retain pictures in the document. At the same time, you can also identify scanned documents, which will take a long time to identify them.",
@@ -115,6 +134,7 @@
"process.Get QA": "Q&A extraction", "process.Get QA": "Q&A extraction",
"process.Image_Index": "Image index generation", "process.Image_Index": "Image index generation",
"process.Is_Ready": "Ready", "process.Is_Ready": "Ready",
"process.Is_Ready_Count": "{{count}} Group is ready",
"process.Parsing": "Parsing", "process.Parsing": "Parsing",
"process.Vectorizing": "Index vectorization", "process.Vectorizing": "Index vectorization",
"process.Waiting": "Queue", "process.Waiting": "Queue",
@@ -143,6 +163,7 @@
"sync_collection_failed": "Synchronization collection error, please check whether the source file can be accessed normally", "sync_collection_failed": "Synchronization collection error, please check whether the source file can be accessed normally",
"sync_schedule": "Timing synchronization", "sync_schedule": "Timing synchronization",
"sync_schedule_tip": "Only existing collections will be synchronized. \nIncludes linked collections and all collections in the API knowledge base. \nThe system will poll for updates every day, and the specific update time cannot be determined.", "sync_schedule_tip": "Only existing collections will be synchronized. \nIncludes linked collections and all collections in the API knowledge base. \nThe system will poll for updates every day, and the specific update time cannot be determined.",
"table_model_tip": "Store each row of data as a chunk",
"tag.Add_new_tag": "add_new Tag", "tag.Add_new_tag": "add_new Tag",
"tag.Edit_tag": "Edit Tag", "tag.Edit_tag": "Edit Tag",
"tag.add": "Create", "tag.add": "Create",
@@ -162,7 +183,7 @@
"vector_model_max_tokens_tip": "Each chunk of data has a maximum length of 3000 tokens", "vector_model_max_tokens_tip": "Each chunk of data has a maximum length of 3000 tokens",
"vllm_model": "Image understanding model", "vllm_model": "Image understanding model",
"website_dataset": "Website Sync", "website_dataset": "Website Sync",
"website_dataset_desc": "Website sync allows you to build a Dataset directly using a web link.", "website_dataset_desc": "Build knowledge base by crawling web page data in batches",
"website_info": "Website Information", "website_info": "Website Information",
"yuque_dataset": "Yuque Dataset", "yuque_dataset": "Yuque Dataset",
"yuque_dataset_config": "Yuque Dataset Config", "yuque_dataset_config": "Yuque Dataset Config",

View File

@@ -16,5 +16,6 @@
"register": "Register", "register": "Register",
"root_password_placeholder": "The root user password is the value of the environment variable DEFAULT_ROOT_PSW", "root_password_placeholder": "The root user password is the value of the environment variable DEFAULT_ROOT_PSW",
"terms": "Terms", "terms": "Terms",
"use_root_login": "Log in as root user" "use_root_login": "Log in as root user",
"wecom": "Enterprise WeChat"
}

View File
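Commits like these add and remove keys across parallel locale files, so the en/zh key sets can silently drift apart. A hedged sketch of the kind of CI-style consistency check that catches this (the sample objects are illustrative fragments, not the real locale files):

```typescript
// Report keys present in one locale object but missing from another.
// JSON.parse on a flat locale namespace yields such a Record<string, string>.
function missingKeys(
  base: Record<string, string>,
  other: Record<string, string>
): string[] {
  return Object.keys(base).filter((key) => !(key in other));
}

// Illustrative fragments, not the actual en/zh files:
const en = { "type.MCP tools": "MCP Toolset", "un_auth": "No permission" };
const zh = { "type.MCP tools": "MCP 工具集" };
console.log(missingKeys(en, zh)); // keys still needing translation in zh
```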

@@ -1,8 +1,10 @@
{
"Array_element": "Array element",
"Array_element_index": "Index",
"Click": "Click ",
"Code": "Code",
"Confirm_sync_node": "It will be updated to the latest node configuration and fields that do not exist in the template will be deleted (including all custom fields).\n\nIf the fields are complex, it is recommended that you copy a node first and then update the original node to facilitate parameter copying.",
"Drag": "Drag ",
"Node.Open_Node_Course": "Open node course",
"Node_variables": "Node variables",
"Quote_prompt_setting": "Quote prompt",
@@ -175,6 +177,9 @@
"text_content_extraction": "Text Extract", "text_content_extraction": "Text Extract",
"text_to_extract": "Text to Extract", "text_to_extract": "Text to Extract",
"these_variables_will_be_input_parameters_for_code_execution": "These variables will be input parameters for code execution", "these_variables_will_be_input_parameters_for_code_execution": "These variables will be input parameters for code execution",
"tool.tool_result": "Tool operation results",
"to_add_node": "to add",
"to_connect_node": "to connect",
"tool_call_termination": "Stop ToolCall", "tool_call_termination": "Stop ToolCall",
"tool_custom_field": "Custom Tool", "tool_custom_field": "Custom Tool",
"tool_field": " Tool Field Parameter Configuration", "tool_field": " Tool Field Parameter Configuration",

Some files were not shown because too many files have changed in this diff