Compare commits


47 Commits

Author SHA1 Message Date
archer
4b8dfeef12 perf: i18n 2025-06-03 23:01:58 +08:00
archer
98b00ae86d perf: multiple menu 2025-06-03 22:53:42 +08:00
dreamer6680
c1f8d5b032 add secondary.tsx (#4946)
* add secondary.tsx

* fix

---------

Co-authored-by: dreamer6680 <146868355@qq.com>
2025-06-03 22:05:07 +08:00
archer
4adb8b7e6f perf: log 2025-06-03 22:03:12 +08:00
archer
e32ca8a3e9 perf: api dataset code 2025-06-03 21:30:54 +08:00
dreamer6680
2507997d20 Thirddatasetmd (#4942)
* add thirddataset.md

* fix thirddataset.md

* fix

* delete wrong png

---------

Co-authored-by: dreamer6680 <146868355@qq.com>
2025-06-03 21:30:54 +08:00
Archer
86f5a68d8c fix: ts (#4948) 2025-06-03 21:30:54 +08:00
Archer
92c38d9d2f Feat: Images dataset collection (#4941)
* New pic (#4858)

* Update dataset-related types, adding image file ID and preview URL support; improve the dataset import feature with a new image-dataset handling component; fix some i18n text; update the file-upload logic to support the new features.

* Differences from the original code

* Add V4.9.10 release notes: support the PG `systemEnv.hnswMaxScanTuples` setting, raise the LLM stream call timeout, and fix result ordering in multi-dataset full-text search. Also update the dataset index, removing the datasetId field to simplify queries.

* Switch to the fileId_image logic and add training-queue matching logic

* Add image-collection detection and streamline preview URL generation so preview URLs are only generated when the dataset is an image collection, with extra debug logging.

* Refactor Docker Compose configuration to comment out exposed ports for production environments, update image versions for pgvector, fastgpt, and mcp_server, and enhance Redis service with a health check. Additionally, standardize dataset collection labels in constants and improve internationalization strings across multiple languages.

* Enhance TrainingStates component by adding internationalization support for the imageParse training mode and update defaultCounts to include imageParse mode in trainingDetail API.

* Enhance dataset import context by adding additional steps for image dataset import process and improve internationalization strings for modal buttons in the useEditTitle hook.

* Update DatasetImportContext to conditionally render MyStep component based on data source type, improving the import process for non-image datasets.

* Refactor image dataset handling by improving internationalization strings, enhancing error messages, and streamlining the preview URL generation process.

* Upload images to the new dataset_collection_images table, updating the related logic accordingly

* Fix remaining issues outside the controller

* Consolidate the image-dataset logic into the controller

* Add missing i18n

* Add missing i18n

* Resolve review comments: mainly upload-logic changes and component reuse

* Show an icon next to image names

* Fix naming issues that caused build errors

* Remove the unneeded collectionid parts

* Clean up redundant files and adjust a delete button

* Resolve everything except loading and the unified imageId

* Fix icon errors

* Reuse MyPhotoView and rename imageFileId to imageId via a global replace

* Revert unnecessary file changes

* Fix errors and adjust fields

* Add logic to delete temporary files after a successful upload and roll back some changes

* Remove the path field, store images in GridFS, and update the create/delete code accordingly

* Fix build errors

---------

Co-authored-by: archer <545436317@qq.com>

* perf: image dataset

* feat: insert image

* perf: image icon

* fix: training state

---------

Co-authored-by: Zhuangzai fa <143257420+ctrlz526@users.noreply.github.com>
2025-06-03 21:30:50 +08:00
gggaaallleee
9fb5d05865 add audit (#4923)
* add audit

* update audit

* update audit
2025-06-03 21:28:26 +08:00
dependabot[bot]
b974574157 chore(deps): bump tar-fs in /plugins/webcrawler/SPIDER (#4945)
Bumps [tar-fs](https://github.com/mafintosh/tar-fs) from 3.0.8 to 3.0.9.
- [Commits](https://github.com/mafintosh/tar-fs/compare/v3.0.8...v3.0.9)

---
updated-dependencies:
- dependency-name: tar-fs
  dependency-version: 3.0.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 16:02:05 +08:00
gggaaallleee
5a5367d30b add Bocha search template (#4933)
* add bocha

* Delete packages/service/support/operationLog/util.ts
2025-05-30 21:07:49 +08:00
Archer
8ed35ffe7e Update dataset.md (#4927) 2025-05-29 18:25:59 +08:00
Archer
0f866fc552 feat: text collection auto save for a txt file (#4924) 2025-05-29 17:57:27 +08:00
Archer
05c7ba4483 feat: Workflow node search (#4920)
* add node find (#4902)

* add node find

* plugin header

* fix

* fix

* remove

* type

* add searched status

* optimize

* perf: search nodes

---------

Co-authored-by: heheer <heheer@sealos.io>
2025-05-29 14:29:28 +08:00
heheer
fa80ce3a77 fix child app external variables (#4919) 2025-05-29 13:37:59 +08:00
Archer
830358aa72 remove invalid code (#4915) 2025-05-28 22:11:40 +08:00
Archer
02b214b3ec feat: remove buffer;fix: custom pdf parse (#4914)
* fix: doc

* fix: remove buffer

* fix: pdf parse
2025-05-28 21:48:10 +08:00
Archer
a171c7b11c perf: buffer;fix: back up split (#4913)
* perf: buffer

* fix: back up split

* fix: app limit

* doc
2025-05-28 18:18:25 +08:00
heheer
802de11363 fix runtool empty message (#4911)
* fix runtool empty message

* del unused code

* fix
2025-05-28 17:48:30 +08:00
Archer
b4ecfb0b79 Feat: Node latest version (#4905)
* node versions add keep the latest option (#4899)

* node versions add keep the latest option

* i18n

* perf: version code

* fix: ts

* hide system version

* hide system version

* hide system version

* fix: ts

* fix: ts

---------

Co-authored-by: heheer <heheer@sealos.io>
2025-05-28 10:46:32 +08:00
heheer
331b851a78 fix has tool node condition (#4907) 2025-05-28 10:34:02 +08:00
Archer
50d235c42a fix: i18n (#4898) 2025-05-27 10:45:25 +08:00
Archer
9838593451 version doc (#4897) 2025-05-27 10:33:35 +08:00
Archer
c25cd48e72 perf: chunk trigger and paragraph split (#4893)
* perf: chunk trigger and paragraph split

* update max size computed

* perf: i18n

* remove table
2025-05-26 18:57:22 +08:00
Archer
874300a56a fix: chinese name export (#4890)
* fix: chinese name export

* fix: xlsx white space

* doc

* doc
2025-05-25 21:19:29 +08:00
Archer
1dea2b71b4 perf: human check;perf: recursion get node response (#4888)
* perf: human check

* version

* perf: recursion get node response
2025-05-25 20:55:29 +08:00
Archer
a8673344b1 Test add menu (#4887)
* Feature: Add additional dataset options and their descriptions, updat… (#4874)

* Feature: Add additional dataset options and their descriptions, update menu components to support submenu functionality

* Optimize the menu component by removing the sub-menu position attribute, introducing the MyPopover component to support sub-menu functionality, and adding new dataset options and their descriptions in the dataset list.

---------

Co-authored-by: dreamer6680 <146868355@qq.com>

* api dataset tip

* remove invalid code

---------

Co-authored-by: dreamer6680 <1468683855@qq.com>
Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-25 20:16:03 +08:00
Archer
9709ae7a4f feat: The workflow quickly adds applications (#4882)
* feat: add node by handle (#4860)

* feat: add node by handle

* fix

* fix edge filter

* fix

* move utils

* move context

* scale handle

* move position to handle params & optimize handle scale (#4878)

* move position to handle params

* close button scale

* perf: node template ui

* remove handle scale (#4880)

* feat: handle connect

* add mouse down duration check (#4881)

* perf: long press time

* tool handle size

* optimize add node by handle (#4883)

---------

Co-authored-by: heheer <heheer@sealos.io>
2025-05-23 19:20:12 +08:00
Archer
fae76e887a perf: dataset import params code (#4875)
* perf: dataset import params code

* perf: api dataset code

* model
2025-05-23 10:40:25 +08:00
dreamer6680
9af92d1eae Open Yufu Feishu Knowledge Base Permissions (#4867)
* add feishu yuque dataset

* Open Yufu Feishu Knowledge Base Permissions

* Refactor the dataset request module, optimize the import path, and fix the type definition

---------

Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-22 23:19:55 +08:00
Archer
6a6719e93d perf: isPc check;perf: dataset max token checker (#4872)
* perf: isPc check

* perf: dataset max token checker

* perf: dataset max token checker
2025-05-22 18:40:29 +08:00
Compasafe
50481f4ca8 fix: revise the isPc detection logic in the voice component (#4854)
* fix: revise the isPc detection logic in the voice component

* fix: revise the isPc detection logic in the voice component
2025-05-22 16:29:53 +08:00
Archer
88bd3aaa9e perf: backup import (#4866)
* i18n

* remove invalid code

* perf: backup import

* backup tip

* fix: indexsize invalid
2025-05-22 15:53:51 +08:00
Archer
dd3c251603 fix: stream response (#4853) 2025-05-21 10:21:20 +08:00
Archer
aa55f059d4 perf: chat history api;perf: full text error (#4852)
* perf: chat history api

* perf: i18n

* perf: full text
2025-05-20 22:31:32 +08:00
dreamer6680
89c9a02650 change ui of price (#4851)
Co-authored-by: dreamer6680 <146868355@qq.com>
2025-05-20 20:51:07 +08:00
heheer
0f3bfa280a fix quote reader duplicate rendering (#4845) 2025-05-20 20:21:00 +08:00
dependabot[bot]
593ebfd269 chore(deps): bump multer from 1.4.5-lts.1 to 2.0.0 (#4839)
Bumps [multer](https://github.com/expressjs/multer) from 1.4.5-lts.1 to 2.0.0.
- [Release notes](https://github.com/expressjs/multer/releases)
- [Changelog](https://github.com/expressjs/multer/blob/v2.0.0/CHANGELOG.md)
- [Commits](https://github.com/expressjs/multer/compare/v1.4.5-lts.1...v2.0.0)

---
updated-dependencies:
- dependency-name: multer
  dependency-version: 2.0.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-20 13:58:47 +08:00
John Chen
f6dc2204f5 fix: correct a health-check parameter error in the docker-compose-pgvecto.yml file (#4841) 2025-05-20 13:57:32 +08:00
Archer
d44c338059 perf: confirm ux (#4843)
* perf: delete tip ux

* perf: confirm ux
2025-05-20 13:41:56 +08:00
Archer
1dac2b70ec perf: stream timeout;feat: hnsw max_scan_tuples config;fix: fulltext search merge error (#4838)
* perf: stream timeout

* feat: hnsw max_scan_tuples config

* fix: fulltext search merge error

* perf: jieba code
2025-05-20 09:59:24 +08:00
Archer
9fef3e15fb Update doc (#4831)
* doc

* doc

* version update
2025-05-18 23:16:31 +08:00
Archer
2d2d0fffe9 Test apidataset (#4830)
* Dataset (#4822)

* apidataset support to basepath

* Resolve the error of the Feishu Knowledge Base modification configuration page not supporting baseurl bug.

* apibasepath

* add

* perf: api dataset

---------

Co-authored-by: dreamer6680 <1468683855@qq.com>
2025-05-17 22:41:10 +08:00
heheer
c6e0b5a1e7 offiaccount welcome text (#4827)
* offiaccount welcome text

* fix

* Update Image.tsx

---------

Co-authored-by: Archer <545436317@qq.com>
2025-05-17 22:03:18 +08:00
dependabot[bot]
932aa28a1f chore(deps): bump undici in /plugins/webcrawler/SPIDER (#4825)
Bumps [undici](https://github.com/nodejs/undici) from 6.21.1 to 6.21.3.
- [Release notes](https://github.com/nodejs/undici/releases)
- [Commits](https://github.com/nodejs/undici/compare/v6.21.1...v6.21.3)

---
updated-dependencies:
- dependency-name: undici
  dependency-version: 6.21.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-17 01:16:31 +08:00
heheer
9c59bc2c17 fix: handle optional indexes in InputDataModal (#4828) 2025-05-16 15:07:33 +08:00
Archer
e145f63554 feat: chat error msg (#4826)
* perf: i18n

* feat: chat error msg

* feat: doc
2025-05-16 12:07:11 +08:00
405 changed files with 11465 additions and 4834 deletions

View File

@@ -21,7 +21,7 @@
"i18n-ally.namespace": true,
"i18n-ally.pathMatcher": "{locale}/{namespaces}.json",
"i18n-ally.extract.targetPickingStrategy": "most-similar-by-key",
"i18n-ally.translate.engines": ["google"],
"i18n-ally.translate.engines": ["deepl","google"],
"[typescript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},

View File

@@ -132,15 +132,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.10-fix2 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.10-fix2 # 阿里云
ports:
- 3005:3000
networks:
@@ -150,8 +150,8 @@ services:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.10-fix2 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -109,15 +109,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.10-fix2 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.10-fix2 # 阿里云
ports:
- 3005:3000
networks:
@@ -127,8 +127,8 @@ services:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.10-fix2 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -23,7 +23,7 @@ services:
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
@@ -96,15 +96,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.10-fix2 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.10-fix2 # 阿里云
ports:
- 3005:3000
networks:
@@ -114,8 +114,8 @@ services:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.10-fix2 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -72,15 +72,15 @@ services:
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.10-fix2 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt-mcp_server:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.9.10-fix2 # 阿里云
ports:
- 3005:3000
networks:
@@ -90,8 +90,8 @@ services:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.9.8 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.8 # 阿里云
image: ghcr.io/labring/fastgpt:v4.9.10-fix2 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.10-fix2 # 阿里云
ports:
- 3000:3000
networks:

dev.md (232 changed lines)
View File

@@ -1,114 +1,118 @@
## Premise
Since FastGPT is managed as a monorepo, it is recommended to install `make` before starting development.
Monorepo project names:
- app: main project
-......
## Dev
```sh
# Grant execute permission to the automation scripts (on non-Linux systems you can run the contents of postinstall.sh manually)
chmod -R +x ./scripts/
# Run from the repository root; installs all dependencies for the root package, projects, and packages
pnpm i
# Without make
cd projects/app
pnpm dev
# With make
make dev name=app
```
Note: If the Node version is >= 20, you need to pass the `--no-node-snapshot` parameter to Node when running `pnpm i`
```sh
NODE_OPTIONS=--no-node-snapshot pnpm i
```
### Jest
https://fael3z0zfze.feishu.cn/docx/ZOI1dABpxoGhS7xzhkXcKPxZnDL
## I18N
### Install i18n-ally Plugin
1. Open the Extensions Marketplace in VSCode, search for and install the `i18n Ally` plugin.
### Code Optimization Examples
#### Fetch Specific Namespace Translations in `getServerSideProps`
```typescript
// pages/yourPage.tsx
export async function getServerSideProps(context: any) {
return {
props: {
currentTab: context?.query?.currentTab || TabEnum.info,
...(await serverSideTranslations(context.locale, ['publish', 'user']))
}
};
}
```
#### Use useTranslation Hook in Page
```typescript
// pages/yourPage.tsx
import { useTranslation } from 'next-i18next';
const YourComponent = () => {
const { t } = useTranslation();
return (
<Button
variant="outline"
size="sm"
mr={2}
onClick={() => setShowSelected(false)}
>
{t('common:close')}
</Button>
);
};
export default YourComponent;
```
#### Handle Static File Translations
```typescript
// utils/i18n.ts
import { i18nT } from '@fastgpt/web/i18n/utils';
const staticContent = {
id: 'simpleChat',
avatar: 'core/workflow/template/aiChat',
name: i18nT('app:template.simple_robot'),
};
export default staticContent;
```
### Standardize Translation Format
- Use the `t(namespace:key)` format to ensure consistent naming.
- Translation keys should use lowercase letters and underscores, e.g., `common:close`.
## Audit
To add a new audit log: add the event to `OperationLogEventEnum`, register it in the `operationLog/audit` TS module, fill in the corresponding i18n entries, and call the `addOperationLog` function at the place where the log should be recorded.
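A minimal sketch of that flow is shown below; apart from the enum name, the helper signature and values here are illustrative assumptions, not the real API.
```typescript
// Hypothetical sketch of the audit flow; apart from OperationLogEventEnum,
// all names and signatures are illustrative assumptions.
enum OperationLogEventEnum {
  EXPORT_DATASET = 'EXPORT_DATASET' // 1. the new event added to the enum
}

// 2. Stand-in for the real addOperationLog helper: the actual implementation
// persists the event and resolves the i18n template registered for it.
function addOperationLog(props: {
  tmbId: string;
  teamId: string;
  event: OperationLogEventEnum;
  params?: Record<string, string>; // interpolated into the i18n template
}) {
  console.log(`[audit] ${props.event}`, props.params);
}

// 3. Call site: record the event where the action actually happens.
addOperationLog({
  tmbId: 'tmb_123',
  teamId: 'team_456',
  event: OperationLogEventEnum.EXPORT_DATASET,
  params: { datasetName: 'my-dataset' }
});
```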
## Build
```sh
# Docker cmd: Build image, not proxy
docker build -f ./projects/app/Dockerfile -t registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.1 . --build-arg name=app
# Make cmd: Build image, not proxy
make build name=app image=registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.1
# Docker cmd: Build image with proxy
docker build -f ./projects/app/Dockerfile -t registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.1 . --build-arg name=app --build-arg proxy=taobao
# Make cmd: Build image with proxy
make build name=app image=registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.1 proxy=taobao
```

Binary files not shown: 24 image files added (sizes 6 KiB to 386 KiB).

View File

@@ -959,10 +959,16 @@ curl --location --request POST 'http://localhost:3000/api/core/chat/getHistories
{{< markdownify >}}
{{% alert icon=" " context="success" %}}
Currently, only conversations created by the owner of the current API key can be retrieved.
- appId - application ID
- offset - offset, i.e. the index of the first record to return
- pageSize - number of records per page
- source - conversation source. source=api returns conversations created through the API (conversations from the web page are not included)
- startCreateTime - creation-time range start (optional)
- endCreateTime - creation-time range end (optional)
- startUpdateTime - update-time range start (optional)
- endUpdateTime - update-time range end (optional)
{{% /alert %}}
{{< /markdownify >}}
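For reference, a hedged sketch of the request in TypeScript; the JSON body simply carries the parameters listed above, and the auth header is assumed to be the standard API-key bearer token:
```typescript
// Hedged sketch: fetching chat histories with the parameters listed above.
const res = await fetch('http://localhost:3000/api/core/chat/getHistories', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <your API key>', // assumed standard API-key bearer auth
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    appId: '<appId>',
    offset: 0,
    pageSize: 20,
    source: 'api' // only conversations created through the API
  })
});
console.log(await res.json());
```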

View File

@@ -645,7 +645,7 @@ data is the collection ID.
{{< /tab >}}
{{< /tabs >}}
### Create an external file collection (commercial edition)
### Create an external file collection (deprecated)
{{< tabs tabTotal="3" >}}
{{< tab tabName="Request example" >}}

View File

@@ -1,5 +1,5 @@
---
title: 'V4.9.1'
title: 'V4.9.1 (includes upgrade script)'
description: 'FastGPT V4.9.1 release notes'
icon: 'upgrade'
draft: false

View File

@@ -0,0 +1,50 @@
---
title: 'V4.9.10'
description: 'FastGPT V4.9.10 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 790
---
## Upgrade guide
Important: this update rebuilds the full-text index, and full-text search returns empty results while the rebuild runs (about 25 minutes for roughly 7 million full-text index entries on a 4c16g instance). For a seamless upgrade, you need to set up table synchronization yourself.
### 1. Back up your data
### 2. Update image tags
- Update the FastGPT image tag: v4.9.10-fix2
- Update the FastGPT commercial-edition image tag: v4.9.10-fix2
- mcp_server: no update needed
- Sandbox: no update needed
- AIProxy: no update needed
## 🚀 New
1. Support the PG `systemEnv.hnswMaxScanTuples` parameter to raise the total amount of data scanned during iterative search.
2. Dataset preprocessing adds a "chunking condition" parameter that can skip chunking in certain cases.
3. Dataset preprocessing adds a "paragraph first" mode with a configurable maximum paragraph depth; the former "length first" mode no longer embeds paragraph-first logic.
4. Workflow nodes switch to single-direction input and output, supporting quick addition of the next node.
5. Feishu and Yuque datasets are opened up to the open-source edition.
6. Presets for the latest gemini and claude models.
## ⚙️ Improvements
1. Larger default timeout for LLM stream calls.
2. Polished several confirmation interactions.
3. Renamed the dataset's mislabeled "table dataset" to "backup import", and added export and import of dataset indexes.
4. Workflow dataset citation limit: if the workflow has no related AI node, the control switches to pure manual input with an upper limit of 10,000,000.
5. Voice input on mobile now checks whether the device is actually a phone rather than merely a small screen.
6. Improved the context-truncation algorithm to guarantee at least one group of Human messages is kept.
## 🐛 Fixes
1. Incorrect score ordering in full-text search across multiple datasets.
2. finish_reason captured from stream responses could be incorrect.
3. Tool-call mode did not save reasoning output.
4. The dataset indexSize parameter did not take effect.
5. Incorrect citation previews and context when workflows were nested two levels deep.
6. An extra leading space appeared when converting xlsx to Markdown.
7. Base64 images were not additionally converted and saved when reading Markdown files.

View File

@@ -0,0 +1,42 @@
---
title: 'V4.9.11 (in progress)'
description: 'FastGPT V4.9.11 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 789
---
## Run the upgrade script
Only commercial-edition users need to run this script.
From any terminal, send one HTTP request, replacing {{rootkey}} with the `rootkey` environment variable and {{host}} with your **FastGPT domain**.
```bash
curl --location --request POST 'https://{{host}}/api/admin/initv4911' \
--header 'rootkey: {{rootkey}}' \
--header 'Content-Type: application/json'
```
**What the script does**
1. Migrates the third-party dataset API configuration.
## 🚀 New
1. The commercial edition supports image datasets.
2. Node search within workflows.
3. Sub-workflow version control: a "keep latest version" option removes the need for manual updates.
4. More audit operation logs.
## ⚙️ Improvements
1. Raw-text cache moved to GridFS storage, raising the size limit.
## 🐛 Fixes
1. Global system tools declared by an administrator could not be version-managed in workflows.
2. Abnormal context when an interactive node preceded a tool-call node.
3. Backup import failed to chunk content shorter than 1,000 characters.
4. Custom PDF parsing could not save base64 images.

View File

@@ -1,5 +1,5 @@
---
title: 'V4.9.4'
title: 'V4.9.4 (includes upgrade script)'
description: 'FastGPT V4.9.4 release notes'
icon: 'upgrade'
draft: false

View File

@@ -1,5 +1,5 @@
---
title: 'V4.9.9 (in progress)'
title: 'V4.9.9'
description: 'FastGPT V4.9.9 release notes'
icon: 'upgrade'
draft: false
@@ -7,11 +7,28 @@ toc: true
weight: 791
---
## Upgrade guide
### 1. Back up your data
### 2. Commercial-edition users: replace the new License
Commercial-edition users can contact the FastGPT support team for a License replacement plan. After replacing it, you can upgrade the system directly; the admin console will prompt for the new License.
### 3. Update image tags
- Update the FastGPT image tag: v4.9.9
- Update the FastGPT commercial-edition image tag: v4.9.9
- mcp_server: no update needed
- Sandbox: no update needed
- AIProxy: no update needed
## 🚀 New
1. Switched login auth from JWT to SessionId, allowing a cap on the maximum number of logged-in clients.
2. New commercial-edition License management model.
3. Official-account calls now display and record chat errors for easier troubleshooting.
4. API datasets support BasePath selection; an additional API endpoint is required, see the [API dataset guide](/docs/guide/knowledge_base/api_dataset/#4-获取文件详细信息用于获取文件信息)
## ⚙️ Improvements
@@ -23,3 +40,4 @@ weight: 791
1. Could not properly retrieve an app's saved/published history.
2. Permission issue when members created MCP tools.
3. Source citation display could pass a wrong ID, producing a "no permission to operate this file" error.
4. Front-end data error in answer annotations.

View File

@@ -185,3 +185,40 @@ curl --location --request GET '{{baseURL}}/v1/file/read?id=xx' \
{{< /tabs >}}
### 4. Get file details (used to fetch file information)
{{< tabs tabTotal="2" >}}
{{< tab tabName="Request example" >}}
{{< markdownify >}}
id is the file's id.
```bash
curl --location --request GET '{{baseURL}}/v1/file/detail?id=xx' \
--header 'Authorization: Bearer {{authorization}}'
```
{{< /markdownify >}}
{{< /tab >}}
{{< tab tabName="Response example" >}}
{{< markdownify >}}
```json
{
"code": 200,
"success": true,
"message": "",
"data": {
"id": "docs",
"parentId": "",
"name": "docs"
}
}
```
{{< /markdownify >}}
{{< /tab >}}
{{< /tabs >}}

View File

@@ -0,0 +1,161 @@
---
title: 'Third-party dataset development'
description: 'This section explains in detail how to integrate a third-party dataset into FastGPT'
icon: 'language'
draft: false
toc: true
weight: 410
---
The internet hosts a wide variety of document libraries, such as Feishu and Yuque. Different FastGPT users may use different ones. FastGPT currently ships with built-in Feishu and Yuque libraries; to integrate another document library, follow this section.
## A unified interface specification
To integrate different document libraries uniformly, FastGPT standardizes the third-party interface into 4 endpoints; see the [API file-library interface](/docs/guide/knowledge_base/api_datase).
All built-in document libraries are extensions of the standard API file library. You can follow the code in `FastGPT/packages/service/core/dataset/apiDataset/yuqueDataset/api.ts` to extend another library. Four endpoints need to be implemented:
1. List files
2. Get file content / file link
3. Get the original-document preview URL
4. Get file details
## Start a third-party file library
For clarity, the following uses adding a Feishu dataset as the example.
### 1. Add the third-party server parameters
First, open the `FastGPT\packages\global\core\dataset\apiDataset.d.ts` file in the FastGPT project and add a Server type for the third-party library. For example, Yuque requires the two fields `userId` and `token` as authentication information.
```ts
export type YuqueServer = {
userId: string;
token?: string;
basePath?: string;
};
```
{{% alert icon="🤖 " context="success" %}}
如果文档库有`根目录`选择的功能,需要设置添加一个字段`basePath`
{{% /alert %}}
### 2. 创建 Hook 文件
每个第三方文档库都会采用 Hook 的方式来实现一套 API 接口的维护Hook 里包含 4 个函数需要完成。
-`FastGPT\packages\service\core\dataset\apiDataset\`下创建一个文档库的文件夹,然后在文件夹下创建一个`api.ts`文件
-`api.ts`文件中,需要完成 4 个函数的定义,分别是:
- `listFiles`:获取文件列表
- `getFileContent`:获取文件内容/文件链接
- `getFileDetail`:获取文件详情信息
- `getFilePreviewUrl`:获取原文预览地址
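A hypothetical skeleton of such an `api.ts` under the assumptions above; the exact FastGPT hook signatures may differ, so treat this purely as an illustrative sketch:
```typescript
// Illustrative hook skeleton for a new document library ("MyDocLib" is made up).
type ApiFileItem = { id: string; parentId: string | null; name: string };

export const useMyDocLibDataset = (server: {
  userId: string;
  token?: string;
  basePath?: string;
}) => {
  // List files/folders under a directory (parentId undefined = root or basePath).
  const listFiles = async ({ parentId }: { parentId?: string }): Promise<ApiFileItem[]> => {
    return []; // call the provider's "list directory" API here
  };

  // Fetch the document body (or a downloadable link) for training.
  const getFileContent = async ({ fileId }: { fileId: string }): Promise<{ content: string }> => {
    return { content: '' };
  };

  // File metadata; also used by the root-directory selector.
  const getFileDetail = async ({ fileId }: { fileId: string }): Promise<ApiFileItem> => {
    return { id: fileId, parentId: null, name: '' };
  };

  // Link back to the original document for citation preview.
  const getFilePreviewUrl = async ({ fileId }: { fileId: string }): Promise<string> => {
    return `https://example.com/doc/${fileId}`;
  };

  return { listFiles, getFileContent, getFileDetail, getFilePreviewUrl };
};
```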
### 3. Add the config field to the database
- In `packages/service/core/dataset/schema.ts`, add a config field for the third-party library, with its type uniformly set to `Object`
- In `FastGPT/packages/global/core/dataset/type.d.ts`, add the data type of the config field, using the parameter type created in step 1.
![](/imgs/thirddataset-7.png)
{{% alert icon="🤖 " context="success" %}}
After modifying the `schema.ts` file, restart the FastGPT project for the change to take effect.
{{% /alert %}}
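A minimal sketch of that schema addition, assuming the Mongoose-style schema used in `dataset/schema.ts` (the field name is hypothetical):
```typescript
import { Schema } from 'mongoose';

// Sketch only: the third-party config field sits alongside the existing
// dataset fields; "myDocLibServer" is a hypothetical name.
const DatasetSchema = new Schema({
  // ...existing dataset fields
  myDocLibServer: Object // stored as a plain object, per step 3
});
```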
### 4. Add the dataset type
In `projects/app/src/web/core/dataset/constants.ts`, add your own dataset type
```TS
export const datasetTypeCourseMap: Record<`${DatasetTypeEnum}`, string> = {
[DatasetTypeEnum.folder]: '',
[DatasetTypeEnum.dataset]: '',
[DatasetTypeEnum.apiDataset]: '/docs/guide/knowledge_base/api_dataset/',
[DatasetTypeEnum.websiteDataset]: '/docs/guide/knowledge_base/websync/',
[DatasetTypeEnum.feishuShare]: '/docs/guide/knowledge_base/lark_share_dataset/',
[DatasetTypeEnum.feishuKnowledge]: '/docs/guide/knowledge_base/lark_knowledge_dataset/',
[DatasetTypeEnum.yuque]: '/docs/guide/knowledge_base/yuque_dataset/',
[DatasetTypeEnum.externalFile]: ''
};
```
{{% alert icon="🤖 " context="success" %}}
在 datasetTypeCourseMap 中添加自己的知识库类型,`' '`内是相应的文档说明,如果有的话,可以添加。
文档添加在`FastGPT\docSite\content\zh-cn\docs\guide\knowledge_base\`
{{% /alert %}}
## Add the front end
Add your own i18n translations in `FastGPT\packages\web\i18n\zh-CN\dataset.json`, `FastGPT\packages\web\i18n\en\dataset.json` and `FastGPT\packages\web\i18n\zh-Hant\dataset.json`. Taking the Chinese translations as an example, roughly the following entries are needed:
![](/imgs/thirddataset-24.png)
Add your own dataset icons under `FastGPT\packages\web\components\common\Icon\icons\core\dataset\`: two in total, `Outline` and `Color`, the colorless and colored versions respectively; see the images below.
![](/imgs/thirddataset-10.png)
In the `FastGPT\packages\web\components\common\Icon\constants.ts` file, register your icons; the `import` path is where the icon is stored.
![](/imgs/thirddataset-9.png)
In the `FastGPT\packages\global\core\dataset\constants.ts` file, add your own dataset type.
![](/imgs/thirddataset-8.png)
{{% alert icon="🤖 " context="success" %}}
`label` is the dataset name you added earlier via the i18n translations.
`icon` is the Icon you added earlier; see the final checklist for the i18n additions.
{{% /alert %}}
In the `FastGPT\projects\app\src\pages\dataset\list\index.tsx` file, add the following. This file drives the menu shown after clicking `Create` on the dataset list page; a dataset can only be created once it is added here.
![](/imgs/thirddataset-12.png)
In the `FastGPT\projects\app\src\pageComponents\dataset\detail\Info\index.tsx` file, add the following.
![](/imgs/thirddataset-18.png)
In the `FastGPT\projects\app\src\pageComponents\dataset\list\CreateModal.tsx` file, add the following.
| | |
| --- | --- |
| ![](/imgs/thirddataset-19.png) | ![](/imgs/thirddataset-20.png) |
In the `FastGPT\projects\app\src\pageComponents\dataset\list\SideTag.tsx` file, add the following.
![](/imgs/thirddataset-21.png)
In the `FastGPT\projects\app\src\web\core\dataset\context\datasetPageContext.tsx` file, add the following.
![](/imgs/thirddataset-23.png)
## Add the configuration form
In the `FastGPT\projects\app\src\pageComponents\dataset\ApiDatasetForm.tsx` file, add the following. This file drives the fields filled in on the dataset-creation page.
| | | |
| --- | --- | --- |
| ![](/imgs/thirddataset-13.png) | ![](/imgs/thirddataset-14.png) | ![](/imgs/thirddataset-15.png) |
The two components added in the code render the root-directory selection, matching the designed API's getfiledetail method; if your file library does not support it, you can skip them.
```
{renderBaseUrlSelector()} // renders the `Base URL` field
{renderDirectoryModal()} // the `select root directory` window shown after clicking `Select`, see the images
```
| | |
| --- | --- |
| ![](/imgs/thirddataset-16.png) | ![](/imgs/thirddataset-17.png) |
If the dataset needs to support a root directory, related content must also be added in the `ApiDatasetForm` file.
## Miscellaneous additions
Finally, the `server` type needs to be added in many more files, too many small ones to list individually. The method instead: use your editor's global search for `YuqueServer` and `yuqueServer`, and add your own dataset type in each file found.
## Tips
After creating the dataset, test its functionality end to end to make sure nothing is missing. If your dataset misbehaves and the docs offer no matching fix, the miscellaneous additions are almost certainly incomplete: repeat the global search for `YuqueServer` and `yuqueServer` and check whether anywhere is still missing your type.

View File

@@ -28,7 +28,6 @@ FastGPT Commercial Edition is an enhanced build of the FastGPT open-source edition, adding some exclusive
| App publishing security config | ❌ | ✅ | ✅ |
| Content moderation | ❌ | ✅ | ✅ |
| Website sync | ❌ | ✅ | ✅ |
| Mainstream document-library integration (currently: Yuque, Feishu) | ❌ | ✅ | ✅ |
| Enhanced training mode | ❌ | ✅ | ✅ |
| Quick third-party app integration (Feishu, official accounts) | ❌ | ✅ | ✅ |
| Admin console | ❌ | ✅ | Not needed |

View File

@@ -132,7 +132,9 @@ weight: 506
### The official account does not respond
Check the app's chat logs. If chat logs exist but the WeChat official account gives no response, the whitelist IP did not take effect.
After adding the whitelist IP, WeChat usually takes a few minutes to update.
After adding the whitelist IP, WeChat usually takes a few minutes to update. You can also look for error entries in the chat logs.
![](/imgs/official_account_faq.png)
### How to start a new chat

View File

@@ -27,7 +27,7 @@ const datasetErr = [
},
{
statusText: DatasetErrEnum.unExist,
message: 'core.dataset.error.unExistDataset'
message: i18nT('common:core.dataset.error.unExistDataset')
},
{
statusText: DatasetErrEnum.unExistCollection,

View File

@@ -6,7 +6,8 @@ export const fileImgs = [
{ suffix: '(doc|docs)', src: 'file/fill/doc' },
{ suffix: 'txt', src: 'file/fill/txt' },
{ suffix: 'md', src: 'file/fill/markdown' },
{ suffix: 'html', src: 'file/fill/html' }
{ suffix: 'html', src: 'file/fill/html' },
{ suffix: '(jpg|jpeg|png|gif|bmp|webp|svg|ico|tiff|tif)', src: 'image' }
// { suffix: '.', src: '/imgs/files/file.svg' }
];

View File

@@ -2,4 +2,5 @@ export type AuthFrequencyLimitProps = {
eventId: string;
maxAmount: number;
expiredTime: Date;
num?: number;
};

View File

@@ -7,6 +7,10 @@ export const CUSTOM_SPLIT_SIGN = '-----CUSTOM_SPLIT_SIGN-----';
type SplitProps = {
text: string;
chunkSize: number;
paragraphChunkDeep?: number; // Paragraph deep
paragraphChunkMinSize?: number; // Paragraph min size, if too small, it will merge
maxSize?: number;
overlapRatio?: number;
customReg?: string[];
@@ -108,6 +112,8 @@ const commonSplit = (props: SplitProps): SplitResponse => {
let {
text = '',
chunkSize,
paragraphChunkDeep = 5,
paragraphChunkMinSize = 100,
maxSize = defaultMaxChunkSize,
overlapRatio = 0.15,
customReg = []
@@ -123,7 +129,7 @@ const commonSplit = (props: SplitProps): SplitResponse => {
text = text.replace(/(```[\s\S]*?```|~~~[\s\S]*?~~~)/g, function (match) {
return match.replace(/\n/g, codeBlockMarker);
});
// 2. Table handling - extract tables separately and merge their headers
// 2. Markdown table handling - extract tables separately and merge their headers
const tableReg =
/(\n\|(?:(?:[^\n|]+\|){1,})\n\|(?:[:\-\s]+\|){1,}\n(?:\|(?:[^\n|]+\|)*\n?)*)(?:\n|$)/g;
const tableDataList = text.match(tableReg);
@@ -143,25 +149,40 @@ const commonSplit = (props: SplitProps): SplitResponse => {
text = text.replace(/(\r?\n|\r){3,}/g, '\n\n\n');
// The larger maxLen is, the next sentence is less likely to trigger splitting
const markdownIndex = 4;
const forbidOverlapIndex = 8;
const customRegLen = customReg.length;
const markdownIndex = paragraphChunkDeep - 1;
const forbidOverlapIndex = customRegLen + markdownIndex + 4;
const markdownHeaderRules = ((deep?: number): { reg: RegExp; maxLen: number }[] => {
if (!deep || deep === 0) return [];
const maxDeep = Math.min(deep, 8); // Maximum 8 levels
const rules: { reg: RegExp; maxLen: number }[] = [];
for (let i = 1; i <= maxDeep; i++) {
const hashSymbols = '#'.repeat(i);
rules.push({
reg: new RegExp(`^(${hashSymbols}\\s[^\\n]+\\n)`, 'gm'),
maxLen: chunkSize
});
}
return rules;
})(paragraphChunkDeep);
const stepReges: { reg: RegExp | string; maxLen: number }[] = [
...customReg.map((text) => ({
reg: text.replaceAll('\\n', '\n'),
maxLen: chunkSize
})),
{ reg: /^(#\s[^\n]+\n)/gm, maxLen: chunkSize },
{ reg: /^(##\s[^\n]+\n)/gm, maxLen: chunkSize },
{ reg: /^(###\s[^\n]+\n)/gm, maxLen: chunkSize },
{ reg: /^(####\s[^\n]+\n)/gm, maxLen: chunkSize },
{ reg: /^(#####\s[^\n]+\n)/gm, maxLen: chunkSize },
...markdownHeaderRules,
{ reg: /([\n](```[\s\S]*?```|~~~[\s\S]*?~~~))/g, maxLen: maxSize }, // code block
// HTML table tags: keep as intact as possible
{
reg: /(\n\|(?:(?:[^\n|]+\|){1,})\n\|(?:[:\-\s]+\|){1,}\n(?:\|(?:[^\n|]+\|)*\n)*)/g,
maxLen: Math.min(chunkSize * 1.5, maxSize)
}, // keep tables as intact as possible
maxLen: chunkSize
}, // keep Markdown tables as intact as possible
{ reg: /(\n{2,})/g, maxLen: chunkSize },
{ reg: /([\n])/g, maxLen: chunkSize },
// ------ There's no overlap on the top
@@ -172,12 +193,10 @@ const commonSplit = (props: SplitProps): SplitResponse => {
{ reg: /([,]|,\s)/g, maxLen: chunkSize }
];
const customRegLen = customReg.length;
const checkIsCustomStep = (step: number) => step < customRegLen;
const checkIsMarkdownSplit = (step: number) =>
step >= customRegLen && step <= markdownIndex + customRegLen;
const checkForbidOverlap = (step: number) => step <= forbidOverlapIndex + customRegLen;
const checkForbidOverlap = (step: number) => step <= forbidOverlapIndex;
// if use markdown title split, Separate record title
const getSplitTexts = ({ text, step }: { text: string; step: number }) => {
@@ -301,6 +320,7 @@ const commonSplit = (props: SplitProps): SplitResponse => {
const splitTexts = getSplitTexts({ text, step });
const chunks: string[] = [];
for (let i = 0; i < splitTexts.length; i++) {
const item = splitTexts[i];
@@ -443,7 +463,6 @@ const commonSplit = (props: SplitProps): SplitResponse => {
*/
export const splitText2Chunks = (props: SplitProps): SplitResponse => {
let { text = '' } = props;
const start = Date.now();
const splitWithCustomSign = text.split(CUSTOM_SPLIT_SIGN);
const splitResult = splitWithCustomSign.map((item) => {
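To make the new paragraph options concrete, a hedged usage sketch follows; the import path is an assumption, and the return value is simplified to the `chunks` array built above.
```typescript
// Hedged usage sketch: markdown headers up to paragraphChunkDeep levels act as split points.
import { splitText2Chunks } from '@fastgpt/global/common/string/textSplitter'; // assumed path

const { chunks } = splitText2Chunks({
  text: '# Title\n\nIntro text...\n\n## Section\n\nBody text...',
  chunkSize: 512,
  paragraphChunkDeep: 2, // header rules are generated for '#' and '##' only
  paragraphChunkMinSize: 100 // paragraphs smaller than this are merged
});
console.log(chunks.length);
```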

View File

@@ -34,7 +34,7 @@ export const valToStr = (val: any) => {
};
// replace {{variable}} to value
export function replaceVariable(text: any, obj: Record<string, string | number>) {
export function replaceVariable(text: any, obj: Record<string, string | number | undefined>) {
if (typeof text !== 'string') return text;
for (const key in obj) {

View File

@@ -130,9 +130,11 @@ export type SystemEnvType = {
vectorMaxProcess: number;
qaMaxProcess: number;
vlmMaxProcess: number;
hnswEfSearch: number;
tokenWorkers: number; // token count max worker
hnswEfSearch: number;
hnswMaxScanTuples: number;
oneapiUrl?: string;
chatApiKey?: string;

View File

@@ -60,5 +60,3 @@ export enum AppTemplateTypeEnum {
// special type
contribute = 'contribute'
}
export const defaultDatasetMaxTokens = 16000;

View File

@@ -10,6 +10,8 @@ import { AppTypeEnum } from './constants';
import { AppErrEnum } from '../../common/error/code/app';
import { PluginErrEnum } from '../../common/error/code/plugin';
import { i18nT } from '../../../web/i18n/utils';
import appErrList from '../../common/error/code/app';
import pluginErrList from '../../common/error/code/plugin';
export const getDefaultAppForm = (): AppSimpleEditFormType => {
return {
@@ -190,17 +192,10 @@ export const getAppType = (config?: WorkflowTemplateBasicType | AppSimpleEditFor
return '';
};
export const formatToolError = (error?: string) => {
const unExistError: Array<string> = [
AppErrEnum.unAuthApp,
AppErrEnum.unExist,
PluginErrEnum.unAuth,
PluginErrEnum.unExist
];
export const formatToolError = (error?: any) => {
if (!error || typeof error !== 'string') return;
if (error && unExistError.includes(error)) {
return i18nT('app:un_auth');
} else {
return error;
}
const errorText = appErrList[error]?.message || pluginErrList[error]?.message;
return errorText || error;
};

View File

@@ -26,6 +26,7 @@ export type ChatSchema = {
teamId: string;
tmbId: string;
appId: string;
createTime: Date;
updateTime: Date;
title: string;
customTitle: string;
@@ -112,6 +113,7 @@ export type ChatItemSchema = (UserChatItemType | SystemChatItemType | AIChatItem
appId: string;
time: Date;
durationSeconds?: number;
errorMsg?: string;
};
export type AdminFbkType = {
@@ -143,6 +145,7 @@ export type ChatSiteItemType = (UserChatItemType | SystemChatItemType | AIChatIt
responseData?: ChatHistoryItemResType[];
time?: Date;
durationSeconds?: number;
errorMsg?: string;
} & ChatBoxInputType &
ResponseTagItemType;

View File

@@ -1,16 +1,25 @@
import type { DatasetDataIndexItemType, DatasetSchemaType } from './type';
import type {
ChunkSettingsType,
DatasetDataIndexItemType,
DatasetDataFieldType,
DatasetSchemaType
} from './type';
import type {
DatasetCollectionTypeEnum,
DatasetCollectionDataProcessModeEnum,
ChunkSettingModeEnum,
DataChunkSplitModeEnum
DataChunkSplitModeEnum,
ChunkTriggerConfigTypeEnum,
ParagraphChunkAIModeEnum
} from './constants';
import type { LLMModelItemType } from '../ai/model.d';
import type { ParentIdType } from 'common/parentFolder/type';
import type { ParentIdType } from '../../common/parentFolder/type';
/* ================= dataset ===================== */
export type DatasetUpdateBody = {
id: string;
apiDatasetServer?: DatasetSchemaType['apiDatasetServer'];
parentId?: ParentIdType;
name?: string;
avatar?: string;
@@ -22,9 +31,6 @@ export type DatasetUpdateBody = {
websiteConfig?: DatasetSchemaType['websiteConfig'];
externalReadUrl?: DatasetSchemaType['externalReadUrl'];
defaultPermission?: DatasetSchemaType['defaultPermission'];
apiServer?: DatasetSchemaType['apiServer'];
yuqueServer?: DatasetSchemaType['yuqueServer'];
feishuServer?: DatasetSchemaType['feishuServer'];
chunkSettings?: DatasetSchemaType['chunkSettings'];
// sync schedule
@@ -32,26 +38,16 @@ export type DatasetUpdateBody = {
};
/* ================= collection ===================== */
export type DatasetCollectionChunkMetadataType = {
// Input + store params
type DatasetCollectionStoreDataType = ChunkSettingsType & {
parentId?: string;
customPdfParse?: boolean;
trainingType?: DatasetCollectionDataProcessModeEnum;
imageIndex?: boolean;
autoIndexes?: boolean;
chunkSettingMode?: ChunkSettingModeEnum;
chunkSplitMode?: DataChunkSplitModeEnum;
chunkSize?: number;
indexSize?: number;
chunkSplitter?: string;
qaPrompt?: string;
metadata?: Record<string, any>;
customPdfParse?: boolean;
};
// create collection params
export type CreateDatasetCollectionParams = DatasetCollectionChunkMetadataType & {
export type CreateDatasetCollectionParams = DatasetCollectionStoreDataType & {
datasetId: string;
name: string;
type: DatasetCollectionTypeEnum;
@@ -72,7 +68,7 @@ export type CreateDatasetCollectionParams = DatasetCollectionChunkMetadataType &
nextSyncTime?: Date;
};
export type ApiCreateDatasetCollectionParams = DatasetCollectionChunkMetadataType & {
export type ApiCreateDatasetCollectionParams = DatasetCollectionStoreDataType & {
datasetId: string;
tags?: string[];
};
@@ -90,7 +86,7 @@ export type ApiDatasetCreateDatasetCollectionParams = ApiCreateDatasetCollection
export type FileIdCreateDatasetCollectionParams = ApiCreateDatasetCollectionParams & {
fileId: string;
};
export type reTrainingDatasetFileCollectionParams = DatasetCollectionChunkMetadataType & {
export type reTrainingDatasetFileCollectionParams = DatasetCollectionStoreDataType & {
datasetId: string;
collectionId: string;
};
@@ -108,6 +104,9 @@ export type ExternalFileCreateDatasetCollectionParams = ApiCreateDatasetCollecti
externalFileUrl: string;
filename?: string;
};
export type ImageCreateDatasetCollectionParams = ApiCreateDatasetCollectionParams & {
collectionName: string;
};
/* ================= tag ===================== */
export type CreateDatasetCollectionTagParams = {
@@ -133,8 +132,9 @@ export type PgSearchRawType = {
score: number;
};
export type PushDatasetDataChunkProps = {
q: string; // embedding content
a?: string; // bonus content
q?: string;
a?: string;
imageId?: string;
chunkIndex?: number;
indexes?: Omit<DatasetDataIndexItemType, 'dataId'>[];
};
@@ -147,6 +147,7 @@ export type PushDatasetDataProps = {
collectionId: string;
data: PushDatasetDataChunkProps[];
trainingType?: DatasetCollectionDataProcessModeEnum;
indexSize?: number;
autoIndexes?: boolean;
imageIndex?: boolean;
prompt?: string;

View File

@@ -1,5 +1,5 @@
import { RequireOnlyOne } from '../../common/type/utils';
import type { ParentIdType } from '../../common/parentFolder/type.d';
import { RequireOnlyOne } from '../../../common/type/utils';
import type { ParentIdType } from '../../../common/parentFolder/type';
export type APIFileItem = {
id: string;
@@ -28,6 +28,12 @@ export type YuqueServer = {
basePath?: string;
};
export type ApiDatasetServerType = {
apiServer?: APIFileServer;
feishuServer?: FeishuServer;
yuqueServer?: YuqueServer;
};
// Api dataset api
export type APIFileListResponse = APIFileItem[];

View File

@@ -0,0 +1,31 @@
import type { ApiDatasetServerType } from './type';
export const filterApiDatasetServerPublicData = (apiDatasetServer?: ApiDatasetServerType) => {
if (!apiDatasetServer) return undefined;
const { apiServer, yuqueServer, feishuServer } = apiDatasetServer;
return {
apiServer: apiServer
? {
baseUrl: apiServer.baseUrl,
authorization: '',
basePath: apiServer.basePath
}
: undefined,
yuqueServer: yuqueServer
? {
userId: yuqueServer.userId,
token: '',
basePath: yuqueServer.basePath
}
: undefined,
feishuServer: feishuServer
? {
appId: feishuServer.appId,
appSecret: '',
folderToken: feishuServer.folderToken
}
: undefined
};
};

View File

@@ -6,45 +6,80 @@ export enum DatasetTypeEnum {
dataset = 'dataset',
websiteDataset = 'websiteDataset', // deep link
externalFile = 'externalFile',
apiDataset = 'apiDataset',
feishu = 'feishu',
yuque = 'yuque'
}
export const DatasetTypeMap = {
// @ts-ignore
export const ApiDatasetTypeMap: Record<
`${DatasetTypeEnum}`,
{
icon: string;
avatar: string;
label: any;
collectionLabel: string;
courseUrl?: string;
}
> = {
[DatasetTypeEnum.apiDataset]: {
icon: 'core/dataset/externalDatasetOutline',
avatar: 'core/dataset/externalDatasetColor',
label: i18nT('dataset:api_file'),
collectionLabel: i18nT('common:File'),
courseUrl: '/docs/guide/knowledge_base/api_dataset/'
},
[DatasetTypeEnum.feishu]: {
icon: 'core/dataset/feishuDatasetOutline',
avatar: 'core/dataset/feishuDatasetColor',
label: i18nT('dataset:feishu_dataset'),
collectionLabel: i18nT('common:File'),
courseUrl: '/docs/guide/knowledge_base/lark_dataset/'
},
[DatasetTypeEnum.yuque]: {
icon: 'core/dataset/yuqueDatasetOutline',
avatar: 'core/dataset/yuqueDatasetColor',
label: i18nT('dataset:yuque_dataset'),
collectionLabel: i18nT('common:File'),
courseUrl: '/docs/guide/knowledge_base/yuque_dataset/'
}
};
export const DatasetTypeMap: Record<
`${DatasetTypeEnum}`,
{
icon: string;
avatar: string;
label: any;
collectionLabel: string;
courseUrl?: string;
}
> = {
...ApiDatasetTypeMap,
[DatasetTypeEnum.folder]: {
icon: 'common/folderFill',
avatar: 'common/folderFill',
label: i18nT('dataset:folder_dataset'),
collectionLabel: i18nT('common:Folder')
},
[DatasetTypeEnum.dataset]: {
icon: 'core/dataset/commonDatasetOutline',
avatar: 'core/dataset/commonDatasetColor',
label: i18nT('dataset:common_dataset'),
collectionLabel: i18nT('common:File')
},
[DatasetTypeEnum.websiteDataset]: {
icon: 'core/dataset/websiteDatasetOutline',
avatar: 'core/dataset/websiteDatasetColor',
label: i18nT('dataset:website_dataset'),
collectionLabel: i18nT('common:Website')
collectionLabel: i18nT('common:Website'),
courseUrl: '/docs/guide/knowledge_base/websync/'
},
[DatasetTypeEnum.externalFile]: {
icon: 'core/dataset/externalDatasetOutline',
avatar: 'core/dataset/externalDatasetColor',
label: i18nT('dataset:external_file'),
collectionLabel: i18nT('common:File')
},
[DatasetTypeEnum.apiDataset]: {
icon: 'core/dataset/externalDatasetOutline',
label: i18nT('dataset:api_file'),
collectionLabel: i18nT('common:File')
},
[DatasetTypeEnum.feishu]: {
icon: 'core/dataset/feishuDatasetOutline',
label: i18nT('dataset:feishu_dataset'),
collectionLabel: i18nT('common:File')
},
[DatasetTypeEnum.yuque]: {
icon: 'core/dataset/yuqueDatasetOutline',
label: i18nT('dataset:yuque_dataset'),
collectionLabel: i18nT('common:File')
}
};
@@ -77,7 +112,8 @@ export enum DatasetCollectionTypeEnum {
file = 'file',
link = 'link', // one link
externalFile = 'externalFile',
apiFile = 'apiFile'
apiFile = 'apiFile',
images = 'images'
}
export const DatasetCollectionTypeMap = {
[DatasetCollectionTypeEnum.folder]: {
@@ -97,6 +133,9 @@ export const DatasetCollectionTypeMap = {
},
[DatasetCollectionTypeEnum.apiFile]: {
name: i18nT('common:core.dataset.apiFile')
},
[DatasetCollectionTypeEnum.images]: {
name: i18nT('dataset:core.dataset.Image collection')
}
};
@@ -120,6 +159,9 @@ export const DatasetCollectionSyncResultMap = {
export enum DatasetCollectionDataProcessModeEnum {
chunk = 'chunk',
qa = 'qa',
imageParse = 'imageParse',
backup = 'backup',
auto = 'auto' // abandon
}
export const DatasetCollectionDataProcessModeMap = {
@@ -131,21 +173,39 @@ export const DatasetCollectionDataProcessModeMap = {
label: i18nT('common:core.dataset.training.QA mode'),
tooltip: i18nT('common:core.dataset.import.QA Import Tip')
},
[DatasetCollectionDataProcessModeEnum.imageParse]: {
label: i18nT('dataset:training.Image mode'),
tooltip: i18nT('common:core.dataset.import.Chunk Split Tip')
},
[DatasetCollectionDataProcessModeEnum.backup]: {
label: i18nT('dataset:backup_mode'),
tooltip: i18nT('dataset:backup_mode')
},
[DatasetCollectionDataProcessModeEnum.auto]: {
label: i18nT('common:core.dataset.training.Auto mode'),
tooltip: i18nT('common:core.dataset.training.Auto mode Tip')
}
};
export enum ChunkTriggerConfigTypeEnum {
minSize = 'minSize',
forceChunk = 'forceChunk',
maxSize = 'maxSize'
}
export enum ChunkSettingModeEnum {
auto = 'auto',
custom = 'custom'
}
export enum DataChunkSplitModeEnum {
paragraph = 'paragraph',
size = 'size',
char = 'char'
}
export enum ParagraphChunkAIModeEnum {
auto = 'auto',
force = 'force'
}
/* ------------ data -------------- */
@@ -154,17 +214,18 @@ export enum ImportDataSourceEnum {
fileLocal = 'fileLocal',
fileLink = 'fileLink',
fileCustom = 'fileCustom',
csvTable = 'csvTable',
externalFile = 'externalFile',
apiDataset = 'apiDataset',
reTraining = 'reTraining'
reTraining = 'reTraining',
imageDataset = 'imageDataset'
}
export enum TrainingModeEnum {
chunk = 'chunk',
qa = 'qa',
auto = 'auto',
image = 'image'
image = 'image',
imageParse = 'imageParse'
}
/* ------------ search -------------- */

View File

@@ -8,17 +8,19 @@ export type CreateDatasetDataProps = {
chunkIndex?: number;
q: string;
a?: string;
imageId?: string;
indexes?: Omit<DatasetDataIndexItemType, 'dataId'>[];
};
export type UpdateDatasetDataProps = {
dataId: string;
q?: string;
q: string;
a?: string;
indexes?: (Omit<DatasetDataIndexItemType, 'dataId'> & {
dataId?: string; // pg data id
})[];
imageId?: string;
};
export type PatchIndexesProps =

View File

@@ -32,7 +32,7 @@ export const DatasetDataIndexMap: Record<
color: 'red'
},
[DatasetDataIndexTypeEnum.image]: {
label: i18nT('common:data_index_image'),
label: i18nT('dataset:data_index_image'),
color: 'purple'
}
};

View File

@@ -0,0 +1,13 @@
export type DatasetImageSchema = {
_id: string;
teamId: string;
datasetId: string;
collectionId?: string;
name: string;
contentType: string;
size: number;
metadata?: Record<string, any>;
expiredTime?: Date;
createdAt: Date;
updatedAt: Date;
};

View File

@@ -118,9 +118,8 @@ export const computeChunkSize = (params: {
return getLLMMaxChunkSize(params.llmModel);
}
return Math.min(params.chunkSize || chunkAutoChunkSize, getLLMMaxChunkSize(params.llmModel));
return Math.min(params.chunkSize ?? chunkAutoChunkSize, getLLMMaxChunkSize(params.llmModel));
};
export const computeChunkSplitter = (params: {
chunkSettingMode?: ChunkSettingModeEnum;
chunkSplitMode?: DataChunkSplitModeEnum;
@@ -129,8 +128,21 @@ export const computeChunkSplitter = (params: {
if (params.chunkSettingMode === ChunkSettingModeEnum.auto) {
return undefined;
}
if (params.chunkSplitMode === DataChunkSplitModeEnum.size) {
if (params.chunkSplitMode !== DataChunkSplitModeEnum.char) {
return undefined;
}
return params.chunkSplitter;
};
export const computeParagraphChunkDeep = (params: {
chunkSettingMode?: ChunkSettingModeEnum;
chunkSplitMode?: DataChunkSplitModeEnum;
paragraphChunkDeep?: number;
}) => {
if (params.chunkSettingMode === ChunkSettingModeEnum.auto) {
return 5;
}
if (params.chunkSplitMode === DataChunkSplitModeEnum.paragraph) {
return params.paragraphChunkDeep;
}
return 0;
};
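For reference, a quick sketch of how the three branches above behave, relying on the function and the enums from constants.ts shown elsewhere in this diff:
```typescript
// Behavior check of computeParagraphChunkDeep under its three branches.
computeParagraphChunkDeep({ chunkSettingMode: ChunkSettingModeEnum.auto }); // -> 5
computeParagraphChunkDeep({
  chunkSettingMode: ChunkSettingModeEnum.custom,
  chunkSplitMode: DataChunkSplitModeEnum.paragraph,
  paragraphChunkDeep: 3
}); // -> 3
computeParagraphChunkDeep({ chunkSettingMode: ChunkSettingModeEnum.custom }); // -> 0
```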

View File

@@ -8,32 +8,54 @@ import type {
DatasetStatusEnum,
DatasetTypeEnum,
SearchScoreTypeEnum,
TrainingModeEnum
TrainingModeEnum,
ChunkSettingModeEnum,
ChunkTriggerConfigTypeEnum
} from './constants';
import type { DatasetPermission } from '../../support/permission/dataset/controller';
import { Permission } from '../../support/permission/controller';
import type { APIFileServer, FeishuServer, YuqueServer } from './apiDataset';
import type {
ApiDatasetServerType,
APIFileServer,
FeishuServer,
YuqueServer
} from './apiDataset/type';
import type { SourceMemberType } from 'support/user/type';
import type { DatasetDataIndexTypeEnum } from './data/constants';
import type { ChunkSettingModeEnum } from './constants';
import type { ParentIdType } from 'common/parentFolder/type';
export type ChunkSettingsType = {
trainingType: DatasetCollectionDataProcessModeEnum;
autoIndexes?: boolean;
trainingType?: DatasetCollectionDataProcessModeEnum;
// Chunk trigger
chunkTriggerType?: ChunkTriggerConfigTypeEnum;
chunkTriggerMinSize?: number; // maxSize from agent model, not store
// Data enhance
dataEnhanceCollectionName?: boolean; // Auto add collection name to data
// Index enhance
imageIndex?: boolean;
autoIndexes?: boolean;
chunkSettingMode?: ChunkSettingModeEnum;
// Chunk setting
chunkSettingMode?: ChunkSettingModeEnum; // system defaults / custom settings
chunkSplitMode?: DataChunkSplitModeEnum;
chunkSize?: number;
// Paragraph split
paragraphChunkAIMode?: ParagraphChunkAIModeEnum;
paragraphChunkDeep?: number; // Paragraph deep
paragraphChunkMinSize?: number; // Paragraph min size, if too small, it will merge
// Size split
chunkSize?: number; // chunk/qa chunk size, Paragraph max chunk size.
// Char split
chunkSplitter?: string; // chunk/qa chunk splitter
indexSize?: number;
chunkSplitter?: string;
qaPrompt?: string;
};
export type DatasetSchemaType = {
_id: string;
parentId?: string;
parentId: ParentIdType;
userId: string;
teamId: string;
tmbId: string;
@@ -56,17 +78,19 @@ export type DatasetSchemaType = {
chunkSettings?: ChunkSettingsType;
inheritPermission: boolean;
apiServer?: APIFileServer;
feishuServer?: FeishuServer;
yuqueServer?: YuqueServer;
apiDatasetServer?: ApiDatasetServerType;
// abandon
autoSync?: boolean;
externalReadUrl?: string;
defaultPermission?: number;
apiServer?: APIFileServer;
feishuServer?: FeishuServer;
yuqueServer?: YuqueServer;
};
export type DatasetCollectionSchemaType = {
export type DatasetCollectionSchemaType = ChunkSettingsType & {
_id: string;
teamId: string;
tmbId: string;
@@ -101,18 +125,7 @@ export type DatasetCollectionSchemaType = {
// Parse settings
customPdfParse?: boolean;
// Chunk settings
autoIndexes?: boolean;
imageIndex?: boolean;
trainingType: DatasetCollectionDataProcessModeEnum;
chunkSettingMode?: ChunkSettingModeEnum;
chunkSplitMode?: DataChunkSplitModeEnum;
chunkSize?: number;
indexSize?: number;
chunkSplitter?: string;
qaPrompt?: string;
};
export type DatasetCollectionTagsSchemaType = {
@@ -127,7 +140,13 @@ export type DatasetDataIndexItemType = {
dataId: string; // pg data id
text: string;
};
export type DatasetDataSchemaType = {
export type DatasetDataFieldType = {
q: string; // large chunks or question
a?: string; // answer or custom content
imageId?: string;
};
export type DatasetDataSchemaType = DatasetDataFieldType & {
_id: string;
userId: string;
teamId: string;
@@ -136,13 +155,9 @@ export type DatasetDataSchemaType = {
collectionId: string;
chunkIndex: number;
updateTime: Date;
q: string; // large chunks or question
a: string; // answer or custom content
history?: {
q: string;
a: string;
history?: (DatasetDataFieldType & {
updateTime: Date;
}[];
})[];
forbid?: boolean;
fullTextToken: string;
indexes: DatasetDataIndexItemType[];
@@ -174,7 +189,9 @@ export type DatasetTrainingSchemaType = {
dataId?: string;
q: string;
a: string;
imageId?: string;
chunkIndex: number;
indexSize?: number;
weight: number;
indexes: Omit<DatasetDataIndexItemType, 'dataId'>[];
retryCount: number;
@@ -238,20 +255,18 @@ export type DatasetCollectionItemType = CollectionWithDatasetType & {
};
/* ================= data ===================== */
export type DatasetDataItemType = {
export type DatasetDataItemType = DatasetDataFieldType & {
id: string;
teamId: string;
datasetId: string;
imagePreivewUrl?: string;
updateTime: Date;
collectionId: string;
sourceName: string;
sourceId?: string;
q: string;
a: string;
chunkIndex: number;
indexes: DatasetDataIndexItemType[];
isOwner: boolean;
// permission: DatasetPermission;
};
/* --------------- file ---------------------- */
@@ -278,3 +293,14 @@ export type SearchDataResponseItemType = Omit<
score: { type: `${SearchScoreTypeEnum}`; value: number; index: number }[];
// score: number;
};
export type DatasetCiteItemType = {
_id: string;
q: string;
a?: string;
imagePreivewUrl?: string;
history?: DatasetDataSchemaType['history'];
updateTime: DatasetDataSchemaType['updateTime'];
index: DatasetDataSchemaType['chunkIndex'];
updated?: boolean;
};

View File

@@ -2,10 +2,15 @@ import { TrainingModeEnum, DatasetCollectionTypeEnum } from './constants';
import { getFileIcon } from '../../common/file/icon';
import { strIsLink } from '../../common/string/tools';
export function getCollectionIcon(
type: DatasetCollectionTypeEnum = DatasetCollectionTypeEnum.file,
name = ''
) {
export function getCollectionIcon({
type = DatasetCollectionTypeEnum.file,
name = '',
sourceId
}: {
type?: DatasetCollectionTypeEnum;
name?: string;
sourceId?: string;
}) {
if (type === DatasetCollectionTypeEnum.folder) {
return 'common/folderFill';
}
@@ -15,7 +20,10 @@ export function getCollectionIcon(
if (type === DatasetCollectionTypeEnum.virtual) {
return 'file/fill/manual';
}
return getFileIcon(name);
if (type === DatasetCollectionTypeEnum.images) {
return 'core/dataset/imageFill';
}
return getSourceNameIcon({ sourceName: name, sourceId });
}
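With the switch to a single named-parameter object, call sites pass only the fields they have, and file-type collections now route through getSourceNameIcon so a link-like sourceId can also pick the icon. A minimal sketch of the refactored call sites (the argument values are hypothetical):

// Hypothetical call sites for the new object signature
const folderIcon = getCollectionIcon({ type: DatasetCollectionTypeEnum.folder }); // 'common/folderFill'
const imageIcon = getCollectionIcon({ type: DatasetCollectionTypeEnum.images }); // 'core/dataset/imageFill'
const fileIcon = getCollectionIcon({ name: 'report.pdf', sourceId: 'file-abc' }); // falls through to getSourceNameIcon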
export function getSourceNameIcon({
sourceName,
@@ -40,5 +48,6 @@ export function getSourceNameIcon({
export const predictDataLimitLength = (mode: TrainingModeEnum, data: any[]) => {
if (mode === TrainingModeEnum.qa) return data.length * 20;
if (mode === TrainingModeEnum.auto) return data.length * 5;
if (mode === TrainingModeEnum.image) return data.length * 2;
return data.length;
};
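The per-mode multipliers estimate how many training queue items one input row can expand into, so the new image mode reserves two slots per row. A quick sketch of the expected counts for a 10-row import (the last line assumes a chunk member exists for the default branch):

// Hypothetical spot-check of predictDataLimitLength for 10 rows
const rows = new Array(10).fill({});
predictDataLimitLength(TrainingModeEnum.qa, rows); // 200 (20 per row)
predictDataLimitLength(TrainingModeEnum.auto, rows); // 50 (5 per row)
predictDataLimitLength(TrainingModeEnum.image, rows); // 20 (2 per row, new in this diff)
predictDataLimitLength(TrainingModeEnum.chunk, rows); // 10 (default branch; assumed enum member)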

View File

@@ -59,7 +59,6 @@ export type FlowNodeCommonType = {
};
export type PluginDataType = {
version?: string;
diagram?: string;
userGuide?: string;
courseUrl?: string;
@@ -126,6 +125,7 @@ export type FlowNodeItemType = FlowNodeTemplateType & {
nodeId: string;
parentNodeId?: string;
isError?: boolean;
searchedText?: string;
debugResult?: {
status: 'running' | 'success' | 'skipped' | 'failed';
message?: string;

View File

@@ -1,4 +1,5 @@
export enum OperationLogEventEnum {
//Team
LOGIN = 'LOGIN',
CREATE_INVITATION_LINK = 'CREATE_INVITATION_LINK',
JOIN_TEAM = 'JOIN_TEAM',
@@ -11,5 +12,52 @@ export enum OperationLogEventEnum {
RELOCATE_DEPARTMENT = 'RELOCATE_DEPARTMENT',
CREATE_GROUP = 'CREATE_GROUP',
DELETE_GROUP = 'DELETE_GROUP',
ASSIGN_PERMISSION = 'ASSIGN_PERMISSION'
ASSIGN_PERMISSION = 'ASSIGN_PERMISSION',
//APP
CREATE_APP = 'CREATE_APP',
UPDATE_APP_INFO = 'UPDATE_APP_INFO',
MOVE_APP = 'MOVE_APP',
DELETE_APP = 'DELETE_APP',
UPDATE_APP_COLLABORATOR = 'UPDATE_APP_COLLABORATOR',
DELETE_APP_COLLABORATOR = 'DELETE_APP_COLLABORATOR',
TRANSFER_APP_OWNERSHIP = 'TRANSFER_APP_OWNERSHIP',
CREATE_APP_COPY = 'CREATE_APP_COPY',
CREATE_APP_FOLDER = 'CREATE_APP_FOLDER',
UPDATE_PUBLISH_APP = 'UPDATE_PUBLISH_APP',
CREATE_APP_PUBLISH_CHANNEL = 'CREATE_APP_PUBLISH_CHANNEL',
UPDATE_APP_PUBLISH_CHANNEL = 'UPDATE_APP_PUBLISH_CHANNEL',
DELETE_APP_PUBLISH_CHANNEL = 'DELETE_APP_PUBLISH_CHANNEL',
EXPORT_APP_CHAT_LOG = 'EXPORT_APP_CHAT_LOG',
//Dataset
CREATE_DATASET = 'CREATE_DATASET',
UPDATE_DATASET = 'UPDATE_DATASET',
DELETE_DATASET = 'DELETE_DATASET',
MOVE_DATASET = 'MOVE_DATASET',
UPDATE_DATASET_COLLABORATOR = 'UPDATE_DATASET_COLLABORATOR',
DELETE_DATASET_COLLABORATOR = 'DELETE_DATASET_COLLABORATOR',
TRANSFER_DATASET_OWNERSHIP = 'TRANSFER_DATASET_OWNERSHIP',
EXPORT_DATASET = 'EXPORT_DATASET',
CREATE_DATASET_FOLDER = 'CREATE_DATASET_FOLDER',
//Collection
CREATE_COLLECTION = 'CREATE_COLLECTION',
UPDATE_COLLECTION = 'UPDATE_COLLECTION',
DELETE_COLLECTION = 'DELETE_COLLECTION',
RETRAIN_COLLECTION = 'RETRAIN_COLLECTION',
//Data
CREATE_DATA = 'CREATE_DATA',
UPDATE_DATA = 'UPDATE_DATA',
DELETE_DATA = 'DELETE_DATA',
//SearchTest
SEARCH_TEST = 'SEARCH_TEST',
//Account
CHANGE_PASSWORD = 'CHANGE_PASSWORD',
CHANGE_NOTIFICATION_SETTINGS = 'CHANGE_NOTIFICATION_SETTINGS',
CHANGE_MEMBER_NAME_ACCOUNT = 'CHANGE_MEMBER_NAME_ACCOUNT',
PURCHASE_PLAN = 'PURCHASE_PLAN',
EXPORT_BILL_RECORDS = 'EXPORT_BILL_RECORDS',
CREATE_INVOICE = 'CREATE_INVOICE',
SET_INVOICE_HEADER = 'SET_INVOICE_HEADER',
CREATE_API_KEY = 'CREATE_API_KEY',
UPDATE_API_KEY = 'UPDATE_API_KEY',
DELETE_API_KEY = 'DELETE_API_KEY'
}

View File

@@ -13,6 +13,7 @@ const staticPluginList = [
'WeWorkWebhook',
'google',
'bing',
'bocha',
'delay'
];
// Run in worker thread (Have npm packages)

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4816",
"name": "钉钉 webhook",
"avatar": "plugins/dingding",
"intro": "向钉钉机器人发起 webhook 请求。",

View File

@@ -1,6 +1,5 @@
{
"author": "Menghuan1918",
"version": "488",
"name": "PDF识别",
"avatar": "plugins/doc2x",
"intro": "将PDF文件发送至Doc2X进行解析返回结构化的LaTeX公式的文本(markdown)支持传入String类型的URL或者流程输出中的文件链接变量",

View File

@@ -1,6 +1,5 @@
{
"author": "Menghuan1918",
"version": "488",
"name": "Doc2X服务",
"avatar": "plugins/doc2x",
"intro": "将传入的图片或PDF文件发送至Doc2X进行解析返回带LaTeX公式的markdown格式的文本。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4816",
"name": "企业微信 webhook",
"avatar": "plugins/qiwei",
"intro": "向企业微信机器人发起 webhook 请求。只能内部群使用。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4811",
"name": "Bing搜索",
"avatar": "core/workflow/template/bing",
"intro": "在Bing中搜索。",

View File

@@ -0,0 +1,677 @@
{
"author": "",
"name": "博查搜索",
"avatar": "core/workflow/template/bocha",
"intro": "使用博查AI搜索引擎进行网络搜索。",
"showStatus": true,
"weight": 10,
"courseUrl": "",
"isTool": true,
"templateType": "search",
"workflow": {
"nodes": [
{
"nodeId": "pluginInput",
"name": "workflow:template.plugin_start",
"intro": "workflow:intro_plugin_input",
"avatar": "core/workflow/template/workflowStart",
"flowNodeType": "pluginInput",
"showStatus": false,
"position": {
"x": 636.3048409085379,
"y": -238.61714728578016
},
"version": "481",
"inputs": [
{
"renderTypeList": [
"input"
],
"selectedTypeIndex": 0,
"valueType": "string",
"canEdit": true,
"key": "apiKey",
"label": "apiKey",
"description": "博查API密钥",
"defaultValue": "",
"required": true
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "string",
"canEdit": true,
"key": "query",
"label": "query",
"description": "搜索查询词",
"defaultValue": "",
"required": true,
"toolDescription": "搜索查询词"
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "string",
"canEdit": true,
"key": "freshness",
"label": "freshness",
"description": "搜索指定时间范围内的网页。可填值oneDay(一天内)、oneWeek(一周内)、oneMonth(一个月内)、oneYear(一年内)、noLimit(不限,默认)、YYYY-MM-DD..YYYY-MM-DD(日期范围)、YYYY-MM-DD(指定日期)",
"defaultValue": "noLimit",
"required": false,
"toolDescription": "搜索时间范围"
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "boolean",
"canEdit": true,
"key": "summary",
"label": "summary",
"description": "是否显示文本摘要。true显示false不显示(默认)",
"defaultValue": false,
"required": false,
"toolDescription": "是否显示文本摘要"
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "string",
"canEdit": true,
"key": "include",
"label": "include",
"description": "指定搜索的site范围。多个域名使用|或,分隔最多20个。例如qq.com|m.163.com",
"defaultValue": "",
"required": false,
"toolDescription": "指定搜索的site范围"
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "string",
"canEdit": true,
"key": "exclude",
"label": "exclude",
"description": "排除搜索的网站范围。多个域名使用|或,分隔最多20个。例如qq.com|m.163.com",
"defaultValue": "",
"required": false,
"toolDescription": "排除搜索的网站范围"
},
{
"renderTypeList": [
"input",
"reference"
],
"selectedTypeIndex": 0,
"valueType": "number",
"canEdit": true,
"key": "count",
"label": "count",
"description": "返回结果的条数。可填范围1-50默认为10",
"defaultValue": 10,
"required": false,
"min": 1,
"max": 50,
"toolDescription": "返回结果条数"
}
],
"outputs": [
{
"id": "apiKey",
"valueType": "string",
"key": "apiKey",
"label": "apiKey",
"type": "hidden"
},
{
"id": "query",
"valueType": "string",
"key": "query",
"label": "query",
"type": "hidden"
},
{
"id": "freshness",
"valueType": "string",
"key": "freshness",
"label": "freshness",
"type": "hidden"
},
{
"id": "summary",
"valueType": "boolean",
"key": "summary",
"label": "summary",
"type": "hidden"
},
{
"id": "include",
"valueType": "string",
"key": "include",
"label": "include",
"type": "hidden"
},
{
"id": "exclude",
"valueType": "string",
"key": "exclude",
"label": "exclude",
"type": "hidden"
},
{
"id": "count",
"valueType": "number",
"key": "count",
"label": "count",
"type": "hidden"
}
]
},
{
"nodeId": "pluginOutput",
"name": "common:core.module.template.self_output",
"intro": "workflow:intro_custom_plugin_output",
"avatar": "core/workflow/template/pluginOutput",
"flowNodeType": "pluginOutput",
"showStatus": false,
"position": {
"x": 2764.1105686698083,
"y": -30.617147285780163
},
"version": "481",
"inputs": [
{
"renderTypeList": [
"reference"
],
"valueType": "object",
"canEdit": true,
"key": "result",
"label": "result",
"isToolOutput": true,
"description": "",
"value": [
"nyA6oA8mF1iW",
"httpRawResponse"
]
}
],
"outputs": []
},
{
"nodeId": "pluginConfig",
"name": "common:core.module.template.system_config",
"intro": "",
"avatar": "core/workflow/template/systemConfig",
"flowNodeType": "pluginConfig",
"position": {
"x": 184.66337662472682,
"y": -216.05298493910115
},
"version": "4811",
"inputs": [],
"outputs": []
},
{
"nodeId": "nyA6oA8mF1iW",
"name": "HTTP 请求",
"intro": "调用博查搜索API",
"avatar": "core/workflow/template/httpRequest",
"flowNodeType": "httpRequest468",
"showStatus": true,
"position": {
"x": 1335.0647252518884,
"y": -455.9043948565971
},
"version": "481",
"inputs": [
{
"key": "system_addInputParam",
"renderTypeList": [
"addInputParam"
],
"valueType": "dynamic",
"label": "",
"required": false,
"description": "common:core.module.input.description.HTTP Dynamic Input",
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectDataset",
"selectApp"
],
"showDescription": false,
"showDefaultValue": true
},
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpMethod",
"renderTypeList": [
"custom"
],
"valueType": "string",
"label": "",
"value": "POST",
"required": true,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpTimeout",
"renderTypeList": [
"custom"
],
"valueType": "number",
"label": "",
"value": 30,
"min": 5,
"max": 600,
"required": true,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpReqUrl",
"renderTypeList": [
"hidden"
],
"valueType": "string",
"label": "",
"description": "common:core.module.input.description.Http Request Url",
"placeholder": "https://api.ai.com/getInventory",
"required": false,
"value": "https://api.bochaai.com/v1/web-search",
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpHeader",
"renderTypeList": [
"custom"
],
"valueType": "any",
"value": [
{
"key": "Authorization",
"type": "string",
"value": "Bearer {{$pluginInput.apiKey$}}"
},
{
"key": "Content-Type",
"type": "string",
"value": "application/json"
}
],
"label": "",
"description": "common:core.module.input.description.Http Request Header",
"placeholder": "common:core.module.input.description.Http Request Header",
"required": false,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpParams",
"renderTypeList": [
"hidden"
],
"valueType": "any",
"value": [],
"label": "",
"required": false,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpJsonBody",
"renderTypeList": [
"hidden"
],
"valueType": "any",
"value": "{\n \"query\": \"{{query}}\",\n \"freshness\": \"{{freshness}}\",\n \"summary\": {{summary}},\n \"include\": \"{{include}}\",\n \"exclude\": \"{{exclude}}\",\n \"count\": {{count}}\n}",
"label": "",
"required": false,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpFormBody",
"renderTypeList": [
"hidden"
],
"valueType": "any",
"value": [],
"label": "",
"required": false,
"debugLabel": "",
"toolDescription": ""
},
{
"key": "system_httpContentType",
"renderTypeList": [
"hidden"
],
"valueType": "string",
"value": "json",
"label": "",
"required": false,
"debugLabel": "",
"toolDescription": ""
},
{
"valueType": "string",
"renderTypeList": [
"reference"
],
"key": "query",
"label": "query",
"toolDescription": "博查搜索检索词",
"required": true,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"query"
]
},
{
"valueType": "string",
"renderTypeList": [
"reference"
],
"key": "freshness",
"label": "freshness",
"toolDescription": "搜索时间范围",
"required": false,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"freshness"
]
},
{
"valueType": "boolean",
"renderTypeList": [
"reference"
],
"key": "summary",
"label": "summary",
"toolDescription": "是否显示文本摘要",
"required": false,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"summary"
]
},
{
"valueType": "string",
"renderTypeList": [
"reference"
],
"key": "include",
"label": "include",
"toolDescription": "指定搜索的site范围",
"required": false,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"include"
]
},
{
"valueType": "string",
"renderTypeList": [
"reference"
],
"key": "exclude",
"label": "exclude",
"toolDescription": "排除搜索的网站范围",
"required": false,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"exclude"
]
},
{
"valueType": "number",
"renderTypeList": [
"reference"
],
"key": "count",
"label": "count",
"toolDescription": "返回结果条数",
"required": false,
"canEdit": true,
"editField": {
"key": true,
"description": true
},
"customInputConfig": {
"selectValueTypeList": [
"string",
"number",
"boolean",
"object",
"arrayString",
"arrayNumber",
"arrayBoolean",
"arrayObject",
"arrayAny",
"any",
"chatHistory",
"datasetQuote",
"dynamic",
"selectApp",
"selectDataset"
],
"showDescription": false,
"showDefaultValue": true
},
"value": [
"pluginInput",
"count"
]
}
],
"outputs": [
{
"id": "error",
"key": "error",
"label": "workflow:request_error",
"description": "HTTP请求错误信息成功时返回空",
"valueType": "object",
"type": "static"
},
{
"id": "httpRawResponse",
"key": "httpRawResponse",
"required": true,
"label": "workflow:raw_response",
"description": "HTTP请求的原始响应。只能接受字符串或JSON类型响应数据。",
"valueType": "any",
"type": "static"
},
{
"id": "system_addOutputParam",
"key": "system_addOutputParam",
"type": "dynamic",
"valueType": "dynamic",
"label": "",
"editField": {
"key": true,
"valueType": true
}
}
]
}
],
"edges": [
{
"source": "pluginInput",
"target": "nyA6oA8mF1iW",
"sourceHandle": "pluginInput-source-right",
"targetHandle": "nyA6oA8mF1iW-target-left"
},
{
"source": "nyA6oA8mF1iW",
"target": "pluginOutput",
"sourceHandle": "nyA6oA8mF1iW-source-right",
"targetHandle": "pluginOutput-target-left"
}
]
},
"chatConfig": {}
}
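At runtime the {{...}} placeholders in system_httpJsonBody and the Authorization header are filled from the plugin inputs before the HTTP node fires. A hand-written sketch of the equivalent request (the field values are examples, not part of the template):

// Hypothetical direct equivalent of the workflow's HTTP node
const apiKey = process.env.BOCHA_API_KEY ?? ''; // maps to the apiKey plugin input
const res = await fetch('https://api.bochaai.com/v1/web-search', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: 'FastGPT', // required search term
    freshness: 'noLimit', // oneDay | oneWeek | oneMonth | oneYear | noLimit | date or date range
    summary: false, // whether to include text summaries
    include: '', // restrict to sites, e.g. 'qq.com|m.163.com', up to 20 domains
    exclude: '', // sites to exclude, same format
    count: 10 // 1-50 results
  })
});
const result = await res.json(); // surfaced through the plugin's httpRawResponse output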

View File

@@ -1,6 +1,5 @@
{
"author": "silencezhang",
"version": "4811",
"name": "数据库连接",
"avatar": "core/workflow/template/datasource",
"intro": "可连接常用数据库并执行sql",

View File

@@ -1,6 +1,5 @@
{
"author": "collin",
"version": "4817",
"name": "流程等待",
"avatar": "core/workflow/template/sleep",
"intro": "让工作流等待指定时间后运行",

View File

@@ -1,6 +1,5 @@
{
"author": "silencezhang",
"version": "4817",
"name": "基础图表",
"avatar": "core/workflow/template/baseChart",
"intro": "根据数据生成图表可根据chartType生成柱状图折线图饼图",

View File

@@ -1,6 +1,5 @@
{
"author": "silencezhang",
"version": "486",
"name": "BI图表功能",
"avatar": "core/workflow/template/BI",
"intro": "BI图表功能可以生成一些常用的图表如饼图柱状图折线图等",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "DuckDuckGo 网络搜索",
"avatar": "core/workflow/template/duckduckgo",
"intro": "使用 DuckDuckGo 进行网络搜索",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "DuckDuckGo 图片搜索",
"avatar": "core/workflow/template/duckduckgo",
"intro": "使用 DuckDuckGo 进行图片搜索",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "DuckDuckGo 新闻检索",
"avatar": "core/workflow/template/duckduckgo",
"intro": "使用 DuckDuckGo 进行新闻检索",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "DuckDuckGo 视频搜索",
"avatar": "core/workflow/template/duckduckgo",
"intro": "使用 DuckDuckGo 进行视频搜索",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "DuckDuckGo服务",
"avatar": "core/workflow/template/duckduckgo",
"intro": "DuckDuckGo 服务,包含网络搜索、图片搜索、新闻搜索等。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "488",
"name": "飞书 webhook",
"avatar": "core/app/templates/plugin-feishu",
"intro": "向飞书机器人发起 webhook 请求。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "网页内容抓取",
"avatar": "core/workflow/template/fetchUrl",
"intro": "可获取一个网页链接内容,并以 Markdown 格式输出,仅支持获取静态网站。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "481",
"templateType": "tools",
"name": "获取当前时间",
"avatar": "core/workflow/template/getTime",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4811",
"name": "Google搜索",
"avatar": "core/workflow/template/google",
"intro": "在google中搜索。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "486",
"name": "数学公式执行",
"avatar": "core/workflow/template/mathCall",
"intro": "用于执行数学表达式的工具,通过 js 的 expr-eval 库运行表达式并返回结果。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4816",
"name": "Search XNG 搜索",
"avatar": "core/workflow/template/searxng",
"intro": "使用 Search XNG 服务进行搜索。",

View File

@@ -1,6 +1,5 @@
{
"author": "cloudpense",
"version": "1.0.0",
"name": "Email 邮件发送",
"avatar": "plugins/email",
"intro": "通过SMTP协议发送电子邮件(nodemailer)",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "489",
"name": "文本加工",
"avatar": "/imgs/workflow/textEditor.svg",
"intro": "可对固定或传入的文本进行加工后输出,非字符串类型数据最终会转成字符串类型。",

View File

@@ -1,6 +1,5 @@
{
"author": "",
"version": "4811",
"name": "Wiki搜索",
"avatar": "core/workflow/template/wiki",
"intro": "在Wiki中查询释义。",

View File

@@ -1,17 +1,14 @@
import type { ApiDatasetDetailResponse } from '@fastgpt/global/core/dataset/apiDataset';
import { FeishuServer, YuqueServer } from '@fastgpt/global/core/dataset/apiDataset';
import type {
ApiDatasetDetailResponse,
FeishuServer,
YuqueServer
} from '@fastgpt/global/core/dataset/apiDataset/type';
import type {
DeepRagSearchProps,
SearchDatasetDataResponse
} from '../../core/dataset/search/controller';
import type { AuthOpenApiLimitProps } from '../../support/openapi/auth';
import type { CreateUsageProps, ConcatUsageProps } from '@fastgpt/global/support/wallet/usage/api';
import type {
GetProApiDatasetFileContentParams,
GetProApiDatasetFileDetailParams,
GetProApiDatasetFileListParams,
GetProApiDatasetFilePreviewUrlParams
} from '../../core/dataset/apiDataset/proApi';
declare global {
var textCensorHandler: (params: { text: string }) => Promise<{ code: number; message?: string }>;
@@ -19,16 +16,4 @@ declare global {
var authOpenApiHandler: (data: AuthOpenApiLimitProps) => Promise<any>;
var createUsageHandler: (data: CreateUsageProps) => any;
var concatUsageHandler: (data: ConcatUsageProps) => any;
// API dataset
var getProApiDatasetFileList: (data: GetProApiDatasetFileListParams) => Promise<APIFileItem[]>;
var getProApiDatasetFileContent: (
data: GetProApiDatasetFileContentParams
) => Promise<ApiFileReadContentResponse>;
var getProApiDatasetFilePreviewUrl: (
data: GetProApiDatasetFilePreviewUrlParams
) => Promise<string>;
var getProApiDatasetFileDetail: (
data: GetProApiDatasetFileDetailParams
) => Promise<ApiDatasetDetailResponse>;
}

View File

@@ -0,0 +1,181 @@
import { retryFn } from '@fastgpt/global/common/system/utils';
import { connectionMongo } from '../../mongo';
import { MongoRawTextBufferSchema, bucketName } from './schema';
import { addLog } from '../../system/log';
import { setCron } from '../../system/cron';
import { checkTimerLock } from '../../system/timerLock/utils';
import { TimerIdEnum } from '../../system/timerLock/constants';
const getGridBucket = () => {
return new connectionMongo.mongo.GridFSBucket(connectionMongo.connection.db!, {
bucketName: bucketName
});
};
export const addRawTextBuffer = async ({
sourceId,
sourceName,
text,
expiredTime
}: {
sourceId: string;
sourceName: string;
text: string;
expiredTime: Date;
}) => {
const gridBucket = getGridBucket();
const metadata = {
sourceId,
sourceName,
expiredTime
};
const buffer = Buffer.from(text);
const fileSize = buffer.length;
// Chunk size: as large as possible, but no more than 14MB and no less than 128KB
const chunkSizeBytes = (() => {
// Ideal chunk size: file size ÷ target chunk count (10), with each chunk kept under 14MB
const idealChunkSize = Math.min(Math.ceil(fileSize / 10), 14 * 1024 * 1024);
// Ensure the chunk size is at least 128KB
const minChunkSize = 128 * 1024; // 128KB
// Take the larger of the ideal and minimum chunk sizes
let chunkSize = Math.max(idealChunkSize, minChunkSize);
// Round up to the nearest multiple of 64KB to keep chunks aligned
chunkSize = Math.ceil(chunkSize / (64 * 1024)) * (64 * 1024);
return chunkSize;
})();
const uploadStream = gridBucket.openUploadStream(sourceId, {
metadata,
chunkSizeBytes
});
return retryFn(async () => {
return new Promise((resolve, reject) => {
uploadStream.end(buffer);
uploadStream.on('finish', () => {
resolve(uploadStream.id);
});
uploadStream.on('error', (error) => {
addLog.error('addRawTextBuffer error', error);
resolve('');
});
});
});
};
export const getRawTextBuffer = async (sourceId: string) => {
const gridBucket = getGridBucket();
return retryFn(async () => {
const bufferData = await MongoRawTextBufferSchema.findOne(
{
'metadata.sourceId': sourceId
},
'_id metadata'
).lean();
if (!bufferData) {
return null;
}
// Read file content
const downloadStream = gridBucket.openDownloadStream(bufferData._id);
const chunks: Buffer[] = [];
return new Promise<{
text: string;
sourceName: string;
} | null>((resolve, reject) => {
downloadStream.on('data', (chunk) => {
chunks.push(chunk);
});
downloadStream.on('end', () => {
const buffer = Buffer.concat(chunks);
const text = buffer.toString('utf8');
resolve({
text,
sourceName: bufferData.metadata?.sourceName || ''
});
});
downloadStream.on('error', (error) => {
addLog.error('getRawTextBuffer error', error);
resolve(null);
});
});
});
};
export const deleteRawTextBuffer = async (sourceId: string): Promise<boolean> => {
const gridBucket = getGridBucket();
return retryFn(async () => {
const buffer = await MongoRawTextBufferSchema.findOne({ 'metadata.sourceId': sourceId });
if (!buffer) {
return false;
}
await gridBucket.delete(buffer._id);
return true;
});
};
export const updateRawTextBufferExpiredTime = async ({
sourceId,
expiredTime
}: {
sourceId: string;
expiredTime: Date;
}) => {
return retryFn(async () => {
return MongoRawTextBufferSchema.updateOne(
{ 'metadata.sourceId': sourceId },
{ $set: { 'metadata.expiredTime': expiredTime } }
);
});
};
export const clearExpiredRawTextBufferCron = async () => {
const gridBucket = getGridBucket();
const clearExpiredRawTextBuffer = async () => {
addLog.debug('Clear expired raw text buffer start');
const data = await MongoRawTextBufferSchema.find(
{
'metadata.expiredTime': { $lt: new Date() }
},
'_id'
).lean();
for (const item of data) {
try {
await gridBucket.delete(item._id);
} catch (error) {
addLog.error('Delete expired raw text buffer error', error);
}
}
addLog.debug('Clear expired raw text buffer end');
};
setCron('*/10 * * * *', async () => {
if (
await checkTimerLock({
timerId: TimerIdEnum.clearExpiredRawTextBuffer,
lockMinuted: 9
})
) {
try {
await clearExpiredRawTextBuffer();
} catch (error) {
addLog.error('clearExpiredRawTextBufferCron error', error);
}
}
});
};
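A minimal usage sketch of the new GridFS-backed buffer API; the sourceId key and 20-minute TTL mirror how readFileContentFromMongo uses it later in this diff:

// Hypothetical round trip through the raw-text buffer
import { addMinutes } from 'date-fns';
const fileId = '665af0...'; // hypothetical GridFS file id
const bufferId = `${fileId}-false`; // `${fileId}-${customPdfParse}`, as readFileContentFromMongo keys it
await addRawTextBuffer({
  sourceId: bufferId,
  sourceName: 'report.pdf',
  text: '...parsed raw text...',
  expiredTime: addMinutes(new Date(), 20) // swept by the 10-minute cleanup cron above
});
const hit = await getRawTextBuffer(bufferId); // { text, sourceName }, or null on miss
await deleteRawTextBuffer(bufferId); // true if a buffer existed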

View File

@@ -1,33 +1,22 @@
import { getMongoModel, Schema } from '../../mongo';
import { type RawTextBufferSchemaType } from './type';
import { getMongoModel, type Types, Schema } from '../../mongo';
export const collectionName = 'buffer_rawtexts';
export const bucketName = 'buffer_rawtext';
const RawTextBufferSchema = new Schema({
sourceId: {
type: String,
required: true
},
rawText: {
type: String,
default: ''
},
createTime: {
type: Date,
default: () => new Date()
},
metadata: Object
metadata: {
sourceId: { type: String, required: true },
sourceName: { type: String, required: true },
expiredTime: { type: Date, required: true }
}
});
RawTextBufferSchema.index({ 'metadata.sourceId': 'hashed' });
RawTextBufferSchema.index({ 'metadata.expiredTime': -1 });
try {
RawTextBufferSchema.index({ sourceId: 1 });
// 20 minutes
RawTextBufferSchema.index({ createTime: 1 }, { expireAfterSeconds: 20 * 60 });
} catch (error) {
console.log(error);
}
export const MongoRawTextBuffer = getMongoModel<RawTextBufferSchemaType>(
collectionName,
RawTextBufferSchema
);
export const MongoRawTextBufferSchema = getMongoModel<{
_id: Types.ObjectId;
metadata: {
sourceId: string;
sourceName: string;
expiredTime: Date;
};
}>(`${bucketName}.files`, RawTextBufferSchema);

View File

@@ -1,8 +0,0 @@
export type RawTextBufferSchemaType = {
sourceId: string;
rawText: string;
createTime: Date;
metadata?: {
filename: string;
};
};

View File

@@ -6,13 +6,14 @@ import { type DatasetFileSchema } from '@fastgpt/global/core/dataset/type';
import { MongoChatFileSchema, MongoDatasetFileSchema } from './schema';
import { detectFileEncoding, detectFileEncodingByPath } from '@fastgpt/global/common/file/tools';
import { CommonErrEnum } from '@fastgpt/global/common/error/code/common';
import { MongoRawTextBuffer } from '../../buffer/rawText/schema';
import { readRawContentByFileBuffer } from '../read/utils';
import { gridFsStream2Buffer, stream2Encoding } from './utils';
import { computeGridFsChunSize, gridFsStream2Buffer, stream2Encoding } from './utils';
import { addLog } from '../../system/log';
import { readFromSecondary } from '../../mongo/utils';
import { parseFileExtensionFromUrl } from '@fastgpt/global/common/string/tools';
import { Readable } from 'stream';
import { addRawTextBuffer, getRawTextBuffer } from '../../buffer/rawText/controller';
import { addMinutes } from 'date-fns';
import { retryFn } from '@fastgpt/global/common/system/utils';
export function getGFSCollection(bucket: `${BucketNameEnum}`) {
MongoDatasetFileSchema;
@@ -64,23 +65,7 @@ export async function uploadFile({
// create a gridfs bucket
const bucket = getGridBucket(bucketName);
const fileSize = stats.size;
// Chunk size: as large as possible, but no more than 14MB and no less than 512KB
const chunkSizeBytes = (() => {
// Ideal chunk size: file size ÷ target chunk count (10), with each chunk kept under 14MB
const idealChunkSize = Math.min(Math.ceil(fileSize / 10), 14 * 1024 * 1024);
// Ensure the chunk size is at least 512KB
const minChunkSize = 512 * 1024; // 512KB
// Take the larger of the ideal and minimum chunk sizes
let chunkSize = Math.max(idealChunkSize, minChunkSize);
// Round up to the nearest multiple of 64KB to keep chunks aligned
chunkSize = Math.ceil(chunkSize / (64 * 1024)) * (64 * 1024);
return chunkSize;
})();
const chunkSizeBytes = computeGridFsChunSize(stats.size);
const stream = bucket.openUploadStream(filename, {
metadata,
@@ -173,24 +158,18 @@ export async function getFileById({
export async function delFileByFileIdList({
bucketName,
fileIdList,
retry = 3
fileIdList
}: {
bucketName: `${BucketNameEnum}`;
fileIdList: string[];
retry?: number;
}): Promise<any> {
try {
return retryFn(async () => {
const bucket = getGridBucket(bucketName);
for await (const fileId of fileIdList) {
await bucket.delete(new Types.ObjectId(fileId));
}
} catch (error) {
if (retry > 0) {
return delFileByFileIdList({ bucketName, fileIdList, retry: retry - 1 });
}
}
});
}
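The hand-rolled retry recursion is replaced by the shared retryFn helper. Its implementation is not shown in this diff, but a plausible sketch of a helper with the shape used here (the retry count and backoff are assumptions):

// Hypothetical sketch of retryFn: re-run an async fn a few times before giving up
const retryFn = async <T>(fn: () => Promise<T>, retryTimes = 3): Promise<T> => {
  try {
    return await fn();
  } catch (error) {
    if (retryTimes > 0) {
      await new Promise((resolve) => setTimeout(resolve, 500)); // assumed fixed backoff
      return retryFn(fn, retryTimes - 1);
    }
    return Promise.reject(error);
  }
};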
export async function getDownloadStream({
@@ -210,28 +189,26 @@ export const readFileContentFromMongo = async ({
tmbId,
bucketName,
fileId,
isQAImport = false,
customPdfParse = false
customPdfParse = false,
getFormatText
}: {
teamId: string;
tmbId: string;
bucketName: `${BucketNameEnum}`;
fileId: string;
isQAImport?: boolean;
customPdfParse?: boolean;
getFormatText?: boolean; // convert the content to markdown format whenever possible
}): Promise<{
rawText: string;
filename: string;
}> => {
const bufferId = `${fileId}-${customPdfParse}`;
const bufferId = `${String(fileId)}-${customPdfParse}`;
// read buffer
const fileBuffer = await MongoRawTextBuffer.findOne({ sourceId: bufferId }, undefined, {
...readFromSecondary
}).lean();
const fileBuffer = await getRawTextBuffer(bufferId);
if (fileBuffer) {
return {
rawText: fileBuffer.rawText,
filename: fileBuffer.metadata?.filename || ''
rawText: fileBuffer.text,
filename: fileBuffer?.sourceName
};
}
@@ -254,8 +231,8 @@ export const readFileContentFromMongo = async ({
// Get raw text
const { rawText } = await readRawContentByFileBuffer({
customPdfParse,
getFormatText,
extension,
isQAImport,
teamId,
tmbId,
buffer: fileBuffers,
@@ -265,16 +242,13 @@ export const readFileContentFromMongo = async ({
}
});
// < 14M
if (fileBuffers.length < 14 * 1024 * 1024 && rawText.trim()) {
MongoRawTextBuffer.create({
sourceId: bufferId,
rawText,
metadata: {
filename: file.filename
}
});
}
// Add buffer
addRawTextBuffer({
sourceId: bufferId,
sourceName: file.filename,
text: rawText,
expiredTime: addMinutes(new Date(), 20)
});
return {
rawText,

View File

@@ -1,16 +1,16 @@
import { Schema, getMongoModel } from '../../mongo';
const DatasetFileSchema = new Schema({});
const ChatFileSchema = new Schema({});
const DatasetFileSchema = new Schema({
metadata: Object
});
const ChatFileSchema = new Schema({
metadata: Object
});
try {
DatasetFileSchema.index({ uploadDate: -1 });
DatasetFileSchema.index({ uploadDate: -1 });
ChatFileSchema.index({ uploadDate: -1 });
ChatFileSchema.index({ 'metadata.chatId': 1 });
} catch (error) {
console.log(error);
}
ChatFileSchema.index({ uploadDate: -1 });
ChatFileSchema.index({ 'metadata.chatId': 1 });
export const MongoDatasetFileSchema = getMongoModel('dataset.files', DatasetFileSchema);
export const MongoChatFileSchema = getMongoModel('chat.files', ChatFileSchema);

View File

@@ -1,5 +1,57 @@
import { detectFileEncoding } from '@fastgpt/global/common/file/tools';
import { PassThrough } from 'stream';
import { getGridBucket } from './controller';
import { type BucketNameEnum } from '@fastgpt/global/common/file/constants';
import { retryFn } from '@fastgpt/global/common/system/utils';
export const createFileFromText = async ({
bucket,
filename,
text,
metadata
}: {
bucket: `${BucketNameEnum}`;
filename: string;
text: string;
metadata: Record<string, any>;
}) => {
const gridBucket = getGridBucket(bucket);
const buffer = Buffer.from(text);
const fileSize = buffer.length;
// Chunk size: as large as possible, but no more than 14MB and no less than 128KB
const chunkSizeBytes = (() => {
// Ideal chunk size: file size ÷ target chunk count (10), with each chunk kept under 14MB
const idealChunkSize = Math.min(Math.ceil(fileSize / 10), 14 * 1024 * 1024);
// Ensure the chunk size is at least 128KB
const minChunkSize = 128 * 1024; // 128KB
// Take the larger of the ideal and minimum chunk sizes
let chunkSize = Math.max(idealChunkSize, minChunkSize);
// Round up to the nearest multiple of 64KB to keep chunks aligned
chunkSize = Math.ceil(chunkSize / (64 * 1024)) * (64 * 1024);
return chunkSize;
})();
const uploadStream = gridBucket.openUploadStream(filename, {
metadata,
chunkSizeBytes
});
return retryFn(async () => {
return new Promise<{ fileId: string }>((resolve, reject) => {
uploadStream.end(buffer);
uploadStream.on('finish', () => {
resolve({ fileId: String(uploadStream.id) });
});
uploadStream.on('error', reject);
});
});
};
export const gridFsStream2Buffer = (stream: NodeJS.ReadableStream) => {
return new Promise<Buffer>((resolve, reject) => {
@@ -53,3 +105,20 @@ export const stream2Encoding = async (stream: NodeJS.ReadableStream) => {
stream: copyStream
};
};
// Chunk size: as large as possible, but no more than 14MB and no less than 512KB
export const computeGridFsChunSize = (fileSize: number) => {
// Ideal chunk size: file size ÷ target chunk count (10), with each chunk kept under 14MB
const idealChunkSize = Math.min(Math.ceil(fileSize / 10), 14 * 1024 * 1024);
// Ensure the chunk size is at least 512KB
const minChunkSize = 512 * 1024; // 512KB
// Take the larger of the ideal and minimum chunk sizes
let chunkSize = Math.max(idealChunkSize, minChunkSize);
// Round up to the nearest multiple of 64KB to keep chunks aligned
chunkSize = Math.ceil(chunkSize / (64 * 1024)) * (64 * 1024);
return chunkSize;
};
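The same sizing rule, worked through for a few inputs (values follow directly from the formula above):

// Spot-checks for computeGridFsChunSize
computeGridFsChunSize(1 * 1024 * 1024); // 524288: the 512KB floor kicks in
computeGridFsChunSize(20 * 1024 * 1024); // 2097152: fileSize / 10 = 2MB, already a 64KB multiple
computeGridFsChunSize(1024 * 1024 * 1024); // 14680064: capped at the 14MB ceiling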

View File

@@ -22,7 +22,7 @@ export const getUploadModel = ({ maxSize = 500 }: { maxSize?: number }) => {
maxSize *= 1024 * 1024;
class UploadModel {
uploader = multer({
uploaderSingle = multer({
limits: {
fieldSize: maxSize
},
@@ -41,8 +41,7 @@ export const getUploadModel = ({ maxSize = 500 }: { maxSize?: number }) => {
}
})
}).single('file');
async doUpload<T = any>(
async getUploadFile<T = any>(
req: NextApiRequest,
res: NextApiResponse,
originBucketName?: `${BucketNameEnum}`
@@ -54,7 +53,7 @@ export const getUploadModel = ({ maxSize = 500 }: { maxSize?: number }) => {
bucketName?: `${BucketNameEnum}`;
}>((resolve, reject) => {
// @ts-ignore
this.uploader(req, res, (error) => {
this.uploaderSingle(req, res, (error) => {
if (error) {
return reject(error);
}
@@ -94,6 +93,58 @@ export const getUploadModel = ({ maxSize = 500 }: { maxSize?: number }) => {
});
});
}
uploaderMultiple = multer({
limits: {
fieldSize: maxSize
},
preservePath: true,
storage: multer.diskStorage({
// destination: (_req, _file, cb) => {
// cb(null, tmpFileDirPath);
// },
filename: (req, file, cb) => {
if (!file?.originalname) {
cb(new Error('File not found'), '');
} else {
const { ext } = path.parse(decodeURIComponent(file.originalname));
cb(null, `${getNanoid()}${ext}`);
}
}
})
}).array('file', global.feConfigs?.uploadFileMaxSize);
async getUploadFiles<T = any>(req: NextApiRequest, res: NextApiResponse) {
return new Promise<{
files: FileType[];
data: T;
}>((resolve, reject) => {
// @ts-ignore
this.uploaderMultiple(req, res, (error) => {
if (error) {
console.log(error);
return reject(error);
}
// @ts-ignore
const files = req.files as FileType[];
resolve({
files: files.map((file) => ({
...file,
originalname: decodeURIComponent(file.originalname)
})),
data: (() => {
if (!req.body?.data) return {};
try {
return JSON.parse(req.body.data);
} catch (error) {
return {};
}
})()
});
});
});
}
}
return new UploadModel();
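A sketch of how an API route could consume the new multi-file path; the generic parameter types the JSON carried in the data form field, and the handler shape is an assumption:

// Hypothetical Next.js API handler using getUploadFiles
import type { NextApiRequest, NextApiResponse } from 'next';
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const upload = getUploadModel({ maxSize: 20 }); // 20MB field-size limit
  // files: FileType[] with originalname decoded; data: JSON.parse of req.body.data, {} on failure
  const { files, data } = await upload.getUploadFiles<{ datasetId?: string }>(req, res);
  res.json({ uploaded: files.length, data });
}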

View File

@@ -16,6 +16,7 @@ export type readRawTextByLocalFileParams = {
path: string;
encoding: string;
customPdfParse?: boolean;
getFormatText?: boolean;
metadata?: Record<string, any>;
};
export const readRawTextByLocalFile = async (params: readRawTextByLocalFileParams) => {
@@ -27,8 +28,8 @@ export const readRawTextByLocalFile = async (params: readRawTextByLocalFileParam
return readRawContentByFileBuffer({
extension,
isQAImport: false,
customPdfParse: params.customPdfParse,
getFormatText: params.getFormatText,
teamId: params.teamId,
tmbId: params.tmbId,
encoding: params.encoding,
@@ -46,7 +47,7 @@ export const readRawContentByFileBuffer = async ({
encoding,
metadata,
customPdfParse = false,
isQAImport = false
getFormatText = true
}: {
teamId: string;
tmbId: string;
@@ -57,8 +58,10 @@ export const readRawContentByFileBuffer = async ({
metadata?: Record<string, any>;
customPdfParse?: boolean;
isQAImport: boolean;
}): Promise<ReadFileResponse> => {
getFormatText?: boolean;
}): Promise<{
rawText: string;
}> => {
const systemParse = () =>
runWorker<ReadFileResponse>(WorkerNameEnum.readFile, {
extension,
@@ -107,7 +110,7 @@ export const readRawContentByFileBuffer = async ({
return {
rawText: text,
formatText: rawText,
formatText: text,
imageList
};
};
@@ -176,16 +179,7 @@ export const readRawContentByFileBuffer = async ({
});
}
if (['csv', 'xlsx'].includes(extension)) {
// qa data
if (isQAImport) {
rawText = rawText || '';
} else {
rawText = formatText || rawText;
}
}
addLog.debug(`Upload file success, time: ${Date.now() - start}ms`);
return { rawText, formatText, imageList };
return { rawText: getFormatText ? formatText || rawText : rawText };
};
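With isQAImport removed, the caller now picks the csv/xlsx representation through getFormatText alone. A sketch, assuming a commonParams object holds the other required fields:

// Hypothetical: the same csv buffer, two representations
const asMarkdown = await readRawContentByFileBuffer({ ...commonParams, extension: 'csv', getFormatText: true }); // prefers formatText (markdown table)
const asRaw = await readRawContentByFileBuffer({ ...commonParams, extension: 'csv', getFormatText: false }); // untouched rawText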

View File

@@ -10,6 +10,7 @@ let jieba: Jieba | undefined;
})();
const stopWords = new Set([
'\n',
'--',
'?',
'“',
@@ -1519,8 +1520,7 @@ const stopWords = new Set([
]);
export async function jiebaSplit({ text }: { text: string }) {
text = text.replace(/[#*`_~>[\](){}|]/g, '').replace(/\S*https?\S*/gi, '');
text = text.replace(/[#*`_~>[\](){}|]|\S*https?\S*/g, '').trim();
const tokens = (await jieba!.cutAsync(text, true)) as string[];
return (

View File

@@ -4,7 +4,8 @@ import { MongoFrequencyLimit } from './schema';
export const authFrequencyLimit = async ({
eventId,
maxAmount,
expiredTime
expiredTime,
num = 1
}: AuthFrequencyLimitProps) => {
try {
// Increment the amount for the matching eventId; if no record exists, create one
@@ -14,7 +15,7 @@ export const authFrequencyLimit = async ({
expiredTime: { $gte: new Date() }
},
{
$inc: { amount: 1 },
$inc: { amount: num },
// If not exist, set the expiredTime
$setOnInsert: { expiredTime }
},
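The new num parameter lets a single call consume several units of the window's quota instead of one. A usage sketch (the event key and limits are examples):

// Hypothetical: charge 5 units against a 24h quota in one upsert
import { addHours } from 'date-fns';
await authFrequencyLimit({
  eventId: 'dataset-image-upload:teamId', // example event key
  maxAmount: 100, // allowed amount within the window
  expiredTime: addHours(new Date(), 24), // window end, set only on insert
  num: 5 // $inc amount by 5 instead of the default 1
});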

View File

@@ -57,14 +57,19 @@ export const addLog = {
level === LogLevelEnum.error && console.error(obj);
// store
// store log
if (level >= STORE_LOG_LEVEL && connectionMongo.connection.readyState === 1) {
// store log
getMongoLog().create({
text: msg,
level,
metadata: obj
});
(async () => {
try {
await getMongoLog().create({
text: msg,
level,
metadata: obj
});
} catch (error) {
console.error('store log error', error);
}
})();
}
},
debug(msg: string, obj?: Record<string, any>) {

View File

@@ -5,7 +5,10 @@ export enum TimerIdEnum {
clearExpiredSubPlan = 'clearExpiredSubPlan',
updateStandardPlan = 'updateStandardPlan',
scheduleTriggerApp = 'scheduleTriggerApp',
notification = 'notification'
notification = 'notification',
clearExpiredRawTextBuffer = 'clearExpiredRawTextBuffer',
clearExpiredDatasetImage = 'clearExpiredDatasetImage'
}
export enum LockNotificationEnum {

Some files were not shown because too many files have changed in this diff.