Compare commits


11 Commits

Author SHA1 Message Date
Archer
952412f648 V4.9.6 feature (#4565)
* Dashboard submenu (#4545)

* add app submenu (#4452)

* add app submenu

* fix

* width & i18n

* optimize submenu code (#4515)

* optimize submenu code

* fix

* fix

* fix

* fix ts

* perf: dashboard sub menu

* doc

---------

Co-authored-by: heheer <heheer@sealos.io>

* feat: value format test

* doc

* Mcp export (#4555)

* feat: mcp server

* feat: mcp server

* feat: mcp server build

* update doc

* perf: path selector (#4556)

* perf: path selector

* fix: docker file path

* perf: add image endpoint to dataset search (#4557)

* perf: add image endpoint to dataset search

* fix: mcp_server url

* human in loop (#4558)

* Support interactive nodes for loops, and enhance the function of merging nested and loop node history messages. (#4552)

* feat: add LoopInteractive definition

* feat: Support LoopInteractive type and update related logic

* fix: Refactor loop handling logic and improve output value initialization

* feat: Add mergeSignId to dispatchLoop and dispatchRunAppNode responses

* feat: Enhance mergeChatResponseData to recursively merge plugin details and improve response handling

* refactor: Remove redundant comments in mergeChatResponseData for clarity

* perf: loop interactive

* perf: human in loop

---------

Co-authored-by: Theresa <63280168+sd0ric4@users.noreply.github.com>

* mcp server ui

* integrate mcp (#4549)

* integrate mcp

* delete unused code

* fix ts

* bug fix

* fix

* support whole mcp tools

* add try catch

* fix

* fix

* fix ts

* fix test

* fix ts

* fix: interactive in v1 completions

* doc

* fix: router path

* fix mcp integrate (#4563)

* fix mcp integrate

* fix ui

* fix: mcp ux

* feat: mcp call title

* remove repeat loading

* fix mcp tools avatar (#4564)

* fix

* fix avatar

* fix update version

* update doc

* fix: value format

* close server and remove cache

* perf: avatar

---------

Co-authored-by: heheer <heheer@sealos.io>
Co-authored-by: Theresa <63280168+sd0ric4@users.noreply.github.com>
2025-04-16 22:18:51 +08:00
Finley Ge
ab799e13cd test: concurrent test (#4548) 2025-04-16 12:05:38 +08:00
Finley Ge
ba422b73b3 docs: team&roles&permissions and intro (#4550)
* docs: add team roles permissions

* docs: intro rewrite
2025-04-16 12:05:17 +08:00
Theresa
c7c79b400a fix: update workflow to use ubuntu-24.04 for consistency across all jobs (#4553)
* fix: update workflow to use ubuntu-22.04 for consistency across all jobs

* fix: update workflows to use ubuntu-24.04 for consistency across all jobs
2025-04-16 12:04:20 +08:00
Finley Ge
47f674666b fix: app/dataset create (#4554) 2025-04-16 12:04:00 +08:00
Archer
0c9e56c1ee Test sandbox (#4547)
* feat: python sandbox execute with a temporary file (#4464)

* change runPythonSandbox:
1. write code into a temp file in /tmp dir then run it
2. write sandbox python script into a tmp file then run it

* repair subProcess.py file does not generate in tmp dir

* Adjust the security policy to kill (#4546)

---------

Co-authored-by: Donald Yang <yjyangfan@gmail.com>
Co-authored-by: gggaaallleee <91131304+gggaaallleee@users.noreply.github.com>
2025-04-15 16:26:10 +08:00
gaord
97a6c6749a Fix string-array handling in the variable update component (#4523)
1. A string-array value may contain escaped slashes, making `typeof value` return `string`; after the variable update the value was mangled into unusable nested arrays (`[[]]`). This fix resolves that problem.
2. The overall logic was also reorganized to be clearer and easier to follow.
2025-04-15 15:28:12 +08:00
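The escaping bug described in the commit above can be sketched as follows; the helper name and logic are illustrative assumptions, not FastGPT's actual implementation.

```typescript
// Hypothetical sketch of the string-array bug fixed in #4523: a value that is
// logically a string[] may arrive as a JSON-encoded string (so
// `typeof value === 'string'`). Naively wrapping such a value in an array
// produced unusable nested arrays like [["..."]]; parsing it first avoids that.
function normalizeArrayValue(value: unknown): unknown[] {
  if (Array.isArray(value)) return value; // already a real array
  if (typeof value === 'string') {
    try {
      const parsed = JSON.parse(value);
      if (Array.isArray(parsed)) return parsed; // recover the real array
    } catch {
      // not valid JSON: fall through and wrap as a single element
    }
  }
  return [value];
}
```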
dependabot[bot]
565b3e4319 chore(deps): bump @nestjs/common from 10.4.15 to 10.4.16 (#4541)
Bumps [@nestjs/common](https://github.com/nestjs/nest/tree/HEAD/packages/common) from 10.4.15 to 10.4.16.
- [Release notes](https://github.com/nestjs/nest/releases)
- [Commits](https://github.com/nestjs/nest/commits/v10.4.16/packages/common)

---
updated-dependencies:
- dependency-name: "@nestjs/common"
  dependency-version: 10.4.16
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 15:26:09 +08:00
Finley Ge
efad4c101f test: add opreationLog mock (#4542) 2025-04-15 14:03:21 +08:00
zijiren
bed68718e8 fix: image source (#4535) 2025-04-15 10:55:56 +08:00
Archer
7a9cf4ce9e add system model (#4540) 2025-04-15 10:49:46 +08:00
217 changed files with 6925 additions and 1403 deletions

View File

@@ -18,7 +18,7 @@ jobs:
url: ${{ steps.vercel-action.outputs.preview-url }}
# The type of runner that the job will run on
-runs-on: ubuntu-22.04
+runs-on: ubuntu-24.04
permissions:
contents: write

View File

@@ -73,7 +73,7 @@ jobs:
update-docs-image:
needs: build-fastgpt-docs-images
-runs-on: ubuntu-20.04
+runs-on: ubuntu-24.04
if: github.repository == 'labring/FastGPT'
steps:
- name: Checkout code

View File

@@ -22,7 +22,7 @@ jobs:
url: ${{ steps.vercel-action.outputs.preview-url }}
# The type of runner that the job will run on
-runs-on: ubuntu-22.04
+runs-on: ubuntu-24.04
# Job outputs
outputs:

View File

@@ -14,7 +14,7 @@ jobs:
contents: read
attestations: write
id-token: write
-runs-on: ubuntu-20.04
+runs-on: ubuntu-24.04
if: github.repository != 'labring/FastGPT'
steps:
- name: Checkout

View File

@@ -79,7 +79,7 @@ jobs:
build-args: |
${{ matrix.sub_routes.base_url && format('base_url={0}', matrix.sub_routes.base_url) || '' }}
labels: |
-org.opencontainers.image.source=https://github.com/${{ github.repository_owner }}/${{ matrix.sub_routes.repo }}
+org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.description=${{ matrix.sub_routes.repo }} image
outputs: type=image,"name=ghcr.io/${{ github.repository_owner }}/${{ matrix.sub_routes.repo }},${{ secrets.ALI_IMAGE_NAME }}/${{ matrix.sub_routes.repo }},${{ secrets.DOCKER_IMAGE_NAME }}/${{ matrix.sub_routes.repo }}",push-by-digest=true,push=true
cache-from: type=local,src=/tmp/.buildx-cache

View File

@@ -12,26 +12,33 @@ jobs:
id-token: write
pull-requests: write
-runs-on: ubuntu-20.04
+runs-on: ubuntu-24.04
strategy:
matrix:
image: [fastgpt, sandbox, mcp_server]
fail-fast: false # keep building the other images even if one build fails
steps:
- name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
-fetch-depth: 0 # Fetch all history for .GitInfo and .Lastmod
+fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
driver-opts: network=host
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
-key: ${{ runner.os }}-buildx-${{ github.sha }}
+key: ${{ runner.os }}-buildx-${{ github.sha }}-${{ matrix.image }}
restore-keys: |
${{ runner.os }}-buildx-${{ github.sha }}-
${{ runner.os }}-buildx-
- name: Login to GitHub Container Registry
@@ -41,24 +48,35 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set DOCKER_REPO_TAGGED based on branch or tag
- name: Set image config
id: config
run: |
echo "DOCKER_REPO_TAGGED=ghcr.io/${{ github.repository_owner }}/fastgpt-pr:${{ github.event.pull_request.head.sha }}" >> $GITHUB_ENV
if [[ "${{ matrix.image }}" == "fastgpt" ]]; then
echo "DOCKERFILE=projects/app/Dockerfile" >> $GITHUB_OUTPUT
echo "DESCRIPTION=fastgpt-pr image" >> $GITHUB_OUTPUT
echo "DOCKER_REPO_TAGGED=ghcr.io/${{ github.repository_owner }}/fastgpt-pr:fatsgpt_${{ github.event.pull_request.head.sha }}" >> $GITHUB_OUTPUT
elif [[ "${{ matrix.image }}" == "sandbox" ]]; then
echo "DOCKERFILE=projects/sandbox/Dockerfile" >> $GITHUB_OUTPUT
echo "DESCRIPTION=fastgpt-sandbox-pr image" >> $GITHUB_OUTPUT
echo "DOCKER_REPO_TAGGED=ghcr.io/${{ github.repository_owner }}/fastgpt-pr:fatsgpt_sandbox_${{ github.event.pull_request.head.sha }}" >> $GITHUB_OUTPUT
elif [[ "${{ matrix.image }}" == "mcp_server" ]]; then
echo "DOCKERFILE=projects/mcp_server/Dockerfile" >> $GITHUB_OUTPUT
echo "DESCRIPTION=fastgpt-mcp_server-pr image" >> $GITHUB_OUTPUT
echo "DOCKER_REPO_TAGGED=ghcr.io/${{ github.repository_owner }}/fastgpt-pr:fatsgpt_mcp_server_${{ github.event.pull_request.head.sha }}" >> $GITHUB_OUTPUT
fi
- name: Build image for PR
env:
DOCKER_REPO_TAGGED: ${{ env.DOCKER_REPO_TAGGED }}
- name: Build ${{ matrix.image }} image for PR
run: |
docker buildx build \
-f projects/app/Dockerfile \
-f ${{ steps.config.outputs.DOCKERFILE }} \
--label "org.opencontainers.image.source=https://github.com/${{ github.repository_owner }}/FastGPT" \
--label "org.opencontainers.image.description=fastgpt-pr image" \
--label "org.opencontainers.image.licenses=Apache" \
--label "org.opencontainers.image.description=${{ steps.config.outputs.DESCRIPTION }}" \
--push \
--cache-from=type=local,src=/tmp/.buildx-cache \
--cache-to=type=local,dest=/tmp/.buildx-cache \
-t ${DOCKER_REPO_TAGGED} \
-t ${{ steps.config.outputs.DOCKER_REPO_TAGGED }} \
.
- uses: actions/github-script@v7
with:
github-token: ${{secrets.GITHUB_TOKEN}}
@@ -67,5 +85,5 @@ jobs:
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
-body: 'Preview Image: `${{ env.DOCKER_REPO_TAGGED }}`'
+body: 'Preview ${{ matrix.image }} Image: `${{ steps.config.outputs.DOCKER_REPO_TAGGED }}`'
})

View File

@@ -13,7 +13,7 @@ jobs:
contents: read
attestations: write
id-token: write
-runs-on: ubuntu-20.04
+runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4

View File

@@ -0,0 +1,151 @@
name: Build fastgpt-mcp-server images
on:
workflow_dispatch:
push:
paths:
- 'projects/sandbox/**'
tags:
- 'v*'
jobs:
build-fastgpt-mcp_server-images:
permissions:
packages: write
contents: read
attestations: write
id-token: write
strategy:
matrix:
include:
- arch: amd64
- arch: arm64
runs-on: ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on || 'ubuntu-24.04' }}
steps:
# install env
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: network=host
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-mcp-server-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-mcp_server-buildx-
# login docker
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Ali Hub
uses: docker/login-action@v3
with:
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALI_HUB_USERNAME }}
password: ${{ secrets.ALI_HUB_PASSWORD }}
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_NAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Build for ${{ matrix.arch }}
id: build
uses: docker/build-push-action@v6
with:
context: .
file: projects/mcp_server/Dockerfile
platforms: linux/${{ matrix.arch }}
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.description=fastgpt-mcp_server image
outputs: type=image,"name=ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server,${{ secrets.ALI_IMAGE_NAME }}/fastgpt-mcp_server,${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-mcp_server",push-by-digest=true,push=true
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Export digest
run: |
mkdir -p ${{ runner.temp }}/digests
digest="${{ steps.build.outputs.digest }}"
touch "${{ runner.temp }}/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@v4
with:
name: digests-fastgpt-mcp_server-${{ github.sha }}-${{ matrix.arch }}
path: ${{ runner.temp }}/digests/*
if-no-files-found: error
retention-days: 1
release-fastgpt-mcp_server-images:
permissions:
packages: write
contents: read
attestations: write
id-token: write
needs: build-fastgpt-mcp_server-images
runs-on: ubuntu-24.04
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Ali Hub
uses: docker/login-action@v3
with:
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALI_HUB_USERNAME }}
password: ${{ secrets.ALI_HUB_PASSWORD }}
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_NAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Download digests
uses: actions/download-artifact@v4
with:
path: ${{ runner.temp }}/digests
pattern: digests-fastgpt-mcp_server-${{ github.sha }}-*
merge-multiple: true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set image name and tag
run: |
if [[ "${{ github.ref_name }}" == "main" ]]; then
echo "Git_Tag=ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Git_Latest=ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Ali_Tag=${{ secrets.ALI_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Ali_Latest=${{ secrets.ALI_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Docker_Hub_Tag=${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Docker_Hub_Latest=${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
else
echo "Git_Tag=ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server:${{ github.ref_name }}" >> $GITHUB_ENV
echo "Git_Latest=ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Ali_Tag=${{ secrets.ALI_IMAGE_NAME }}/fastgpt-mcp_server:${{ github.ref_name }}" >> $GITHUB_ENV
echo "Ali_Latest=${{ secrets.ALI_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
echo "Docker_Hub_Tag=${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-mcp_server:${{ github.ref_name }}" >> $GITHUB_ENV
echo "Docker_Hub_Latest=${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-mcp_server:latest" >> $GITHUB_ENV
fi
- name: Create manifest list and push
working-directory: ${{ runner.temp }}/digests
run: |
TAGS="$(echo -e "${Git_Tag}\n${Git_Latest}\n${Ali_Tag}\n${Ali_Latest}\n${Docker_Hub_Tag}\n${Docker_Hub_Latest}")"
for TAG in $TAGS; do
docker buildx imagetools create -t $TAG \
$(printf 'ghcr.io/${{ github.repository_owner }}/fastgpt-mcp_server@sha256:%s ' *)
sleep 5
done

View File

@@ -65,7 +65,7 @@ jobs:
file: projects/sandbox/Dockerfile
platforms: linux/${{ matrix.arch }}
labels: |
-org.opencontainers.image.source=https://github.com/${{ github.repository_owner }}/fastgpt-sandbox
+org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.description=fastgpt-sandbox image
outputs: type=image,"name=ghcr.io/${{ github.repository_owner }}/fastgpt-sandbox,${{ secrets.ALI_IMAGE_NAME }}/fastgpt-sandbox,${{ secrets.DOCKER_IMAGE_NAME }}/fastgpt-sandbox",push-by-digest=true,push=true
cache-from: type=local,src=/tmp/.buildx-cache

View File

@@ -52,71 +52,17 @@
"description": "FastGPT usecontext template"
},
"Jest test template": {
"scope": "typescriptreact",
"prefix": "jesttest",
"Vitest test case template": {
"scope": "typescript",
"prefix": "template_test",
"body": [
"import '@/pages/api/__mocks__/base';",
"import { root } from '@/pages/api/__mocks__/db/init';",
"import { getTestRequest } from '@fastgpt/service/test/utils'; ;",
"import { AppErrEnum } from '@fastgpt/global/common/error/code/app';",
"import handler from './demo';",
"import { describe, it, expect } from 'vitest';",
"",
"// Import the schema",
"import { MongoOutLink } from '@fastgpt/service/support/outLink/schema';",
"",
"beforeAll(async () => {",
" // await MongoOutLink.create({",
" // shareId: 'aaa',",
" // appId: root.appId,",
" // tmbId: root.tmbId,",
" // teamId: root.teamId,",
" // type: 'share',",
" // name: 'aaa'",
" // })",
"});",
"",
"test('Should return a list of outLink', async () => {",
" // Mock request",
" const res = (await handler(",
" ...getTestRequest({",
" query: {",
" appId: root.appId,",
" type: 'share'",
" },",
" user: root",
" })",
" )) as any;",
"",
" expect(res.code).toBe(200);",
" expect(res.data.length).toBe(2);",
"});",
"",
"test('appId is required', async () => {",
" const res = (await handler(",
" ...getTestRequest({",
" query: {",
" type: 'share'",
" },",
" user: root",
" })",
" )) as any;",
" expect(res.code).toBe(500);",
" expect(res.error).toBe(AppErrEnum.unExist);",
"});",
"",
"test('if type is not provided, return nothing', async () => {",
" const res = (await handler(",
" ...getTestRequest({",
" query: {",
" appId: root.appId",
" },",
" user: root",
" })",
" )) as any;",
" expect(res.code).toBe(200);",
" expect(res.data.length).toBe(0);",
"describe('authType2UsageSource', () => {",
" it('Test description', () => {",
" expect().toBe();",
" });",
"});"
]
]
}
}

View File

@@ -17,7 +17,7 @@ dev:
build:
ifeq ($(proxy), taobao)
docker build -f $(filePath) -t $(image) . --build-arg proxy=taobao
docker build -f $(filePath) -t $(image) . --build-arg proxy=taobao
else ifeq ($(proxy), clash)
docker build -f $(filePath) -t $(image) . --network host --build-arg HTTP_PROXY=http://127.0.0.1:7890 --build-arg HTTPS_PROXY=http://127.0.0.1:7890
else

View File

@@ -126,15 +126,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
-image: ghcr.io/labring/fastgpt-sandbox:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt-sandbox:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.5 # Alibaba Cloud
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
-image: ghcr.io/labring/fastgpt:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.5 # Alibaba Cloud
ports:
- 3000:3000
networks:

View File

@@ -85,15 +85,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
-image: ghcr.io/labring/fastgpt-sandbox:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt-sandbox:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.5 # Alibaba Cloud
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
-image: ghcr.io/labring/fastgpt:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.5 # Alibaba Cloud
ports:
- 3000:3000
networks:

View File

@@ -66,15 +66,15 @@ services:
sandbox:
container_name: sandbox
-image: ghcr.io/labring/fastgpt-sandbox:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt-sandbox:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.9.5 # Alibaba Cloud
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
-image: ghcr.io/labring/fastgpt:v4.9.4 # git
-# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.4 # Alibaba Cloud
+image: ghcr.io/labring/fastgpt:v4.9.5 # git
+# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.9.5 # Alibaba Cloud
ports:
- 3000:3000
networks:

(Binary image files changed: image previews not shown)
View File

@@ -138,7 +138,7 @@ The FastGPT commercial edition consists of 2 applications (fastgpt, fastgpt-plus) and 2 data
SYSTEM_NAME=FastGPT
SYSTEM_DESCRIPTION=
SYSTEM_FAVICON=/favicon.ico
-HOME_URL=/app/list
+HOME_URL=/dashboard/apps
```
SYSTEM_FAVICON can be a network URL

View File

@@ -1,5 +1,5 @@
---
-title: 'V4.9.5 (in progress)'
+title: 'V4.9.5'
description: 'FastGPT V4.9.5 release notes'
icon: 'upgrade'
draft: false
@@ -7,6 +7,15 @@ toc: true
weight: 795
---
## Upgrade Guide
### 1. Back up your data
### 2. Update image tags
- Update the FastGPT image tag to v4.9.5
- Update the FastGPT commercial edition image tag to v4.9.5
- Sandbox: no update required
- AIProxy: no update required
## 🚀 New Features
@@ -18,6 +27,7 @@ weight: 795
## ⚙️ Improvements
1. Traditional Chinese translations.
2. ARM image builds.
## 🐛 Fixes

View File

@@ -0,0 +1,32 @@
---
title: 'V4.9.6 (in progress)'
description: 'FastGPT V4.9.6 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 794
---
## 🚀 New Features
1. Expose app invocation externally via MCP.
2. Support creating tools via the MCP SSE protocol.
3. Loop nodes now support interactive nodes, enabling human participation in every loop iteration.
4. Added a dashboard submenu and merged the toolbox into it.
5. Added system model configurations for grok3, GPT4.1, and Gemini2.5.
## ⚙️ Improvements
1. More robust and compatible data type conversion in workflows.
2. The Python sandbox now supports large data inputs.
3. The path component can be configured so that the last step is or is not clickable.
4. Dataset tool call results automatically get the image domain filled in.
5. Upgraded GitHub Actions runners to Ubuntu 24.04.
## 🐛 Fixes
1. Fixed: when a sub-workflow contained an interactive node, not all of the sub-workflow's data was restored.
2. Fixed: the v1 completions endpoint did not accept the interactive parameter, causing API calls to fail.

View File

@@ -5,4 +5,207 @@ icon: "group"
draft: false
toc: true
weight: 450
---
---
# Teams & Member Groups & Permissions
## Permission System Overview
FastGPT's permission system blends attribute-based and role-based permission management paradigms, providing fine-grained access control for team collaboration. Through its three management modes (**members, departments, and groups**), you can flexibly configure access to resources such as teams, apps, and knowledge bases.
## Teams
Each user can belong to multiple teams at the same time; the system creates an initial team for every user by default. Manually creating additional teams is not yet supported.
## Permission Management
FastGPT provides three dimensions of permission management:
**Member permissions**: highest priority, granted directly to an individual
**Department and group permissions**: combined as a union, lower priority than member permissions
Permission resolution follows this logic:
First, check the user's personal member permissions
Next, check the permissions of the departments and groups the user belongs to (taking their union)
The final permission is the combination of the above results
The authorization flow is as follows:
![](/imgs/guide/team_permissions/team_roles_permissions/image1.jpeg)
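The resolution order above can be sketched as a small function; the names and shapes below are illustrative assumptions, not FastGPT's actual API.

```typescript
// Sketch of the permission resolution described above: a member's own
// permission, if present, wins outright; otherwise the permissions of the
// member's departments and groups are combined as a union (bitwise OR).
type PermissionValue = number;

function resolvePermission(
  memberPer: PermissionValue | undefined,
  groupPers: PermissionValue[],
  orgPers: PermissionValue[]
): PermissionValue {
  if (memberPer !== undefined) return memberPer; // personal permission has top priority
  return [...groupPers, ...orgPers].reduce((acc, per) => acc | per, 0);
}
```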
### Resource Permissions
Different **resources** have different permissions.
"Resources" here refers to concepts such as apps, knowledge bases, and teams.
The table below lists the manageable permissions for each resource.
<table>
<thead>
<tr>
<th>Resource</th>
<th>Manageable permission</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4">Team</td>
<td>Create apps</td>
<td>Basic operations such as create and delete</td>
</tr>
<tr>
<td>Create knowledge bases</td>
<td>Basic operations such as create and delete</td>
</tr>
<tr>
<td>Create team API keys</td>
<td>Basic operations such as create and delete</td>
</tr>
<tr>
<td>Manage members</td>
<td>Invite and remove users, create groups, etc.</td>
</tr>
<tr>
<td rowspan="3">App</td>
<td>Use</td>
<td>Allowed to chat with the app</td>
</tr>
<tr>
<td>Edit</td>
<td>Modify basic info, edit the workflow, etc.</td>
</tr>
<tr>
<td>Manage</td>
<td>Add or remove collaborators</td>
</tr>
<tr>
<td rowspan="3">Knowledge base</td>
<td>Use</td>
<td>Can be invoked from apps</td>
</tr>
<tr>
<td>Edit</td>
<td>Modify the knowledge base's content</td>
</tr>
<tr>
<td>Manage</td>
<td>Add or remove collaborators</td>
</tr>
</tbody>
</table>
### Collaborators
You must add a **collaborator** before you can manage their permissions:
![](/imgs/guide/team_permissions/team_roles_permissions/image2.png)
When managing team permissions, first select the member/org/group, then configure the permissions.
![](/imgs/guide/team_permissions/team_roles_permissions/image3.png)
For resources such as apps and knowledge bases, member permissions can be modified directly.
![](/imgs/guide/team_permissions/team_roles_permissions/image4.png)
Team permissions are configured on a dedicated permissions page.
![](/imgs/guide/team_permissions/team_roles_permissions/image5.png)
## Special Permissions
### Administrator permission
Administrators mainly manage a resource's collaboration relationships, with the following restrictions:
- They cannot modify or remove their own permission
- They cannot modify or remove another administrator's permission
- They cannot grant administrator permission to other collaborators
### Owner permission
Every resource has exactly one Owner, who holds the highest permission on that resource. The Owner can transfer ownership, but after the transfer the previous Owner loses access to the resource.
### Root permission
Root is the system's only super-administrator account, with full access to and management of all resources in all teams.
## Tips
### 1. Set team-wide default permissions
Use the "all members" group to quickly set baseline permissions for the whole team, for example granting every member access to an app.
**Note**: individual member permissions override the all-members group permission. For example, if app A grants edit permission to all members while user M is individually granted only use permission, user M can use but not edit the app.
### 2. Bulk permission management
Creating groups or organizations makes it efficient to manage permissions for many users: add the users to a group first, then grant permissions to the group as a whole.
### Developer Reference
> The following content is aimed at developers; skip it if you are not doing secondary development.
#### Permission design principles
FastGPT's permission system is modeled on Linux permissions: permission bits are stored in binary. A bit set to 1 means the permission is granted; 0 means it is not. The Owner permission is specially marked as all 1s.
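A minimal sketch of this bit layout, assuming illustrative bit positions for read/write/manage rather than FastGPT's real constants:

```typescript
// Illustrative permission bits (assumed layout, not FastGPT's actual values):
const ReadPermissionVal = 0b100;
const WritePermissionVal = 0b110; // write implies read
const ManagePermissionVal = 0b111; // manage implies write and read
const OwnerPermissionVal = 0b111111111111; // Owner is marked as all 1s

// A permission is granted when every bit of the checked value is set.
function checkPermission(per: number, check: number): boolean {
  return (per & check) === check;
}
```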
#### Permission table
Permission records are stored in the resource_permissions collection in MongoDB; its main fields are:
- teamId: team identifier
- tmbId/groupId/orgId: the permission subject (exactly one of the three)
- resourceType: resource type (team/app/dataset)
- permission: permission value (a number)
- resourceId: resource ID (null for team-level resources)
This data structure enables flexible and precise permission control.
The schema for this collection is defined in packages/service/support/permission/schema.ts
as follows:
```typescript
export const ResourcePermissionSchema = new Schema({
teamId: {
type: Schema.Types.ObjectId,
ref: TeamCollectionName
},
tmbId: {
type: Schema.Types.ObjectId,
ref: TeamMemberCollectionName
},
groupId: {
type: Schema.Types.ObjectId,
ref: MemberGroupCollectionName
},
orgId: {
type: Schema.Types.ObjectId,
ref: OrgCollectionName
},
resourceType: {
type: String,
enum: Object.values(PerResourceTypeEnum),
required: true
},
permission: {
type: Number,
required: true
},
// Resource ID: App or DataSet or any other resource type.
// It is null if the resourceType is team.
resourceId: {
type: Schema.Types.ObjectId
}
});
```

View File

@@ -7,76 +7,64 @@ toc: true
weight: -10
---
FastGPT is an AI Agent building platform that provides out-of-the-box capabilities such as data processing and model invocation, and supports visual workflow orchestration via Flow to implement complex application scenarios!
FastGPT is a knowledge-base Q&A system built on large language models (LLMs) that combines intelligent dialogue with visual orchestration, making AI application development simple and natural. Whether you are a developer or a business user, you can easily build your own AI application.
{{% alert icon="🤖 " context="success" %}}
Try FastGPT online: [https://tryfastgpt.ai](https://tryfastgpt.ai)
Quick start:
- International: [https://tryfastgpt.ai](https://tryfastgpt.ai)
- China: [https://fastgpt.cn](https://fastgpt.cn)
{{% /alert %}}
| | |
| --------------------- | --------------------- |
| ![](/imgs/intro1.webp) | ![](/imgs/intro2.png) |
| ![](/imgs/intro3.png) | ![](/imgs/intro4.png) |
| | |
| --------------------- | --------------------------------- |
| ![](/imgs/intro/image1.png) | ![](/imgs/intro/image2.png) |
## FastGPT Capabilities
# Advantages of FastGPT
## 1. Simple and flexible, like building blocks 🧱
As simple and fun as building with Lego: FastGPT provides rich functional modules, so you can assemble a personalized AI application by drag and drop and implement complex business flows with zero code.
## 2. Make your data smarter 🧠
FastGPT provides a complete data intelligence solution, automating the whole pipeline from data import and preprocessing to knowledge matching and intelligent Q&A. Combined with visual workflow design, you can easily build professional-grade AI applications.
## 3. Open source and easy to integrate 🔗
FastGPT is open source under the Apache 2.0 license and supports secondary development. It can be integrated quickly through standard APIs without modifying the source code, supports mainstream models such as ChatGPT, Claude, DeepSeek, and ERNIE Bot, and keeps improving through continuous iteration.
### 1. Dedicated AI customer service
---
Train with imported documents or existing Q&A pairs, so the AI model can answer questions based on your documents in an interactive conversation.
# What FastGPT Can Do
## 1. All-in-one knowledge base
Easily import documents and data of all kinds, with automatic knowledge structuring. It offers intelligent, multi-turn context-aware Q&A and a continuously improving knowledge base management experience.
![](/imgs/intro/image3.png)
![](/imgs/ability1.png)
## 2. Visual workflows
FastGPT's intuitive drag-and-drop interface lets you build complex business flows with zero code. Rich functional node components cover a wide range of business needs, and flexible orchestration lets you customize flows on demand.
![](/imgs/intro/image4.png)
### 2. Simple, easy-to-use visual interface
## 3. Intelligent data parsing
FastGPT's knowledge base handles imported data very flexibly: it intelligently processes the complex structure of PDF documents while preserving images, tables, and LaTeX formulas, automatically recognizes scanned files, and structures the content into clean Markdown. It also captions and indexes images automatically so visual content can be understood and retrieved, ensuring knowledge is presented fully and accurately in AI Q&A.
FastGPT adopts an intuitive visual interface design and offers rich, practical features for a variety of scenarios. With simple, clear steps, you can easily create and train an AI customer service bot.
![](/imgs/ability5.png)
### 3. Automatic data preprocessing
Multiple data import paths are provided, including manual input, direct chunking, LLM-assisted processing, and CSV; "direct chunking" supports using PDF, WORD, Markdown, and CSV document content as context. FastGPT automatically preprocesses, vectorizes, and QA-splits text data, saving manual training time and improving efficiency.
![](/imgs/ability2.png)
### 4. Workflow orchestration
![](/imgs/intro/image5.png)
## 4. Workflow orchestration
Flow-based workflow orchestration helps you design more complex Q&A flows, for example querying a database, checking inventory, or booking a lab.
![](/imgs/ability3.png)
![](/imgs/intro/image6.png)
### 5. Powerful API integration
## 5. Powerful API integration
FastGPT fully aligns with the official OpenAI API and supports one-click integration with WeCom, WeChat Official Accounts, Feishu, DingTalk, and other platforms, bringing AI capabilities into your business scenarios with ease.
FastGPT's external API aligns with the official OpenAI API, so you can plug it into existing GPT applications directly, or integrate it easily with WeCom, Official Accounts, Feishu, and other platforms.
![](/imgs/intro/image7.png)
![](/imgs/ability4.png)
---
## FastGPT Features
# Core Features
1. **Open source**
- Out-of-the-box knowledge base system
- Visual low-code workflow orchestration
- Support for mainstream large language models
- Simple, easy-to-use API
- Flexible data processing capabilities
FastGPT is released under the **Apache License 2.0 with additional conditions**; you can [Fork](https://github.com/labring/FastGPT/fork) it for secondary development and release. The FastGPT community edition retains the core features; the commercial edition only extends it in the form of APIs on top of the community edition, without affecting learning and use.
---
2. **Unique QA structure**
A QA structure designed for customer-service Q&A scenarios, improving answer accuracy on large datasets.
3. **Visual workflows**
The Flow module shows the complete path from question input to model output, making it easy to debug and design complex flows.
4. **Unlimited extensibility**
Extend via APIs without modifying FastGPT's source code, and plug it quickly into existing programs.
5. **Easy debugging**
Multiple debugging aids are provided: search testing, citation editing, full conversation preview, and more.
6. **Multi-model support**
Supports GPT, Claude, ERNIE Bot, and other LLMs; custom vector models will also be supported in the future.
## Knowledge base core flow diagram
![](/imgs/functional-arch.webp)
# Knowledge base core flow diagram
![](/imgs/intro/image8.png)

View File

@@ -5,6 +5,7 @@ import { ErrType } from '../errorCode';
const startCode = 507000;
export enum CommonErrEnum {
invalidParams = 'invalidParams',
invalidResource = 'invalidResource',
fileNotFound = 'fileNotFound',
unAuthFile = 'unAuthFile',
missingParams = 'missingParams',
@@ -15,6 +16,10 @@ const datasetErr = [
statusText: CommonErrEnum.fileNotFound,
message: i18nT('common:error.invalid_params')
},
{
statusText: CommonErrEnum.invalidResource,
message: i18nT('common:error_invalid_resource')
},
{
statusText: CommonErrEnum.fileNotFound,
message: 'error.fileNotFound'

View File

@@ -27,7 +27,8 @@ export enum TeamErrEnum {
userNotActive = 'userNotActive',
invitationLinkInvalid = 'invitationLinkInvalid',
youHaveBeenInTheTeam = 'youHaveBeenInTheTeam',
tooManyInvitations = 'tooManyInvitations'
tooManyInvitations = 'tooManyInvitations',
unPermission = 'unPermission'
}
const teamErr = [
@@ -35,6 +36,10 @@ const teamErr = [
statusText: TeamErrEnum.notUser,
message: i18nT('common:code_error.team_error.not_user')
},
{
statusText: TeamErrEnum.unPermission,
message: i18nT('common:error_un_permission')
},
{
statusText: TeamErrEnum.teamOverSize,
message: i18nT('common:code_error.team_error.over_size')

View File

@@ -49,6 +49,7 @@ export type FastGPTFeConfigsType = {
find_password_method?: ['email' | 'phone'];
bind_notification_method?: ['email' | 'phone'];
googleClientVerKey?: string;
mcpServerProxyEndpoint?: string;
show_emptyChat?: boolean;
show_appStore?: boolean;

View File

@@ -11,7 +11,9 @@ export enum AppTypeEnum {
simple = 'simple',
workflow = 'advanced',
plugin = 'plugin',
httpPlugin = 'httpPlugin'
httpPlugin = 'httpPlugin',
toolSet = 'toolSet',
tool = 'tool'
}
export const AppFolderTypeList = [AppTypeEnum.folder, AppTypeEnum.httpPlugin];
@@ -53,7 +55,10 @@ export enum AppTemplateTypeEnum {
imageGeneration = 'image-generation',
webSearch = 'web-search',
roleplay = 'roleplay',
officeServices = 'office-services'
officeServices = 'office-services',
// special type
contribute = 'contribute'
}
export const defaultDatasetMaxTokens = 16000;

View File

@@ -0,0 +1,97 @@
import { NodeOutputKeyEnum, WorkflowIOValueTypeEnum } from '../../workflow/constants';
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../workflow/node/constant';
import { nanoid } from 'nanoid';
import { ToolType } from '../type';
import { i18nT } from '../../../../web/i18n/utils';
import { RuntimeNodeItemType } from '../../workflow/runtime/type';
export const getMCPToolSetRuntimeNode = ({
url,
toolList,
name,
avatar
}: {
url: string;
toolList: ToolType[];
name?: string;
avatar?: string;
}): RuntimeNodeItemType => {
return {
nodeId: nanoid(16),
flowNodeType: FlowNodeTypeEnum.toolSet,
avatar,
intro: 'MCP Tools',
inputs: [
{
key: 'toolSetData',
label: 'Tool Set Data',
valueType: WorkflowIOValueTypeEnum.object,
renderTypeList: [FlowNodeInputTypeEnum.hidden],
value: { url, toolList }
}
],
outputs: [],
name: name || '',
version: ''
};
};
export const getMCPToolRuntimeNode = ({
tool,
url,
avatar = 'core/app/type/mcpToolsFill'
}: {
tool: ToolType;
url: string;
avatar?: string;
}): RuntimeNodeItemType => {
return {
nodeId: nanoid(16),
flowNodeType: FlowNodeTypeEnum.tool,
avatar,
intro: tool.description,
inputs: [
{
key: 'toolData',
label: 'Tool Data',
valueType: WorkflowIOValueTypeEnum.object,
renderTypeList: [FlowNodeInputTypeEnum.hidden],
value: { ...tool, url }
},
...Object.entries(tool.inputSchema?.properties || {}).map(([key, value]) => ({
key,
label: key,
valueType: value.type as WorkflowIOValueTypeEnum,
description: value.description,
toolDescription: value.description || key,
required: tool.inputSchema?.required?.includes(key) || false,
renderTypeList: [
value.type === 'string'
? FlowNodeInputTypeEnum.input
: value.type === 'number'
? FlowNodeInputTypeEnum.numberInput
: value.type === 'boolean'
? FlowNodeInputTypeEnum.switch
: FlowNodeInputTypeEnum.JSONEditor
]
}))
],
outputs: [
{
id: NodeOutputKeyEnum.rawResponse,
key: NodeOutputKeyEnum.rawResponse,
required: true,
label: i18nT('workflow:raw_response'),
description: i18nT('workflow:tool_raw_response_description'),
valueType: WorkflowIOValueTypeEnum.any,
type: FlowNodeOutputTypeEnum.static
}
],
name: tool.name,
version: ''
};
};

View File

@@ -16,6 +16,16 @@ import { FlowNodeInputTypeEnum } from '../../core/workflow/node/constant';
import { WorkflowTemplateBasicType } from '@fastgpt/global/core/workflow/type';
import { SourceMemberType } from '../../support/user/type';
export type ToolType = {
name: string;
description: string;
inputSchema: {
type: string;
properties?: Record<string, { type: string; description?: string }>;
required?: string[];
};
};
export type AppSchema = {
_id: string;
parentId?: ParentIdType;

View File

@@ -140,7 +140,9 @@ export const appWorkflow2Form = ({
);
} else if (
node.flowNodeType === FlowNodeTypeEnum.pluginModule ||
node.flowNodeType === FlowNodeTypeEnum.appModule
node.flowNodeType === FlowNodeTypeEnum.appModule ||
node.flowNodeType === FlowNodeTypeEnum.tool ||
node.flowNodeType === FlowNodeTypeEnum.toolSet
) {
if (!node.pluginId) return;

View File

@@ -38,7 +38,8 @@ export enum ChatSourceEnum {
team = 'team',
feishu = 'feishu',
official_account = 'official_account',
wecom = 'wecom'
wecom = 'wecom',
mcp = 'mcp'
}
export const ChatSourceMap = {
@@ -68,6 +69,9 @@ export const ChatSourceMap = {
},
[ChatSourceEnum.wecom]: {
name: i18nT('common:core.chat.logs.wecom')
},
[ChatSourceEnum.mcp]: {
name: i18nT('common:core.chat.logs.mcp')
}
};

View File

@@ -154,25 +154,55 @@ export const getChatSourceByPublishChannel = (publishChannel: PublishChannelEnum
/*
Merge chat responseData
1. Same tool mergeSignId (Interactive tool node)
2. Recursively merge plugin details with same mergeSignId
*/
export const mergeChatResponseData = (
  responseDataList: ChatHistoryItemResType[]
): ChatHistoryItemResType[] => {
  // Merge children response data (children may contain interactive responses)
  const responseWithMergedPlugins = responseDataList.map((item) => {
    if (item.pluginDetail && item.pluginDetail.length > 1) {
      return {
        ...item,
        pluginDetail: mergeChatResponseData(item.pluginDetail)
      };
    }
    return item;
  });

  let lastResponse: ChatHistoryItemResType | undefined = undefined;
  let hasMerged = false;

  const firstPassResult = responseWithMergedPlugins.reduce<ChatHistoryItemResType[]>(
    (acc, curr) => {
      if (
        lastResponse &&
        lastResponse.mergeSignId &&
        curr.mergeSignId === lastResponse.mergeSignId
      ) {
        // Replace lastResponse with the merged entry
        const concatResponse: ChatHistoryItemResType = {
          ...curr,
          runningTime: +((lastResponse.runningTime || 0) + (curr.runningTime || 0)).toFixed(2),
          totalPoints: (lastResponse.totalPoints || 0) + (curr.totalPoints || 0),
          childTotalPoints: (lastResponse.childTotalPoints || 0) + (curr.childTotalPoints || 0),
          toolCallTokens: (lastResponse.toolCallTokens || 0) + (curr.toolCallTokens || 0),
          toolDetail: [...(lastResponse.toolDetail || []), ...(curr.toolDetail || [])],
          loopDetail: [...(lastResponse.loopDetail || []), ...(curr.loopDetail || [])],
          pluginDetail: [...(lastResponse.pluginDetail || []), ...(curr.pluginDetail || [])]
        };
        hasMerged = true;
        return [...acc.slice(0, -1), concatResponse];
      } else {
        lastResponse = curr;
        return [...acc, curr];
      }
    },
    []
  );

  // Repeat until no further merges are possible
  if (hasMerged && firstPassResult.length > 1) {
    return mergeChatResponseData(firstPassResult);
  }

  return firstPassResult;
};
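The mergeSignId folding can be illustrated with a toy standalone version. Field names are simplified from ChatHistoryItemResType; this is a sketch, not the actual export.

```typescript
// Simplified sketch: adjacent entries that share a mergeSignId collapse into
// one entry whose point totals are summed, mirroring the reduce step above.
type Resp = { mergeSignId?: string; totalPoints: number };

const foldBySignId = (list: Resp[]): Resp[] =>
  list.reduce<Resp[]>((acc, curr) => {
    const last = acc[acc.length - 1];
    if (last?.mergeSignId && last.mergeSignId === curr.mergeSignId) {
      // Replace the previous entry with the merged one
      return [...acc.slice(0, -1), { ...curr, totalPoints: last.totalPoints + curr.totalPoints }];
    }
    return [...acc, curr];
  }, []);
```

Two entries tagged `'a'` with 1 and 2 points fold into a single 3-point entry, while an untagged entry stays separate.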

View File

@@ -140,7 +140,9 @@ export enum FlowNodeTypeEnum {
loopStart = 'loopStart',
loopEnd = 'loopEnd',
formInput = 'formInput',
comment = 'comment'
comment = 'comment',
tool = 'tool',
toolSet = 'toolSet'
}
// node IO value type

View File

@@ -217,6 +217,8 @@ export type DispatchNodeResponseType = {
// tool params
toolParamsResult?: Record<string, any>;
toolRes?: any;
// abandon
extensionModel?: string;
extensionResult?: string;

View File

@@ -10,6 +10,7 @@ import { FlowNodeOutputItemType, ReferenceValueType } from '../type/io';
import { ChatItemType, NodeOutputItemType } from '../../../core/chat/type';
import { ChatItemValueTypeEnum, ChatRoleEnum } from '../../../core/chat/constants';
import { replaceVariable, valToStr } from '../../../common/string/tools';
import json5 from 'json5';
import {
InteractiveNodeResponseType,
WorkflowInteractiveResponseType
@@ -18,7 +19,10 @@ import {
export const extractDeepestInteractive = (
interactive: WorkflowInteractiveResponseType
): WorkflowInteractiveResponseType => {
if (interactive?.type === 'childrenInteractive' && interactive.params?.childrenResponse) {
if (
(interactive?.type === 'childrenInteractive' || interactive?.type === 'loopInteractive') &&
interactive.params?.childrenResponse
) {
return extractDeepestInteractive(interactive.params.childrenResponse);
}
return interactive;
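The unwrapping rule extractDeepestInteractive implements can be shown with a pared-down stand-in for WorkflowInteractiveResponseType (the union below is illustrative, not the real type).

```typescript
// Both childrenInteractive and loopInteractive wrappers are followed down
// to the innermost (leaf) interactive response.
type InteractiveSketch =
  | { type: 'userSelect' | 'userInput' }
  | { type: 'childrenInteractive' | 'loopInteractive'; params: { childrenResponse: InteractiveSketch } };

const deepest = (i: InteractiveSketch): InteractiveSketch =>
  (i.type === 'childrenInteractive' || i.type === 'loopInteractive') && i.params?.childrenResponse
    ? deepest(i.params.childrenResponse)
    : i;
```

A loopInteractive wrapping a childrenInteractive wrapping a userSelect resolves to the userSelect leaf.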
@@ -40,6 +44,113 @@ export const getMaxHistoryLimitFromNodes = (nodes: StoreNodeItemType[]): number
return limit * 2;
};
/* value type format */
export const valueTypeFormat = (value: any, type?: WorkflowIOValueTypeEnum) => {
const isObjectString = (value: any) => {
if (typeof value === 'string' && value !== 'false' && value !== 'true') {
const trimmedValue = value.trim();
const isJsonString =
(trimmedValue.startsWith('{') && trimmedValue.endsWith('}')) ||
(trimmedValue.startsWith('[') && trimmedValue.endsWith(']'));
return isJsonString;
}
return false;
};
// 1. Skip formatting for empty values and the any type
if (value === undefined || value === null) return value;
if (!type || type === WorkflowIOValueTypeEnum.any) return value;
// 2. If the value already matches the target type, return it directly
if (
(type === WorkflowIOValueTypeEnum.string && typeof value === 'string') ||
(type === WorkflowIOValueTypeEnum.number && typeof value === 'number') ||
(type === WorkflowIOValueTypeEnum.boolean && typeof value === 'boolean') ||
(type.startsWith('array') && Array.isArray(value)) ||
(type === WorkflowIOValueTypeEnum.object && typeof value === 'object') ||
(type === WorkflowIOValueTypeEnum.chatHistory &&
(Array.isArray(value) || typeof value === 'number')) ||
(type === WorkflowIOValueTypeEnum.datasetQuote && Array.isArray(value)) ||
(type === WorkflowIOValueTypeEnum.selectDataset && Array.isArray(value)) ||
(type === WorkflowIOValueTypeEnum.selectApp && typeof value === 'object')
) {
return value;
}
// 4. Convert according to the target type
// 4.1 Basic type conversion
if (type === WorkflowIOValueTypeEnum.string) {
return typeof value === 'object' ? JSON.stringify(value) : String(value);
}
if (type === WorkflowIOValueTypeEnum.number) {
return Number(value);
}
if (type === WorkflowIOValueTypeEnum.boolean) {
if (typeof value === 'string') {
return value.toLowerCase() === 'true';
}
return Boolean(value);
}
// 4.3 String to object
if (
(type === WorkflowIOValueTypeEnum.object || type.startsWith('array')) &&
typeof value === 'string' &&
value.trim()
) {
const trimmedValue = value.trim();
const isJsonString = isObjectString(trimmedValue);
if (isJsonString) {
try {
const parsed = json5.parse(trimmedValue);
// Check that the parsed result matches the target type
if (type.startsWith('array') && Array.isArray(parsed)) return parsed;
if (type === WorkflowIOValueTypeEnum.object && typeof parsed === 'object') return parsed;
} catch (error) {}
}
}
// 4.4 Array types (value is not an array here). TODO: convert nested data types
if (type.startsWith('array')) {
return [value];
}
// 4.5 Special type handling
if (
[WorkflowIOValueTypeEnum.datasetQuote, WorkflowIOValueTypeEnum.selectDataset].includes(type)
) {
if (isObjectString(value)) {
try {
return json5.parse(value);
} catch (error) {
return [];
}
}
return [];
}
if (
[WorkflowIOValueTypeEnum.selectApp, WorkflowIOValueTypeEnum.object].includes(type) &&
typeof value === 'string'
) {
if (isObjectString(value)) {
try {
return json5.parse(value);
} catch (error) {
return {};
}
}
return {};
}
// Invalid history type
if (type === WorkflowIOValueTypeEnum.chatHistory) {
return 0;
}
// 5. Return the original value by default
return value;
};
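A condensed, standalone approximation of the primitive-type branches of valueTypeFormat, using JSON.parse in place of json5 (so lenient JSON5 inputs are not covered) and plain string targets in place of WorkflowIOValueTypeEnum:

```typescript
// Approximate sketch of valueTypeFormat's primitive coercions; the real
// function also handles arrays, chatHistory, and dataset-specific types.
type Target = 'string' | 'number' | 'boolean' | 'object';

const coerce = (value: any, type?: Target): any => {
  if (value === undefined || value === null) return value;
  if (!type) return value;
  if (type === 'string') {
    return typeof value === 'object' ? JSON.stringify(value) : String(value);
  }
  if (type === 'number') return Number(value);
  if (type === 'boolean') {
    return typeof value === 'string' ? value.toLowerCase() === 'true' : Boolean(value);
  }
  if (type === 'object') {
    if (typeof value === 'object') return value;
    if (typeof value === 'string') {
      const t = value.trim();
      if (t.startsWith('{') && t.endsWith('}')) {
        try {
          return JSON.parse(t);
        } catch {
          return {};
        }
      }
    }
    return {};
  }
  return value;
};
```

So `'42'` coerces to the number `42`, `'True'` to `true`, and a JSON-looking string to a parsed object, while non-parsable input falls back to `{}`.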
/*
Get interaction information (if any) from the last AI message.
What can be done:
@@ -62,7 +173,10 @@ export const getLastInteractiveValue = (
return;
}
if (lastValue.interactive.type === 'childrenInteractive') {
if (
lastValue.interactive.type === 'childrenInteractive' ||
lastValue.interactive.type === 'loopInteractive'
) {
return lastValue.interactive;
}
@@ -83,7 +197,7 @@ export const getLastInteractiveValue = (
return;
};
export const initWorkflowEdgeStatus = (
export const storeEdges2RuntimeEdges = (
edges: StoreEdgeItemType[],
lastInteractive?: WorkflowInteractiveResponseType
): RuntimeEdgeItemType[] => {
@@ -114,7 +228,12 @@ export const getWorkflowEntryNodeIds = (
FlowNodeTypeEnum.pluginInput
];
return nodes
.filter((node) => entryList.includes(node.flowNodeType as any))
.filter(
(node) =>
entryList.includes(node.flowNodeType as any) ||
(!nodes.some((item) => entryList.includes(item.flowNodeType as any)) &&
node.flowNodeType === FlowNodeTypeEnum.tool)
)
.map((item) => item.nodeId);
};
@@ -312,7 +431,6 @@ export const formatVariableValByType = (val: any, valueType?: WorkflowIOValueTyp
if (
[
WorkflowIOValueTypeEnum.object,
WorkflowIOValueTypeEnum.chatHistory,
WorkflowIOValueTypeEnum.datasetQuote,
WorkflowIOValueTypeEnum.selectApp,
WorkflowIOValueTypeEnum.selectDataset

View File

@@ -34,6 +34,8 @@ import { LoopStartNode } from './system/loop/loopStart';
import { LoopEndNode } from './system/loop/loopEnd';
import { FormInputNode } from './system/interactive/formInput';
import { ToolParamsNode } from './system/toolParams';
import { RunToolNode } from './system/runTool';
import { RunToolSetNode } from './system/runToolSet';
const systemNodes: FlowNodeTemplateType[] = [
AiChatModule,
@@ -84,5 +86,7 @@ export const moduleTemplatesFlat: FlowNodeTemplateType[] = [
RunAppNode,
RunAppModule,
LoopStartNode,
LoopEndNode
LoopEndNode,
RunToolNode,
RunToolSetNode
];

View File

@@ -8,7 +8,7 @@ import { i18nT } from '../../../../web/i18n/utils';
export const Input_Template_History: FlowNodeInputItemType = {
key: NodeInputKeyEnum.history,
renderTypeList: [FlowNodeInputTypeEnum.numberInput, FlowNodeInputTypeEnum.reference],
valueType: WorkflowIOValueTypeEnum.chatHistory,
valueType: WorkflowIOValueTypeEnum.chatHistory, // Array / Number
label: i18nT('common:core.module.input.label.chat history'),
description: i18nT('workflow:max_dialog_rounds'),

View File

@@ -28,6 +28,15 @@ type ChildrenInteractive = InteractiveNodeType & {
};
};
type LoopInteractive = InteractiveNodeType & {
type: 'loopInteractive';
params: {
loopResult: any[];
childrenResponse: WorkflowInteractiveResponseType;
currentIndex: number;
};
};
export type UserSelectOptionItemType = {
key: string;
value: string;
@@ -71,5 +80,7 @@ type UserInputInteractive = InteractiveNodeType & {
export type InteractiveNodeResponseType =
| UserSelectInteractive
| UserInputInteractive
| ChildrenInteractive;
| ChildrenInteractive
| LoopInteractive;
export type WorkflowInteractiveResponseType = InteractiveBasicType & InteractiveNodeResponseType;

View File

@@ -0,0 +1,19 @@
import { FlowNodeTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowNodeTemplateType } from '../../type/node';
import { getHandleConfig } from '../utils';
export const RunToolNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.tool,
templateType: FlowNodeTemplateTypeEnum.other,
flowNodeType: FlowNodeTypeEnum.tool,
sourceHandle: getHandleConfig(true, true, true, true),
targetHandle: getHandleConfig(true, true, true, true),
intro: '',
name: '',
showStatus: false,
isTool: true,
version: '4.9.6',
inputs: [],
outputs: []
};

View File

@@ -0,0 +1,19 @@
import { FlowNodeTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowNodeTemplateType } from '../../type/node';
import { getHandleConfig } from '../utils';
export const RunToolSetNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.toolSet,
templateType: FlowNodeTemplateTypeEnum.other,
flowNodeType: FlowNodeTypeEnum.toolSet,
sourceHandle: getHandleConfig(false, false, false, false),
targetHandle: getHandleConfig(false, false, false, false),
intro: '',
name: '',
showStatus: false,
isTool: true,
version: '4.9.6',
inputs: [],
outputs: []
};

View File

@@ -311,6 +311,38 @@ export const appData2FlowNodeIO = ({
};
};
export const toolData2FlowNodeIO = ({
nodes
}: {
nodes: StoreNodeItemType[];
}): {
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];
} => {
const toolNode = nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.tool);
return {
inputs: toolNode?.inputs || [],
outputs: toolNode?.outputs || []
};
};
export const toolSetData2FlowNodeIO = ({
nodes
}: {
nodes: StoreNodeItemType[];
}): {
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];
} => {
const toolSetNode = nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.toolSet);
return {
inputs: toolSetNode?.inputs || [],
outputs: toolSetNode?.outputs || []
};
};
export const formatEditorVariablePickerIcon = (
variables: { key: string; label: string; type?: `${VariableInputEnum}`; required?: boolean }[]
): EditorVariablePickerType[] => {

packages/global/support/mcp/type.d.ts (new file)
View File

@@ -0,0 +1,14 @@
export type McpKeyType = {
_id: string;
key: string;
teamId: string;
tmbId: string;
apps: McpAppType[];
name: string;
};
export type McpAppType = {
appId: string;
toolName: string;
description: string;
};

View File

@@ -11,7 +11,8 @@ export enum UsageSourceEnum {
feishu = 'feishu',
dingtalk = 'dingtalk',
official_account = 'official_account',
pdfParse = 'pdfParse'
pdfParse = 'pdfParse',
mcp = 'mcp'
}
export const UsageSourceMap = {
@@ -47,5 +48,8 @@ export const UsageSourceMap = {
},
[UsageSourceEnum.pdfParse]: {
label: i18nT('account_usage:pdf_parse')
},
[UsageSourceEnum.mcp]: {
label: i18nT('account_usage:mcp')
}
};

View File

@@ -32,3 +32,13 @@ export const getImageBase64 = async (url: string) => {
return Promise.reject(error);
}
};
export const addEndpointToImageUrl = (text: string) => {
const baseURL = process.env.FE_DOMAIN;
if (!baseURL) return text;
// Match image links like /api/system/img/xxx.xx and prepend the baseURL
return text.replace(
/(?<!https?:\/\/[^\s]*)(?:\/api\/system\/img\/[^\s.]*\.[^\s]*)/g,
(match) => `${baseURL}${match}`
);
};
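A standalone sketch of the same rewrite, with the base URL passed in explicitly instead of read from process.env.FE_DOMAIN:

```typescript
// Relative /api/system/img/... links get the base URL prepended; the
// negative lookbehind leaves already-absolute URLs untouched.
const addEndpoint = (text: string, baseURL?: string): string => {
  if (!baseURL) return text;
  return text.replace(
    /(?<!https?:\/\/[^\s]*)(?:\/api\/system\/img\/[^\s.]*\.[^\s]*)/g,
    (match) => `${baseURL}${match}`
  );
};
```

A bare `/api/system/img/a.png` becomes absolute, while a link that already starts with `https://` is left as-is.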

View File

@@ -1,6 +1,30 @@
{
"provider": "Gemini",
"list": [
{
"model": "gemini-2.5-pro-exp-03-25",
"name": "gemini-2.5-pro-exp-03-25",
"maxContext": 1000000,
"maxResponse": 63000,
"quoteMaxToken": 1000000,
"maxTemperature": 1,
"vision": true,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true
},
{
"model": "gemini-2.0-flash",
"name": "gemini-2.0-flash",

View File

@@ -1,6 +1,54 @@
{
"provider": "Grok",
"list": [
{
"model": "grok-3-mini",
"name": "grok-3-mini",
"maxContext": 128000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": false,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-3-mini-fast",
"name": "grok-3-mini-fast",
"maxContext": 128000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": false,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-3",
"name": "grok-3",
@@ -11,7 +59,31 @@
"showTopP": true,
"showStopSign": true,
"vision": false,
"toolChoice": false,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-3-fast",
"name": "grok-3-fast",
"maxContext": 128000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": false,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,

View File

@@ -1,6 +1,78 @@
{
"provider": "OpenAI",
"list": [
{
"model": "gpt-4.1",
"name": "gpt-4.1",
"maxContext": 1000000,
"maxResponse": 32000,
"quoteMaxToken": 1000000,
"maxTemperature": 1.2,
"showTopP": true,
"responseFormatList": ["text", "json_object", "json_schema"],
"showStopSign": true,
"vision": true,
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "gpt-4.1-mini",
"name": "gpt-4.1-mini",
"maxContext": 1000000,
"maxResponse": 32000,
"quoteMaxToken": 1000000,
"maxTemperature": 1.2,
"showTopP": true,
"responseFormatList": ["text", "json_object", "json_schema"],
"showStopSign": true,
"vision": true,
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "gpt-4.1-nano",
"name": "gpt-4.1-nano",
"maxContext": 1000000,
"maxResponse": 32000,
"quoteMaxToken": 1000000,
"maxTemperature": 1.2,
"showTopP": true,
"responseFormatList": ["text", "json_object", "json_schema"],
"showStopSign": true,
"vision": true,
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "gpt-4o-mini",
"name": "GPT-4o-mini",
@@ -9,11 +81,7 @@
"quoteMaxToken": 60000,
"maxTemperature": 1.2,
"showTopP": true,
"responseFormatList": [
"text",
"json_object",
"json_schema"
],
"responseFormatList": ["text", "json_object", "json_schema"],
"showStopSign": true,
"vision": true,
"toolChoice": true,
@@ -37,11 +105,7 @@
"quoteMaxToken": 60000,
"maxTemperature": 1.2,
"showTopP": true,
"responseFormatList": [
"text",
"json_object",
"json_schema"
],
"responseFormatList": ["text", "json_object", "json_schema"],
"showStopSign": true,
"vision": true,
"toolChoice": true,
@@ -275,4 +339,4 @@
"type": "stt"
}
]
}
}

View File

@@ -86,3 +86,19 @@ export async function findAppAndAllChildren({
return [app, ...childDatasets];
}
export const getAppBasicInfoByIds = async ({ teamId, ids }: { teamId: string; ids: string[] }) => {
const apps = await MongoApp.find(
{
teamId,
_id: { $in: ids }
},
'_id name avatar'
).lean();
return apps.map((item) => ({
id: item._id,
name: item.name,
avatar: item.avatar
}));
};

View File

@@ -1,6 +1,11 @@
import { FlowNodeTemplateType } from '@fastgpt/global/core/workflow/type/node.d';
import { FlowNodeTypeEnum, defaultNodeVersion } from '@fastgpt/global/core/workflow/node/constant';
import { appData2FlowNodeIO, pluginData2FlowNodeIO } from '@fastgpt/global/core/workflow/utils';
import {
appData2FlowNodeIO,
pluginData2FlowNodeIO,
toolData2FlowNodeIO,
toolSetData2FlowNodeIO
} from '@fastgpt/global/core/workflow/utils';
import { PluginSourceEnum } from '@fastgpt/global/core/plugin/constants';
import { FlowNodeTemplateTypeEnum } from '@fastgpt/global/core/workflow/constants';
import { getHandleConfig } from '@fastgpt/global/core/workflow/template/utils';
@@ -128,11 +133,41 @@ export async function getChildAppPreviewNode({
(node) => node.flowNodeType === FlowNodeTypeEnum.pluginInput
);
const isTool =
!!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.tool) &&
app.workflow.nodes.length === 1;
const isToolSet =
!!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.toolSet) &&
app.workflow.nodes.length === 1;
const { flowNodeType, nodeIOConfig } = (() => {
if (isToolSet)
return {
flowNodeType: FlowNodeTypeEnum.toolSet,
nodeIOConfig: toolSetData2FlowNodeIO({ nodes: app.workflow.nodes })
};
if (isTool)
return {
flowNodeType: FlowNodeTypeEnum.tool,
nodeIOConfig: toolData2FlowNodeIO({ nodes: app.workflow.nodes })
};
if (isPlugin)
return {
flowNodeType: FlowNodeTypeEnum.pluginModule,
nodeIOConfig: pluginData2FlowNodeIO({ nodes: app.workflow.nodes })
};
return {
flowNodeType: FlowNodeTypeEnum.appModule,
nodeIOConfig: appData2FlowNodeIO({ chatConfig: app.workflow.chatConfig })
};
})();
return {
id: getNanoid(),
pluginId: app.id,
templateType: app.templateType,
flowNodeType: isPlugin ? FlowNodeTypeEnum.pluginModule : FlowNodeTypeEnum.appModule,
flowNodeType,
avatar: app.avatar,
name: app.name,
intro: app.intro,
@@ -141,11 +176,13 @@ export async function getChildAppPreviewNode({
showStatus: app.showStatus,
isTool: true,
version: app.version,
sourceHandle: getHandleConfig(true, true, true, true),
targetHandle: getHandleConfig(true, true, true, true),
...(isPlugin
? pluginData2FlowNodeIO({ nodes: app.workflow.nodes })
: appData2FlowNodeIO({ chatConfig: app.workflow.chatConfig }))
sourceHandle: isToolSet
? getHandleConfig(false, false, false, false)
: getHandleConfig(true, true, true, true),
targetHandle: isToolSet
? getHandleConfig(false, false, false, false)
: getHandleConfig(true, true, true, true),
...nodeIOConfig
};
}

View File

@@ -11,7 +11,7 @@ import axios from 'axios';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
import { i18nT } from '../../../web/i18n/utils';
import { addLog } from '../../common/system/log';
import { getImageBase64 } from '../../common/file/image/utils';
import { addEndpointToImageUrl, getImageBase64 } from '../../common/file/image/utils';
export const filterGPTMessageByMaxContext = async ({
messages = [],
@@ -87,26 +87,17 @@ export const loadRequestMessages = async ({
useVision?: boolean;
origin?: string;
}) => {
const replaceLinkUrl = (text: string) => {
const baseURL = process.env.FE_DOMAIN;
if (!baseURL) return text;
// Match image links like /api/system/img/xxx.xx and prepend the baseURL
return text.replace(
/(?<!https?:\/\/[^\s]*)(?:\/api\/system\/img\/[^\s.]*\.[^\s]*)/g,
(match) => `${baseURL}${match}`
);
};
const parseSystemMessage = (
content: string | ChatCompletionContentPartText[]
): string | ChatCompletionContentPartText[] | undefined => {
if (typeof content === 'string') {
if (!content) return;
return replaceLinkUrl(content);
return addEndpointToImageUrl(content);
}
const arrayContent = content
.filter((item) => item.text)
.map((item) => ({ ...item, text: replaceLinkUrl(item.text) }));
.map((item) => ({ ...item, text: addEndpointToImageUrl(item.text) }));
if (arrayContent.length === 0) return;
return arrayContent;
};

View File

@@ -7,7 +7,7 @@ import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
getWorkflowEntryNodeIds,
initWorkflowEdgeStatus,
storeEdges2RuntimeEdges,
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
@@ -70,7 +70,7 @@ export const dispatchAppRequest = async (props: Props): Promise<Response> => {
appData.modules,
getWorkflowEntryNodeIds(appData.modules)
),
runtimeEdges: initWorkflowEdgeStatus(appData.edges),
runtimeEdges: storeEdges2RuntimeEdges(appData.edges),
histories: chatHistories,
query: runtimePrompt2ChatsValue({
files,

View File

@@ -22,7 +22,7 @@ import { formatModelChars2Points } from '../../../../../support/wallet/usage/uti
import { getHistoryPreview } from '@fastgpt/global/core/chat/utils';
import { runToolWithFunctionCall } from './functionCall';
import { runToolWithPromptCall } from './promptCall';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { getNanoid, replaceVariable } from '@fastgpt/global/common/string/tools';
import { getMultiplePrompt, Prompt_Tool_Call } from './constants';
import { filterToolResponseToPreview } from './utils';
import { InteractiveNodeResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
@@ -188,6 +188,8 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
maxRunToolTimes: 30,
@@ -198,6 +200,8 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
if (toolModel.functionCall) {
return runToolWithFunctionCall({
...props,
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
messages: adaptMessages,
@@ -226,6 +230,8 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
return runToolWithPromptCall({
...props,
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
messages: adaptMessages,

View File

@@ -17,6 +17,7 @@ import { MongoDataset } from '../../../dataset/schema';
import { i18nT } from '../../../../../web/i18n/utils';
import { filterDatasetsByTmbId } from '../../../dataset/utils';
import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';
import { addEndpointToImageUrl } from '../../../../common/file/image/utils';
type DatasetSearchProps = ModuleDispatchProps<{
[NodeInputKeyEnum.datasetSelectList]: SelectedDatasetType;
@@ -246,7 +247,7 @@ export async function dispatchDatasetSearch(
[DispatchNodeResponseKeyEnum.toolResponses]: searchRes.map((item) => ({
sourceName: item.sourceName,
updateTime: item.updateTime,
content: `${item.q}\n${item.a}`.trim()
content: addEndpointToImageUrl(`${item.q}\n${item.a}`.trim())
}))
};
}

View File

@@ -37,7 +37,8 @@ import { dispatchQueryExtension } from './tools/queryExternsion';
import { dispatchRunPlugin } from './plugin/run';
import { dispatchPluginInput } from './plugin/runInput';
import { dispatchPluginOutput } from './plugin/runOutput';
import { formatHttpError, removeSystemVariable, valueTypeFormat } from './utils';
import { formatHttpError, removeSystemVariable, rewriteRuntimeWorkFlow } from './utils';
import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
import {
filterWorkflowEdges,
checkNodeRunStatus,
@@ -74,6 +75,7 @@ import { dispatchFormInput } from './interactive/formInput';
import { dispatchToolParams } from './agent/runTool/toolParams';
import { getErrText } from '@fastgpt/global/common/error/utils';
import { filterModuleTypeList } from '@fastgpt/global/core/chat/utils';
import { dispatchRunTool } from './plugin/runTool';
const callbackMap: Record<FlowNodeTypeEnum, Function> = {
[FlowNodeTypeEnum.workflowStart]: dispatchWorkflowStart,
@@ -104,6 +106,7 @@ const callbackMap: Record<FlowNodeTypeEnum, Function> = {
[FlowNodeTypeEnum.loopStart]: dispatchLoopStart,
[FlowNodeTypeEnum.loopEnd]: dispatchLoopEnd,
[FlowNodeTypeEnum.formInput]: dispatchFormInput,
[FlowNodeTypeEnum.tool]: dispatchRunTool,
// none
[FlowNodeTypeEnum.systemConfig]: dispatchSystemConfig,
@@ -111,6 +114,7 @@ const callbackMap: Record<FlowNodeTypeEnum, Function> = {
[FlowNodeTypeEnum.emptyNode]: () => Promise.resolve(),
[FlowNodeTypeEnum.globalVariable]: () => Promise.resolve(),
[FlowNodeTypeEnum.comment]: () => Promise.resolve(),
[FlowNodeTypeEnum.toolSet]: () => Promise.resolve(),
[FlowNodeTypeEnum.runApp]: dispatchAppRequest // abandoned
};
@@ -136,6 +140,8 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
...props
} = data;
rewriteRuntimeWorkFlow(runtimeNodes, runtimeEdges);
// Initialize and auto-increment the dispatch depth to avoid infinite nesting
if (!props.workflowDispatchDeep) {
props.workflowDispatchDeep = 1;
@@ -643,9 +649,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
) {
props.workflowStreamResponse?.({
event: SseResponseEventEnum.flowNodeResponse,
data: {
...formatResponseData
}
data: formatResponseData
});
}

View File

@@ -17,19 +17,25 @@ type Response = DispatchNodeResultType<{
export const dispatchWorkflowStart = (props: Record<string, any>): Response => {
const {
query,
variables,
params: { userChatInput }
} = props as UserChatInputProps;
const { text, files } = chatValue2RuntimePrompt(query);
const queryFiles = files
.map((item) => {
return item?.url ?? '';
})
.filter(Boolean);
const variablesFiles: string[] = Array.isArray(variables?.fileUrlList)
? variables.fileUrlList
: [];
return {
[DispatchNodeResponseKeyEnum.nodeResponse]: {},
[NodeInputKeyEnum.userChatInput]: text || userChatInput,
[NodeOutputKeyEnum.userFiles]: files
.map((item) => {
return item?.url ?? '';
})
.filter(Boolean)
[NodeOutputKeyEnum.userFiles]: [...queryFiles, ...variablesFiles]
// [NodeInputKeyEnum.inputFiles]: files
};
};

View File

@@ -8,12 +8,18 @@ import { dispatchWorkFlow } from '..';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { AIChatItemValueItemType, ChatHistoryItemResType } from '@fastgpt/global/core/chat/type';
import { cloneDeep } from 'lodash';
import {
LoopInteractive,
WorkflowInteractiveResponseType
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { storeEdges2RuntimeEdges } from '@fastgpt/global/core/workflow/runtime/utils';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.loopInputArray]: Array<any>;
[NodeInputKeyEnum.childrenNodeIdList]: string[];
}>;
type Response = DispatchNodeResultType<{
[DispatchNodeResponseKeyEnum.interactive]?: LoopInteractive;
[NodeOutputKeyEnum.loopArray]: Array<any>;
}>;
@@ -21,6 +27,7 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
const {
params,
runtimeEdges,
lastInteractive,
runtimeNodes,
node: { name }
} = props;
@@ -29,6 +36,8 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
if (!Array.isArray(loopInputArray)) {
return Promise.reject('Input value is not an array');
}
// Max loop times
const maxLength = process.env.WORKFLOW_MAX_LOOP_TIMES
? Number(process.env.WORKFLOW_MAX_LOOP_TIMES)
: 50;
@@ -36,34 +45,63 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
return Promise.reject(`Input array length cannot be greater than ${maxLength}`);
}
const outputValueArr = [];
const loopDetail: ChatHistoryItemResType[] = [];
const interactiveData =
lastInteractive?.type === 'loopInteractive' ? lastInteractive?.params : undefined;
const lastIndex = interactiveData?.currentIndex;
const outputValueArr = interactiveData ? interactiveData.loopResult : [];
const loopResponseDetail: ChatHistoryItemResType[] = [];
let assistantResponses: AIChatItemValueItemType[] = [];
let totalPoints = 0;
let newVariables: Record<string, any> = props.variables;
let interactiveResponse: WorkflowInteractiveResponseType | undefined = undefined;
let index = 0;
for await (const item of loopInputArray.filter(Boolean)) {
runtimeNodes.forEach((node) => {
if (
childrenNodeIdList.includes(node.nodeId) &&
node.flowNodeType === FlowNodeTypeEnum.loopStart
) {
node.isEntry = true;
node.inputs.forEach((input) => {
if (input.key === NodeInputKeyEnum.loopStartInput) {
input.value = item;
} else if (input.key === NodeInputKeyEnum.loopStartIndex) {
input.value = index++;
}
});
}
});
// Skip iterations that have already run
if (lastIndex && index < lastIndex) {
index++;
continue;
}
// It takes effect only once in current loop
const isInteractiveResponseIndex = !!interactiveData && index === interactiveData?.currentIndex;
// Init entry
if (isInteractiveResponseIndex) {
runtimeNodes.forEach((node) => {
if (interactiveData?.childrenResponse?.entryNodeIds.includes(node.nodeId)) {
node.isEntry = true;
}
});
} else {
runtimeNodes.forEach((node) => {
if (!childrenNodeIdList.includes(node.nodeId)) return;
// Init loop start entry
if (node.flowNodeType === FlowNodeTypeEnum.loopStart) {
node.isEntry = true;
node.inputs.forEach((input) => {
if (input.key === NodeInputKeyEnum.loopStartInput) {
input.value = item;
} else if (input.key === NodeInputKeyEnum.loopStartIndex) {
input.value = index + 1;
}
});
}
});
}
index++;
const response = await dispatchWorkFlow({
...props,
lastInteractive: interactiveData?.childrenResponse,
variables: newVariables,
runtimeEdges: cloneDeep(runtimeEdges)
runtimeNodes,
runtimeEdges: cloneDeep(
storeEdges2RuntimeEdges(runtimeEdges, interactiveData?.childrenResponse)
)
});
const loopOutputValue = response.flowResponses.find(
@@ -71,8 +109,10 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
)?.loopOutputValue;
// Concat runtime response
outputValueArr.push(loopOutputValue);
loopDetail.push(...response.flowResponses);
if (!response.workflowInteractiveResponse) {
outputValueArr.push(loopOutputValue);
}
loopResponseDetail.push(...response.flowResponses);
assistantResponses.push(...response.assistantResponses);
totalPoints += response.flowUsages.reduce((acc, usage) => acc + usage.totalPoints, 0);
@@ -81,15 +121,32 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
...newVariables,
...response.newVariables
};
// handle interactive response
if (response.workflowInteractiveResponse) {
interactiveResponse = response.workflowInteractiveResponse;
break;
}
}
return {
[DispatchNodeResponseKeyEnum.interactive]: interactiveResponse
? {
type: 'loopInteractive',
params: {
currentIndex: index - 1,
childrenResponse: interactiveResponse,
loopResult: outputValueArr
}
}
: undefined,
[DispatchNodeResponseKeyEnum.assistantResponses]: assistantResponses,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints,
loopInput: loopInputArray,
loopResult: outputValueArr,
loopDetail: loopDetail
loopDetail: loopResponseDetail,
mergeSignId: props.node.nodeId
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{

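The resume bookkeeping added above can be sketched in isolation: completed indices are skipped, the index that paused is replayed, and the partial `loopResult` is carried over. A minimal sketch with simplified types (`runItem` stands in for dispatching the child workflow; the real code additionally replays the child interactive state):

```typescript
// Simplified sketch of the loop-resume bookkeeping in dispatchLoop.
// LoopInteractiveParams mirrors the shape stored on pause: currentIndex is the
// 0-based index that paused, loopResult the outputs collected so far.
type LoopInteractiveParams = {
  currentIndex: number;
  loopResult: any[];
};

// runItem returns either a value or the pause signal 'PAUSE'.
function runLoop(
  items: any[],
  runItem: (item: any, index: number) => any,
  resume?: LoopInteractiveParams
): { results: any[]; pausedAt?: number } {
  const results = resume ? [...resume.loopResult] : [];
  let index = 0;
  for (const item of items) {
    // Skip iterations that already completed before the pause
    if (resume && index < resume.currentIndex) {
      index++;
      continue;
    }
    const out = runItem(item, index);
    index++;
    if (out === 'PAUSE') {
      // Record the paused index so a later call can resume here
      return { results, pausedAt: index - 1 };
    }
    results.push(out);
  }
  return { results };
}
```

On resume, the paused index runs again from the top, which matches the real dispatcher re-entering the loop body with the stored `childrenResponse`.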

@@ -5,7 +5,7 @@ import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runti
import { getChildAppRuntimeById } from '../../../app/plugin/controller';
import {
getWorkflowEntryNodeIds,
initWorkflowEdgeStatus,
storeEdges2RuntimeEdges,
storeNodes2RuntimeNodes
} from '@fastgpt/global/core/workflow/runtime/utils';
import { DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
@@ -101,7 +101,7 @@ export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPlugi
}).value,
chatConfig: {},
runtimeNodes,
runtimeEdges: initWorkflowEdgeStatus(plugin.edges)
runtimeEdges: storeEdges2RuntimeEdges(plugin.edges)
});
const output = flowResponses.find((item) => item.moduleType === FlowNodeTypeEnum.pluginOutput);
if (output) {


@@ -5,7 +5,8 @@ import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
getWorkflowEntryNodeIds,
initWorkflowEdgeStatus,
storeEdges2RuntimeEdges,
rewriteNodeOutputByHistories,
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
@@ -107,9 +108,15 @@ export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
lastInteractive?.type === 'childrenInteractive'
? lastInteractive.params.childrenResponse
: undefined;
const entryNodeIds = getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined);
const runtimeNodes = storeNodes2RuntimeNodes(nodes, entryNodeIds);
const runtimeEdges = initWorkflowEdgeStatus(edges, childrenInteractive);
const runtimeNodes = rewriteNodeOutputByHistories(
storeNodes2RuntimeNodes(
nodes,
getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined)
),
childrenInteractive
);
const runtimeEdges = storeEdges2RuntimeEdges(edges, childrenInteractive);
const theQuery = childrenInteractive
? query
: runtimePrompt2ChatsValue({ files: userInputFiles, text: userChatInput });
@@ -170,7 +177,8 @@ export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
totalPoints: usagePoints,
query: userChatInput,
textOutput: text,
pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined
pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined,
mergeSignId: props.node.nodeId
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{


@@ -0,0 +1,60 @@
import {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
type RunToolProps = ModuleDispatchProps<{
toolData: {
name: string;
url: string;
};
}>;
type RunToolResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.rawResponse]: any;
}>;
export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolResponse> => {
const {
params,
node: { avatar }
} = props;
const { toolData, ...restParams } = params;
const { name: toolName, url } = toolData;
const client = new Client({
name: 'FastGPT-MCP-client',
version: '1.0.0'
});
const result = await (async () => {
try {
const transport = new SSEClientTransport(new URL(url));
await client.connect(transport);
return await client.callTool({
name: toolName,
arguments: restParams
});
} catch (error) {
console.error('Error running MCP tool:', error);
return Promise.reject(error);
} finally {
await client.close();
}
})();
return {
[DispatchNodeResponseKeyEnum.nodeResponse]: {
toolRes: result,
moduleLogo: avatar
},
[DispatchNodeResponseKeyEnum.toolResponses]: result,
[NodeOutputKeyEnum.rawResponse]: result
};
};


@@ -10,7 +10,8 @@ import {
SseResponseEventEnum
} from '@fastgpt/global/core/workflow/runtime/constants';
import axios from 'axios';
import { formatHttpError, valueTypeFormat } from '../utils';
import { formatHttpError } from '../utils';
import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
import { SERVICE_LOCAL_HOST } from '../../../../common/system/tools';
import { addLog } from '../../../../common/system/log';
import { DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';


@@ -2,7 +2,7 @@ import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/
import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import axios from 'axios';
import { valueTypeFormat } from '../utils';
import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
import { SERVICE_LOCAL_HOST } from '../../../../common/system/tools';
import { addLog } from '../../../../common/system/log';
import { DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';


@@ -10,8 +10,9 @@ import {
} from '@fastgpt/global/core/workflow/runtime/utils';
import { TUpdateListItem } from '@fastgpt/global/core/workflow/template/system/variableUpdate/type';
import { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { removeSystemVariable, valueTypeFormat } from '../utils';
import { removeSystemVariable } from '../utils';
import { isValidReferenceValue } from '@fastgpt/global/core/workflow/utils';
import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.updateList]: TUpdateListItem[];


@@ -7,6 +7,7 @@ import {
} from '@fastgpt/global/core/workflow/constants';
import {
RuntimeEdgeItemType,
RuntimeNodeItemType,
SystemVariablesType
} from '@fastgpt/global/core/workflow/runtime/type';
import { responseWrite } from '../../../common/response';
@@ -14,7 +15,8 @@ import { NextApiResponse } from 'next';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { getNanoid } from '@fastgpt/global/common/string/tools';
import { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
import json5 from 'json5';
import { getMCPToolRuntimeNode } from '@fastgpt/global/core/app/mcpTools/utils';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
export const getWorkflowResponseWrite = ({
res,
@@ -104,49 +106,6 @@ export const getHistories = (history?: ChatItemType[] | number, histories: ChatI
return [...systemHistories, ...filterHistories];
};
/* value type format */
export const valueTypeFormat = (value: any, type?: WorkflowIOValueTypeEnum) => {
if (value === undefined) return;
if (!type || type === WorkflowIOValueTypeEnum.any) return value;
if (type === 'string') {
if (typeof value !== 'object') return String(value);
return JSON.stringify(value);
}
if (type === 'number') return Number(value);
if (type === 'boolean') {
if (typeof value === 'string') return value === 'true';
return Boolean(value);
}
try {
if (type === WorkflowIOValueTypeEnum.arrayString && typeof value === 'string') {
return [value];
}
if (
type &&
[
WorkflowIOValueTypeEnum.object,
WorkflowIOValueTypeEnum.chatHistory,
WorkflowIOValueTypeEnum.datasetQuote,
WorkflowIOValueTypeEnum.selectApp,
WorkflowIOValueTypeEnum.selectDataset,
WorkflowIOValueTypeEnum.arrayString,
WorkflowIOValueTypeEnum.arrayNumber,
WorkflowIOValueTypeEnum.arrayBoolean,
WorkflowIOValueTypeEnum.arrayObject,
WorkflowIOValueTypeEnum.arrayAny
].includes(type) &&
typeof value !== 'object'
) {
return json5.parse(value);
}
} catch (error) {
return value;
}
return value;
};
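The coercion rules being relocated above can be exercised in isolation. A trimmed sketch for illustration only (the real helper lives in `@fastgpt/global/core/workflow/runtime/utils`, uses `json5` rather than `JSON.parse`, and the `WorkflowIOValueTypeEnum` constants rather than raw strings):

```typescript
// Trimmed re-implementation of the valueTypeFormat coercion rules.
function format(value: any, type?: string): any {
  if (value === undefined) return;
  if (!type || type === 'any') return value;
  if (type === 'string') return typeof value === 'object' ? JSON.stringify(value) : String(value);
  if (type === 'number') return Number(value);
  if (type === 'boolean') return typeof value === 'string' ? value === 'true' : Boolean(value);
  try {
    // A bare string targeted at arrayString is wrapped, not parsed
    if (type === 'arrayString' && typeof value === 'string') return [value];
    // Object-like targets parse non-object input as JSON
    if ((type.startsWith('array') || type === 'object') && typeof value !== 'object') {
      return JSON.parse(value);
    }
  } catch {
    return value; // unparsable input falls through unchanged
  }
  return value;
}
```

Note the asymmetry: `'abc'` targeted at `arrayString` becomes `['abc']`, while `'[1,2]'` targeted at `arrayAny` is parsed into a real array, and anything unparsable is returned as-is.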
export const checkQuoteQAValue = (quoteQA?: SearchDataResponseItemType[]) => {
if (!quoteQA) return undefined;
if (quoteQA.length === 0) {
@@ -199,3 +158,53 @@ export const formatHttpError = (error: any) => {
status: error?.status
};
};
export const rewriteRuntimeWorkFlow = (
nodes: RuntimeNodeItemType[],
edges: RuntimeEdgeItemType[]
) => {
const toolSetNodes = nodes.filter((node) => node.flowNodeType === FlowNodeTypeEnum.toolSet);
if (toolSetNodes.length === 0) {
return;
}
const nodeIdsToRemove = new Set<string>();
for (const toolSetNode of toolSetNodes) {
nodeIdsToRemove.add(toolSetNode.nodeId);
const toolList =
toolSetNode.inputs.find((input) => input.key === 'toolSetData')?.value?.toolList || [];
const url = toolSetNode.inputs.find((input) => input.key === 'toolSetData')?.value?.url;
const incomingEdges = edges.filter((edge) => edge.target === toolSetNode.nodeId);
for (const tool of toolList) {
const newToolNode = getMCPToolRuntimeNode({ avatar: toolSetNode.avatar, tool, url });
nodes.push({ ...newToolNode, name: `${toolSetNode.name} / ${tool.name}` });
for (const inEdge of incomingEdges) {
edges.push({
source: inEdge.source,
target: newToolNode.nodeId,
sourceHandle: inEdge.sourceHandle,
targetHandle: 'selectedTools',
status: inEdge.status
});
}
}
}
for (let i = nodes.length - 1; i >= 0; i--) {
if (nodeIdsToRemove.has(nodes[i].nodeId)) {
nodes.splice(i, 1);
}
}
for (let i = edges.length - 1; i >= 0; i--) {
if (nodeIdsToRemove.has(edges[i].target)) {
edges.splice(i, 1);
}
}
};
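The expansion performed by `rewriteRuntimeWorkFlow` can be pictured with simplified shapes: every toolSet node is replaced by one node per tool, each rewired from the toolSet's original incoming edges, and the toolSet plus its original edges are removed in place. A minimal sketch under those simplified types (not the real `RuntimeNodeItemType`/`RuntimeEdgeItemType`):

```typescript
// Simplified model of the toolSet expansion.
type Node = { nodeId: string; type: 'toolSet' | 'tool'; tools?: string[] };
type Edge = { source: string; target: string };

function expandToolSets(nodes: Node[], edges: Edge[]): void {
  const removed = new Set<string>();
  // filter() snapshots the toolSet nodes, so pushing new nodes below is safe
  for (const toolSet of nodes.filter((n) => n.type === 'toolSet')) {
    removed.add(toolSet.nodeId);
    const incoming = edges.filter((e) => e.target === toolSet.nodeId);
    for (const tool of toolSet.tools ?? []) {
      const id = `${toolSet.nodeId}/${tool}`;
      nodes.push({ nodeId: id, type: 'tool' });
      // Rewire every original incoming edge to the per-tool node
      for (const e of incoming) edges.push({ source: e.source, target: id });
    }
  }
  // Drop the toolSet nodes and the edges that pointed at them, in place
  for (let i = nodes.length - 1; i >= 0; i--) if (removed.has(nodes[i].nodeId)) nodes.splice(i, 1);
  for (let i = edges.length - 1; i >= 0; i--) if (removed.has(edges[i].target)) edges.splice(i, 1);
}
```

The in-place mutation mirrors the real helper, which also edits the `nodes` and `edges` arrays directly rather than returning new ones.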


@@ -3,6 +3,7 @@
"version": "1.0.0",
"dependencies": {
"@fastgpt/global": "workspace:*",
"@modelcontextprotocol/sdk": "^1.9.0",
"@node-rs/jieba": "2.0.1",
"@xmldom/xmldom": "^0.8.10",
"@zilliz/milvus2-sdk-node": "2.4.2",


@@ -0,0 +1,58 @@
import {
TeamCollectionName,
TeamMemberCollectionName
} from '@fastgpt/global/support/user/team/constant';
import { Schema, getMongoModel } from '../../common/mongo';
import { McpKeyType } from '@fastgpt/global/support/mcp/type';
import { getNanoid } from '@fastgpt/global/common/string/tools';
import { AppCollectionName } from '../../core/app/schema';
export const mcpCollectionName = 'mcp_keys';
const McpKeySchema = new Schema({
name: {
type: String,
required: true
},
key: {
type: String,
required: true,
unique: true,
default: () => getNanoid(24)
},
teamId: {
type: Schema.Types.ObjectId,
ref: TeamCollectionName,
required: true
},
tmbId: {
type: Schema.Types.ObjectId,
ref: TeamMemberCollectionName,
required: true
},
apps: {
type: [
{
appId: {
type: Schema.Types.ObjectId,
ref: AppCollectionName,
required: true
},
toolName: {
type: String
},
description: {
type: String
}
}
],
default: []
}
});
try {
  // Intentionally left empty in this change; index definitions, if any, would go here
} catch (error) {
  console.log(error);
}
export const MongoMcpKey = getMongoModel<McpKeyType>(mcpCollectionName, McpKeySchema);


@@ -0,0 +1,45 @@
import { PermissionValueType } from '@fastgpt/global/support/permission/type';
import { AuthModeType, AuthResponseType } from '../type';
import { McpKeyType } from '@fastgpt/global/support/mcp/type';
import { authUserPer } from '../user/auth';
import { MongoMcpKey } from '../../mcp/schema';
import { CommonErrEnum } from '@fastgpt/global/common/error/code/common';
import { TeamErrEnum } from '@fastgpt/global/common/error/code/team';
export const authMcp = async ({
mcpId,
per,
...props
}: AuthModeType & {
mcpId: string;
per: PermissionValueType;
}): Promise<
AuthResponseType & {
mcp: McpKeyType;
}
> => {
const { userId, teamId, tmbId, permission, isRoot } = await authUserPer(props);
const mcp = await MongoMcpKey.findOne({ _id: mcpId }).lean();
if (!mcp) {
return Promise.reject(CommonErrEnum.invalidResource);
}
if (teamId !== String(mcp.teamId)) {
return Promise.reject(TeamErrEnum.unPermission);
}
if (!permission.hasManagePer && !isRoot && tmbId !== String(mcp.tmbId)) {
return Promise.reject(TeamErrEnum.unPermission);
}
return {
mcp,
userId,
teamId,
tmbId,
isRoot,
permission
};
};
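The permission rule above boils down to: same team first, and then manage permission, root, or ownership of the key. A reduced sketch of that predicate (a hypothetical helper, not part of the codebase):

```typescript
// Sketch of the authMcp ownership rule: a key is operable when it belongs to
// the caller's team AND the caller has manage permission, is root, or owns it.
function canOperateMcpKey(opts: {
  callerTeamId: string;
  callerTmbId: string;
  hasManagePer: boolean;
  isRoot: boolean;
  key: { teamId: string; tmbId: string };
}): boolean {
  // Cross-team access is rejected unconditionally
  if (opts.callerTeamId !== opts.key.teamId) return false;
  return opts.hasManagePer || opts.isRoot || opts.callerTmbId === opts.key.tmbId;
}
```

Note that in the real `authMcp` the ObjectIds are stringified before comparison, which the plain-string sketch sidesteps.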


@@ -8,6 +8,8 @@ type Props = FlexProps & {
size?: string;
onClick?: () => void;
hoverColor?: string;
hoverBg?: string;
hoverBorderColor?: string;
tip?: string;
isLoading?: boolean;
};
@@ -16,6 +18,8 @@ const MyIconButton = ({
icon,
onClick,
hoverColor = 'primary.600',
hoverBg = 'myGray.05',
hoverBorderColor = '',
size = '1rem',
tip,
isLoading = false,
@@ -33,8 +37,9 @@ const MyIconButton = ({
transition={'background 0.1s'}
cursor={'pointer'}
_hover={{
bg: 'myGray.05',
color: hoverColor
bg: hoverBg,
color: hoverColor,
borderColor: hoverBorderColor
}}
onClick={() => {
if (isLoading) return;


@@ -17,6 +17,7 @@ export const iconPaths = {
'common/addLight': () => import('./icons/common/addLight.svg'),
'common/addUser': () => import('./icons/common/addUser.svg'),
'common/administrator': () => import('./icons/common/administrator.svg'),
'common/app': () => import('./icons/common/app.svg'),
'common/arrowLeft': () => import('./icons/common/arrowLeft.svg'),
'common/arrowRight': () => import('./icons/common/arrowRight.svg'),
'common/backFill': () => import('./icons/common/backFill.svg'),
@@ -32,6 +33,7 @@ export const iconPaths = {
'common/courseLight': () => import('./icons/common/courseLight.svg'),
'common/customTitleLight': () => import('./icons/common/customTitleLight.svg'),
'common/data': () => import('./icons/common/data.svg'),
'common/detail': () => import('./icons/common/detail.svg'),
'common/dingtalkFill': () => import('./icons/common/dingtalkFill.svg'),
'common/disable': () => import('./icons/common/disable.svg'),
'common/downArrowFill': () => import('./icons/common/downArrowFill.svg'),
@@ -157,6 +159,8 @@ export const iconPaths = {
'core/app/type/httpPlugin': () => import('./icons/core/app/type/httpPlugin.svg'),
'core/app/type/httpPluginFill': () => import('./icons/core/app/type/httpPluginFill.svg'),
'core/app/type/jsonImport': () => import('./icons/core/app/type/jsonImport.svg'),
'core/app/type/mcpTools': () => import('./icons/core/app/type/mcpTools.svg'),
'core/app/type/mcpToolsFill': () => import('./icons/core/app/type/mcpToolsFill.svg'),
'core/app/type/plugin': () => import('./icons/core/app/type/plugin.svg'),
'core/app/type/pluginFill': () => import('./icons/core/app/type/pluginFill.svg'),
'core/app/type/pluginLight': () => import('./icons/core/app/type/pluginLight.svg'),
@@ -169,6 +173,7 @@ export const iconPaths = {
'core/app/variable/select': () => import('./icons/core/app/variable/select.svg'),
'core/app/variable/textarea': () => import('./icons/core/app/variable/textarea.svg'),
'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
'core/chat/backText': () => import('./icons/core/chat/backText.svg'),
'core/chat/cancelSpeak': () => import('./icons/core/chat/cancelSpeak.svg'),
'core/chat/chatFill': () => import('./icons/core/chat/chatFill.svg'),
'core/chat/chatLight': () => import('./icons/core/chat/chatLight.svg'),
@@ -183,7 +188,6 @@ export const iconPaths = {
'core/chat/feedback/goodLight': () => import('./icons/core/chat/feedback/goodLight.svg'),
'core/chat/fileSelect': () => import('./icons/core/chat/fileSelect.svg'),
'core/chat/finishSpeak': () => import('./icons/core/chat/finishSpeak.svg'),
'core/chat/backText':() => import('./icons/core/chat/backText.svg'),
'core/chat/imgSelect': () => import('./icons/core/chat/imgSelect.svg'),
'core/chat/quoteFill': () => import('./icons/core/chat/quoteFill.svg'),
'core/chat/quoteSign': () => import('./icons/core/chat/quoteSign.svg'),
@@ -425,8 +429,8 @@ export const iconPaths = {
'phoneTabbar/toolFill': () => import('./icons/phoneTabbar/toolFill.svg'),
'plugins/dingding': () => import('./icons/plugins/dingding.svg'),
'plugins/doc2x': () => import('./icons/plugins/doc2x.svg'),
'plugins/qiwei': () => import('./icons/plugins/qiwei.svg'),
'plugins/email': () => import('./icons/plugins/email.svg'),
'plugins/qiwei': () => import('./icons/plugins/qiwei.svg'),
'plugins/textEditor': () => import('./icons/plugins/textEditor.svg'),
point: () => import('./icons/point.svg'),
preview: () => import('./icons/preview.svg'),


@@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" >
<path fill-rule="evenodd" clip-rule="evenodd" d="M2.06677 2.8C2.06677 2.35817 2.42494 2 2.86677 2H6.60011C7.04193 2 7.40011 2.35817 7.40011 2.8V6.53333C7.40011 6.97516 7.04193 7.33333 6.60011 7.33333H2.86677C2.42494 7.33333 2.06677 6.97516 2.06677 6.53333V2.8ZM3.40011 6V3.33333H6.06677V6H3.40011Z" />
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.73344 2.8C8.73344 2.35817 9.09161 2 9.53344 2H13.2668C13.7086 2 14.0668 2.35817 14.0668 2.8V6.53333C14.0668 6.97516 13.7086 7.33333 13.2668 7.33333H9.53344C9.09161 7.33333 8.73344 6.97516 8.73344 6.53333V2.8ZM10.0668 6V3.33333H12.7334V6H10.0668Z" />
<path fill-rule="evenodd" clip-rule="evenodd" d="M9.53344 8.66667C9.09161 8.66667 8.73344 9.02484 8.73344 9.46667V13.2C8.73344 13.6418 9.09161 14 9.53344 14H13.2668C13.7086 14 14.0668 13.6418 14.0668 13.2V9.46667C14.0668 9.02484 13.7086 8.66667 13.2668 8.66667H9.53344ZM10.0668 10V12.6667H12.7334V10H10.0668Z" />
<path fill-rule="evenodd" clip-rule="evenodd" d="M2.06677 9.46667C2.06677 9.02484 2.42494 8.66667 2.86677 8.66667H6.60011C7.04193 8.66667 7.40011 9.02484 7.40011 9.46667V13.2C7.40011 13.6418 7.04193 14 6.60011 14H2.86677C2.42494 14 2.06677 13.6418 2.06677 13.2V9.46667ZM3.40011 12.6667V10H6.06677V12.6667H3.40011Z" />
</svg>


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" fill="none">
<path fill-rule="evenodd" clip-rule="evenodd" d="M1.19888 4.05802C1.19888 3.68983 1.51616 3.39136 1.90755 3.39136H14.0925C14.4838 3.39136 14.8011 3.68983 14.8011 4.05802C14.8011 4.42621 14.4838 4.72469 14.0925 4.72469H1.90755C1.51616 4.72469 1.19888 4.42621 1.19888 4.05802ZM1.19888 7.95852C1.19888 7.59033 1.51616 7.29185 1.90755 7.29185H14.0925C14.4838 7.29185 14.8011 7.59033 14.8011 7.95852C14.8011 8.32671 14.4838 8.62518 14.0925 8.62518H1.90755C1.51616 8.62518 1.19888 8.32671 1.19888 7.95852ZM1.19888 11.942C1.19888 11.5738 1.51616 11.2753 1.90755 11.2753H9.7136C10.105 11.2753 10.4223 11.5738 10.4223 11.942C10.4223 12.3102 10.105 12.6087 9.7136 12.6087H1.90755C1.51616 12.6087 1.19888 12.3102 1.19888 11.942Z" />
</svg>


@@ -0,0 +1,3 @@
<svg viewBox="0 0 12 13" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.01149 1.30106C7.46783 0.771565 6.58633 0.771547 6.04264 1.30107L0.792224 6.41449C0.610991 6.59101 0.317157 6.59101 0.135923 6.41449C-0.0453077 6.23798 -0.0453077 5.95182 0.135923 5.77531L5.38633 0.661891C6.29247 -0.220617 7.76166 -0.220641 8.6678 0.661882C9.20454 1.1846 9.42336 1.8997 9.32429 2.57927C10.0221 2.48276 10.7563 2.69587 11.293 3.21858L11.2933 3.21882L11.3203 3.24523C12.2265 4.12776 12.2265 5.55862 11.3203 6.44112L6.57187 11.0658C6.51148 11.1246 6.51145 11.2199 6.57184 11.2788L7.54692 12.2284C7.72813 12.4049 7.72813 12.6911 7.54689 12.8676C7.36565 13.0441 7.07182 13.0441 6.89058 12.8676L5.91557 11.918C5.4927 11.5061 5.4927 10.8384 5.91557 10.4266L10.6641 5.80193C11.2078 5.27243 11.2078 4.41392 10.6641 3.88442L10.6638 3.88417L10.6368 3.85779C10.0931 3.32835 9.21152 3.32829 8.6678 3.85779L4.75737 7.66624L4.75495 7.66859L4.70266 7.71948C4.52145 7.89599 4.2276 7.89599 4.04639 7.71948C3.86515 7.54297 3.86515 7.25681 4.04636 7.0803L8.01149 3.21861C8.55521 2.68911 8.55521 1.83057 8.01149 1.30106ZM7.35525 2.57943C7.53646 2.40292 7.53646 2.11676 7.35525 1.94024C7.17401 1.76374 6.88018 1.76374 6.69894 1.94024L2.81582 5.72203C1.90965 6.60456 1.90968 8.03539 2.81579 8.91795C3.72196 9.80042 5.19114 9.80042 6.09731 8.91795L9.98043 5.13613C10.1616 4.95962 10.1616 4.67346 9.98043 4.49695C9.79919 4.32043 9.50537 4.32046 9.32413 4.49695L5.44104 8.27876C4.89732 8.80824 4.01582 8.80824 3.47213 8.27879C2.92841 7.74923 2.92844 6.89072 3.47213 6.36122L7.35525 2.57943Z" />
</svg>


@@ -0,0 +1,13 @@
<svg viewBox="0 0 32 32" xmlns="http://www.w3.org/2000/svg">
<rect width="32" height="32" rx="4" fill="url(#paint0_linear_19272_45870)"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.2184 8.48172C18.3486 7.63453 16.9382 7.6345 16.0682 8.48174L7.66758 16.6632C7.37761 16.9456 6.90748 16.9456 6.6175 16.6632C6.32753 16.3808 6.32753 15.9229 6.6175 15.6405L15.0182 7.45905C16.468 6.04704 18.8187 6.047 20.2685 7.45904C21.1273 8.29538 21.4774 9.43954 21.3189 10.5269C22.4353 10.3724 23.6101 10.7134 24.4689 11.5498L24.4693 11.5501L24.5126 11.5924C25.9624 13.0044 25.9625 15.2938 24.5126 16.7058L16.915 24.1053C16.8184 24.1994 16.8183 24.3519 16.915 24.446L18.4751 25.9655C18.765 26.2479 18.765 26.7058 18.475 26.9882C18.1851 27.2706 17.7149 27.2706 17.425 26.9882L15.8649 25.4688C15.1883 24.8098 15.1883 23.7415 15.8649 23.0826L23.4625 15.6831C24.3324 14.8359 24.3325 13.4623 23.4625 12.6151L23.4621 12.6147L23.4189 12.5725C22.5489 11.7254 21.1384 11.7253 20.2685 12.5725L14.0118 18.666L14.0079 18.6698L13.9243 18.7512C13.6343 19.0336 13.1642 19.0336 12.8742 18.7512C12.5843 18.4688 12.5843 18.0109 12.8742 17.7285L19.2184 11.5498C20.0884 10.7026 20.0884 9.32893 19.2184 8.48172ZM18.1684 10.5271C18.4584 10.2447 18.4584 9.78683 18.1684 9.50442C17.8784 9.22201 17.4083 9.22201 17.1183 9.50442L10.9053 15.5553C9.45547 16.9673 9.45552 19.2566 10.9053 20.6687C12.3552 22.0807 14.7059 22.0807 16.1557 20.6687L22.3687 14.6178C22.6587 14.3354 22.6587 13.8776 22.3687 13.5951C22.0787 13.3127 21.6086 13.3128 21.3186 13.5951L15.1057 19.646C14.2357 20.4932 12.8253 20.4932 11.9554 19.6461C11.0855 18.7988 11.0855 17.4252 11.9554 16.578L18.1684 10.5271Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_19272_45870" x1="0" y1="32" x2="36.8" y2="8.8" gradientUnits="userSpaceOnUse">
<stop offset="0.201923" stop-color="#454D5D"/>
<stop offset="1" stop-color="#BAC2CE"/>
</linearGradient>
<clipPath id="clip0_19272_45870">
<rect width="22.4" height="22.4" fill="white" transform="translate(4.80005 4.80005)"/>
</clipPath>
</defs>
</svg>


@@ -8,7 +8,7 @@ const SearchInput = (props: InputProps) => {
<InputLeftElement>
<MyIcon name="common/searchLight" w="16px" color={'myGray.500'} />
</InputLeftElement>
<Input fontSize="sm" bg={'myGray.50'} {...props} />
<Input fontSize="sm" bg={'myGray.25'} {...props} />
</InputGroup>
);
};


@@ -74,6 +74,7 @@ const PopoverConfirm = ({
isLazy
lazyBehavior="keepMounted"
arrowSize={10}
strategy={'fixed'}
>
<PopoverTrigger>{Trigger}</PopoverTrigger>
<PopoverContent p={4}>


@@ -0,0 +1,61 @@
import { useState, useRef, useCallback, useEffect } from 'react';
interface UseResizableOptions {
initialWidth?: number;
minWidth?: number;
maxWidth?: number;
}
export const useResizable = (options: UseResizableOptions = {}) => {
const { initialWidth = 300, minWidth = 200, maxWidth = 400 } = options;
const [width, setWidth] = useState(initialWidth);
const [isDragging, setIsDragging] = useState(false);
const startX = useRef(0);
const startWidth = useRef(0);
const handleMouseDown = useCallback(
(e: React.MouseEvent) => {
setIsDragging(true);
startX.current = e.clientX;
startWidth.current = width;
e.preventDefault();
},
[width]
);
const handleMouseMove = useCallback(
(e: MouseEvent) => {
if (!isDragging) return;
const diff = e.clientX - startX.current;
const newWidth = Math.min(Math.max(startWidth.current + diff, minWidth), maxWidth);
setWidth(newWidth);
},
[isDragging, minWidth, maxWidth]
);
const handleMouseUp = useCallback(() => {
setIsDragging(false);
}, []);
useEffect(() => {
if (isDragging) {
document.addEventListener('mousemove', handleMouseMove);
document.addEventListener('mouseup', handleMouseUp);
}
return () => {
document.removeEventListener('mousemove', handleMouseMove);
document.removeEventListener('mouseup', handleMouseUp);
};
}, [isDragging, handleMouseMove, handleMouseUp]);
return {
width,
isDragging,
handleMouseDown
};
};
export default useResizable;
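The drag handling in the hook reduces to clamping the starting width plus the pointer delta into `[minWidth, maxWidth]`. A sketch of that core calculation (a hypothetical helper mirroring `handleMouseMove`; in the component you would attach `handleMouseDown` to a divider element and apply `width` to the resizable panel):

```typescript
// Core of useResizable's handleMouseMove: clamp startWidth + pointer delta.
// Defaults mirror the hook's minWidth = 200 / maxWidth = 400.
function clampWidth(
  startWidth: number,
  deltaX: number,
  minWidth = 200,
  maxWidth = 400
): number {
  return Math.min(Math.max(startWidth + deltaX, minWidth), maxWidth);
}
```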


@@ -20,6 +20,7 @@
"generation_time": "Generation time",
"image_parse": "Image tagging",
"input_token_length": "input tokens",
"mcp": "MCP call",
"member": "member",
"member_name": "Member name",
"module_name": "module name",


@@ -1,4 +1,9 @@
{
"MCP_tools_list_is_empty": "No MCP tools were parsed",
"MCP_tools_parse_failed": "Failed to parse MCP address",
"MCP_tools_url": "MCP Address",
"MCP_tools_url_is_empty": "The MCP address cannot be empty",
"MCP_tools_url_placeholder": "After filling in the MCP address, click Parse",
"Role_setting": "Permission",
"Run": "Execute",
"Team Tags Set": "Team tags",
@@ -98,6 +103,7 @@
"month.unit": "Day",
"move.hint": "After moving, the selected application/folder will inherit the permission settings of the new folder, and the original permission settings will become invalid.",
"move_app": "Move Application",
"no_mcp_tools_list": "No data yet, the MCP address needs to be parsed first",
"node_not_intro": "This node is not introduced",
"not_json_file": "Please select a JSON file",
"oaste_curl_string": "Enter CURL code",
@@ -158,6 +164,7 @@
"template_market_empty_data": "No suitable templates found",
"time_zone": "Time Zone",
"tool_input_param_tip": "This plugin requires configuration of related information to run properly.",
"tools_no_description": "This tool has no introduction yet",
"transition_to_workflow": "Convert to Workflow",
"transition_to_workflow_create_new_placeholder": "Create a new app instead of modifying the current app",
"transition_to_workflow_create_new_tip": "Once converted to a workflow, it cannot be reverted to simple mode. Please confirm!",


@@ -36,6 +36,10 @@
"Warning": "Warning",
"add_new": "Add New",
"add_new_param": "Add new param",
"app.templateMarket.templateTags.Image_generation": "Image generation",
"app.templateMarket.templateTags.Office_services": "Office Services",
"app.templateMarket.templateTags.Roleplay": "Role play",
"app.templateMarket.templateTags.Web_search": "Search online",
"app.templateMarket.templateTags.Writing": "Writing",
"back": "Back",
"can_copy_content_tip": "It is not possible to copy automatically using the browser, please manually copy the following content",
@@ -97,7 +101,7 @@
"code_error.team_error.org_member_not_exist": "Organization member does not exist",
"code_error.team_error.org_not_exist": "Organization does not exist",
"code_error.team_error.org_parent_not_exist": "Parent organization does not exist",
"code_error.team_error.over_size": "error.team.overSize",
"code_error.team_error.over_size": "Team members exceed limit",
"code_error.team_error.plugin_amount_not_enough": "Plugin Limit Reached",
"code_error.team_error.re_rank_not_enough": "Search reranking is not available in the free version",
"code_error.team_error.too_many_invitations": "You have reached the maximum number of active invitation links, please clean up some links first",
@@ -175,6 +179,7 @@
"common.Other": "Other",
"common.Output": "Output",
"common.Params": "Parameters",
"common.Parse": "Parse",
"common.Password inconsistency": "Passwords Do Not Match",
"common.Permission": "Permission",
"common.Permission_tip": "Individual permissions are greater than group permissions",
@@ -371,6 +376,7 @@
"core.app.share.Is response quote": "Return Quote",
"core.app.share.Not share link": "No Share Link Created",
"core.app.share.Role check": "Identity Verification",
"core.app.switch_to_template_market": "Go to template market",
"core.app.tip.Add a intro to app": "Give the app an introduction",
"core.app.tip.chatNodeSystemPromptTip": "Enter a prompt here",
"core.app.tip.systemPromptTip": "Fixed guide words for the model. By adjusting this content, you can guide the model's chat direction. This content will be fixed at the beginning of the context. You can use / to insert variables.\nIf a Dataset is associated, you can also guide the model when to call the Dataset search by appropriate description. For example:\nYou are an assistant for the movie 'Interstellar'. When users ask about content related to 'Interstellar', please search the Dataset and answer based on the search results.",
@@ -444,6 +450,7 @@
"core.chat.logs.api": "API Call",
"core.chat.logs.feishu": "Feishu",
"core.chat.logs.free_login": "No login link",
"core.chat.logs.mcp": "MCP call",
"core.chat.logs.official_account": "Official Account",
"core.chat.logs.online": "Online Use",
"core.chat.logs.share": "External Link Call",
@@ -896,7 +903,9 @@
"error.username_empty": "Account cannot be empty",
"error_collection_not_exist": "The collection does not exist",
"error_embedding_not_config": "Unconfigured index model",
"error_invalid_resource": "Invalid resource",
"error_llm_not_config": "Unconfigured file understanding model",
"error_un_permission": "No permission to operate",
"error_vlm_not_config": "Image comprehension model not configured",
"extraction_results": "Extraction Results",
"field_name": "Field Name",
@@ -926,6 +935,7 @@
"llm_model_not_config": "No language model was detected",
"max_quote_tokens": "Quote cap",
"max_quote_tokens_tips": "The maximum number of tokens in a single search, about 1 character in Chinese = 1.7 tokens, and about 1 character in English = 1 token",
"mcp_server": "MCP Services",
"min_similarity": "Lowest relevance",
"min_similarity_tip": "Relevance scales differ between index models; please choose a suitable value through search testing. \nWhen using Result Rearrange, the rearranged results are used for filtering.",
"model.billing": "Billing",
@@ -1208,6 +1218,7 @@
"system.Help Document": "Help Document",
"tag_list": "Tag List",
"team_tag": "Team Tag",
"template_market": "Template Market",
"textarea_variable_picker_tip": "Enter \"/\" to select a variable",
"unauth_token": "The certificate has expired, please log in again",
"unit.character": "Character",


@@ -0,0 +1,20 @@
{
"app_alias_name": "Tool name",
"app_description": "Application Description",
"app_name": "Application name",
"apps": "Exposed applications",
"create_mcp_server": "Create a new service",
"delete_mcp_server_confirm_tip": "Are you sure you want to delete this service?",
"has_chosen": "Selected",
"manage_app": "manage",
"mcp_apps": "Number of associated applications",
"mcp_endpoints": "Access address",
"mcp_json_config": "Access script",
"mcp_name": "MCP service name",
"mcp_server": "MCP Services",
"mcp_server_description": "Allows you to select some applications to provide external use with the MCP protocol. \nDue to the immaturity of the MCP protocol, this feature is still in the beta stage.",
"search_app": "Search for apps",
"select_app": "Application selection",
"start_use": "Get started",
"usage_way": "MCP service usage"
}
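The `mcp_endpoints` ("Access address") and `mcp_json_config` ("Access script") strings above suggest the service hands users a client-side MCP configuration snippet. The `mcpServers` shape below is the convention used by common MCP clients; the server name and URL are purely illustrative, not FastGPT's actual endpoint format:

```json
{
  "mcpServers": {
    "my-fastgpt-apps": {
      "url": "https://example.com/mcp/sse-endpoint-placeholder"
    }
  }
}
```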


@@ -186,6 +186,7 @@
"tool_params.params_name": "Name",
"tool_params.params_name_placeholder": "name/age/sql",
"tool_params.tool_params_result": "Parameter configuration results",
"tool_raw_response_description": "The original response of the tool",
"trigger_after_application_completion": "Will be triggered after the application is fully completed",
"unFoldAll": "Expand all",
"update_link_error": "Error updating link",


@@ -22,6 +22,7 @@
"generation_time": "生成时间",
"image_parse": "图片标注",
"input_token_length": "输入 tokens",
"mcp": "MCP 调用",
"member": "成员",
"member_name": "成员名",
"module_name": "模块名",


@@ -1,4 +1,13 @@
{
"MCP_tools_debug": "调试",
"MCP_tools_detail": "查看详情",
"MCP_tools_list": "工具列表",
"MCP_tools_list_is_empty": "未解析到 MCP 工具",
"MCP_tools_list_with_number": "工具列表: {{total}}",
"MCP_tools_parse_failed": "解析 MCP 地址失败",
"MCP_tools_url": "MCP 地址",
"MCP_tools_url_is_empty": "MCP 地址不能为空",
"MCP_tools_url_placeholder": "填入 MCP 地址后,点击解析",
"Role_setting": "权限设置",
"Run": "运行",
"Team Tags Set": "团队标签",
@@ -98,6 +107,7 @@
"month.unit": "号",
"move.hint": "移动后,所选应用/文件夹将继承新文件夹的权限设置,原先的权限设置失效。",
"move_app": "移动应用",
"no_mcp_tools_list": "暂无数据,需先解析 MCP 地址",
"node_not_intro": "这个节点没有介绍",
"not_json_file": "请选择JSON文件",
"oaste_curl_string": "输入 CURL 代码",
@@ -123,6 +133,7 @@
"response_format": "回复格式",
"saved_success": "保存成功!如需在外部使用该版本,请点击“保存并发布”",
"search_app": "搜索应用",
"search_tool": "搜索工具",
"setting_app": "应用配置",
"setting_plugin": "插件配置",
"show_top_p_tip": "用温度采样的替代方法称为Nucleus采样该模型考虑了具有TOP_P概率质量质量的令牌的结果。因此0.1表示仅考虑包含最高概率质量的令牌。默认为 1。",
@@ -157,7 +168,9 @@
"template_market_description": "在模板市场探索更多玩法,配置教程与使用引导,带你理解并上手各种应用",
"template_market_empty_data": "找不到合适的模板",
"time_zone": "时区",
"tool_detail": "工具详情",
"tool_input_param_tip": "该插件正常运行需要配置相关信息",
"tools_no_description": "这个工具没有介绍~",
"transition_to_workflow": "转成工作流",
"transition_to_workflow_create_new_placeholder": "创建一个新的应用,而不是修改当前应用",
"transition_to_workflow_create_new_tip": "转化成工作流后,将无法转化回简易模式,请确认!",
@@ -166,6 +179,7 @@
"tts_close": "关闭",
"type.All": "全部",
"type.Create http plugin tip": "通过 OpenAPI Schema 批量创建插件,兼容 GPTs 格式",
"type.Create mcp tools tip": "通过输入 MCP 地址,自动解析并批量创建可调用的 MCP 工具",
"type.Create one plugin tip": "可以自定义输入和输出的工作流,通常用于封装重复使用的工作流",
"type.Create plugin bot": "创建插件",
"type.Create simple bot": "创建简易应用",
@@ -175,6 +189,8 @@
"type.Http plugin": "HTTP 插件",
"type.Import from json": "导入 JSON 配置",
"type.Import from json tip": "通过 JSON 配置文件,直接创建应用",
"type.MCP tools": "MCP 工具集",
"type.MCP_tools_url": "MCP 地址",
"type.Plugin": "插件",
"type.Simple bot": "简易应用",
"type.Workflow bot": "工作流",
