Compare commits

16 Commits

Author SHA1 Message Date
archer
8af3d85b8a add gemini model 2025-01-25 14:25:10 +08:00
archer
fcf14af64d comment 2025-01-25 14:21:21 +08:00
heheer
991fbe254c fix interactive edge (#3659)
* fix interactive edge

* fix
2025-01-25 14:12:12 +08:00
Archer
ab0fc517dc fix: err tip (#3666)
* fix: err tip

* perf: training queue

* doc
2025-01-25 14:10:56 +08:00
Archer
92105e9a0b reload buffer (#3665)
* reload buffer

* reload buffer

* tts selector
2025-01-25 13:12:21 +08:00
Archer
d2948d7e57 feat: markdown extension (#3663)
* feat: markdown extension

* media CORS

* rerank test

* default price

* perf: default model

* fix: cannot custom provider

* fix: default model select

* update bg

* perf: default model selector

* fix: usage export

* i18n

* fix: rerank

* update init extension

* perf: ip limit check

* doubao model order

* web default model

* perf: tts selector

* perf: tts error

* qrcode package
2025-01-24 23:42:04 +08:00
heheer
02fcb6a61e export usage csv i18n (#3660)
* export usage csv i18n

* fix build
2025-01-24 19:09:08 +08:00
a.e.
4f5a12f33b fix: falsely triggered org selection (#3661) 2025-01-24 19:07:36 +08:00
Archer
38efa3e050 feat: default model (#3662)
* move model config

* feat: default model
2025-01-24 18:44:43 +08:00
a.e.
5ce889942a fix: POST 500 error on dingtalk bot (#3655) 2025-01-24 14:10:40 +08:00
Archer
60c72d05d1 model perf (#3657)
* fix: model

* dataset quote

* perf: model config

* model tag

* doubao model config

* perf: config model

* feat: model test
2025-01-24 14:10:14 +08:00
Archer
99ce976b06 4.8.20 test (#3656)
* provider

* perf: model config
2025-01-23 18:32:45 +08:00
heheer
2c03abc6e1 add default model config (#3653) 2025-01-23 18:19:57 +08:00
Archer
34b510cba1 perf: usages list; perf: move components (#3654)
* perf: usages list

* team sub plan load

* perf: usage dashboard code

* perf: dashboard ui

* perf: move components
2025-01-23 17:29:39 +08:00
heheer
0c05add8b2 feat: usage filter & export & dashboard (#3538)
* feat: usage filter & export & dashboard

* adjust ui

* fix tmb scroll

* fix code & selecte all

* merge
2025-01-23 10:54:30 +08:00
Archer
e009be51e7 Aiproxy (#3649)
* model config

* feat: model config ui

* perf: rename variable

* feat: custom request url

* perf: model buffer

* perf: init model

* feat: json model config

* auto login

* fix: ts

* update packages

* package

* fix: dockerfile
2025-01-22 22:59:28 +08:00
78 changed files with 644 additions and 1410 deletions

View File

@@ -58,7 +58,7 @@ jobs:
# Step 4 - Builds the site using Hugo
- name: Build
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs && hugo -v --minify
# Step 5 - Push our generated site to Vercel
- name: Deploy to Vercel

View File

@@ -58,7 +58,7 @@ jobs:
# Step 4 - Builds the site using Hugo
- name: Build
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
run: cd docSite && hugo mod get -u github.com/colinwilson/lotusdocs && hugo -v --minify
# Step 5 - Push our generated site to Vercel
- name: Deploy to Vercel

View File

@@ -83,7 +83,6 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
- [x] Review conversation logs in one place and annotate the data
`6` Other
- [x] Visual model configuration.
- [x] Voice input and output (configurable voice input and voice replies)
- [x] Fuzzy input suggestions
- [x] Template marketplace

View File

@@ -3,7 +3,7 @@ FROM hugomods/hugo:0.117.0 AS builder
WORKDIR /app
ADD ./docSite hugo
RUN cd /app/hugo && hugo mod get -u github.com/colinwilson/lotusdocs@6d0568e && hugo -v --minify
RUN cd /app/hugo && hugo mod get -u github.com/colinwilson/lotusdocs && hugo -v --minify
FROM fholzer/nginx-brotli:latest

Binary file not shown. (Before: 154 KiB)

Binary file not shown. (Before: 197 KiB)

View File

@@ -7,13 +7,6 @@ toc: true
weight: 707
---
## Prerequisites
1. Basic networking knowledge: ports, firewalls, etc.
2. Basics of Docker and Docker Compose
3. LLM APIs and parameters
4. RAG fundamentals: embedding models, vector databases, vector retrieval
## Deployment architecture diagram
![](/imgs/sealos-fastgpt.webp)

View File

@@ -11,9 +11,7 @@ weight: 744
Starting from version 4.8.20, you can configure models directly in the FastGPT UI, and the system ships with a large number of built-in models, so you do not have to configure everything from scratch. The basic configuration flow is described below:
## Configuring models
### 1. Connect model providers through OneAPI
## 1. Connect model providers through OneAPI
You can follow the [OneAPI integration tutorial](/docs/development/modelconfig/one-api) to aggregate models and connect more model providers. You must first request API access from each provider and add it to OneAPI before those models can be used in FastGPT. An example flow:
@@ -28,46 +26,44 @@ weight: 744
Once the models are configured in OneAPI, you can open the FastGPT UI and enable them.
### 2. Log in as the root user
## 2. Log in as the root user
Only the root user can configure models.
### 3. Open the model configuration page
## 3. Open the model configuration page
After logging in as root, go to `Account - Model Providers - Model Configuration` to see all built-in and custom models, and which of them are enabled.
![alt text](/image-90.png)
### 4. Configuration overview
## 4. Configuration overview
{{% alert icon="🤖 " context="success" %}}
Note:
1. Currently only one speech-to-text model and one rerank model take effect, so you only need to configure one of each.
2. At least one language model used for knowledge-base file processing must be enabled, otherwise the knowledge base will report errors.
Note: currently only one speech-to-text model and one rerank model take effect, so you only need to configure one of each.
{{% /alert %}}
#### Core settings
### Core settings
- Model ID: the value of the `model` field in the request body; globally unique.
- Custom request URL/Key: lets you bypass `OneAPI` by setting a custom request URL and token. Normally unnecessary; use it only when OneAPI does not support a particular model.
- Model ID: the `model` value actually sent in requests; globally unique.
- Custom request URL/Token: lets you bypass `OneAPI` by setting a custom request URL and token. Normally unnecessary; use it only when OneAPI does not support a particular model.
#### Model types
### Model types
1. Language model - text chat; multimodal models also support image recognition.
2. Embedding model - indexes text chunks for relevance retrieval.
3. Rerank model - re-ranks retrieval results to improve ranking quality
4. Text-to-speech - converts text to speech
5. Speech-to-text - converts speech to text
3. Text-to-speech - converts text to speech
4. Speech-to-text - converts speech to text
5. Rerank model - re-ranks text to improve text quality
#### Enabling models
### Enabling models
The system ships with models from the current mainstream vendors. If you are unfamiliar with the configuration, just click `Enable`. Note that the `Model ID` must match the `model` of the corresponding channel in OneAPI.
The system ships with models from the current mainstream vendors. If you are unfamiliar with the configuration, just click `Enable`. Note that the Model ID must match the `model` of the corresponding channel in OneAPI.
| | |
| --- | --- |
| ![alt text](/imgs/image-91.png) | ![alt text](/imgs/image-92.png) |
#### Modifying a model configuration
### Modifying a model configuration
Click the gear icon to the right of a model to edit its configuration; the options differ by model type.
@@ -75,7 +71,7 @@ weight: 744
| --- | --- |
| ![alt text](/imgs/image-93.png) | ![alt text](/imgs/image-94.png) |
## Adding a custom model
### Adding a custom model
If the built-in models do not meet your needs, you can add a custom model. If a custom model's `Model ID` matches a built-in model's ID, it is treated as a modification of that system model.
@@ -83,7 +79,7 @@ weight: 744
| --- | --- |
| ![alt text](/imgs/image-96.png) | ![alt text](/imgs/image-97.png) |
#### Configuring via a config file
### Configuring via a config file
If configuring models through the UI feels cumbersome, you can configure them via a config file instead. A config file also makes it easy to copy one system's configuration to another quickly.
@@ -210,7 +206,7 @@ FastGPT provides a simple in-page test for each model type, which gives a preliminary check of the model
![alt text](/imgs/image-105.png)
## Special integration examples
## Model integration examples
### Integrating a ReRank model
@@ -231,60 +227,6 @@ FastGPT provides a simple in-page test for each model type, which gives a preliminary check of the model
[See the ReRank model deployment tutorial](/docs/development/custom-models/bge-rerank/)
### Integrating a speech-to-text model
OneAPI's speech-recognition endpoint cannot correctly identify other models (it always identifies them as whisper-1), so to integrate another model you can use a custom request URL. For example, to integrate SiliconFlow's `FunAudioLLM/SenseVoiceSmall` model, use a configuration like the following:
Click to edit the model:
![alt text](/imgs/image-106.png)
Fill in SiliconFlow's endpoint `https://api.siliconflow.cn/v1/audio/transcriptions` and your SiliconFlow API key.
![alt text](/imgs/image-107.png)
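In code terms, this UI setup corresponds roughly to the sketch below. The `requestUrl` and `requestAuth` field names follow the `aiTranscriptions` code shown later in this compare; the surrounding shape is an assumption for illustration only.

```ts
// Hypothetical STT model entry; requestUrl/requestAuth follow the
// aiTranscriptions code in this diff, the rest is illustrative.
const senseVoiceConfig = {
  type: 'stt',
  model: 'FunAudioLLM/SenseVoiceSmall',
  name: 'SenseVoiceSmall',
  requestUrl: 'https://api.siliconflow.cn/v1/audio/transcriptions', // bypasses OneAPI
  requestAuth: 'sk-xxxx' // SiliconFlow API key, sent as a Bearer token
};
```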
## Other configuration options
### Custom request URL
If set, this allows you to bypass OneAPI and send requests directly to the custom address. You must fill in the full request URL, for example:
- LLM: {{host}}/v1/chat/completions
- Embedding: {{host}}/v1/embeddings
- STT: {{host}}/v1/audio/transcriptions
- TTS: {{host}}/v1/audio/speech
- Rerank: {{host}}/v1/rerank
The custom request key is carried as the request header `Authorization: Bearer xxx` when calling the custom address.
All endpoints follow the model format provided by OpenAI; see the [OpenAI API docs](https://platform.openai.com/docs/api-reference/introduction) for configuration.
Since OpenAI does not offer a ReRank model, ReRank follows Cohere's format. [See a sample request](/docs/development/faq/#如何检查模型问题)
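As an illustration, a minimal sketch of a direct request to such a custom address; the host, model name, and key below are placeholders, not values from this repository.

```ts
// Sketch: calling a custom LLM endpoint directly, bypassing OneAPI.
// Host, model, and key are placeholders.
async function callCustomEndpoint() {
  const host = 'https://my-model-server.example.com';
  const res = await fetch(`${host}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer xxx' // the custom request key
    },
    body: JSON.stringify({
      model: 'my-custom-model', // must match the configured Model ID
      messages: [{ role: 'user', content: 'hello' }]
    })
  });
  return res.json();
}
```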
### Model pricing
Commercial-edition users can configure model prices for account billing. Two billing modes are supported: billing by total tokens, or billing input and output tokens separately.
For the `separate input/output tokens` mode, fill in both the `model input price` and `model output price` values.
For the `total tokens` mode, fill in the single `model combined price` value.
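A hedged sketch of the two modes as config-file fields; the field names `charsPointsPrice`, `inputPrice`, and `outputPrice` are assumptions not confirmed by this compare, so verify them against the field reference linked above.

```ts
// Hedged sketch only: field names are assumptions.
const totalTokensBilling = {
  model: 'my-model',
  charsPointsPrice: 1 // combined price, points per 1k tokens
};

const splitTokensBilling = {
  model: 'my-model',
  inputPrice: 0.5, // points per 1k input tokens
  outputPrice: 1.5 // points per 1k output tokens
};
```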
## How to submit a built-in model
Models are updated very frequently and the official list may lag behind. If you cannot find the built-in model you expect, you can [open an issue](https://github.com/labring/FastGPT/issues) with the model's name and its official site, or directly [submit a PR](https://github.com/labring/FastGPT/pulls) with the model configuration.
### Adding a model provider
To add a model provider, modify the following code:
1. FastGPT/packages/web/components/common/Icon/icons/model - add the provider's SVG avatar to this directory.
2. From the FastGPT root directory, run `pnpm initIcon` to load the image into the config file.
3. FastGPT/packages/global/core/ai/provider.ts - append the provider's configuration in this file.
### Adding a model
In the `FastGPT/packages/service/core/ai/config/provider` directory, find the config file for the corresponding provider and append the model configuration, as sketched below. Check the whole file yourself: the `model` field must be unique across all models. For details on the config fields, see the [model configuration field reference](/docs/development/modelconfig/intro/#通过配置文件配置).
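Judging by how `loadSystemModels` imports these files elsewhere in this compare (each provider file default-exports a `provider` id and a `list` of models), appending a model plausibly looks like this sketch; the model entry itself is made up for illustration.

```ts
// Sketch of a provider config file under
// packages/service/core/ai/config/provider/. The { provider, list }
// shape matches what loadSystemModels imports; the entry is illustrative.
export default {
  provider: 'OpenAI',
  list: [
    // ...existing models...
    {
      type: 'llm',
      model: 'my-new-model', // must be unique across all models
      name: 'My New Model',
      maxContext: 128000,
      maxResponse: 4000
    }
  ]
};
```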
## Legacy model configuration

View File

@@ -1,5 +1,5 @@
---
title: 'V4.8.19 (includes upgrade script)'
title: 'V4.8.19 (in progress)'
description: 'FastGPT V4.8.19 release notes'
icon: 'upgrade'
draft: false

View File

@@ -1,5 +1,5 @@
---
title: 'V4.8.20 (includes upgrade script)'
title: 'V4.8.20 (in progress)'
description: 'FastGPT V4.8.20 release notes'
icon: 'upgrade'
draft: false
@@ -9,19 +9,14 @@ weight: 804
## Upgrade guide
### 1. Back up your database
### 2. Update environment variables
### 1. Update environment variables
Users on very early versions who configured `ONEAPI_URL` need to change it to `OPENAI_BASE_URL`
### 3. Update images:
### 1. Update images:
- Update the fastgpt image tag: v4.8.20-fix
- Update the fastgpt-pro commercial image tag: v4.8.20-fix
- The sandbox image does not need updating
### 4. Run the upgrade script
### 2. Run the upgrade script
From any terminal, send one HTTP request, replacing {{rootkey}} with the `rootkey` from your environment variables and {{host}} with your **FastGPT domain**.
@@ -31,20 +26,13 @@ curl --location --request POST 'https://{{host}}/api/admin/initv4820' \
--header 'Content-Type: application/json'
```
The script automatically loads the models from the original config file into the new model configuration
Automatically loads the models from the original config file into the new model configuration
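For convenience, the same request as a TypeScript sketch. The curl excerpt above only shows the Content-Type header, so passing the key as a `rootkey` header is an assumption based on the instruction to supply {{rootkey}}.

```ts
// Hedged sketch of the upgrade request. The `rootkey` header name is an
// assumption; replace the host with your FastGPT domain.
async function runInit4820() {
  const res = await fetch('https://your-fastgpt-domain/api/admin/initv4820', {
    method: 'POST',
    headers: {
      rootkey: process.env.rootkey ?? '',
      'Content-Type': 'application/json'
    }
  });
  console.log(res.status, await res.text());
}
```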
## Full changelog
1. New - visual model parameter configuration, with more than 100 preset model configs and one-click testing for all model types. (Full in-page channel configuration is expected next release.)
2. New - the DeepSeek reasoner model can output its reasoning process
3. New - usage record export and a dashboard
4. New - markdown syntax extension supporting audio and video (`audio` and `video` code blocks)
5. New - adjusted the max_tokens calculation. The configured max_tokens is honored first; if it would exceed the maximum context, history is trimmed instead. For example, requesting a max_tokens of 8000 reduces the usable context length by 8000.
6. Improved - query extension now filters context to avoid exceeding the context window.
7. Improved - extracted page components to reduce page component routes.
8. Improved - full-text search now ignores case.
9. Improved - Q&A generation and enhanced indexing now stream output, avoiding timeouts with some models.
10. Improved - empty assistant content is automatically filled with null, and consecutive text assistant messages are merged, avoiding errors from some models.
11. Improved - image host handling: FE_DOMAIN is no longer appended at upload time but before sending a conversation, so original images keep working after a domain change.
12. Fixed - the member list failed to load more on scroll to bottom in some scenarios.
13. Fixed - recursive workflow execution failed to run under certain conditions.
2. New - usage record export and a dashboard
3. New - markdown syntax extension supporting audio and video (`audio` and `video` code blocks)
4. Improved - extracted page components to reduce page component routes
5. Improved - full-text search now ignores case
6. Improved - Q&A generation and enhanced indexing now stream output, avoiding timeouts with some models.

View File

@@ -114,15 +114,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.17 # 阿里云
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.17 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -72,15 +72,15 @@ services:
# fastgpt
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.17 # 阿里云
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.17 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -53,15 +53,15 @@ services:
wait $$!
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt-sandbox:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.8.17 # 阿里云
networks:
- fastgpt
restart: always
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.8.20-fix # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.20-fix # 阿里云
image: ghcr.io/labring/fastgpt:v4.8.17 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.8.17 # 阿里云
ports:
- 3000:3000
networks:

View File

@@ -29,11 +29,10 @@ export type LLMModelItemType = PriceType &
maxContext: number;
maxResponse: number;
quoteMaxToken: number;
maxTemperature?: number;
maxTemperature: number;
censor?: boolean;
vision?: boolean;
reasoning?: boolean;
// diff function model
datasetProcess?: boolean; // dataset

View File

@@ -61,9 +61,6 @@ export const getModelFromList = (
model: string
) => {
const modelData = modelList.find((item) => item.model === model) ?? modelList[0];
if (!modelData) {
throw new Error('No Key model is configured');
}
const provider = getModelProvider(modelData.provider);
return {
...modelData,

View File

@@ -11,8 +11,8 @@ export type ModelProviderIdType =
| 'AliCloud'
| 'Qwen'
| 'Doubao'
| 'DeepSeek'
| 'ChatGLM'
| 'DeepSeek'
| 'Ernie'
| 'Moonshot'
| 'MiniMax'

View File

@@ -80,7 +80,6 @@ export type AppSimpleEditFormType = {
maxToken?: number;
isResponseAnswerText: boolean;
maxHistories: number;
[NodeInputKeyEnum.aiChatReasoning]?: boolean;
};
dataset: {
datasets: SelectedDatasetType;
@@ -118,7 +117,6 @@ export type SettingAIDataType = {
isResponseAnswerText?: boolean;
maxHistories?: number;
[NodeInputKeyEnum.aiChatVision]?: boolean; // Is open vision mode
[NodeInputKeyEnum.aiChatReasoning]?: boolean; // Is open reasoning mode
};
// variable

View File

@@ -16,8 +16,7 @@ export const getDefaultAppForm = (): AppSimpleEditFormType => {
temperature: 0,
isResponseAnswerText: true,
maxHistories: 6,
maxToken: 4000,
aiChatReasoning: true
maxToken: 4000
},
dataset: {
datasets: [],

View File

@@ -25,8 +25,7 @@ export enum ChatItemValueTypeEnum {
text = 'text',
file = 'file',
tool = 'tool',
interactive = 'interactive',
reasoning = 'reasoning'
interactive = 'interactive'
}
export enum ChatSourceEnum {

View File

@@ -70,23 +70,14 @@ export type SystemChatItemType = {
obj: ChatRoleEnum.System;
value: SystemChatItemValueItemType[];
};
export type AIChatItemValueItemType = {
type:
| ChatItemValueTypeEnum.text
| ChatItemValueTypeEnum.reasoning
| ChatItemValueTypeEnum.tool
| ChatItemValueTypeEnum.interactive;
type: ChatItemValueTypeEnum.text | ChatItemValueTypeEnum.tool | ChatItemValueTypeEnum.interactive;
text?: {
content: string;
};
reasoning?: {
content: string;
};
tools?: ToolModuleResponseItemType[];
interactive?: WorkflowInteractiveResponseType;
};
export type AIChatItemType = {
obj: ChatRoleEnum.AI;
value: AIChatItemValueItemType[];

View File

@@ -141,7 +141,6 @@ export enum NodeInputKeyEnum {
aiChatDatasetQuote = 'quoteQA',
aiChatVision = 'aiChatVision',
stringQuoteText = 'stringQuoteText',
aiChatReasoning = 'aiChatReasoning',
// dataset
datasetSelectList = 'datasets',
@@ -221,8 +220,7 @@ export enum NodeOutputKeyEnum {
// common
userChatInput = 'userChatInput',
history = 'history',
answerText = 'answerText', // node answer. the value will be show and save to history
reasoningText = 'reasoningText', // node reasoning. the value will be show but not save to history
answerText = 'answerText', // module answer. the value will be show and save to history
success = 'success',
failed = 'failed',
error = 'error',

View File

@@ -220,7 +220,6 @@ export type AIChatNodeProps = {
[NodeInputKeyEnum.aiChatMaxToken]?: number;
[NodeInputKeyEnum.aiChatIsResponseText]: boolean;
[NodeInputKeyEnum.aiChatVision]?: boolean;
[NodeInputKeyEnum.aiChatReasoning]?: boolean;
[NodeInputKeyEnum.aiChatQuoteRole]?: AiChatQuoteRoleType;
[NodeInputKeyEnum.aiChatQuoteTemplate]?: string;

View File

@@ -364,14 +364,12 @@ export function replaceEditorVariable({
export const textAdaptGptResponse = ({
text,
reasoning_content,
model = '',
finish_reason = null,
extraData = {}
}: {
model?: string;
text?: string | null;
reasoning_content?: string | null;
text: string | null;
finish_reason?: null | 'stop';
extraData?: Object;
}) => {
@@ -383,11 +381,10 @@ export const textAdaptGptResponse = ({
model,
choices: [
{
delta: {
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: text,
...(reasoning_content && { reasoning_content })
},
delta:
text === null
? {}
: { role: ChatCompletionRequestMessageRoleEnum.Assistant, content: text },
index: 0,
finish_reason
}

View File

@@ -63,14 +63,14 @@ export const AiChatModule: FlowNodeTemplateType = {
key: NodeInputKeyEnum.aiChatTemperature,
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
label: '',
value: undefined,
value: 0,
valueType: WorkflowIOValueTypeEnum.number
},
{
key: NodeInputKeyEnum.aiChatMaxToken,
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
label: '',
value: undefined,
value: 2000,
valueType: WorkflowIOValueTypeEnum.number
},
@@ -91,13 +91,6 @@ export const AiChatModule: FlowNodeTemplateType = {
valueType: WorkflowIOValueTypeEnum.boolean,
value: true
},
{
key: NodeInputKeyEnum.aiChatReasoning,
renderTypeList: [FlowNodeInputTypeEnum.hidden],
label: '',
valueType: WorkflowIOValueTypeEnum.boolean,
value: true
},
// settings modal ---
{
...Input_Template_System_Prompt,

View File

@@ -43,14 +43,14 @@ export const ToolModule: FlowNodeTemplateType = {
key: NodeInputKeyEnum.aiChatTemperature,
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
label: '',
value: undefined,
value: 0,
valueType: WorkflowIOValueTypeEnum.number
},
{
key: NodeInputKeyEnum.aiChatMaxToken,
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
label: '',
value: undefined,
value: 2000,
valueType: WorkflowIOValueTypeEnum.number
},
{

View File

@@ -10,7 +10,6 @@
"echarts": "5.4.1",
"expr-eval": "^2.0.2",
"lodash": "^4.17.21",
"mssql": "^11.0.1",
"mysql2": "^3.11.3",
"json5": "^2.2.3",
"pg": "^8.10.0",

View File

@@ -1,6 +1,5 @@
import { Client as PgClient } from 'pg'; // PostgreSQL client
import mysql from 'mysql2/promise'; // MySQL client
import mssql from 'mssql'; // SQL Server client
type Props = {
databaseType: string;
@@ -53,20 +52,6 @@ const main = async ({
const [rows] = await connection.execute(sql);
result = rows;
await connection.end();
} else if (databaseType === 'Microsoft SQL Server') {
const pool = await mssql.connect({
server: host,
port: parseInt(port, 10),
database: databaseName,
user,
password,
options: {
trustServerCertificate: true
}
});
result = await pool.query(sql);
await pool.close();
}
return {
result

View File

@@ -42,10 +42,6 @@
{
"label": "PostgreSQL",
"value": "PostgreSQL"
},
{
"label": "Microsoft SQL Server",
"value": "Microsoft SQL Server"
}
],
"required": true

View File

@@ -40,7 +40,7 @@ export async function uploadMongoImg({
expiredTime: forever ? undefined : addHours(new Date(), 1)
});
return `${process.env.NEXT_PUBLIC_BASE_URL || ''}${imageBaseUrl}${String(_id)}.${extension}`;
return `${process.env.FE_DOMAIN || ''}${process.env.NEXT_PUBLIC_BASE_URL || ''}${imageBaseUrl}${String(_id)}.${extension}`;
}
const getIdFromPath = (path?: string) => {

View File

@@ -63,13 +63,6 @@ export const getMongoModel = <T>(name: string, schema: mongoose.Schema) => {
const model = connectionMongo.model<T>(name, schema);
// Sync index
syncMongoIndex(model);
return model;
};
const syncMongoIndex = async (model: Model<any>) => {
if (process.env.SYNC_INDEX !== '0' && process.env.NODE_ENV !== 'test') {
try {
model.syncIndexes({ background: true });
@@ -77,6 +70,8 @@ const syncMongoIndex = async (model: Model<any>) => {
addLog.error('Create index error', error);
}
}
return model;
};
export const ReadPreference = connectionMongo.mongo.ReadPreference;

View File

@@ -24,7 +24,7 @@ export const aiTranscriptions = async ({
? { url: modelData.requestUrl }
: {
baseURL: aiAxiosConfig.baseUrl,
url: '/audio/transcriptions'
url: modelData.requestUrl || '/audio/transcriptions'
}),
headers: {
Authorization: modelData.requestAuth

View File

@@ -27,9 +27,8 @@
"maxContext": 64000,
"maxResponse": 4096,
"quoteMaxToken": 60000,
"maxTemperature": null,
"maxTemperature": 1.5,
"vision": false,
"reasoning": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@@ -40,9 +39,11 @@
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {},
"defaultConfig": {
"temperature": null
},
"fieldMap": {},
"type": "llm"
}
]
}
}

View File

@@ -44,42 +44,16 @@
"fieldMap": {},
"type": "llm"
},
{
"model": "o3-mini",
"name": "o3-mini",
"maxContext": 200000,
"maxResponse": 100000,
"quoteMaxToken": 120000,
"maxTemperature": null,
"vision": false,
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {
"stream": false
},
"fieldMap": {
"max_tokens": "max_completion_tokens"
},
"type": "llm"
},
{
"model": "o1-mini",
"name": "o1-mini",
"maxContext": 128000,
"maxResponse": 4000,
"quoteMaxToken": 120000,
"maxTemperature": null,
"maxTemperature": 1.2,
"vision": false,
"toolChoice": false,
"functionCall": false,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
@@ -89,36 +63,8 @@
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {
"stream": false
},
"fieldMap": {
"max_tokens": "max_completion_tokens"
},
"type": "llm"
},
{
"model": "o1",
"name": "o1",
"maxContext": 195000,
"maxResponse": 8000,
"quoteMaxToken": 120000,
"maxTemperature": null,
"vision": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {
"stream": false
},
"fieldMap": {
"max_tokens": "max_completion_tokens"
"temperature": 1,
"max_tokens": null
},
"type": "llm"
},
@@ -128,10 +74,10 @@
"maxContext": 128000,
"maxResponse": 4000,
"quoteMaxToken": 120000,
"maxTemperature": null,
"maxTemperature": 1.2,
"vision": false,
"toolChoice": false,
"functionCall": false,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
@@ -141,10 +87,34 @@
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {
"temperature": 1,
"max_tokens": null,
"stream": false
},
"fieldMap": {
"max_tokens": "max_completion_tokens"
"type": "llm"
},
{
"model": "o1",
"name": "o1",
"maxContext": 195000,
"maxResponse": 8000,
"quoteMaxToken": 120000,
"maxTemperature": 1.2,
"vision": false,
"toolChoice": false,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"customCQPrompt": "",
"usedInExtractFields": true,
"usedInQueryExtension": true,
"customExtractPrompt": "",
"usedInToolCall": true,
"defaultConfig": {
"temperature": 1,
"max_tokens": null,
"stream": false
},
"type": "llm"
},

View File

@@ -11,11 +11,7 @@ import {
ReRankModelItemType
} from '@fastgpt/global/core/ai/model.d';
import { debounce } from 'lodash';
import {
getModelProvider,
ModelProviderIdType,
ModelProviderType
} from '@fastgpt/global/core/ai/provider';
import { ModelProviderType } from '@fastgpt/global/core/ai/provider';
import { findModelFromAlldata } from '../model';
import {
reloadFastGPTConfigBuffer,
@@ -95,7 +91,7 @@ export const loadSystemModels = async (init = false) => {
await Promise.all(
providerList.map(async (name) => {
const fileContent = (await import(`./provider/${name}`))?.default as {
provider: ModelProviderIdType;
provider: ModelProviderType;
list: SystemModelItemType[];
};
@@ -105,7 +101,7 @@ export const loadSystemModels = async (init = false) => {
const modelData: any = {
...fileModel,
...dbModel?.metadata,
provider: getModelProvider(dbModel?.metadata?.provider || fileContent.provider).id,
provider: dbModel?.metadata?.provider || fileContent.provider,
type: dbModel?.metadata?.type || fileModel.type,
isCustom: false
};

View File

@@ -32,14 +32,12 @@ export async function getVectorsByText({ model, input, type }: GetVectorProps) {
model: model.model,
input: [input]
},
model.requestUrl
model.requestUrl && model.requestAuth
? {
path: model.requestUrl,
headers: model.requestAuth
? {
Authorization: `Bearer ${model.requestAuth}`
}
: undefined
headers: {
Authorization: `Bearer ${model.requestAuth}`
}
}
: {}
)

View File

@@ -2,12 +2,10 @@ import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { createChatCompletion } from '../config';
import { ChatItemType } from '@fastgpt/global/core/chat/type';
import { countGptMessagesTokens, countPromptTokens } from '../../../common/string/tiktoken/index';
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
import { getLLMModel } from '../model';
import { llmCompletionsBodyFormat } from '../utils';
import { addLog } from '../../../common/system/log';
import { filterGPTMessageByMaxContext } from '../../chat/utils';
import json5 from 'json5';
/*
query extension - question expansion
@@ -15,73 +13,72 @@ import json5 from 'json5';
*/
const title = global.feConfigs?.systemTitle || 'FastAI';
const defaultPrompt = `## 你的任务
你作为一个向量检索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高向量检索的语义丰富度,提高向量检索的精度。
const defaultPrompt = `作为一个向量检索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高向量检索的语义丰富度,提高向量检索的精度。
生成的问题要求指向对象清晰明确,并与“原问题语言相同”。
## 参考示例
参考 <Example></Example> 标中的示例来完成任务。
<Example>
历史记录:
"""
null
"""
原问题: 介绍下剧情。
检索词: ["介绍下故事的背景。","故事的主题是什么?","介绍下故事的主要人物。"]
----------------
历史记录:
"""
user: 对话背景。
assistant: 当前对话是关于 Nginx 的介绍和使用等。
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和使用等。
"""
原问题: 怎么下载
检索词: ["Nginx 如何下载?","下载 Nginx 需要什么条件?","有哪些渠道可以下载 Nginx"]
----------------
历史记录:
"""
user: 对话背景。
assistant: 当前对话是关于 Nginx 的介绍和使用等。
user: 报错 "no connection"
assistant: 报错"no connection"可能是因为……
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和使用等。
Q: 报错 "no connection"
A: 报错"no connection"可能是因为……
"""
原问题: 怎么解决
检索词: ["Nginx报错"no connection"如何解决?","造成'no connection'报错的原因。","Nginx提示'no connection',要怎么办?"]
----------------
历史记录:
"""
user: How long is the maternity leave?
assistant: The number of days of maternity leave depends on the city in which the employee is located. Please provide your city so that I can answer your questions.
Q: 护产假多少天?
A: 护产假的天数根据员工所在的城市而定。请提供您所在的城市,以便我回答您的问题。
"""
原问题: ShenYang
检索词: ["How many days is maternity leave in Shenyang?","Shenyang's maternity leave policy.","The standard of maternity leave in Shenyang."]
原问题: 沈阳
检索词: ["沈阳的护产假多少天?","沈阳的护产假政策。","沈阳的护产假标准。"]
----------------
历史记录:
"""
user: 作者是谁?
assistant: ${title} 的作者是 labring。
Q: 作者是谁?
A: ${title} 的作者是 labring。
"""
原问题: Tell me about him
检索词: ["Introduce labring, the author of ${title}." ," Background information on author labring." "," Why does labring do ${title}?"]
----------------
历史记录:
"""
user: 对话背景。
assistant: 关于 ${title} 的介绍和使用等问题。
Q: 对话背景。
A: 关于 ${title} 的介绍和使用等问题。
"""
原问题: 你好。
检索词: ["你好"]
----------------
历史记录:
"""
user: ${title} 如何收费?
assistant: ${title} 收费可以参考……
Q: ${title} 如何收费?
A: ${title} 收费可以参考……
"""
原问题: 你知道 laf 么?
检索词: ["laf 的官网地址是多少?","laf 的使用教程。","laf 有什么特点和优势。"]
----------------
历史记录:
"""
user: ${title} 的优势
assistant: 1. 开源
Q: ${title} 的优势
A: 1. 开源
2. 简便
3. 扩展性强
"""
@@ -90,20 +87,18 @@ assistant: 1. 开源
----------------
历史记录:
"""
user: 什么是 ${title}
assistant: ${title} 是一个 RAG 平台。
user: 什么是 Laf
assistant: Laf 是一个云函数开发平台。
Q: 什么是 ${title}
A: ${title} 是一个 RAG 平台。
Q: 什么是 Laf
A: Laf 是一个云函数开发平台。
"""
原问题: 它们有什么关系?
检索词: ["${title}和Laf有什么关系","介绍下${title}","介绍下Laf"]
</Example>
## 输出要求
-----
1. 输出格式为 JSON 数组,数组中每个元素为字符串。无需对输出进行任何解释。
2. 输出语言与原问题相同。原问题为中文则输出中文;原问题为英文则输出英文。
## 开始任务
下面是正式的任务:
历史记录:
"""
@@ -130,39 +125,26 @@ export const queryExtension = async ({
outputTokens: number;
}> => {
const systemFewShot = chatBg
? `user: 对话背景。
assistant: ${chatBg}
? `Q: 对话背景。
A: ${chatBg}
`
: '';
const modelData = getLLMModel(model);
const filterHistories = await filterGPTMessageByMaxContext({
messages: chats2GPTMessages({ messages: histories, reserveId: false }),
maxContext: modelData.maxContext - 1000
});
const historyFewShot = filterHistories
const historyFewShot = histories
.map((item) => {
const role = item.role;
const content = item.content;
if ((role === 'user' || role === 'assistant') && content) {
if (typeof content === 'string') {
return `${role}: ${content}`;
} else {
return `${role}: ${content.map((item) => (item.type === 'text' ? item.text : '')).join('\n')}`;
}
}
const role = item.obj === 'Human' ? 'Q' : 'A';
return `${role}: ${chatValue2RuntimePrompt(item.value).text}`;
})
.filter(Boolean)
.join('\n');
const concatFewShot = `${systemFewShot}${historyFewShot}`.trim();
const modelData = getLLMModel(model);
const messages = [
{
role: 'user',
content: replaceVariable(defaultPrompt, {
query: `${query}`,
histories: concatFewShot || 'null'
histories: concatFewShot
})
}
] as any;
@@ -172,7 +154,7 @@ assistant: ${chatBg}
{
stream: false,
model: modelData.model,
temperature: 0.1,
temperature: 0.01,
messages
},
modelData
@@ -190,41 +172,22 @@ assistant: ${chatBg}
};
}
const start = answer.indexOf('[');
const end = answer.lastIndexOf(']');
if (start === -1 || end === -1) {
addLog.warn('Query extension failed, not a valid JSON', {
answer
});
return {
rawQuery: query,
extensionQueries: [],
model,
inputTokens: 0,
outputTokens: 0
};
}
// Intercept the content of [] and retain []
const jsonStr = answer
.substring(start, end + 1)
.replace(/(\\n|\\)/g, '')
.replace(/ /g, '');
answer = answer.match(/\[.*?\]/)?.[0] || '';
answer = answer.replace(/\\"/g, '"');
try {
const queries = json5.parse(jsonStr) as string[];
const queries = JSON.parse(answer) as string[];
return {
rawQuery: query,
extensionQueries: (Array.isArray(queries) ? queries : []).slice(0, 5),
extensionQueries: Array.isArray(queries) ? queries : [],
model,
inputTokens: await countGptMessagesTokens(messages),
outputTokens: await countPromptTokens(answer)
};
} catch (error) {
addLog.warn('Query extension failed, not a valid JSON', {
answer
});
addLog.error(`Query extension error`, error);
return {
rawQuery: query,
extensionQueries: [],

View File

@@ -26,7 +26,7 @@ export function reRankRecall({
return Promise.reject('no rerank model');
}
const { baseUrl, authorization } = getAxiosConfig();
const { baseUrl, authorization } = getAxiosConfig({});
let start = Date.now();
return POST<PostReRankResponse>(
@@ -38,7 +38,7 @@ export function reRankRecall({
},
{
headers: {
Authorization: model.requestAuth ? `Bearer ${model.requestAuth}` : authorization
Authorization: model.requestAuth ? model.requestAuth : authorization
},
timeout: 30000
}

View File

@@ -2,23 +2,33 @@ import { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import {
ChatCompletionCreateParamsNonStreaming,
ChatCompletionCreateParamsStreaming,
ChatCompletionMessageParam,
StreamChatType
} from '@fastgpt/global/core/ai/type';
import { countGptMessagesTokens } from '../../common/string/tiktoken';
import { getLLMModel } from './model';
/*
Count response max token
*/
export const computedMaxToken = ({
export const computedMaxToken = async ({
maxToken,
model
model,
filterMessages = []
}: {
maxToken?: number;
model: LLMModelItemType;
filterMessages: ChatCompletionMessageParam[];
}) => {
if (maxToken === undefined) return;
maxToken = Math.min(maxToken, model.maxResponse);
const tokensLimit = model.maxContext;
/* count response max token */
const promptsToken = await countGptMessagesTokens(filterMessages);
maxToken = promptsToken + maxToken > tokensLimit ? tokensLimit - promptsToken : maxToken;
if (maxToken <= 0) {
maxToken = 200;
}
return maxToken;
};
@@ -30,7 +40,6 @@ export const computedTemperature = ({
model: LLMModelItemType;
temperature: number;
}) => {
if (typeof model.maxTemperature !== 'number') return undefined;
temperature = +(model.maxTemperature * (temperature / 10)).toFixed(2);
temperature = Math.max(temperature, 0.01);

View File

@@ -1,9 +1,6 @@
import { countGptMessagesTokens } from '../../common/string/tiktoken/index';
import type {
ChatCompletionAssistantMessageParam,
ChatCompletionContentPart,
ChatCompletionContentPartRefusal,
ChatCompletionContentPartText,
ChatCompletionMessageParam,
SdkChatCompletionMessageParam
} from '@fastgpt/global/core/ai/type.d';
@@ -14,19 +11,36 @@ import { serverRequestBaseUrl } from '../../common/api/serverRequest';
import { i18nT } from '../../../web/i18n/utils';
import { addLog } from '../../common/system/log';
export const filterGPTMessageByMaxContext = async ({
export const filterGPTMessageByMaxTokens = async ({
messages = [],
maxContext
maxTokens
}: {
messages: ChatCompletionMessageParam[];
maxContext: number;
maxTokens: number;
}) => {
if (!Array.isArray(messages)) {
return [];
}
const rawTextLen = messages.reduce((sum, item) => {
if (typeof item.content === 'string') {
return sum + item.content.length;
}
if (Array.isArray(item.content)) {
return (
sum +
item.content.reduce((sum, item) => {
if (item.type === 'text') {
return sum + item.text.length;
}
return sum;
}, 0)
);
}
return sum;
}, 0);
// If the text length is less than half of the maximum token, no calculation is required
if (messages.length < 4) {
if (rawTextLen < maxTokens * 0.5) {
return messages;
}
@@ -38,7 +52,7 @@ export const filterGPTMessageByMaxContext = async ({
const chatPrompts: ChatCompletionMessageParam[] = messages.slice(chatStartIndex);
// reduce token of systemPrompt
maxContext -= await countGptMessagesTokens(systemPrompts);
maxTokens -= await countGptMessagesTokens(systemPrompts);
// Save the last chat prompt(question)
const question = chatPrompts.pop();
@@ -56,9 +70,9 @@ export const filterGPTMessageByMaxContext = async ({
}
const tokens = await countGptMessagesTokens([assistant, user]);
maxContext -= tokens;
maxTokens -= tokens;
/* Total tokens exceed the limit; truncate */
if (maxContext < 0) {
if (maxTokens < 0) {
break;
}
@@ -88,327 +102,223 @@ export const loadRequestMessages = async ({
useVision?: boolean;
origin?: string;
}) => {
const replaceLinkUrl = (text: string) => {
const baseURL = process.env.FE_DOMAIN;
if (!baseURL) return text;
// Match /api/system/img/xxx.xx image links and prepend the baseURL
return text.replace(
/(?<!https?:\/\/[^\s]*)(?:\/api\/system\/img\/[^\s.]*\.[^\s]*)/g,
(match) => `${baseURL}${match}`
);
// Load image to base64
const loadImageToBase64 = async (messages: ChatCompletionContentPart[]) => {
return Promise.all(
messages.map(async (item) => {
if (item.type === 'image_url') {
// Remove url origin
const imgUrl = (() => {
if (origin && item.image_url.url.startsWith(origin)) {
return item.image_url.url.replace(origin, '');
}
return item.image_url.url;
})();
// base64 image
if (imgUrl.startsWith('data:image/')) {
return item;
}
try {
// If imgUrl is a local path, load image from local, and set url to base64
if (imgUrl.startsWith('/') || process.env.MULTIPLE_DATA_TO_BASE64 === 'true') {
addLog.debug('Load image from local server', {
baseUrl: serverRequestBaseUrl,
requestUrl: imgUrl
});
const response = await axios.get(imgUrl, {
baseURL: serverRequestBaseUrl,
responseType: 'arraybuffer',
proxy: false
});
const base64 = Buffer.from(response.data, 'binary').toString('base64');
const imageType =
getFileContentTypeFromHeader(response.headers['content-type']) ||
guessBase64ImageType(base64);
return {
...item,
image_url: {
...item.image_url,
url: `data:${imageType};base64,${base64}`
}
};
}
// Check whether this image is accessible; if not, filter it out
const response = await axios.head(imgUrl, {
timeout: 10000
});
if (response.status < 200 || response.status >= 400) {
addLog.info(`Filter invalid image: ${imgUrl}`);
return;
}
} catch (error) {
return;
}
}
return item;
})
).then((res) => res.filter(Boolean) as ChatCompletionContentPart[]);
};
const parseSystemMessage = (
content: string | ChatCompletionContentPartText[]
): string | ChatCompletionContentPartText[] | undefined => {
if (typeof content === 'string') {
if (!content) return;
return replaceLinkUrl(content);
// Split question text and image
const parseStringWithImages = (input: string): ChatCompletionContentPart[] => {
if (!useVision || input.length > 500) {
return [{ type: 'text', text: input || '' }];
}
const arrayContent = content
.filter((item) => item.text)
.map((item) => ({ ...item, text: replaceLinkUrl(item.text) }));
if (arrayContent.length === 0) return;
return arrayContent;
// Regex for matching image URLs
const imageRegex =
/(https?:\/\/[^\s/$.?#].[^\s]*\.(?:png|jpe?g|gif|webp|bmp|tiff?|svg|ico|heic|avif))/gi;
const result: ChatCompletionContentPart[] = [];
// Extract all unique HTTPS image URLs and push them to the front of result
const httpsImages = [...new Set(Array.from(input.matchAll(imageRegex), (m) => m[0]))];
httpsImages.forEach((url) => {
result.push({
type: 'image_url',
image_url: {
url: url
}
});
});
// Too many images return text
if (httpsImages.length > 4) {
return [{ type: 'text', text: input || '' }];
}
// Append the original input as text
result.push({ type: 'text', text: input });
return result;
};
// Parse user content(text and img) Store history => api messages
const parseUserContent = async (content: string | ChatCompletionContentPart[]) => {
// Split question text and image
const parseStringWithImages = (input: string): ChatCompletionContentPart[] => {
if (!useVision || input.length > 500) {
return [{ type: 'text', text: input }];
}
// Regex for matching image URLs
const imageRegex =
/(https?:\/\/[^\s/$.?#].[^\s]*\.(?:png|jpe?g|gif|webp|bmp|tiff?|svg|ico|heic|avif))/gi;
const result: ChatCompletionContentPart[] = [];
// Extract all unique HTTPS image URLs and push them to the front of result
const httpsImages = [...new Set(Array.from(input.matchAll(imageRegex), (m) => m[0]))];
httpsImages.forEach((url) => {
result.push({
type: 'image_url',
image_url: {
url: url
}
});
});
// Too many images return text
if (httpsImages.length > 4) {
return [{ type: 'text', text: input }];
}
// Append the original input as text
result.push({ type: 'text', text: input });
return result;
};
// Load image to base64
const loadUserContentImage = async (content: ChatCompletionContentPart[]) => {
return Promise.all(
content.map(async (item) => {
if (item.type === 'image_url') {
// Remove url origin
const imgUrl = (() => {
if (origin && item.image_url.url.startsWith(origin)) {
return item.image_url.url.replace(origin, '');
}
return item.image_url.url;
})();
// base64 image
if (imgUrl.startsWith('data:image/')) {
return item;
}
try {
// If imgUrl is a local path, load image from local, and set url to base64
if (imgUrl.startsWith('/') || process.env.MULTIPLE_DATA_TO_BASE64 === 'true') {
addLog.debug('Load image from local server', {
baseUrl: serverRequestBaseUrl,
requestUrl: imgUrl
});
const response = await axios.get(imgUrl, {
baseURL: serverRequestBaseUrl,
responseType: 'arraybuffer',
proxy: false
});
const base64 = Buffer.from(response.data, 'binary').toString('base64');
const imageType =
getFileContentTypeFromHeader(response.headers['content-type']) ||
guessBase64ImageType(base64);
return {
...item,
image_url: {
...item.image_url,
url: `data:${imageType};base64,${base64}`
}
};
}
// Check whether this image is accessible; if not, filter it out
const response = await axios.head(imgUrl, {
timeout: 10000
});
if (response.status < 200 || response.status >= 400) {
addLog.info(`Filter invalid image: ${imgUrl}`);
return;
}
} catch (error) {
return;
}
}
return item;
})
).then((res) => res.filter(Boolean) as ChatCompletionContentPart[]);
};
if (content === undefined) return;
if (typeof content === 'string') {
if (content === '') return;
const loadImageContent = await loadUserContentImage(parseStringWithImages(content));
if (loadImageContent.length === 0) return;
return loadImageContent;
return loadImageToBase64(parseStringWithImages(content));
}
const result = (
await Promise.all(
content.map(async (item) => {
if (item.type === 'text') {
if (item.text) return parseStringWithImages(item.text);
return;
}
if (item.type === 'file_url') return; // LLM not support file_url
if (item.type === 'image_url') {
// close vision, remove image_url
if (!useVision) return;
// remove empty image_url
if (!item.image_url.url) return;
const result = await Promise.all(
content.map(async (item) => {
if (item.type === 'text') return parseStringWithImages(item.text);
if (item.type === 'file_url') return; // LLM not support file_url
if (!item.image_url.url) return item;
return item;
})
);
return loadImageToBase64(result.flat().filter(Boolean) as ChatCompletionContentPart[]);
};
// format GPT messages, concat text messages
const clearInvalidMessages = (messages: ChatCompletionMessageParam[]) => {
return messages
.map((item) => {
if (item.role === ChatCompletionRequestMessageRoleEnum.System && !item.content) {
return;
}
if (item.role === ChatCompletionRequestMessageRoleEnum.User) {
if (item.content === undefined) return;
if (typeof item.content === 'string') {
return {
...item,
content: item.content.trim()
};
}
return item;
})
)
)
.flat()
.filter(Boolean) as ChatCompletionContentPart[];
// array
if (item.content.length === 0) return;
if (item.content.length === 1 && item.content[0].type === 'text') {
return {
...item,
content: item.content[0].text
};
}
}
if (item.role === ChatCompletionRequestMessageRoleEnum.Assistant) {
if (item.content === undefined && !item.tool_calls && !item.function_call) return;
}
const loadImageContent = await loadUserContentImage(result);
if (loadImageContent.length === 0) return;
return loadImageContent;
return item;
})
.filter(Boolean) as ChatCompletionMessageParam[];
};
const formatAssistantItem = (item: ChatCompletionAssistantMessageParam) => {
return {
role: item.role,
content: item.content,
function_call: item.function_call,
name: item.name,
refusal: item.refusal,
tool_calls: item.tool_calls
};
};
const parseAssistantContent = (
content:
| string
| (ChatCompletionContentPartText | ChatCompletionContentPartRefusal)[]
| null
| undefined
) => {
if (typeof content === 'string') {
return content || '';
}
// Interactive node
if (!content) return '';
const result = content.filter((item) => item?.type === 'text');
if (result.length === 0) return '';
return result.map((item) => item.text).join('\n');
};
if (messages.length === 0) {
return Promise.reject(i18nT('common:core.chat.error.Messages empty'));
}
// Merge content from adjacent messages with the same role, keeping a single message whose content becomes an array. For assistant messages, tool calls are not merged.
const mergeMessages = ((messages: ChatCompletionMessageParam[]): ChatCompletionMessageParam[] => {
/*
Merge data for some consecutive roles
1. Contiguous assistant and both have content, merge content
*/
const mergeConsecutiveMessages = (
messages: ChatCompletionMessageParam[]
): ChatCompletionMessageParam[] => {
return messages.reduce((mergedMessages: ChatCompletionMessageParam[], currentMessage) => {
const lastMessage = mergedMessages[mergedMessages.length - 1];
if (!lastMessage) {
return [currentMessage];
}
if (
lastMessage.role === ChatCompletionRequestMessageRoleEnum.System &&
currentMessage.role === ChatCompletionRequestMessageRoleEnum.System
) {
const lastContent: ChatCompletionContentPartText[] = Array.isArray(lastMessage.content)
? lastMessage.content
: [{ type: 'text', text: lastMessage.content || '' }];
const currentContent: ChatCompletionContentPartText[] = Array.isArray(
currentMessage.content
)
? currentMessage.content
: [{ type: 'text', text: currentMessage.content || '' }];
lastMessage.content = [...lastContent, ...currentContent];
} // Handle user messages
else if (
lastMessage.role === ChatCompletionRequestMessageRoleEnum.User &&
currentMessage.role === ChatCompletionRequestMessageRoleEnum.User
) {
const lastContent: ChatCompletionContentPart[] = Array.isArray(lastMessage.content)
? lastMessage.content
: [{ type: 'text', text: lastMessage.content }];
const currentContent: ChatCompletionContentPart[] = Array.isArray(currentMessage.content)
? currentMessage.content
: [{ type: 'text', text: currentMessage.content }];
lastMessage.content = [...lastContent, ...currentContent];
} else if (
lastMessage &&
currentMessage.role === ChatCompletionRequestMessageRoleEnum.Assistant &&
lastMessage.role === ChatCompletionRequestMessageRoleEnum.Assistant &&
currentMessage.role === ChatCompletionRequestMessageRoleEnum.Assistant
typeof lastMessage.content === 'string' &&
typeof currentMessage.content === 'string'
) {
// Content is a non-empty value (string or array), or an interactive node
if (
(typeof lastMessage.content === 'string' ||
Array.isArray(lastMessage.content) ||
lastMessage.interactive) &&
(typeof currentMessage.content === 'string' ||
Array.isArray(currentMessage.content) ||
currentMessage.interactive)
) {
const lastContent: (ChatCompletionContentPartText | ChatCompletionContentPartRefusal)[] =
Array.isArray(lastMessage.content)
? lastMessage.content
: [{ type: 'text', text: lastMessage.content || '' }];
const currentContent: (
| ChatCompletionContentPartText
| ChatCompletionContentPartRefusal
)[] = Array.isArray(currentMessage.content)
? currentMessage.content
: [{ type: 'text', text: currentMessage.content || '' }];
lastMessage.content = [...lastContent, ...currentContent];
} else {
// One of them has no content, so this is not consecutive text output
mergedMessages.push(currentMessage);
}
lastMessage.content += currentMessage ? `\n${currentMessage.content}` : '';
} else {
mergedMessages.push(currentMessage);
}
return mergedMessages;
}, []);
})(messages);
};
const loadMessages = (
await Promise.all(
mergeMessages.map(async (item, i) => {
if (item.role === ChatCompletionRequestMessageRoleEnum.System) {
const content = parseSystemMessage(item.content);
if (!content) return;
return {
...item,
content
};
} else if (item.role === ChatCompletionRequestMessageRoleEnum.User) {
const content = await parseUserContent(item.content);
if (!content) {
return {
...item,
content: 'null'
};
}
if (messages.length === 0) {
return Promise.reject(i18nT('common:core.chat.error.Messages empty'));
}
const formatContent = (() => {
if (Array.isArray(content) && content.length === 1 && content[0].type === 'text') {
return content[0].text;
}
return content;
})();
// filter messages file
const filterMessages = messages.map((item) => {
// If useVision=false, only retain text.
if (
item.role === ChatCompletionRequestMessageRoleEnum.User &&
Array.isArray(item.content) &&
!useVision
) {
return {
...item,
content: item.content.filter((item) => item.type === 'text')
};
}
return {
...item,
content: formatContent
};
} else if (item.role === ChatCompletionRequestMessageRoleEnum.Assistant) {
if (item.tool_calls || item.function_call) {
return formatAssistantItem(item);
}
return item;
});
const parseContent = parseAssistantContent(item.content);
const loadMessages = (await Promise.all(
filterMessages.map(async (item) => {
if (item.role === ChatCompletionRequestMessageRoleEnum.User) {
return {
...item,
content: await parseUserContent(item.content)
};
} else if (item.role === ChatCompletionRequestMessageRoleEnum.Assistant) {
// remove invalid field
return {
role: item.role,
content: item.content,
function_call: item.function_call,
name: item.name,
refusal: item.refusal,
tool_calls: item.tool_calls
};
} else {
return item;
}
})
)) as ChatCompletionMessageParam[];
// If the content is empty and neither neighbor is an assistant message, fill it with 'null' so the user-assistant exchange is not lost
const formatContent = (() => {
const lastItem = mergeMessages[i - 1];
const nextItem = mergeMessages[i + 1];
if (
parseContent === '' &&
(lastItem?.role === ChatCompletionRequestMessageRoleEnum.Assistant ||
nextItem?.role === ChatCompletionRequestMessageRoleEnum.Assistant)
) {
return;
}
return parseContent || 'null';
})();
if (!formatContent) return;
return {
...formatAssistantItem(item),
content: formatContent
};
} else {
return item;
}
})
)
).filter(Boolean) as ChatCompletionMessageParam[];
return loadMessages as SdkChatCompletionMessageParam[];
return mergeConsecutiveMessages(
clearInvalidMessages(loadMessages)
) as SdkChatCompletionMessageParam[];
};

View File

@@ -37,7 +37,12 @@ try {
{ teamId: 1, datasetId: 1, fullTextToken: 'text' },
{
name: 'teamId_1_datasetId_1_fullTextToken_text',
default_language: 'none'
default_language: 'none',
collation: {
locale: 'simple', // use simple matching rules
strength: 2, // ignore case
caseLevel: false // further ensure case insensitivity
}
}
);
DatasetDataTextSchema.index({ dataId: 1 }, { unique: true });

View File

@@ -1,5 +1,5 @@
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../chat/utils';
import { filterGPTMessageByMaxTokens, loadRequestMessages } from '../../../chat/utils';
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import {
countMessagesTokens,
@@ -175,9 +175,9 @@ ${description ? `- ${description}` : ''}
}
];
const adaptMessages = chats2GPTMessages({ messages, reserveId: false });
const filterMessages = await filterGPTMessageByMaxContext({
const filterMessages = await filterGPTMessageByMaxTokens({
messages: adaptMessages,
maxContext: extractModel.maxContext
maxTokens: extractModel.maxContext
});
const requestMessages = await loadRequestMessages({
messages: filterMessages,

View File

@@ -1,5 +1,5 @@
import { createChatCompletion } from '../../../../ai/config';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../../chat/utils';
import { filterGPTMessageByMaxTokens, loadRequestMessages } from '../../../../chat/utils';
import {
ChatCompletion,
StreamChatType,
@@ -172,14 +172,10 @@ export const runToolWithFunctionCall = async (
};
});
const max_tokens = computedMaxToken({
model: toolModel,
maxToken
});
const filterMessages = (
await filterGPTMessageByMaxContext({
await filterGPTMessageByMaxTokens({
messages,
maxContext: toolModel.maxContext - (max_tokens || 0) // filter token. not response maxToken
maxTokens: toolModel.maxContext - 300 // filter token. not response maxToken
})
).map((item) => {
if (item.role === ChatCompletionRequestMessageRoleEnum.Assistant && item.function_call) {
@@ -194,11 +190,16 @@ export const runToolWithFunctionCall = async (
}
return item;
});
const [requestMessages] = await Promise.all([
const [requestMessages, max_tokens] = await Promise.all([
loadRequestMessages({
messages: filterMessages,
useVision: toolModel.vision && aiChatVision,
origin: requestOrigin
}),
computedMaxToken({
model: toolModel,
maxToken,
filterMessages
})
]);
const requestBody = llmCompletionsBodyFormat(

View File

@@ -1,5 +1,5 @@
import { createChatCompletion } from '../../../../ai/config';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../../chat/utils';
import { filterGPTMessageByMaxTokens, loadRequestMessages } from '../../../../chat/utils';
import {
ChatCompletion,
StreamChatType,
@@ -196,20 +196,21 @@ export const runToolWithPromptCall = async (
return Promise.reject('Prompt call invalid input');
}
const max_tokens = computedMaxToken({
model: toolModel,
maxToken
});
const filterMessages = await filterGPTMessageByMaxContext({
const filterMessages = await filterGPTMessageByMaxTokens({
messages,
maxContext: toolModel.maxContext - (max_tokens || 0) // filter token. not response maxToken
maxTokens: toolModel.maxContext - 500 // filter token. not response maxToken
});
const [requestMessages] = await Promise.all([
const [requestMessages, max_tokens] = await Promise.all([
loadRequestMessages({
messages: filterMessages,
useVision: toolModel.vision && aiChatVision,
origin: requestOrigin
}),
computedMaxToken({
model: toolModel,
maxToken,
filterMessages
})
]);
const requestBody = llmCompletionsBodyFormat(

View File

@@ -1,5 +1,5 @@
import { createChatCompletion } from '../../../../ai/config';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../../chat/utils';
import { filterGPTMessageByMaxTokens, loadRequestMessages } from '../../../../chat/utils';
import {
ChatCompletion,
ChatCompletionMessageToolCall,
@@ -228,16 +228,11 @@ export const runToolWithToolChoice = async (
};
});
const max_tokens = computedMaxToken({
model: toolModel,
maxToken
});
// Filter histories by maxToken
const filterMessages = (
await filterGPTMessageByMaxContext({
await filterGPTMessageByMaxTokens({
messages,
maxContext: toolModel.maxContext - (max_tokens || 0) // filter token. not response maxToken
maxTokens: toolModel.maxContext - 300 // filter token. not response maxToken
})
).map((item) => {
if (item.role === 'assistant' && item.tool_calls) {
@@ -253,11 +248,16 @@ export const runToolWithToolChoice = async (
return item;
});
const [requestMessages] = await Promise.all([
const [requestMessages, max_tokens] = await Promise.all([
loadRequestMessages({
messages: filterMessages,
useVision: toolModel.vision && aiChatVision,
origin: requestOrigin
}),
computedMaxToken({
model: toolModel,
maxToken,
filterMessages
})
]);
const requestBody = llmCompletionsBodyFormat(
@@ -272,7 +272,7 @@ export const runToolWithToolChoice = async (
},
toolModel
);
// console.log(JSON.stringify(requestMessages, null, 2), '==requestBody');
// console.log(JSON.stringify(requestBody, null, 2), '==requestBody');
/* Run llm */
const {
response: aiResponse,

View File

@@ -1,5 +1,5 @@
import type { NextApiResponse } from 'next';
import { filterGPTMessageByMaxContext, loadRequestMessages } from '../../../chat/utils';
import { filterGPTMessageByMaxTokens, loadRequestMessages } from '../../../chat/utils';
import type { ChatItemType, UserChatItemValueItemType } from '@fastgpt/global/core/chat/type.d';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
@@ -58,7 +58,6 @@ export type ChatProps = ModuleDispatchProps<
>;
export type ChatResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
}>;
@@ -88,24 +87,22 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
quoteTemplate,
quotePrompt,
aiChatVision,
aiChatReasoning = true,
fileUrlList: fileLinks, // node quote file links
stringQuoteText //abandon
}
} = props;
const { files: inputFiles } = chatValue2RuntimePrompt(query); // Chat box input files
stream = stream && isResponseAnswerText;
const chatHistories = getHistories(history, histories);
quoteQA = checkQuoteQAValue(quoteQA);
const modelConstantsData = getLLMModel(model);
if (!modelConstantsData) {
return Promise.reject('The chat model is undefined, you need to select a chat model.');
}
stream = stream && isResponseAnswerText;
aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning;
const chatHistories = getHistories(history, histories);
quoteQA = checkQuoteQAValue(quoteQA);
const [{ datasetQuoteText }, { documentQuoteText, userFiles }] = await Promise.all([
filterDatasetQuote({
quoteQA,
@@ -127,15 +124,9 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
return Promise.reject(i18nT('chat:AI_input_is_empty'));
}
const max_tokens = computedMaxToken({
model: modelConstantsData,
maxToken
});
const [{ filterMessages }] = await Promise.all([
getChatMessages({
model: modelConstantsData,
maxTokens: max_tokens,
histories: chatHistories,
useDatasetQuote: quoteQA !== undefined,
datasetQuoteText,
@@ -146,8 +137,8 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
userFiles,
documentQuoteText
}),
// Censor = true and system key, will check content
(() => {
// censor model and system key
if (modelConstantsData.censor && !externalProvider.openaiAccount?.key) {
return postTextCensor({
text: `${systemPrompt}
@@ -158,11 +149,18 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
})()
]);
const requestMessages = await loadRequestMessages({
messages: filterMessages,
useVision: modelConstantsData.vision && aiChatVision,
origin: requestOrigin
});
const [requestMessages, max_tokens] = await Promise.all([
loadRequestMessages({
messages: filterMessages,
useVision: modelConstantsData.vision && aiChatVision,
origin: requestOrigin
}),
computedMaxToken({
model: modelConstantsData,
maxToken,
filterMessages
})
]);
const requestBody = llmCompletionsBodyFormat(
{
@@ -185,41 +183,34 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
}
});
const { answerText, reasoningText } = await (async () => {
const { answerText } = await (async () => {
if (res && isStreamResponse) {
// sse response
const { answer, reasoning } = await streamResponse({
const { answer } = await streamResponse({
res,
stream: response,
aiChatReasoning,
workflowStreamResponse
});
return {
answerText: answer,
reasoningText: reasoning
answerText: answer
};
} else {
const unStreamResponse = response as ChatCompletion;
const answer = unStreamResponse.choices?.[0]?.message?.content || '';
const reasoning = aiChatReasoning
? // @ts-ignore
unStreamResponse.choices?.[0]?.message?.reasoning_content || ''
: '';
if (stream) {
// Some models do not support streaming
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
text: answer,
reasoning_content: reasoning
text: answer
})
});
}
return {
answerText: answer,
reasoningText: reasoning
answerText: answer
};
}
})();
@@ -250,7 +241,6 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
return {
answerText,
reasoningText,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
model: modelName,
@@ -377,7 +367,6 @@ async function getMultiInput({
async function getChatMessages({
model,
maxTokens = 0,
aiChatQuoteRole,
datasetQuotePrompt = '',
datasetQuoteText,
@@ -389,7 +378,6 @@ async function getChatMessages({
documentQuoteText
}: {
model: LLMModelItemType;
maxTokens?: number;
// dataset quote
aiChatQuoteRole: AiChatQuoteRoleType; // user: replace user prompt; system: replace system prompt
datasetQuotePrompt?: string;
@@ -456,9 +444,9 @@ async function getChatMessages({
const adaptMessages = chats2GPTMessages({ messages, reserveId: false });
const filterMessages = await filterGPTMessageByMaxContext({
const filterMessages = await filterGPTMessageByMaxTokens({
messages: adaptMessages,
maxContext: model.maxContext - maxTokens // filter token. not response maxToken
maxTokens: model.maxContext - 300 // filter token. not response maxToken
});
return {
@@ -469,43 +457,33 @@ async function getChatMessages({
async function streamResponse({
res,
stream,
workflowStreamResponse,
aiChatReasoning
workflowStreamResponse
}: {
res: NextApiResponse;
stream: StreamChatType;
workflowStreamResponse?: WorkflowResponseType;
aiChatReasoning?: boolean;
}) {
const write = responseWriteController({
res,
readStream: stream
});
let answer = '';
let reasoning = '';
for await (const part of stream) {
if (res.closed) {
stream.controller?.abort();
break;
}
const content = part.choices?.[0]?.delta?.content || '';
answer += content;
const reasoningContent = aiChatReasoning
? part.choices?.[0]?.delta?.reasoning_content || ''
: '';
reasoning += reasoningContent;
workflowStreamResponse?.({
write,
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: content,
reasoning_content: reasoningContent
text: content
})
});
}
return { answer, reasoning };
return { answer };
}
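
A note on the two intertwined hunks above: the response-token budget is now computed after history filtering rather than before it. Previously max_tokens was derived up front and subtracted from the context window inside getChatMessages; now the filter reserves a fixed 300-token margin and computedMaxToken receives the actual filterMessages, which also lets the two remaining preparation steps run concurrently. A reduced, self-contained sketch of that pattern (stub bodies and a crude length-based token estimate, purely illustrative):

type Msg = { role: string; content: string };

// stand-in for the real loader, which resolves vision/file parts
async function loadRequestMessages(messages: Msg[]): Promise<Msg[]> {
  return messages;
}

// stand-in budgeter: cap the completion by what the prompt already uses
async function computeMaxTokens(messages: Msg[], requested = 4096): Promise<number> {
  const used = messages.reduce((n, m) => n + m.content.length / 4, 0);
  return Math.max(1, Math.floor(requested - used));
}

async function prepare(filterMessages: Msg[]) {
  // neither step depends on the other's result, so await them together
  const [requestMessages, max_tokens] = await Promise.all([
    loadRequestMessages(filterMessages),
    computeMaxTokens(filterMessages)
  ]);
  return { requestMessages, max_tokens };
}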

View File

@@ -204,7 +204,6 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
{ inputs = [] }: RuntimeNodeItemType,
{
answerText = '',
reasoningText,
responseData,
nodeDispatchUsages,
toolResponses,
@@ -214,7 +213,6 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
}: Omit<
DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]?: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[DispatchNodeResponseKeyEnum.nodeResponse]?: ChatHistoryItemResType;
}>,
'nodeResponse'
@@ -241,28 +239,18 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
// Histories store
if (assistantResponses) {
chatAssistantResponse = chatAssistantResponse.concat(assistantResponses);
} else {
if (reasoningText) {
} else if (answerText) {
// save assistant text response
const isResponseAnswerText =
inputs.find((item) => item.key === NodeInputKeyEnum.aiChatIsResponseText)?.value ?? true;
if (isResponseAnswerText) {
chatAssistantResponse.push({
type: ChatItemValueTypeEnum.reasoning,
reasoning: {
content: reasoningText
type: ChatItemValueTypeEnum.text,
text: {
content: answerText
}
});
}
if (answerText) {
// save assistant text response
const isResponseAnswerText =
inputs.find((item) => item.key === NodeInputKeyEnum.aiChatIsResponseText)?.value ?? true;
if (isResponseAnswerText) {
chatAssistantResponse.push({
type: ChatItemValueTypeEnum.text,
text: {
content: answerText
}
});
}
}
}
if (rewriteHistories) {
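
The restored else-branch above collapses back to a single guard: the assistant's plain-text answer is persisted only when the node's aiChatIsResponseText input (default true) allows it. A reduced sketch of that logic, with the types narrowed for brevity:

type NodeInput = { key: string; value?: unknown };
type AssistantTextValue = { type: 'text'; text: { content: string } };

function collectAssistantText(inputs: NodeInput[], answerText: string): AssistantTextValue[] {
  const isResponseAnswerText =
    (inputs.find((i) => i.key === 'aiChatIsResponseText')?.value as boolean | undefined) ?? true;
  if (!answerText || !isResponseAnswerText) return [];
  return [{ type: 'text', text: { content: answerText } }];
}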

View File

@@ -244,6 +244,7 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
if (!httpJsonBody) return {};
if (httpContentType === ContentTypes.json) {
httpJsonBody = replaceJsonBodyString(httpJsonBody);
console.log(httpJsonBody);
return json5.parse(httpJsonBody);
}
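
For context on why this body goes through json5.parse rather than JSON.parse: json5 tolerates what naive placeholder substitution tends to leave behind, such as comments, unquoted keys, single quotes and trailing commas. A small standalone illustration (the template below is hypothetical, not from the codebase):

import json5 from 'json5';

const template = `{
  // comments and unquoted keys are fine in JSON5
  query: '{{userInput}}',
  topK: 5,
}`;

const body = json5.parse(template.replace('{{userInput}}', 'hello'));
// -> { query: 'hello', topK: 5 }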

View File

@@ -178,7 +178,6 @@ export const iconPaths = {
'core/chat/sideLine': () => import('./icons/core/chat/sideLine.svg'),
'core/chat/speaking': () => import('./icons/core/chat/speaking.svg'),
'core/chat/stopSpeech': () => import('./icons/core/chat/stopSpeech.svg'),
'core/chat/think': () => import('./icons/core/chat/think.svg'),
'core/dataset/commonDataset': () => import('./icons/core/dataset/commonDataset.svg'),
'core/dataset/commonDatasetColor': () => import('./icons/core/dataset/commonDatasetColor.svg'),
'core/dataset/commonDatasetOutline': () =>
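
iconPaths is a plain name-to-dynamic-import map, so each icon is code-split and only fetched on first use; deleting the 'core/chat/think' entry here pairs with the removal of the SVG file below. A reduced sketch of the registry pattern (the module shape of an imported .svg depends on the bundler and is assumed here):

type IconLoader = () => Promise<{ default: string }>;

const iconPaths: Record<string, IconLoader> = {
  'core/chat/sideLine': () => import('./icons/core/chat/sideLine.svg')
};

async function loadIcon(name: string) {
  const loader = iconPaths[name];
  if (!loader) throw new Error(`unknown icon: ${name}`); // e.g. the removed 'core/chat/think'
  return (await loader()).default;
}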

View File

@@ -1 +0,0 @@
<svg t="1737983662269" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="6134" width="64" height="64"><path d="M512 512m-91.264 0a91.264 91.264 0 1 0 182.528 0 91.264 91.264 0 1 0-182.528 0Z" fill="" p-id="6135"></path><path d="M256.341333 693.546667l-20.138666-5.12C86.101333 650.496 0 586.112 0 511.829333s86.101333-138.666667 236.202667-176.597333l20.138666-5.077333 5.674667 19.968a1003.946667 1003.946667 0 0 0 58.154667 152.661333l4.309333 9.088-4.309333 9.088a994.432 994.432 0 0 0-58.154667 152.661333l-5.674667 19.925334zM226.858667 381.866667c-114.090667 32.042667-184.106667 81.066667-184.106667 129.962666 0 48.853333 70.016 97.877333 184.106667 129.962667a1064.533333 1064.533333 0 0 1 50.432-129.962667A1056.085333 1056.085333 0 0 1 226.858667 381.866667z m540.8 311.68l-5.674667-20.010667a996.565333 996.565333 0 0 0-58.197333-152.618667l-4.309334-9.088 4.309334-9.088a999.253333 999.253333 0 0 0 58.197333-152.661333l5.674667-19.968 20.181333 5.077333c150.058667 37.930667 236.16 102.314667 236.16 176.64s-86.101333 138.666667-236.16 176.597334l-20.181333 5.12z m-20.949334-181.717334c20.48 44.330667 37.418667 87.893333 50.432 129.962667 114.133333-32.085333 184.106667-81.109333 184.106667-129.962667 0-48.896-70.016-97.877333-184.106667-129.962666a1057.621333 1057.621333 0 0 1-50.432 129.962666z" fill="" p-id="6136"></path><path d="M226.56 381.653333l-5.674667-19.925333C178.688 212.992 191.488 106.410667 256 69.205333c63.274667-36.522667 164.864 6.613333 271.317333 115.882667l14.506667 14.890667-14.506667 14.890666a1004.885333 1004.885333 0 0 0-103.338666 126.592l-5.76 8.234667-10.026667 0.853333a1009.365333 1009.365333 0 0 0-161.493333 26.026667l-20.138667 5.077333z m80.896-282.88c-11.434667 0-21.546667 2.474667-30.08 7.381334-42.410667 24.448-49.92 109.44-20.693333 224.128a1071.872 1071.872 0 0 1 137.941333-21.376 1060.138667 1060.138667 0 0 1 87.552-108.544c-66.56-64.810667-129.578667-101.589333-174.72-101.589334z m409.130667 868.778667c-0.042667 0-0.042667 0 0 0-60.8 0-138.88-45.781333-219.904-128.981333l-14.506667-14.890667 14.506667-14.890667a1003.946667 1003.946667 0 0 0 103.296-126.634666l5.76-8.234667 9.984-0.853333a1008.213333 1008.213333 0 0 0 161.578666-25.984l20.138667-5.077334 5.717333 19.968c42.112 148.650667 29.354667 255.274667-35.157333 292.437334a101.546667 101.546667 0 0 1-51.413333 13.141333z m-174.762667-144.256c66.56 64.810667 129.578667 101.589333 174.72 101.589333h0.042667c11.392 0 21.546667-2.474667 30.037333-7.381333 42.410667-24.448 49.962667-109.482667 20.693333-224.170667a1067.52 1067.52 0 0 1-137.984 21.376 1052.757333 1052.757333 0 0 1-87.509333 108.586667z" fill="" p-id="6137"></path><path d="M797.44 381.653333l-20.138667-5.077333a1001.770667 1001.770667 0 0 0-161.578666-26.026667l-9.984-0.853333-5.76-8.234667a998.997333 998.997333 0 0 0-103.296-126.592l-14.506667-14.890666 14.506667-14.890667C603.093333 75.861333 704.64 32.725333 768 69.205333c64.512 37.205333 77.312 143.786667 35.157333 292.48l-5.717333 19.968zM629.333333 308.906667c48.725333 4.437333 95.018667 11.648 137.984 21.376 29.269333-114.688 21.717333-199.68-20.693333-224.128-42.154667-24.362667-121.386667 12.970667-204.8 94.208A1060.224 1060.224 0 0 1 629.333333 308.906667zM307.456 967.552A101.546667 101.546667 0 0 1 256 954.410667c-64.512-37.162667-77.312-143.744-35.114667-292.437334l5.632-19.968 20.138667 5.077334c49.28 12.416 103.637333 21.162667 161.493333 25.984l10.026667 0.853333 5.717333 8.234667a1006.762667 1006.762667 0 0 0 103.338667 
126.634666l14.506667 14.890667-14.506667 14.890667c-80.981333 83.2-159.061333 128.981333-219.776 128.981333z m-50.773333-274.218667c-29.269333 114.688-21.717333 199.722667 20.693333 224.170667 42.112 24.021333 121.301333-13.013333 204.8-94.208a1066.581333 1066.581333 0 0 1-87.552-108.586667 1065.642667 1065.642667 0 0 1-137.941333-21.376z" fill="" p-id="6138"></path><path d="M512 720.128c-35.114667 0-71.210667-1.536-107.349333-4.522667l-10.026667-0.853333-5.76-8.234667a1296.554667 1296.554667 0 0 1-57.6-90.538666 1295.104 1295.104 0 0 1-49.749333-95.061334l-4.266667-9.088 4.266667-9.088a1292.8 1292.8 0 0 1 49.749333-95.061333c17.664-30.549333 37.077333-61.013333 57.6-90.538667l5.76-8.234666 10.026667-0.853334a1270.826667 1270.826667 0 0 1 214.741333 0l9.984 0.853334 5.717333 8.234666a1280.256 1280.256 0 0 1 107.392 185.6l4.309334 9.088-4.309334 9.088a1262.933333 1262.933333 0 0 1-107.392 185.6l-5.717333 8.234667-9.984 0.853333c-36.138667 2.986667-72.277333 4.522667-107.392 4.522667z m-93.738667-46.250667c63.146667 4.736 124.330667 4.736 187.52 0a1237.589333 1237.589333 0 0 0 93.696-162.048 1219.626667 1219.626667 0 0 0-93.738666-162.048 1238.656 1238.656 0 0 0-187.477334 0 1215.018667 1215.018667 0 0 0-93.738666 162.048 1242.197333 1242.197333 0 0 0 93.738666 162.048z" p-id="6139"></path></svg>


View File

@@ -304,7 +304,7 @@ export function useScrollPagination<
);
return (
<MyBox ref={ref} h={'100%'} overflow={'auto'} isLoading={isLoading} {...props}>
<MyBox {...props} ref={ref} overflow={'overlay'} isLoading={isLoading}>
{scrollLoadType === 'top' && total > 0 && isLoading && (
<Box mt={2} fontSize={'xs'} color={'blackAlpha.500'} textAlign={'center'}>
{t('common:common.is_requesting')}
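
One caveat with the restyled container: overflow={'overlay'} maps to the non-standard CSS value `overlay`, which current Chromium treats as a synonym for auto, and which engines that never supported it drop entirely, so the element may not scroll there; the previous overflow={'auto'} is the portable spelling. Note also that the fixed h={'100%'} is gone, so any height must now arrive through the spread props.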

View File

@@ -20,9 +20,9 @@
"model.charsPointsPrice": "Chars Price",
"model.charsPointsPrice_tip": "Combine the model input and output for Token billing. If the language model is configured with input and output billing separately, the input and output will be calculated separately.",
"model.custom_cq_prompt": "Custom question classification prompt words",
"model.custom_cq_prompt_tip": "Override the system's default question classification prompt words, which default to:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_cq_prompt_tip": "Override the system's default question classification prompt words, which default to:\n\"\"\"\n请帮我执行一个“问题分类”任务,将问题分类为以下几种类型之一:\n\n\"\"\"\n{{typeList}}\n\"\"\"\n\n## 背景知识\n{{systemPrompt}}\n\n## 对话记录\n{{history}}\n\n## 开始任务\n\n现在我们开始分类我会给你一个\"问题\"请结合背景知识和对话记录将问题分类到对应的类型中并返回类型ID。\n\n问题\"{{question}}\"\n类型ID=\n\"\"\"",
"model.custom_extract_prompt": "Custom content extraction prompt words",
"model.custom_extract_prompt_tip": "The reminder word of the coverage of the system, the default:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_extract_prompt_tip": "Override system prompt word, default is:\n\"\"\"\n你可以从 <对话记录></对话记录> 中提取指定 Json 信息,你仅需返回 Json 字符串,无需回答问题。\n<提取要求>\n{{description}}\n</提取要求>\n\n<提取规则>\n- 本次需提取的 json 字符串,需符合 JsonSchema 的规则。\n- type 代表数据类型; key 代表字段名; description 代表字段的描述; enum 是枚举值,代表可选的 value。\n- 如果没有可提取的内容,忽略该字段。\n</提取规则>\n\n<JsonSchema>\n{{json}}\n</JsonSchema>\n\n<对话记录>\n{{text}}\n</对话记录>\n\n提取的 json 字符串:\n\"\"\"",
"model.dataset_process": "Dataset file parse",
"model.defaultConfig": "Additional Body parameters",
"model.defaultConfig_tip": "Each request will carry this additional Body parameter.",
@@ -49,12 +49,10 @@
"model.output_price": "Output price",
"model.output_price_tip": "The language model output price. If this item is configured, the model comprehensive price will be invalid.",
"model.param_name": "Parameter name",
"model.reasoning": "Support output thinking",
"model.reasoning_tip": "For example, Deepseek-reasoner can output the thinking process.",
"model.request_auth": "Custom key",
"model.request_auth": "Custom token",
"model.request_auth_tip": "When making a request to a custom request address, carry the request header: Authorization: Bearer xxx to make the request.",
"model.request_url": "Custom url",
"model.request_url_tip": "If you fill in this value, you will initiate a request directly without passing. \nYou need to follow the API format of Openai and fill in the full request address, such as\n\nLLM: {Host}}/v1/Chat/Completions\n\nEmbedding: {host}}/v1/embeddings\n\nSTT: {Host}/v1/Audio/Transcriptions\n\nTTS: {Host}}/v1/Audio/Speech\n\nRERARARARARARARANK: {Host}}/v1/RERARARARARARARARARARANK",
"model.request_url_tip": "If this value is filled in, a request will be made directly to this address without going through OneAPI",
"model.test_model": "Model testing",
"model.tool_choice": "Tool choice",
"model.tool_choice_tag": "ToolCall",

View File

@@ -109,7 +109,6 @@
"publish_channel": "Publish",
"publish_success": "Publish Successful",
"question_guide_tip": "After the conversation, 3 guiding questions will be generated for you.",
"reasoning_response": "Output thinking",
"saved_success": "Saved successfully! \nTo use this version externally, click Save and Publish",
"search_app": "Search apps",
"setting_app": "Workflow",

View File

@@ -2,7 +2,6 @@
"AI_input_is_empty": "The content passed to the AI node is empty",
"Delete_all": "Clear All Lexicon",
"LLM_model_response_empty": "The model flow response is empty, please check whether the model flow output is normal.",
"ai_reasoning": "Thinking process",
"chat_history": "Conversation History",
"chat_input_guide_lexicon_is_empty": "Lexicon not configured yet",
"chat_test_app": "Debug-{{name}}",

View File

@@ -20,9 +20,9 @@
"model.charsPointsPrice": "模型综合价格",
"model.charsPointsPrice_tip": "将模型输入和输出合并起来进行 Token 计费,语言模型如果单独配置了输入和输出计费,则按输入和输出分别计算",
"model.custom_cq_prompt": "自定义问题分类提示词",
"model.custom_cq_prompt_tip": "覆盖系统默认的问题分类提示词,默认为:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_cq_prompt_tip": "覆盖系统默认的问题分类提示词,默认为:\n",
"model.custom_extract_prompt": "自定义内容提取提示词",
"model.custom_extract_prompt_tip": "覆盖系统的提示词,默认为:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_extract_prompt_tip": "覆盖系统的提示词,默认为:\n\"\"\"\n你可以从 <对话记录></对话记录> 中提取指定 Json 信息,你仅需返回 Json 字符串,无需回答问题。\n<提取要求>\n{{description}}\n</提取要求>\n\n<提取规则>\n- 本次需提取的 json 字符串,需符合 JsonSchema 的规则。\n- type 代表数据类型; key 代表字段名; description 代表字段的描述; enum 是枚举值,代表可选的 value。\n- 如果没有可提取的内容,忽略该字段。\n</提取规则>\n\n<JsonSchema>\n{{json}}\n</JsonSchema>\n\n<对话记录>\n{{text}}\n</对话记录>\n\n提取的 json 字符串:\n\"\"\"",
"model.dataset_process": "用于知识库文件处理",
"model.defaultConfig": "额外 Body 参数",
"model.defaultConfig_tip": "每次请求时候,都会携带该额外 Body 参数",
@@ -49,12 +49,10 @@
"model.output_price": "模型输出价格",
"model.output_price_tip": "语言模型输出价格,如果配置了该项,则模型综合价格会失效",
"model.param_name": "参数名",
"model.reasoning": "支持输出思考",
"model.reasoning_tip": "例如 Deepseek-reasoner可以输出思考过程。",
"model.request_auth": "自定义请求 Key",
"model.request_auth": "自定义请求 Tokens",
"model.request_auth_tip": "向自定义请求地址发起请求时候携带请求头Authorization: Bearer xxx 进行请求",
"model.request_url": "自定义请求地址",
"model.request_url_tip": "如果填写该值,则会直接向该地址发起请求,不经过 OneAPI。需要遵循 OpenAI 的 API格式并填写完整请求地址例如\nLLM: {{host}}/v1/chat/completions\nEmbedding: {{host}}/v1/embeddings\nSTT: {{host}}/v1/audio/transcriptions\nTTS: {{host}}/v1/audio/speech\nRerank: {{host}}/v1/rerank",
"model.request_url_tip": "如果填写该值,则会直接向该地址发起请求,不经过 OneAPI",
"model.test_model": "模型测试",
"model.tool_choice": "支持工具调用",
"model.tool_choice_tag": "工具调用",

View File

@@ -109,7 +109,6 @@
"publish_channel": "发布渠道",
"publish_success": "发布成功",
"question_guide_tip": "对话结束后,会为你生成 3 个引导性问题。",
"reasoning_response": "输出思考",
"saved_success": "保存成功!如需在外部使用该版本,请点击“保存并发布”",
"search_app": "搜索应用",
"setting_app": "应用配置",

View File

@@ -2,7 +2,6 @@
"AI_input_is_empty": "传入AI 节点的内容为空",
"Delete_all": "清空词库",
"LLM_model_response_empty": "模型流响应为空,请检查模型流输出是否正常",
"ai_reasoning": "思考过程",
"chat_history": "聊天记录",
"chat_input_guide_lexicon_is_empty": "还没有配置词库",
"chat_test_app": "调试-{{name}}",

View File

@@ -19,9 +19,9 @@
"model.charsPointsPrice": "模型綜合價格",
"model.charsPointsPrice_tip": "將模型輸入和輸出合併起來進行 Token 計費,語言模型如果單獨配置了輸入和輸出計費,則按輸入和輸出分別計算",
"model.custom_cq_prompt": "自訂問題分類提示詞",
"model.custom_cq_prompt_tip": "覆蓋系統預設的問題分類提示詞,預設為:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_cq_prompt_tip": "覆蓋系統預設的問題分類提示詞,預設為:\n\"\"\"\n请帮我执行一个“问题分类”任务,将问题分类为以下几种类型之一:\n\n\"\"\"\n{{typeList}}\n\"\"\"\n\n## 背景知识\n{{systemPrompt}}\n\n## 对话记录\n{{history}}\n\n## 开始任务\n\n现在我们开始分类我会给你一个\"问题\"请结合背景知识和对话记录将问题分类到对应的类型中并返回类型ID。\n\n问题\"{{question}}\"\n类型ID=\n\"\"\"",
"model.custom_extract_prompt": "自訂內容提取提示詞",
"model.custom_extract_prompt_tip": "覆蓋系統的提示詞,默認為:\n\"\"\"\n{{prompt}}\n\"\"\"",
"model.custom_extract_prompt_tip": "覆蓋系統的提示詞,預設為:\n\"\"\"\n你可以从 <对话记录></对话记录> 中提取指定 Json 信息,你仅需返回 Json 字符串,无需回答问题。\n<提取要求>\n{{description}}\n</提取要求>\n\n<提取规则>\n- 本次需提取的 json 字符串,需符合 JsonSchema 的规则。\n- type 代表数据类型; key 代表字段名; description 代表字段的描述; enum 是枚举值,代表可选的 value。\n- 如果没有可提取的内容,忽略该字段。\n</提取规则>\n\n<JsonSchema>\n{{json}}\n</JsonSchema>\n\n<对话记录>\n{{text}}\n</对话记录>\n\n提取的 json 字符串:\n\"\"\"",
"model.dataset_process": "用於知識庫文件處理",
"model.defaultConfig": "額外 Body 參數",
"model.defaultConfig_tip": "每次請求時候,都會攜帶該額外 Body 參數",
@@ -48,12 +48,10 @@
"model.output_price": "模型輸出價格",
"model.output_price_tip": "語言模型輸出價格,如果配置了該項,則模型綜合價格會失效",
"model.param_name": "參數名",
"model.reasoning": "支持輸出思考",
"model.reasoning_tip": "例如 Deepseek-reasoner可以輸出思考過程。",
"model.request_auth": "自訂請求 Key",
"model.request_auth": "自訂請求 Tokens",
"model.request_auth_tip": "向自訂請求地址發起請求時候攜帶請求頭Authorization: Bearer xxx 進行請求",
"model.request_url": "自訂請求地址",
"model.request_url_tip": "如果填該值,則會直接向該址發起請求,不經過 OneAPI。\n需要遵循 OpenAI 的 API格式並填寫完整請求地址例如\n\nLLM: {{host}}/v1/chat/completions\n\nEmbedding: {{host}}/v1/embeddings\n\nSTT: {{host}}/v1/audio/transcriptions\n\nTTS: {{host}}/v1/audio/speech\n\nRerank: {{host}}/v1/rerank",
"model.request_url_tip": "如果填該值,則會直接向該址發起請求,不經過 OneAPI",
"model.test_model": "模型測試",
"model.tool_choice": "支援工具調用",
"model.tool_choice_tag": "工具調用",

View File

@@ -109,7 +109,6 @@
"publish_channel": "發布通道",
"publish_success": "發布成功",
"question_guide_tip": "對話結束後,會為你產生 3 個引導性問題。",
"reasoning_response": "輸出思考",
"saved_success": "保存成功!\n如需在外部使用該版本請點擊“儲存並發布”",
"search_app": "搜尋應用程式",
"setting_app": "應用程式設定",

View File

@@ -2,7 +2,6 @@
"AI_input_is_empty": "傳送至 AI 節點的內容為空",
"Delete_all": "清除所有詞彙",
"LLM_model_response_empty": "模型流程回應為空,請檢查模型流程輸出是否正常",
"ai_reasoning": "思考過程",
"chat_history": "對話紀錄",
"chat_input_guide_lexicon_is_empty": "尚未設定詞彙庫",
"chat_test_app": "調試-{{name}}",

pnpm-lock.yaml (generated, 432 changed lines)
View File

@@ -22,7 +22,7 @@ importers:
version: 13.3.0
next-i18next:
specifier: 15.3.0
version: 15.3.0(i18next@23.11.5)(next@14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react-i18next@14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)
version: 15.3.0(i18next@23.11.5)(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react-i18next@14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)
prettier:
specifier: 3.2.4
version: 3.2.4
@@ -67,7 +67,7 @@ importers:
version: 4.0.2
next:
specifier: 14.2.5
version: 14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
version: 14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
openai:
specifier: 4.61.0
version: 4.61.0(encoding@0.1.13)
@@ -111,9 +111,6 @@ importers:
lodash:
specifier: ^4.17.21
version: 4.17.21
mssql:
specifier: ^11.0.1
version: 11.0.1
mysql2:
specifier: ^3.11.3
version: 3.11.3
@@ -213,7 +210,7 @@ importers:
version: 1.4.5-lts.1
next:
specifier: 14.2.5
version: 14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
version: 14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
nextjs-cors:
specifier: ^2.2.0
version: 2.2.0(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))
@@ -298,7 +295,7 @@ importers:
version: 2.1.1(@chakra-ui/system@2.6.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(react@18.3.1))(react@18.3.1)
'@chakra-ui/next-js':
specifier: 2.1.5
version: 2.1.5(@chakra-ui/react@2.8.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(framer-motion@9.1.7(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(next@14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react@18.3.1)
version: 2.1.5(@chakra-ui/react@2.8.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(framer-motion@9.1.7(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react@18.3.1)
'@chakra-ui/react':
specifier: 2.8.1
version: 2.8.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(framer-motion@9.1.7(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
@@ -361,7 +358,7 @@ importers:
version: 4.17.21
next-i18next:
specifier: 15.3.0
version: 15.3.0(i18next@23.11.5)(next@14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react-i18next@14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)
version: 15.3.0(i18next@23.11.5)(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react-i18next@14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)
papaparse:
specifier: ^5.4.1
version: 5.4.1
@@ -792,74 +789,6 @@ packages:
peerDependencies:
openapi-types: '>=7'
'@azure/abort-controller@2.1.2':
resolution: {integrity: sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==}
engines: {node: '>=18.0.0'}
'@azure/core-auth@1.9.0':
resolution: {integrity: sha512-FPwHpZywuyasDSLMqJ6fhbOK3TqUdviZNF8OqRGA4W5Ewib2lEEZ+pBsYcBa88B2NGO/SEnYPGhyBqNlE8ilSw==}
engines: {node: '>=18.0.0'}
'@azure/core-client@1.9.2':
resolution: {integrity: sha512-kRdry/rav3fUKHl/aDLd/pDLcB+4pOFwPPTVEExuMyaI5r+JBbMWqRbCY1pn5BniDaU3lRxO9eaQ1AmSMehl/w==}
engines: {node: '>=18.0.0'}
'@azure/core-http-compat@2.1.2':
resolution: {integrity: sha512-5MnV1yqzZwgNLLjlizsU3QqOeQChkIXw781Fwh1xdAqJR5AA32IUaq6xv1BICJvfbHoa+JYcaij2HFkhLbNTJQ==}
engines: {node: '>=18.0.0'}
'@azure/core-lro@2.7.2':
resolution: {integrity: sha512-0YIpccoX8m/k00O7mDDMdJpbr6mf1yWo2dfmxt5A8XVZVVMz2SSKaEbMCeJRvgQ0IaSlqhjT47p4hVIRRy90xw==}
engines: {node: '>=18.0.0'}
'@azure/core-paging@1.6.2':
resolution: {integrity: sha512-YKWi9YuCU04B55h25cnOYZHxXYtEvQEbKST5vqRga7hWY9ydd3FZHdeQF8pyh+acWZvppw13M/LMGx0LABUVMA==}
engines: {node: '>=18.0.0'}
'@azure/core-rest-pipeline@1.18.2':
resolution: {integrity: sha512-IkTf/DWKyCklEtN/WYW3lqEsIaUDshlzWRlZNNwSYtFcCBQz++OtOjxNpm8rr1VcbMS6RpjybQa3u6B6nG0zNw==}
engines: {node: '>=18.0.0'}
'@azure/core-tracing@1.2.0':
resolution: {integrity: sha512-UKTiEJPkWcESPYJz3X5uKRYyOcJD+4nYph+KpfdPRnQJVrZfk0KJgdnaAWKfhsBBtAf/D58Az4AvCJEmWgIBAg==}
engines: {node: '>=18.0.0'}
'@azure/core-util@1.11.0':
resolution: {integrity: sha512-DxOSLua+NdpWoSqULhjDyAZTXFdP/LKkqtYuxxz1SCN289zk3OG8UOpnCQAz/tygyACBtWp/BoO72ptK7msY8g==}
engines: {node: '>=18.0.0'}
'@azure/identity@4.6.0':
resolution: {integrity: sha512-ANpO1iAvcZmpD4QY7/kaE/P2n66pRXsDp3nMUC6Ow3c9KfXOZF7qMU9VgqPw8m7adP7TVIbVyrCEmD9cth3KQQ==}
engines: {node: '>=18.0.0'}
'@azure/keyvault-common@2.0.0':
resolution: {integrity: sha512-wRLVaroQtOqfg60cxkzUkGKrKMsCP6uYXAOomOIysSMyt1/YM0eUn9LqieAWM8DLcU4+07Fio2YGpPeqUbpP9w==}
engines: {node: '>=18.0.0'}
'@azure/keyvault-keys@4.9.0':
resolution: {integrity: sha512-ZBP07+K4Pj3kS4TF4XdkqFcspWwBHry3vJSOFM5k5ZABvf7JfiMonvaFk2nBF6xjlEbMpz5PE1g45iTMme0raQ==}
engines: {node: '>=18.0.0'}
'@azure/logger@1.1.4':
resolution: {integrity: sha512-4IXXzcCdLdlXuCG+8UKEwLA1T1NHqUfanhXYHiQTn+6sfWCZXduqbtXDGceg3Ce5QxTGo7EqmbV6Bi+aqKuClQ==}
engines: {node: '>=18.0.0'}
'@azure/msal-browser@4.0.2':
resolution: {integrity: sha512-bq6PasUpJgBSOSMeSlh8gXh4LZGgAaPoJFNcu5u0zxwueh+I8NpMb9oxlCfS/8CJHyXUhTUAMLSnvThemNdyQw==}
engines: {node: '>=0.8.0'}
'@azure/msal-common@14.16.0':
resolution: {integrity: sha512-1KOZj9IpcDSwpNiQNjt0jDYZpQvNZay7QAEi/5DLubay40iGYtLzya/jbjRPLyOTZhEKyL1MzPuw2HqBCjceYA==}
engines: {node: '>=0.8.0'}
'@azure/msal-common@15.0.2':
resolution: {integrity: sha512-RQHmI5vOMYLNSO0ER7d/O9TojWWEn4m0YtWbL8mZthkKGQI7ALn5ONHUVTUSxSVYwGYdHGNrwiHAzQhboqwZzQ==}
engines: {node: '>=0.8.0'}
'@azure/msal-node@2.16.2':
resolution: {integrity: sha512-An7l1hEr0w1HMMh1LU+rtDtqL7/jw74ORlc9Wnh06v7TU/xpG39/Zdr1ZJu3QpjUfKJ+E0/OXMW8DRSWTlh7qQ==}
engines: {node: '>=16'}
'@babel/code-frame@7.24.7':
resolution: {integrity: sha512-BcYH1CVJBO9tvyIZ2jVeXgSIMvGZ2FDRvDdOIVQyuklNKSsx+eppDEBq/g47Ayw+RqNFE+URvOShmf+f/qwAlA==}
engines: {node: '>=6.9.0'}
@@ -2564,9 +2493,6 @@ packages:
'@jridgewell/trace-mapping@0.3.9':
resolution: {integrity: sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==}
'@js-joda/core@5.6.4':
resolution: {integrity: sha512-ChdLDTYMEoYoiKZMT90wZMEdGvZ2/QZMnhvjvEqeO5oLoxUfSiLzfe6Lhf3g88+MhZ+utbAu7PAxX1sZkLo5pA==}
'@js-sdsl/ordered-map@4.4.2':
resolution: {integrity: sha512-iUKgm52T8HOE/makSxjqoWhe95ZJA1/G1sYsGev2JDKUSS14KAgg1LHb+Ba+IPow0xflbnSkOsZcO08C7w1gYw==}
@@ -3290,9 +3216,6 @@ packages:
react-native:
optional: true
'@tediousjs/connection-string@0.5.0':
resolution: {integrity: sha512-7qSgZbincDDDFyRweCIEvZULFAw5iz/DeunhvuxpL31nfntX3P4Yd4HkHBRg9H8CdqY1e5WFN1PZIz/REL9MVQ==}
'@tokenizer/token@0.3.0':
resolution: {integrity: sha512-OvjF+z51L3ov0OyAU0duzsYuvO01PH7x4t6DJx+guahgTnBHkhJdG7soQeTSFLWN3efnHyibZ4Z8l2EuWwJN3A==}
@@ -3603,9 +3526,6 @@ packages:
'@types/react@18.3.1':
resolution: {integrity: sha512-V0kuGBX3+prX+DQ/7r2qsv1NsdfnCLnTgnRJ1pYnxykBhGMz+qj+box5lq7XsO5mtZsBqpjwwTu/7wszPfMBcw==}
'@types/readable-stream@4.0.18':
resolution: {integrity: sha512-21jK/1j+Wg+7jVw1xnSwy/2Q1VgVjWuFssbYGTREPUBeZ+rqVFl2udq0IkxzPC0ZhOzVceUbyIACFZKLqKEBlA==}
'@types/request-ip@0.0.37':
resolution: {integrity: sha512-uw6/i3rQnpznxD7LtLaeuZytLhKZK6bRoTS6XVJlwxIOoOpEBU7bgKoVXDNtOg4Xl6riUKHa9bjMVrL6ESqYlQ==}
@@ -4165,9 +4085,6 @@ packages:
bl@4.1.0:
resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==}
bl@6.0.19:
resolution: {integrity: sha512-4Ay3A3oDfGg3GGirhl4s62ebtnk0pJZA5mLp672MPKOQXsWvXjEF4dqdXySjJIs7b9OVr/O8aOo0Lm+xdjo2JA==}
bluebird@3.4.7:
resolution: {integrity: sha512-iD3898SR7sWVRHbiQv+sHUtHnMvC1o3nW5rAcqnq3uOn07DSAppZYUkIGslDz6gXC7HfunPe7YVBgoEJASPcHA==}
@@ -4934,10 +4851,6 @@ packages:
resolution: {integrity: sha512-rBMvIzlpA8v6E+SJZoo++HAYqsLrkg7MSfIinMPFhmkorw7X+dOXVJQs+QT69zGkzMyfDnIMN2Wid1+NbL3T+A==}
engines: {node: '>= 0.4'}
define-lazy-prop@2.0.0:
resolution: {integrity: sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==}
engines: {node: '>=8'}
define-properties@1.2.1:
resolution: {integrity: sha512-8QmQKqEASLd5nx0U1B1okLElbUuuttJ/AnYmRXbbbGDWh6uS208EjD4Xqq/I9wK7u0v6O08XhTWnt5XtEbR6Dg==}
engines: {node: '>= 0.4'}
@@ -6051,11 +5964,6 @@ packages:
is-decimal@2.0.1:
resolution: {integrity: sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A==}
is-docker@2.2.1:
resolution: {integrity: sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==}
engines: {node: '>=8'}
hasBin: true
is-extglob@2.1.1:
resolution: {integrity: sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==}
engines: {node: '>=0.10.0'}
@@ -6191,10 +6099,6 @@ packages:
is-word-character@1.0.4:
resolution: {integrity: sha512-5SMO8RVennx3nZrqtKwCGyyetPE9VDba5ugvKLaD4KopPG5kR4mQ7tNt/r7feL5yt5h3lpuBbIUmCOG2eSzXHA==}
is-wsl@2.2.0:
resolution: {integrity: sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==}
engines: {node: '>=8'}
isarray@1.0.0:
resolution: {integrity: sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==}
@@ -6401,9 +6305,6 @@ packages:
resolution: {integrity: sha512-cEiJEAEoIbWfCZYKWhVwFuvPX1gETRYPw6LlaTKoxD3s2AkXzkCjnp6h0V77ozyqj0jakteJ4YqDJT830+lVGw==}
engines: {node: '>=14'}
js-md4@0.3.2:
resolution: {integrity: sha512-/GDnfQYsltsjRswQhN9fhv3EMw2sCpUdrdxyWDOUK7eyD++r3gRhzgiQgc/x4MAv2i1iuQ4lxO5mvqM3vj4bwA==}
js-tokens@4.0.0:
resolution: {integrity: sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==}
@@ -6498,15 +6399,9 @@ packages:
jwa@1.4.1:
resolution: {integrity: sha512-qiLX/xhEEFKUAJ6FiBMbes3w9ATzyk5W7Hvzpa/SLYdxNtng+gcurvrI7TbACjIXlsJyr05/S1oUhZrc63evQA==}
jwa@2.0.0:
resolution: {integrity: sha512-jrZ2Qx916EA+fq9cEAeCROWPTfCwi1IVHqT2tapuqLEVVDKFDENFw1oL+MwrTvH6msKxsd1YTDVw6uKEcsrLEA==}
jws@3.2.2:
resolution: {integrity: sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==}
jws@4.0.0:
resolution: {integrity: sha512-KDncfTmOZoOMTFG4mBlG0qUIOlc03fmzH+ru6RgYVZhPkyiy/92Owlt/8UEN+a4TXR1FQetfIpJE8ApdvdVxTg==}
kareem@2.5.1:
resolution: {integrity: sha512-7jFxRVm+jD+rkq3kY0iZDJfsO2/t4BBPeEb2qKn2lR/9KhuksYk5hxzfRYWMPV8P/x2d0kHD306YyWLzjjH+uA==}
engines: {node: '>=12.0.0'}
@@ -7211,11 +7106,6 @@ packages:
ms@2.1.3:
resolution: {integrity: sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==}
mssql@11.0.1:
resolution: {integrity: sha512-KlGNsugoT90enKlR8/G36H0kTxPthDhmtNUCwEHvgRza5Cjpjoj+P2X6eMpFUDN7pFrJZsKadL4x990G8RBE1w==}
engines: {node: '>=18'}
hasBin: true
multer@1.4.5-lts.1:
resolution: {integrity: sha512-ywPWvcDMeH+z9gQq5qYHCCy+ethsk4goepZ45GLD63fOu0YcNecQxi64nDs3qluZB+murG3/D4dJ7+dGctcCQQ==}
engines: {node: '>= 6.0.0'}
@@ -7251,9 +7141,6 @@ packages:
napi-build-utils@1.0.2:
resolution: {integrity: sha512-ONmRUqK7zj7DWX0D9ADe03wbwOBZxNAfF20PlGfCWQcD3+/MakShIHrMqx9YwPTfxDdF1zLeL+RGZiR9kGMLdg==}
native-duplexpair@1.0.0:
resolution: {integrity: sha512-E7QQoM+3jvNtlmyfqRZ0/U75VFgCls+fSkbml2MpgWkWyz3ox8Y58gNhfuziuQYGNNQAbFZJQck55LHCnCK6CA==}
natural-compare@1.4.0:
resolution: {integrity: sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==}
@@ -7453,10 +7340,6 @@ packages:
resolution: {integrity: sha512-1FlR+gjXK7X+AsAHso35MnyN5KqGwJRi/31ft6x0M194ht7S+rWAvd7PHss9xSKMzE0asv1pyIHaJYq+BbacAQ==}
engines: {node: '>=12'}
open@8.4.2:
resolution: {integrity: sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ==}
engines: {node: '>=12'}
openai@4.61.0:
resolution: {integrity: sha512-xkygRBRLIUumxzKGb1ug05pWmJROQsHkGuj/N6Jiw2dj0dI19JvbFpErSZKmJ/DA+0IvpcugZqCAyk8iLpyM6Q==}
hasBin: true
@@ -7883,7 +7766,6 @@ packages:
react-beautiful-dnd@13.1.1:
resolution: {integrity: sha512-0Lvs4tq2VcrEjEgDXHjT98r+63drkKEgqyxdA7qD3mvKwga6a5SscbdLPO2IExotU1jW8L0Ksdl0Cj2AF67nPQ==}
deprecated: 'react-beautiful-dnd is now deprecated. Context and options: https://github.com/atlassian/react-beautiful-dnd/issues/2672'
peerDependencies:
react: ^16.8.5 || ^17.0.0 || ^18.0.0
react-dom: ^16.8.5 || ^17.0.0 || ^18.0.0
@@ -8483,10 +8365,6 @@ packages:
resolution: {integrity: sha512-iCGQj+0l0HOdZ2AEeBADlsRC+vsnDsZsbdSiH1yNSjcfKM7fdpCMfqAL/dwF5BLiw/XhRft/Wax6zQbhq2BcjQ==}
engines: {node: '>= 0.4'}
stoppable@1.1.0:
resolution: {integrity: sha512-KXDYZ9dszj6bzvnEMRYvxgeTHU74QBFL54XKtP3nyMuJ81CFYtABZ3bAzL2EdFUaEwJOBOgENyFj3R7oTzDyyw==}
engines: {node: '>=4', npm: '>=6'}
streamsearch@1.1.0:
resolution: {integrity: sha512-Mcc5wHehp9aXz1ax6bZUyY5afg9u2rv5cqQI3mRrYkGC8rW2hM02jWuwjtL++LS5qinSyhj2QfLyNsuc+VsExg==}
engines: {node: '>=10.0.0'}
@@ -8666,14 +8544,6 @@ packages:
resolution: {integrity: sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A==}
engines: {node: '>=10'}
tarn@3.0.2:
resolution: {integrity: sha512-51LAVKUSZSVfI05vjPESNc5vwqqZpbXCsU+/+wxlOrUjk2SnFTt97v9ZgQrD4YmxYW1Px6w2KjaDitCfkvgxMQ==}
engines: {node: '>=8.0.0'}
tedious@18.6.1:
resolution: {integrity: sha512-9AvErXXQTd6l7TDd5EmM+nxbOGyhnmdbp/8c3pw+tjaiSXW9usME90ET/CRG1LN1Y9tPMtz/p83z4Q97B4DDpw==}
engines: {node: '>=18'}
terser-webpack-plugin@5.3.10:
resolution: {integrity: sha512-BKFPWlPDndPs+NGGCr1U59t0XScL5317Y0UReNrHaw9/FwhPENlq6bfgs+4yPfyP51vqC1bQ4rp1EfXW5ZSH9w==}
engines: {node: '>= 10.13.0'}
@@ -9543,136 +9413,6 @@ snapshots:
call-me-maybe: 1.0.2
openapi-types: 12.1.3
'@azure/abort-controller@2.1.2':
dependencies:
tslib: 2.8.0
'@azure/core-auth@1.9.0':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-util': 1.11.0
tslib: 2.8.0
'@azure/core-client@1.9.2':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-auth': 1.9.0
'@azure/core-rest-pipeline': 1.18.2
'@azure/core-tracing': 1.2.0
'@azure/core-util': 1.11.0
'@azure/logger': 1.1.4
tslib: 2.8.0
transitivePeerDependencies:
- supports-color
'@azure/core-http-compat@2.1.2':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-client': 1.9.2
'@azure/core-rest-pipeline': 1.18.2
transitivePeerDependencies:
- supports-color
'@azure/core-lro@2.7.2':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-util': 1.11.0
'@azure/logger': 1.1.4
tslib: 2.8.0
'@azure/core-paging@1.6.2':
dependencies:
tslib: 2.8.0
'@azure/core-rest-pipeline@1.18.2':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-auth': 1.9.0
'@azure/core-tracing': 1.2.0
'@azure/core-util': 1.11.0
'@azure/logger': 1.1.4
http-proxy-agent: 7.0.2
https-proxy-agent: 7.0.5
tslib: 2.8.0
transitivePeerDependencies:
- supports-color
'@azure/core-tracing@1.2.0':
dependencies:
tslib: 2.8.0
'@azure/core-util@1.11.0':
dependencies:
'@azure/abort-controller': 2.1.2
tslib: 2.8.0
'@azure/identity@4.6.0':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-auth': 1.9.0
'@azure/core-client': 1.9.2
'@azure/core-rest-pipeline': 1.18.2
'@azure/core-tracing': 1.2.0
'@azure/core-util': 1.11.0
'@azure/logger': 1.1.4
'@azure/msal-browser': 4.0.2
'@azure/msal-node': 2.16.2
events: 3.3.0
jws: 4.0.0
open: 8.4.2
stoppable: 1.1.0
tslib: 2.8.0
transitivePeerDependencies:
- supports-color
'@azure/keyvault-common@2.0.0':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-auth': 1.9.0
'@azure/core-client': 1.9.2
'@azure/core-rest-pipeline': 1.18.2
'@azure/core-tracing': 1.2.0
'@azure/core-util': 1.11.0
'@azure/logger': 1.1.4
tslib: 2.8.0
transitivePeerDependencies:
- supports-color
'@azure/keyvault-keys@4.9.0':
dependencies:
'@azure/abort-controller': 2.1.2
'@azure/core-auth': 1.9.0
'@azure/core-client': 1.9.2
'@azure/core-http-compat': 2.1.2
'@azure/core-lro': 2.7.2
'@azure/core-paging': 1.6.2
'@azure/core-rest-pipeline': 1.18.2
'@azure/core-tracing': 1.2.0
'@azure/core-util': 1.11.0
'@azure/keyvault-common': 2.0.0
'@azure/logger': 1.1.4
tslib: 2.8.0
transitivePeerDependencies:
- supports-color
'@azure/logger@1.1.4':
dependencies:
tslib: 2.8.0
'@azure/msal-browser@4.0.2':
dependencies:
'@azure/msal-common': 15.0.2
'@azure/msal-common@14.16.0': {}
'@azure/msal-common@15.0.2': {}
'@azure/msal-node@2.16.2':
dependencies:
'@azure/msal-common': 14.16.0
jsonwebtoken: 9.0.2
uuid: 8.3.2
'@babel/code-frame@7.24.7':
dependencies:
'@babel/highlight': 7.24.7
@@ -9693,7 +9433,7 @@ snapshots:
'@babel/traverse': 7.25.6
'@babel/types': 7.25.6
convert-source-map: 2.0.0
debug: 4.3.7
debug: 4.3.5
gensync: 1.0.0-beta.2
json5: 2.2.3
semver: 6.3.1
@@ -10545,7 +10285,7 @@ snapshots:
'@babel/parser': 7.25.6
'@babel/template': 7.25.0
'@babel/types': 7.25.6
debug: 4.3.7
debug: 4.3.5
globals: 11.12.0
transitivePeerDependencies:
- supports-color
@@ -10842,6 +10582,14 @@ snapshots:
next: 14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
react: 18.3.1
'@chakra-ui/next-js@2.1.5(@chakra-ui/react@2.8.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(framer-motion@9.1.7(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react@18.3.1)':
dependencies:
'@chakra-ui/react': 2.8.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(framer-motion@9.1.7(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
'@emotion/cache': 11.11.0
'@emotion/react': 11.11.1(@types/react@18.3.1)(react@18.3.1)
next: 14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
react: 18.3.1
'@chakra-ui/number-input@2.1.1(@chakra-ui/system@2.6.1(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@emotion/styled@11.11.0(@emotion/react@11.11.1(@types/react@18.3.1)(react@18.3.1))(@types/react@18.3.1)(react@18.3.1))(react@18.3.1))(react@18.3.1)':
dependencies:
'@chakra-ui/counter': 2.1.0(react@18.3.1)
@@ -11867,8 +11615,6 @@ snapshots:
'@jridgewell/resolve-uri': 3.1.2
'@jridgewell/sourcemap-codec': 1.5.0
'@js-joda/core@5.6.4': {}
'@js-sdsl/ordered-map@4.4.2': {}
'@jsdevtools/ono@7.1.3': {}
@@ -12599,8 +12345,6 @@ snapshots:
optionalDependencies:
react-dom: 18.3.1(react@18.3.1)
'@tediousjs/connection-string@0.5.0': {}
'@tokenizer/token@0.3.0': {}
'@trysound/sax@0.2.0': {}
@@ -12971,11 +12715,6 @@ snapshots:
'@types/prop-types': 15.7.12
csstype: 3.1.3
'@types/readable-stream@4.0.18':
dependencies:
'@types/node': 20.14.11
safe-buffer: 5.1.2
'@types/request-ip@0.0.37':
dependencies:
'@types/node': 20.14.11
@@ -13091,7 +12830,7 @@ snapshots:
dependencies:
'@typescript-eslint/types': 6.21.0
'@typescript-eslint/visitor-keys': 6.21.0
debug: 4.3.7
debug: 4.3.5
globby: 11.1.0
is-glob: 4.0.3
minimatch: 9.0.3
@@ -13557,7 +13296,7 @@ snapshots:
async-mutex@0.4.1:
dependencies:
tslib: 2.8.0
tslib: 2.6.3
async-mutex@0.5.0:
dependencies:
@@ -13594,7 +13333,7 @@ snapshots:
axios@1.7.7:
dependencies:
follow-redirects: 1.15.9(debug@4.3.7)
follow-redirects: 1.15.9
form-data: 4.0.1
proxy-from-env: 1.1.0
transitivePeerDependencies:
@@ -13714,13 +13453,6 @@ snapshots:
inherits: 2.0.4
readable-stream: 3.6.2
bl@6.0.19:
dependencies:
'@types/readable-stream': 4.0.18
buffer: 6.0.3
inherits: 2.0.4
readable-stream: 4.5.2
bluebird@3.4.7: {}
body-parser@1.20.2:
@@ -14556,8 +14288,6 @@ snapshots:
es-errors: 1.3.0
gopd: 1.0.1
define-lazy-prop@2.0.0: {}
define-properties@1.2.1:
dependencies:
define-data-property: 1.1.4
@@ -14913,7 +14643,7 @@ snapshots:
eslint: 8.56.0
eslint-import-resolver-node: 0.3.9
eslint-import-resolver-typescript: 3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0)
eslint-plugin-import: 2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0))(eslint@8.56.0)
eslint-plugin-import: 2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1)(eslint@8.56.0)
eslint-plugin-jsx-a11y: 6.9.0(eslint@8.56.0)
eslint-plugin-react: 7.34.4(eslint@8.56.0)
eslint-plugin-react-hooks: 4.6.2(eslint@8.56.0)
@@ -14933,11 +14663,11 @@ snapshots:
eslint-import-resolver-typescript@3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0):
dependencies:
debug: 4.3.7
debug: 4.3.5
enhanced-resolve: 5.17.0
eslint: 8.56.0
eslint-module-utils: 2.8.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0))(eslint@8.56.0)
eslint-plugin-import: 2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0))(eslint@8.56.0)
eslint-plugin-import: 2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1)(eslint@8.56.0)
fast-glob: 3.3.2
get-tsconfig: 4.7.5
is-core-module: 2.14.0
@@ -14959,7 +14689,7 @@ snapshots:
transitivePeerDependencies:
- supports-color
eslint-plugin-import@2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-node@0.3.9)(eslint-plugin-import@2.29.1(eslint@8.56.0))(eslint@8.56.0))(eslint@8.56.0):
eslint-plugin-import@2.29.1(@typescript-eslint/parser@6.21.0(eslint@8.56.0)(typescript@5.5.3))(eslint-import-resolver-typescript@3.6.1)(eslint@8.56.0):
dependencies:
array-includes: 3.1.8
array.prototype.findlastindex: 1.2.5
@@ -15407,6 +15137,12 @@ snapshots:
follow-redirects@1.15.6: {}
follow-redirects@1.15.9: {}
follow-redirects@1.15.9(debug@4.3.4):
optionalDependencies:
debug: 4.3.4
follow-redirects@1.15.9(debug@4.3.7):
optionalDependencies:
debug: 4.3.7
@@ -16014,8 +15750,6 @@ snapshots:
is-decimal@2.0.1: {}
is-docker@2.2.1: {}
is-extglob@2.1.1: {}
is-finalizationregistry@1.0.2:
@@ -16112,10 +15846,6 @@ snapshots:
is-word-character@1.0.4: {}
is-wsl@2.2.0:
dependencies:
is-docker: 2.2.1
isarray@1.0.0: {}
isarray@2.0.5: {}
@@ -16521,8 +16251,6 @@ snapshots:
js-cookie@3.0.5: {}
js-md4@0.3.2: {}
js-tokens@4.0.0: {}
js-tokens@9.0.0: {}
@@ -16621,22 +16349,11 @@ snapshots:
ecdsa-sig-formatter: 1.0.11
safe-buffer: 5.2.1
jwa@2.0.0:
dependencies:
buffer-equal-constant-time: 1.0.1
ecdsa-sig-formatter: 1.0.11
safe-buffer: 5.2.1
jws@3.2.2:
dependencies:
jwa: 1.4.1
safe-buffer: 5.2.1
jws@4.0.0:
dependencies:
jwa: 2.0.0
safe-buffer: 5.2.1
kareem@2.5.1: {}
katex@0.16.11:
@@ -17608,9 +17325,9 @@ snapshots:
dependencies:
async-mutex: 0.4.1
camelcase: 6.3.0
debug: 4.3.7
debug: 4.3.4
find-cache-dir: 3.3.2
follow-redirects: 1.15.9(debug@4.3.7)
follow-redirects: 1.15.9(debug@4.3.4)
https-proxy-agent: 7.0.5
mongodb: 5.9.2
new-find-package-json: 2.0.0
@@ -17701,17 +17418,6 @@ snapshots:
ms@2.1.3: {}
mssql@11.0.1:
dependencies:
'@tediousjs/connection-string': 0.5.0
commander: 11.0.0
debug: 4.3.7
rfdc: 1.4.1
tarn: 3.0.2
tedious: 18.6.1
transitivePeerDependencies:
- supports-color
multer@1.4.5-lts.1:
dependencies:
append-field: 1.0.0
@@ -17751,8 +17457,6 @@ snapshots:
napi-build-utils@1.0.2: {}
native-duplexpair@1.0.0: {}
natural-compare@1.4.0: {}
needle@3.3.1:
@@ -17782,6 +17486,18 @@ snapshots:
react: 18.3.1
react-i18next: 14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
next-i18next@15.3.0(i18next@23.11.5)(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8))(react-i18next@14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1):
dependencies:
'@babel/runtime': 7.24.8
'@types/hoist-non-react-statics': 3.3.5
core-js: 3.37.1
hoist-non-react-statics: 3.3.2
i18next: 23.11.5
i18next-fs-backend: 2.3.1
next: 14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
react: 18.3.1
react-i18next: 14.1.2(i18next@23.11.5)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
next@14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8):
dependencies:
'@next/env': 14.2.5
@@ -17808,10 +17524,36 @@ snapshots:
- '@babel/core'
- babel-plugin-macros
next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8):
dependencies:
'@next/env': 14.2.5
'@swc/helpers': 0.5.5
busboy: 1.6.0
caniuse-lite: 1.0.30001669
graceful-fs: 4.2.11
postcss: 8.4.31
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
styled-jsx: 5.1.1(react@18.3.1)
optionalDependencies:
'@next/swc-darwin-arm64': 14.2.5
'@next/swc-darwin-x64': 14.2.5
'@next/swc-linux-arm64-gnu': 14.2.5
'@next/swc-linux-arm64-musl': 14.2.5
'@next/swc-linux-x64-gnu': 14.2.5
'@next/swc-linux-x64-musl': 14.2.5
'@next/swc-win32-arm64-msvc': 14.2.5
'@next/swc-win32-ia32-msvc': 14.2.5
'@next/swc-win32-x64-msvc': 14.2.5
sass: 1.77.8
transitivePeerDependencies:
- '@babel/core'
- babel-plugin-macros
nextjs-cors@2.2.0(next@14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)):
dependencies:
cors: 2.8.5
next: 14.2.5(@babel/core@7.24.9)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
next: 14.2.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.77.8)
nextjs-node-loader@1.1.5(webpack@5.92.1):
dependencies:
@@ -17967,12 +17709,6 @@ snapshots:
dependencies:
mimic-fn: 4.0.0
open@8.4.2:
dependencies:
define-lazy-prop: 2.0.0
is-docker: 2.2.1
is-wsl: 2.2.0
openai@4.61.0(encoding@0.1.13):
dependencies:
'@types/node': 18.19.40
@@ -18519,7 +18255,7 @@ snapshots:
dependencies:
react: 18.3.1
react-style-singleton: 2.2.1(@types/react@18.3.1)(react@18.3.1)
tslib: 2.8.0
tslib: 2.6.3
optionalDependencies:
'@types/react': 18.3.1
@@ -18547,7 +18283,7 @@ snapshots:
get-nonce: 1.0.1
invariant: 2.2.4
react: 18.3.1
tslib: 2.8.0
tslib: 2.6.3
optionalDependencies:
'@types/react': 18.3.1
@@ -19132,8 +18868,6 @@ snapshots:
dependencies:
internal-slot: 1.0.7
stoppable@1.1.0: {}
streamsearch@1.1.0: {}
streamx@2.20.0:
@@ -19265,6 +18999,11 @@ snapshots:
'@babel/core': 7.24.9
babel-plugin-macros: 3.1.0
styled-jsx@5.1.1(react@18.3.1):
dependencies:
client-only: 0.0.1
react: 18.3.1
stylis@4.2.0: {}
stylis@4.3.2: {}
@@ -19363,23 +19102,6 @@ snapshots:
mkdirp: 1.0.4
yallist: 4.0.0
tarn@3.0.2: {}
tedious@18.6.1:
dependencies:
'@azure/core-auth': 1.9.0
'@azure/identity': 4.6.0
'@azure/keyvault-keys': 4.9.0
'@js-joda/core': 5.6.4
'@types/node': 20.14.11
bl: 6.0.19
iconv-lite: 0.6.3
js-md4: 0.3.2
native-duplexpair: 1.0.0
sprintf-js: 1.1.3
transitivePeerDependencies:
- supports-color
terser-webpack-plugin@5.3.10(webpack@5.92.1):
dependencies:
'@jridgewell/trace-mapping': 0.3.25
@@ -19888,7 +19610,7 @@ snapshots:
vite-node@1.6.0(@types/node@22.7.8)(sass@1.77.8)(terser@5.31.3):
dependencies:
cac: 6.7.14
debug: 4.3.7
debug: 4.3.5
pathe: 1.1.2
picocolors: 1.0.1
vite: 5.3.4(@types/node@22.7.8)(sass@1.77.8)(terser@5.31.3)

View File

@@ -1,8 +1,11 @@
### 常见问题
- [**Git 地址**,点击查看项目地址](https://github.com/labring/FastGPT)
- [点击查看官方文档](https://doc.tryfastgpt.ai/docs/)
- [点击查看商业版文档](https://doc.tryfastgpt.ai/docs/shopping_cart/intro/)
- [本地部署 FastGPT](https://doc.tryfastgpt.ai/docs/installation)
- [API 文档](https://doc.tryfastgpt.ai/docs/development/openapi?pre_pathname=%2Fdrive%2Fhome%2F)
- **反馈问卷**: 如果你遇到任何使用问题或有期望的功能,可以[填写该问卷](https://www.wjx.cn/vm/rLIw1uD.aspx#)
- **问题文档**: [先看文档,再提问](https://kjqvjse66l.feishu.cn/docx/HtrgdT0pkonP4kxGx8qcu6XDnGh)
- [点击查看商业版文档](https://doc.tryfastgpt.ai/docs/commercial)
- [计费规则](https://doc.tryfastgpt.ai/docs/pricing/)
**其他问题**

View File

@@ -1,13 +1,12 @@
### FastGPT V4.8.20 更新说明
1. 新增 - 使用记录导出和仪表盘。
2. 新增 - DeepSeek reasoner 模型支持输出思考过程
3. 新增 - markdown 语法扩展,支持音视频(代码块 audio 和 video)
4. 新增 - 飞书/语雀知识库
5. 新增 - 工作流知识库检索支持按知识库权限进行过滤
6. 新增 - 流程等待插件,可以等待 n 毫秒后继续执行流程
7. 新增 - 飞书机器人接入,支持配置私有化飞书地址
8. 新增 - 支持通过 JSON 配置直接创建应用
9. 新增 - 支持通过 CURL 脚本快速创建 HTTP 插件
10. 新增 - 支持部门架构权限模式。
2. 新增 - markdown 语法扩展,支持音视频(代码块 audio 和 video)
3. 新增 - 飞书/语雀知识库
4. 新增 - 工作流知识库检索支持按知识库权限进行过滤
5. 新增 - 流程等待插件,可以等待 n 毫秒后继续执行流程
6. 新增 - 飞书机器人接入,支持配置私有化飞书地址
7. 新增 - 支持通过 JSON 配置直接创建应用
8. 新增 - 支持通过 CURL 脚本快速创建 HTTP 插件
9. 新增 - 支持部门架构权限模式

View File

@@ -73,8 +73,6 @@ const Layout = ({ children }: { children: JSX.Element }) => {
const showUpdateNotification =
isUpdateNotification &&
feConfigs?.bind_notification_method &&
feConfigs?.bind_notification_method.length > 0 &&
!userInfo?.team.notificationAccount &&
!!userInfo?.team.permission.isOwner;

View File

@@ -72,7 +72,6 @@ const AIChatSettingsModal = ({
defaultValues: defaultData
});
const model = watch('model');
const reasoning = watch(NodeInputKeyEnum.aiChatReasoning);
const showResponseAnswerText = watch(NodeInputKeyEnum.aiChatIsResponseText) !== undefined;
const showVisionSwitch = watch(NodeInputKeyEnum.aiChatVision) !== undefined;
const showMaxHistoriesSlider = watch('maxHistories') !== undefined;
@@ -85,8 +84,6 @@ const AIChatSettingsModal = ({
return getWebLLMModel(model);
}, [model]);
const llmSupportVision = !!selectedModel?.vision;
const llmSupportTemperature = typeof selectedModel?.maxTemperature === 'number';
const llmSupportReasoning = !!selectedModel?.reasoning;
const tokenLimit = useMemo(() => {
return selectedModel?.maxResponse || 4096;
@@ -261,51 +258,36 @@ const AIChatSettingsModal = ({
/>
</Box>
</Flex>
{llmSupportTemperature && (
<Flex {...FlexItemStyles}>
<Box {...LabelStyles}>
<Flex alignItems={'center'}>
{t('app:temperature')}
<QuestionTip label={t('app:temperature_tip')} />
</Flex>
<Switch
isChecked={temperature !== undefined}
size={'sm'}
onChange={(e) => {
setValue('temperature', e.target.checked ? 0 : undefined);
}}
/>
</Box>
<Box flex={'1 0 0'}>
<InputSlider
min={0}
max={10}
step={1}
value={temperature}
isDisabled={temperature === undefined}
onChange={(e) => {
setValue(NodeInputKeyEnum.aiChatTemperature, e);
setRefresh(!refresh);
}}
/>
</Box>
</Flex>
)}
{llmSupportReasoning && (
<Flex {...FlexItemStyles} h={'25px'}>
<Box {...LabelStyles}>
<Flex alignItems={'center'}>{t('app:reasoning_response')}</Flex>
<Switch
isChecked={reasoning || false}
size={'sm'}
onChange={(e) => {
const value = e.target.checked;
setValue(NodeInputKeyEnum.aiChatReasoning, value);
}}
/>
</Box>
</Flex>
)}
<Flex {...FlexItemStyles}>
<Box {...LabelStyles}>
<Flex alignItems={'center'}>
{t('app:temperature')}
<QuestionTip label={t('app:temperature_tip')} />
</Flex>
<Switch
isChecked={temperature !== undefined}
size={'sm'}
onChange={(e) => {
setValue('temperature', e.target.checked ? 0 : undefined);
}}
/>
</Box>
<Box flex={'1 0 0'}>
<InputSlider
min={0}
max={10}
step={1}
value={temperature}
isDisabled={temperature === undefined}
onChange={(e) => {
setValue(NodeInputKeyEnum.aiChatTemperature, e);
setRefresh(!refresh);
}}
/>
</Box>
</Flex>
{showResponseAnswerText && (
<Flex {...FlexItemStyles} h={'25px'}>
<Box {...LabelStyles}>

View File

@@ -201,7 +201,6 @@ const ChatBox = ({
({
event,
text = '',
reasoningText,
status,
name,
tool,
@@ -226,25 +225,6 @@ const ChatBox = ({
status,
moduleName: name
};
} else if (event === SseResponseEventEnum.answer && reasoningText) {
if (lastValue.type === ChatItemValueTypeEnum.reasoning && lastValue.reasoning) {
lastValue.reasoning.content += reasoningText;
return {
...item,
value: item.value.slice(0, -1).concat(lastValue)
};
} else {
const val: AIChatItemValueItemType = {
type: ChatItemValueTypeEnum.reasoning,
reasoning: {
content: reasoningText
}
};
return {
...item,
value: item.value.concat(val)
};
}
} else if (
(event === SseResponseEventEnum.answer || event === SseResponseEventEnum.fastAnswer) &&
text
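
The deleted branch above shows the accumulation pattern used for streamed values: extend the last value when the incoming delta continues it, otherwise append a new one. A reduced, text-only sketch of that fold (the real code handles several value types):

type TextValue = { type: 'text'; text: { content: string } };

function appendDelta(values: TextValue[], delta: string): TextValue[] {
  const last = values[values.length - 1];
  if (last?.type === 'text') {
    // continue the current block, replacing the last element immutably
    return values
      .slice(0, -1)
      .concat({ type: 'text', text: { content: last.text.content + delta } });
  }
  // start a new block
  return values.concat({ type: 'text', text: { content: delta } });
}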

View File

@@ -6,7 +6,6 @@ import { WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/t
export type generatingMessageProps = {
event: SseResponseEventEnum;
text?: string;
reasoningText?: string;
name?: string;
status?: 'running' | 'finish';
tool?: ToolModuleResponseItemType;

View File

@@ -8,7 +8,6 @@ import {
Box,
Button,
Flex,
HStack,
Textarea
} from '@chakra-ui/react';
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
@@ -140,58 +139,6 @@ ${toolResponse}`}
},
(prevProps, nextProps) => isEqual(prevProps, nextProps)
);
const RenderResoningContent = React.memo(function RenderResoningContent({
content,
isChatting,
isLastResponseValue
}: {
content: string;
isChatting: boolean;
isLastResponseValue: boolean;
}) {
const { t } = useTranslation();
const showAnimation = isChatting && isLastResponseValue;
return (
<Accordion allowToggle defaultIndex={isLastResponseValue ? 0 : undefined}>
<AccordionItem borderTop={'none'} borderBottom={'none'}>
<AccordionButton
w={'auto'}
bg={'white'}
borderRadius={'md'}
borderWidth={'1px'}
borderColor={'myGray.200'}
boxShadow={'1'}
pl={3}
pr={2.5}
py={1}
_hover={{
bg: 'auto'
}}
>
<HStack mr={2} spacing={1}>
<MyIcon name={'core/chat/think'} w={'0.85rem'} />
<Box fontSize={'sm'}>{t('chat:ai_reasoning')}</Box>
</HStack>
{showAnimation && <MyIcon name={'common/loading'} w={'0.85rem'} />}
<AccordionIcon color={'myGray.600'} ml={5} />
</AccordionButton>
<AccordionPanel
py={0}
pr={0}
pl={3}
mt={2}
borderLeft={'2px solid'}
borderColor={'myGray.300'}
color={'myGray.500'}
>
<Markdown source={content} showAnimation={showAnimation} />
</AccordionPanel>
</AccordionItem>
</Accordion>
);
});
const RenderUserSelectInteractive = React.memo(function RenderInteractive({
interactive
}: {
@@ -343,14 +290,6 @@ const AIResponseBox = ({ value, isLastResponseValue, isChatting }: props) => {
return (
<RenderText showAnimation={isChatting && isLastResponseValue} text={value.text.content} />
);
if (value.type === ChatItemValueTypeEnum.reasoning && value.reasoning)
return (
<RenderResoningContent
isChatting={isChatting}
isLastResponseValue={isLastResponseValue}
content={value.reasoning.content}
/>
);
if (value.type === ChatItemValueTypeEnum.tool && value.tools)
return <RenderTool showAnimation={isChatting} tools={value.tools} />;
if (value.type === ChatItemValueTypeEnum.interactive && value.interactive) {

View File

@@ -58,7 +58,6 @@ import CopyBox from '@fastgpt/web/components/common/String/CopyBox';
import MyIcon from '@fastgpt/web/components/common/Icon';
import AIModelSelector from '@/components/Select/AIModelSelector';
import { useRefresh } from '../../../../../../packages/web/hooks/useRefresh';
import { Prompt_CQJson, Prompt_ExtractJson } from '@fastgpt/global/core/ai/prompt/agent';
const MyModal = dynamic(() => import('@fastgpt/web/components/common/MyModal'));
@@ -804,10 +803,6 @@ const ModelEditModal = ({
<JsonEditor
value={JSON.stringify(getValues('defaultConfig'), null, 2)}
onChange={(e) => {
if (!e) {
setValue('defaultConfig', {});
return;
}
try {
setValue('defaultConfig', JSON.parse(e));
} catch (error) {
@@ -922,19 +917,6 @@ const ModelEditModal = ({
</Flex>
</Td>
</Tr>
<Tr>
<Td>
<HStack spacing={1}>
<Box>{t('account:model.reasoning')}</Box>
<QuestionTip label={t('account:model.reasoning_tip')} />
</HStack>
</Td>
<Td textAlign={'right'}>
<Flex justifyContent={'flex-end'}>
<Switch {...register('reasoning')} />
</Flex>
</Td>
</Tr>
{feConfigs?.isPlus && (
<Tr>
<Td>
@@ -997,9 +979,7 @@ const ModelEditModal = ({
<Td>
<HStack spacing={1}>
<Box>{t('account:model.custom_cq_prompt')}</Box>
<QuestionTip
label={t('account:model.custom_cq_prompt_tip', { prompt: Prompt_CQJson })}
/>
<QuestionTip label={t('account:model.custom_cq_prompt_tip')} />
</HStack>
</Td>
<Td textAlign={'right'}>
@@ -1010,11 +990,7 @@ const ModelEditModal = ({
<Td>
<HStack spacing={1}>
<Box>{t('account:model.custom_extract_prompt')}</Box>
<QuestionTip
label={t('account:model.custom_extract_prompt_tip', {
prompt: Prompt_ExtractJson
})}
/>
<QuestionTip label={t('account:model.custom_extract_prompt_tip')} />
</HStack>
</Td>
<Td textAlign={'right'}>
@@ -1033,10 +1009,6 @@ const ModelEditModal = ({
value={JSON.stringify(getValues('defaultConfig'), null, 2)}
resize
onChange={(e) => {
if (!e) {
setValue('defaultConfig', {});
return;
}
try {
setValue('defaultConfig', JSON.parse(e));
} catch (error) {
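
Behavioral note on the two JsonEditor changes in this file: the deleted if (!e) branch was the reset path for a cleared editor. Without it an empty string reaches JSON.parse, the throw is swallowed by the catch, and the previous defaultConfig silently survives. The pre-commit handler, reassembled from the removed lines:

onChange={(e) => {
  if (!e) {
    setValue('defaultConfig', {}); // editor cleared -> reset to an empty object
    return;
  }
  try {
    setValue('defaultConfig', JSON.parse(e));
  } catch (error) {
    // ignore keystrokes that are not yet valid JSON
  }
}}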

View File

@@ -14,7 +14,7 @@ import Avatar from '@fastgpt/web/components/common/Avatar';
import Tag from '@fastgpt/web/components/common/Tag';
import { useTranslation } from 'next-i18next';
import React, { useMemo, useRef, useState } from 'react';
import React, { useMemo, useState } from 'react';
import { useRequest2 } from '@fastgpt/web/hooks/useRequest';
import { useContextSelector } from 'use-context-selector';
import { TeamContext } from '../context';
@@ -50,8 +50,6 @@ function GroupEditModal({ onClose, editGroupId }: { onClose: () => void; editGro
const refetchMembers = useContextSelector(TeamContext, (v) => v.refetchMembers);
const MemberScrollData = useContextSelector(TeamContext, (v) => v.MemberScrollData);
const [hoveredMemberId, setHoveredMemberId] = useState<string>();
const selectedMembersRef = useRef<HTMLDivElement>(null);
const [members, setMembers] = useState(group?.members || []);
const [searchKey, setSearchKey] = useState('');
@@ -157,7 +155,7 @@ function GroupEditModal({ onClose, editGroupId }: { onClose: () => void; editGro
setSearchKey(e.target.value);
}}
/>
<MemberScrollData mt={3} flexGrow="1" overflow={'auto'} maxH={'400px'}>
<MemberScrollData mt={3} flex={'1 0 0'} h={0}>
{filtered.map((member) => {
return (
<HStack
@@ -187,7 +185,7 @@ function GroupEditModal({ onClose, editGroupId }: { onClose: () => void; editGro
   </Flex>
   <Flex borderLeft="1px" borderColor="myGray.200" flexDirection="column" p="4" h={'100%'}>
     <Box mt={2}>{t('common:chosen') + ': ' + members.length}</Box>
-    <MemberScrollData ScrollContainerRef={selectedMembersRef} mt={3} flex={'1 0 0'} h={0}>
+    <MemberScrollData mt={3} flex={'1 0 0'} h={0}>
      {members.map((member) => {
        return (
          <HStack

View File

@@ -169,8 +169,8 @@ function MemberTable({ Tabs }: { Tabs: React.ReactNode }) {
   </Flex>
   <Box flex={'1 0 0'} overflow={'auto'}>
-    <MemberScrollData>
-      <TableContainer overflow={'unset'} fontSize={'sm'}>
+    <TableContainer overflow={'unset'} fontSize={'sm'}>
+      <MemberScrollData>
         <Table overflow={'unset'}>
           <Thead>
             <Tr bgColor={'white !important'}>
@@ -246,9 +246,9 @@ function MemberTable({ Tabs }: { Tabs: React.ReactNode }) {
           ))}
         </Tbody>
       </Table>
-      <ConfirmRemoveMemberModal />
-    </TableContainer>
-  </MemberScrollData>
+    </MemberScrollData>
+    <ConfirmRemoveMemberModal />
+  </TableContainer>
   </Box>
   <ConfirmLeaveTeamModal />

View File

@@ -121,34 +121,36 @@ function OrgMemberManageModal({
     setSearchKey(e.target.value);
   }}
 />
-<MemberScrollData mt={3} flexGrow="1" overflow={'auto'} maxH={'400px'}>
-  {filterMembers.map((member) => {
-    return (
-      <HStack
-        py="2"
-        px={3}
-        borderRadius={'md'}
-        alignItems="center"
-        key={member.tmbId}
-        cursor={'pointer'}
-        _hover={{
-          bg: 'myGray.50',
-          ...(!isSelected(member.tmbId) ? { svg: { color: 'myGray.50' } } : {})
-        }}
-        _notLast={{ mb: 2 }}
-        onClick={() => handleToggleSelect(member.tmbId)}
-      >
-        <Checkbox
-          isChecked={!!isSelected(member.tmbId)}
-          icon={<CheckboxIcon name={'common/check'} />}
-          pointerEvents="none"
-        />
-        <Avatar src={member.avatar} w="1.5rem" borderRadius={'50%'} />
-        <Box>{member.memberName}</Box>
-      </HStack>
-    );
-  })}
-</MemberScrollData>
+<Flex flexDirection="column" mt={3} flexGrow="1" overflow={'auto'} maxH={'400px'}>
+  <MemberScrollData>
+    {filterMembers.map((member) => {
+      return (
+        <HStack
+          py="2"
+          px={3}
+          borderRadius={'md'}
+          alignItems="center"
+          key={member.tmbId}
+          cursor={'pointer'}
+          _hover={{
+            bg: 'myGray.50',
+            ...(!isSelected(member.tmbId) ? { svg: { color: 'myGray.50' } } : {})
+          }}
+          _notLast={{ mb: 2 }}
+          onClick={() => handleToggleSelect(member.tmbId)}
+        >
+          <Checkbox
+            isChecked={!!isSelected(member.tmbId)}
+            icon={<CheckboxIcon name={'common/check'} />}
+            pointerEvents="none"
+          />
+          <Avatar src={member.avatar} w="1.5rem" borderRadius={'50%'} />
+          <Box>{member.memberName}</Box>
+        </HStack>
+      );
+    })}
+  </MemberScrollData>
+</Flex>
 </Flex>
 <Flex borderLeft="1px" borderColor="myGray.200" flexDirection="column" p="4" h={'100%'}>
   <Box mt={2}>{`${t('common:chosen')}:${selectedMembers.length}`}</Box>

View File

@@ -138,16 +138,9 @@ const EditForm = ({
   model: appForm.aiSettings.model,
   temperature: appForm.aiSettings.temperature,
   maxToken: appForm.aiSettings.maxToken,
-  maxHistories: appForm.aiSettings.maxHistories,
-  aiChatReasoning: appForm.aiSettings.aiChatReasoning ?? true
+  maxHistories: appForm.aiSettings.maxHistories
 }}
-onChange={({
-  model,
-  temperature,
-  maxToken,
-  maxHistories,
-  aiChatReasoning = false
-}) => {
+onChange={({ model, temperature, maxToken, maxHistories }: SettingAIDataType) => {
   setAppForm((state) => ({
     ...state,
     aiSettings: {
@@ -155,8 +148,7 @@ const EditForm = ({
       model,
       temperature,
       maxToken,
-      maxHistories: maxHistories ?? 6,
-      aiChatReasoning
+      maxHistories: maxHistories ?? 6
     }
   }));
 }}

View File

@@ -38,9 +38,7 @@ const SelectAiModelRender = ({ item, inputs = [], nodeId }: RenderInputProps) =>
     (input) => input.key === NodeInputKeyEnum.aiChatIsResponseText
   )?.value,
   aiChatVision:
-    inputs.find((input) => input.key === NodeInputKeyEnum.aiChatVision)?.value ?? true,
-  aiChatReasoning:
-    inputs.find((input) => input.key === NodeInputKeyEnum.aiChatReasoning)?.value ?? true
+    inputs.find((input) => input.key === NodeInputKeyEnum.aiChatVision)?.value ?? true
 }),
 [inputs]
 );

View File

@@ -66,11 +66,9 @@ const testLLMModel = async (model: LLMModelItemType) => {
   },
   {
     ...(model.requestUrl ? { path: model.requestUrl } : {}),
-    headers: model.requestAuth
-      ? {
-          Authorization: `Bearer ${model.requestAuth}`
-        }
-      : undefined
+    headers: {
+      ...(model.requestAuth ? { Authorization: `Bearer ${model.requestAuth}` } : {})
+    }
   }
 );
@@ -100,14 +98,12 @@ const testTTSModel = async (model: TTSModelType) => {
     response_format: 'mp3',
     speed: 1
   },
-  model.requestUrl
+  model.requestUrl && model.requestAuth
     ? {
         path: model.requestUrl,
-        headers: model.requestAuth
-          ? {
-              Authorization: `Bearer ${model.requestAuth}`
-            }
-          : undefined
+        headers: {
+          Authorization: `Bearer ${model.requestAuth}`
+        }
       }
     : {}
 );

View File

@@ -6,7 +6,7 @@ export const getChatModelNameListByModules = (nodes: StoreNodeItemType[]): strin
 const modelList = nodes
   .map((item) => {
     const model = item.inputs.find((input) => input.key === NodeInputKeyEnum.aiModel)?.value;
-    return model ? getLLMModel(model)?.name : '';
+    return getLLMModel(model)?.name || '';
   })
   .filter(Boolean);

View File

@@ -186,12 +186,6 @@ export const streamFetch = ({
       text: item
     });
   }
-  const reasoningText = parseJson.choices?.[0]?.delta?.reasoning_content || '';
-  onMessage({
-    event,
-    reasoningText
-  });
 } else if (event === SseResponseEventEnum.fastAnswer) {
   const text = parseJson.choices?.[0]?.delta?.content || '';
   pushDataToQueue({

View File

@@ -1,7 +1,7 @@
 import { parseCurl } from '@fastgpt/global/common/string/http';
 import { AppTypeEnum } from '@fastgpt/global/core/app/constants';
 import { AppSchema } from '@fastgpt/global/core/app/type';
-import { NodeInputKeyEnum, WorkflowIOValueTypeEnum } from '@fastgpt/global/core/workflow/constants';
+import { WorkflowIOValueTypeEnum } from '@fastgpt/global/core/workflow/constants';
 import {
   FlowNodeInputTypeEnum,
   FlowNodeOutputTypeEnum,
@@ -150,7 +150,7 @@ export const emptyTemplates: Record<
   key: 'temperature',
   renderTypeList: [FlowNodeInputTypeEnum.hidden],
   label: '',
-  value: undefined,
+  value: 0,
   valueType: WorkflowIOValueTypeEnum.number,
   min: 0,
   max: 10,
@@ -160,7 +160,7 @@ export const emptyTemplates: Record<
   key: 'maxToken',
   renderTypeList: [FlowNodeInputTypeEnum.hidden],
   label: '',
-  value: undefined,
+  value: 2000,
   valueType: WorkflowIOValueTypeEnum.number,
   min: 100,
   max: 4000,
@@ -221,13 +221,6 @@ export const emptyTemplates: Record<
       debugLabel: i18nT('common:core.module.Dataset quote.label'),
       description: '',
       valueType: WorkflowIOValueTypeEnum.datasetQuote
-    },
-    {
-      key: NodeInputKeyEnum.aiChatReasoning,
-      renderTypeList: [FlowNodeInputTypeEnum.hidden],
-      label: '',
-      valueType: WorkflowIOValueTypeEnum.boolean,
-      value: true
     }
   ],
   outputs: [

View File

@@ -190,13 +190,6 @@ export function form2AppWorkflow(
       label: '',
       valueType: WorkflowIOValueTypeEnum.boolean,
       value: true
-    },
-    {
-      key: NodeInputKeyEnum.aiChatReasoning,
-      renderTypeList: [FlowNodeInputTypeEnum.hidden],
-      label: '',
-      valueType: WorkflowIOValueTypeEnum.boolean,
-      value: formData.aiSettings.aiChatReasoning
     }
   ],
   outputs: AiChatModule.outputs