Compare commits

..

4 Commits

Author SHA1 Message Date
Archer
0ed99d8c9a Check debug (#4384)
* feat: Added support for interactive nodes in the debugging interface (#4339)

* feat: add VSCode launch configuration and enhance debug API handler

* feat: refactor debug API handler to streamline workflow processing and enhance interactive chat features

* feat: enhance debug API handler with structured input forms and improved query handling

* feat: enhance debug API handler to support optional query and histories parameters

* feat: simplify query and histories initialization in debug API handler

* feat: add realmode parameter to workflow dispatch and update interactive handling

* feat: add optional query parameter to PostWorkflowDebugProps and remove realmode from ModuleDispatchProps

* feat: add history parameter to PostWorkflowDebugProps and update related components

* feat: remove realmode

* feat: simplify handler parameter destructuring in debug.ts

* feat: remove unused interactive prop from WholeResponseContent component

* feat: refactor onNextNodeDebug to use parameter object for better readability

* feat: Merge selections and next actions to remove unused state management

* feat: add NodeDebugResponse component to enhance debugging

* feat: Simplify the import statements in InteractiveComponents.tsx

* feat: Update the handler function to use default parameters to simplify the code

* feat: Add optional workflowInteractiveResponse field to PostWorkflowDebugResponse type

* feat: Add the workflowInteractiveResponse field in the debugging handler to enhance response capabilities

* feat: Added workflowInteractiveResponse field in FlowNodeItemType to enhance responsiveness

* feat: Refactor NodeDebugResponse to utilize workflowInteractiveResponse for improved interactivity

* feat: Extend UserSelectInteractive and UserInputInteractive types to inherit from InteractiveBasicType

* feat: Refactor NodeDebugResponse to streamline interactive handling and improve code clarity

* feat: refactor interactive debug logic and create a shared hook to simplify user-select and user-input handling

* fix: type error

* feat: refactor AIResponseBox component to simplify user interaction logic and introduce shared form components

* feat: clean up AIResponseBox and form component code, removing redundant comments and unused imports

* fix: type error

* feat: refactor AIResponseBox component to simplify type definitions and improve code structure

* refactor: change the FormItem interface to a type alias to improve code structure

* refactor: change the NodeDebugResponseProps interface to a type alias to improve code structure

* refactor: remove the unnecessary entry-node check to simplify debug handling

* feat: move the debug interactive components

* refactor: make the InteractiveBasicType properties optional to simplify the data structure

* refactor: optimize type definitions

* refactor: remove unused ChatItemType and UserChatItemValueItemType imports

* refactor: change interface definitions to type aliases to simplify code structure

* refactor: update type definitions, using type aliases to simplify the code

* refactor: use type-only imports and refactor the AIResponseBox component

* refactor: extract description-box and form-item label components to simplify code structure

* refactor: remove extra blank lines

* refactor: remove extra blank lines and comments

* refactor: remove extra blank lines to simplify the AIResponseBox component

* refactor: move FormComponents into InteractiveComponents to simplify code structure

* refactor: remove extra blank lines to simplify the NodeDebugResponse component

* refactor: update import statements to use the type keyword for type imports

* refactor: enable the verbatimModuleSyntax option in tsconfig.json

* Revert "refactor: enable the verbatimModuleSyntax option in tsconfig.json"

This reverts commit 2b335a9938.

* revert: rendertool

* refactor: Remove unused imports and functions to simplify code

* perf: debug interactive

---------

Co-authored-by: Theresa <63280168+sd0ric4@users.noreply.github.com>
2025-03-28 17:09:08 +08:00
Archer
2d3ae7f944 doc (#4381)
* doc

* doc
2025-03-28 13:52:08 +08:00
Archer
565a966d19 Python Sandbox (#4380)
* Python3 Sandbox (#3944)

* update python box (#4251)

* update python box

* Adjust the height of the NodeCode border.

* update python sandbox and add test systemcall bash

* update sandbox

* add VERSION_RELEASE (#4376)

* save empty docx

* fix pythonbox log error

* fix: js template

---------

Co-authored-by: dogfar <37035781+dogfar@users.noreply.github.com>
Co-authored-by: gggaaallleee <91131304+gggaaallleee@users.noreply.github.com>
Co-authored-by: gggaaallleee <1293587368@qq.com>
2025-03-28 13:45:09 +08:00
Shixian Sheng
8323c2d27e Fixed several links (#4377)
* Update bge-rerank.md

* Update bge-rerank.md

* Update chatglm2.md

* Update README.md
2025-03-28 10:59:12 +08:00
46 changed files with 1597 additions and 609 deletions

.vscode/launch.json vendored Normal file
View File

@@ -0,0 +1,39 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Next.js: debug server-side",
"type": "node-terminal",
"request": "launch",
"command": "pnpm run dev",
"cwd": "${workspaceFolder}/projects/app"
},
{
"name": "Next.js: debug client-side",
"type": "chrome",
"request": "launch",
"url": "http://localhost:3000"
},
{
"name": "Next.js: debug client-side (Edge)",
"type": "msedge",
"request": "launch",
"url": "http://localhost:3000"
},
{
"name": "Next.js: debug full stack",
"type": "node-terminal",
"request": "launch",
"command": "pnpm run dev",
"cwd": "${workspaceFolder}/projects/app",
"skipFiles": ["<node_internals>/**"],
"serverReadyAction": {
"action": "debugWithEdge",
"killOnServerStop": true,
"pattern": "- Local:.+(https?://.+)",
"uriFormat": "%s",
"webRoot": "${workspaceFolder}/projects/app"
}
}
]
}

View File

@@ -31,9 +31,9 @@ weight: 920
3 个模型代码分别为:
1. [https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-v2-m3](https://github.com/labring/FastGPT/tree/main/plugins/rerank-bge/bge-reranker-v2-m3)
1. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-v2-m3](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-v2-m3)
### 3. 安装依赖

View File

@@ -46,7 +46,7 @@ ChatGLM2-6B 是开源中英双语对话模型 ChatGLM-6B 的第二代版本,
### 源码部署
1. 根据上面的环境配置配置好环境,具体教程自行 GPT
2. 下载 [python 文件](https://github.com/labring/FastGPT/blob/main/files/models/ChatGLM2/openai_api.py)
2. 下载 [python 文件](https://github.com/labring/FastGPT/blob/main/plugins/model/llm-ChatGLM2/openai_api.py)
3. 在命令行输入命令 `pip install -r requirements.txt`
4. 打开你需要启动的 py 文件,在代码的 `verify_token` 方法中配置 token这里的 token 只是加一层验证,防止接口被人盗用;
5. 执行命令 `python openai_api.py --model_name 16`。这里的数字根据上面的配置进行选择。

View File

@@ -8,20 +8,29 @@ weight: 798
---
## 更新指南
### 配置参数变更
### 1. 做好数据库备份
修改`config.json`文件中`systemEnv.pgHNSWEfSearch`参数名,改成`hnswEfSearch`
商业版用户直接在后台`系统配置-基础配置`中进行变更。
### SSO 迁移
### 2. SSO 迁移
使用了 SSO 或成员同步的商业版用户,并且是对接`钉钉``企微`的,需要迁移已有的 SSO 相关配置:
参考:[SSO & 外部成员同步](/docs/guide/admin/sso.md)中的配置进行`sso-service`的部署和配置。
参考:[SSO & 外部成员同步](/docs/guide/admin/sso)中的配置进行`sso-service`的部署和配置。
1. 先将原商业版后台中的相关配置项复制备份出来(以企微为例,将 AppId, Secret 等复制出来)再进行镜像升级。
2. 参考上述文档,部署 SSO 服务,配置相关的环境变量
3. 如果原先使用企微组织架构同步的用户,在商业版后台切换团队模式为“同步模式”
3. 如果原先使用企微组织架构同步的用户,升级完镜像后,需要在商业版后台切换团队模式为“同步模式”
### 3. 配置参数变更
修改`config.json`文件中`systemEnv.pgHNSWEfSearch`参数名,改成`hnswEfSearch`
商业版用户升级镜像后,直接在后台`系统配置-基础配置`中进行变更。
### 4. 更新镜像
- 更新 FastGPT 镜像 tag: v4.9.2
- 更新 FastGPT 商业版镜像 tag: v4.9.2
- Sandbox 镜像,可以不更新
- AIProxy 镜像修改为: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.1.4
## 重要更新
@@ -35,6 +44,8 @@ weight: 798
4. 集合同步时,支持同步修改标题。
5. 团队成员管理重构,抽离主流 IM SSO企微、飞书、钉钉并支持通过自定义 SSO 接入 FastGPT。同时完善与外部系统的成员同步。
6. 支持 `oceanbase` 向量数据库。填写环境变量`OCEANBASE_URL`即可。
7. 基于 mistral-ocr 的 PDF 解析示例。
8. 基于 miner-u 的 PDF 解析示例。
## ⚙️ 优化

View File

@@ -10,7 +10,6 @@ import { FlowNodeOutputItemType, ReferenceValueType } from '../type/io';
import { ChatItemType, NodeOutputItemType } from '../../../core/chat/type';
import { ChatItemValueTypeEnum, ChatRoleEnum } from '../../../core/chat/constants';
import { replaceVariable, valToStr } from '../../../common/string/tools';
import { ChatCompletionChunk } from 'openai/resources';
export const getMaxHistoryLimitFromNodes = (nodes: StoreNodeItemType[]): number => {
let limit = 10;

View File

@@ -5,10 +5,36 @@ import { FlowNodeInputTypeEnum } from 'core/workflow/node/constant';
import { WorkflowIOValueTypeEnum } from 'core/workflow/constants';
import type { ChatCompletionMessageParam } from '../../../../ai/type';
type InteractiveBasicType = {
entryNodeIds: string[];
memoryEdges: RuntimeEdgeItemType[];
nodeOutputs: NodeOutputItemType[];
toolParams?: {
entryNodeIds: string[]; // 记录工具中,交互节点的 Id而不是起始工作流的入口
memoryMessages: ChatCompletionMessageParam[]; // 这轮工具中,产生的新的 messages
toolCallId: string; // 记录对应 tool 的id用于后续交互节点可以替换掉 tool 的 response
};
};
type InteractiveNodeType = {
entryNodeIds?: string[];
memoryEdges?: RuntimeEdgeItemType[];
nodeOutputs?: NodeOutputItemType[];
};
export type UserSelectOptionItemType = {
key: string;
value: string;
};
type UserSelectInteractive = InteractiveNodeType & {
type: 'userSelect';
params: {
description: string;
userSelectOptions: UserSelectOptionItemType[];
userSelectedVal?: string;
};
};
export type UserInputFormItemType = {
type: FlowNodeInputTypeEnum;
@@ -28,29 +54,7 @@ export type UserInputFormItemType = {
// select
list?: { label: string; value: string }[];
};
type InteractiveBasicType = {
entryNodeIds: string[];
memoryEdges: RuntimeEdgeItemType[];
nodeOutputs: NodeOutputItemType[];
toolParams?: {
entryNodeIds: string[]; // 记录工具中,交互节点的 Id而不是起始工作流的入口
memoryMessages: ChatCompletionMessageParam[]; // 这轮工具中,产生的新的 messages
toolCallId: string; // 记录对应 tool 的id用于后续交互节点可以替换掉 tool 的 response
};
};
type UserSelectInteractive = {
type: 'userSelect';
params: {
description: string;
userSelectOptions: UserSelectOptionItemType[];
userSelectedVal?: string;
};
};
type UserInputInteractive = {
type UserInputInteractive = InteractiveNodeType & {
type: 'userInput';
params: {
description: string;
@@ -58,6 +62,5 @@ type UserInputInteractive = {
submitted?: boolean;
};
};
export type InteractiveNodeResponseType = UserSelectInteractive | UserInputInteractive;
export type WorkflowInteractiveResponseType = InteractiveBasicType & InteractiveNodeResponseType;
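
The net effect of the interactive-type refactor above is that both interactive variants now extend a shared `InteractiveNodeType` and keep `type` as a discriminant. A minimal sketch of how consumers can narrow the union (surrounding types are stubbed here and params are trimmed; only the shape follows the diff):

```typescript
// Simplified stand-ins for the types in the diff above.
type InteractiveNodeType = {
  entryNodeIds?: string[];
};

type UserSelectInteractive = InteractiveNodeType & {
  type: 'userSelect';
  params: { description: string; userSelectedVal?: string };
};

type UserInputInteractive = InteractiveNodeType & {
  type: 'userInput';
  params: { description: string; submitted?: boolean };
};

type InteractiveNodeResponseType = UserSelectInteractive | UserInputInteractive;

// The literal `type` field lets callers narrow without casts:
function describeInteractive(res: InteractiveNodeResponseType): string {
  if (res.type === 'userSelect') {
    return res.params.userSelectedVal ?? res.params.description;
  }
  return res.params.submitted ? 'submitted' : res.params.description;
}
```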

View File

@@ -1,7 +1,23 @@
export const JS_TEMPLATE = `function main({data1, data2}){
return {
result: data1,
data2
}
return {
result: data1,
data2
}
}`;
export const PY_TEMPLATE = `def main(data1, data2):
return {
"result": data1,
"data2": data2
}
`;
export enum SandboxCodeTypeEnum {
js = 'js',
py = 'py'
}
export const SNADBOX_CODE_TEMPLATE = {
[SandboxCodeTypeEnum.js]: JS_TEMPLATE,
[SandboxCodeTypeEnum.py]: PY_TEMPLATE
};
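
With the template map above, switching the sandbox node's language reduces to a lookup. A hedged sketch (the `templateFor` helper is illustrative and not part of the diff; the `SNADBOX_` spelling is reproduced as it appears in the source):

```typescript
const JS_TEMPLATE = `function main({data1, data2}){
    return {
        result: data1,
        data2
    }
}`;

const PY_TEMPLATE = `def main(data1, data2):
    return {
        "result": data1,
        "data2": data2
    }
`;

enum SandboxCodeTypeEnum {
  js = 'js',
  py = 'py'
}

const SNADBOX_CODE_TEMPLATE: Record<SandboxCodeTypeEnum, string> = {
  [SandboxCodeTypeEnum.js]: JS_TEMPLATE,
  [SandboxCodeTypeEnum.py]: PY_TEMPLATE
};

// Illustrative helper: unknown code types fall back to the JS template.
function templateFor(codeType: string): string {
  return SNADBOX_CODE_TEMPLATE[codeType as SandboxCodeTypeEnum] ?? JS_TEMPLATE;
}
```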

View File

@@ -68,12 +68,14 @@ export const CodeNode: FlowNodeTemplateType = {
key: NodeInputKeyEnum.codeType,
renderTypeList: [FlowNodeInputTypeEnum.hidden],
label: '',
valueType: WorkflowIOValueTypeEnum.string,
value: 'js'
},
{
key: NodeInputKeyEnum.code,
renderTypeList: [FlowNodeInputTypeEnum.custom],
label: '',
valueType: WorkflowIOValueTypeEnum.string,
value: JS_TEMPLATE
}
],

View File

@@ -23,6 +23,7 @@ import { NextApiResponse } from 'next';
import { AppDetailType, AppSchema } from '../../app/type';
import { ParentIdType } from 'common/parentFolder/type';
import { AppTypeEnum } from 'core/app/constants';
import { WorkflowInteractiveResponseType } from '../template/system/interactive/type';
export type FlowNodeCommonType = {
parentNodeId?: string;
@@ -120,6 +121,7 @@ export type FlowNodeItemType = FlowNodeTemplateType & {
showResult?: boolean; // show and hide result modal
response?: ChatHistoryItemResType;
isExpired?: boolean;
workflowInteractiveResponse?: WorkflowInteractiveResponseType;
};
isFolded?: boolean;
};

View File

@@ -4,9 +4,10 @@ import { DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/ty
import axios from 'axios';
import { formatHttpError } from '../utils';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { SandboxCodeTypeEnum } from '@fastgpt/global/core/workflow/template/system/sandbox/constants';
type RunCodeType = ModuleDispatchProps<{
[NodeInputKeyEnum.codeType]: 'js';
[NodeInputKeyEnum.codeType]: string;
[NodeInputKeyEnum.code]: string;
[NodeInputKeyEnum.addInputParam]: Record<string, any>;
}>;
@@ -16,6 +17,14 @@ type RunCodeResponse = DispatchNodeResultType<{
[key: string]: any;
}>;
function getURL(codeType: string): string {
if (codeType == SandboxCodeTypeEnum.py) {
return `${process.env.SANDBOX_URL}/sandbox/python`;
} else {
return `${process.env.SANDBOX_URL}/sandbox/js`;
}
}
export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeResponse> => {
const {
params: { codeType, code, [NodeInputKeyEnum.addInputParam]: customVariables }
@@ -27,7 +36,7 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
};
}
const sandBoxRequestUrl = `${process.env.SANDBOX_URL}/sandbox/js`;
const sandBoxRequestUrl = getURL(codeType);
try {
const { data: runResult } = await axios.post<{
success: boolean;
@@ -40,6 +49,8 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
variables: customVariables
});
console.log(runResult);
if (runResult.success) {
return {
[NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,
@@ -52,7 +63,7 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
...runResult.data.codeReturn
};
} else {
throw new Error('Run code failed');
return Promise.reject('Run code failed');
}
} catch (error) {
return {

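The `getURL` helper introduced above routes sandbox requests by code type. A self-contained sketch (environment handling is simplified here; the fallback to `/sandbox/js` preserves behaviour for nodes saved before the `codeType` input existed):

```typescript
// SANDBOX_URL comes from the environment in the real handler; a default is
// supplied here only so the sketch runs standalone.
const SANDBOX_URL = process.env.SANDBOX_URL ?? 'http://localhost:3000';

enum SandboxCodeTypeEnum {
  js = 'js',
  py = 'py'
}

function getURL(codeType: string): string {
  if (codeType === SandboxCodeTypeEnum.py) {
    return `${SANDBOX_URL}/sandbox/python`;
  }
  // Anything else (including legacy nodes without a codeType) uses the JS sandbox.
  return `${SANDBOX_URL}/sandbox/js`;
}
```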
View File

@@ -44,14 +44,14 @@ import {
textAdaptGptResponse,
replaceEditorVariable
} from '@fastgpt/global/core/workflow/runtime/utils';
import { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
import { dispatchRunTools } from './agent/runTool/index';
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
import { DispatchFlowResponse } from './type';
import type { DispatchFlowResponse } from './type';
import { dispatchStopToolCall } from './agent/runTool/stopTool';
import { dispatchLafRequest } from './tools/runLaf';
import { dispatchIfElse } from './tools/runIfElse';
import { RuntimeEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
import type { RuntimeEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
import { getReferenceVariableValue } from '@fastgpt/global/core/workflow/runtime/utils';
import { dispatchSystemConfig } from './init/systemConfig';
import { dispatchUpdateVariable } from './tools/runUpdateVar';
@@ -62,7 +62,7 @@ import { dispatchTextEditor } from './tools/textEditor';
import { dispatchCustomFeedback } from './tools/customFeedback';
import { dispatchReadFiles } from './tools/readFiles';
import { dispatchUserSelect } from './interactive/userSelect';
import {
import type {
WorkflowInteractiveResponseType,
InteractiveNodeResponseType
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
@@ -451,6 +451,11 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
const interactiveResponse = nodeRunResult.result?.[DispatchNodeResponseKeyEnum.interactive];
if (interactiveResponse) {
pushStore(nodeRunResult.node, nodeRunResult.result);
if (props.mode === 'debug') {
debugNextStepRunNodes = debugNextStepRunNodes.concat([nodeRunResult.node]);
}
nodeInteractiveResponse = {
entryNodeIds: [nodeRunResult.node.nodeId],
interactiveResponse

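The debug-mode branch added above keeps an interactive node in the next-step queue so the debugger re-enters it once the user answers. A reduced sketch of that bookkeeping (types and names are simplified from the diff):

```typescript
type RuntimeNode = { nodeId: string };

function queueInteractiveNode(
  debugNextStepRunNodes: RuntimeNode[],
  node: RuntimeNode,
  mode: 'debug' | 'run',
  hasInteractiveResponse: boolean
): RuntimeNode[] {
  // Only debug runs re-queue the node; normal runs hand the interactive
  // response back to the chat layer instead.
  if (hasInteractiveResponse && mode === 'debug') {
    return debugNextStepRunNodes.concat([node]);
  }
  return debugNextStepRunNodes;
}
```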
View File

@@ -1,11 +1,11 @@
import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
import type {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
import {
import type {
UserInputFormItemType,
UserInputInteractive
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
@@ -32,7 +32,6 @@ export const dispatchFormInput = async (props: Props): Promise<FormInputResponse
query
} = props;
const { isEntry } = node;
const interactive = getLastInteractiveValue(histories);
// Interactive node is not the entry node, return interactive result

View File

@@ -1,5 +1,5 @@
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
import type {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
@@ -30,7 +30,6 @@ export const dispatchUserSelect = async (props: Props): Promise<UserSelectRespon
query
} = props;
const { nodeId, isEntry } = node;
const interactive = getLastInteractiveValue(histories);
// Interactive node is not the entry node, return interactive result

View File

@@ -106,6 +106,7 @@ export const getHistories = (history?: ChatItemType[] | number, histories: ChatI
/* value type format */
export const valueTypeFormat = (value: any, type?: WorkflowIOValueTypeEnum) => {
if (value === undefined) return;
if (!type || type === WorkflowIOValueTypeEnum.any) return value;
if (type === 'string') {
if (typeof value !== 'object') return String(value);
@@ -117,7 +118,7 @@ export const valueTypeFormat = (value: any, type?: WorkflowIOValueTypeEnum) => {
return Boolean(value);
}
try {
if (WorkflowIOValueTypeEnum.arrayString && typeof value === 'string') {
if (type === WorkflowIOValueTypeEnum.arrayString && typeof value === 'string') {
return [value];
}
if (

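The one-line change above fixes a truthiness bug: `WorkflowIOValueTypeEnum.arrayString` is a non-empty string, so the old condition held for every string value regardless of the declared type. A minimal reproduction (the enum is trimmed to the members needed here):

```typescript
enum WorkflowIOValueTypeEnum {
  string = 'string',
  arrayString = 'arrayString'
}

function valueTypeFormat(value: any, type?: WorkflowIOValueTypeEnum) {
  // Before the fix the condition read
  //   `WorkflowIOValueTypeEnum.arrayString && typeof value === 'string'`
  // which is true for any string, so plain strings were wrapped into arrays
  // even when the declared type was `string`.
  if (type === WorkflowIOValueTypeEnum.arrayString && typeof value === 'string') {
    return [value];
  }
  return value;
}
```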
View File

@@ -13,6 +13,7 @@ export const readDocsFile = async ({ buffer }: ReadRawTextByBuffer): Promise<Rea
buffer
},
{
ignoreEmptyParagraphs: false,
convertImage: images.imgElement(async (image) => {
const imageBase64 = await image.readAsBase64String();
const uuid = crypto.randomUUID();

View File

@@ -1,9 +1,9 @@
import React, { useCallback, useRef, useState } from 'react';
import React, { useCallback, useRef, useState, useEffect } from 'react';
import Editor, { Monaco, loader } from '@monaco-editor/react';
import { Box, BoxProps } from '@chakra-ui/react';
import MyIcon from '../../Icon';
import { getWebReqUrl } from '../../../../common/system/utils';
import usePythonCompletion from './usePythonCompletion';
loader.config({
paths: { vs: getWebReqUrl('/js/monaco-editor.0.45.0/vs') }
});
@@ -21,6 +21,7 @@ export type Props = Omit<BoxProps, 'resize' | 'onChange'> & {
onOpenModal?: () => void;
variables?: EditorVariablePickerType[];
defaultHeight?: number;
language?: string;
};
const options = {
@@ -53,11 +54,14 @@ const MyEditor = ({
variables = [],
defaultHeight = 200,
onOpenModal,
language = 'typescript',
...props
}: Props) => {
const [height, setHeight] = useState(defaultHeight);
const initialY = useRef(0);
const registerPythonCompletion = usePythonCompletion();
const handleMouseDown = useCallback((e: React.MouseEvent) => {
initialY.current = e.clientY;
@@ -76,35 +80,47 @@ const MyEditor = ({
document.addEventListener('mouseup', handleMouseUp);
}, []);
const beforeMount = useCallback((monaco: Monaco) => {
monaco.languages.json.jsonDefaults.setDiagnosticsOptions({
validate: false,
allowComments: false,
schemas: [
{
uri: 'http://myserver/foo-schema.json', // 一个假设的 URI
fileMatch: ['*'], // 匹配所有文件
schema: {} // 空的 Schema
}
]
});
const editorRef = useRef<any>(null);
const monacoRef = useRef<Monaco | null>(null);
monaco.editor.defineTheme('JSONEditorTheme', {
base: 'vs', // 可以基于已有的主题进行定制
inherit: true, // 继承基础主题的设置
rules: [{ token: 'variable', foreground: '2B5FD9' }],
colors: {
'editor.background': '#ffffff00',
'editorLineNumber.foreground': '#aaa',
'editorOverviewRuler.border': '#ffffff00',
'editor.lineHighlightBackground': '#F7F8FA',
'scrollbarSlider.background': '#E8EAEC',
'editorIndentGuide.activeBackground': '#ddd',
'editorIndentGuide.background': '#eee'
}
});
const handleEditorDidMount = useCallback((editor: any, monaco: Monaco) => {
editorRef.current = editor;
monacoRef.current = monaco;
}, []);
const beforeMount = useCallback(
(monaco: Monaco) => {
monaco.languages.json.jsonDefaults.setDiagnosticsOptions({
validate: false,
allowComments: false,
schemas: [
{
uri: 'http://myserver/foo-schema.json', // 一个假设的 URI
fileMatch: ['*'], // 匹配所有文件
schema: {} // 空的 Schema
}
]
});
monaco.editor.defineTheme('JSONEditorTheme', {
base: 'vs', // 可以基于已有的主题进行定制
inherit: true, // 继承基础主题的设置
rules: [{ token: 'variable', foreground: '2B5FD9' }],
colors: {
'editor.background': '#ffffff00',
'editorLineNumber.foreground': '#aaa',
'editorOverviewRuler.border': '#ffffff00',
'editor.lineHighlightBackground': '#F7F8FA',
'scrollbarSlider.background': '#E8EAEC',
'editorIndentGuide.activeBackground': '#ddd',
'editorIndentGuide.background': '#eee'
}
});
registerPythonCompletion(monaco);
},
[registerPythonCompletion]
);
return (
<Box
borderWidth={'1px'}
@@ -118,7 +134,7 @@ const MyEditor = ({
>
<Editor
height={'100%'}
defaultLanguage="typescript"
language={language}
options={options as any}
theme="JSONEditorTheme"
beforeMount={beforeMount}
@@ -127,6 +143,7 @@ const MyEditor = ({
onChange={(e) => {
onChange?.(e || '');
}}
onMount={handleEditorDidMount}
/>
{resize && (
<Box

View File

@@ -4,15 +4,31 @@ import { Button, ModalBody, ModalFooter, useDisclosure } from '@chakra-ui/react'
import MyModal from '../../MyModal';
import { useTranslation } from 'next-i18next';
type Props = Omit<EditorProps, 'resize'> & {};
type Props = Omit<EditorProps, 'resize'> & { language?: string };
function getLanguage(language: string | undefined): string {
let fullName: string;
switch (language) {
case 'py':
fullName = 'python';
break;
case 'js':
fullName = 'typescript';
break;
default:
fullName = `typescript`;
break;
}
return fullName;
}
const CodeEditor = (props: Props) => {
const { t } = useTranslation();
const { isOpen, onOpen, onClose } = useDisclosure();
const { language, ...otherProps } = props;
const fullName = getLanguage(language);
return (
<>
<MyEditor {...props} resize onOpenModal={onOpen} />
<MyEditor {...props} resize onOpenModal={onOpen} language={fullName} />
<MyModal
isOpen={isOpen}
onClose={onClose}
@@ -23,7 +39,7 @@ const CodeEditor = (props: Props) => {
isCentered
>
<ModalBody flex={'1 0 0'} overflow={'auto'}>
<MyEditor {...props} bg={'myGray.50'} height={'100%'} />
<MyEditor {...props} bg={'myGray.50'} height={'100%'} language={fullName} />
</ModalBody>
<ModalFooter>
<Button mr={2} onClick={onClose} px={6}>

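The `getLanguage` mapping above translates the node's short code-type ids into Monaco language ids before mounting the editor; restated as a compact sketch:

```typescript
// 'js' maps to 'typescript' (Monaco's TypeScript mode also handles plain JS);
// unknown values fall back to 'typescript' as well, matching the diff.
function getLanguage(language: string | undefined): string {
  switch (language) {
    case 'py':
      return 'python';
    case 'js':
    default:
      return 'typescript';
  }
}
```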
View File

@@ -0,0 +1,83 @@
import { Monaco } from '@monaco-editor/react';
import { useCallback } from 'react';
let monacoInstance: Monaco | null = null;
const usePythonCompletion = () => {
return useCallback((monaco: Monaco) => {
if (monacoInstance === monaco) return;
monacoInstance = monaco;
monaco.languages.registerCompletionItemProvider('python', {
provideCompletionItems: (model, position) => {
const wordInfo = model.getWordUntilPosition(position);
const currentWordPrefix = wordInfo.word;
const lineContent = model.getLineContent(position.lineNumber);
const range = {
startLineNumber: position.lineNumber,
endLineNumber: position.lineNumber,
startColumn: wordInfo.startColumn,
endColumn: wordInfo.endColumn
};
const baseSuggestions = [
{
label: 'len',
kind: monaco.languages.CompletionItemKind.Function,
insertText: 'len()',
documentation: 'get length of object',
range,
sortText: 'a'
}
];
const filtered = baseSuggestions.filter((item) =>
item.label.toLowerCase().startsWith(currentWordPrefix.toLowerCase())
);
if (lineContent.startsWith('import')) {
const importLength = 'import'.length;
const afterImport = lineContent.slice(importLength);
const spaceMatch = afterImport.match(/^\s*/);
const spaceLength = spaceMatch ? spaceMatch[0].length : 0;
const startReplaceCol = importLength + spaceLength + 1;
const currentCol = position.column;
const replaceRange = new monaco.Range(
position.lineNumber,
startReplaceCol,
position.lineNumber,
currentCol
);
const needsSpace = spaceLength === 0;
return {
suggestions: [
{
label: 'numpy',
kind: monaco.languages.CompletionItemKind.Module,
insertText: `${needsSpace ? ' ' : ''}numpy as np`,
documentation: 'numerical computing library',
range: replaceRange,
sortText: 'a'
},
{
label: 'pandas',
kind: monaco.languages.CompletionItemKind.Module,
insertText: `${needsSpace ? ' ' : ''}pandas as pd`,
documentation: 'data analysis library',
range: replaceRange
}
]
};
}
return { suggestions: filtered };
},
triggerCharacters: ['.', '_']
});
}, []);
};
export default usePythonCompletion;
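
The completion provider above filters its base suggestions by the word prefix under the cursor; that filter is the testable core, extracted here as a pure helper (the helper name is mine, not the project's):

```typescript
type Suggestion = { label: string };

// Case-insensitive prefix match, as in provideCompletionItems above.
function filterByPrefix<T extends Suggestion>(items: T[], prefix: string): T[] {
  return items.filter((item) =>
    item.label.toLowerCase().startsWith(prefix.toLowerCase())
  );
}
```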

View File

@@ -1,4 +1,5 @@
{
"Hunyuan": "Tencent Hunyuan",
"api_key": "API key",
"azure": "Azure",
"base_url": "Base url",

View File

@@ -20,6 +20,7 @@
"classification_result": "Classification Result",
"code.Reset template": "Reset Template",
"code.Reset template confirm": "Confirm reset code template? This will reset all inputs and outputs to template values. Please save your current code.",
"code.Switch language confirm": "Switching the language will reset the code, will it continue?",
"code_execution": "Code Sandbox",
"collection_metadata_filter": "Collection Metadata Filter",
"complete_extraction_result": "Complete Extraction Result",
@@ -153,6 +154,7 @@
"select_another_application_to_call": "You can choose another application to call",
"special_array_format": "Special array format, returns an empty array when the search result is empty.",
"start_with": "Starts With",
"support_code_language": "Support import list: pandasnumpy",
"target_fields_description": "A target field consists of 'description' and 'key'. Multiple target fields can be extracted.",
"template.ai_chat": "AI Chat",
"template.ai_chat_intro": "AI Large Model Chat",

View File

@@ -1,4 +1,5 @@
{
"Hunyuan": "腾讯混元",
"api_key": "API 密钥",
"azure": "微软 Azure",
"base_url": "代理地址",

View File

@@ -20,6 +20,7 @@
"classification_result": "分类结果",
"code.Reset template": "还原模板",
"code.Reset template confirm": "确认还原代码模板?将会重置所有输入和输出至模板值,请注意保存当前代码。",
"code.Switch language confirm": "切换语言将重置代码,是否继续?",
"code_execution": "代码运行",
"collection_metadata_filter": "集合元数据过滤",
"complete_extraction_result": "完整提取结果",
@@ -153,6 +154,7 @@
"select_another_application_to_call": "可以选择一个其他应用进行调用",
"special_array_format": "特殊数组格式,搜索结果为空时,返回空数组。",
"start_with": "开始为",
"support_code_language": "支持import列表pandasnumpy",
"target_fields_description": "由 '描述' 和 'key' 组成一个目标字段,可提取多个目标字段",
"template.ai_chat": "AI 对话",
"template.ai_chat_intro": "AI 大模型对话",

View File

@@ -1,4 +1,5 @@
{
"Hunyuan": "騰訊混元",
"api_key": "API 密鑰",
"azure": "Azure",
"base_url": "代理地址",

View File

@@ -20,6 +20,7 @@
"classification_result": "分類結果",
"code.Reset template": "重設範本",
"code.Reset template confirm": "確定要重設程式碼範本嗎?這將會把所有輸入和輸出重設為範本值。請儲存您目前的程式碼。",
"code.Switch language confirm": "切換語言將重置代碼,是否繼續?",
"code_execution": "程式碼執行",
"collection_metadata_filter": "資料集詮釋資料篩選器",
"complete_extraction_result": "完整擷取結果",
@@ -153,6 +154,7 @@
"select_another_application_to_call": "可以選擇另一個應用程式來呼叫",
"special_array_format": "特殊陣列格式,搜尋結果為空時,回傳空陣列。",
"start_with": "開頭為",
"support_code_language": "支持import列表pandasnumpy",
"target_fields_description": "由「描述」和「鍵值」組成一個目標欄位,可以擷取多個目標欄位",
"template.ai_chat": "AI 對話",
"template.ai_chat_intro": "AI 大型語言模型對話",

View File

@@ -22,9 +22,9 @@
3 个模型代码分别为:
1. [https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-v2-m3](https://github.com/labring/FastGPT/tree/main/python/bge-rerank/bge-reranker-v2-m3)
1. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-base](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-base)
2. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-large](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-large)
3. [https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-v2-m3](https://github.com/labring/FastGPT/tree/main/plugins/model/rerank-bge/bge-reranker-v2-m3)
### 3. 安装依赖

View File

@@ -0,0 +1,80 @@
name: spider
version: "2.2"
services:
searxng:
container_name: searxng
image: docker.io/searxng/searxng:latest
platform: linux/amd64
restart: unless-stopped
networks:
- spider_net
ports:
- "8080:8080"
volumes:
- ./searxng:/etc/searxng:rw
environment:
- SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
- UWSGI_WORKERS=4 # UWSGI 工作进程数
- UWSGI_THREADS=4 # UWSGI 线程数
cap_drop:
- ALL
mongodb:
container_name: mongodb
image: mongo:4.4
restart: unless-stopped
networks:
- spider_net
ports:
- "27017:27017"
volumes:
- mongo-data:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: root # MongoDB 根用户名
MONGO_INITDB_ROOT_PASSWORD: example # MongoDB 根用户密码
nodeapp:
container_name: main
platform: linux/amd64
#build:
# context: .
image: gggaaallleee/webcrawler-test-new:latest
ports:
- "3000:3000"
networks:
- spider_net
depends_on:
- mongodb
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
volumes:
- /dev/shm:/dev/shm
environment:
- ACCESS_TOKEN=webcrawler # 访问令牌
- DETECT_WEBSITE=zhuanlan.zhihu.com # 无法处理跳过的网站
- STRATEGIES=[{"waitUntil":"networkidle0","timeout":5000},{"waitUntil":"networkidle2","timeout":10000},{"waitUntil":"load","timeout":15000}] # 页面加载策略
- PORT=3000
- MAX_CONCURRENCY=10 # 最大并发数
- NODE_ENV=development
- ENGINE_BAIDUURL=https://www.baidu.com/s # 百度搜索引擎 URL
- ENGINE_SEARCHXNGURL=http://searxng:8080/search # Searxng 搜索引擎 URL
- MONGODB_URI=mongodb://root:example@mongodb:27017 # MongoDB 连接 URI
- BLACKLIST=[".gov.cn",".edu.cn"] # 受保护域名
- STD_TTL=3600 # 标准 TTL
- EXPIRE_AFTER_SECONDS=9000 # 过期时间(秒)
#- VALIDATE_PROXY=[{"ip":"","port":},{"ip":"","port":}] #代理池
deploy:
resources:
limits:
memory: 4G
cpus: '2.0'
networks:
spider_net:
volumes:
mongo-data:

View File

@@ -0,0 +1,6 @@
# This configuration file updates the default configuration file
# See https://github.com/searxng/searxng/blob/master/searx/limiter.toml
[botdetection.ip_limit]
# activate link_token method in the ip_limit method
link_token = true

View File

@@ -0,0 +1,122 @@
general:
  debug: false
  instance_name: "searxng"
  privacypolicy_url: false
  donation_url: false
  contact_url: false
  enable_metrics: true
  open_metrics: ''

brand:
  new_issue_url: https://github.com/searxng/searxng/issues/new
  docs_url: https://docs.searxng.org/
  public_instances: https://searx.space
  wiki_url: https://github.com/searxng/searxng/wiki
  issue_url: https://github.com/searxng/searxng/issues

search:
  safe_search: 0
  autocomplete: ""
  autocomplete_min: 4
  default_lang: "auto"
  ban_time_on_fail: 5
  max_ban_time_on_fail: 120
  formats:
    - html
    - json

server:
  port: 8080
  bind_address: "0.0.0.0"
  base_url: false
  limiter: false
  public_instance: false
  secret_key: "example"
  image_proxy: false
  http_protocol_version: "1.0"
  method: "POST"
  default_http_headers:
    X-Content-Type-Options: nosniff
    X-Download-Options: noopen
    X-Robots-Tag: noindex, nofollow
    Referrer-Policy: no-referrer

redis:
  url: false

ui:
  static_path: ""
  static_use_hash: false
  templates_path: ""
  default_theme: simple
  default_locale: ""
  query_in_title: false
  infinite_scroll: false
  center_alignment: false
  theme_args:
    simple_style: auto
  # Enable the cn category
  enabled_categories: [cn, en, general, images]
  # Or define the category display order
  categories_order: [cn, en, general, images]

outgoing:
  request_timeout: 30.0
  max_request_timeout: 40.0
  pool_connections: 200
  pool_maxsize: 50
  enable_http2: false
  retries: 5

engines:
  - name: bing
    engine: bing
    disabled: false
    categories: cn
  #- name: bilibili
  #  engine: bilibili
  #  shortcut: bil
  #  disabled: false
  #  categories: cn
  - name: baidu
    engine: json_engine
    paging: true
    first_page_num: 0
    search_url: https://www.baidu.com/s?tn=json&wd={query}&pn={pageno}&rn=50
    url_query: url
    title_query: title
    content_query: abs
    categories: cn
  - name: 360search
    engine: 360search
    disabled: false
    categories: cn
  - name: sogou
    disabled: false
    categories: cn
  - name: google
    disabled: false
    categories: en
  - name: yahoo
    disabled: false
    categories: en
  - name: duckduckgo
    disabled: false
    categories: en

doi_resolvers:
  oadoi.org: 'https://oadoi.org/'
  doi.org: 'https://doi.org/'
  doai.io: 'https://dissem.in/'
  sci-hub.se: 'https://sci-hub.se/'
  sci-hub.st: 'https://sci-hub.st/'
  sci-hub.ru: 'https://sci-hub.ru/'

default_doi_resolver: 'oadoi.org'
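FastGPT queries SearXNG over its JSON API, which is why `formats` must include `json`. A minimal TypeScript sketch of building such a request URL; the base URL, helper name, and category value here are assumptions for illustration, not part of the config above:

```typescript
// Build a SearXNG JSON API request URL.
// Hypothetical helper: the base URL and category are assumptions that
// match the settings above (formats must include "json" for this to work).
function buildSearxngUrl(
  baseUrl: string,
  query: string,
  options: { category?: string; pageno?: number } = {}
): string {
  const url = new URL('/search', baseUrl);
  url.searchParams.set('q', query);
  url.searchParams.set('format', 'json');
  if (options.category) url.searchParams.set('categories', options.category);
  if (options.pageno) url.searchParams.set('pageno', String(options.pageno));
  return url.toString();
}

// Example: query the "cn" category defined in enabled_categories above.
const reqUrl = buildSearxngUrl('http://localhost:8080', 'fastgpt', {
  category: 'cn',
  pageno: 1
});
```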


@@ -2,5 +2,15 @@
This directory contains the FastGPT main project.
- app: the frontend project, for displaying and using FastGPT
- sandbox: the sandbox project, for testing and development.
- app: the FastGPT core application
- sandbox: the sandbox project, used to run the code-execution step in workflows. It requires a python:3.11 environment. List any extra packages in requirements.txt, and note that some packages (such as pandas) may need additional system libraries (e.g. libffi).
- If a newly added Python package runs into timeouts or permission blocks (and you have ruled out a syntax error on your side), enter the Docker container and run:
```shell
docker exec -it <container-name> /bin/bash
chmod +x testSystemCall.sh
bash ./testSystemCall.sh
```
Then replace the SYSTEM_CALLS array in the sandbox's src/constants.py with the new array.


@@ -8,43 +8,80 @@ import {
Box,
Button,
Flex,
HStack,
Textarea
HStack
} from '@chakra-ui/react';
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
import {
import type {
AIChatItemValueItemType,
ToolModuleResponseItemType,
UserChatItemValueItemType
} from '@fastgpt/global/core/chat/type';
import React, { useCallback, useEffect } from 'react';
import React, { useCallback, useMemo } from 'react';
import MyIcon from '@fastgpt/web/components/common/Icon';
import Avatar from '@fastgpt/web/components/common/Avatar';
import {
import type {
InteractiveBasicType,
UserInputInteractive,
UserSelectInteractive
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { isEqual } from 'lodash';
import FormLabel from '@fastgpt/web/components/common/MyBox/FormLabel';
import QuestionTip from '@fastgpt/web/components/common/MyTooltip/QuestionTip';
import { FlowNodeInputTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
import { useTranslation } from 'next-i18next';
import { Controller, useForm } from 'react-hook-form';
import MySelect from '@fastgpt/web/components/common/MySelect';
import MyTextarea from '@/components/common/Textarea/MyTextarea';
import MyNumberInput from '@fastgpt/web/components/common/Input/NumberInput';
import { SendPromptFnType } from '../ChatContainer/ChatBox/type';
import { eventBus, EventNameEnum } from '@/web/common/utils/eventbus';
import { SelectOptionsComponent, FormInputComponent } from './Interactive/InteractiveComponents';
type props = {
value: UserChatItemValueItemType | AIChatItemValueItemType;
isLastResponseValue: boolean;
isChatting: boolean;
const accordionButtonStyle = {
w: 'auto',
bg: 'white',
borderRadius: 'md',
borderWidth: '1px',
borderColor: 'myGray.200',
boxShadow: '1',
pl: 3,
pr: 2.5,
_hover: {
bg: 'auto'
}
};
const onSendPrompt: SendPromptFnType = (e) => eventBus.emit(EventNameEnum.sendQuestion, e);
const RenderResoningContent = React.memo(function RenderResoningContent({
content,
isChatting,
isLastResponseValue
}: {
content: string;
isChatting: boolean;
isLastResponseValue: boolean;
}) {
const { t } = useTranslation();
const showAnimation = isChatting && isLastResponseValue;
return (
<Accordion allowToggle defaultIndex={isLastResponseValue ? 0 : undefined}>
<AccordionItem borderTop={'none'} borderBottom={'none'}>
<AccordionButton {...accordionButtonStyle} py={1}>
<HStack mr={2} spacing={1}>
<MyIcon name={'core/chat/think'} w={'0.85rem'} />
<Box fontSize={'sm'}>{t('chat:ai_reasoning')}</Box>
</HStack>
{showAnimation && <MyIcon name={'common/loading'} w={'0.85rem'} />}
<AccordionIcon color={'myGray.600'} ml={5} />
</AccordionButton>
<AccordionPanel
py={0}
pr={0}
pl={3}
mt={2}
borderLeft={'2px solid'}
borderColor={'myGray.300'}
color={'myGray.500'}
>
<Markdown source={content} showAnimation={showAnimation} />
</AccordionPanel>
</AccordionItem>
</Accordion>
);
});
const RenderText = React.memo(function RenderText({
showAnimation,
text
@@ -58,6 +95,7 @@ const RenderText = React.memo(function RenderText({
return <Markdown source={source} showAnimation={showAnimation} />;
});
const RenderTool = React.memo(
function RenderTool({
showAnimation,
@@ -69,37 +107,20 @@ const RenderTool = React.memo(
return (
<Box>
{tools.map((tool) => {
const toolParams = (() => {
const formatJson = (string: string) => {
try {
return JSON.stringify(JSON.parse(tool.params), null, 2);
return JSON.stringify(JSON.parse(string), null, 2);
} catch (error) {
return tool.params;
return string;
}
})();
const toolResponse = (() => {
try {
return JSON.stringify(JSON.parse(tool.response), null, 2);
} catch (error) {
return tool.response;
}
})();
};
const toolParams = formatJson(tool.params);
const toolResponse = formatJson(tool.response);
return (
<Accordion key={tool.id} allowToggle _notLast={{ mb: 2 }}>
<AccordionItem borderTop={'none'} borderBottom={'none'}>
<AccordionButton
w={'auto'}
bg={'white'}
borderRadius={'md'}
borderWidth={'1px'}
borderColor={'myGray.200'}
boxShadow={'1'}
pl={3}
pr={2.5}
_hover={{
bg: 'auto'
}}
>
<AccordionButton {...accordionButtonStyle}>
<Avatar src={tool.toolAvatar} w={'1.25rem'} h={'1.25rem'} borderRadius={'sm'} />
<Box mx={2} fontSize={'sm'} color={'myGray.900'}>
{tool.toolName}
@@ -140,99 +161,24 @@ ${toolResponse}`}
},
(prevProps, nextProps) => isEqual(prevProps, nextProps)
);
const RenderResoningContent = React.memo(function RenderResoningContent({
content,
isChatting,
isLastResponseValue
}: {
content: string;
isChatting: boolean;
isLastResponseValue: boolean;
}) {
const { t } = useTranslation();
const showAnimation = isChatting && isLastResponseValue;
return (
<Accordion allowToggle defaultIndex={isLastResponseValue ? 0 : undefined}>
<AccordionItem borderTop={'none'} borderBottom={'none'}>
<AccordionButton
w={'auto'}
bg={'white'}
borderRadius={'md'}
borderWidth={'1px'}
borderColor={'myGray.200'}
boxShadow={'1'}
pl={3}
pr={2.5}
py={1}
_hover={{
bg: 'auto'
}}
>
<HStack mr={2} spacing={1}>
<MyIcon name={'core/chat/think'} w={'0.85rem'} />
<Box fontSize={'sm'}>{t('chat:ai_reasoning')}</Box>
</HStack>
{showAnimation && <MyIcon name={'common/loading'} w={'0.85rem'} />}
<AccordionIcon color={'myGray.600'} ml={5} />
</AccordionButton>
<AccordionPanel
py={0}
pr={0}
pl={3}
mt={2}
borderLeft={'2px solid'}
borderColor={'myGray.300'}
color={'myGray.500'}
>
<Markdown source={content} showAnimation={showAnimation} />
</AccordionPanel>
</AccordionItem>
</Accordion>
);
});
const onSendPrompt = (e: { text: string; isInteractivePrompt: boolean }) =>
eventBus.emit(EventNameEnum.sendQuestion, e);
const RenderUserSelectInteractive = React.memo(function RenderInteractive({
interactive
}: {
interactive: InteractiveBasicType & UserSelectInteractive;
}) {
return (
<>
{interactive?.params?.description && <Markdown source={interactive.params.description} />}
<Flex flexDirection={'column'} gap={2} w={'250px'}>
{interactive.params.userSelectOptions?.map((option) => {
const selected = option.value === interactive?.params?.userSelectedVal;
return (
<Button
key={option.key}
variant={'whitePrimary'}
whiteSpace={'pre-wrap'}
isDisabled={interactive?.params?.userSelectedVal !== undefined}
{...(selected
? {
_disabled: {
cursor: 'default',
borderColor: 'primary.300',
bg: 'primary.50 !important',
color: 'primary.600'
}
}
: {})}
onClick={() => {
onSendPrompt({
text: option.value,
isInteractivePrompt: true
});
}}
>
{option.value}
</Button>
);
})}
</Flex>
</>
<SelectOptionsComponent
interactiveParams={interactive.params}
onSelect={(value) => {
onSendPrompt({
text: value,
isInteractivePrompt: true
});
}}
/>
);
});
const RenderUserFormInteractive = React.memo(function RenderFormInput({
@@ -241,110 +187,52 @@ const RenderUserFormInteractive = React.memo(function RenderFormInput({
interactive: InteractiveBasicType & UserInputInteractive;
}) {
const { t } = useTranslation();
const { register, setValue, handleSubmit: handleSubmitChat, control, reset } = useForm();
const onSubmit = useCallback((data: any) => {
const defaultValues = useMemo(() => {
if (interactive.type === 'userInput') {
return interactive.params.inputForm?.reduce((acc: Record<string, any>, item) => {
acc[item.label] = !!item.value ? item.value : item.defaultValue;
return acc;
}, {});
}
return {};
}, [interactive]);
const handleFormSubmit = useCallback((data: Record<string, any>) => {
onSendPrompt({
text: JSON.stringify(data),
isInteractivePrompt: true
});
}, []);
useEffect(() => {
if (interactive.type === 'userInput') {
const defaultValues = interactive.params.inputForm?.reduce(
(acc: Record<string, any>, item) => {
acc[item.label] = !!item.value ? item.value : item.defaultValue;
return acc;
},
{}
);
reset(defaultValues);
}
}, []);
return (
<Flex flexDirection={'column'} gap={2} w={'250px'}>
{interactive.params.description && <Markdown source={interactive.params.description} />}
{interactive.params.inputForm?.map((input) => (
<Box key={input.label}>
<FormLabel mb={1} required={input.required} whiteSpace={'pre-wrap'}>
{input.label}
{input.description && <QuestionTip ml={1} label={input.description} />}
</FormLabel>
{input.type === FlowNodeInputTypeEnum.input && (
<MyTextarea
isDisabled={interactive.params.submitted}
{...register(input.label, {
required: input.required
})}
bg={'white'}
autoHeight
minH={40}
maxH={100}
/>
)}
{input.type === FlowNodeInputTypeEnum.textarea && (
<Textarea
isDisabled={interactive.params.submitted}
bg={'white'}
{...register(input.label, {
required: input.required
})}
rows={5}
maxLength={input.maxLength || 4000}
/>
)}
{input.type === FlowNodeInputTypeEnum.numberInput && (
<MyNumberInput
min={input.min}
max={input.max}
defaultValue={input.defaultValue}
isDisabled={interactive.params.submitted}
bg={'white'}
register={register}
name={input.label}
isRequired={input.required}
/>
)}
{input.type === FlowNodeInputTypeEnum.select && (
<Controller
key={input.label}
control={control}
name={input.label}
rules={{ required: input.required }}
render={({ field: { ref, value } }) => {
if (!input.list) return <></>;
return (
<MySelect
ref={ref}
width={'100%'}
list={input.list}
value={value}
isDisabled={interactive.params.submitted}
onChange={(e) => setValue(input.label, e)}
/>
);
}}
/>
)}
</Box>
))}
{!interactive.params.submitted && (
<Flex w={'full'} justifyContent={'end'}>
<Button onClick={handleSubmitChat(onSubmit)}>{t('common:Submit')}</Button>
</Flex>
)}
<FormInputComponent
interactiveParams={interactive.params}
defaultValues={defaultValues}
SubmitButton={({ onSubmit }) => (
<Button onClick={() => onSubmit(handleFormSubmit)()}>{t('common:Submit')}</Button>
)}
/>
</Flex>
);
});
const AIResponseBox = ({ value, isLastResponseValue, isChatting }: props) => {
if (value.type === ChatItemValueTypeEnum.text && value.text)
const AIResponseBox = ({
value,
isLastResponseValue,
isChatting
}: {
value: UserChatItemValueItemType | AIChatItemValueItemType;
isLastResponseValue: boolean;
isChatting: boolean;
}) => {
if (value.type === ChatItemValueTypeEnum.text && value.text) {
return (
<RenderText showAnimation={isChatting && isLastResponseValue} text={value.text.content} />
);
if (value.type === ChatItemValueTypeEnum.reasoning && value.reasoning)
}
if (value.type === ChatItemValueTypeEnum.reasoning && value.reasoning) {
return (
<RenderResoningContent
isChatting={isChatting}
@@ -352,14 +240,18 @@ const AIResponseBox = ({ value, isLastResponseValue, isChatting }: props) => {
content={value.reasoning.content}
/>
);
if (value.type === ChatItemValueTypeEnum.tool && value.tools)
return <RenderTool showAnimation={isChatting} tools={value.tools} />;
if (value.type === ChatItemValueTypeEnum.interactive && value.interactive) {
if (value.interactive.type === 'userSelect')
return <RenderUserSelectInteractive interactive={value.interactive} />;
if (value.interactive?.type === 'userInput')
return <RenderUserFormInteractive interactive={value.interactive} />;
}
if (value.type === ChatItemValueTypeEnum.tool && value.tools) {
return <RenderTool showAnimation={isChatting} tools={value.tools} />;
}
if (value.type === ChatItemValueTypeEnum.interactive && value.interactive) {
if (value.interactive.type === 'userSelect') {
return <RenderUserSelectInteractive interactive={value.interactive} />;
}
if (value.interactive?.type === 'userInput') {
return <RenderUserFormInteractive interactive={value.interactive} />;
}
}
return null;
};
export default React.memo(AIResponseBox);
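The refactor above collapses two identical try/catch IIFEs into a single `formatJson` helper. As a standalone sketch, the helper pretty-prints valid JSON and passes anything else through unchanged:

```typescript
// Pretty-print a JSON string; fall back to the raw input when it is
// not valid JSON (mirrors the formatJson helper in the diff above).
const formatJson = (input: string): string => {
  try {
    return JSON.stringify(JSON.parse(input), null, 2);
  } catch {
    return input;
  }
};

const toolParams = formatJson('{"city":"Shanghai"}'); // indented JSON
const toolResponse = formatJson('plain text response'); // returned unchanged
```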


@@ -0,0 +1,206 @@
import React, { useCallback } from 'react';
import { Box, Button, Flex, Textarea } from '@chakra-ui/react';
import { Controller, useForm, UseFormHandleSubmit } from 'react-hook-form';
import Markdown from '@/components/Markdown';
import FormLabel from '@fastgpt/web/components/common/MyBox/FormLabel';
import QuestionTip from '@fastgpt/web/components/common/MyTooltip/QuestionTip';
import MySelect from '@fastgpt/web/components/common/MySelect';
import MyTextarea from '@/components/common/Textarea/MyTextarea';
import MyNumberInput from '@fastgpt/web/components/common/Input/NumberInput';
import { FlowNodeInputTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
import {
UserInputFormItemType,
UserInputInteractive,
UserSelectInteractive,
UserSelectOptionItemType
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
const DescriptionBox = React.memo(function DescriptionBox({
description
}: {
description?: string;
}) {
if (!description) return null;
return (
<Box mb={4}>
<Markdown source={description} />
</Box>
);
});
export const SelectOptionsComponent = React.memo(function SelectOptionsComponent({
interactiveParams,
onSelect
}: {
interactiveParams: UserSelectInteractive['params'];
onSelect: (value: string) => void;
}) {
const { description, userSelectOptions, userSelectedVal } = interactiveParams;
return (
<Box maxW={'100%'}>
<DescriptionBox description={description} />
<Flex flexDirection={'column'} gap={3} w={'250px'}>
{userSelectOptions.map((option: UserSelectOptionItemType) => {
const selected = option.value === userSelectedVal;
return (
<Button
key={option.key}
variant={'whitePrimary'}
whiteSpace={'pre-wrap'}
isDisabled={!!userSelectedVal}
{...(selected
? {
_disabled: {
cursor: 'default',
borderColor: 'primary.300',
bg: 'primary.50 !important',
color: 'primary.600'
}
}
: {})}
onClick={() => onSelect(option.value)}
>
{option.value}
</Button>
);
})}
</Flex>
</Box>
);
});
export const FormInputComponent = React.memo(function FormInputComponent({
interactiveParams,
defaultValues = {},
SubmitButton
}: {
interactiveParams: UserInputInteractive['params'];
defaultValues?: Record<string, any>;
SubmitButton: (e: { onSubmit: UseFormHandleSubmit<Record<string, any>> }) => React.JSX.Element;
}) {
const { description, inputForm, submitted } = interactiveParams;
const { register, setValue, handleSubmit, control } = useForm({
defaultValues
});
const FormItemLabel = useCallback(
({
label,
required,
description
}: {
label: string;
required?: boolean;
description?: string;
}) => {
return (
<Flex mb={1} alignItems={'center'}>
<FormLabel required={required} mb={0} fontWeight="medium" color="gray.700">
{label}
</FormLabel>
{description && <QuestionTip ml={1} label={description} />}
</Flex>
);
},
[]
);
const RenderFormInput = useCallback(
({ input }: { input: UserInputFormItemType }) => {
const { type, label, required, maxLength, min, max, defaultValue, list } = input;
switch (type) {
case FlowNodeInputTypeEnum.input:
return (
<MyTextarea
isDisabled={submitted}
{...register(label, {
required: required
})}
bg={'white'}
autoHeight
minH={40}
maxH={100}
/>
);
case FlowNodeInputTypeEnum.textarea:
return (
<Textarea
isDisabled={submitted}
bg={'white'}
{...register(label, {
required: required
})}
rows={5}
maxLength={maxLength || 4000}
/>
);
case FlowNodeInputTypeEnum.numberInput:
return (
<MyNumberInput
min={min}
max={max}
defaultValue={defaultValue}
isDisabled={submitted}
bg={'white'}
register={register}
name={label}
isRequired={required}
/>
);
case FlowNodeInputTypeEnum.select:
return (
<Controller
key={label}
control={control}
name={label}
rules={{ required: required }}
render={({ field: { ref, value } }) => {
if (!list) return <></>;
return (
<MySelect
ref={ref}
width={'100%'}
list={list}
value={value}
isDisabled={submitted}
onChange={(e) => setValue(label, e)}
/>
);
}}
/>
);
default:
return null;
}
},
[control, register, setValue, submitted]
);
return (
<Box>
<DescriptionBox description={description} />
<Flex flexDirection={'column'} gap={3}>
{inputForm.map((input) => (
<Box key={input.label}>
<FormItemLabel
label={input.label}
required={input.required}
description={input.description}
/>
<RenderFormInput input={input} />
</Box>
))}
</Flex>
{!submitted && (
<Flex justifyContent={'flex-end'} mt={4}>
<SubmitButton onSubmit={handleSubmit} />
</Flex>
)}
</Box>
);
});
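`FormInputComponent` expects `defaultValues` prebuilt from the interactive `inputForm`, preferring an already-submitted value over the configured default. A sketch of that reduction; the `FormItem` shape here is a simplified assumption, a subset of `UserInputFormItemType`:

```typescript
// Simplified shape of a form item (a subset of UserInputFormItemType).
type FormItem = { label: string; value?: any; defaultValue?: any };

// Build react-hook-form defaultValues from the interactive inputForm,
// preferring an already-submitted value over the configured default.
// Note: falsy submitted values (0, '') fall back to the default,
// matching the `!!item.value ? item.value : item.defaultValue` in the source.
function buildDefaultValues(inputForm: FormItem[] = []): Record<string, any> {
  return inputForm.reduce((acc: Record<string, any>, item) => {
    acc[item.label] = item.value ? item.value : item.defaultValue;
    return acc;
  }, {});
}

const defaults = buildDefaultValues([
  { label: 'city', defaultValue: 'Shanghai' },
  { label: 'count', value: 3, defaultValue: 1 }
]);
// defaults => { city: 'Shanghai', count: 3 }
```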


@@ -130,7 +130,7 @@ export const aiproxyIdMap: Record<
provider: 'Ollama'
},
23: {
label: 'OneAPI',
label: i18nT('account_model:Hunyuan'),
provider: 'Hunyuan'
},
44: {


@@ -1,6 +1,7 @@
import { AppSchema } from '@fastgpt/global/core/app/type';
import { ChatHistoryItemResType } from '@fastgpt/global/core/chat/type';
import { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
import { WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { StoreNodeItemType } from '@fastgpt/global/core/workflow/type';
import { RuntimeEdgeItemType, StoreEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
@@ -9,6 +10,8 @@ export type PostWorkflowDebugProps = {
edges: RuntimeEdgeItemType[];
variables: Record<string, any>;
appId: string;
query?: UserChatItemValueItemType[];
history?: ChatItemType[];
};
export type PostWorkflowDebugResponse = {
@@ -16,5 +19,6 @@ export type PostWorkflowDebugResponse = {
finishedEdges: RuntimeEdgeItemType[];
nextStepRunNodes: RuntimeNodeItemType[];
flowResponses: ChatHistoryItemResType[];
workflowInteractiveResponse?: WorkflowInteractiveResponseType;
newVariables: Record<string, any>;
};
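Callers of the debug API can branch on the new optional `workflowInteractiveResponse` field. A hedged sketch of such a type guard; the stand-in types below are simplified for illustration, not the real imports:

```typescript
// Minimal stand-ins for the imported types (assumptions for illustration).
type WorkflowInteractiveResponseType = { type: 'userSelect' | 'userInput' };
type DebugResponse = {
  workflowInteractiveResponse?: WorkflowInteractiveResponseType;
};

// Narrow a debug response to one that paused on an interactive node.
function isInteractivePause(
  res: DebugResponse
): res is DebugResponse & {
  workflowInteractiveResponse: WorkflowInteractiveResponseType;
} {
  return res.workflowInteractiveResponse !== undefined;
}

if (isInteractivePause({ workflowInteractiveResponse: { type: 'userSelect' } })) {
  // render the select/input form instead of auto-advancing the debugger
}
```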


@@ -15,38 +15,89 @@ import RenderOutput from './render/RenderOutput';
import CodeEditor from '@fastgpt/web/components/common/Textarea/CodeEditor';
import { Box, Flex } from '@chakra-ui/react';
import { useConfirm } from '@fastgpt/web/hooks/useConfirm';
import { JS_TEMPLATE } from '@fastgpt/global/core/workflow/template/system/sandbox/constants';
import QuestionTip from '@fastgpt/web/components/common/MyTooltip/QuestionTip';
import {
JS_TEMPLATE,
PY_TEMPLATE,
SandboxCodeTypeEnum,
SNADBOX_CODE_TEMPLATE
} from '@fastgpt/global/core/workflow/template/system/sandbox/constants';
import MySelect from '@fastgpt/web/components/common/MySelect';
const NodeCode = ({ data, selected }: NodeProps<FlowNodeItemType>) => {
const { t } = useTranslation();
const { nodeId, inputs, outputs } = data;
const codeType = inputs.find(
(item) => item.key === NodeInputKeyEnum.codeType
) as FlowNodeInputItemType;
const splitToolInputs = useContextSelector(WorkflowContext, (ctx) => ctx.splitToolInputs);
const onChangeNode = useContextSelector(WorkflowContext, (ctx) => ctx.onChangeNode);
const { ConfirmModal, openConfirm } = useConfirm({
// Confirm resetting the template
const { ConfirmModal: ResetTemplateConfirm, openConfirm: openResetTemplateConfirm } = useConfirm({
content: t('workflow:code.Reset template confirm')
});
// Confirm switching the language
const { ConfirmModal: SwitchLangConfirm, openConfirm: openSwitchLangConfirm } = useConfirm({
content: t('workflow:code.Switch language confirm')
});
const CustomComponent = useMemo(() => {
return {
[NodeInputKeyEnum.code]: (item: FlowNodeInputItemType) => {
return (
<Box mt={-3}>
<Flex mb={2} alignItems={'flex-end'}>
<Box flex={'1'}>{'Javascript ' + t('workflow:Code')}</Box>
<Box mt={-4}>
<Flex mb={2} alignItems={'center'} className="nodrag">
<MySelect<SandboxCodeTypeEnum>
fontSize="xs"
size="sm"
list={[
{ label: 'JavaScript', value: SandboxCodeTypeEnum.js },
{ label: 'Python 3', value: SandboxCodeTypeEnum.py }
]}
value={codeType?.value}
onChange={(newLang) => {
openSwitchLangConfirm(() => {
onChangeNode({
nodeId,
type: 'updateInput',
key: NodeInputKeyEnum.codeType,
value: { ...codeType, value: newLang }
});
onChangeNode({
nodeId,
type: 'updateInput',
key: item.key,
value: {
...item,
value: SNADBOX_CODE_TEMPLATE[newLang]
}
});
})();
}}
/>
{codeType.value === 'py' && (
<QuestionTip ml={2} label={t('workflow:support_code_language')} />
)}
<Box
cursor={'pointer'}
color={'primary.500'}
fontSize={'xs'}
onClick={openConfirm(() => {
ml="auto"
mr={2}
onClick={openResetTemplateConfirm(() => {
onChangeNode({
nodeId,
type: 'updateInput',
key: item.key,
value: {
...item,
value: JS_TEMPLATE
value: codeType.value === 'js' ? JS_TEMPLATE : PY_TEMPLATE
}
});
})}
@@ -63,29 +114,25 @@ const NodeCode = ({ data, selected }: NodeProps<FlowNodeItemType>) => {
nodeId,
type: 'updateInput',
key: item.key,
value: {
...item,
value: e
}
value: { ...item, value: e }
});
}}
language={codeType.value}
/>
</Box>
);
}
};
}, [nodeId, onChangeNode, openConfirm, t]);
}, [codeType, nodeId, onChangeNode, openResetTemplateConfirm, openSwitchLangConfirm, t]);
const { isTool, commonInputs } = splitToolInputs(inputs, nodeId);
return (
<NodeCard minW={'400px'} selected={selected} {...data}>
{isTool && (
<>
<Container>
<RenderToolInput nodeId={nodeId} inputs={inputs} />
</Container>
</>
<Container>
<RenderToolInput nodeId={nodeId} inputs={inputs} />
</Container>
)}
<Container>
<IOTitle text={t('common:common.Input')} mb={-1} />
@@ -99,7 +146,8 @@ const NodeCode = ({ data, selected }: NodeProps<FlowNodeItemType>) => {
<IOTitle text={t('common:common.Output')} />
<RenderOutput nodeId={nodeId} flowOutputList={outputs} />
</Container>
<ConfirmModal />
<ResetTemplateConfirm />
<SwitchLangConfirm />
</NodeCard>
);
};
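Switching the sandbox language resets the code input to the matching starter template after the user confirms. A simplified sketch of that mapping; the template bodies below are placeholders, not FastGPT's real `JS_TEMPLATE`/`PY_TEMPLATE`:

```typescript
enum SandboxCodeTypeEnum {
  js = 'js',
  py = 'py'
}

// Placeholder templates — the real ones live in
// @fastgpt/global/core/workflow/template/system/sandbox/constants.
const CODE_TEMPLATE: Record<SandboxCodeTypeEnum, string> = {
  [SandboxCodeTypeEnum.js]: 'function main({data1, data2}) {\n  return {}\n}',
  [SandboxCodeTypeEnum.py]: 'def main(data1, data2):\n    return {}'
};

// Pick the starter template for the selected sandbox language, as the
// confirm-then-reset handlers in the node above do.
function templateFor(lang: SandboxCodeTypeEnum): string {
  return CODE_TEMPLATE[lang];
}
```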


@@ -1,5 +1,5 @@
import React, { useCallback, useMemo } from 'react';
import { Box, Button, Card, Flex, FlexProps } from '@chakra-ui/react';
import { Box, Button, Flex, type FlexProps } from '@chakra-ui/react';
import MyIcon from '@fastgpt/web/components/common/Icon';
import Avatar from '@fastgpt/web/components/common/Avatar';
import type { FlowNodeItemType } from '@fastgpt/global/core/workflow/type/node.d';
@@ -13,7 +13,6 @@ import { ToolSourceHandle, ToolTargetHandle } from './Handle/ToolHandle';
import { useEditTextarea } from '@fastgpt/web/hooks/useEditTextarea';
import { ConnectionSourceHandle, ConnectionTargetHandle } from './Handle/ConnectionHandle';
import { useDebug } from '../../hooks/useDebug';
import EmptyTip from '@fastgpt/web/components/common/EmptyTip';
import { getPreviewPluginNode } from '@/web/core/app/api/plugin';
import { storeNode2FlowNode } from '@/web/core/workflow/utils';
import { getNanoid } from '@fastgpt/global/common/string/tools';
@@ -23,12 +22,12 @@ import { moduleTemplatesFlat } from '@fastgpt/global/core/workflow/template/cons
import MyTooltip from '@fastgpt/web/components/common/MyTooltip';
import { useRequest2 } from '@fastgpt/web/hooks/useRequest';
import { useWorkflowUtils } from '../../hooks/useUtils';
import { WholeResponseContent } from '@/components/core/chat/components/WholeResponseModal';
import { WorkflowNodeEdgeContext } from '../../../context/workflowInitContext';
import { WorkflowEventContext } from '../../../context/workflowEventContext';
import MyImage from '@fastgpt/web/components/common/Image/MyImage';
import MyIconButton from '@fastgpt/web/components/common/Icon/button';
import UseGuideModal from '@/components/common/Modal/UseGuideModal';
import NodeDebugResponse from './RenderDebug/NodeDebugResponse';
type Props = FlowNodeItemType & {
children?: React.ReactNode | React.ReactNode[] | string;
@@ -62,6 +61,7 @@ const NodeCard = (props: Props) => {
w = 'full',
h = 'full',
nodeId,
flowNodeType,
selected,
menuForbid,
isTool = false,
@@ -409,7 +409,7 @@ const NodeCard = (props: Props) => {
})}
{...customStyle}
>
<NodeDebugResponse nodeId={nodeId} debugResult={debugResult} />
{debugResult && <NodeDebugResponse nodeId={nodeId} debugResult={debugResult} />}
{Header}
<Flex flexDirection={'column'} flex={1} my={!isFolded ? 3 : 0} gap={2}>
{!isFolded ? children : <Box h={4} />}
@@ -661,168 +661,3 @@ const NodeIntro = React.memo(function NodeIntro({
return Render;
});
const NodeDebugResponse = React.memo(function NodeDebugResponse({
nodeId,
debugResult
}: {
nodeId: string;
debugResult: FlowNodeItemType['debugResult'];
}) {
const { t } = useTranslation();
const { onChangeNode, onStopNodeDebug, onNextNodeDebug, workflowDebugData } = useContextSelector(
WorkflowContext,
(v) => v
);
const { openConfirm, ConfirmModal } = useConfirm({
content: t('common:core.workflow.Confirm stop debug')
});
const RenderStatus = useMemo(() => {
const map = {
running: {
bg: 'primary.50',
text: t('common:core.workflow.Running'),
icon: 'core/workflow/running'
},
success: {
bg: 'green.50',
text: t('common:core.workflow.Success'),
icon: 'core/workflow/runSuccess'
},
failed: {
bg: 'red.50',
text: t('common:core.workflow.Failed'),
icon: 'core/workflow/runError'
},
skipped: {
bg: 'myGray.50',
text: t('common:core.workflow.Skipped'),
icon: 'core/workflow/runSkip'
}
};
const statusData = map[debugResult?.status || 'running'];
const response = debugResult?.response;
const onStop = () => {
openConfirm(onStopNodeDebug)();
};
return !!debugResult && !!statusData ? (
<>
<Flex px={3} bg={statusData.bg} borderTopRadius={'md'} py={3}>
<MyIcon name={statusData.icon as any} w={'16px'} mr={2} />
<Box color={'myGray.900'} fontWeight={'bold'} flex={'1 0 0'}>
{statusData.text}
</Box>
{debugResult.status !== 'running' && (
<Box
color={'primary.700'}
cursor={'pointer'}
fontSize={'sm'}
onClick={() =>
onChangeNode({
nodeId,
type: 'attr',
key: 'debugResult',
value: {
...debugResult,
showResult: !debugResult.showResult
}
})
}
>
{debugResult.showResult
? t('common:core.workflow.debug.Hide result')
: t('common:core.workflow.debug.Show result')}
</Box>
)}
</Flex>
{/* Result card */}
{debugResult.showResult && (
<Card
className="nowheel"
position={'absolute'}
right={'-430px'}
top={0}
zIndex={10}
w={'420px'}
maxH={'max(100%,500px)'}
border={'base'}
>
{/* Status header */}
<Flex h={'54px'} px={3} py={3} alignItems={'center'}>
<MyIcon mr={1} name={'core/workflow/debugResult'} w={'20px'} color={'primary.600'} />
<Box fontWeight={'bold'} flex={'1'}>
{t('common:core.workflow.debug.Run result')}
</Box>
{workflowDebugData?.nextRunNodes.length !== 0 && (
<Button
size={'sm'}
leftIcon={<MyIcon name={'core/chat/stopSpeech'} w={'16px'} />}
variant={'whiteDanger'}
onClick={onStop}
>
{t('common:core.workflow.Stop debug')}
</Button>
)}
{(debugResult.status === 'success' || debugResult.status === 'skipped') &&
!debugResult.isExpired &&
workflowDebugData?.nextRunNodes &&
workflowDebugData.nextRunNodes.length > 0 && (
<Button
ml={2}
size={'sm'}
leftIcon={<MyIcon name={'core/workflow/debugNext'} w={'16px'} />}
variant={'primary'}
onClick={() => onNextNodeDebug()}
>
{t('common:common.Next Step')}
</Button>
)}
{workflowDebugData?.nextRunNodes && workflowDebugData?.nextRunNodes.length === 0 && (
<Button ml={2} size={'sm'} variant={'primary'} onClick={onStopNodeDebug}>
{t('common:core.workflow.debug.Done')}
</Button>
)}
</Flex>
{/* Response list */}
{debugResult.status !== 'skipped' && (
<Box borderTop={'base'} mt={1} overflowY={'auto'} minH={'250px'}>
{!debugResult.message && !response && (
<EmptyTip text={t('common:core.workflow.debug.Not result')} pt={2} pb={5} />
)}
{debugResult.message && (
<Box color={'red.600'} px={3} py={4}>
{debugResult.message}
</Box>
)}
{response && <WholeResponseContent activeModule={response} />}
</Box>
)}
</Card>
)}
</>
) : null;
}, [
debugResult,
nodeId,
onChangeNode,
onNextNodeDebug,
onStopNodeDebug,
openConfirm,
t,
workflowDebugData
]);
return (
<>
{RenderStatus}
<ConfirmModal />
</>
);
});
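The extracted `NodeDebugResponse` component (next file) resumes an interactive pause by replaying the user's answer as a fresh query. A minimal sketch of that query construction; the chat type here is a simplified stand-in for the imported one:

```typescript
// Simplified stand-in for UserChatItemValueItemType (assumption for
// illustration; the real type lives in @fastgpt/global/core/chat/type).
type UserChatItemValueItemType = { type: 'text'; text: { content: string } };

// Wrap the user's interactive answer as the query the debugger replays,
// as onNextInteractive does in NodeDebugResponse.
function buildInteractiveQuery(userContent: string): UserChatItemValueItemType[] {
  return [{ type: 'text', text: { content: userContent } }];
}
```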


@@ -0,0 +1,269 @@
import React, { useCallback, useMemo, useRef } from 'react';
import { Box, Button, Card, Flex } from '@chakra-ui/react';
import { useTranslation } from 'next-i18next';
import MyIcon from '@fastgpt/web/components/common/Icon';
import { useConfirm } from '@fastgpt/web/hooks/useConfirm';
import { useContextSelector } from 'use-context-selector';
import { WorkflowContext } from '../../../../context';
import EmptyTip from '@fastgpt/web/components/common/EmptyTip';
import { WholeResponseContent } from '@/components/core/chat/components/WholeResponseModal';
import type { FlowNodeItemType } from '@fastgpt/global/core/workflow/type/node.d';
import {
FormInputComponent,
SelectOptionsComponent
} from '@/components/core/chat/components/Interactive/InteractiveComponents';
import { UserInputInteractive } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { initWorkflowEdgeStatus } from '@fastgpt/global/core/workflow/runtime/utils';
import { ChatItemType, UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
type NodeDebugResponseProps = {
nodeId: string;
debugResult: FlowNodeItemType['debugResult'];
};
const RenderUserFormInteractive = React.memo(function RenderFormInput({
interactive,
onNext
}: {
interactive: UserInputInteractive;
onNext: (val: string) => void;
}) {
const { t } = useTranslation();
const defaultValues = useMemo(() => {
return interactive.params.inputForm?.reduce((acc: Record<string, any>, item) => {
acc[item.label] = !!item.value ? item.value : item.defaultValue;
return acc;
}, {});
}, [interactive.params.inputForm]);
return (
<Box px={4} py={4} bg="white" borderRadius="md">
<FormInputComponent
defaultValues={defaultValues}
interactiveParams={interactive.params}
SubmitButton={({ onSubmit }) => (
<Button
leftIcon={<MyIcon name="core/workflow/debugNext" />}
onClick={() =>
onSubmit((data) => {
onNext(JSON.stringify(data));
})()
}
>
{t('common:common.Next Step')}
</Button>
)}
/>
</Box>
);
});
const NodeDebugResponse = ({ nodeId, debugResult }: NodeDebugResponseProps) => {
const { t } = useTranslation();
const { onChangeNode, onStopNodeDebug, onNextNodeDebug, workflowDebugData } = useContextSelector(
WorkflowContext,
(v) => v
);
const statusMap = useRef({
running: {
bg: 'primary.50',
text: t('common:core.workflow.Running'),
icon: 'core/workflow/running'
},
success: {
bg: 'green.50',
text: t('common:core.workflow.Success'),
icon: 'core/workflow/runSuccess'
},
failed: {
bg: 'red.50',
text: t('common:core.workflow.Failed'),
icon: 'core/workflow/runError'
},
skipped: {
bg: 'myGray.50',
text: t('common:core.workflow.Skipped'),
icon: 'core/workflow/runSkip'
}
});
const statusData = statusMap.current[debugResult?.status || 'running'];
const response = debugResult?.response;
const { openConfirm, ConfirmModal } = useConfirm({
content: t('common:core.workflow.Confirm stop debug')
});
const onStop = () => {
openConfirm(onStopNodeDebug)();
};
const interactive = debugResult?.workflowInteractiveResponse;
const onNextInteractive = useCallback(
(userContent: string) => {
if (!workflowDebugData || !interactive) return;
const updatedQuery: UserChatItemValueItemType[] = [
{
type: ChatItemValueTypeEnum.text,
text: { content: userContent }
}
];
const mockHistory: ChatItemType[] = [
{
obj: ChatRoleEnum.AI,
value: [
{
type: ChatItemValueTypeEnum.interactive,
interactive: {
...interactive,
memoryEdges: interactive.memoryEdges || [],
entryNodeIds: interactive.entryNodeIds || [],
nodeOutputs: interactive.nodeOutputs || []
}
}
]
}
];
onNextNodeDebug({
...workflowDebugData,
// Rewrite runtimeEdges
runtimeEdges: initWorkflowEdgeStatus(workflowDebugData.runtimeEdges, mockHistory),
query: updatedQuery,
history: mockHistory
});
},
[workflowDebugData, interactive, onNextNodeDebug]
);
return !!debugResult && !!statusData ? (
<>
{/* Status header */}
<Flex px={3} bg={statusData.bg} borderTopRadius={'md'} py={3}>
<MyIcon name={statusData.icon as any} w={'16px'} mr={2} />
<Box color={'myGray.900'} fontWeight={'bold'} flex={'1 0 0'}>
{statusData.text}
</Box>
{debugResult.status !== 'running' && (
<Box
color={'primary.700'}
cursor={'pointer'}
fontSize={'sm'}
onClick={() =>
onChangeNode({
nodeId,
type: 'attr',
key: 'debugResult',
value: {
...debugResult,
showResult: !debugResult.showResult
}
})
}
>
{debugResult.showResult
? t('common:core.workflow.debug.Hide result')
: t('common:core.workflow.debug.Show result')}
</Box>
)}
</Flex>
{/* Result card */}
{debugResult.showResult && (
<Card
className="nowheel"
position={'absolute'}
right={'-430px'}
top={0}
zIndex={10}
w={'420px'}
maxH={'max(100%,500px)'}
border={'base'}
>
{/* Status header */}
<Flex h={'54px'} px={3} py={3} alignItems={'center'}>
<MyIcon mr={1} name={'core/workflow/debugResult'} w={'20px'} color={'primary.600'} />
<Box fontWeight={'bold'} flex={'1'}>
{t('common:core.workflow.debug.Run result')}
</Box>
{!!workflowDebugData && workflowDebugData.nextRunNodes.length !== 0 && (
<Button
size={'sm'}
leftIcon={<MyIcon name={'core/chat/stopSpeech'} w={'16px'} />}
variant={'whiteDanger'}
onClick={onStop}
>
{t('common:core.workflow.Stop debug')}
</Button>
)}
{!interactive && (
<>
{(debugResult.status === 'success' || debugResult.status === 'skipped') &&
!debugResult.isExpired &&
workflowDebugData?.nextRunNodes &&
workflowDebugData.nextRunNodes.length > 0 && (
<Button
ml={2}
size={'sm'}
leftIcon={<MyIcon name={'core/workflow/debugNext'} w={'16px'} />}
variant={'primary'}
onClick={() => onNextNodeDebug(workflowDebugData)}
>
{t('common:common.Next Step')}
</Button>
)}
{workflowDebugData?.nextRunNodes &&
workflowDebugData?.nextRunNodes.length === 0 && (
<Button ml={2} size={'sm'} variant={'primary'} onClick={onStopNodeDebug}>
{t('common:core.workflow.debug.Done')}
</Button>
)}
</>
)}
</Flex>
{/* Response list */}
{debugResult.status !== 'skipped' && (
<Box borderTop={'base'} mt={1} overflowY={'auto'} minH={'250px'}>
{!debugResult.message && !response && !interactive && (
<EmptyTip text={t('common:core.workflow.debug.Not result')} pt={2} pb={5} />
)}
{debugResult.message && (
<Box color={'red.600'} px={3} py={4}>
{debugResult.message}
</Box>
)}
{interactive && onNextInteractive && (
<>
{interactive.type === 'userSelect' && (
<Box px={4} py={3}>
<SelectOptionsComponent
interactiveParams={interactive.params}
onSelect={(val) => {
onNextInteractive(val);
}}
/>
</Box>
)}
{interactive.type === 'userInput' && (
<RenderUserFormInteractive
interactive={interactive}
onNext={onNextInteractive}
/>
)}
</>
)}
{response && <WholeResponseContent activeModule={response} />}
</Box>
)}
</Card>
)}
<ConfirmModal />
</>
) : null;
};
export default React.memo(NodeDebugResponse);

View File

@@ -35,6 +35,8 @@ import WorkflowInitContextProvider, { WorkflowNodeEdgeContext } from './workflow
import WorkflowEventContextProvider from './workflowEventContext';
import { getAppConfigByDiff } from '@/web/core/app/diff';
import WorkflowStatusContextProvider from './workflowStatusContext';
import { ChatItemType, UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
import { WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
/*
Context
@@ -156,24 +158,22 @@ type WorkflowContextType = {
| undefined;
// debug
workflowDebugData:
| {
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
nextRunNodes: RuntimeNodeItemType[];
}
| undefined;
onNextNodeDebug: () => Promise<void>;
workflowDebugData?: DebugDataType;
onNextNodeDebug: (params: DebugDataType) => Promise<void>;
onStartNodeDebug: ({
entryNodeId,
runtimeNodes,
runtimeEdges,
variables
variables,
query,
history
}: {
entryNodeId: string;
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
variables: Record<string, any>;
query?: UserChatItemValueItemType[];
history?: ChatItemType[];
}) => Promise<void>;
onStopNodeDebug: () => void;
@@ -189,11 +189,14 @@ type WorkflowContextType = {
>;
};
type DebugDataType = {
export type DebugDataType = {
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
nextRunNodes: RuntimeNodeItemType[];
variables: Record<string, any>;
history?: ChatItemType[];
query?: UserChatItemValueItemType[];
workflowInteractiveResponse?: WorkflowInteractiveResponseType;
};
export const WorkflowContext = createContext<WorkflowContextType>({
@@ -236,17 +239,25 @@ export const WorkflowContext = createContext<WorkflowContextType>({
throw new Error('Function not implemented.');
},
workflowDebugData: undefined,
onNextNodeDebug: function (): Promise<void> {
onNextNodeDebug: function (params?: {
history?: ChatItemType[];
query?: UserChatItemValueItemType[];
debugData?: DebugDataType;
}): Promise<void> {
throw new Error('Function not implemented.');
},
onStartNodeDebug: function ({
entryNodeId,
runtimeNodes,
runtimeEdges
runtimeEdges,
query,
history
}: {
entryNodeId: string;
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
query?: UserChatItemValueItemType[];
history?: ChatItemType[];
}): Promise<void> {
throw new Error('Function not implemented.');
},
@@ -551,8 +562,7 @@ const WorkflowContextProvider = ({
/* debug */
const [workflowDebugData, setWorkflowDebugData] = useState<DebugDataType>();
const onNextNodeDebug = useCallback(
async (debugData = workflowDebugData) => {
if (!debugData) return;
async (debugData: DebugDataType) => {
// 1. Cancel node selected status and debugResult.showStatus
setNodes((state) =>
state.map((node) => ({
@@ -612,26 +622,35 @@ const WorkflowContextProvider = ({
try {
// 4. Run one step
const { finishedEdges, finishedNodes, nextStepRunNodes, flowResponses, newVariables } =
await postWorkflowDebug({
nodes: runtimeNodes,
edges: debugData.runtimeEdges,
variables: {
appId,
cTime: formatTime2YMDHMW(),
...debugData.variables
},
appId
});
const {
finishedEdges,
finishedNodes,
nextStepRunNodes,
flowResponses,
newVariables,
workflowInteractiveResponse
} = await postWorkflowDebug({
nodes: runtimeNodes,
edges: debugData.runtimeEdges,
variables: {
appId,
cTime: formatTime2YMDHMW(),
...debugData.variables
},
query: debugData.query, // pass the query parameter
history: debugData.history,
appId
});
// 5. Store debug result
const newStoreDebugData = {
setWorkflowDebugData({
runtimeNodes: finishedNodes,
// edges need to save status
runtimeEdges: finishedEdges,
nextRunNodes: nextStepRunNodes,
variables: newVariables
};
setWorkflowDebugData(newStoreDebugData);
variables: newVariables,
workflowInteractiveResponse: workflowInteractiveResponse
});
// 6. Select entry node and update entry node debug result
setNodes((state) =>
@@ -665,16 +684,21 @@ const WorkflowContextProvider = ({
status: 'success',
response: result,
showResult: true,
isExpired: false
isExpired: false,
workflowInteractiveResponse: workflowInteractiveResponse
}
}
};
})
);
// Check for an empty response
if (flowResponses.length === 0 && nextStepRunNodes.length > 0) {
onNextNodeDebug(newStoreDebugData);
// Check for an empty response (skipped node)
if (
!workflowInteractiveResponse &&
flowResponses.length === 0 &&
nextStepRunNodes.length > 0
) {
onNextNodeDebug(debugData);
}
} catch (error) {
entryNodes.forEach((node) => {
@@ -692,7 +716,7 @@ const WorkflowContextProvider = ({
console.log(error);
}
},
[appId, onChangeNode, setNodes, workflowDebugData]
[appId, onChangeNode, setNodes]
);
const onStopNodeDebug = useMemoizedFn(() => {
setWorkflowDebugData(undefined);
@@ -712,18 +736,24 @@ const WorkflowContextProvider = ({
entryNodeId,
runtimeNodes,
runtimeEdges,
variables
variables,
query,
history
}: {
entryNodeId: string;
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
variables: Record<string, any>;
query?: UserChatItemValueItemType[];
history?: ChatItemType[];
}) => {
const data = {
const data: DebugDataType = {
runtimeNodes,
runtimeEdges,
nextRunNodes: runtimeNodes.filter((node) => node.nodeId === entryNodeId),
variables
variables,
query,
history
};
onStopNodeDebug();
setWorkflowDebugData(data);

View File

@@ -11,7 +11,7 @@ import type { AIChatItemType, UserChatItemType } from '@fastgpt/global/core/chat
import { authApp } from '@fastgpt/service/support/permission/app/auth';
import { dispatchWorkFlow } from '@fastgpt/service/core/workflow/dispatch';
import { getUserChatInfoAndAuthTeamPoints } from '@fastgpt/service/support/permission/auth/team';
import { StoreEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
import type { StoreEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
import {
concatHistories,
getChatTitleFromChatMessage,
@@ -25,8 +25,8 @@ import {
} from '@fastgpt/global/core/workflow/utils';
import { NextAPI } from '@/service/middleware/entry';
import { chatValue2RuntimePrompt, GPTMessages2Chats } from '@fastgpt/global/core/chat/adapt';
import { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type';
import { AppChatConfigType } from '@fastgpt/global/core/app/type';
import type { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type';
import type { AppChatConfigType } from '@fastgpt/global/core/app/type';
import {
getLastInteractiveValue,
getMaxHistoryLimitFromNodes,
@@ -36,7 +36,7 @@ import {
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import { StoreNodeItemType } from '@fastgpt/global/core/workflow/type/node';
import type { StoreNodeItemType } from '@fastgpt/global/core/workflow/type/node';
import { getWorkflowResponseWrite } from '@fastgpt/service/core/workflow/dispatch/utils';
import { WORKFLOW_MAX_RUN_TIMES } from '@fastgpt/service/core/workflow/constants';
import { getPluginInputsFromStoreNodes } from '@fastgpt/global/core/app/plugin/utils';

View File

@@ -5,7 +5,7 @@ import { authApp } from '@fastgpt/service/support/permission/app/auth';
import { dispatchWorkFlow } from '@fastgpt/service/core/workflow/dispatch';
import { authCert } from '@fastgpt/service/support/permission/auth/common';
import { getUserChatInfoAndAuthTeamPoints } from '@fastgpt/service/support/permission/auth/team';
import { PostWorkflowDebugProps, PostWorkflowDebugResponse } from '@/global/core/workflow/api';
import type { PostWorkflowDebugProps, PostWorkflowDebugResponse } from '@/global/core/workflow/api';
import { NextAPI } from '@/service/middleware/entry';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { defaultApp } from '@/web/core/app/constants';
@@ -15,16 +15,22 @@ async function handler(
req: NextApiRequest,
res: NextApiResponse
): Promise<PostWorkflowDebugResponse> {
const { nodes = [], edges = [], variables = {}, appId } = req.body as PostWorkflowDebugProps;
const {
nodes = [],
edges = [],
variables = {},
appId,
query = [],
history = []
} = req.body as PostWorkflowDebugProps;
if (!nodes) {
throw new Error('Params Error');
return Promise.reject('Params Error');
}
if (!Array.isArray(nodes)) {
throw new Error('Nodes is not array');
return Promise.reject('Nodes is not array');
}
if (!Array.isArray(edges)) {
throw new Error('Edges is not array');
return Promise.reject('Edges is not array');
}
/* user auth */
@@ -40,31 +46,32 @@ async function handler(
const { timezone, externalProvider } = await getUserChatInfoAndAuthTeamPoints(tmbId);
/* start process */
const { flowUsages, flowResponses, debugResponse, newVariables } = await dispatchWorkFlow({
res,
requestOrigin: req.headers.origin,
mode: 'debug',
runningAppInfo: {
id: app._id,
teamId: app.teamId,
tmbId: app.tmbId
},
runningUserInfo: {
teamId,
tmbId
},
uid: tmbId,
timezone,
externalProvider,
runtimeNodes: nodes,
runtimeEdges: edges,
variables,
query: [],
chatConfig: defaultApp.chatConfig,
histories: [],
stream: false,
maxRunTimes: WORKFLOW_MAX_RUN_TIMES
});
const { flowUsages, flowResponses, debugResponse, newVariables, workflowInteractiveResponse } =
await dispatchWorkFlow({
res,
requestOrigin: req.headers.origin,
mode: 'debug',
timezone,
externalProvider,
uid: tmbId,
runningAppInfo: {
id: app._id,
teamId: app.teamId,
tmbId: app.tmbId
},
runningUserInfo: {
teamId,
tmbId
},
runtimeNodes: nodes,
runtimeEdges: edges,
variables,
query: query,
chatConfig: defaultApp.chatConfig,
histories: history,
stream: false,
maxRunTimes: WORKFLOW_MAX_RUN_TIMES
});
createChatUsage({
appName: `${app.name}-Debug`,
@@ -78,12 +85,12 @@ async function handler(
return {
...debugResponse,
newVariables,
flowResponses
flowResponses,
workflowInteractiveResponse
};
}
export default NextAPI(handler);
export const config = {
api: {
bodyParser: {

View File

@@ -1,8 +1,22 @@
# --------- install dependence -----------
FROM python:3.11-alpine AS python_base
ENV VERSION_RELEASE=Alpine3.11
# Install make and g++
RUN apk add --no-cache make g++
RUN apk add --no-cache make g++ tar wget gperf automake libtool linux-headers
WORKDIR /app
COPY projects/sandbox/requirements.txt /app/requirements.txt
RUN wget https://github.com/seccomp/libseccomp/releases/download/v2.5.5/libseccomp-2.5.5.tar.gz && \
tar -zxvf libseccomp-2.5.5.tar.gz && \
cd libseccomp-2.5.5 && \
./configure --prefix=/usr && \
make && \
make install && \
pip install --no-cache-dir -i https://mirrors.aliyun.com/pypi/simple Cython && \
pip install --no-cache-dir -i https://mirrors.aliyun.com/pypi/simple -r /app/requirements.txt && \
cd src/python && \
python setup.py install
FROM node:20.14.0-alpine AS install
@@ -10,7 +24,7 @@ WORKDIR /app
ARG proxy
RUN [ -z "$proxy" ] || sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
RUN apk add --no-cache make g++
RUN apk add --no-cache make g++ python3
# copy py3.11
COPY --from=python_base /usr/local /usr/local
@@ -42,9 +56,12 @@ RUN pnpm --filter=sandbox build
FROM node:20.14.0-alpine AS runner
WORKDIR /app
RUN apk add --no-cache libffi libffi-dev strace bash
COPY --from=python_base /usr/local /usr/local
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/projects/sandbox /app/projects/sandbox
ENV NODE_ENV=production
ENV PATH="/usr/local/bin:${PATH}"
CMD ["node", "--no-node-snapshot", "projects/sandbox/dist/main.js"]

View File

@@ -0,0 +1,2 @@
numpy
pandas

View File

@@ -0,0 +1,130 @@
export const pythonScript = `
import subprocess
import json
import ast
import base64
def extract_imports(code):
tree = ast.parse(code)
imports = []
for node in ast.walk(tree):
if isinstance(node, (ast.Import, ast.ImportFrom)):
if isinstance(node, ast.Import):
for alias in node.names:
imports.append(f"import {alias.name}")
elif isinstance(node, ast.ImportFrom):
module = node.module
for alias in node.names:
imports.append(f"from {module} import {alias.name}")
return imports
seccomp_prefix = """
from seccomp import *
import sys
allowed_syscalls = [
"syscall.SYS_ARCH_PRCTL", "syscall.SYS_BRK", "syscall.SYS_CLONE",
"syscall.SYS_CLOSE", "syscall.SYS_EPOLL_CREATE1", "syscall.SYS_EXECVE",
"syscall.SYS_EXIT", "syscall.SYS_EXIT_GROUP", "syscall.SYS_FCNTL",
"syscall.SYS_FSTAT", "syscall.SYS_FUTEX", "syscall.SYS_GETDENTS64",
"syscall.SYS_GETEGID", "syscall.SYS_GETEUID", "syscall.SYS_GETGID",
"syscall.SYS_GETRANDOM", "syscall.SYS_GETTID", "syscall.SYS_GETUID",
"syscall.SYS_IOCTL", "syscall.SYS_LSEEK", "syscall.SYS_LSTAT",
"syscall.SYS_MBIND", "syscall.SYS_MEMBARRIER", "syscall.SYS_MMAP",
"syscall.SYS_MPROTECT", "syscall.SYS_MUNMAP", "syscall.SYS_OPEN",
"syscall.SYS_PREAD64", "syscall.SYS_READ", "syscall.SYS_READLINK",
"syscall.SYS_READV", "syscall.SYS_RT_SIGACTION", "syscall.SYS_RT_SIGPROCMASK",
"syscall.SYS_SCHED_GETAFFINITY", "syscall.SYS_SET_TID_ADDRESS",
"syscall.SYS_STAT", "syscall.SYS_UNAME",
"syscall.SYS_MREMAP", "syscall.SYS_RT_SIGRETURN", "syscall.SYS_SETUID",
"syscall.SYS_SETGID", "syscall.SYS_GETPID", "syscall.SYS_GETPPID",
"syscall.SYS_TGKILL", "syscall.SYS_SCHED_YIELD", "syscall.SYS_SET_ROBUST_LIST",
"syscall.SYS_GET_ROBUST_LIST", "syscall.SYS_RSEQ", "syscall.SYS_CLOCK_GETTIME",
"syscall.SYS_GETTIMEOFDAY", "syscall.SYS_NANOSLEEP", "syscall.SYS_EPOLL_CTL",
"syscall.SYS_CLOCK_NANOSLEEP", "syscall.SYS_PSELECT6", "syscall.SYS_TIME",
"syscall.SYS_SIGALTSTACK", "syscall.SYS_MKDIRAT", "syscall.SYS_MKDIR"
]
allowed_syscalls_tmp = allowed_syscalls
L = []
for item in allowed_syscalls_tmp:
item = item.strip()
parts = item.split(".")[1][4:].lower()
L.append(parts)
f = SyscallFilter(defaction=KILL)
for item in L:
f.add_rule(ALLOW, item)
f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stdout.fileno()))
f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stderr.fileno()))
f.add_rule(ALLOW, 307)
f.add_rule(ALLOW, 318)
f.add_rule(ALLOW, 334)
f.load()
"""
def remove_print_statements(code):
class PrintRemover(ast.NodeTransformer):
def visit_Expr(self, node):
if (
isinstance(node.value, ast.Call)
and isinstance(node.value.func, ast.Name)
and node.value.func.id == "print"
):
return None
return node
tree = ast.parse(code)
modified_tree = PrintRemover().visit(tree)
ast.fix_missing_locations(modified_tree)
return ast.unparse(modified_tree)
def detect_dangerous_imports(code):
dangerous_modules = ["os", "sys", "subprocess", "shutil", "socket", "ctypes", "multiprocessing", "threading", "pickle"]
tree = ast.parse(code)
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for alias in node.names:
if alias.name in dangerous_modules:
return alias.name
elif isinstance(node, ast.ImportFrom):
if node.module in dangerous_modules:
return node.module
return None
def run_pythonCode(data:dict):
if not data or "code" not in data or "variables" not in data:
return {"error": "Invalid request format"}
code = data["code"]
code = remove_print_statements(code)
dangerous_import = detect_dangerous_imports(code)
if dangerous_import:
return {"error": f"Importing {dangerous_import} is not allowed."}
variables = data["variables"]
imports = "\\n".join(extract_imports(code))
var_def = ""
output_code = "res = main("
for k, v in variables.items():
if isinstance(v, str):
one_var = k + " = \\"" + v + "\\"\\n"
else:
one_var = k + " = " + str(v) + "\\n"
var_def = var_def + one_var
output_code = output_code + k + ", "
if output_code[-1] == "(":
output_code = output_code + ")\\n"
else:
output_code = output_code[:-2] + ")\\n"
output_code = output_code + "print(res)"
code = imports + "\\n" + seccomp_prefix + "\\n" + var_def + "\\n" + code + "\\n" + output_code
try:
result = subprocess.run(["python3", "-c", code], capture_output=True, text=True, timeout=10)
if result.returncode == -31:
return {"error": "Dangerous behavior detected."}
if result.stderr != "":
return {"error": result.stderr}
out = ast.literal_eval(result.stdout.strip())
return out
except subprocess.TimeoutExpired:
return {"error": "Timeout error"}
except Exception as e:
return {"error": str(e)}
`;
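The embedded script above uses `ast.NodeTransformer` to strip `print` calls before running user code. A minimal standalone sketch of that technique (simplified, not the sandbox's exact code; `strip_prints` is a made-up name):

```python
import ast


def strip_prints(code: str) -> str:
    """Remove bare print(...) expression statements via an AST rewrite."""

    class PrintRemover(ast.NodeTransformer):
        def visit_Expr(self, node: ast.Expr):
            call = node.value
            if (
                isinstance(call, ast.Call)
                and isinstance(call.func, ast.Name)
                and call.func.id == "print"
            ):
                return None  # returning None drops the statement
            return node

    tree = PrintRemover().visit(ast.parse(code))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+


print(strip_prints("x = 1\nprint(x)\ny = x + 1"))
```

Note the limitation (shared by the sandbox's version): only bare `print(...)` statements are caught; a `print` used inside an expression, or aliased to another name, slips through.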

View File

@@ -1,6 +1,6 @@
import { Controller, Post, Body, HttpCode } from '@nestjs/common';
import { RunCodeDto } from './dto/create-sandbox.dto';
import { runSandbox } from './utils';
import { runJsSandbox, runPythonSandbox } from './utils';
@Controller('sandbox')
export class SandboxController {
@@ -9,6 +9,12 @@ export class SandboxController {
@Post('/js')
@HttpCode(200)
runJs(@Body() codeProps: RunCodeDto) {
return runSandbox(codeProps);
return runJsSandbox(codeProps);
}
@Post('/python')
@HttpCode(200)
runPython(@Body() codeProps: RunCodeDto) {
return runPythonSandbox(codeProps);
}
}

View File

@@ -6,24 +6,30 @@ import { timeDelay } from './jsFn/delay';
import { strToBase64 } from './jsFn/str2Base64';
import { createHmac } from './jsFn/crypto';
import { spawn } from 'child_process';
import { pythonScript } from './constants';
const CustomLogStr = 'CUSTOM_LOG';
/*
Rewrite code to add custom functions: Promise function; Log.
*/
function getFnCode(code: string) {
// rewrite log
code = code.replace(/console\.log/g, `${CustomLogStr}`);
export const runJsSandbox = async ({
code,
variables = {}
}: RunCodeDto): Promise<RunCodeResponse> => {
/*
Rewrite code to add custom functions: Promise function; Log.
*/
function getFnCode(code: string) {
// rewrite log
code = code.replace(/console\.log/g, `${CustomLogStr}`);
// Promise function rewrite
const rewriteSystemFn = `
// Promise function rewrite
const rewriteSystemFn = `
const thisDelay = (...args) => global_delay.applySyncPromise(undefined,args)
`;
// rewrite delay
code = code.replace(/delay\((.*)\)/g, `thisDelay($1)`);
// rewrite delay
code = code.replace(/delay\((.*)\)/g, `thisDelay($1)`);
const runCode = `
const runCode = `
(async() => {
try {
${rewriteSystemFn}
@@ -36,23 +42,18 @@ function getFnCode(code: string) {
}
})
`;
return runCode;
}
return runCode;
}
// Register global function
function registerSystemFn(jail: IsolatedVM.Reference<Record<string | number | symbol, any>>) {
return Promise.all([
jail.set('global_delay', new Reference(timeDelay)),
jail.set('countToken', countToken),
jail.set('strToBase64', strToBase64),
jail.set('createHmac', createHmac)
]);
}
// Register global function
function registerSystemFn(jail: IsolatedVM.Reference<Record<string | number | symbol, any>>) {
return Promise.all([
jail.set('global_delay', new Reference(timeDelay)),
jail.set('countToken', countToken),
jail.set('strToBase64', strToBase64),
jail.set('createHmac', createHmac)
]);
}
export const runSandbox = async ({
code,
variables = {}
}: RunCodeDto): Promise<RunCodeResponse> => {
const logData = [];
const isolate = new Isolate({ memoryLimit: 32 });
@@ -106,3 +107,50 @@ export const runSandbox = async ({
return Promise.reject(err);
}
};
export const runPythonSandbox = async ({
code,
variables = {}
}: RunCodeDto): Promise<RunCodeResponse> => {
const mainCallCode = `
data = ${JSON.stringify({ code, variables })}
res = run_pythonCode(data)
print(json.dumps(res))
`;
const fullCode = [pythonScript, mainCallCode].filter(Boolean).join('\n');
const pythonProcess = spawn('python3', ['-u', '-c', fullCode]);
const stdoutChunks: string[] = [];
const stderrChunks: string[] = [];
pythonProcess.stdout.on('data', (data) => stdoutChunks.push(data.toString()));
pythonProcess.stderr.on('data', (data) => stderrChunks.push(data.toString()));
const stdoutPromise = new Promise<string>((resolve) => {
pythonProcess.on('close', (code) => {
if (code !== 0) {
resolve(JSON.stringify({ error: stderrChunks.join('') }));
} else {
resolve(stdoutChunks.join(''));
}
});
});
const stdout = await stdoutPromise;
try {
const parsedOutput = JSON.parse(stdout);
if (parsedOutput.error) {
return Promise.reject(parsedOutput.error || 'Unknown error');
}
return { codeReturn: parsedOutput, log: '' };
} catch (err) {
if (stdout.includes('malformed node or string on line 1')) {
return Promise.reject(`The result should be a parsable variable, such as a list. ${stdout}`);
} else if (stdout.includes('Unexpected end of JSON input')) {
return Promise.reject(`Not allowed print or ${stdout}`);
}
return Promise.reject(`Run failed: ${err}`);
}
};
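`runPythonSandbox` above follows a common pattern: spawn a fresh interpreter, capture stdout/stderr, and treat stdout as JSON on a zero exit code. A sketch of the same pattern in Python itself (using `sys.executable` rather than a hard-coded `python3`; `run_snippet` is a made-up helper):

```python
import json
import subprocess
import sys


def run_snippet(code: str, timeout: float = 10) -> dict:
    """Execute a code string in a child interpreter; parse its stdout as JSON."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        # Mirror the sandbox: surface stderr as an error payload
        return {"error": proc.stderr}
    return json.loads(proc.stdout)


print(run_snippet('import json; print(json.dumps({"ok": True}))'))
```

As in the TypeScript version, any stray `print` in the child that is not valid JSON would make the final parse fail, which is why the sandbox strips prints first.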

View File

@@ -0,0 +1,41 @@
#!/bin/bash
temp_dir=$(mktemp -d)
trap 'rm -rf "$temp_dir"' EXIT
syscall_table_file="$temp_dir/syscall_table.txt"
code_file="$temp_dir/test_code.py"
strace_log="$temp_dir/strace.log"
syscalls_file="$temp_dir/syscalls.txt"
code='
import pandas as pd
def main():
data = {"Name": ["Alice", "Bob"], "Age": [25, 30]}
df = pd.DataFrame(data)
return {
"head": df.head().to_dict()
}
'
if ! ausyscall --dump > "$syscall_table_file" 2>/dev/null; then
grep -E '^#define __NR_' /usr/include/asm/unistd_64.h | \
sed 's/#define __NR_//;s/[ \t]\+/ /g' | \
awk '{print $1, $2}' > "$syscall_table_file"
fi
echo "$code" > "$code_file"
strace -ff -e trace=all -o "$strace_log" python3 "$code_file" >/dev/null 2>&1
cat "$strace_log"* 2>/dev/null | grep -oE '^[[:alnum:]_]+' | sort -u > "$syscalls_file"
allowed_syscalls=()
while read raw_name; do
go_name=$(echo "$raw_name" | tr 'a-z' 'A-Z' | sed 's/-/_/g')
allowed_syscalls+=("\"syscall.SYS_${go_name}\"")
done < "$syscalls_file"
echo "allowed_syscalls = ["
printf ' %s,\n' "${allowed_syscalls[@]}" | paste -sd ' \n'
echo "]"
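The final loop of the script just normalizes strace's syscall names into the Go-style `syscall.SYS_*` identifiers used by the allowlist. The same transformation, sketched in Python (the helper name is made up):

```python
def to_allowlist(raw_names):
    """Uppercase strace syscall names and prefix them as syscall.SYS_* entries."""
    entries = []
    for name in sorted(set(raw_names)):  # dedupe and sort, mirroring `sort -u`
        go_name = name.strip().upper().replace("-", "_")
        entries.append(f'"syscall.SYS_{go_name}"')
    return entries


print(to_allowlist(["read", "write", "clock-gettime", "read"]))
```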

View File

@@ -436,6 +436,28 @@ FastGPT is an intelligent Q&A system based on large language models (LLM), designed to pro
expect(chunks).toEqual(mock.result);
});
// Custom separator test: newline
it(`Test splitText2Chunks 1`, () => {
const mock = {
text: `111
222
333`,
result: [
`111
222`,
'333'
]
};
const { chunks } = splitText2Chunks({ customReg: ['\\n\\n'], text: mock.text, chunkSize: 2000 });
expect(chunks).toEqual(mock.result);
});
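The case above exercises custom-separator chunking: split on the configured regex, then pack adjacent pieces while they still fit the chunk size. The idea, sketched in Python (not `splitText2Chunks` itself; separator re-joining and size accounting are simplified):

```python
import re


def chunk_by_separator(text: str, sep_pattern: str, chunk_size: int) -> list[str]:
    """Split on a custom regex separator, then greedily merge pieces up to chunk_size."""
    pieces = re.split(sep_pattern, text)
    chunks: list[str] = []
    current = ""
    for piece in pieces:
        candidate = piece if not current else current + "\n" + piece
        if len(candidate) <= chunk_size:
            current = candidate  # still fits: keep packing
        else:
            if current:
                chunks.append(current)
            current = piece  # start a new chunk
    if current:
        chunks.append(current)
    return chunks


print(chunk_by_separator("111\n\n222\n\n333", r"\n\n", 7))
```

With a chunk size of 7 this yields `['111\n222', '333']`, the same shape as the mock result in the test.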
// Long code block splitting
it(`Test splitText2Chunks 7`, () => {
const mock = {