docSite/docs/deploy/docker.md
Normal file
@@ -0,0 +1,92 @@
# Quick deployment with docker-compose

## Proxy environment (skip for servers outside mainland China)

Pick one of the options below. These are only proxies, not the project itself.

1. [sealos nginx approach](./proxy/sealos.md) - recommended: essentially free, and nothing extra to prepare.
2. [clash approach](./proxy/clash.md) - needs only one server (with clash available)
3. [nginx approach](./proxy/nginx.md) - needs a server outside mainland China
4. [cloudflare approach](./proxy/cloudflare.md) - needs a domain (100k free proxied requests per day)
5. [Tencent Cloud Function proxy](https://github.com/easychen/openai-api-proxy/blob/master/FUNC.md) - needs only one server

## Managing a pool of openai keys

We recommend the [one-api](https://github.com/songquanpeng/one-api) project for managing a key pool; it is compatible with openai, Microsoft, and other providers. For deployment, see that project's README.md, or [Deploy one-api on Sealos in 1 minute](./one-api/sealos.md).

### 1. Prepare a few things

> 1. Open port 80 on the server. If you use a proxy, open the proxy's port as well.
> 2. QQ mail code: open QQ Mail -> Account -> apply for an SMTP account
> 3. If you have a domain, have its SSL certificate ready

### 2. Install docker and docker-compose

The steps differ slightly between systems; look up the instructions for yours. Verify the installation before moving on. Here is one example:

```bash
# install docker
curl -L https://get.daocloud.io/docker | sh
sudo systemctl start docker
# install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# verify the installation
docker -v
docker-compose -v
# If docker-compose will not run, copy the deploy/fastgpt/docker-compose folder to the server and run `sh init.sh` inside it; that copies the docker-compose file into the right directory.
```

### 3. Create 3 initialization files

They live in a fastgpt folder: fastgpt/docker-compose.yaml, fastgpt/pg/init.sql, and fastgpt/nginx/nginx.conf.
Create them by hand, or simply copy the fastgpt folder over.
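The layout above can be created in one go; a minimal sketch (using /tmp purely for illustration, and leaving the three files empty to be filled in from the repo):

```shell
# Create the expected fastgpt skeleton; the actual file contents come from the repo.
base="/tmp/fastgpt"
mkdir -p "$base/pg" "$base/nginx"
touch "$base/docker-compose.yaml" "$base/pg/init.sql" "$base/nginx/nginx.conf"
ls "$base"
```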
### 4. Run docker-compose

Below is a helper script; alternatively, just run `docker-compose up -d`.

**run.sh**

```bash
#!/bin/bash
docker-compose pull
docker-compose up -d

echo "Docker Compose images re-pulled!"

# remove stale local images
images=$(docker images --format "{{.ID}} {{.Repository}}" | grep fastgpt)

# put the image IDs and names into an array
IFS=$'\n' read -rd '' -a image_array <<<"$images"

# walk the array and delete every old image (the first, newest entry is kept)
for ((i=1; i<${#image_array[@]}; i++))
do
    image=${image_array[$i]}
    image_id=${image%% *}
    docker rmi $image_id
done
```
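The pruning loop above starts at index 1, so the first array entry, i.e. the most recently pulled image, is kept. The bash array handling can be seen in isolation on sample output (the IDs and repository name here are made up):

```shell
#!/bin/bash
# Two fake lines in the same "<id> <repository>" shape that run.sh parses.
images="aaa111 fastgpt/app
bbb222 fastgpt/app"
# Split on newlines into an array; read returns non-zero at EOF, hence the || true.
IFS=$'\n' read -rd '' -a image_array <<<"$images" || true
echo "count=${#image_array[@]}"
# Entry 0 is kept; from entry 1 onward the loop would docker-rmi the leading ID.
echo "old_id=${image_array[1]%% *}"
```

This prints `count=2` and `old_id=bbb222`.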
## FastGpt Admin

See the README.md inside the admin folder.

## Other optimizations

### Automatic image builds with GitHub Actions

.github contains an Action that builds amd64 and arm64 images automatically whenever a commit lands on the main branch. You only need to configure a personal access token in GitHub beforehand.

1. Create a personal access token: avatar -> Settings -> Developer settings (at the very bottom) -> Personal access tokens -> Tokens (classic) -> generate a new token and tick the permissions that look necessary.
2. Add the token to the repository: repository -> Settings -> Secrets and variables -> Actions -> create a secret
3. Fill in the secret: Name: GH_PAT, Secret: the token from step 1

## Other issues

### Possible problems on Mac

> Some images in this guide are not compatible with arm64, so there is a separate document to help newcomers set up fast-gpt on a mac quickly: [possible problems when deploying fastgpt on mac](./mac.md)
docSite/docs/deploy/mac.md
Normal file
@@ -0,0 +1,101 @@
## Possible problems when deploying on Mac

### Prerequisites

1. You can `curl api.openai.com`

2. You have an openai key

3. You have a mailbox MAILE_CODE

4. You have docker

```
docker -v
```

5. You have pnpm; it can be installed with `brew install pnpm`

6. You need a folder for the pg and mongo data. Here it is created at `~/fastgpt`, with two subfolders, `pg` and `mongo`:

```
➜ fastgpt pwd
/Users/jie/fastgpt
➜ fastgpt ls
mongo pg
```

### Deploying with docker

This setup is mainly for convenient debugging; you can run the fastgpt project with `pnpm dev`.

**1. The .env.local file**

```
# proxy
AXIOS_PROXY_HOST=127.0.0.1
AXIOS_PROXY_PORT_FAST=7890
AXIOS_PROXY_PORT_NORMAL=7890
# email
MY_MAIL={Your Mail}
MAILE_CODE={Your Mail code}
# aliyun sms
aliAccessKeyId=xxx
aliAccessKeySecret=xxx
aliSignName=xxx
aliTemplateCode=SMS_xxx
# token
TOKEN_KEY=sswada
# using oneapi
ONEAPI_URL=https://xxxxx.cloud.sealos.io/v1
ONEAPI_KEY=sk-xxxxxx
# openai
OPENAIKEY=sk-xxx # key used for chat
OPENAI_TRAINING_KEY=sk-xxx # key used for training
# db
MONGODB_URI=mongodb://username:password@0.0.0.0:27017/test?authSource=admin
PG_HOST=0.0.0.0
PG_PORT=8100
PG_USER=xxx
PG_PASSWORD=xxx
PG_DB_NAME=fastgpt
```
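Mistyping one of these keys is easy; here is a small sketch of a presence check (the file name and the three keys checked are just examples):

```shell
# Write a sample env file, then verify each required key is defined in it.
envfile="/tmp/env-check-example"
cat > "$envfile" <<'EOF'
AXIOS_PROXY_HOST=127.0.0.1
AXIOS_PROXY_PORT_FAST=7890
OPENAIKEY=sk-xxx
EOF
for key in AXIOS_PROXY_HOST AXIOS_PROXY_PORT_FAST OPENAIKEY; do
  grep -q "^${key}=" "$envfile" && echo "$key present" || echo "$key MISSING"
done
```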
**2. Deploy mongo**

```
docker run --name mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=username -e MONGO_INITDB_ROOT_PASSWORD=password -v ~/fastgpt/mongo/data:/data/db -d mongo:4.0.1
```

**3. Deploy pgsql**

```
docker run -it --name pg -e "POSTGRES_DB=fastgpt" -e "POSTGRES_PASSWORD=xxx" -e POSTGRES_USER=xxx -p 8100:5432 -v ~/fastgpt/pg/data:/var/lib/postgresql/data -d octoberlan/pgvector:v0.4.1
```

Enter the pgsql container and run:

```
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL

CREATE EXTENSION IF NOT EXISTS vector;
-- init table
CREATE TABLE IF NOT EXISTS modeldata (
    id BIGSERIAL PRIMARY KEY,
    vector VECTOR(1536) NOT NULL,
    user_id VARCHAR(50) NOT NULL,
    kb_id VARCHAR(50) NOT NULL,
    source VARCHAR(100),
    q TEXT NOT NULL,
    a TEXT NOT NULL
);
-- index settings, pick what you need
-- CREATE INDEX IF NOT EXISTS modeldata_userId_index ON modeldata USING HASH (user_id);
-- CREATE INDEX IF NOT EXISTS modeldata_kbId_index ON modeldata USING HASH (kb_id);
-- CREATE INDEX IF NOT EXISTS idx_model_data_md5_q_a_user_id_kb_id ON modeldata (md5(q), md5(a), user_id, kb_id);
-- CREATE INDEX modeldata_id_desc_idx ON modeldata (id DESC);
-- for the vector index, see https://github.com/pgvector/pgvector and size it to your data volume
EOSQL
```
**4. Finally, run `pnpm dev` in the FastGpt project and open localhost:3000 to check that it is running.**
docSite/docs/deploy/oneapi/index.md
Normal file
@@ -0,0 +1,36 @@
# Deploy one-api on Sealos in 1 minute

## 1. Open the Sealos public cloud

https://cloud.sealos.io/

## 2. Open the AppLaunchpad (app management) tool

![step1](./sealosImg/step1.png)

## 3. Click "create new application"

## 4. Fill in the parameters

Image: ghcr.io/songquanpeng/one-api:latest

![step2](./sealosImg/step2.png)
Once external access is switched on, Sealos assigns a reachable address automatically; there is nothing to configure yourself.

![step3](./sealosImg/step3.png)
After filling in the parameters, click deploy in the top-right corner.

## 5. Access it

Click the external address Sealos provides to open the one-api project.
![step4](./sealosImg/step4.png)
![step5](./sealosImg/step5.png)

## 6. Replace FastGpt's environment variables

```
# The address below is provided by Sealos; be sure to append v1
ONEAPI_URL=https://xxxx.cloud.sealos.io/v1
# The key below is issued by one-api
ONEAPI_KEY=sk-xxxxxx
```
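Since forgetting the /v1 suffix is a common mistake, here is a trivial guard you could run (the URL is the placeholder from above):

```shell
ONEAPI_URL="https://xxxx.cloud.sealos.io/v1"
# The FastGpt side expects the base URL to end in /v1.
case "$ONEAPI_URL" in
  */v1) echo "url ok" ;;
  *)    echo "url must end with /v1" ;;
esac
```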
BIN docSite/docs/deploy/oneapi/sealosImg/step1.png (new file, 6.0 MiB)
BIN docSite/docs/deploy/oneapi/sealosImg/step2.png (new file, 208 KiB)
BIN docSite/docs/deploy/oneapi/sealosImg/step3.png (new file, 215 KiB)
BIN docSite/docs/deploy/oneapi/sealosImg/step4.png (new file, 247 KiB)
BIN docSite/docs/deploy/oneapi/sealosImg/step5.png (new file, 128 KiB)
docSite/docs/deploy/proxy/clash.md
Normal file
@@ -0,0 +1,68 @@
# Installing clash

clash starts a proxy on the local machine. Correspondingly, you need to set two environment variables for the project:

```
AXIOS_PROXY_HOST=127.0.0.1
AXIOS_PROXY_PORT=7890
```

Note that in your config.yaml it is best to route only api.openai.com through the proxy and let every other request connect directly.

**Install clash**
```bash
# download the package
curl https://glados.rocks/tools/clash-linux.zip -o clash.zip
# unzip
unzip clash.zip
# download the terminal config file (change this to your own config file path)
curl https://update.glados-config.com/clash/98980/8f30944/70870/glados-terminal.yaml > config.yaml
# make it executable
chmod +x ./clash-linux-amd64-v1.10.0
```

**runClash.sh**
```sh
# remember to set the proxy variables:
export ALL_PROXY=socks5://127.0.0.1:7891
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
export HTTP_PROXY=http://127.0.0.1:7890
export HTTPS_PROXY=http://127.0.0.1:7890

# run script: kill clash - cd to the clash directory - clear the cache - start it again. A nohup.out file is produced where you can read clash's logs.
OLD_PROCESS=$(pgrep clash)
if [ ! -z "$OLD_PROCESS" ]; then
  echo "Killing old process: $OLD_PROCESS"
  kill $OLD_PROCESS
fi
sleep 2
cd **/clash
rm -f ./nohup.out || true
rm -f ./cache.db || true
nohup ./clash-linux-amd64-v1.10.0 -d ./ &
echo "Restart clash"
```
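The kill-then-restart guard in runClash.sh can be tried safely with a stand-in process (using `sleep` here instead of clash, and `$!` instead of pgrep):

```shell
# Start a throwaway background process and record its pid (stand-in for clash).
sleep 30 &
OLD_PROCESS=$!
if [ ! -z "$OLD_PROCESS" ]; then
  echo "Killing old process: $OLD_PROCESS"
  kill "$OLD_PROCESS"
fi
# Reap the killed process so it is really gone.
wait "$OLD_PROCESS" 2>/dev/null || true
echo "guard done"
```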
**Example config.yaml**
```yaml
mixed-port: 7890
allow-lan: false
bind-address: '*'
mode: rule
log-level: warning
dns:
  enable: true
  ipv6: false
  nameserver:
    - 8.8.8.8
    - 8.8.4.4
  cache-size: 400
proxies:
  -
proxy-groups:
  - { name: '♻️ 自动选择', type: url-test, proxies: [香港V01×1.5], url: 'https://api.openai.com', interval: 3600}
rules:
  - 'DOMAIN-SUFFIX,api.openai.com,♻️ 自动选择'
  - 'MATCH,DIRECT'
```
docSite/docs/deploy/proxy/cloudflare.md
Normal file
@@ -0,0 +1,46 @@
# cloudflare proxy configuration

[From the tutorial by "不做了睡觉"](https://gravel-twister-d32.notion.site/FastGPT-API-ba7bb261d5fd4fd9bbb2f0607dacdc9e)

**workers configuration file**

```js
const TELEGRAPH_URL = 'https://api.openai.com';

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // auth check
  if (request.headers.get('auth') !== 'auth_code') {
    return new Response('Unauthorized', { status: 403 });
  }

  const url = new URL(request.url);
  url.host = TELEGRAPH_URL.replace(/^https?:\/\//, '');

  const modifiedRequest = new Request(url.toString(), {
    headers: request.headers,
    method: request.method,
    body: request.body,
    redirect: 'follow'
  });

  const response = await fetch(modifiedRequest);
  const modifiedResponse = new Response(response.body, response);

  // add a CORS response header
  modifiedResponse.headers.set('Access-Control-Allow-Origin', '*');

  return modifiedResponse;
}
```

**Matching environment variables**
Be sure not to forget the v1:

```
OPENAI_BASE_URL=https://xxxxxx/v1
OPENAI_BASE_URL_AUTH=auth_code
```
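The worker's `url.host = TELEGRAPH_URL.replace(...)` line just strips the scheme so that the incoming path is re-sent to api.openai.com. The same rewrite, sketched in shell:

```shell
# Strip the scheme, as the worker does before swapping the request host.
TELEGRAPH_URL="https://api.openai.com"
host="${TELEGRAPH_URL#http://}"
host="${host#https://}"
echo "$host"   # api.openai.com
```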
BIN docSite/docs/deploy/proxy/imgs/sealos1.png (new file, 2.9 MiB)
BIN docSite/docs/deploy/proxy/imgs/sealos2.png (new file, 205 KiB)
BIN docSite/docs/deploy/proxy/imgs/sealos3.png (new file, 152 KiB)
BIN docSite/docs/deploy/proxy/imgs/sealos4.png (new file, 88 KiB)
BIN docSite/docs/deploy/proxy/imgs/sealos5.png (new file, 224 KiB)
docSite/docs/deploy/proxy/nginx.md
Normal file
@@ -0,0 +1,73 @@
# Reverse-proxying the openai API with nginx

If you have a server outside mainland China, you can configure nginx as a reverse proxy that forwards openai requests, so that servers inside China can reach the openai API through that nginx instance.

```conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 51200;

events {
    worker_connections 1024;
}

http {
    resolver 8.8.8.8;
    proxy_ssl_server_name on;

    access_log off;
    server_names_hash_bucket_size 512;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 50M;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 8k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_types text/plain application/x-javascript text/css application/javascript application/json application/xml;
    gzip_disable "MSIE [1-6]\.";

    open_file_cache max=1000 inactive=1d;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 8;
    open_file_cache_errors off;

    server {
        listen 443 ssl;
        server_name your_host;
        ssl_certificate /ssl/your_host.pem;
        ssl_certificate_key /ssl/your_host.key;
        ssl_session_timeout 5m;

        location ~ /openai/(.*) {
            # auth check against the `auth` request header
            if ($http_auth != "xxxxxx") {
                return 403;
            }

            proxy_pass https://api.openai.com/$1$is_args$args;
            proxy_set_header Host api.openai.com;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # streaming responses
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;
            proxy_buffering off;
            proxy_cache off;
            # regular responses
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }
    server {
        listen 80;
        server_name ai.fastgpt.run;
        rewrite ^(.*) https://$server_name$1 permanent;
    }
}
```
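To see what the `location ~ /openai/(.*)` capture does to a request path, here is the mapping sketched in shell:

```shell
# A client request path with the /openai/ prefix...
uri="/openai/v1/chat/completions"
# ...is forwarded with the prefix stripped, which is what proxy_pass .../$1 produces.
echo "https://api.openai.com/${uri#/openai/}"   # https://api.openai.com/v1/chat/completions
```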
docSite/docs/deploy/proxy/sealos.md
Normal file
@@ -0,0 +1,100 @@
# Deploying an openai relay on sealos

## Log in to sealos cloud

[sealos cloud](https://cloud.sealos.io/)

## Create the application

Open App Launchpad -> create new application

![step1](./imgs/sealos1.png)
![step2](./imgs/sealos2.png)

### Enable external access

![step3](./imgs/sealos3.png)

### Add the configmap file

1. Copy the code below, and replace the value after `server_name` with the address shown in the screenshot above.

```
user nginx;
worker_processes auto;
worker_rlimit_nofile 51200;

events {
    worker_connections 1024;
}

http {
    resolver 8.8.8.8;
    proxy_ssl_server_name on;

    access_log off;
    server_names_hash_bucket_size 512;
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
    client_max_body_size 50M;

    proxy_connect_timeout 240s;
    proxy_read_timeout 240s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;

    server {
        listen 80;
        server_name tgohwtdlrmer.cloud.sealos.io; # replace this with the address Sealos provides

        location ~ /openai/(.*) {
            # auth check
            if ($http_auth != "auth") { # auth code
                return 403;
            }

            proxy_pass https://api.openai.com/$1$is_args$args;
            proxy_set_header Host api.openai.com;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # for streaming responses
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;
            proxy_buffering off;
            proxy_cache off;
            # for regular responses
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }
}
```

2. Open the advanced configuration
3. Click "add configmap"
4. File name: `/etc/nginx/nginx.conf`
5. File value: the code you just copied
6. Click confirm

![step4](./imgs/sealos4.png)

### Deploy the application

Once everything is filled in, click `Deploy` in the top-right corner and you are done.

## Update the FastGpt environment variables

1. Open the details page of the app you just deployed and copy the external address

![step5](./imgs/sealos5.png)

2. Update the environment variables (FastGpt's, not sealos's):

```
OPENAI_BASE_URL=https://tgohwtdlrmer.cloud.sealos.io/openai/v1
OPENAI_BASE_URL_AUTH=auth
```
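Putting the two values together: a FastGpt request to `${OPENAI_BASE_URL}/chat/completions` travels through the nginx above, which strips the /openai prefix before calling the upstream. Sketched in shell:

```shell
OPENAI_BASE_URL="https://tgohwtdlrmer.cloud.sealos.io/openai/v1"
endpoint="/chat/completions"
echo "client request: ${OPENAI_BASE_URL}${endpoint}"
# nginx strips everything up to /openai/, leaving v1 for the upstream call:
path="${OPENAI_BASE_URL#*/openai/}"
echo "upstream call:  https://api.openai.com/${path}${endpoint}"
```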
**Done!**