📑 [DEMO] This is a demo PR of how to add an OpenAI Compatible Provider #2804

Draft · wants to merge 5 commits into main
5 changes: 5 additions & 0 deletions src/app/(main)/settings/llm/ProviderList/providers.tsx
Add the provider's config entry in this file. Once added, the provider shows up in the settings list.

@@ -31,6 +31,7 @@ import {
OpenRouterProviderCard,
PerplexityProviderCard,
QwenProviderCard,
StepfunProviderCard,
TogetherAIProviderCard,
ZeroOneProviderCard,
ZhiPuProviderCard,
@@ -135,6 +136,10 @@ export const useProviderList = (): ProviderItem[] => {
...ZeroOneProviderCard,
title: <ZeroOne.Text size={20} />,
},
{
...StepfunProviderCard,
title: <Stepfun.Combine size={20} type={'mono'} />,
},
],
[azureProvider, ollamaProvider, bedrockProvider],
);
7 changes: 7 additions & 0 deletions src/app/api/chat/agentRuntime.ts
@arvinxx commented on Jun 8, 2024:

This file is the server-side implementation invoked by /chat/xxx. It has to accept both the API key passed in by the client and the API_KEY env var configured on the server. (A hedged sketch of the key-picking precedence follows the diff.)

@@ -163,6 +163,13 @@

const apiKey = apiKeyManager.pick(payload?.apiKey || QWEN_API_KEY);

return { apiKey };
}
case ModelProvider.Stepfun: {
const { STEPFUN_API_KEY } = getLLMConfig();

const apiKey = apiKeyManager.pick(payload?.apiKey || STEPFUN_API_KEY);

return { apiKey };
}
}
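For illustration, here is a hedged sketch of the key-picking precedence this diff implements; it is not the actual ApiKeyManager source. The client-supplied key wins, otherwise the server env key is used, and that env key may hold several comma-separated keys selected according to API_KEY_SELECT_MODE (visible in the llm.ts diff below):

// Sketch only; the name pickApiKey and the behavior are assumptions, not the repo's code.
const pickApiKey = (clientKey: string | undefined, envKey: string = ''): string => {
  // the client key takes priority, mirroring `payload?.apiKey || STEPFUN_API_KEY` above
  const pool = (clientKey || envKey).split(',').filter(Boolean);
  // assume the 'random' select mode; a round-robin mode would track an index instead
  return pool[Math.floor(Math.random() * pool.length)] ?? '';
};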
6 changes: 6 additions & 0 deletions src/config/llm.ts

Add the server-side API-key environment variables here. There are usually two per provider: ENABLED_XXX and XXX_API_KEY. (An example env entry follows the diff.)

@@ -117,6 +117,9 @@ export const getLLMConfig = () => {

ENABLED_QWEN: z.boolean(),
QWEN_API_KEY: z.string().optional(),

ENABLED_STEPFUN: z.boolean(),
STEPFUN_API_KEY: z.string().optional(),
},
runtimeEnv: {
API_KEY_SELECT_MODE: process.env.API_KEY_SELECT_MODE,
@@ -188,6 +191,9 @@ export const getLLMConfig = () => {

ENABLED_QWEN: !!process.env.QWEN_API_KEY,
QWEN_API_KEY: process.env.QWEN_API_KEY,

ENABLED_STEPFUN: !!process.env.STEPFUN_API_KEY,
STEPFUN_API_KEY: process.env.STEPFUN_API_KEY,
},
});
};
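Because ENABLED_STEPFUN is derived as !!process.env.STEPFUN_API_KEY, setting the key alone enables the provider on the server. A deployment would add a single env entry (the value below is a placeholder):

# .env
STEPFUN_API_KEY=sk-your-stepfun-key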
4 changes: 4 additions & 0 deletions src/config/modelProviders/index.ts

src/config/modelProviders stores each provider's metadata: which models it offers, whether proxyURL is enabled, whether fetching the model list is supported, and so on.

@@ -14,6 +14,7 @@ import OpenAIProvider from './openai';
import OpenRouterProvider from './openrouter';
import PerplexityProvider from './perplexity';
import QwenProvider from './qwen';
import StepfunProvider from './stepfun';
import TogetherAIProvider from './togetherai';
import ZeroOneProvider from './zeroone';
import ZhiPuProvider from './zhipu';
@@ -35,6 +36,7 @@ export const LOBE_DEFAULT_MODEL_LIST: ChatModelCard[] = [
PerplexityProvider.chatModels,
AnthropicProvider.chatModels,
ZeroOneProvider.chatModels,
StepfunProvider.chatModels,
].flat();

export const DEFAULT_MODEL_PROVIDER_LIST = [
@@ -55,6 +57,7 @@ export const DEFAULT_MODEL_PROVIDER_LIST = [
MoonshotProvider,
ZeroOneProvider,
ZhiPuProvider,
StepfunProvider,
];

export const filterEnabledModels = (provider: ModelProviderCard) => {
@@ -75,6 +78,7 @@ export { default as OpenAIProviderCard } from './openai';
export { default as OpenRouterProviderCard } from './openrouter';
export { default as PerplexityProviderCard } from './perplexity';
export { default as QwenProviderCard } from './qwen';
export { default as StepfunProviderCard } from './stepfun';
export { default as TogetherAIProviderCard } from './togetherai';
export { default as ZeroOneProviderCard } from './zeroone';
export { default as ZhiPuProviderCard } from './zhipu';
38 changes: 38 additions & 0 deletions src/config/modelProviders/stepfun.ts

The metadata for this provider. (A short note on the fields follows the file.)

@@ -0,0 +1,38 @@
import { ModelProviderCard } from '@/types/llm';

// ref https://platform.stepfun.com/docs/llm/text
const Stepfun: ModelProviderCard = {
chatModels: [
{
enabled: true,
id: 'step-1-256k',
tokens: 262_144,
},
{
enabled: true,
id: 'step-1-128k',
tokens: 131_072,
},
{
enabled: true,
id: 'step-1-32k',
tokens: 32_768,
},
{
enabled: true,
id: 'step-1v-32k',
tokens: 32_768,
vision: true,
},
{
id: 'step-1-8k',
tokens: 8192,
},
],
checkModel: 'step-1-8k',
id: 'stepfun',
modelList: { showModelFetcher: true },
name: '阶跃星辰',
};

export default Stepfun;
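A note on the fields above: tokens is the model's context window size, enabled marks the models displayed by default, vision flags image-input support, and checkModel is presumably the cheap model used for the connectivity check in settings.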
5 changes: 5 additions & 0 deletions src/const/settings/llm.ts

The default values of the LLM config. enabled controls whether the provider is displayed by default; enabledModels holds the keys of the models displayed by default, e.g. ["gpt-3.5-turbo", "gpt-4o"]. (A sketch of filterEnabledModels follows the diff.)

@@ -12,6 +12,7 @@ import {
OpenRouterProviderCard,
PerplexityProviderCard,
QwenProviderCard,
StepfunProviderCard,
TogetherAIProviderCard,
ZeroOneProviderCard,
ZhiPuProviderCard,
@@ -77,6 +78,10 @@ export const DEFAULT_LLM_CONFIG: UserModelProviderConfig = {
enabled: false,
enabledModels: filterEnabledModels(QwenProviderCard),
},
stepfun: {
enabled: false,
enabledModels: filterEnabledModels(StepfunProviderCard),
},
togetherai: {
enabled: false,
enabledModels: filterEnabledModels(TogetherAIProviderCard),
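As referenced above, here is a minimal sketch of what filterEnabledModels presumably does, inferred from its signature in the modelProviders diff and its usage here; it is not copied from the source:

import { ModelProviderCard } from '@/types/llm';

// Assumption: returns the ids of the models flagged `enabled` on the provider card.
const filterEnabledModels = (provider: ModelProviderCard) =>
  provider.chatModels.filter((model) => model.enabled).map((model) => model.id);

// For the Stepfun card defined earlier this would yield:
// ['step-1-256k', 'step-1-128k', 'step-1-32k', 'step-1v-32k']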
7 changes: 7 additions & 0 deletions src/libs/agent-runtime/AgentRuntime.ts

libs/agent-runtime is a module we plan to extract into a standalone package. Its AgentRuntime is a universal runtime: pass in a provider name and it calls that provider's service. This file therefore needs the matching constructor parameter added and the runtime instantiated. (A hedged usage sketch follows the diff.)

@@ -17,6 +17,7 @@
import { LobeOpenRouterAI } from './openrouter';
import { LobePerplexityAI } from './perplexity';
import { LobeQwenAI } from './qwen';
import { LobeStepfunAI } from './stepfun';
import { LobeTogetherAI } from './togetherai';
import {
ChatCompetitionOptions,
@@ -114,6 +115,7 @@
openrouter: Partial<ClientOptions>;
perplexity: Partial<ClientOptions>;
qwen: Partial<ClientOptions>;
stepfun: Partial<ClientOptions>;
togetherai: Partial<ClientOptions>;
zeroone: Partial<ClientOptions>;
zhipu: Partial<ClientOptions>;
@@ -212,6 +214,11 @@
runtimeModel = new LobeQwenAI(params.qwen ?? {});
break;
}

case ModelProvider.Stepfun: {
runtimeModel = new LobeStepfunAI(params.stepfun ?? {});
break;
}

}

return new AgentRuntime(runtimeModel);
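As mentioned above, a hedged usage sketch of the runtime this switch feeds. The static initializer name and the payload shape are assumptions for illustration, not confirmed by this diff:

// Hypothetical call shape: pass the provider id plus its ClientOptions slice;
// the switch above then picks LobeStepfunAI and wraps it in an AgentRuntime.
const runtime = await AgentRuntime.initializeWithProviderOptions(ModelProvider.Stepfun, {
  stepfun: { apiKey: 'sk-...' },
});
const response = await runtime.chat({
  messages: [{ content: 'Hello', role: 'user' }],
  model: 'step-1-32k',
  temperature: 0.7,
});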
10 changes: 10 additions & 0 deletions src/libs/agent-runtime/stepfun/index.ts

The runtime implementation of this model provider. For OpenAI-compatible providers we provide the LobeOpenAICompatibleFactory class to create the runtime quickly; other, more idiosyncratic providers such as Wenxin Yiyan (文心一言) or Google need a dedicated implementation.

@@ -0,0 +1,10 @@
import { ModelProvider } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

export const LobeStepfunAI = LobeOpenAICompatibleFactory({
baseURL: 'https://api.stepfun.com/v1',
debug: {
chatCompletion: () => process.env.DEBUG_STEPFUN_CHAT_COMPLETION === '1',
},
provider: ModelProvider.Stepfun,
});
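Given this wiring, setting DEBUG_STEPFUN_CHAT_COMPLETION=1 in the environment turns on debug logging for Stepfun chat completions.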
1 change: 1 addition & 0 deletions src/libs/agent-runtime/types/type.ts

Add the model identifier here; this ModelProvider enum covers every provider type we support.

@@ -36,6 +36,7 @@ export enum ModelProvider {
OpenRouter = 'openrouter',
Perplexity = 'perplexity',
Qwen = 'qwen',
Stepfun = 'stepfun',
TogetherAI = 'togetherai',
ZeroOne = 'zeroone',
ZhiPu = 'zhipu',
3 changes: 3 additions & 0 deletions src/server/globalConfig/index.ts

This module sends the server-side configuration to the front end. When a user configures a provider's API key in env, they naturally expect that provider to be enabled by default, so this state has to be sent from the server to the client.

@@ -33,6 +33,7 @@ export const getServerGlobalConfig = () => {
ENABLED_MINIMAX,
ENABLED_MISTRAL,
ENABLED_QWEN,
ENABLED_STEPFUN,

ENABLED_AZURE_OPENAI,
AZURE_MODEL_LIST,
@@ -104,6 +105,8 @@ export const getServerGlobalConfig = () => {
perplexity: { enabled: ENABLED_PERPLEXITY },
qwen: { enabled: ENABLED_QWEN },

stepfun: { enabled: ENABLED_STEPFUN },

togetherai: {
enabled: ENABLED_TOGETHERAI,
enabledModels: extractEnabledModels(TOGETHERAI_MODEL_LIST),
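Note that stepfun sends only the enabled flag here; unlike togetherai, no server-side model-list env var (a hypothetical STEPFUN_MODEL_LIST equivalent) is wired up in this demo.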
1 change: 1 addition & 0 deletions src/types/user/settings/keyVaults.ts

keyVaults stores the provider API keys users enter themselves. In the server-side DB implementation, keyVaults is stored encrypted. (A presumed vault shape is sketched after the diff.)

@@ -31,6 +31,7 @@ export interface UserKeyVaults {
password?: string;
perplexity?: OpenAICompatibleKeyVault;
qwen?: OpenAICompatibleKeyVault;
stepfun?: OpenAICompatibleKeyVault;
togetherai?: OpenAICompatibleKeyVault;
zeroone?: OpenAICompatibleKeyVault;
zhipu?: OpenAICompatibleKeyVault;
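As noted above, here is a presumed shape for OpenAICompatibleKeyVault, inferred from its name and from what an OpenAI-compatible provider needs; the actual definition is not part of this diff:

// Assumption: the vault holds the user's API key plus an optional endpoint override.
interface OpenAICompatibleKeyVault {
  apiKey?: string;
  baseURL?: string;
}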