Using streamText with a custom API instead of a known model provider #4070
-
Hi everyone, I’m working on a project where I’d like to use the streamText function from the AI SDK, but not with any of the standard providers like OpenAI or Anthropic. Instead, I want to connect it to a custom API that behaves similarly to OpenAI’s streaming endpoint. Here’s a brief overview of my setup:
My goal is that, in the chatWithAgent function, I want to use streamText to handle the streaming response from agent-handler (Web Service B). However, instead of passing a standard provider (e.g., OpenAI), I need to simulate the behavior of a model using my custom API (agent-handler). The current streamText API seems to require a model object, but in my case I don't have a traditional model; I want to stream data directly from my agent-handler service.

Is there a way to integrate streamText with a custom API like mine? If not, are there alternative approaches to achieve this within the AI SDK? Here's what my chatWithAgent function ideally looks like:

```ts
export const chatWithAgent = async (): Promise<StreamTextResult<Record<string, CoreTool>>> => {
  // ...
  const { textStream } = streamText({
    model: {
      // Instead of using a standard model, I want to use my
      // custom agent-handler service here
      async doStream() {
        return await fetch(`URL_AGENT_HANDLER/chat/completions`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            messages: chatMessages,
            model: 'gpt-3.5-turbo',
            stream: true,
            temperature: 0.7,
            max_tokens: 1000,
          }),
        });
      },
    },
    messages: chatMessages,
    onCompletion: async (completion: string) => {
      // Update conversation once the response is fully received
    },
  });
  // ...
};
```

Is it possible to use streamText without a standard provider, and instead connect it to a custom API that streams data? If yes, how can I configure the model object to make this work? Any guidance, examples, or alternative approaches would be greatly appreciated. Thank you in advance!
-
If your endpoint is fully OpenAI compatible, you can use the openai provider and set a custom baseURL. See https://sdk.vercel.ai/providers/openai-compatible-providers
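For example, a minimal sketch assuming agent-handler speaks the OpenAI chat completions protocol; the baseURL, API key handling, and model id below are placeholders, not values from this thread:

```ts
// Sketch: point the openai provider at a custom OpenAI-compatible
// endpoint (Web Service B) instead of api.openai.com.
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const agentHandler = createOpenAI({
  baseURL: 'https://your-agent-handler.example.com/v1', // placeholder URL
  apiKey: process.env.AGENT_HANDLER_API_KEY ?? 'unused', // if your backend ignores keys
});

export async function chatWithAgent(
  chatMessages: { role: 'user' | 'assistant'; content: string }[],
) {
  const { textStream } = streamText({
    // Whatever model id your backend expects; it is forwarded as-is.
    model: agentHandler('gpt-3.5-turbo'),
    messages: chatMessages,
    temperature: 0.7,
  });
  return textStream;
}
```

streamText then consumes the backend's SSE stream exactly as it would from OpenAI, so textStream, onFinish, etc. all work normally.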
-
What about if it is a custom API from a certain backend?
-
I do not know. I was also trying to use the AI SDK with my custom APIs from a certain backend, instead of managing the whole application (backend and frontend) in Next.js.
On Mon, Jan 27, 2025 at 7:31 PM Khel wrote:

> @lgrammel Is there a clear guide/docs on how to do this? I want to use the AI SDK for streaming with a specific backend API that is SSE-ready.
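I'm not aware of a single official guide for this exact case, but if the backend streams OpenAI-style SSE chunks and is not fully provider-compatible, one low-level option is to fetch the endpoint yourself and parse each `data:` line. A minimal sketch; the helper name and chunk shape are my assumptions, not AI SDK API:

```ts
// Hypothetical helper (not part of the AI SDK): extract the text delta
// from one OpenAI-style SSE line, e.g.
//   data: {"choices":[{"delta":{"content":"Hi"}}]}
export function parseSseDelta(line: string): string | null {
  if (!line.startsWith('data:')) return null;      // not an SSE data line
  const payload = line.slice('data:'.length).trim();
  if (payload === '[DONE]') return null;           // end-of-stream sentinel
  try {
    const parsed = JSON.parse(payload);
    return parsed.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;                                   // ignore malformed chunks
  }
}
```

You would read the response body with a stream reader, split on newlines, and feed each line through a parser like this to accumulate the assistant's reply.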