To Reproduce
I copied the corresponding example from this repository and ran it with GPT-4o, GPT-4o-2024-08-06, and GPT-4-turbo. All models show the same behavior.
Code snippets
Here is the exact code I used (an exact copy of the example from the OpenAI repo, except that it is no longer a unit test):
using System.ClientModel;
using System.ClientModel.Primitives;
using System.Text.Json;
using dotenv.net;
using OpenAI.Assistants;

#pragma warning disable OPENAI001

var env = DotEnv.Read(options: new DotEnvOptions(probeForEnv: true, probeLevelsToSearch: 7));

// This example parallels the content at the following location:
// https://platform.openai.com/docs/assistants/tools/function-calling/function-calling-beta

#region Step 1 - Define Functions

// First, define the functions that the assistant will use in its defined tools.
FunctionToolDefinition getTemperatureTool = new()
{
    FunctionName = "get_current_temperature",
    Description = "Gets the current temperature at a specific location.",
    Parameters = BinaryData.FromString("""
        {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["Celsius", "Fahrenheit"],
                    "description": "The temperature unit to use. Infer this from the user's location."
                }
            }
        }
        """),
};

FunctionToolDefinition getRainProbabilityTool = new()
{
    FunctionName = "get_current_rain_probability",
    Description = "Gets the current forecasted probability of rain at a specific location,"
        + " represented as a percent chance in the range of 0 to 100.",
    Parameters = BinaryData.FromString("""
        {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                }
            },
            "required": ["location"]
        }
        """),
};

#endregion

// Assistants is a beta API and subject to change; acknowledge its experimental status by suppressing the matching warning.
AssistantClient client = new(env["OPENAI_KEY"]!);

#region Create a new assistant with function tools

// Create an assistant that can call the function tools.
AssistantCreationOptions assistantOptions = new()
{
    Name = "Example: Function Calling",
    Instructions = "Don't make assumptions about what values to plug into functions."
        + " Ask for clarification if a user request is ambiguous.",
    Tools = { getTemperatureTool, getRainProbabilityTool },
};
Assistant assistant = await client.CreateAssistantAsync(env["OPENAI_MODEL"]!, assistantOptions);

#endregion

#region Step 2 - Create a thread and add messages

AssistantThread thread = await client.CreateThreadAsync();
ThreadMessage message = await client.CreateMessageAsync(
    thread,
    MessageRole.User,
    ["What's the weather in San Francisco today and the likelihood it'll rain?"]);

#endregion

#region Step 3 - Initiate a streaming run

AsyncCollectionResult<StreamingUpdate> asyncUpdates = client.CreateRunStreamingAsync(thread, assistant);

ThreadRun? currentRun = null;
do
{
    currentRun = null;
    List<ToolOutput> outputsToSubmit = [];
    await foreach (StreamingUpdate update in asyncUpdates)
    {
        if (update is RunUpdate runUpdate)
        {
            currentRun = runUpdate;
        }
        else if (update is RequiredActionUpdate requiredActionUpdate)
        {
            if (requiredActionUpdate.FunctionName == getTemperatureTool.FunctionName)
            {
                outputsToSubmit.Add(new ToolOutput(requiredActionUpdate.ToolCallId, "57"));
            }
            else if (requiredActionUpdate.FunctionName == getRainProbabilityTool.FunctionName)
            {
                outputsToSubmit.Add(new ToolOutput(requiredActionUpdate.ToolCallId, "25%"));
            }
        }
        else if (update is MessageContentUpdate contentUpdate)
        {
            Console.Write(contentUpdate.Text);
        }
    }
    if (outputsToSubmit.Count > 0)
    {
        asyncUpdates = client.SubmitToolOutputsToRunStreamingAsync(currentRun, outputsToSubmit);
    }
}
while (currentRun?.Status.IsTerminal == false);

#endregion

// Optionally, delete the resources for tidiness if no longer needed.
RequestOptions noThrowOptions = new() { ErrorOptions = ClientErrorBehaviors.NoThrow };
_ = await client.DeleteThreadAsync(thread.Id, noThrowOptions);
_ = await client.DeleteAssistantAsync(assistant.Id, noThrowOptions);
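For anyone tracing the failing loop without a .NET toolchain, the control flow of Step 3 above can be sketched SDK-free. All names below (`RunUpdate`, `RequiredAction`, `drive`) are hypothetical stand-ins, not the real OpenAI SDK types; the sketch only mirrors the do/while structure of the example (collect tool outputs from one stream, resume with a new stream when outputs exist, stop when the run reaches a terminal status):

```python
# SDK-free sketch of the example's streaming control flow.
# RunUpdate / RequiredAction / drive are illustrative stand-ins,
# NOT the real OpenAI .NET or Python SDK objects.
from dataclasses import dataclass

@dataclass
class RunUpdate:
    status: str  # e.g. "requires_action", "completed"

    @property
    def is_terminal(self):
        return self.status in ("completed", "failed", "cancelled", "expired")

@dataclass
class RequiredAction:
    function_name: str
    tool_call_id: str

def drive(streams):
    """Mirror the C# do/while: consume one stream per iteration, collect
    tool outputs, and 'resume' with the next stream when outputs exist."""
    submitted = []
    current_run = None
    stream_iter = iter(streams)
    updates = next(stream_iter)
    while True:
        current_run = None
        outputs = []
        for update in updates:
            if isinstance(update, RunUpdate):
                current_run = update
            elif isinstance(update, RequiredAction):
                outputs.append((update.tool_call_id, "stub-output"))
        if outputs:
            submitted.append(outputs)
            # Parallels SubmitToolOutputsToRunStreamingAsync returning a new stream.
            updates = next(stream_iter)
        # Parallels: while (currentRun?.Status.IsTerminal == false)
        if not (current_run and not current_run.is_terminal):
            break
    return submitted, current_run

# Two fake streams: the first requires a tool call, the second completes the run.
first = [RunUpdate("requires_action"), RequiredAction("get_current_temperature", "call_1")]
second = [RunUpdate("completed")]
submitted, final_run = drive([first, second])
```

With a first stream that requires action and a second that completes, the loop submits exactly one batch of tool outputs and exits on the terminal run; the reported 400 suggests the real resume request arrives while the server still considers the previous run active.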
OS
Ubuntu
.NET version
8
Library version
2.0.0-beta.11
Confirm this is not an issue with the OpenAI Python Library
Confirm this is not an issue with the underlying OpenAI API
Confirm this is not an issue with Azure OpenAI
Describe the bug
When using the C# OpenAI SDK with assistants, function calling, and streaming, an error is thrown when calling CreateRunStreamingAsync. I already described the issue in https://community.openai.com/t/error-400-already-has-an-active-run/930753/6?u=rainer1 but got no feedback. Other users are reporting this problem, too.