Confirm this is a feature request for the .NET library and not the underlying OpenAI API
This is a feature request for the .NET library
Describe the feature or improvement you are requesting
LLM providers like Groq, Mistral, and Cerebras expose OpenAI-compatible APIs, but these still use the legacy `max_tokens` field.
It would be great if this library could remain backward compatible with these legacy APIs. Right now, we can't use the latest SDK against them because they error out if you send `max_completion_tokens`.
> ChatCompletionOptions will automatically apply its MaxOutputTokenCount value (renamed from MaxTokens) to the new max_completion_tokens request body property
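For reference, the incompatibility comes down to a single property name in the serialized request body. A sketch of the two shapes (illustrative values; field names per the OpenAI Chat Completions API):

```json
{
  "comment_legacy": "what Groq/Mistral/Cerebras-style endpoints accept",
  "legacy_request": { "model": "some-model", "max_tokens": 1024 },

  "comment_current": "what the current SDK emits, rejected by those endpoints",
  "current_request": { "model": "some-model", "max_completion_tokens": 1024 }
}
```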
Additional context
No response
I use this library with OpenAI, Azure OpenAI, and Azure AI Inference.
Azure AI Inference returns errors when certain unsupported parameters are passed.
Currently, I work around this by using reflection to set `_deprecatedMaxTokens` and to set `StreamOptions` to null.
```csharp
using System.Reflection;
using System.Runtime.CompilerServices;
using OpenAIChatCompletionOptions = OpenAI.Chat.ChatCompletionOptions;
using OpenAIChatToolChoice = OpenAI.Chat.ChatToolChoice;

public static class OpenAISdkHelper
{
    // Calls the non-public setter so the legacy max_tokens value is serialized.
    [UnsafeAccessor(UnsafeAccessorKind.Method, Name = "set__deprecatedMaxTokens")]
    public static extern void SetMaxTokens(OpenAIChatCompletionOptions options, int? deprecatedMaxTokens);

    // Exposes the private backing field of the predefined tool-choice value.
    [UnsafeAccessor(UnsafeAccessorKind.Field, Name = "_predefinedValue")]
    public static extern ref string? GetToolChoicePredefinedValue(OpenAIChatToolChoice toolChoice);

    private static PropertyInfo? StreamOptionsProperty { get; } =
        typeof(OpenAIChatCompletionOptions).GetProperty("StreamOptions", BindingFlags.NonPublic | BindingFlags.Instance)!;

    public static void SetStreamOptionsToNull(OpenAIChatCompletionOptions options)
    {
        StreamOptionsProperty?.SetValue(options, null);
    }
}
```
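For anyone hitting the same problem, usage would look something like the sketch below. This is only how I wire it up; whether setting `_deprecatedMaxTokens` suppresses `max_completion_tokens` depends on SDK internals that may change between releases, and the 1024 limit is just an example value.

```csharp
var options = new OpenAI.Chat.ChatCompletionOptions();

// Write the private _deprecatedMaxTokens value so the request serializes
// the legacy "max_tokens" property instead of "max_completion_tokens".
OpenAISdkHelper.SetMaxTokens(options, 1024);

// Clear StreamOptions for endpoints that reject the stream_options parameter.
OpenAISdkHelper.SetStreamOptionsToNull(options);
```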