
When adding an Azure OpenAI model in a prompt, the model & modelName variables are always set to gpt-3.5-turbo #8982 #8983

@s-nuyens

Checked other resources

  • This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { RunnableSequence } from "@langchain/core/runnables";
import { pull } from "langchain/hub";
import { AzureChatOpenAI } from "@langchain/openai";
import { config } from "dotenv";
config();

// Pull the prompt (including its attached model) from LangSmith.
const promptRef = "dev_test:dev";
const runnable: RunnableSequence = await pull<RunnableSequence>(promptRef, {
    apiKey: process.env.LANGSMITH_API_KEY,
    includeModel: true,
});

// Extract the AzureChatOpenAI step from the pulled sequence.
let runnableModel: AzureChatOpenAI | undefined;
for (const step of runnable.steps) {
    if (step instanceof AzureChatOpenAI) {
        runnableModel = step;
    }
}

// Log the deployment name alongside the model identifiers.
if (runnableModel) {
    console.log("AzureOpenAI DeploymentName: \t", runnableModel.azureOpenAIApiDeploymentName);
    console.log("Model Name: \t\t\t", runnableModel.modelName);
    console.log("Model: \t\t\t\t", runnableModel.model);
}

Error Message and Stack Trace (if applicable)

No response

Description

I defined the following prompt with an Azure OpenAI model.
[Screenshot: prompt configuration with the Azure OpenAI model]

Notice that there is no option to define the model or modelName.
When retrieving the RunnableSequence and extracting the model (see the code example above), the output shows that the deployment name is gpt-4o-mini, but model and modelName are set to gpt-3.5-turbo.

[Screenshot: console output of the code example]
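
Reconstructed from that screenshot (values as described above; formatting approximate):

AzureOpenAI DeploymentName:  gpt-4o-mini
Model Name:                  gpt-3.5-turbo
Model:                       gpt-3.5-turbo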

I think this value is hardcoded, since I found no way to override it in the Prompts tab of the LangGraph Platform. The only way to change the model and modelName variables is to set them manually in code and rebuild the LLM used to invoke the prompt.
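
That workaround looks roughly like this (a minimal sketch: "gpt-4o-mini" is the model my deployment actually serves, and overwriting the public fields directly is an assumption, not a documented API):

// Workaround sketch: overwrite the hardcoded identifiers on the extracted
// model so they match what the Azure deployment actually serves.
// "gpt-4o-mini" is specific to my deployment.
if (runnableModel) {
    runnableModel.model = "gpt-4o-mini";
    runnableModel.modelName = "gpt-4o-mini"; // deprecated alias of `model`
}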

System Info

"@langchain/langgraph-cli": "0.0.52",
"@langchain/community": "^0.3.53",
"@langchain/core": "0.3.72",
"@langchain/langgraph": "0.4.6",
"@langchain/langgraph-sdk": "^0.0.109",
"@langchain/langgraph-supervisor": "0.0.18",
"@langchain/mcp-adapters": "^0.5.3",
"@langchain/openai": "0.5.18",
"@langchain/tavily": "0.1.5",
"langchain": "^0.3.31",
"langsmith": "^0.3.64",
"typescript": "^5"
