fix(core): fix strict schema generation for functions with optional args #34599
Conversation
**ccurme** left a comment:
Can you add a test that fails on master and passes here?

On line 452 above we already apply `_recursive_set_additional_properties_false`.
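For context, a minimal sketch of what a recursive helper like the `_recursive_set_additional_properties_false` named above might do. The body here is an illustrative assumption for this discussion, not the actual langchain-core implementation:

```python
def _recursive_set_additional_properties_false(schema: dict) -> dict:
    """Recursively set additionalProperties: false on every object schema.

    Illustrative sketch only; the real helper in langchain-core may differ.
    """
    if isinstance(schema, dict):
        # OpenAI strict mode requires this on every object in the schema.
        if schema.get("type") == "object":
            schema["additionalProperties"] = False
        for value in schema.values():
            if isinstance(value, dict):
                _recursive_set_additional_properties_false(value)
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, dict):
                        _recursive_set_additional_properties_false(item)
    return schema


# Hypothetical nested schema to demonstrate the recursion.
schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "filters": {
            "type": "object",
            "properties": {"limit": {"type": "integer"}},
        },
    },
}
_recursive_set_additional_properties_false(schema)
```

After the call, both the top-level object and the nested `filters` object carry `additionalProperties: false`.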
@ccurme just run the following example:

```python
import asyncio

import aiohttp
from langchain_core.messages import HumanMessage
from langchain_core.tools import StructuredTool
from langchain.agents import create_agent
from langchain.agents.structured_output import ProviderStrategy
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class StructuredResult(BaseModel):
    answer: str


class SearchInput(BaseModel):
    query: str = Field(default="", description="The search query string to find relevant information.")
    limit: int = Field(default=3, description="The number of results to return.")

    model_config = {"extra": "forbid"}


def search(aiohttp_session) -> StructuredTool:
    """Search Tool"""

    async def _search(
        query: str = "",
        limit: int = 3,
    ) -> str:
        # Just echo the query
        return f"Echo: {query} | Limit: {limit}"

    return StructuredTool.from_function(
        coroutine=_search,
        name="search-tool",
        description="Use this tool to search for information based on a query and an optional limit.",
        args_schema=SearchInput,
        return_direct=False,
    )


async def main():
    model = ChatOpenAI(model="gpt-4.1")
    async with aiohttp.ClientSession() as session:
        # Session shared across tools
        tools = [search(aiohttp_session=session)]
        agent = create_agent(
            model=model,
            tools=tools,
            system_prompt="You are a helpful assistant. Use the tools to answer user queries.",
            response_format=ProviderStrategy(schema=StructuredResult, strict=True),
        )
        input_state = {"messages": [HumanMessage(content="Search for information about AI research.")]}
        out = await agent.ainvoke(input_state)
        print(out, "out")


if __name__ == "__main__":
    asyncio.run(main())
```

This fails on master.
In strict mode, `additionalProperties` needs to be set to `False`, otherwise we get a 400 error. This is also stated in the official docs: https://platform.openai.com/docs/guides/function-calling#strict-mode
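To illustrate the shape strict mode expects, here is a hand-written example of a strict function schema for the `search-tool` above. The literal values are an illustrative assumption based on the linked docs, not output generated by langchain:

```python
# Hand-written example of an OpenAI strict-mode function schema.
strict_tool_schema = {
    "name": "search-tool",
    "strict": True,
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer"},
        },
        # Strict mode: every property listed in "required",
        # and no extra keys allowed on the object.
        "required": ["query", "limit"],
        "additionalProperties": False,
    },
}
```

Without `"additionalProperties": False` on the `parameters` object (and on every nested object), the API rejects the request with a 400 error.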