fix(local-models): support OpenAI-style tool calls #1512
Alexxigang wants to merge 3 commits into agentscope-ai:main
… calls
Improve argument handling
Add unit tests for LocalChatModel and tool call parsing.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the local model's ability to handle tool calls by introducing compatibility with OpenAI's nested function format. It addresses issues where tool call arguments were not correctly parsed or streamed, leading to more robust and reliable tool integration for local backends. The changes ensure that tool calls are accurately interpreted and delivered, especially in streaming contexts, by preserving critical metadata and delaying output until complete.

Highlights
Code Review
This pull request successfully adds support for OpenAI-style tool calls, including nested function calls and preserving tool call IDs. The changes in tag_parser.py correctly handle different tool call formats, and the updates in chat_model.py improve streaming logic for fragmented tool calls. The addition of regression tests is also a great improvement. I've found a couple of areas where robustness can be improved to handle edge cases, detailed in my comments.
```python
if isinstance(arguments, dict):
    return arguments, json.dumps(arguments, ensure_ascii=False)
```
The json.dumps call for dictionary arguments is not wrapped in a try-except block. If the dictionary contains non-serializable types (e.g., datetime objects), this will raise an unhandled TypeError, causing the application to crash. This is inconsistent with the error handling for other types within the same function.
Suggested change:

```python
if isinstance(arguments, dict):
    try:
        return arguments, json.dumps(arguments, ensure_ascii=False)
    except TypeError:
        return arguments, ""
```
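To illustrate why the guard matters, here is a minimal standalone sketch (the `serialize_arguments` helper name is hypothetical, not the PR's actual function): `json.dumps` raises `TypeError` on values such as `datetime` objects, so the suggested `try`/`except` falls back to an empty string instead of crashing.

```python
import json
from datetime import datetime


def serialize_arguments(arguments):
    # Hypothetical helper mirroring the suggested guard: return the value
    # plus its JSON form, falling back to "" when serialization fails.
    if isinstance(arguments, dict):
        try:
            return arguments, json.dumps(arguments, ensure_ascii=False)
        except TypeError:
            # e.g. datetime values are not JSON-serializable by default
            return arguments, ""
    return arguments, str(arguments)


ok = serialize_arguments({"q": "weather"})        # serializes normally
bad = serialize_arguments({"ts": datetime.now()})  # hits the TypeError branch
```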
```python
if idx not in tool_calls:
    tool_calls[idx] = {
        "id": tc.get("id", f"call_{idx}"),
        "name": (tc.get("function") or {}).get("name", ""),
        "arguments": "",
    }
tool_calls[idx]["arguments"] += (tc.get("function") or {}).get(
    "arguments",
) or ""
if tc.get("id"):
    tool_calls[idx]["id"] = tc["id"]
```
The current logic for handling tool call IDs can result in an empty string ID if the model provides id: "". This is inconsistent with the logic in tag_parser.py, which generates a new ID in such cases. An empty ID might cause issues downstream. The logic can also be simplified to be more readable and robust.
Suggested change:

```python
if idx not in tool_calls:
    tool_calls[idx] = {
        "id": f"call_{idx}",
        "name": "",
        "arguments": "",
    }
if tc.get("id"):
    tool_calls[idx]["id"] = tc["id"]
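As a self-contained sketch of the accumulation pattern under discussion (chunk shapes assumed to follow OpenAI's streaming delta format; the function name is hypothetical), fragments are merged per index, a synthetic `id` is seeded and then overridden only by a non-empty stream `id`:

```python
def accumulate_tool_calls(deltas):
    """Merge streamed tool-call fragments, keyed by index.

    Each delta is assumed to look like OpenAI's streaming format:
    {"index": 0, "id": ..., "function": {"name": ..., "arguments": ...}}
    where any field may be missing or partial in a given chunk.
    """
    tool_calls = {}
    for tc in deltas:
        idx = tc.get("index", 0)
        if idx not in tool_calls:
            # Seed a synthetic id; a real id from the stream overrides it.
            tool_calls[idx] = {"id": f"call_{idx}", "name": "", "arguments": ""}
        if tc.get("id"):  # skips both missing and empty-string ids
            tool_calls[idx]["id"] = tc["id"]
        fn = tc.get("function") or {}
        if fn.get("name"):
            tool_calls[idx]["name"] = fn["name"]
        tool_calls[idx]["arguments"] += fn.get("arguments") or ""
    return tool_calls


# Fragments: the name arrives first, arguments trickle in, the id arrives late.
chunks = [
    {"index": 0, "function": {"name": "get_weather"}},
    {"index": 0, "function": {"arguments": '{"city": '}},
    {"index": 0, "id": "call_xyz", "function": {"arguments": '"Paris"}'}},
]
merged = accumulate_tool_calls(chunks)
```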
Summary
- Handle OpenAI-style `<tool_call>` payloads, preserving `id` and raw arguments when parsing local model responses
- Delay streamed `tool_use` blocks until `function.name` is available

Why

`tag_parser.py` only handled the flat `name`/`arguments` shape, so OpenAI-style payloads such as `{"function": {"name": ..., "arguments": ...}}` were dropped. `chat_model.py` could also emit an invalid streamed `tool_use` block with an empty name when the first chunk only contained partial arguments and the function name arrived later.

Related Issue: Fixes #1455
Also addresses #1456
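The two payload shapes described above can be sketched as follows (the `normalize_tool_call` helper name is hypothetical, not the PR's actual code in `tag_parser.py`):

```python
import json


def normalize_tool_call(payload):
    """Accept both the flat {"name", "arguments"} shape and the nested
    OpenAI-style {"function": {"name", "arguments"}} shape."""
    fn = payload.get("function") or payload
    name = fn.get("name", "")
    arguments = fn.get("arguments", {})
    if isinstance(arguments, str):
        # Models may emit arguments as a raw JSON string; parse defensively.
        try:
            arguments = json.loads(arguments)
        except json.JSONDecodeError:
            arguments = {}
    return {
        "id": payload.get("id") or "call_0",  # keep a provided id, else synthesize
        "name": name,
        "arguments": arguments,
    }


flat = {"name": "search", "arguments": {"q": "agentscope"}}
nested = {"id": "call_1", "function": {"name": "search", "arguments": '{"q": "agentscope"}'}}
```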
Type of Change
Component(s) Affected
Checklist
- Ran `pre-commit run --all-files` locally and it passes
- Added tests (`pytest` or as relevant) and they pass

Testing
```shell
PYTHONPATH=src pytest tests/unit/local_models/test_local_model_tool_calls.py -q  # 2 passed
```

I also manually executed the existing provider tool-call compatibility checks in this environment because `pytest-asyncio` is not installed locally, and those checks passed.

Additional Notes
I submitted this through the GitHub web editor because direct `git push` to GitHub was failing from the current environment.