[Bug]: Missing tool outputs error when using Azure OpenAI Assistants, when the Assistant planner calls the same tool multiple times in a run #1705
Comments
Hi @NikhilB95, do you mind indenting the stack trace line-by-line so that it is easier to read? Do you know which call_id is associated with the function you are trying to call multiple times? I see this error at the bottom; it looks like the outputs of the first two calls were not returned?
Closing due to inactivity, please re-open if support is required.
Hello @lilyydu @corinagum @NikhilB95. I am facing the very same issue. I was about to open an issue when I found this one, dormant though it is. I get the exact same error (different call IDs, of course). This can be easily reproduced as follows (pseudocode):
3b. Something that will trigger two calls to the same or different tools, where no input depends on the output of another call. The stack trace and error message make it clear that the calls were issued in parallel, or at least the LLM thinks they were. It will fail, 100% repeatable:
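Since the original repro snippet did not survive here, the failure mechanism can be simulated in plain Python with no Teams AI or OpenAI dependency. All tool names and call IDs below are invented; the point is only that when the model issues parallel calls, each call gets its own call_id, but collecting outputs in a dictionary keyed by tool *name* silently drops duplicates:

```python
# Simulated run: the model issues three parallel tool calls,
# two of which target the same tool. Call ids are fabricated.
tool_calls = [
    {"call_id": "call_A1", "name": "get_weather", "args": {"city": "Oslo"}},
    {"call_id": "call_B2", "name": "get_weather", "args": {"city": "Rome"}},
    {"call_id": "call_C3", "name": "get_time", "args": {}},
]

# Buggy aggregation: keyed by tool NAME, so the second get_weather
# call overwrites the first and its output is never submitted.
outputs_by_name = {}
for call in tool_calls:
    outputs_by_name[call["name"]] = {
        "tool_call_id": call["call_id"],
        "output": "...",
    }

# Only 2 outputs survive for 3 required call ids -> the service
# will reject the submission with a 400.
print(len(outputs_by_name))  # → 2
```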
I have experienced this ever since I started working with the Assistants version of the template, and hoped the issue would just go away [yeah, right]. Versions of the conversation such as: This is an example of the errors I get. Notice that the error is shown 3 times, I suppose as the exception bubbles up. None of the text below is mine or my code's:
My conclusion is that the LLM calls the tool (at least the first invocation reaches the tool implementation in Python), and when that tool returns [meaning posts its response back?], the service raises a 400 error because it was expecting the return values from both calls in the same request. It is not clear to me whether the second call from the LLM to the tool entry point ever went out or was received; I saw no evidence of it reaching the entry point. Code for the Assistant definition, copied from the Assistants Playground in Azure AI Studio:
Code for the tool implementations (called from the LLM)
I have found the problem: it was triggered when the same tool was called more than once. The Teams AI library keeps a dictionary of the tools being called. I have fixed it, but I don't know how to submit the actual fix to the correct repository, or the protocol for doing so. Also, I fixed this scenario but did not explore any impact elsewhere, testing included. Current tests should still pass, though, because functionality was not changed. The fix consists of changing a variable from containing a list of calls
The fix is in two places. This is the before and after, exported as a git diff. Not certain whether this is the best way to pass along this info, though.
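Since the diff itself is not reproduced here, a toy before/after of the change described above (all identifiers invented; these are not the library's actual variable names) looks like this: keeping one entry per call_id, rather than one entry per tool name, preserves every output even when the same tool is called twice:

```python
# Three tool calls issued by the model; two share a tool name.
tool_calls = [
    ("call_A1", "get_weather"),
    ("call_B2", "get_weather"),
    ("call_C3", "get_time"),
]

# BEFORE: one slot per tool name -> duplicate calls collapse to one.
before = {name: call_id for call_id, name in tool_calls}

# AFTER: one entry per call id -> every call keeps its own output.
after = [
    {"tool_call_id": call_id, "output": f"result of {name}"}
    for call_id, name in tool_calls
]

print(len(before), len(after))  # → 2 3
```

With the "before" shape, only two outputs can ever be submitted for three required call IDs, which is exactly the mismatch the 400 error reports.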
Re-opening. Will try to reproduce it soon.
@singhk97 Hi! While you were reopening this, I was copying here and pasting there, and created a new bug report 😁 at the very same time, I just saw.
Language
Python
Version
latest
Description
When using Azure OpenAI Assistants with custom tools, the Teams bot raises an error (shown below) in specific situations when the same tool is called multiple times.
Error:

```
Traceback (most recent call last):
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/botbuilder/core/bot_adapter.py", line 174, in run_pipeline
    return await self._middleware.receive_activity_with_status(
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/botbuilder/core/middleware_set.py", line 69, in receive_activity_with_status
    return await self.receive_activity_internal(context, callback)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/botbuilder/core/middleware_set.py", line 79, in receive_activity_internal
    return await callback(context)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/app.py", line 663, in on_turn
    await self._start_long_running_call(context, self._on_turn)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/app.py", line 813, in _start_long_running_call
    return await func(context)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/app.py", line 756, in _on_turn
    is_ok = await self._ai.run(context, state)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/ai/ai.py", line 187, in run
    return await self.run(context, state, started_at, step)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/ai/ai.py", line 187, in run
    return await self.run(context, state, started_at, step)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/ai/ai.py", line 143, in run
    plan = await self.planner.continue_task(context, state)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/ai/planners/assistants_planner.py", line 187, in continue_task
    return await self._submit_action_results(state)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/teams/ai/planners/assistants_planner.py", line 279, in _submit_action_results
    run = await self._client.beta.threads.runs.submit_tool_outputs(
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/openai/resources/beta/threads/runs/runs.py", line 2979, in submit_tool_outputs
    return await self._post(
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request
    return await self._request(
  File "/tmp/8dc83d67fdc82b2/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Expected tool outputs for call_ids ['call_dSFAdfcF9CLsB6LutGJleiFJ', 'call_kxqGFm4LegeYA80wdG4nX0q4', 'call_RzLxCptvGVXCD298Klh170xX'], got ['call_RzLxCptvGVXCD298Klh170xX']", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
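The 400 at the bottom is simply the service checking that a submission covers every call_id the run is waiting on. A rough, purely illustrative sketch of that check (this is not Azure's or OpenAI's actual code) shows why submitting only one of three outputs is rejected outright:

```python
def submit_tool_outputs(expected_call_ids, tool_outputs):
    """Reject a submission unless it covers every required call id.

    Mirrors the shape of the 400 error in the traceback above;
    names and behavior are an illustration, not the real service.
    """
    got = [o["tool_call_id"] for o in tool_outputs]
    missing = [c for c in expected_call_ids if c not in got]
    if missing:
        raise ValueError(
            f"Expected tool outputs for call_ids {expected_call_ids}, got {got}"
        )
    return "completed"


# A partial submission fails, just like the real run:
try:
    submit_tool_outputs(
        ["call_A1", "call_B2", "call_C3"],
        [{"tool_call_id": "call_C3", "output": "42"}],
    )
except ValueError as err:
    print(err)  # lists the expected ids vs the single id received
```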
Reproduction Steps