Hi, I'm building a voice agent, and one of the main issues right now is the poor function-calling performance of the new native audio models.
One solution I've considered is running a second LLM agent in the background that ingests the chat history at a certain frequency, makes the function calls, and injects the results into the voice agent's event history.
The following code snippet seems to work, in the sense that the agent remembers the injected information if asked, but it never acknowledges it unprompted, which defeats my purpose. Ideally, when a tool response is injected, the agent would behave as if it had just received the actual response from the backend:
Do you have any suggestions on how to achieve this? Maybe by modifying the agent memory in ADK directly instead of the Live API events?
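For context, here is a minimal, library-agnostic sketch of the pattern I'm describing (the `Event`, `SessionHistory`, and `inject_tool_round_trip` names are hypothetical, not ADK's actual API). The idea is to inject the function *call* and its *response* as a matched pair, mimicking a real tool round trip, and then immediately trigger a model turn so the agent acknowledges the result instead of only recalling it when asked later:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A single entry in the conversation history (hypothetical shape)."""
    role: str      # "model", "tool", or "user"
    payload: dict  # function_call / function_response / text content

@dataclass
class SessionHistory:
    events: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        self.events.append(event)

def inject_tool_round_trip(history: SessionHistory, name: str,
                           args: dict, response: dict) -> None:
    # 1) Inject the function call as if the model itself had emitted it.
    history.append(Event("model", {"function_call": {"name": name, "args": args}}))
    # 2) Inject the matching function response from the background agent.
    history.append(Event("tool", {"function_response": {"name": name, "response": response}}))
    # 3) Nudge a model turn so the agent acknowledges the result unprompted.
    history.append(Event("user", {"text": "[background] A tool result just arrived; "
                                          "acknowledge it for the user."}))

history = SessionHistory()
inject_tool_round_trip(history, "get_weather", {"city": "Paris"}, {"temp_c": 18})
print([e.role for e in history.events])  # ['model', 'tool', 'user']
```

The key design point is step 3: injecting a response alone only updates what the model can recall; without a subsequent generation request, nothing forces it to speak about the new information.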