Context management between the agents in a multi-agent setup #348
Hi, without a code example it may be difficult to understand where the problem is. But if you read the docs, they state:

So, if you have some extra information passed in the … Otherwise, if you mean the context of the LLM, you can print the detailed logs as follows:

```python
import logging

logger = logging.getLogger("openai.agents")  # or "openai.agents.tracing" for the Tracing logger

# To make all logs show up
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler("agents.log", mode="w"))
```

in order to check whether the agents receive all the information.
Hi @DanieleMorotti, if you look in the trace dashboard there is an output field (empty here) after agent 2 is called, and below I'll attach the image from one of the 1 or 2 runs I mentioned where it actually applied the functionality it is supposed to and forwarded it correctly to the next agent. To summarize the problem: the output that agent2 has to send to agent3 after processing its input is not being sent to agent3, although the redirection does happen in the trace. Below is the skeleton code of my implementation:

```python
import os
from agents import Agent, handoff

RECOMMENDED_PROMPT_PREFIX = """# System context\nYou are part of a multi-agent system ..."""

agent4 = Agent(...)
agent3 = Agent(...)
agent2 = Agent(...)
agent2_handoff = handoff(...)
triage_agent = Agent(...)
```
@rm-openai it would be great if you could suggest where I should look to solve this.
I think your context should be saved in one dict? Maybe the first-level handoff/agent response is not passed to the next level (I tried to trace the input/output flow on my locally served LLM).
According to the documentation, the context between the agents is passed locally only (link), and the HandoffInputData dataclass has input_history, pre_handoff_items, and new_items. So were you not able to catch these in your flow?
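For example, an input_filter passed to handoff() could log those fields before forwarding them, to check what the receiving agent actually sees. A minimal sketch, with HandoffInputData stubbed as a plain dataclass so it runs standalone (in the SDK you would import it from agents instead):

```python
from dataclasses import dataclass
from typing import Any

# Stand-in for agents.HandoffInputData, just for illustration.
@dataclass
class HandoffInputData:
    input_history: tuple
    pre_handoff_items: tuple
    new_items: tuple

def logging_input_filter(data: HandoffInputData) -> HandoffInputData:
    # Print what the next agent will receive, then pass it through unchanged.
    print(f"history items: {len(data.input_history)}")
    print(f"pre-handoff items: {len(data.pre_handoff_items)}")
    print(f"new items: {len(data.new_items)}")
    return data

# With the real SDK this would be wired up as:
#   agent2_handoff = handoff(agent3, input_filter=logging_input_filter)
data = HandoffInputData(("msg1", "msg2"), ("tool_call",), ("agent2_output",))
filtered = logging_input_filter(data)
```

If new_items is empty at this point, the problem is upstream of the handoff, not in the receiving agent.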
I have some global background knowledge that I expect to keep passing down, and I also encountered the above problem. In Swarm, I could directly use the built-in context_variables, but here the context is obviously a local variable and cannot be passed along. I searched all the docs and code but found no solution.
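One pattern that can stand in for Swarm's context_variables: define a single mutable object and hand it to Runner.run via the context argument, so every tool sees (and can mutate) the same state. The SharedContext name below is a hypothetical example, not an SDK type:

```python
from dataclasses import dataclass, field

# Hypothetical shared-state object, analogous to Swarm's context_variables.
@dataclass
class SharedContext:
    background: str
    notes: dict = field(default_factory=dict)

ctx = SharedContext(background="global knowledge all agents need")
# Tools can mutate the shared object; mutations persist across agent runs.
ctx.notes["agent2"] = "added functionality X"

# With the SDK, the same object would be passed to every run:
#   result = await Runner.run(triage_agent, user_input, context=ctx)
```

Note that this context is local to your code; it is not automatically injected into the LLM's prompt unless you put it there yourself.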
I don't know, if I pass the context in a tool like this:

```python
@function_tool
async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:
    # Example
    return f"User {wrapper.context.name} is 47 years old"
```

then the tool can read it, and you can also use the context information in the prompts with dynamic instructions. But maybe I didn't get your problem.
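To illustrate the dynamic-instructions idea: instead of a fixed string, the agent's instructions can be a function of the run context, so context fields end up in the prompt. A standalone sketch, with the wrapper class stubbed (in the SDK this role is played by RunContextWrapper, and the function would be passed as Agent(instructions=...)):

```python
from dataclasses import dataclass

@dataclass
class UserInfo:
    name: str

# Stand-in for RunContextWrapper[UserInfo], just for illustration.
class FakeWrapper:
    def __init__(self, context):
        self.context = context

def dynamic_instructions(wrapper, agent=None) -> str:
    # Build the system prompt from the current context object.
    return f"You are helping {wrapper.context.name}. Always address them by name."

prompt = dynamic_instructions(FakeWrapper(UserInfo(name="Ada")))
```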
I have a workflow of 3 agents; for security reasons let's name them agent 1 (generates code), agent 2 (adds some functionality to the output of agent 1), and agent 3 (adds another functionality to the output of agent 2). Ideally that is what should happen, but it is not being implemented correctly. I have also customized the handoffs to ensure they are done properly between agents, and the handoffs do happen as expected (confirmed by looking at the trace), but the final output is missing the functionality that should have been added by agents 2 and 3, so I suspect there is an issue in the context sharing between agents. Any ideas on which parts I should start debugging? One more thing: if I re-run the flow multiple times, say 10 times per query, at most 1 or 2 of those runs work as expected with all the functionalities implemented correctly.
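One way to isolate where the loss happens is to bypass handoffs and chain the three agents explicitly, feeding each result forward; if the functionality survives here but not with handoffs, the problem is in the handoff wiring rather than the agents. The sketch below stubs the agents as plain coroutines so it runs standalone; with the SDK, each step would be roughly `result = await Runner.run(agent_n, previous.to_input_list())`:

```python
import asyncio

# Stand-ins for the three agents: each is just a coroutine here, so the
# chaining pattern can be shown without the SDK installed.
async def agent1(text: str) -> str:
    return text + " [code generated]"

async def agent2(text: str) -> str:
    return text + " [functionality A added]"

async def agent3(text: str) -> str:
    return text + " [functionality B added]"

async def run_pipeline(user_input: str) -> str:
    # Each step receives the previous step's full output.
    out = await agent1(user_input)
    out = await agent2(out)
    out = await agent3(out)
    return out

final = asyncio.run(run_pipeline("query"))
```

If the explicit chain always produces all three markers but the handoff flow only does so 1-2 times in 10, compare what each handoff forwards against what the explicit chain forwards.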