context management between the agents in a multi agent set up #348

Open
LAHARI2003 opened this issue Mar 26, 2025 · 7 comments

Labels
question Question about using the SDK

Comments

@LAHARI2003

I have a workflow of 3 agents; for security reasons let's call them agent 1 (generates code), agent 2 (adds some functionality to the output of agent 1), and agent 3 (adds another functionality to the output of agent 2). Ideally that is what should happen, but it is not being implemented correctly. I have also customized the handoffs to ensure they are performed properly between agents, and the handoffs do happen as expected (confirmed by looking at the trace), but the final output is missing the functionality that should have been added by agents 2 and 3. So I suspect there is an issue with the context sharing between agents. Any ideas on which parts I should start debugging? One more thing: sometimes, if I re-run the flow multiple times, say 10 times per query, the flow works as expected with all the functionality implemented correctly 1 or 2 times at most.

@DanieleMorotti

Hi, without a code example it may be difficult to understand where the problem is. But if you read the doc, it states:

The context object is not sent to the LLM. It is purely a local object that you can read from, write to and call methods on it.

So, if you have some extra information passed in the RunContextWrapper object, you need to make sure to pass it in the prompt.

Otherwise, if you mean the context of the LLM, you can print the detailed log as follows:

```python
import logging

logger = logging.getLogger("openai.agents")  # or "openai.agents.tracing" for the tracing logger
# To make all logs show up
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler("agents.log", mode="w"))
```

in order to check whether the agents receive all the information.

@LAHARI2003
Author

LAHARI2003 commented Mar 26, 2025

Hi @DanieleMorotti, if you have a look at the trace dashboard, there is an output field (which is empty here) after agent 2 is called.

[Image: trace where the output field after agent 2 is empty]

And below I'll attach the image from one of the runs (the 1 or 2 times I mentioned) that actually applied the functionality it was supposed to and forwarded it correctly to the next agent.

[Image: trace from a run where the output was forwarded correctly]

So, to summarize the problem: the output that agent 2 has to send to agent 3 after processing its input is not being passed to agent 3, even though the trace shows the redirection happening. From what I understand, the handoff parameters are not being properly forwarded to the next agent; to be specific (see the handoff doc), pre_handoff_items and new_items are not being updated.
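
To check this, I'm considering temporarily swapping the filter I use for a wrapper that logs what each handoff actually receives. This is only a debugging sketch, not my real code; it assumes HandoffInputData is importable from the top-level agents package:

```python
from agents import HandoffInputData
from agents.extensions import handoff_filters


def debug_filter(data: HandoffInputData) -> HandoffInputData:
    # Log how much conversation state the next agent would receive.
    history = "<str>" if isinstance(data.input_history, str) else len(data.input_history)
    print("input_history:", history)
    print("pre_handoff_items:", len(data.pre_handoff_items))
    print("new_items:", len(data.new_items))
    # Then apply the same filtering as in the skeleton below.
    return handoff_filters.remove_all_tools(data)
```

The idea would be to pass input_filter=debug_filter in each handoff(...) call instead of handoff_filters.remove_all_tools.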

And below is the skeleton code of my implementation:

```python
import os

from agents import (
    Agent,
    FileSearchTool,
    RunContextWrapper,
    Runner,
    function_tool,
    handoff,
    trace,
)
from agents.extensions import handoff_filters
from tools import *
from headers import *

RECOMMENDED_PROMPT_PREFIX = """# System context
You are part of a multi-agent system called the Agents SDK, designed to make
agent coordination and execution easy. Agents uses two primary abstraction:
Agents and Handoffs. An agent encompasses instructions and tools and can hand
off a conversation to another agent when appropriate. Handoffs are achieved by
calling a handoff function, do not mention or draw attention to these transfers
in your conversation with the user.
"""

agent4 = Agent(
    name="Agent 4",
    handoff_description="",
    instructions=f"""{RECOMMENDED_PROMPT_PREFIX}""",
    tools=[],
    model="gpt-4o-mini",
)
agent4_handoff = handoff(
    agent=agent4,
    tool_name_override="",
    tool_description_override="",
    on_handoff=lambda ctx: print(""),
    input_filter=handoff_filters.remove_all_tools,
)

agent3 = Agent(
    name="agent3",
    handoff_description="",
    instructions=f"""{RECOMMENDED_PROMPT_PREFIX}""",
    model="gpt-4o-mini",
    handoffs=[agent4_handoff],
    tools=[
        FileSearchTool(
            max_num_results=5,
            vector_store_ids=[""],
            include_search_results=True,
        )
    ],
)
agent3_handoff = handoff(
    agent=agent3,
    tool_name_override="",
    tool_description_override="",
    on_handoff=lambda ctx: print(""),
    input_filter=handoff_filters.remove_all_tools,
)

agent2 = Agent(
    name="",
    handoff_description="",
    instructions=f"""{RECOMMENDED_PROMPT_PREFIX}""",
    model="gpt-4o-mini",
    handoffs=[agent3_handoff],
)
agent2_handoff = handoff(
    agent=agent2,
    tool_name_override="",
    tool_description_override="",
    on_handoff=lambda ctx: print(""),
    input_filter=handoff_filters.remove_all_tools,
)

triage_agent = Agent(
    name="Triage Agent",
    instructions=f"""{RECOMMENDED_PROMPT_PREFIX}""",
    handoffs=[agent2_handoff],
    model="gpt-4o-mini",
    tools=[
        FileSearchTool(
            max_num_results=5,
            vector_store_ids=[""],
            include_search_results=True,
        )
    ],
)
```
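
For context, the entry point is roughly like this (a sketch; the actual query and workflow name are omitted):

```python
import asyncio


async def main() -> None:
    query = "..."  # placeholder; the real query asks for code plus the extra functionality
    with trace("multi agent workflow"):
        result = await Runner.run(triage_agent, query)
    print(result.final_output)


asyncio.run(main())
```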

@LAHARI2003
Author

@rm-openai it would be great if you could suggest where I should look to solve this.

@weixiewen

I think your context should be saved in one dict? Maybe the first-level handoff/agent response is not being passed to the next level (I tried to trace the input/output flow on my locally served LLM).
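
Something like this is what I mean. It's only a sketch; SharedState and the tool names are made up for illustration:

```python
from dataclasses import dataclass, field

from agents import RunContextWrapper, Runner, function_tool


@dataclass
class SharedState:
    # One dict that every agent/tool in the workflow reads from and writes to.
    outputs: dict[str, str] = field(default_factory=dict)


@function_tool
async def save_step_output(wrapper: RunContextWrapper[SharedState], step: str, output: str) -> str:
    """Store an agent's intermediate output so later agents can read it."""
    wrapper.context.outputs[step] = output
    return f"Saved output for {step}."


@function_tool
async def read_step_output(wrapper: RunContextWrapper[SharedState], step: str) -> str:
    """Read a previous agent's output from the shared dict."""
    return wrapper.context.outputs.get(step, "Nothing saved yet.")


# state = SharedState()
# result = await Runner.run(triage_agent, "your query", context=state)
```

Note that the context object itself is never sent to the model; the tools have to surface whatever they need in their return values or in the prompt.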

@LAHARI2003
Author

LAHARI2003 commented Mar 28, 2025

According to the documentation, the context between the agents is passed locally only (link):

HandoffInputFilter module-attribute

```python
HandoffInputFilter: TypeAlias = Callable[
    [HandoffInputData],
    HandoffInputData,
]
```
A function that filters the input data passed to the next agent.

And the HandoffInputData dataclass has input_history, pre_handoff_items, and new_items. So, to my understanding, the conversation history of one agent gets handed off to another locally through these, so logically this should work every time, but it's not.

So were you not able to catch these in your flow?

@Monkey-Moon

I have some global background knowledge that I expect to keep passing down, and I also ran into the problem above. In Swarm I can directly use the built-in context_variables, but here the context is clearly a local variable and cannot be passed along. I searched all the docs and code but found no solution.

@DanieleMorotti

I don't know; if I pass the context in Runner.run(..., context=MyContext), then I can retrieve the data within tools as in the doc example:

```python
@function_tool
async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:
    # Example
    return f"User {wrapper.context.name} is 47 years old"
```

And you can use the context information in the prompts with dynamic instructions, but maybe I didn't get your problem.
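
For instance, a dynamic-instructions sketch along these lines (assuming the same UserInfo context type as in the doc example above):

```python
from dataclasses import dataclass

from agents import Agent, RunContextWrapper


@dataclass
class UserInfo:
    name: str


# The context is read locally and injected into the system prompt,
# so the model actually sees it.
def build_instructions(wrapper: RunContextWrapper[UserInfo], agent: Agent[UserInfo]) -> str:
    return f"You are helping {wrapper.context.name}. Answer their questions concisely."


assistant = Agent[UserInfo](
    name="Assistant",
    instructions=build_instructions,
    model="gpt-4o-mini",
)
```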
