Description
What features would you like to see added?
From my understanding of the current architecture, all LLM interactions are now routed exclusively through the agents pipeline. At the moment, there does not appear to be a built-in summarization mechanism available within agents.
Previously, I had implemented a summarization factory method that delegated to the underlying client implementations (e.g., OpenAI client, Google client). In practice, only the OpenAI summarization path was implemented in the repository, but this was sufficient for my use case.
Recently, the legacy summarization-related code was removed, which broke the factory-based summarization approach. As a result, there is currently no supported way to perform message or context summarization when using agents.
Impact: Summarization is a critical requirement for my use case, primarily to:
- Manage long conversations / context windows
- Prevent token overflows
- Ensure correctness when historical context must be condensed
Without summarization support in the agents flow, it becomes difficult to use agents reliably for non-trivial, long-running interactions.
Questions / Requests
- Is there a recommended workaround for implementing summarization in the current agents-based pipeline?
- Are there any plans to add first-class summarization support for agents (e.g., message summarization, context condensation, or pluggable summarizers)?
- If this is planned, is there an approximate timeline or milestone when this feature might be available?
This is a must-have feature for my use case, so any guidance on the intended direction or interim solutions would be greatly appreciated.
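To illustrate the kind of interim workaround and pluggable-summarizer shape being asked about, here is a minimal sketch. Everything in it (`summarize_history`, the `llm_call` parameter, the `keep_last` cutoff) is hypothetical and not part of this repository's API; the only assumption is that the agents pipeline exchanges chat-style `{"role": ..., "content": ...}` messages and that any text-in/text-out LLM client can be injected:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # chat-style message: {"role": ..., "content": ...}

def summarize_history(
    messages: List[Message],
    llm_call: Callable[[str], str],  # pluggable LLM client: prompt in, text out
    keep_last: int = 4,              # most recent turns are kept verbatim
) -> List[Message]:
    """Condense all but the most recent turns into one summary message.

    This keeps the context window bounded: older turns are replaced by a
    single system message containing an LLM-generated summary.
    """
    if len(messages) <= keep_last:
        return messages  # nothing old enough to condense
    old, recent = messages[:-keep_last], messages[-keep_last:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    summary = llm_call(
        "Summarize the following conversation, preserving key facts:\n"
        + transcript
    )
    return [
        {"role": "system", "content": f"Summary of earlier turns: {summary}"}
    ] + recent
```

Because `llm_call` is just a callable, the same function works with an OpenAI client, a Google client, or a stub in tests, which is roughly the "pluggable summarizers" behavior requested above.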
More details
NA
Which components are impacted by your request?
No response
Pictures
No response
Code of Conduct
- I agree to follow this project's Code of Conduct