feat: #2206 Add responses.compact: auto-compact long conversations #2224
Conversation
Greetings @seratch and @ihower, this PR implements the core functionality for the issue. Following the TS SDK approach, a few related enhancements could either be handled in follow-up PRs or incorporated here, depending on your preference. I’d appreciate your feedback on whether this aligns with the intended design for the issue, and I’m happy to adjust the implementation as needed.
Thanks for sending this. This looks like a good port of the TS implementation I did. I will review the details early next year.
This PR is stale because it has been open for 10 days with no activity. |
seratch left a comment
Thank you again for working on this!
I've checked the code changes and found that a few changes are necessary:
diff --git a/examples/memory/compaction_session_example.py b/examples/memory/compaction_session_example.py
index c84822e3..cc1f6cf5 100644
--- a/examples/memory/compaction_session_example.py
+++ b/examples/memory/compaction_session_example.py
@@ -50,7 +50,8 @@ async def main():
print("=== Final Session State ===")
print(f"Total items: {len(items)}")
for item in items:
- item_type = item.get("type", "unknown")
+ # Some inputs are stored as easy messages (only `role` and `content`).
+ item_type = item.get("type") or ("message" if "role" in item else "unknown")
if item_type == "compaction":
print(" - compaction (encrypted content)")
elif item_type == "message":
diff --git a/src/agents/memory/openai_responses_compaction_session.py b/src/agents/memory/openai_responses_compaction_session.py
index e23c5909..95b9a61d 100644
--- a/src/agents/memory/openai_responses_compaction_session.py
+++ b/src/agents/memory/openai_responses_compaction_session.py
@@ -29,12 +29,19 @@ def select_compaction_candidate_items(
     Excludes user messages and compaction items.
     """
+
+    def _is_user_message(item: TResponseInputItem) -> bool:
+        if not isinstance(item, dict):
+            return False
+        if item.get("type") == "message":
+            return item.get("role") == "user"
+        return item.get("role") == "user" and "content" in item
+
     return [
         item
         for item in items
         if not (
-            (item.get("type") == "message" and item.get("role") == "user")
-            or item.get("type") == "compaction"
+            _is_user_message(item) or (isinstance(item, dict) and item.get("type") == "compaction")
         )
     ]
@@ -160,7 +167,11 @@ class OpenAIResponsesCompactionSession(SessionABC, OpenAIResponsesCompactionAwar
             if isinstance(item, dict):
                 output_items.append(item)
             else:
-                output_items.append(item.model_dump(exclude_unset=True))  # type: ignore
+                # Suppress Pydantic literal warnings: responses.compact can return
+                # user-style input_text content inside ResponseOutputMessage.
+                output_items.append(
+                    item.model_dump(exclude_unset=True, warnings=False)  # type: ignore
+                )
         if output_items:
             await self.underlying_session.add_items(output_items)
diff --git a/tests/memory/test_openai_responses_compaction_session.py b/tests/memory/test_openai_responses_compaction_session.py
index 204dbcb1..0b528701 100644
--- a/tests/memory/test_openai_responses_compaction_session.py
+++ b/tests/memory/test_openai_responses_compaction_session.py
@@ -1,5 +1,6 @@
 from __future__ import annotations
+import warnings as warnings_module
 from typing import cast
 from unittest.mock import AsyncMock, MagicMock
@@ -62,6 +63,15 @@ class TestSelectCompactionCandidateItems:
         assert len(result) == 1
         assert result[0].get("type") == "message"
+    def test_excludes_easy_user_messages_without_type(self) -> None:
+        items: list[TResponseInputItem] = [
+            cast(TResponseInputItem, {"content": "hi", "role": "user"}),
+            cast(TResponseInputItem, {"type": "message", "role": "assistant", "content": "hello"}),
+        ]
+        result = select_compaction_candidate_items(items)
+        assert len(result) == 1
+        assert result[0].get("role") == "assistant"
+
 class TestOpenAIResponsesCompactionSession:
     def create_mock_session(self) -> MagicMock:
@@ -205,6 +215,43 @@ class TestOpenAIResponsesCompactionSession:
         mock_client.responses.compact.assert_called_once()
+    @pytest.mark.asyncio
+    async def test_run_compaction_suppresses_model_dump_warnings(self) -> None:
+        mock_session = self.create_mock_session()
+        mock_session.get_items.return_value = [
+            cast(TResponseInputItem, {"type": "message", "role": "assistant", "content": "hi"})
+            for _ in range(DEFAULT_COMPACTION_THRESHOLD)
+        ]
+
+        class WarningModel:
+            def __init__(self) -> None:
+                self.received_warnings_arg: bool | None = None
+
+            def model_dump(self, *, exclude_unset: bool, warnings: bool | None = None) -> dict:
+                self.received_warnings_arg = warnings
+                if warnings:
+                    warnings_module.warn("unexpected warning", stacklevel=2)
+                return {"type": "message", "role": "assistant", "content": "ok"}
+
+        warning_model = WarningModel()
+        mock_compact_response = MagicMock()
+        mock_compact_response.output = [warning_model]
+
+        mock_client = MagicMock()
+        mock_client.responses.compact = AsyncMock(return_value=mock_compact_response)
+
+        session = OpenAIResponsesCompactionSession(
+            session_id="test",
+            underlying_session=mock_session,
+            client=mock_client,
+        )
+
+        with warnings_module.catch_warnings():
+            warnings_module.simplefilter("error")
+            await session.run_compaction({"response_id": "resp-123"})
+
+        assert warning_model.received_warnings_arg is False
+
 class TestTypeGuard:
     def test_is_compaction_aware_session_true(self) -> None:
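For context on the `warnings=False` change above: Pydantic's `model_dump()` accepts a `warnings` flag that silences serialization warnings emitted when a field value does not match its declared schema. A minimal standalone sketch of that behavior (the `Part` model is hypothetical and not part of this PR):

```python
from typing import Literal

from pydantic import BaseModel


class Part(BaseModel):
    # Hypothetical stand-in for an output-content model.
    type: Literal["output_text"]
    text: str


# model_construct() skips validation, so `type` can carry an out-of-schema value,
# similar to responses.compact returning user-style "input_text" content.
part = Part.model_construct(type="input_text", text="hi")

part.model_dump(exclude_unset=True)                  # emits a "Pydantic serializer warnings" UserWarning
part.model_dump(exclude_unset=True, warnings=False)  # same dict, warning suppressed
```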
Greetings @seratch, I have applied all the requested changes. Apologies for the earlier failed CI runs; they are fixed now. This should complete the work here.
@codex review again
Codex Review: Didn't find any major issues. You're on a roll.
Summary
Implements OpenAIResponsesCompactionSession to add seamless support for the newly added responses.compact API, matching the TypeScript SDK implementation (openai-agents-js#760).
Core Implementation
OpenAIResponsesCompactionAwareSession: Protocol extension for compaction-aware sessions
is_openai_responses_compaction_aware_session(): Type guard for runtime checks
Key Features
Usage Example
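A minimal sketch of the intended usage, reconstructed from the tests in this PR: the constructor arguments mirror the test setup above, while the import path, the SQLiteSession underlying session, and the Runner wiring are assumptions rather than confirmed API.

```python
import asyncio

from openai import AsyncOpenAI

from agents import Agent, Runner, SQLiteSession
from agents.memory.openai_responses_compaction_session import (
    OpenAIResponsesCompactionSession,
)


async def main() -> None:
    # Wrap an existing Session implementation; items are stored in the
    # underlying session and compacted via responses.compact once the
    # conversation grows past the compaction threshold.
    session = OpenAIResponsesCompactionSession(
        session_id="conversation-1",
        underlying_session=SQLiteSession("conversation-1"),
        client=AsyncOpenAI(),
    )

    agent = Agent(name="Assistant", instructions="Reply concisely.")
    result = await Runner.run(agent, "Hello!", session=session)
    print(result.final_output)


asyncio.run(main())
```

Because the class subclasses SessionABC (per the diff above), it should slot in wherever a session is accepted, and is_openai_responses_compaction_aware_session() provides the runtime type guard mentioned under Core Implementation for code that calls run_compaction() directly.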
Test plan
Issue number
Closes #2206
Checks
make lint and make format