Dear OceanBase AI SIG Members,
We cordially invite you to join our AI Tech Sharing Session this Friday (December 19, 2025). This meeting will focus on the latest research advances in AI memory management. We have carefully selected three top-tier conference papers from 2025, each exploring, from distinct perspectives, how AI systems can more effectively remember and leverage historical information to enhance long-term task performance. We look forward to deep discussions with all of you!
Meeting Agenda
Topic: [Community] OceanBase AI SIG Weekly Meeting
Date & Time: Friday, December 19, 2025 15:00 – 16:00 (GMT+8)
DingTalk Meeting ID: 581800573
Join Link: https://meeting.dingtalk.com/j/WTVDgGE0W2q
→ You can join directly via your web browser—no DingTalk app download required.
Sharing Content:
● Agent Workflow Memory (ICML 2025)
● In Prospect and Retrospect: Reflective Memory Management (ACL 2025)
● Lessons Learned: A Multi-Agent Framework for Code LLMs to Learn and Improve (NeurIPS 2025)
Preview of Paper Highlights:
Agent Workflow Memory (ICML 2025):
This work introduces a memory mechanism enabling AI agents to learn and reuse task workflows. By identifying and storing reusable sequences of task steps (i.e., workflows), it significantly boosts success rates on complex tasks—achieving relative improvements of 24.6% on Mind2Web and 51.1% on WebArena. Particularly valuable for multi-step interactive scenarios (e.g., web navigation, automated operations), this approach reduces redundant reasoning and enhances execution efficiency.
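To seed discussion, here is a minimal sketch of the workflow-memory idea: mining recurring step subsequences from successful trajectories and treating frequent ones as reusable workflows. The class name, the fixed-length subsequence mining, and the support threshold are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
from typing import List, Tuple


class WorkflowMemory:
    """Stores reusable action subsequences induced from successful trajectories."""

    def __init__(self, min_support: int = 2, length: int = 2):
        self.min_support = min_support  # how often a subsequence must recur
        self.length = length            # subsequence length to mine
        self.counts: Counter = Counter()

    def induce(self, trajectory: List[str]) -> None:
        """Record every step subsequence of the configured length from one trajectory."""
        for i in range(len(trajectory) - self.length + 1):
            self.counts[tuple(trajectory[i:i + self.length])] += 1

    def workflows(self) -> List[Tuple[str, ...]]:
        """Return subsequences seen often enough to treat as reusable workflows."""
        return [seq for seq, n in self.counts.items() if n >= self.min_support]


memory = WorkflowMemory()
memory.induce(["open_page", "type_query", "click_search", "click_result"])
memory.induce(["open_page", "type_query", "click_search", "read_summary"])
print(memory.workflows())
```

Even this toy version shows the core trade-off worth discussing on Friday: where to set the support threshold so that genuinely reusable workflows are kept without memorizing one-off step sequences.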
In Prospect and Retrospect: Reflective Memory Management (ACL 2025):
The paper proposes a bidirectional memory management framework combining prospective and retrospective reflection to address memory fragmentation and rigid retrieval in long conversations. Prospective reflection dynamically structures dialogue memory, while retrospective reflection optimizes retrieval via reinforcement learning, enabling more coherent and personalized interactions. Experiments show >10% accuracy gain over baseline models without memory management on the LongMemEval benchmark.
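The two reflection loops above can be sketched as follows. The keyword-overlap scoring and the multiplicative weight update are stand-ins chosen for discussion (the paper uses reinforcement learning for retrieval optimization); all names here are assumptions, not the authors' code.

```python
class ReflectiveMemory:
    """Toy bidirectional memory: prospective structuring + retrospective reweighting."""

    def __init__(self):
        self.entries = []  # each entry: {"text": ..., "weight": ...}

    def prospective_reflect(self, turn: str) -> None:
        """Forward pass: structure each dialogue turn into a memory entry."""
        self.entries.append({"text": turn, "weight": 1.0})

    def retrieve(self, query: str, k: int = 1):
        """Score entries by keyword overlap, scaled by the learned weight."""
        def score(entry):
            overlap = len(set(query.lower().split())
                          & set(entry["text"].lower().split()))
            return overlap * entry["weight"]
        return sorted(self.entries, key=score, reverse=True)[:k]

    def retrospective_reflect(self, entry, useful: bool) -> None:
        """Backward pass: reinforce entries that proved useful in the response."""
        entry["weight"] *= 1.5 if useful else 0.5


mem = ReflectiveMemory()
mem.prospective_reflect("user prefers vegetarian food")
mem.prospective_reflect("user lives in Hangzhou")
top = mem.retrieve("what food does the user like")[0]
mem.retrospective_reflect(top, useful=True)
```

The point to note is the division of labor: prospective reflection decides what gets stored, while retrospective reflection adjusts how likely each entry is to be retrieved again.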
Lessons Learned: A Multi-Agent Framework for Code LLMs to Learn and Improve (NeurIPS 2025):
This study presents a multi-agent framework where code-focused LLMs collaboratively learn and share experiences to continuously improve. Through a hierarchical agent architecture and prioritized experience replay, the system achieves memory-optimized, self-evolving code generation. Each agent manages a small-scale code memory independently, yet forms a global memory network via task decomposition and hierarchical composition—ideal for large-scale code generation and optimization with dynamic scalability.
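One component the summary mentions, prioritized experience replay, can be sketched with a simple max-heap over agent experiences. The interface and priority values are assumptions for discussion, not the framework's API.

```python
import heapq
import itertools


class PrioritizedReplay:
    """Keeps agent experiences and replays the highest-priority ones first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def add(self, experience: str, priority: float) -> None:
        # heapq is a min-heap, so store the negated priority.
        heapq.heappush(self._heap, (-priority, next(self._counter), experience))

    def sample(self, k: int):
        """Pop the k most informative experiences for the next learning round."""
        return [heapq.heappop(self._heap)[2]
                for _ in range(min(k, len(self._heap)))]


replay = PrioritizedReplay()
replay.add("fixed off-by-one in loop bound", priority=0.9)
replay.add("renamed a local variable", priority=0.1)
replay.add("caught unhandled None in parser", priority=0.7)
high_value = replay.sample(2)
print(high_value)
```

In the multi-agent setting described above, each agent would keep a small replay buffer like this, with the interesting question being how local priorities compose into the global memory network.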
Common Themes & Distinctions:
All three papers address memory management in dynamic environments, leveraging optimized memory organization or retrieval—often enhanced by reinforcement learning—to adaptively update memories and boost model performance. Their key differences lie in scope:
● AWM focuses on structured workflow memory;
● RMM targets unstructured dialogue memory with dynamic organization;
● Lessons Learned explores collaborative optimization of distributed code memory.
We hope these cutting-edge insights spark ideas on how such techniques might integrate with your own use cases—and inspire potential improvements or adaptations.
Please join the meeting 3 minutes early, and come prepared with your thoughts and questions!
Best regards,
OceanBase AI SIG Team