
llm.generate_str use_history is not working #38

Open
HuntZhaozq opened this issue Mar 4, 2025 · 3 comments
@HuntZhaozq

I use the example of streamlit_mcp_basic_agent, and set use_history=True in the llm.generate_str method as shown below.

message=prompt, request_params=RequestParams(use_history=True)

But it is not working: when I chat with the agent in the second round, the chat history of the first round is gone. How can I solve this?

@HuntZhaozq HuntZhaozq changed the title use_history is not working llm.generate_str use_history is not working Mar 4, 2025
@saqadri
Collaborator

saqadri commented Mar 5, 2025

> I use the example of streamlit_mcp_basic_agent, and set use_history=True in the llm.generate_str method as shown below.
>
> mcp-agent/examples/streamlit_mcp_basic_agent/main.py (Line 57 in af54d24)
>
> message=prompt, request_params=RequestParams(use_history=True)
>
> But it is not working: when I chat with the agent in the second round, the chat history of the first round is gone. How can I solve this?

@HuntZhaozq thanks for reporting this issue. This is an annoying thing about Streamlit, which reruns the script top-to-bottom on every interaction, overwriting anything that is held in memory.

The solution is to save the messages into Streamlit session state (st.session_state), and pass the full messages array each time.

So in that example application you linked, change:

message=prompt, request_params=RequestParams(use_history=True)

to

response = await llm.generate_str(
    message=st.session_state["messages"], request_params=RequestParams(use_history=True)
)

I am working on proper session state management for Streamlit so you don't have to think about these problems (e.g. when use_history is True, in streamlit mode the framework should automatically be saving things to streamlit session_state). So I'll leave this issue open until that's fixed.

But please let me know if my workaround above works for you.
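The workaround above can be sketched end-to-end. This is a minimal illustration of the pattern, not the actual example app: `st.session_state` is replaced by a plain dict and `llm.generate_str` by a stub (both stand-ins are hypothetical), so the message-accumulation behavior is visible without running Streamlit. In the real app you would use `st.session_state` and `await llm.generate_str(...)` exactly as shown in the comment above.

```python
# Sketch of the workaround: keep the full message list in session state and
# pass it to the LLM on every rerun. Streamlit and the real LLM are replaced
# by stand-ins (a plain dict and an echo function) so the accumulation
# pattern itself is easy to see.

session_state = {}  # stand-in for st.session_state


def generate_str(message, request_params=None):
    """Stub for llm.generate_str: reports how many user turns it received."""
    user_turns = sum(1 for m in message if m["role"] == "user")
    return f"assistant reply #{user_turns}"


def handle_prompt(prompt: str) -> str:
    # Streamlit reruns the whole script on each interaction, so anything not
    # stored in session state is lost. Initialize the history exactly once.
    session_state.setdefault("messages", [])

    # Append the new user turn, then pass the FULL history to the LLM
    # instead of just the latest prompt.
    session_state["messages"].append({"role": "user", "content": prompt})
    response = generate_str(message=session_state["messages"])
    session_state["messages"].append({"role": "assistant", "content": response})
    return response


print(handle_prompt("hello"))       # first round
print(handle_prompt("what next?"))  # second round still sees round one
```

Because the full history lives in `session_state["messages"]`, the second call sees both rounds even though nothing else survives a rerun.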

@saqadri saqadri self-assigned this Mar 5, 2025
@saqadri saqadri added bug Something isn't working enhancement New feature or request labels Mar 5, 2025
@HuntZhaozq
Author

Yes, it works. Thank you!

@doncat99

doncat99 commented Mar 7, 2025


@saqadri, should mcp-agent implement session management in a closed loop, instead of relying on Streamlit?

And the README points out that "Memory -- adding support for long-term memory" is on the roadmap. I believe session management is part of Memory.

Btw: Memory is the most valuable feature on the list.

Roadmap
We will be adding a detailed roadmap (ideally driven by your feedback). The current set of priorities includes:

- Durable Execution -- allow workflows to pause/resume and serialize state so they can be replayed or paused indefinitely. We are working on integrating [Temporal](https://github.com/lastmile-ai/mcp-agent/blob/main/src/mcp_agent/executor/temporal.py) for this purpose.
- Memory -- adding support for long-term memory
- Streaming -- support streaming listeners for iterative progress
- Additional MCP capabilities -- expand beyond tool calls to support:
  - Resources
  - Prompts
  - Notifications
