fix: prevent KeyError 'tool_call_id' in LangChain message handling #184
Changes from all commits: 2fdb6f2, 4615d2a, ab02a5b, 3231061, c3db10a, e8dee71, ec408b5
```diff
@@ -334,15 +334,21 @@ async def _build_llm_node(self, config: GraphNodeConfig) -> NodeFunction:
         async def llm_node(state: StateDict | BaseModel) -> StateDict:
             logger.info(f"[LLM Node: {config.id}] Starting execution")

-            # Convert state to dict (handles both dict and Pydantic BaseModel)
+            # Get messages BEFORE converting state to dict to preserve BaseMessage types
+            # model_dump() loses tool_call_id and other message-specific fields
+            if isinstance(state, BaseModel):
+                messages: list[BaseMessage] = list(getattr(state, "messages", []))
+            else:
+                messages = list(state.get("messages", []))
+
+            # Convert state to dict for template rendering (but we already have messages)
             state_dict = self._state_to_dict(state)
-            messages: list[BaseMessage] = list(state_dict.get("messages", []))

             # Render prompt template
             prompt = self._render_template(llm_config.prompt_template, state_dict)

             # Build messages for LLM
-            llm_messages = messages + [HumanMessage(content=prompt)]
+            llm_messages = list(messages) + [HumanMessage(content=prompt)]
```

A review comment was left on the changed `llm_messages = list(messages) + [HumanMessage(content=prompt)]` line:
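The comment in the diff ("model_dump() loses tool_call_id") points at a general Pydantic v2 behavior: LangChain messages are Pydantic models, and by default `model_dump()` serializes a nested model according to the field's *declared* type, so fields that exist only on a subclass are dropped. A minimal sketch of that mechanism using plain Pydantic stand-ins (`Message`, `ToolMessage`, `State` here are illustrative classes, not the project's actual ones):

```python
import warnings
from pydantic import BaseModel


class Message(BaseModel):
    content: str


class ToolMessage(Message):
    # Subclass-only field, analogous to LangChain's tool_call_id
    tool_call_id: str


class State(BaseModel):
    # Declared as the base type — this is what drives the field loss
    messages: list[Message]


state = State(messages=[ToolMessage(content="ok", tool_call_id="call_1")])

with warnings.catch_warnings():
    # Pydantic warns "Expected `Message` but got `ToolMessage`" while dumping
    warnings.simplefilter("ignore")
    dumped = state.model_dump()

# The dump is shaped by the declared `Message` schema, so the
# subclass-only tool_call_id field is silently dropped:
print(dumped["messages"][0])  # {'content': 'ok'}
```

This is why the fix reads `state.messages` before calling `self._state_to_dict(state)`: the original `BaseMessage` instances, with all their fields, survive only on the pre-dump path.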
issue: Guard against `state.messages` being `None` before calling `list()` to avoid a `TypeError`.

If `state.messages` / `state.get("messages")` can be `None`, `list(None)` will raise a `TypeError`. Consider normalizing first. This keeps the current behavior (copying and preserving `BaseMessage` instances) while avoiding runtime errors when `messages` is missing or `None`.