service/app/agents/graph_builder.py (5 additions, 5 deletions)
```diff
@@ -331,15 +331,15 @@ async def _build_llm_node(self, config: GraphNodeConfig) -> NodeFunction:
         else:
             configured_llm = base_llm
 
-        async def llm_node(state: StateDict) -> StateDict:
+        async def llm_node(state: StateDict | BaseModel) -> StateDict:
             logger.info(f"[LLM Node: {config.id}] Starting execution")
 
-            # Access messages directly from state
-            # LangGraph passes state as dict with 'messages' key containing BaseMessage objects
-            messages: list[BaseMessage] = list(state.get("messages", []))
+            # Convert state to dict (handles both dict and Pydantic BaseModel)
+            state_dict = self._state_to_dict(state)
+            messages: list[BaseMessage] = list(state_dict.get("messages", []))
 
             # Render prompt template
-            prompt = self._render_template(llm_config.prompt_template, state)
+            prompt = self._render_template(llm_config.prompt_template, state_dict)
```
**Copilot AI** commented on Jan 19, 2026:

Passing `state_dict` to `_render_template` is redundant: `_render_template` already calls `self._state_to_dict(state)` internally at line 255, so `_state_to_dict` runs twice on the already-converted dict. Pass `state` directly to `_render_template` to avoid the redundant conversion.

Suggested change:

```diff
-            prompt = self._render_template(llm_config.prompt_template, state_dict)
+            prompt = self._render_template(llm_config.prompt_template, state)
```


```diff
 
             # Build messages for LLM
             llm_messages = messages + [HumanMessage(content=prompt)]
```
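
For context on the new `state_dict` handling: the helper's body is not visible in this diff, so the following is only a sketch of what a dict/BaseModel normalizer like `_state_to_dict` plausibly does. The function name at module level, and the Pydantic v2 `model_dump()` call, are assumptions.

```python
from pydantic import BaseModel

# Hypothetical sketch; the real _state_to_dict in graph_builder.py
# is not shown in this diff.
def state_to_dict(state) -> dict:
    """Normalize LangGraph state to a plain dict.

    LangGraph may pass node state as a plain dict or as a Pydantic
    model instance, depending on how the state schema was declared.
    """
    if isinstance(state, BaseModel):
        return state.model_dump()  # Pydantic v2; v1 would use .dict()
    return dict(state)  # already a mapping: shallow-copy into a dict
```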
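
To illustrate the double conversion the review flags, here is a minimal sketch assuming `_render_template` normalizes its state argument itself, per the comment's reference to line 255. The `str.format` templating mechanism shown is an assumption, not the project's actual renderer.

```python
# Hypothetical sketch of the call path the review comment describes.
def render_template(template: str, state) -> str:
    # The renderer converts internally (line 255 per the review),
    # so callers can hand it either a dict or a Pydantic model.
    state_dict = state_to_dict(state)
    return template.format(**state_dict)  # templating mechanism assumed

# Passing the raw `state` lets the renderer perform the one needed
# conversion. Passing a pre-converted dict still works, since
# state_to_dict just shallow-copies it again, but does the work twice.
```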