
Conversation

@xinquiry
Collaborator

@xinquiry xinquiry commented Jan 19, 2026

Changes

  • New feature
  • Bug fix
  • Enhancement / refactor
  • Other (please describe)

Briefly describe the main changes in this PR.

Related Issue

Link the related issue, if any: #issue-number

Checklist

Items are checked by default; please verify each one applies.

  • Tested and passing locally
  • Relevant documentation added/updated
  • Test cases added
  • Code style checked by the pre-commit hooks

Additional Notes

Add any special notes or caveats here.

Summary by Sourcery

Handle LLM graph node state uniformly by supporting both dict and Pydantic model inputs when building prompts and messages.

Bug Fixes:

  • Prevent runtime errors in the LLM node when Celery or other callers provide state as a Pydantic BaseModel instead of a plain dict.

Enhancements:

  • Normalize graph node state access through a helper to consistently extract messages and render templates from either dict or BaseModel inputs.

xinquiry and others added 5 commits January 19, 2026 16:27
Backend changes:
- Create LLM before graph compilation for proper streaming interception
- Change react config from COMPONENT (subgraph) to direct LLM+TOOL nodes
- Make build_graph async in all components to enable pre-creation of LLM
- Skip final AIMessage only when buffer has content (fix deep research)
- Move node transition detection before AIMessage skip check
- Access messages directly from state to preserve BaseMessage types

Frontend changes:
- Fix duplicate detection in streaming_chunk handler (condition was inverted)

Co-Authored-By: Claude <[email protected]>
Copilot AI review requested due to automatic review settings January 19, 2026 15:53
@sourcery-ai
Contributor

sourcery-ai bot commented Jan 19, 2026

Reviewer's Guide

Updates the LLM graph node execution function to accept both dict-based and Pydantic BaseModel state objects by normalizing state access through a helper, ensuring consistent message extraction and prompt rendering.

Sequence diagram for updated llm_node state handling

sequenceDiagram
    actor LangGraph
    participant llm_node
    participant GraphBuilder
    participant LLM

    LangGraph->>llm_node: call(state: StateDict | BaseModel)
    llm_node->>GraphBuilder: _state_to_dict(state)
    GraphBuilder-->>llm_node: state_dict
    llm_node->>llm_node: extract messages from state_dict[messages]
    llm_node->>GraphBuilder: _render_template(prompt_template, state_dict)
    GraphBuilder-->>llm_node: prompt
    llm_node->>llm_node: build llm_messages
    llm_node->>LLM: invoke(llm_messages)
    LLM-->>llm_node: response
    llm_node-->>LangGraph: updated StateDict

Class diagram for GraphBuilder and updated llm_node

classDiagram
    class GraphBuilder {
        _build_llm_node(config: GraphNodeConfig) NodeFunction
        _state_to_dict(state: StateDict | BaseModel) dict
        _render_template(prompt_template: str, state: dict) str
    }

    class llm_node {
        <<function>>
        call(state: StateDict | BaseModel) StateDict
    }

    GraphBuilder o-- llm_node : creates
    llm_node ..> GraphBuilder : uses _state_to_dict
    llm_node ..> GraphBuilder : uses _render_template

File-Level Changes

Change: Normalize LLM node state handling to support both dict and Pydantic BaseModel inputs while keeping downstream logic unchanged.
Details:
  • Update llm_node signature to accept either a dict-like state or a Pydantic BaseModel instance
  • Convert the incoming state to a plain dict via an existing helper before use
  • Read messages from the normalized dict instead of the raw state object
  • Render the prompt template using the normalized dict rather than the original state
Files: service/app/agents/graph_builder.py
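The file-level changes above can be sketched end to end. Everything here is illustrative: `make_llm_node`, the `str.format`-style template rendering, and the fake LLM are assumptions; only `llm_node`, `_state_to_dict`, and `_render_template` are names taken from the reviewer's guide:

```python
from typing import Any, Dict

StateDict = Dict[str, Any]

def make_llm_node(llm, prompt_template: str):
    """Hypothetical node factory; the real _build_llm_node lives in
    service/app/agents/graph_builder.py and may differ in detail."""

    def _state_to_dict(state: Any) -> StateDict:
        # Accept either a plain dict or a Pydantic BaseModel.
        return state.model_dump() if hasattr(state, "model_dump") else dict(state)

    def _render_template(template: str, state: StateDict) -> str:
        # Assumed str.format-style rendering; the real helper may differ.
        return template.format(**state)

    def llm_node(state):
        state_dict = _state_to_dict(state)          # normalize once, up front
        messages = state_dict.get("messages", [])   # read from the normalized dict
        prompt = _render_template(prompt_template, state_dict)
        llm_messages = [{"role": "system", "content": prompt}, *messages]
        response = llm.invoke(llm_messages)
        return {"messages": [*messages, response]}  # updated StateDict

    return llm_node
```

With this shape, the same node function works whether LangGraph passes a dict or a BaseModel, which is exactly the Celery failure mode the PR fixes.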

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've left some high level feedback:

  • Since llm_node now accepts both StateDict and BaseModel, consider renaming the parameter type alias or adding a brief comment near its definition to make this union type expectation clearer for future callers.
  • Ensure _state_to_dict is side‑effect free and cheap to call, as it is now invoked on every node execution; if not, consider caching or constraining the accepted input types more narrowly.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Since `llm_node` now accepts both `StateDict` and `BaseModel`, consider renaming the parameter type alias or adding a brief comment near its definition to make this union type expectation clearer for future callers.
- Ensure `_state_to_dict` is side‑effect free and cheap to call, as it is now invoked on every node execution; if not, consider caching or constraining the accepted input types more narrowly.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@codecov

codecov bot commented Jan 19, 2026

Codecov Report

❌ Patch coverage is 0% with 4 lines in your changes missing coverage. Please review.

Files with missing lines               | Patch % | Lines
service/app/agents/graph_builder.py    | 0.00%   | 4 Missing ⚠️


Contributor

Copilot AI left a comment


Pull request overview

This PR fixes an issue where the llm_node function incorrectly assumed LangGraph would pass state as a dictionary, when in reality LangGraph passes state as a Pydantic BaseModel instance (created by build_state_class). The fix adds proper type handling to accept both dict and BaseModel inputs.

Changes:

  • Updated llm_node function signature to accept StateDict | BaseModel instead of just StateDict
  • Added _state_to_dict conversion to safely handle both dict and BaseModel state inputs
  • Updated comments to accurately reflect that state can be either a dict or Pydantic BaseModel

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.


  # Render prompt template
- prompt = self._render_template(llm_config.prompt_template, state)
+ prompt = self._render_template(llm_config.prompt_template, state_dict)

Copilot AI Jan 19, 2026


Passing state_dict to _render_template is redundant because _render_template already calls self._state_to_dict(state) internally at line 255. This results in _state_to_dict being called twice on the already-converted dict. Instead, pass state directly to _render_template to avoid the redundant conversion: prompt = self._render_template(llm_config.prompt_template, state)

Suggested change
- prompt = self._render_template(llm_config.prompt_template, state_dict)
+ prompt = self._render_template(llm_config.prompt_template, state)

@Mile-Away Mile-Away merged commit bb027f6 into main Jan 19, 2026
15 of 16 checks passed
@Mile-Away Mile-Away deleted the fix/celery-error branch January 19, 2026 16:08