
fix: preload subagent skills in system prompt#2700

Open
Kiteeater wants to merge 5 commits into bytedance:main from Kiteeater:fix/subagent-progressive-skill-loading

Conversation

@Kiteeater
Contributor

Summary

Fixes #2693 by changing subagent skill preloading so skill content is merged into the subagent's effective system_prompt instead of being injected as SystemMessage entries in conversation state.

Previously, enabled skills were loaded as additional SystemMessages before the task message. When LangChain also injected the subagent system prompt, OpenAI-compatible providers could receive system messages outside the first position and reject the request with:

System message must be at the beginning of the message list
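
To make the failure mode concrete, here is a minimal sketch of the ordering rule that OpenAI-compatible providers enforce. The plain-dict message shapes and `validate_message_order` helper are illustrative stand-ins, not DeerFlow's or the provider SDK's actual types:

```python
def validate_message_order(messages: list[dict]) -> None:
    """Reject any system message that is not the first message."""
    for i, msg in enumerate(messages):
        if msg["role"] == "system" and i != 0:
            raise ValueError("System message must be at the beginning of the message list")

# Pre-fix shape: skill content injected as extra SystemMessages in state,
# landing after the LangChain-injected subagent system prompt.
broken = [
    {"role": "system", "content": "subagent prompt"},
    {"role": "system", "content": "skill: web-search"},  # not at index 0 -> rejected
    {"role": "user", "content": "delegated task"},
]

# Post-fix shape: skill content merged into the single leading system prompt.
fixed = [
    {"role": "system", "content": "subagent prompt\n\nskill: web-search"},
    {"role": "user", "content": "delegated task"},
]
```

Running the validator raises for `broken` and passes for `fixed`, which is exactly the behavior difference this PR targets.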

Changes

  • Replace _load_skill_messages() with _load_skill_prompt(), returning a single skill prompt string.
  • Merge preloaded skill content into the runtime system prompt passed to create_agent(...).
  • Keep subagent initial state limited to the delegated task HumanMessage.
  • Update task-tool comments to reflect that skills are merged at subagent runtime, not appended to stored config.
  • Add regression tests for:
    • no SystemMessage in subagent initial state
    • skill content being passed through the subagent system prompt
    • real LangChain create_agent(...).astream(...) path only sending a leading system message to the model
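
The merge step in the list above can be sketched as follows. `merge_skill_prompt`, the base prompt text, and the blank-line delimiter are assumptions for illustration, not the PR's exact helper:

```python
def merge_skill_prompt(base_prompt: str, skill_prompt: str) -> str:
    """Append preloaded skill content to the subagent's system prompt."""
    if not skill_prompt:
        return base_prompt
    return f"{base_prompt}\n\n{skill_prompt}"

# create_agent(...) would receive the merged prompt, while the subagent's
# initial state stays limited to the delegated task HumanMessage.
system_prompt = merge_skill_prompt(
    "You are a research subagent.",
    "## Skill: web-search\nUse the web-search tool for fresh information.",
)
initial_state = {"messages": [{"role": "user", "content": "delegated task"}]}
```

The design point is that skill text travels inside the one system prompt the framework already injects at position 0, so no second system message ever enters the conversation.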

Verification

  • uv run pytest tests/test_subagent_executor.py tests/test_task_tool_core_logic.py
  • make lint

Contributor

Copilot AI left a comment


Pull request overview

Fixes subagent execution failures on OpenAI-compatible providers by ensuring preloaded skill content is merged into the subagent’s effective system_prompt (so the model only sees a leading system message), instead of injecting skills as additional SystemMessage entries in conversation state.

Changes:

  • Switch skill preloading from per-skill SystemMessage injection to a single merged skill-prompt string, appended to the runtime system prompt.
  • Keep subagent initial state limited to the delegated HumanMessage task.
  • Update tests/comments to reflect the new skill preloading behavior.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| backend/packages/harness/deerflow/subagents/executor.py | Switches skill preloading to prompt-string merging and removes skill SystemMessage injection from initial state. |
| backend/packages/harness/deerflow/tools/builtins/task_tool.py | Updates comments to reflect that skills are merged at subagent runtime (not appended to stored config). |
| backend/tests/test_task_tool_core_logic.py | Updates assertions/comments to match the new runtime skill merge behavior. |
| backend/tests/test_subagent_executor.py | Adds a regression test intended to ensure skills reach the model via a leading system message. |
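
The regression test in the last row can be sketched with a capturing fake model. `CapturingChatModel` and the plain-dict messages below are illustrative assumptions, not the repo's actual test fixtures:

```python
class CapturingChatModel:
    """Fake model that records exactly what the provider would receive."""

    def __init__(self) -> None:
        self.captured: list[dict] = []

    def invoke(self, messages: list[dict]) -> dict:
        self.captured = list(messages)
        return {"role": "assistant", "content": "done"}

model = CapturingChatModel()
model.invoke([
    {"role": "system", "content": "subagent prompt\n\n## Skill: web-search"},
    {"role": "user", "content": "delegated task"},
])

# The regression assertion: system messages appear only at index 0, and the
# skill content rode along inside that single leading system message.
system_positions = [i for i, m in enumerate(model.captured) if m["role"] == "system"]
assert system_positions == [0]
assert "## Skill" in model.captured[0]["content"]
```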

Comment thread on backend/tests/test_subagent_executor.py (outdated):

```python
executor = SubagentExecutor(config=config, tools=[], thread_id="test-thread")

middleware_module = ModuleType("deerflow.agents.middlewares.tool_error_handling_middleware")
middleware_module.build_subagent_runtime_middlewares = lambda *, lazy_init=True: []
```
Comment thread on backend/tests/test_subagent_executor.py (outdated):

```python
monkeypatch.setattr("deerflow.subagents.executor.ThreadState", None)
monkeypatch.setattr("deerflow.subagents.executor.create_chat_model", lambda **kwargs: CapturingChatModel(responses=["done"]))

with patch("deerflow.skills.loader.load_skills", return_value=[skill]):
```
Comment on lines +1308 to +1316:

```python
# -----------------------------------------------------------------------------
# Skill Preload Tests
# -----------------------------------------------------------------------------


class TestSkillPreload:
    @pytest.mark.anyio
    async def test_preloaded_skills_are_sent_in_the_leading_system_message(self, classes, base_config, monkeypatch):
        """Preloaded skills must reach the model without creating later SystemMessages."""
```
```diff
     tools=self.tools,
     middleware=middlewares,
-    system_prompt=self.config.system_prompt,
+    system_prompt=system_prompt or self.config.system_prompt,
```
Comment on lines +292 to 306
```diff
     async def _load_skill_prompt(self) -> str:
         """Load skill content into a single prompt block based on config.skills.

-        Aligned with Codex's pattern: each subagent loads its own skills
-        per-session and injects them as conversation items (developer messages),
-        not as system prompt text. The config.skills whitelist controls which
-        skills are loaded:
+        The config.skills whitelist controls which skills are loaded:
         - None: load all enabled skills
         - []: no skills
         - ["skill-a", "skill-b"]: only these skills

         Returns:
-            List of SystemMessages containing skill content.
+            System-prompt text containing preloaded skill content.
         """
         if self.config.skills is not None and len(self.config.skills) == 0:
             logger.info(f"[trace={self.trace_id}] Subagent {self.config.name} skills=[] — skipping skill loading")
-            return []
+            return ""
```


Development

Successfully merging this pull request may close these issues.

Subagent invocation error
