fix: preload subagent skills in system prompt #2700
Open
Kiteeater wants to merge 5 commits into bytedance:main from
Conversation
Contributor
Pull request overview
Fixes subagent execution failures on OpenAI-compatible providers by ensuring preloaded skill content is merged into the subagent's effective `system_prompt` (so the model only sees a leading system message), instead of injecting skills as additional `SystemMessage` entries in conversation state.
Changes:
- Switch skill preloading from per-skill `SystemMessage` injection to a single merged skill prompt string appended to the runtime system prompt.
- Keep subagent initial state limited to the delegated `HumanMessage` task.
- Update tests/comments to reflect the new skill preloading behavior.
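The merge step described above can be sketched as follows. This is a minimal illustration with hypothetical helper names and a plain-string prompt format, not the PR's exact code:

```python
def merge_skill_prompt(base_prompt: str, skill_blocks: list[str]) -> str:
    """Append preloaded skill content to the subagent's base system prompt.

    Hypothetical sketch: returning one merged string means the model only
    ever sees a single leading system message, instead of extra
    SystemMessage entries in conversation state.
    """
    if not skill_blocks:
        return base_prompt
    skills_text = "\n\n".join(skill_blocks)
    return f"{base_prompt}\n\n# Preloaded skills\n\n{skills_text}"
```

Because the result is a plain string, it can be handed to whatever parameter the agent framework uses for its (single) system prompt, sidestepping the message-ordering problem entirely.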
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| backend/packages/harness/deerflow/subagents/executor.py | Switches skill preloading to prompt-string merging and removes skill SystemMessage injection from initial state. |
| backend/packages/harness/deerflow/tools/builtins/task_tool.py | Updates comments to reflect that skills are merged at subagent runtime (not appended to stored config). |
| backend/tests/test_task_tool_core_logic.py | Updates assertions/comments to match the new runtime skill merge behavior. |
| backend/tests/test_subagent_executor.py | Adds a regression test intended to ensure skills reach the model via a leading system message. |
```python
executor = SubagentExecutor(config=config, tools=[], thread_id="test-thread")

middleware_module = ModuleType("deerflow.agents.middlewares.tool_error_handling_middleware")
middleware_module.build_subagent_runtime_middlewares = lambda *, lazy_init=True: []
monkeypatch.setattr("deerflow.subagents.executor.ThreadState", None)
monkeypatch.setattr("deerflow.subagents.executor.create_chat_model", lambda **kwargs: CapturingChatModel(responses=["done"]))

with patch("deerflow.skills.loader.load_skills", return_value=[skill]):
```
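The test stubs out middleware and model construction and relies on a fake model that records what it receives. A minimal capturing chat model of this kind (a hypothetical stand-in, not the test suite's actual `CapturingChatModel`, and assuming messages arrive as `(role, content)` pairs) might look like:

```python
class CapturingChatModel:
    """Fake chat model: returns canned responses and records every
    message list it is invoked with, so a test can later assert that
    only the first message is a system message."""

    def __init__(self, responses):
        self.responses = list(responses)
        self.captured_messages = []  # one entry per invoke() call

    def invoke(self, messages):
        self.captured_messages.append(list(messages))
        return self.responses.pop(0)
```

After the subagent run, the test can inspect `captured_messages[0]` to verify the message ordering the provider would have seen.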
Comment on lines +1308 to +1316
```python
# -----------------------------------------------------------------------------
# Skill Preload Tests
# -----------------------------------------------------------------------------


class TestSkillPreload:
    @pytest.mark.anyio
    async def test_preloaded_skills_are_sent_in_the_leading_system_message(self, classes, base_config, monkeypatch):
        """Preloaded skills must reach the model without creating later SystemMessages."""
```
```diff
             tools=self.tools,
             middleware=middlewares,
-            system_prompt=self.config.system_prompt,
+            system_prompt=system_prompt or self.config.system_prompt,
```
Comment on lines +292 to 306
```diff
     async def _load_skill_prompt(self) -> str:
         """Load skill content into a single prompt block based on config.skills.

-        Aligned with Codex's pattern: each subagent loads its own skills
-        per-session and injects them as conversation items (developer messages),
-        not as system prompt text. The config.skills whitelist controls which
-        skills are loaded:
+        The config.skills whitelist controls which skills are loaded:
         - None: load all enabled skills
         - []: no skills
         - ["skill-a", "skill-b"]: only these skills

         Returns:
-            List of SystemMessages containing skill content.
+            System-prompt text containing preloaded skill content.
         """
         if self.config.skills is not None and len(self.config.skills) == 0:
             logger.info(f"[trace={self.trace_id}] Subagent {self.config.name} skills=[] — skipping skill loading")
-            return []
+            return ""
```
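The whitelist semantics in that docstring can be sketched as a plain function. This is a hypothetical illustration, assuming available skills are held in a name-to-content dict rather than whatever loader the executor actually uses:

```python
def load_skill_prompt(skills_filter, all_skills):
    """Resolve the config.skills whitelist into one prompt string:
    None  -> all enabled skills
    []    -> no skills (empty string)
    [...] -> only the named skills, in the given order
    """
    # Empty list is an explicit opt-out, distinct from None.
    if skills_filter is not None and len(skills_filter) == 0:
        return ""
    if skills_filter is None:
        selected = list(all_skills.values())
    else:
        selected = [all_skills[name] for name in skills_filter if name in all_skills]
    return "\n\n".join(selected)
```

The `None` / `[]` distinction matters: `None` means "no restriction", while an empty list deliberately disables skill loading, which is why the real method short-circuits to `""` in that case.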
Summary
Fixes #2693 by changing subagent skill preloading so skill content is merged into the subagent's effective `system_prompt` instead of being injected as `SystemMessage` entries in conversation state.

Previously, enabled skills were loaded as additional `SystemMessage`s before the task message. When LangChain also injected the subagent system prompt, OpenAI-compatible providers could receive system messages outside the first position and reject the request.

Changes
- Replace `_load_skill_messages()` with `_load_skill_prompt()`, returning a single skill prompt string.
- Merge the skill prompt into the system prompt passed to `create_agent(...)`.
- Keep subagent initial state limited to the delegated `HumanMessage`; no skill `SystemMessage` is injected into subagent initial state.
- Add a regression test asserting the `create_agent(...).astream(...)` path only sends a leading system message to the model.

Verification
- `uv run pytest tests/test_subagent_executor.py tests/test_task_tool_core_logic.py`
- `make lint`