Description
When using Anthropic (`claude-sonnet-4-0`) or Amazon Bedrock (`anthropic.claude-sonnet-4-20250514-v1:0`) models with the `@ai-sdk-tools/agents` package, the following error occurred during agent handoffs:

```
Error [AI_UnsupportedFunctionalityError]: 'Multiple system messages that are separated by user/assistant messages' functionality not supported.
```
OpenAI models worked fine because they tolerate system messages appearing mid-conversation, masking this compatibility issue.
Other providers may likewise reject system messages injected after the conversation has started.
Root Cause Analysis
The Problem
In `packages/agents/src/tool-result-extractor.ts`, when an agent handoff occurred, the tool result extractor was adding a system message with handoff context after user messages had already been sent.
This created the following message sequence:
- Initial system message (agent instructions) ✅
- User message ✅
- Another system message (handoff context) ❌ ← Problem!
Why It Failed
- Anthropic & Bedrock: Require all system messages to appear at the start of the conversation, before any user/assistant messages
- OpenAI: Tolerates system messages mid-conversation (non-standard behavior)
- AI SDK: Correctly enforces provider-specific constraints
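The constraint in the first two bullets can be sketched as a small check. This is a minimal illustration, not AI SDK code: the simplified `ModelMessage` type and the `hasMisplacedSystemMessage` helper are assumptions made for the example.

```typescript
// Simplified stand-in for the AI SDK's ModelMessage type (illustrative only).
type ModelMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Mimics the rule Anthropic/Bedrock enforce: every system message must
// appear before the first user/assistant message.
function hasMisplacedSystemMessage(messages: ModelMessage[]): boolean {
  let conversationStarted = false;
  for (const m of messages) {
    if (m.role === 'system' && conversationStarted) return true;
    if (m.role !== 'system') conversationStarted = true;
  }
  return false;
}

// The sequence produced during a handoff before the fix:
const handoffMessages: ModelMessage[] = [
  { role: 'system', content: 'Agent instructions' },
  { role: 'user', content: 'User question' },
  { role: 'system', content: '[Context from previous agent] ...' }, // ← rejected
];

console.log(hasMisplacedSystemMessage(handoffMessages)); // true → Anthropic/Bedrock reject this
```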
Solution (Option A)
Changed the handoff context message from `role: 'system'` to `role: 'user'` in the tool result extractor.
File: `packages/agents/src/tool-result-extractor.ts` (lines 130-137)
After:
```ts
// Add a user message with the available data instead of a system message.
// Note: must use the 'user' role for compatibility with Anthropic/Bedrock.
// They don't support system messages after the conversation has started
// (i.e. after user/assistant messages). OpenAI tolerates system messages
// mid-conversation, but that behavior is non-standard.
const dataMessage: ModelMessage = {
  role: 'user',
  content: `[Context from previous agent]\n${dataSummary}\n\n**IMPORTANT**: Only use this data if it's DIRECTLY relevant to the current user question. If the user is asking about something different, ignore this data and call the appropriate tools.`,
};
```

Why This Works
- User messages are allowed at any point in the conversation for all providers
- The semantic meaning is preserved (providing context to the agent)
- The AI SDK handles the message correctly across all providers
- No change in agent behavior or response quality
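A minimal sketch of the resulting message sequence, with an inline version of the ordering rule (the `Msg` type and message contents are illustrative assumptions, not code from the package):

```typescript
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

// After the fix, the handoff context travels as a user message, so the only
// system message stays at the head of the conversation.
const messages: Msg[] = [
  { role: 'system', content: 'Agent instructions' },
  { role: 'user', content: 'User question' },
  { role: 'user', content: '[Context from previous agent] ...' },
];

// Inline check: no system message may follow a user/assistant message.
const firstNonSystem = messages.findIndex((m) => m.role !== 'system');
const valid = messages.every((m, i) => m.role !== 'system' || i < firstNonSystem);
console.log(valid); // true → accepted by Anthropic, Bedrock, and OpenAI
```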
Option B
Alternatively, this could be handled as in the cache control example, by providing multiple system messages at the head of the messages array. I haven't tested it, though.
```ts
const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  messages: [
    {
      role: 'system',
      content: 'Cached system message part',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'system',
      content: 'Uncached system message part',
    },
    {
      role: 'user',
      content: 'User prompt',
    },
  ],
});
```

@pontusab I am not sure whether changing the system message to a user message has side effects for ai-sdk-tools and its roadmap. What do you think?