Multi-Provider Model Compatibility Issue & Fix (one-liner) #121

@mehmetcavus

Description
When using Anthropic (claude-sonnet-4-0) or Amazon Bedrock (anthropic.claude-sonnet-4-20250514-v1:0) models with the @ai-sdk-tools/agents package, the following error occurred during agent handoffs:

Error [AI_UnsupportedFunctionalityError]: 'Multiple system messages that are separated by user/assistant messages' functionality not supported.

OpenAI models worked fine because they tolerate system messages appearing mid-conversation, masking this compatibility issue.

Additionally, providers other than OpenAI do not support injecting system-level instructions once the conversation has started.

https://github.com/midday-ai/ai-sdk-tools/blob/a1cc55585c5f6f70dbf1986eb46a993eced76757/packages/agents/src/tool-result-extractor.ts#L136C7-L145C41


Root Cause Analysis

The Problem

In packages/agents/src/tool-result-extractor.ts, when an agent handoff occurred, the tool result extractor was adding a system message with handoff context after user messages had already been sent.

This created the following message sequence:

  1. Initial system message (agent instructions) ✅
  2. User message ✅
  3. Another system message (handoff context) ❌ ← Problem!
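The failing sequence can be sketched as follows. This is a hypothetical reconstruction (the message contents are invented for illustration), with a small check mirroring the Anthropic/Bedrock constraint that all system messages must form a prefix of the conversation:

```typescript
// Minimal stand-in for the AI SDK's message type.
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

const messages: ModelMessage[] = [
  { role: "system", content: "You are the billing agent..." }, // 1. agent instructions — OK
  { role: "user", content: "What is my invoice total?" },      // 2. user turn — OK
  { role: "system", content: "[Handoff context] ..." },        // 3. system after user — rejected
];

// Anthropic/Bedrock constraint: no system message may appear
// after the first user/assistant message.
const firstNonSystem = messages.findIndex((m) => m.role !== "system");
const invalid = messages.some(
  (m, i) => m.role === "system" && firstNonSystem !== -1 && i > firstNonSystem,
);
console.log(invalid); // true — this ordering triggers AI_UnsupportedFunctionalityError
```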

Why It Failed

  • Anthropic & Bedrock: Require all system messages to appear at the start of the conversation, before any user/assistant messages
  • OpenAI: Tolerates system messages mid-conversation (non-standard behavior)
  • AI SDK: Correctly enforces provider-specific constraints

Solution (Option A)

Changed the handoff context message from role: 'system' to role: 'user' in the tool result extractor.

File: packages/agents/src/tool-result-extractor.ts (lines 130-137)

After:

// Add a user message with the available data instead of system message
// Note: Must use 'user' role for compatibility with Anthropic/Bedrock
// They don't support system messages after the conversation has started (after user/assistant messages)
// OpenAI tolerates system messages mid-conversation, but it's not standard
const dataMessage: ModelMessage = {
  role: 'user',
  content: `[Context from previous agent]\n${dataSummary}\n\n**IMPORTANT**: Only use this data if it's DIRECTLY relevant to the current user question. If the user is asking about something different, ignore this data and call the appropriate tools.`
};

Why This Works

  • User messages are allowed at any point in the conversation for all providers
  • The semantic meaning is preserved (providing context to the agent)
  • The AI SDK handles the message correctly across all providers
  • No change in agent behavior or response quality
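Applying the same prefix check to the fixed sequence shows why all providers accept it (again a sketch with invented message contents):

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// After the fix: handoff context travels as a user message.
const fixed: ModelMessage[] = [
  { role: "system", content: "You are the billing agent..." },
  { role: "user", content: "What is my invoice total?" },
  { role: "user", content: "[Context from previous agent]\n..." },
];

const firstNonSystem = fixed.findIndex((m) => m.role !== "system");
const hasMidConversationSystem = fixed.some(
  (m, i) => m.role === "system" && firstNonSystem !== -1 && i > firstNonSystem,
);
console.log(hasMidConversationSystem); // false — accepted by Anthropic, Bedrock, and OpenAI
```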

Option B

Alternatively, this could be handled as in the AI SDK's cache control example, by providing multiple system messages at the head of the messages array. However, I haven't tested this approach.

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  messages: [
    {
      role: 'system',
      content: 'Cached system message part',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'system',
      content: 'Uncached system message part',
    },
    {
      role: 'user',
      content: 'User prompt',
    },
  ],
});
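For Option B to work with handoffs, the extractor would need to place the handoff context at the head of the array rather than appending it. A hypothetical helper (`hoistSystemMessages` is not part of the package, just an illustration) could reorder an existing sequence so that all system messages form a valid prefix:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: move all system messages to the front,
// preserving their relative order, so the sequence satisfies
// the Anthropic/Bedrock "system messages first" constraint.
function hoistSystemMessages(messages: ModelMessage[]): ModelMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest];
}

const reordered = hoistSystemMessages([
  { role: "system", content: "Agent instructions" },
  { role: "user", content: "User prompt" },
  { role: "system", content: "Handoff context" },
]);
console.log(reordered.map((m) => m.role).join(",")); // system,system,user
```

Note that reordering changes where the context appears relative to the user's turn, which could subtly alter model behavior compared to Option A; that trade-off would need testing.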

@pontusab I am not sure if changing system message to user message has side effects for ai-sdk-tools and its roadmap. What do you think?
