Description
Checks
- I have updated to the latest minor and patch version of Strands
- I have checked the documentation and this is not expected behavior
- I have searched ./issues and there are no duplicates of my issue
Strands Version
1.9.0
Tools Package Version
0.2.8
Tools used
- file_read
Python Version
3.12.9
Operating System
SageMaker JupyterLab Notebook
Installation Method
pip
Steps to Reproduce
When using structured_output with the file_read tool, the behavior is inconsistent across models:
- With Claude models, the call executes (though results hallucinate extra questions not present in the file).
- With Nova or OpenAI models, the execution fails with:

```
ValueError: Model returned stop_reason: end_turn instead of "tool_use".
```
Code to Reproduce
```python
from typing import List

from pydantic import BaseModel, Field

from strands import Agent
from strands.models import BedrockModel
from strands_tools import file_read


class Question(BaseModel):
    question_number: int = Field(..., description="The number of the question.")
    question_text: str = Field(..., description="The text of the question.")


class QuestionList(BaseModel):
    questions: List[Question] = Field(..., description="A list of questions from the file.")


# Using OpenAI model (also reproducible with Nova)
model = BedrockModel(model_id='openai.gpt-oss-20b-1:0')
json_agent = Agent(model=model, tools=[file_read])


def read_questions_from_file(file_path_in: str):
    """
    Reads questions from a file and returns them in a structured JSON format.
    """
    prompt = f"""
    The file {file_path_in} has a numbered list of questions.
    Read the file and return a list of questions in the specified JSON structure.
    """
    response = json_agent.structured_output(QuestionList, prompt)
    return response


file_path_in = "questions.txt"
response = read_questions_from_file(file_path_in)
print(response.model_dump_json(indent=2))
```
Create a simple questions.txt file with:

```
1. What is the capital of France?
2. What is 2+2?
3. Who wrote Hamlet?
```
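For convenience, the fixture can be generated with a few lines of standard-library Python (the filename matches the one used in the reproduction script):

```python
# Generate the questions.txt fixture used by the reproduction script.
from pathlib import Path

questions = [
    "What is the capital of France?",
    "What is 2+2?",
    "Who wrote Hamlet?",
]

# Number each question "1. ...", one per line, as shown above.
content = "".join(f"{i}. {q}\n" for i, q in enumerate(questions, start=1))
Path("questions.txt").write_text(content)
print(content, end="")
```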
Run the above script using openai.gpt-oss-20b-1:0 or nova.* models.
Observe error.
Expected Behavior
The tool should execute consistently across all Bedrock models.
Either structured output should work the same way as it does with Claude, or, if unsupported, the error should be caught and surfaced clearly.
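For the three sample questions, consistent behavior across providers would mean the script prints JSON shaped like this (field names taken from the QuestionList model in the reproduction code; shown for illustration):

```json
{
  "questions": [
    {"question_number": 1, "question_text": "What is the capital of France?"},
    {"question_number": 2, "question_text": "What is 2+2?"},
    {"question_number": 3, "question_text": "Who wrote Hamlet?"}
  ]
}
```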
Actual Behavior
With Claude models → executes, but hallucinates.
With Nova/OpenAI models → raises error:

```
ValueError: Model returned stop_reason: end_turn instead of "tool_use".
```
Additional Context
This seems related to how different model providers handle tool invocation and structured_output.
Request: Please clarify whether structured_output + tools is expected to work across all models (Claude, Nova, OpenAI), or only a subset.
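As a caller-side stopgap (not a fix), the ValueError can be caught and re-raised with a clearer message. This is a hedged sketch: `fake_structured_output` is a hypothetical stand-in for `json_agent.structured_output`, since exercising the real call requires live Bedrock access.

```python
# Sketch: surface the provider-specific failure clearly instead of letting
# the bare ValueError propagate. `fake_structured_output` is a hypothetical
# stand-in that mimics the Nova/OpenAI-on-Bedrock behavior reported above.

def fake_structured_output(output_model, prompt):
    raise ValueError('Model returned stop_reason: end_turn instead of "tool_use".')

def structured_output_or_explain(call, output_model, prompt):
    try:
        return call(output_model, prompt)
    except ValueError as exc:
        if "stop_reason" in str(exc):
            # Re-raise with an actionable message instead of a bare ValueError.
            raise RuntimeError(
                "This model did not emit a tool_use block for structured output; "
                "structured_output + tools may only be supported on a subset of "
                "Bedrock models (e.g. Claude)."
            ) from exc
        raise

try:
    structured_output_or_explain(fake_structured_output, None, "prompt")
except RuntimeError as exc:
    print(exc)
```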
Possible Solution
No response
Related Issues
No response