
Conversation

HareeshBahuleyan
Contributor

Current support is limited to `params.response_format` being a Pydantic model. This PR enables passing an OpenAI-style JSON schema as well. This will be useful when migrating the OpenAI Agents framework to use any-llm: mozilla-ai/any-agent#828
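To illustrate the two input shapes involved, here is a minimal sketch of the existing Pydantic path next to the OpenAI-style `response_format` dict this PR adds support for. The `City` model and its schema are illustrative assumptions; only the dict's outer structure follows OpenAI's documented structured-outputs format.

```python
from pydantic import BaseModel


class City(BaseModel):
    name: str
    population: int


# Existing path: pass the Pydantic model class directly as response_format.
pydantic_format = City

# New path: pass an OpenAI-style JSON schema dict instead.
openai_schema_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "City",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["name", "population"],
            "additionalProperties": False,
        },
        "strict": True,
    },
}
```

With this change, either value can be handed to the Mistral provider, which converts it into the client's `ResponseFormat` before calling the API.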

@HareeshBahuleyan HareeshBahuleyan self-assigned this Oct 10, 2025

codecov bot commented Oct 10, 2025

Codecov Report

❌ Patch coverage is 83.33% with 1 line in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/any_llm/providers/mistral/mistral.py | 83.33% | 0 Missing and 1 partial ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| src/any_llm/providers/mistral/mistral.py | 82.05% <83.33%> (-13.90%) ⬇️ |

... and 33 files with indirect coverage changes


@HareeshBahuleyan
Contributor Author

Ran CI tests:
https://github.com/mozilla-ai/any-llm/actions/runs/18411357927/job/52464763632
The failures observed there appear to be unrelated: they occur in other providers, not in Mistral, which is the only provider changed in this PR.

Contributor

@daavoo daavoo left a comment


Thanks @HareeshBahuleyan !

Could you add a test to https://github.com/mozilla-ai/any-llm/blob/main/tests/unit/providers/test_mistral_provider.py? You can find examples of mocking the internal client in the other providers' tests.

@HareeshBahuleyan
Contributor Author

@daavoo A unit test has been added now 👍

Comment on lines 113 to 114
patch("any_llm.providers.mistral.mistral.response_format_from_pydantic_model") as mocked_pydantic_converter,
patch("any_llm.providers.mistral.mistral.ResponseFormat") as mocked_response_format,
Contributor


@HareeshBahuleyan I think we should not be mocking these Mistral functions; we should actually check that they work with the inputs we are passing.

Then, we can assert on the arguments passed to mocked_mistralchat.complete_async

Suggested change
patch("any_llm.providers.mistral.mistral.response_format_from_pydantic_model") as mocked_pydantic_converter,
patch("any_llm.providers.mistral.mistral.ResponseFormat") as mocked_response_format,

Contributor Author


Sure, updated the code to actually call these functions and verify that they pass a response_format argument with the expected schema to the completion() function.
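The pattern discussed here (mock only the outermost client call, let the real conversion helpers run, then assert on the kwargs the client received) can be sketched in a self-contained way. The dummy client and `convert_response_format` stand-in below are illustrative assumptions, not any-llm's actual code; only the `unittest.mock` usage reflects the suggested test structure.

```python
import asyncio
from unittest.mock import AsyncMock, patch


class DummyClient:
    """Stand-in for the Mistral SDK client used by the provider."""

    async def complete_async(self, **kwargs):
        raise RuntimeError("network call — must be mocked in tests")


def convert_response_format(fmt):
    # Stand-in for the real conversion (response_format_from_pydantic_model /
    # ResponseFormat); here it just wraps the schema dict.
    return {"type": "json_schema", "json_schema": fmt}


async def acompletion(client, messages, response_format):
    # The conversion helper runs for real; only the client call is mocked.
    converted = convert_response_format(response_format)
    return await client.complete_async(messages=messages, response_format=converted)


client = DummyClient()
with patch.object(client, "complete_async", new=AsyncMock(return_value="ok")) as mocked:
    result = asyncio.run(
        acompletion(client, [{"role": "user", "content": "hi"}], {"name": "City"})
    )

# Assert on the arguments the (mocked) client actually received.
forwarded = mocked.call_args.kwargs["response_format"]
```

The point of the pattern is that the assertion covers the output of the unmocked conversion code, so a regression in the conversion would fail the test even though the network call never happens.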

Contributor

@daavoo daavoo left a comment


🚀

@daavoo daavoo merged commit 242704a into main Oct 13, 2025
10 checks passed
@daavoo daavoo deleted the mistral-openai-response-format branch October 13, 2025 19:21
