feat(mistral): add support for OpenAI-style response format dictionaries with Mistral #535
Conversation
Thanks @HareeshBahuleyan!
Could you add a test to https://github.com/mozilla-ai/any-llm/blob/main/tests/unit/providers/test_mistral_provider.py? You can see examples of mocking the internal client in the other providers.
@daavoo A unit test has been added now 👍
patch("any_llm.providers.mistral.mistral.response_format_from_pydantic_model") as mocked_pydantic_converter,
patch("any_llm.providers.mistral.mistral.ResponseFormat") as mocked_response_format,
@HareeshBahuleyan I think we should not be mocking these Mistral functions; we should actually check that they work with the inputs we are passing.
Then we can assert on the arguments passed to mocked_mistralchat.complete_async.
Sure, updated the code to actually call the functions and verify that they pass the response_format argument with the expected schema to the completion() function.
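For illustration, the assertion style suggested above can be sketched in a self-contained way with unittest.mock. Names such as mocked_mistralchat and complete_async mirror the discussion here; the client wiring is a hypothetical stand-in, not the actual provider code:

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical stand-in for the mocked Mistral chat client.
mocked_mistralchat = AsyncMock()

async def call_provider():
    # In the real test, the provider under test makes this call internally;
    # here the mock is invoked directly to illustrate the assertion pattern.
    await mocked_mistralchat.complete_async(
        model="mistral-small-latest",
        response_format={"type": "json_object"},
    )

asyncio.run(call_provider())

# Assert on the arguments that were passed to the mocked client.
_, kwargs = mocked_mistralchat.complete_async.call_args
assert kwargs["response_format"] == {"type": "json_object"}
```

This keeps the real conversion functions unmocked while still verifying the payload that reaches the client.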
🚀
Current support is limited to params.response_format being a Pydantic model. This PR enables using an OpenAI-style schema dictionary as well. This will be useful when migrating the OpenAI Agents framework to use any-llm: mozilla-ai/any-agent#828