
Gemini models via Openrouter not supported #5621

Open
ravishqureshi opened this issue Feb 19, 2025 · 9 comments

Comments


ravishqureshi commented Feb 19, 2025

What happened?

The following code snippet works:

from autogen_ext.models.openai import OpenAIChatCompletionClient

config = {
    "model": "anthropic/claude-3.5-sonnet",
    "base_url": "https://openrouter.ai/api/v1",
    "model_info": {
        "vision": True,
        "function_calling": True,
        "json_output": False,
        "family": "claude-3.5-sonnet",
    },
}
model = config["model"]
api_key = settings.OPENROUTER_KEY  # OpenRouter API key from app settings
base_url = config["base_url"]
model_info = config.get("model_info", {})

model_client = OpenAIChatCompletionClient(
    model=model,
    api_key=api_key,
    base_url=base_url,
    model_info=model_info,
)
# ... create the agent, etc. (rest of the code omitted) ...
response = await agent.on_messages(messages=messages, cancellation_token=cancellation_token)

The above code also works when we change
"model": "anthropic/claude-3.5-sonnet" -> "model": "openai/gpt-4o-2024-11-20"
and
"family": "claude-3.5-sonnet" -> "family": "gpt-4o".

However, it fails when I change the model to Gemini Flash from https://openrouter.ai/google/gemini-2.0-flash-001, i.e.
"model": "google/gemini-2.0-flash-001"
and
"family": "gemini-2.0-flash" (picked from https://microsoft.github.io/autogen/stable//reference/python/autogen_core.models.html#autogen_core.models.ModelInfo). I tried with family "unknown" as well.

venv_autogen_latest/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py:416: UserWarning: Resolved model mismatch: google/gemini-2.0-flash-001 != None. Model mapping in autogen_ext.models.openai may be incorrect.
  model_result = await self._model_client.create(
=== Exception during agent.on_messages call ===
'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "/Users/ravishq/Library/CloudStorage/[email protected]/My Drive/iamai/autogen-ms/agent_backyard.py", line 38, in run_task
    response = await agent.on_messages(messages=messages,cancellation_token=cancellation_token)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ravishq/Library/CloudStorage/[email protected]/My Drive/iamai/venv_autogen_latest/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 370, in on_messages
    async for message in self.on_messages_stream(messages, cancellation_token):
  File "/Users/ravishq/Library/CloudStorage/[email protected]/My Drive/iamai/venv_autogen_latest/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 416, in on_messages_stream
    model_result = await self._model_client.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ravishq/Library/CloudStorage/[email protected]/My Drive/iamai/venv_autogen_latest/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py", line 569, in create
    choice: Union[ParsedChoice[Any], ParsedChoice[BaseModel], Choice] = result.choices[0]
                                                                        ~~~~~~~~~~~~~~^^^
TypeError: 'NoneType' object is not subscriptable
Error: 'NoneType' object is not subscriptable

Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python 0.4.7

Other library version.

No response

Model used

gpt-4o, claude-3.5-sonnet, gemini-2.0-flash

Model provider

OpenRouter

Other model provider

No response

Python version

3.12

.NET version

None

Operating system

macOS

jackgerrits (Member) commented Feb 20, 2025

@ekzhu do you know if OpenRouter presents all models as OpenAI-compatible, or is Gemini different?

ravishqureshi (Author) commented

@jackgerrits - OpenRouter's claim to fame is that it provides a unified API: all models can be accessed via an OpenAI-compatible API schema.

https://openrouter.ai/docs/quickstart
https://openrouter.ai/docs/api-reference/overview
Verbatim from the link above:
OpenRouter's request and response schemas are very similar to the OpenAI Chat API, with a few small differences. At a high level, OpenRouter normalizes the schema across models and providers so you only need to learn one.

More verbatim text, from https://openrouter.ai/openai/o1/api:
OpenRouter provides an OpenAI-compatible completion API to 300+ models & providers that you can call directly, or using the OpenAI SDK. Additionally, some third-party SDKs are available.

They do say "very similar", but there is a reason the AI community is doubling down on OpenRouter and LiteLLM: we want a single interface for all AI models so integrations stay model-agnostic. Hope this helps. If you find that Gemini's response is not OpenAI-compatible, please post the logs here and I will file a bug with OpenRouter. However, it seems to me this is not even about the API response format; there is some other mapping problem in AutoGen:

UserWarning: Resolved model mismatch: google/gemini-2.0-flash-001 != None. Model mapping in autogen_ext.models.openai may be incorrect.

Copy-pasta :) pasting the exact error from this ticket again so that you can search for it in your code base.

Waiting for this fix! Let's keep building!! Hyped that MS is putting some of the best minds in the world on this... doubling down on AutoGen while turning my back on LangChain, CrewAI, and smolagents... lfg!

jackgerrits (Member) commented

UserWarning: Resolved model mismatch: google/gemini-2.0-flash-001 != None. Model mapping in autogen_ext.models.openai may be incorrect.

Yeah, I think this warning is probably okay, but I could be wrong here.

The error in the issue indicates that result.choices is None. It would be good to reduce the repro down to just a model client call (see the sketch at the end of this comment).

I don't have access to open router at the moment, so I will wait to see what @ekzhu thinks.
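
For reference, a minimal repro along those lines might look like the following. This is a hypothetical standalone script (not from the thread), reusing the OpenRouter config from this issue; OPENROUTER_KEY is assumed to be set in the environment:

# Minimal repro sketch: a single model client call, no agent involved.
import asyncio
import os

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="google/gemini-2.0-flash-001",
        api_key=os.environ["OPENROUTER_KEY"],  # assumed env var
        base_url="https://openrouter.ai/api/v1",
        model_info={
            "vision": True,
            "function_calling": True,
            "json_output": False,
            "family": "gemini-2.0-flash",
        },
    )
    # If the bug reproduces, this call raises before returning a result.
    result = await model_client.create(
        [UserMessage(content="Say hello.", source="user")]
    )
    print(result.content)


asyncio.run(main())

If this fails the same way, the agent layer can be ruled out.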

Zochory commented Feb 21, 2025

You could try adding extra headers, like this:

[Image: code sample showing extra headers passed to the client]

In that example it worked; I have not tried it with OpenRouter yet.
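
For reference, OpenRouter documents optional attribution headers (HTTP-Referer and X-Title). Assuming OpenAIChatCompletionClient forwards default_headers to the underlying openai.AsyncOpenAI client (an assumption, not verified here), passing them might look like:

import os

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="google/gemini-2.0-flash-001",
    api_key=os.environ["OPENROUTER_KEY"],  # assumed env var
    base_url="https://openrouter.ai/api/v1",
    # Assumption: default_headers is passed through to openai.AsyncOpenAI.
    default_headers={
        "HTTP-Referer": "https://your-site.example",  # placeholder value
        "X-Title": "Your App Name",                   # placeholder value
    },
    model_info={
        "vision": True,
        "function_calling": True,
        "json_output": False,
        "family": "gemini-2.0-flash",
    },
)

These headers are optional in OpenRouter's docs, so this is more of a sanity check than a likely fix.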

ekzhu (Collaborator) commented Feb 21, 2025

I am getting the same error from OpenRouter when using Claude models, but it works with OpenAI models.

See my response in #5583.

At this point I don't know what the cause is.

ravishqureshi (Author) commented Feb 21, 2025

@ekzhu I am hoping you changed the family in your code when trying Claude in the snippet you pasted in #5583.

Because Claude works just fine for me. This is the config that works:

{
    "model": "anthropic/claude-3.5-sonnet",
    "base_url": "https://openrouter.ai/api/v1",
    "api_type": "anthropic",
    "model_info": {
        "vision": True,
        "function_calling": True,
        "json_output": False,
        "family": "claude-3.5-sonnet",
    },
}

Ignore the "api_type" key. Let me know if the above doesn't work for you with Claude models either. Like I said: Claude works, Gemini doesn't. We need to be on the same page about the reproducibility of this issue, or it will die a slow death, and so would my project :D

Awaiting your response on this...

ekzhu (Collaborator) commented Feb 21, 2025

It's more about the model name than the model family. Have you tried calling OpenRouter directly using the openai library? From the error message, it looks like the failure happened because the server returned None in result.choices.
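
A direct call along those lines might look like this (a minimal sketch, not from the thread; OPENROUTER_KEY is an assumed env var):

import asyncio
import os

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI(
        api_key=os.environ["OPENROUTER_KEY"],
        base_url="https://openrouter.ai/api/v1",
    )
    result = await client.chat.completions.create(
        model="google/gemini-2.0-flash-001",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    # OpenRouter can return an HTTP 200 whose body carries a provider
    # error instead of choices, in which case result.choices is None.
    print(result.choices)
    print(result.model_extra)  # extra fields, e.g. an 'error' payload


asyncio.run(main())

If choices comes back as None here too, the bug is on the OpenRouter/provider side rather than in autogen.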

philippHorn (Contributor) commented Feb 22, 2025

I had the same error and inspected the result object that the traceback points at:

result.model_extra
Out[2]: 
{'error': {'message': 'Provider returned error',
  'code': 400,
  'metadata': {'raw': '{"type":"error","error":{"type":"invalid_request_error","message":"Requests which include `tool_use` or `tool_result` blocks must define tools."}}',
   'provider_name': 'Google',
   'isDownstreamPipeClean': True,
   'isErrorUpstreamFault': False}},
 'user_id': 'xxx'}

philippHorn (Contributor) commented

I looked a bit more. I think the problem is:

  • OpenAI allows tool calls to be in the message history even when the current API call does not include tools for the model.
  • Some OpenRouter providers seem not to allow this (a repro sketch follows after the message list below).

These were the messages sent to the LLM when I hit the error:

[{'content': 'You are a helpful assistant.', 'role': 'system'},
 {'content': 'What is the weather in New York?',
  'role': 'user',
  'name': 'user'},
 {'tool_calls': [{'id': 'toolu_vrtx_01UonpGhPPQbzMNj8JaSREjv',
    'function': {'arguments': '{"city": "New York"}', 'name': 'get_weather'},
    'type': 'function'}],
  'role': 'assistant',
  'name': 'weather_agent'},
 {'content': 'The weather in New York is 73 degrees and Sunny.',
  'role': 'tool',
  'tool_call_id': 'toolu_vrtx_01UonpGhPPQbzMNj8JaSREjv'}]
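
To test that theory outside of autogen, one could replay a history like the one above without a tools parameter (a hedged sketch using the openai library; model choice and env var are my assumptions):

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENROUTER_KEY"],
    base_url="https://openrouter.ai/api/v1",
)

# History containing a tool call and its result, as captured above.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in New York?"},
    {"role": "assistant", "tool_calls": [{
        "id": "toolu_vrtx_01UonpGhPPQbzMNj8JaSREjv",
        "function": {"arguments": '{"city": "New York"}', "name": "get_weather"},
        "type": "function"}]},
    {"role": "tool",
     "content": "The weather in New York is 73 degrees and Sunny.",
     "tool_call_id": "toolu_vrtx_01UonpGhPPQbzMNj8JaSREjv"},
]

# Note: no `tools` kwarg. OpenAI accepts this; per the error above, the
# Google provider behind OpenRouter rejects it ("Requests which include
# `tool_use` or `tool_result` blocks must define tools.").
result = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",
    messages=history,
)
print(result.choices)      # expected None when the provider errors
print(result.model_extra)  # expected to contain the error payload

If that is confirmed, a workaround would be to always pass the tool definitions when the history contains tool calls.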
