-
Have you updated the workflow to use the updated agent? In my experience, an existing workflow will not have configuration changes propagated from agents/models.
-
I have Ollama serving LLMs to AutoGen Studio via the Model setup. The models test perfectly and show as connected. Each of the agents in the workflow uses Ollama models exclusively; there is no reference to OpenAI anywhere in the GroupChat workflow.
The workflow begins the task and the agents respond via the Ollama connection. However, why am I getting an OpenAI error when OpenAI shouldn't be called at all? See below.
(Yes, the RateLimitError is what is driving me to use local Ollama models. I will go broke testing AutoGen if I keep using the OpenAI API.)
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
Note: I am on Windows running Autogen Studio in a conda env.
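For reference, a minimal sketch of what an OpenAI-compatible model entry pointing at a local Ollama server typically looks like. The model name and API-key value here are assumptions for illustration; the `base_url` is Ollama's standard OpenAI-compatible endpoint. If any agent (including the group-chat manager or a default fallback) lacks an entry like this, an OpenAI-style client may fall back to the real OpenAI API and trigger exactly this 429.

```python
# Hypothetical model config entry pointing an OpenAI-compatible client
# at a local Ollama server instead of api.openai.com.
ollama_model_config = {
    "model": "llama3",  # assumed local model name; use whatever `ollama list` shows
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "api_key": "ollama",  # placeholder value; Ollama does not validate the key
}

# Sanity check: no entry should point at the default OpenAI endpoint.
assert "openai.com" not in ollama_model_config["base_url"]
```

Checking every agent in the workflow (not just the ones you edited) for a config of this shape is a reasonable first debugging step.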