Shai Dvash edited this page Dec 23, 2024 · 1 revision

## Introduction

We tested our providers with the following models; use this list for a quick start. Note that the same model may be named differently by different providers: for instance, `gpt-35-turbo` (Azure) and `gpt-3.5-turbo` (OpenAI) refer to the same model.

| Provider | Model names |
|----------|-------------|
| OPENAI | `gpt-3.5-turbo`, `gpt-4`, `gpt-4-turbo`, `gpt-4o`, `o1-mini`, `o1-preview` |
| CLAUDE-ANTHROPIC | `claude-2.1`, `claude-3-haiku-20240307`, `claude-3-opus-20240229`, `claude-3-5-sonnet-20240620` |
| GEMINI | `gemini-pro`, `gemini-1.5-pro` |
| CLAUDE-AWS | `anthropic.claude-v2:1`, `anthropic.claude-3-haiku-20240307-v1:0`, `anthropic.claude-3-opus-20240229-v1:0`, `anthropic.claude-3-5-sonnet-20240620-v1:0` |
| AZURE | `gpt-35-turbo`, `gpt-4`, `gpt-4o` |
| OLLAMA | `llama2`, `llama2:70b`, `llama2-uncensored`, `llama3`, `dolphin-llama3`, `llama3.1`, `llama3.2`, `vicuna`, `mistral`, `mixtral`, `gemma`, `gemma2`, `zephyr`, `phi`, `phi3`, `qwen` |
| LOCAL | `<absolute path>` |
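The table above can be sketched as a simple lookup, which also makes the provider-specific naming explicit. This mapping is illustrative only (a subset of the table); how the project actually stores its model lists is an assumption, not its real API.

```python
# Illustrative mapping of tested models per provider, copied from the table
# above (subset shown). The data structure itself is an assumption.
TESTED_MODELS = {
    "OPENAI": ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-4o",
               "o1-mini", "o1-preview"],
    "CLAUDE-ANTHROPIC": ["claude-2.1", "claude-3-haiku-20240307",
                         "claude-3-opus-20240229", "claude-3-5-sonnet-20240620"],
    "GEMINI": ["gemini-pro", "gemini-1.5-pro"],
    "AZURE": ["gpt-35-turbo", "gpt-4", "gpt-4o"],
}

def is_tested(provider: str, model: str) -> bool:
    """Return True if the model appears in the provider's tested list."""
    return model in TESTED_MODELS.get(provider, [])

# The same underlying model is named differently per provider:
print(is_tested("OPENAI", "gpt-3.5-turbo"))  # True
print(is_tested("AZURE", "gpt-3.5-turbo"))   # False: Azure calls it gpt-35-turbo
```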

Please note that the models listed above are the ones we tested for each implementation. If you try to use a model that is not listed, you will receive an error stating that the provider does not support it. However, you can add the model to the implementation's list of supported models, and it will then work as expected.
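The behavior described above can be sketched roughly as follows. The class, attribute, and error names here are assumptions for illustration, not the project's actual API: an unlisted model raises an error, and appending it to the supported list lets it pass validation.

```python
# Hypothetical sketch of provider-side model validation; names are
# illustrative assumptions, not the project's real classes.
class UnsupportedModelError(ValueError):
    """Raised when a provider is asked for a model it does not list."""

class Provider:
    # The implementation's list of supported models (illustrative subset).
    SUPPORTED_MODELS = ["gpt-3.5-turbo", "gpt-4"]

    def __init__(self, model: str) -> None:
        if model not in self.SUPPORTED_MODELS:
            raise UnsupportedModelError(
                f"provider does not support model {model!r}"
            )
        self.model = model

# An unlisted model fails validation...
try:
    Provider("gpt-4o")
except UnsupportedModelError as err:
    print(err)

# ...but adding it to the supported list makes it work as expected.
Provider.SUPPORTED_MODELS.append("gpt-4o")
provider = Provider("gpt-4o")
print(provider.model)  # gpt-4o
```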
