
OpenAI GPT-4o mini

An affordable, efficient AI solution for diverse text and image tasks.
Context: 131k input · 4k output
Training date: Oct 2023

OpenAI

GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
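
For illustration, here is a minimal sketch of fanning out several GPT-4o mini calls in parallel, one of the patterns described above. It assumes the OpenAI Python SDK, the model name "gpt-4o-mini", and an API key in the environment; the summarization prompt and documents are placeholders.

```python
# A minimal sketch of parallelizing several GPT-4o mini calls with the OpenAI
# Python SDK. The model name "gpt-4o-mini" and the prompts are assumptions;
# substitute the values from your own deployment.
import asyncio
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def summarize(text: str) -> str:
    # Each call is cheap and low-latency, so fanning out many of them is practical.
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

async def main() -> None:
    documents = ["First document ...", "Second document ...", "Third document ..."]
    summaries = await asyncio.gather(*(summarize(doc) for doc in documents))
    for summary in summaries:
        print(summary)

asyncio.run(main())
```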

Today, GPT-4o mini supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future. The model has a context window of 128K tokens and knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective.
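
As a sketch of the text-plus-vision usage mentioned above, the request below sends one text part and one image part in a single chat message. It assumes the OpenAI Python SDK; the image URL and prompt are placeholders.

```python
# A minimal sketch of a text-plus-image request to GPT-4o mini. The image URL
# is a hypothetical placeholder; any publicly reachable image URL works.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```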

GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong function-calling performance, enabling developers to build applications that fetch data or take actions with external systems, and shows improved long-context performance compared with GPT-3.5 Turbo.
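
A minimal function-calling sketch follows, assuming the OpenAI Python SDK's "tools" interface. The get_weather function and its schema are hypothetical examples, not part of the model or the platform.

```python
# A minimal sketch of function calling with GPT-4o mini. The get_weather tool
# and its parameter schema are hypothetical; define tools that match your app.
import json
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as a JSON string.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, json.loads(tool_call.function.arguments))
```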


Languages (27)
English, Italian, Afrikaans, Spanish, German, French, Indonesian, Russian, Polish, Ukrainian, Greek, Latvian
