Best-in-class LLMs are able to output JSON following a schema you provide, usually JSON Schema. This significantly expands the ways you can leverage LLMs in your application!

Think of the input as:
- A context: anything that is or can be converted to text, like emails/PDFs/HTML/XLSX
- A schema: "Here is the form you need to fill in to complete your task"
- An optional prompt, giving a specific task, rules, etc.

The output/outcome is whichever structure best matches your use case and domain.

The Python instructor cookbook has interesting examples.
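For instance, a minimal illustration (the schema and the values in the output are made up):

```php
// A JSON-Schema "form" for invoice extraction, as a plain PHP array.
$schema = [
    'type' => 'object',
    'properties' => [
        'invoice_number' => ['type' => 'string'],
        'total' => ['type' => 'number'],
        'due_date' => ['type' => 'string', 'format' => 'date'],
    ],
    'required' => ['invoice_number', 'total'],
];

// Given an email body as context, the LLM fills the form with JSON like:
// {"invoice_number": "F2024-042", "total": 1250.5, "due_date": "2024-07-31"}
```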
Instructrice is a PHP library that simplifies working with structured output from LLMs in a type-safe manner.
Features:
- Flexible schema options:
    - Classes using api-platform/json-schema
    - Dynamically generated types with PSL\Type
    - Or a JSON-Schema array generated by a third-party library, or written in plain PHP
- symfony/serializer integration to deserialize LLM outputs
- Streaming first:
    - As a developer you can be more productive with faster feedback loops than waiting for outputs to complete. This also makes slower, local models more usable.
    - You can provide a much better and snappier UX to your users (see the streaming sketch after the first example below).
    - The headaches of parsing incomplete JSON are handled for you.
- A set of pre-configured LLMs with the best available settings. Set your API keys and switch between different providers and models without having to think about the model name, JSON mode, function calling, etc.
A Symfony Bundle is also available.
```console
$ composer require adrienbrault/instructrice:@dev
```
```php
use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\Provider\Ollama;
use AdrienBrault\Instructrice\LLM\Provider\OpenAi;
use AdrienBrault\Instructrice\LLM\Provider\Anthropic;

$instructrice = InstructriceFactory::create(
    defaultLlm: Ollama::HERMES2THETA_LLAMA3_8B,
    apiKeys: [ // Unless you inject keys here, API keys will be fetched from environment variables
        OpenAi::class => $openAiApiKey,
        Anthropic::class => $anthropicApiKey,
    ],
);
```
```php
use AdrienBrault\Instructrice\Attribute\Prompt;

class Character
{
    // The Prompt attribute lets you add instructions specific to a property
    #[Prompt('Just the first name.')]
    public string $name;
    public ?string $rank = null;
}

$characters = $instructrice->getList(
    Character::class,
    'Colonel Jack O\'Neil walks into a bar and meets Major Samanta Carter. They call Teal\'c to join them.',
);

/*
dump($characters);

array:3 [
  0 => Character^ {
    +name: "Jack"
    +rank: "Colonel"
  }
  1 => Character^ {
    +name: "Samanta"
    +rank: "Major"
  }
  2 => Character^ {
    +name: "Teal'c"
    +rank: null
  }
]
*/
```
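Because the library is streaming first, you can consume partial results while the completion is still running. A rough sketch, assuming `getList()` accepts a callback for partial results (the `onChunk` name and signature here are assumptions, check the library's actual API):

```php
$characters = $instructrice->getList(
    Character::class,
    $context,
    // Assumed parameter: a callable invoked with the list deserialized
    // from the incomplete JSON streamed so far.
    onChunk: function (array $characters) {
        printf("%d character(s) parsed so far\n", count($characters));
    },
);
```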
```php
$character = $instructrice->get(
    type: Character::class,
    context: 'Colonel Jack O\'Neil.',
);

/*
dump($character);

Character^ {
  +name: "Jack"
  +rank: "Colonel"
}
*/
```
```php
$label = $instructrice->get(
    type: [
        'type' => 'string',
        'enum' => ['positive', 'neutral', 'negative'],
    ],
    context: 'Amazing great cool nice',
    prompt: 'Sentiment analysis',
);

/*
dump($label);

"positive"
*/
```
You can also use third-party JSON-Schema libraries like goldspecdigital/oooas to generate the schema.
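For example, a sketch along these lines (assumes converting the oooas `Schema` to a plain array with `toJson()` + `json_decode()`, since instructrice accepts JSON-Schema arrays):

```php
use GoldSpecDigital\ObjectOrientedOAS\Objects\Schema;

// Build the schema with oooas' fluent API, then hand
// instructrice the plain array form.
$schema = Schema::object()
    ->properties(
        Schema::string('name'),
        Schema::string('rank'),
    );

$characters = $instructrice->getList(
    json_decode($schema->toJson(), true),
    'Colonel Jack O\'Neil walks into a bar and meets Major Samanta Carter.',
);
```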
| Provider | Environment Variables | Enum | API Key Creation URL |
|---|---|---|---|
| Ollama | `OLLAMA_HOST` | Ollama | |
| OpenAI | `OPENAI_API_KEY` | OpenAi | API Key Management |
| Anthropic | `ANTHROPIC_API_KEY` | Anthropic | API Key Management |
| Mistral | `MISTRAL_API_KEY` | Mistral | API Key Management |
| Fireworks AI | `FIREWORKS_API_KEY` | Fireworks | API Key Management |
| Groq | `GROQ_API_KEY` | Groq | API Key Management |
| Together AI | `TOGETHER_API_KEY` | Together | API Key Management |
| Deepinfra | `DEEPINFRA_API_KEY` | Deepinfra | API Key Management |
| Perplexity | `PERPLEXITY_API_KEY` | Perplexity | API Key Management |
| Anyscale | `ANYSCALE_API_KEY` | Anyscale | API Key Management |
| OctoAI | `OCTOAI_API_KEY` | OctoAI | API Key Management |
The supported providers are enums, which you can pass to the `llm` argument of `$instructrice->get()`, or to the `defaultLlm` argument of `InstructriceFactory::create()`:
```php
use AdrienBrault\Instructrice\InstructriceFactory;
use AdrienBrault\Instructrice\LLM\Provider\OpenAi;

$instructrice->get(
    ...,
    llm: OpenAi::GPT_4T, // API key will be fetched from the OPENAI_API_KEY environment variable
);
```
| Strategy | 📄 Text | 🧩 JSON | 🔧 Function |
|---|---|---|---|

| Commercial usage 💼 | ✅ Yes | ❌ Nope |
|---|---|---|
| Model | 💼 | ctx | Ollama | Mistral | Fireworks | Groq | Together | DeepInfra | Perplexity | Anyscale | OctoAI |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mistral 7B | ✅ | 32k | 🧩 | 🧩 68/s | 🔧 98/s | | | | 📄 88/s `ctx=16k` | 🧩 | 🧩 |
| Mixtral 8x7B | ✅ | 32k | 🧩 | 🧩 44/s | 🧩 237/s | 📄 560/s | 🔧 99/s | | 📄 119/s `ctx=16k` | 🧩 | 🧩 |
| Mixtral 8x22B | ✅ | 65k | 🧩 | 🧩 77/s | 🧩 77/s | | 📄 52/s | 🧩 40/s | 📄 62/s `ctx=16k` | 🧩 | 🧩 |
| Phi-3-Mini-4K | ✅ | 4k | 🧩 | | | | | | | | |
| Phi-3-Mini-128K | ✅ | 128k | 🧩 | | | | | | | | |
| Phi-3-Medium-4K | ✅ | 4k | 🧩 | | | | | | | | |
| Phi-3-Medium-128K | ✅ | 128k | 🧩 | | | | | | | | |
| Qwen2 0.5B | ✅ | 32k | 🧩 | | | | | | | | |
| Qwen2 1.5B | ✅ | 32k | 🧩 | | | | | | | | |
| Qwen2 7B | ✅ | 128k | 🧩 | | | | | | | | |
| Llama3 8B | | 8k | 📄 | | 🧩 280/s | 📄 800/s | 🔧 194/s | 🧩 133/s | 📄 121/s | 🧩 | 🧩 |
| Llama3 70B | | 8k | 🧩 | | 🧩 116/s | 📄 270/s | 🔧 105/s | 🧩 26/s | 📄 42/s | 🧩 | 🧩 |
| Llama 3.1 8B | | 128k | 🧩 | | 🧩 | 📄 | 🔧 | 🧩 | 📄 | | |
| Llama 3.1 70B | | 128k | 🧩 | | 🧩 | 📄 | 🔧 | 🧩 | 📄 | | |
| Llama 3.1 405B | | 128k | 🧩 | | 🧩 | 📄 | 🔧 | 🔧 | | | |
| Gemma 7B | | 8k | | | | 📄 800/s | 🔧 118/s | 🧩 64/s | | 🧩 | |
| DBRX | | 32k | | | 🧩 50/s | | 🔧 72/s | 🧩 | | | |
| Qwen2 72B | | 128k | 🧩 | | | | | | | | |
| Qwen1.5 32B | | 32k | 📄 | | | | 🧩 | | | | |
| Command R | ❌ | 128k | 📄 | | | | | | | | |
| Command R+ | ❌ | 128k | 📄 | | | | | | | | |

Throughputs from https://artificialanalysis.ai/leaderboards/providers.
| Model | 💼 | ctx | Base | Ollama | Fireworks | Together | DeepInfra | OctoAI |
|---|---|---|---|---|---|---|---|---|
| Hermes 2 Pro Mistral 7B | ✅ | | Mistral 7B | 🧩 | 🧩 | 🧩 | | |
| FireFunction V1 | ✅ | | Mixtral 8x7B | | 🔧 | | | |
| WizardLM 2 7B | ✅ | | Mistral 7B | 🧩 | | | | |
| WizardLM 2 8x22B | ✅ | | Mixtral 8x22B | 📄 | 🧩 | 🧩 | | |
| Capybara 34B | ✅ | 200k | Yi 34B | 🧩 | | | | |
| Hermes 2 Pro Llama3 8B | | | Llama3 8B | 🔧 | | | | |
| Hermes 2 Theta Llama3 8B | | | Llama3 8B | 🔧 | | | | |
| Dolphin 2.9 | | 8k | Llama3 8B | 🧩 | 🔧 | 🧩 | | |
| Provider | Model | ctx | Strategy / throughput |
|---|---|---|---|
| Mistral | Large | 32k | 🔧 26/s |
| OpenAI | GPT-4o | 128k | 🔧 83/s |
| OpenAI | GPT-4o mini | 128k | 🔧 140/s |
| OpenAI | GPT-4 Turbo | 128k | 🔧 28/s |
| OpenAI | GPT-3.5 Turbo | 16k | 🔧 72/s |
| Anthropic | Claude 3 Haiku | 200k | 📄 88/s |
| Anthropic | Claude 3 Sonnet | 200k | 📄 59/s |
| Anthropic | Claude 3 Opus | 200k | 📄 26/s |
| Google | Gemini 1.5 Flash | 1000k | 🧩 136/s |
| Google | Gemini 1.5 Pro | 1000k | 🧩 57/s |
| Perplexity | Sonar Small Chat | 16k | 📄 |
| Perplexity | Sonar Small Online | 12k | 📄 |
| Perplexity | Sonar Medium Chat | 16k | 📄 |
| Perplexity | Sonar Medium Online | 12k | 📄 |

Throughputs from https://artificialanalysis.ai/leaderboards/providers.
Automate updating these tables by scraping https://artificialanalysis.ai, along with Chatbot Arena Elo scores? Would be a good use case/showcase of this library/CLI?
If you want to use an Ollama model that is not available in the enum, you can use the `Ollama::create` static method:

```php
use AdrienBrault\Instructrice\LLM\Provider\Ollama;

$instructrice->get(
    ...,
    llm: Ollama::create(
        'codestral:22b-v0.1-q5_K_M', // check its license first!
        32000,
    ),
);
```
You can also use any OpenAI-compatible API by passing an `LLMConfig`:

```php
use AdrienBrault\Instructrice\LLM\LLMConfig;
use AdrienBrault\Instructrice\LLM\Cost;
use AdrienBrault\Instructrice\LLM\OpenAiJsonStrategy;

$instructrice->get(
    ...,
    llm: new LLMConfig(
        uri: 'https://api.together.xyz/v1/chat/completions',
        model: 'meta-llama/Llama-3-70b-chat-hf',
        contextWindow: 8000,
        label: 'Llama 3 70B',
        provider: 'Together',
        cost: Cost::create(0.9),
        strategy: OpenAiJsonStrategy::JSON,
        headers: [
            'Authorization' => 'Bearer ' . $apiKey,
        ]
    ),
);
```
You may configure the LLM using a DSN:
- the scheme is the provider: `openai`, `openai-http`, `anthropic`, `google`
- the password is the API key
- the host, port and path are the API endpoint without the scheme
- the query string:
    - `model` is the model name
    - `context` is the context window
    - `strategy` is the strategy to use:
        - `json` for JSON mode with the schema in the prompt only
        - `json_with_schema` for JSON mode with the completion most likely perfectly constrained to the schema
        - `tool_any`
        - `tool_auto`
        - `tool_function`
Examples:
```php
use AdrienBrault\Instructrice\InstructriceFactory;

$instructrice = InstructriceFactory::create(
    defaultLlm: 'openai://:api-key@api.openai.com/v1/chat/completions?model=gpt-3.5-turbo&strategy=tool_auto&context=16000'
);

$instructrice->get(
    ...,
    llm: 'openai-http://localhost:11434?model=adrienbrault/nous-hermes2theta-llama3-8b&strategy=json&context=8000'
);

$instructrice->get(
    ...,
    llm: 'openai://:api-key@api.fireworks.ai/inference/v1/chat/completions?model=accounts/fireworks/models/llama-v3-70b-instruct&context=8000&strategy=json_with_schema'
);

$instructrice->get(
    ...,
    llm: 'google://:api-key@generativelanguage.googleapis.com/v1beta/models?model=gemini-1.5-flash&context=1000000'
);

$instructrice->get(
    ...,
    llm: 'anthropic://:api-key@api.anthropic.com?model=claude-3-haiku-20240307&context=200000'
);
```
You may also implement `LLMInterface`.
Obviously inspired by instructor-php and instructor.
How is it different from instructor-php?
Both libraries essentially do the same thing:
- Automatic schema generation from classes
- Multiple LLM/Providers abstraction/support
- Many strategies to extract data: function calling, json mode, etc
- Automatic deserialization/hydration
- Maybe validation/retries later for this lib.
However, Instructrice differs with:
- Streaming first.
- Preconfigured providers+LLMs, so you do not have to worry about:
    - JSON mode, function calling, etc.
    - The best prompt format to use
    - Your options for local models
    - Whether streaming works. For example, Groq can only do streaming without JSON mode/function calling.
- PSR-3 logging
- Guzzle and symfony/http-client support
- No messages. You just pass a context and a prompt.
    - I am hoping that this choice enables cool things later, like supporting few-shot examples, evals, etc.
- More flexible schema options
- Higher-level abstraction. You aren't able to provide a list of messages, while it is possible with instructor-php.
Things to look into:
- Unstructured
- Llama Parse
- EMLs
- jina-ai/reader: this is awesome, `$client->request('GET', 'https://r.jina.ai/' . $url)` (see the sketch after this list)
- firecrawl
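For example, a small sketch combining jina-ai/reader with instructrice (assumes symfony/http-client; `Company` is a hypothetical class of yours):

```php
use Symfony\Component\HttpClient\HttpClient;

// r.jina.ai returns the page as LLM-friendly markdown/text,
// which makes a great extraction context.
$markdown = HttpClient::create()
    ->request('GET', 'https://r.jina.ai/' . $url)
    ->getContent();

$companies = $instructrice->getList(Company::class, $markdown);
```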
DSPy is very interesting. There are great ideas to be inspired by.
Ideally this library is good to prototype with, but can also support more advanced extraction workflows with few-shot examples, some sort of eval system, generating samples/output like DSPy, etc.
Would be cool to have a CLI that accepts an FQCN and a context:

```console
instructrice get "App\Entity\Customer" "$(cat some_email_body.md)"
```
Autosave all inputs/schemas/outputs in an SQLite DB, like the llm CLI does? Leverage that to test examples, add few-shots, evals?