agent = Agent(model=anthropic_model)
```

### LiteLLM

LiteLLM is a unified interface for various LLM providers that allows you to interact with models from OpenAI and many others.

First install the `litellm` python client:

```bash
pip install strands-agents[litellm]
```

Next, import and initialize the `LiteLLMModel` provider:

```python
from strands import Agent
from strands.models.litellm import LiteLLMModel

litellm_model = LiteLLMModel(
    client_args={
        "api_key": "<KEY>",
    },
    model_id="gpt-4o",
)

agent = Agent(model=litellm_model)
```

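With LiteLLM, the provider is selected by a prefix on `model_id` (an unprefixed id such as `gpt-4o` goes to OpenAI). A small sketch of that naming convention; the `provider_of` helper is only for illustration and is not part of the LiteLLM API:

```python
# LiteLLM-style model ids: an optional "provider/" prefix selects the backend.
# These ids are illustrative; each backend needs its own credentials.
model_ids = [
    "gpt-4o",                                # no prefix: OpenAI
    "anthropic/claude-3-5-sonnet-20240620",  # Anthropic
    "ollama/llama3",                         # local Ollama server
]

def provider_of(model_id: str) -> str:
    """Illustrative helper: read the provider prefix, defaulting to OpenAI."""
    return model_id.split("/", 1)[0] if "/" in model_id else "openai"

for mid in model_ids:
    print(f"{mid} -> {provider_of(mid)}")
```

Passing any of these ids as `model_id` to `LiteLLMModel` routes the request to the corresponding provider, without changing the rest of the agent code.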
### Llama API

Llama API is a Meta-hosted API service that helps you integrate Llama models into your applications quickly and efficiently.

First install the Llama API python client:

```bash
pip install strands-agents[llamaapi]
```

Next, import and initialize the `LlamaAPIModel` provider:

```python
from strands import Agent
from strands.models.llamaapi import LlamaAPIModel

model = LlamaAPIModel(
    client_args={
        "api_key": "<KEY>",
    },
    # **model_config
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)

agent = Agent(model=model)
```

### Ollama (Local Models)

First install the `ollama` python client:

```bash
pip install strands-agents[ollama]
```

Next, import and initialize the `OllamaModel` provider:

```python
from strands import Agent
from strands.models.ollama import OllamaModel

ollama_model = OllamaModel(
    host="http://localhost:11434",  # Ollama server address
    model_id="llama3",  # Specify which model to use
    temperature=0.3,
)

agent = Agent(model=ollama_model)
```
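The example above assumes an Ollama server is already listening on port 11434 with the requested model available locally. Assuming the standard Ollama CLI, a typical one-time setup looks like:

```shell
# Download the model referenced by model_id above
ollama pull llama3

# Start the Ollama server on the default port (11434);
# skip this if the Ollama desktop app is already running
ollama serve
```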

### Custom Model Providers