
Commit 749a574
committed

Revise and expand documentation for replicate.use()

Replaces the experimental section on `replicate.use()` with a more detailed and updated guide, including usage examples for image and language models, streaming, chaining, async support, accessing URLs, and prediction creation. Removes references to the API being experimental and updates model examples to use Anthropic Claude instead of Meta Llama.

1 parent 2c68288 commit 749a574

File tree

1 file changed (+127, -131 lines)

README.md

Lines changed: 127 additions & 131 deletions
@@ -62,6 +62,133 @@ client = Replicate(
)
```

## Using `replicate.use()`

The `use()` method provides a more concise way to call Replicate models as functions, offering a more pythonic approach to running models:

```python
import replicate

# Create a model function
flux_dev = replicate.use("black-forest-labs/flux-dev")

# Call it like a regular Python function
outputs = flux_dev(
    prompt="a cat wearing a wizard hat, digital art",
    num_outputs=1,
    aspect_ratio="1:1",
    output_format="webp",
)

# outputs is a list of URLPath objects that auto-download when accessed
for output in outputs:
    print(output)  # e.g., Path(/tmp/a1b2c3/output.webp)
```
### Language models with streaming

Many models, particularly language models, support streaming output. Use the `streaming=True` parameter to get results as they're generated:

```python
import replicate

# Create a streaming language model function
claude = replicate.use("anthropic/claude-4.5-sonnet", streaming=True)

# Stream the output
output = claude(prompt="Write a haiku about Python programming", max_tokens=50)

for chunk in output:
    print(chunk, end="", flush=True)
```
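Conceptually, the streaming output is consumed like a plain Python iterator of text chunks. This self-contained sketch uses a stub generator (`fake_model` is hypothetical, not the real API call) to show the consumption pattern without a network request:

```python
# Stub standing in for a streaming model function created with
# replicate.use(..., streaming=True); a real model yields text
# chunks over the network as they are generated.
def fake_model(prompt: str):
    for token in ["Code ", "flows ", "like ", "water"]:
        yield token

output = fake_model("Write a haiku about Python programming")

# Consume the stream exactly as in the example above.
text = "".join(chunk for chunk in output)
print(text)  # Code flows like water
```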
### Chaining models

You can easily chain models together by passing the output of one model as input to another:

```python
import replicate

# Create two model functions
flux_dev = replicate.use("black-forest-labs/flux-dev")
claude = replicate.use("anthropic/claude-4.5-sonnet")

# Generate an image
images = flux_dev(prompt="a mysterious ancient artifact")

# Describe the image
description = claude(
    prompt="Describe this image in detail",
    image=images[0],  # Pass the first image directly
)

print(description)
```
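Chaining is just ordinary Python data flow: the value one function returns is passed as an argument to the next. This minimal sketch makes that explicit with stub functions (both stubs are hypothetical stand-ins, not the real model calls):

```python
# Stubs standing in for replicate.use(...) model functions.
def fake_image_model(prompt):
    # A real image model returns a list of URLPath objects.
    return [f"/tmp/fake/{prompt.replace(' ', '-')}.webp"]

def fake_language_model(prompt, image=None):
    return f"{prompt}: saw {image}"

# Output of one model feeds directly into the input of the next.
images = fake_image_model("a mysterious ancient artifact")
description = fake_language_model("Describe this image in detail", image=images[0])
print(description)
```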
### Async support

For async/await patterns, use the `use_async=True` parameter:

```python
import asyncio

import replicate


async def main():
    # Create an async model function
    flux_dev = replicate.use("black-forest-labs/flux-dev", use_async=True)

    # Await the result
    outputs = await flux_dev(prompt="futuristic city at sunset")

    for output in outputs:
        print(output)


asyncio.run(main())
```
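The async shape of this API is the standard coroutine pattern: the model function returns an awaitable, and `asyncio.run()` drives it. As a rough sketch with a stub coroutine in place of the real model function (no network involved):

```python
import asyncio

# Stub coroutine standing in for an async model function created with
# use_async=True (hypothetical; a real call would await the network here).
async def fake_model(prompt):
    await asyncio.sleep(0)
    return [f"output-for-{prompt}"]

async def main():
    # Await the result, just as with the real async model function.
    return await fake_model("futuristic city at sunset")

results = asyncio.run(main())
print(results)
```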
### Accessing URLs without downloading

If you need the URL without downloading the file, use the `get_path_url()` helper:

```python
import replicate
from replicate.lib._predictions_use import get_path_url

flux_dev = replicate.use("black-forest-labs/flux-dev")
outputs = flux_dev(prompt="a serene landscape")

for output in outputs:
    url = get_path_url(output)
    print(f"URL: {url}")  # https://replicate.delivery/...
```
### Creating predictions without waiting

To create a prediction without waiting for it to complete, use the `create()` method:

```python
import replicate

claude = replicate.use("anthropic/claude-4.5-sonnet")

# Start the prediction
run = claude.create(prompt="Explain quantum computing")

# Check logs while it's running
print(run.logs())

# Get the output when ready
result = run.output()
print(result)
```
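The key idea above is that `create()` returns a handle immediately rather than blocking on the result. This sketch mimics that shape with a hypothetical stub class (`FakeRun` is illustrative only, not the real run object):

```python
# Hypothetical stub mimicking the handle returned by .create():
# logs() can be inspected while work is in flight, output() yields
# the final result once it is ready.
class FakeRun:
    def logs(self):
        return "starting prediction..."

    def output(self):
        return "Quantum computing uses qubits."

run = FakeRun()
print(run.logs())      # inspect progress without blocking on the result
result = run.output()  # retrieve the output when ready
print(result)
```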
### Current limitations

- The `use()` method must be called at the module level (not inside functions or classes)
- Type hints are limited compared to the standard client interface
## Run a model

You can run a model synchronously using `replicate.run()`:
@@ -535,137 +662,6 @@ with Replicate() as replicate:
# HTTP client is now closed
```

## Experimental: Using `replicate.use()`

> [!WARNING]
> The `replicate.use()` interface is experimental and subject to change. We welcome your feedback on this new API design.

The `use()` method provides a more concise way to call Replicate models as functions. This experimental interface offers a more pythonic approach to running models:

```python
import replicate

# Create a model function
flux_dev = replicate.use("black-forest-labs/flux-dev")

# Call it like a regular Python function
outputs = flux_dev(
    prompt="a cat wearing a wizard hat, digital art",
    num_outputs=1,
    aspect_ratio="1:1",
    output_format="webp",
)

# outputs is a list of URLPath objects that auto-download when accessed
for output in outputs:
    print(output)  # e.g., Path(/tmp/a1b2c3/output.webp)
```

### Language models with streaming

Many models, particularly language models, support streaming output. Use the `streaming=True` parameter to get results as they're generated:

```python
import replicate

# Create a streaming language model function
llama = replicate.use("meta/meta-llama-3-8b-instruct", streaming=True)

# Stream the output
output = llama(prompt="Write a haiku about Python programming", max_tokens=50)

for chunk in output:
    print(chunk, end="", flush=True)
```

### Chaining models

You can easily chain models together by passing the output of one model as input to another:

```python
import replicate

# Create two model functions
flux_dev = replicate.use("black-forest-labs/flux-dev")
llama = replicate.use("meta/meta-llama-3-8b-instruct")

# Generate an image
images = flux_dev(prompt="a mysterious ancient artifact")

# Describe the image
description = llama(
    prompt="Describe this image in detail",
    image=images[0],  # Pass the first image directly
)

print(description)
```

### Async support

For async/await patterns, use the `use_async=True` parameter:

```python
import asyncio

import replicate


async def main():
    # Create an async model function
    flux_dev = replicate.use("black-forest-labs/flux-dev", use_async=True)

    # Await the result
    outputs = await flux_dev(prompt="futuristic city at sunset")

    for output in outputs:
        print(output)


asyncio.run(main())
```

### Accessing URLs without downloading

If you need the URL without downloading the file, use the `get_path_url()` helper:

```python
import replicate
from replicate.lib._predictions_use import get_path_url

flux_dev = replicate.use("black-forest-labs/flux-dev")
outputs = flux_dev(prompt="a serene landscape")

for output in outputs:
    url = get_path_url(output)
    print(f"URL: {url}")  # https://replicate.delivery/...
```

### Creating predictions without waiting

To create a prediction without waiting for it to complete, use the `create()` method:

```python
import replicate

llama = replicate.use("meta/meta-llama-3-8b-instruct")

# Start the prediction
run = llama.create(prompt="Explain quantum computing")

# Check logs while it's running
print(run.logs())

# Get the output when ready
result = run.output()
print(result)
```

### Current limitations

- The `use()` method must be called at the module level (not inside functions or classes)
- Type hints are limited compared to the standard client interface
- This is an experimental API and may change in future releases
## Versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
