A command-line tool to import recipes into Cooklang format using AI-powered conversion.
- Multi-provider AI support: OpenAI, Anthropic Claude, Azure OpenAI, Google Gemini, and Ollama (local Llama models)
- Automatic fallback: Seamlessly switch between providers on failure
- Flexible configuration: TOML-based config with environment variable overrides
- Smart extraction: Supports JSON-LD, HTML class-based, and plain-text extraction
- Metadata preservation: Automatically extracts and includes recipe metadata
- Local AI support: Run completely offline with Ollama
- Rust
- An AI provider (choose one or more):
- Free & Local: Ollama for running Llama models on your machine
- Cloud Options: API keys for OpenAI, Anthropic Claude, Azure OpenAI, or Google Gemini
git clone https://github.com/cooklang/cooklang-import
cd cooklang-import
cargo install --path .
Set your API key as an environment variable:
export OPENAI_API_KEY="your-api-key-here"
The tool works immediately with OpenAI's GPT-4.1-mini model (October 2025).
For multi-provider support and advanced features, create a config.toml file:
cp config.toml.example config.toml
Edit config.toml to configure your preferred providers:
# Default provider to use
default_provider = "openai"
# OpenAI Configuration
[providers.openai]
enabled = true
model = "gpt-4.1-mini" # Fast and cost-effective. Use "gpt-4.1-nano" for lowest latency
temperature = 0.7
max_tokens = 2000
# API key loaded from OPENAI_API_KEY environment variable
# or set here: api_key = "sk-..."
# Anthropic Claude Configuration
[providers.anthropic]
enabled = true
model = "claude-sonnet-4.5" # Use "claude-haiku-4.5" for faster/cheaper option
temperature = 0.7
max_tokens = 4000
# API key loaded from ANTHROPIC_API_KEY environment variable
# Provider Fallback Configuration
[fallback]
enabled = true
order = ["openai", "anthropic"]
retry_attempts = 3
retry_delay_ms = 1000
Configuration is loaded with the following priority (highest to lowest):
- Environment variables (e.g., OPENAI_API_KEY, COOKLANG__PROVIDERS__OPENAI__MODEL)
- config.toml file in the current directory
- Default values
For nested configuration, use double underscores:
export COOKLANG__PROVIDERS__OPENAI__MODEL="gpt-4.1-mini"
export COOKLANG__FALLBACK__ENABLED=true
Fetch a recipe from a URL and convert it to Cooklang format:
cooklang-import https://www.bbcgoodfood.com/recipes/next-level-tikka-masala
Download and extract recipe data without AI conversion:
cooklang-import https://www.bbcgoodfood.com/recipes/next-level-tikka-masala --extract-only
This outputs the raw ingredients and instructions in markdown format, without Cooklang markup.
Convert structured markdown recipes to Cooklang format (when you have pre-separated ingredients and instructions):
cooklang-import --markdown \
--ingredients "2 eggs\n1 cup flour\n1/2 cup milk" \
--instructions "Mix dry ingredients. Add eggs and milk. Bake at 350°F for 30 minutes."Convert plain text recipes to Cooklang format (LLM will parse and structure the recipe):
cooklang-import --text "Take 2 eggs and 1 cup of flour. Mix them together and bake at 350°F for 30 minutes."
This is useful for unstructured recipe text where ingredients and instructions are not clearly separated.
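For reference, Cooklang marks up ingredients as @name{quantity%unit} and timers as ~{duration%unit}, so the output for the example above would look roughly like this (illustrative only; the exact markup the model produces will vary):
Take @eggs{2} and @flour{1%cup}. Mix them together and bake at 350°F for ~{30%minutes}.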
Use a different LLM provider (requires config.toml):
cooklang-import https://example.com/recipe --provider anthropic
cooklang-import --markdown --ingredients "..." --instructions "..." --provider ollama
Available providers: openai, anthropic, google, azure_openai, ollama
Set a custom timeout for HTTP requests:
cooklang-import https://example.com/recipe --timeout 60
Combine multiple options:
cooklang-import https://example.com/recipe --provider anthropic --timeout 90
For complete usage information:
cooklang-import --help
OpenAI
- Models: gpt-4.1-mini (default, Oct 2025), gpt-4.1-nano (fastest), gpt-4o-mini, gpt-4o
- Environment Variable: OPENAI_API_KEY
- Configuration: See config.toml.example
Anthropic Claude
- Models: claude-sonnet-4.5 (Sep 2025), claude-haiku-4.5 (fastest, Oct 2025), claude-opus-4.1
- Environment Variable: ANTHROPIC_API_KEY
- Configuration: See config.toml.example
Azure OpenAI
- Models: Your deployed models (e.g., gpt-4, gpt-35-turbo)
- Environment Variable: AZURE_OPENAI_API_KEY
- Required Config: endpoint, deployment_name, api_version
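A sketch of what the Azure block in config.toml might look like, following the pattern of the other providers (endpoint, deployment, and API-version values are placeholders you must replace with your own):
[providers.azure_openai]
enabled = true
endpoint = "https://your-resource.openai.azure.com"  # required
deployment_name = "gpt-4"                            # required: your deployment name
api_version = "2024-02-01"                           # required: check your Azure resource
# API key loaded from AZURE_OPENAI_API_KEY environment variable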
Google Gemini
- Models: gemini-2.5-flash (latest, Sep 2025), gemini-2.0-flash-lite
- Environment Variable: GOOGLE_API_KEY
- Configuration: See config.toml.example
Ollama (Local)
- Models: llama3, llama2, codellama, mixtral, and more
- Requirements: Ollama installed locally
- No API Key Required: Runs entirely on your machine
- Base URL: http://localhost:11434 (default)
- Setup:
  - Install Ollama: curl -fsSL https://ollama.ai/install.sh | sh
  - Pull a model: ollama pull llama3
  - Configure in config.toml or start using immediately
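If you do want an explicit config entry, a minimal Ollama block might look like this (the model and base_url values are illustrative; base_url only needs to be set if it differs from the default):
[providers.ollama]
enabled = true
model = "llama3"
base_url = "http://localhost:11434"  # default; no API key needed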
Enable automatic fallback between providers for reliability:
[fallback]
enabled = true
order = ["openai", "anthropic", "google"]
retry_attempts = 3
retry_delay_ms = 1000
When enabled, the tool will:
- Try the primary provider with exponential backoff retries
- If all retries fail, automatically switch to the next provider
- Continue until a provider succeeds or all providers are exhausted
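Conceptually, the retry-and-fallback loop behaves like the sketch below. This is an illustrative Rust sketch, not the library's actual internals: try_convert is a hypothetical stand-in for one conversion call, and doubling the delay on each retry is an assumed backoff curve.
use std::time::Duration;

// Hypothetical stand-in for a single conversion attempt against one provider.
async fn try_convert(provider: &str) -> Result<String, String> {
    Err(format!("{provider} unavailable"))
}

// Try each provider in order, retrying with (assumed) exponential backoff
// before falling through to the next provider.
async fn convert_with_fallback(
    order: &[&str],
    retry_attempts: u32,
    retry_delay_ms: u64,
) -> Result<String, String> {
    for provider in order {
        let mut delay = Duration::from_millis(retry_delay_ms);
        for attempt in 1..=retry_attempts {
            match try_convert(provider).await {
                Ok(cooklang) => return Ok(cooklang),
                Err(e) => {
                    eprintln!("{provider} attempt {attempt} failed: {e}");
                    tokio::time::sleep(delay).await;
                    delay *= 2; // assumed: delay doubles each retry
                }
            }
        }
        // Retries exhausted for this provider; fall through to the next in `order`.
    }
    Err("all providers exhausted".into())
}

#[tokio::main]
async fn main() {
    let result = convert_with_fallback(&["openai", "anthropic"], 3, 1000).await;
    println!("{result:?}");
}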
If you're upgrading from a version that only used environment variables:
- No action required: environment variables continue to work
- Optional: create config.toml for advanced features
- Keep your OPENAI_API_KEY in environment variables for security
Set your API key:
export OPENAI_API_KEY="your-key-here"
Ensure at least one provider:
- Is enabled in config.toml (enabled = true)
- Is included in fallback.order
- Has a valid API key configured
If you encounter rate limits:
- Enable fallback to use multiple providers
- Increase retry_delay_ms in the config
- Use a different provider temporarily
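For example, a more patient fallback configuration (the values shown are illustrative):
[fallback]
enabled = true
order = ["openai", "anthropic", "google"]
retry_attempts = 5
retry_delay_ms = 5000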
cooklang-import can also be used as a Rust library in your own projects.
Add to your Cargo.toml:
[dependencies]
cooklang-import = "0.7.0"
tokio = { version = "1.0", features = ["full"] }
The library supports four main use cases (URL → Cooklang, URL → Recipe, Markdown → Cooklang, and Text → Cooklang) through three API styles:
- Builder API (recommended): Flexible, type-safe builder pattern with fluent interface
- Convenience Functions: Simple high-level functions for common use cases
- Low-level API: Direct access to fetching and conversion functions
The builder API provides the most control and flexibility:
use cooklang_import::{RecipeImporter, ImportResult};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Use Case 1: URL → Cooklang
let result = RecipeImporter::builder()
.url("https://example.com/recipe")
.build()
.await?;
match result {
ImportResult::Cooklang(cooklang) => println!("{}", cooklang),
ImportResult::Recipe(_) => unreachable!(),
}
// Use Case 2: URL → Recipe (extract only, no conversion)
let result = RecipeImporter::builder()
.url("https://example.com/recipe")
.extract_only()
.build()
.await?;
match result {
ImportResult::Recipe(recipe) => {
println!("Ingredients: {}", recipe.ingredients);
println!("Instructions: {}", recipe.instructions);
}
ImportResult::Cooklang(_) => unreachable!(),
}
// Use Case 3: Markdown → Cooklang (structured)
let result = RecipeImporter::builder()
.markdown("2 eggs\n1 cup flour", "Mix and bake")
.build()
.await?;
// Use Case 4: Text → Cooklang (unstructured)
let recipe_text = "Take 2 eggs and 1 cup of flour. Mix and bake at 350F.";
let result = RecipeImporter::builder()
.text(recipe_text)
.build()
.await?;
Ok(())
}
For simple use cases, use the convenience functions:
use cooklang_import::{
import_from_url,
extract_recipe_from_url,
convert_markdown_to_cooklang,
convert_text_to_cooklang,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Fetch and convert to Cooklang
let cooklang = import_from_url("https://example.com/recipe").await?;
// Extract without conversion
let recipe = extract_recipe_from_url("https://example.com/recipe").await?;
// Convert markdown to Cooklang (structured)
let cooklang = convert_markdown_to_cooklang(
"2 eggs\n1 cup flour",
"Mix and bake"
).await?;
// Convert plain text to Cooklang (unstructured)
let recipe_text = "Take 2 eggs and 1 cup of flour. Mix and bake at 350F.";
let cooklang = convert_text_to_cooklang(recipe_text).await?;
Ok(())
}
The builder supports additional configuration, including custom providers and timeouts:
use cooklang_import::{RecipeImporter, LlmProvider};
use std::time::Duration;
// Use a custom LLM provider (requires config.toml with provider settings)
let result = RecipeImporter::builder()
.url("https://example.com/recipe")
.provider(LlmProvider::Anthropic)
.build()
.await?;
// Set a custom timeout for network requests
let result = RecipeImporter::builder()
.url("https://example.com/recipe")
.timeout(Duration::from_secs(60))
.build()
.await?;
// Combine both options
let result = RecipeImporter::builder()
.url("https://example.com/recipe")
.provider(LlmProvider::Ollama)
.timeout(Duration::from_secs(120))
.build()
.await?;
Available Providers:
- LlmProvider::OpenAI - OpenAI GPT models (default if no config)
- LlmProvider::Anthropic - Claude models
- LlmProvider::Google - Gemini models
- LlmProvider::AzureOpenAI - Azure OpenAI service
- LlmProvider::Ollama - Local Llama models via Ollama
Note: Custom providers require a config.toml file with appropriate provider configuration. See the main Configuration section for details.
The library provides structured error types:
use cooklang_import::{ImportError, RecipeImporter};
#[tokio::main]
async fn main() {
    match RecipeImporter::builder().url("...").build().await {
        Ok(_result) => println!("Success!"),
        Err(ImportError::FetchError(e)) => eprintln!("Network error: {}", e),
        Err(ImportError::NoExtractorMatched) => eprintln!("Could not parse recipe"),
        Err(ImportError::ConversionError(e)) => eprintln!("Conversion failed: {}", e),
        Err(e) => eprintln!("Other error: {}", e),
    }
}
See the examples/ directory for complete examples:
- builder_basic.rs - Basic builder usage for all four use cases
- simple_api.rs - Using convenience functions
- builder_advanced.rs - Advanced features like custom providers and error handling
Run examples with:
cargo run --example builder_basic
cargo run --example simple_api
cargo run --example builder_advanced
Run tests:
cargo test
Run with debug logging:
RUST_LOG=debug cooklang-import <url>