Llama Prompt Optimization Tool

A tool for optimizing prompts with Meta's Llama Prompt Ops framework. It helps you create, test, and improve prompts for large language models through automated optimization strategies.

Overview

This project demonstrates how to use the Llama Prompt Ops Python package to automatically optimize prompts for better performance. The tool applies techniques such as few-shot example selection and instruction refinement to improve prompt effectiveness against your dataset and evaluation metric.

Prerequisites

  • Python 3.11 or higher (but less than 3.14)
  • Poetry for dependency management
  • OpenRouter API key for accessing LLM models

Installation

  1. Install the llama-prompt-ops Python package:

    pip install llama-prompt-ops
  2. Clone and set up this project:

    git clone <repository-url>
    cd llama-prompt-optimization-tool
    poetry install

Getting Started

Step 1: Create a Demo Project

Create a new prompt optimization project:

llama-prompt-ops create demo-project

This will create a project structure with the following components:

  • config.yaml: Configuration file for the optimization process
  • prompts/prompt.txt: Your initial prompt template
  • data/dataset.json: Training and evaluation dataset
  • results/: Directory where optimized prompts will be saved

Step 2: Review the System Prompt and Dataset

  1. Examine the initial prompt in prompts/prompt.txt:

    You are a helpful assistant. Extract and return a json with the following keys and values:
    - "urgency" as one of `high`, `medium`, `low`
    - "sentiment" as one of `negative`, `neutral`, `positive`
    - "categories" Create a dictionary with categories as keys and boolean values (True/False), where the value indicates whether the category is one of the best matching support category tags from: `emergency_repair_services`, `routine_maintenance_requests`, `quality_and_safety_concerns`, `specialized_cleaning_services`, `general_inquiries`, `sustainability_and_environmental_practices`, `training_and_support_requests`, `cleaning_services_scheduling`, `customer_feedback_and_complaints`, `facility_management_issues`
    Your complete message should be a valid json string that can be read directly and only contain the keys mentioned in the list above. Never enclose it in ```json...```, no newlines, no unnessacary whitespaces.
    
  2. Review the dataset in data/dataset.json:

    • Contains customer service emails with expected JSON responses
    • Each entry has an input field (email content) and an answer field (expected JSON output)
    • The dataset is used to train and evaluate the prompt optimization
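To make the dataset layout concrete, here is a minimal, hypothetical entry sketched in Python. The field names (`fields.input` for the email text, `answer` for the expected JSON string) mirror the `input_field` and `golden_output_field` mappings in this project's config.yaml; the email text and labels are invented for illustration.

```python
import json

# Hypothetical dataset entry, shaped to match config.yaml's field mappings:
# the email body lives under fields.input, the expected output under answer.
entry = {
    "fields": {
        "input": "Subject: Burst pipe in the lobby, water everywhere. Please send someone now!"
    },
    # The golden output is itself a JSON string, matching the prompt's spec.
    "answer": json.dumps({
        "urgency": "high",
        "sentiment": "negative",
        "categories": {
            "emergency_repair_services": True,
            "routine_maintenance_requests": False,
        },
    }),
}

# The answer field must parse as valid JSON.
parsed = json.loads(entry["answer"])
print(parsed["urgency"])  # high
```

Note that `answer` stores a JSON *string*, not a nested object, so the evaluation metric can compare it directly against the model's raw text output.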

Step 3: Configure the Project

Inspect and modify the config.yaml file:

system_prompt:
  file: prompts/prompt.txt
  inputs:
  - question
  outputs:
  - answer
dataset:
  path: data/dataset.json
  input_field:
  - fields
  - input
  golden_output_field: answer
model:
  task_model: openrouter/meta-llama/llama-3.3-70b-instruct
  proposer_model: openrouter/meta-llama/llama-3.3-70b-instruct
metric:
  class: llama_prompt_ops.core.metrics.FacilityMetric
  strict_json: false
  output_field: answer
optimization:
  strategy: llama

Key configuration options:

  • Model: Uses Llama 3.3 70B Instruct via OpenRouter
  • Metric: FacilityMetric for evaluating JSON output quality
  • Strategy: Llama optimization strategy for prompt improvement

Step 4: Set Up API Access

  1. Get an OpenRouter API key:

    • Visit https://openrouter.ai/
    • Create an account and obtain your API key
    • The API key is required to access the Llama models
  2. Set up environment variables:

    export OPENROUTER_API_KEY=your_api_key_here

    Or create a .env file in your project directory:

    OPENROUTER_API_KEY=your_api_key_here
    

Step 5: Run the Optimization

Navigate to your project directory and run the optimization:

cd demo-project
llama-prompt-ops migrate

This command will:

  • Load your dataset and initial prompt
  • Run the optimization process using the configured strategy
  • Generate improved prompts based on the evaluation metrics
  • Save the results in the results/ directory

Step 6: Review the Results

After optimization, check the results/ directory for the improved prompts:

  1. Generated files:

    • config_YYYYMMDD_HHMMSS.yaml: Optimized prompt configuration
    • config_YYYYMMDD_HHMMSS.json: JSON version of the configuration
  2. Key improvements in the optimized prompt:

    • More detailed system instructions
    • Few-shot examples for better context
    • Improved formatting and clarity
    • Better handling of edge cases

Project Structure

llama-prompt-optimization-tool/
├── pyproject.toml          # Project dependencies
├── poetry.lock            # Locked dependency versions
├── README.md              # This file
└── projects/
    └── demo-project/      # Example optimization project
        ├── config.yaml    # Project configuration
        ├── prompts/
        │   └── prompt.txt # Initial prompt template
        ├── data/
        │   └── dataset.json # Training/evaluation dataset
        ├── results/       # Optimized prompts
        │   ├── config_*.yaml
        │   └── config_*.json
        └── README.md      # Project-specific instructions

Understanding the Optimization Process

The Llama Prompt Ops tool uses several techniques to improve prompts:

  1. Few-shot Learning: Adds relevant examples to the prompt
  2. Instruction Tuning: Refines the system instructions for clarity
  3. Format Optimization: Improves the output format specifications
  4. Context Enhancement: Adds relevant context and constraints
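To illustrate the first technique, here is a deliberately simplified sketch of few-shot assembly: selected dataset examples are appended to the base system prompt so the model sees concrete input-to-output pairs before the real query. This is illustrative only and does not reflect llama-prompt-ops internals; the function name and formatting are invented.

```python
# Illustrative few-shot prompt assembly (not llama-prompt-ops internals).
def build_few_shot_prompt(base_prompt: str, examples: list[dict], k: int = 3) -> str:
    """Append up to k worked examples to the base system prompt."""
    shots = [
        f"Email:\n{ex['input']}\nResponse:\n{ex['answer']}"
        for ex in examples[:k]
    ]
    return base_prompt + "\n\nExamples:\n\n" + "\n\n".join(shots)

demo = [{"input": "Water leak in unit 4B", "answer": '{"urgency": "high"}'}]
prompt = build_few_shot_prompt("You are a helpful assistant.", demo)
print(prompt)
```

The real optimizer additionally *selects* which examples to include based on how much each one improves the evaluation metric, rather than taking the first k.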

Customization Options

Modifying the Dataset

Replace data/dataset.json with your own dataset:

  • Each entry should have input and expected output fields
  • Ensure the format matches your use case
  • Include diverse examples for better optimization
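Before running an optimization against a new dataset, it can save time to sanity-check each entry up front. The sketch below assumes the same field mapping as this project's config.yaml (`fields.input` and a JSON-string `answer`); adjust the names if your config maps different fields.

```python
import json

def validate_dataset(entries: list[dict]) -> list[str]:
    """Return a list of problems found; an empty list means the entries look usable.

    Field names (fields.input, answer) mirror this project's config.yaml;
    change them if your config maps different fields.
    """
    problems = []
    for i, entry in enumerate(entries):
        if "input" not in entry.get("fields", {}):
            problems.append(f"entry {i}: missing fields.input")
        try:
            json.loads(entry.get("answer", ""))
        except (TypeError, json.JSONDecodeError):
            problems.append(f"entry {i}: answer is not valid JSON")
    return problems

good = {"fields": {"input": "hello"}, "answer": '{"urgency": "low"}'}
bad = {"fields": {}, "answer": "not json"}
print(validate_dataset([good, bad]))
```

Catching a malformed golden answer here is much cheaper than burning API calls on an optimization run that scores it as a model failure.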

Changing Models

Update the config.yaml to use different models:

model:
  task_model: openrouter/anthropic/claude-3.5-sonnet
  proposer_model: openrouter/anthropic/claude-3.5-sonnet

Adjusting Metrics

Modify the evaluation metric in config.yaml:

metric:
  class: llama_prompt_ops.core.metrics.YourCustomMetric
  # Add metric-specific parameters

Best Practices

  1. Start with a clear initial prompt: The better your starting point, the better the optimization results
  2. Use diverse datasets: Include various scenarios and edge cases in your training data
  3. Iterate and refine: Run multiple optimization cycles and compare results
  4. Test thoroughly: Always validate optimized prompts with your specific use cases
  5. Monitor costs: Each optimization run issues many API calls, so track your usage and spending

Troubleshooting

Common Issues

  1. API Key Errors: Ensure your OpenRouter API key is correctly set
  2. Model Availability: Check if the specified model is available on OpenRouter
  3. Dataset Format: Verify your dataset matches the expected format
  4. Memory Issues: Large datasets may require more memory

Contributing

This project is designed to demonstrate Llama Prompt Ops usage. For contributions to the core tool, please refer to the main repository.

License

This project follows the license of the underlying Llama Prompt Ops tool. Please refer to the original repository for licensing information.
