Labruno is an agent coordinator that creates multiple AI solutions to your coding tasks using parallel sandboxes and evaluates them to find the best implementation.
Labruno acts as an orchestrator that:
- Takes your coding task and spins up multiple isolated sandboxes
- Asks each sandbox to generate a unique solution using LLaMA 4
- Executes all solutions in parallel
- Uses an LLM as judge to evaluate and select the best implementation
- Shows you all solutions with the winner highlighted
Think of it as having multiple AI developers working on your task simultaneously, with an expert reviewer choosing the best approach.
- 🏎️ Parallel Processing: Creates and runs multiple sandboxes concurrently
- 🧠 Multiple Solutions: Generates diverse approaches to the same problem
- 🤖 AI Evaluation: Uses an LLM to judge which solution is best
- 🔒 Secure Execution: Runs code in isolated Daytona sandboxes
- ⚡ Fast Results: Get multiple working implementations in seconds
- Python 3.8+
- Daytona and Groq API keys
# Install
git clone https://github.com/nkkko/labruno.git
cd labruno
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Configure
cp .env.example .env
# Add your API keys to .env
# Run
python app.py

In your .env file:
DAYTONA_API_KEY=your_key_here
DAYTONA_TARGET=us
GROQ_API_KEY=your_key_here
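A quick way to catch a misconfigured .env before the app starts is to check for the three required keys up front. The helper below is illustrative only (it is not part of Labruno); it assumes the variables are already loaded into the process environment, e.g. by python-dotenv:

```python
import os

# The three variables Labruno expects in .env
REQUIRED_KEYS = ["DAYTONA_API_KEY", "DAYTONA_TARGET", "GROQ_API_KEY"]

def missing_keys(env=os.environ):
    """Return the required keys that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example with an in-memory mapping instead of the real environment:
sample = {"DAYTONA_API_KEY": "dk_123", "GROQ_API_KEY": "gk_456"}
print(missing_keys(sample))  # → ['DAYTONA_TARGET']
```

Passing a mapping instead of reading os.environ directly makes the check easy to test without touching your real shell environment.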
- Open http://127.0.0.1:5001 in your browser
- Type a coding task like "write a function to find prime numbers"
- Click "Generate and Execute Code"
- See multiple solutions and the AI's evaluation of the best one
- You provide a task: Ask for any coding solution
- Concurrent sandboxes spin up: Multiple isolated environments are created in parallel
- Each sandbox generates code: LLaMA 4 creates a unique solution in each environment
- All solutions execute: Code runs safely in isolated sandboxes
- AI judges the results: The LLM evaluates solutions for correctness, efficiency, and style
- Results presented: You see all working solutions with the best one highlighted
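The fan-out / judge flow above can be sketched with a thread pool. Note that generate_solution, run_in_sandbox, and judge below are stand-ins for the real Daytona and Groq calls in app.py; the names and signatures here are illustrative, not the actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_solution(task, i):
    # Stand-in for an LLM call made inside sandbox i.
    return f"# solution {i} for: {task}"

def run_in_sandbox(code):
    # Stand-in for executing the code in an isolated sandbox.
    return {"code": code, "output": "ok"}

def judge(results):
    # Stand-in for the LLM-as-judge step; here we just pick the first.
    return results[0]

def orchestrate(task, n_sandboxes=3):
    # Generate and execute n_sandboxes solutions concurrently,
    # then hand all results to the judge.
    with ThreadPoolExecutor(max_workers=n_sandboxes) as pool:
        codes = list(pool.map(lambda i: generate_solution(task, i),
                              range(n_sandboxes)))
        results = list(pool.map(run_in_sandbox, codes))
    return results, judge(results)

results, best = orchestrate("write a function to find prime numbers")
print(len(results), best["output"])  # → 3 ok
```

The real app replaces each stub with network calls, which is why a thread pool (rather than multiprocessing) is a natural fit: the workers spend most of their time waiting on I/O.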
- Interview Prep: See multiple approaches to coding problems
- Learning: Compare different ways to solve the same problem
- Optimization: Find the most efficient algorithm for your task
- Exploration: Generate diverse implementations and understand trade-offs
- Modify the number of parallel sandboxes in app.py
- Adjust evaluation criteria by changing the ranking in evaluate_results()
- Customize prompt templates in sandbox_task_runner.py
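As a rough idea of what such a customization looks like, the templates below are hypothetical examples of the kind of strings sandbox_task_runner.py might define; the actual templates and variable names in the repo may differ:

```python
# Illustrative templates only -- not the ones shipped with Labruno.
GENERATION_PROMPT = (
    "You are one of several independent developers.\n"
    "Write a complete, runnable Python solution for this task:\n"
    "{task}\n"
    "Return only code."
)

EVALUATION_PROMPT = (
    "You are a senior code reviewer. Rank these solutions by "
    "correctness, efficiency, and style, and name the best one:\n"
    "{solutions}"
)

prompt = GENERATION_PROMPT.format(task="write a function to find prime numbers")
print(prompt.splitlines()[2])  # the task line
```

Changing the evaluation criteria is then a matter of editing the wording of the judge prompt, e.g. adding "readability" or "memory usage" to the ranking instruction.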
- API Key Issues: Ensure your Daytona and Groq API keys are correctly set in .env
- Slow Results: For complex tasks, reduce the number of concurrent sandboxes
- Memory Limitations: If you encounter memory issues, lower the max_workers parameter
Created by nkkko
