Orchestration Agent

A multi-agent orchestration system that converts natural language requests into executable tasks through structured planning, user feedback loops, and autonomous implementation.

🏗️ Architecture

This system implements a hierarchical multi-agent architecture:

  • OrchestratorAgent: Coordinates all agents and manages the overall workflow
  • TaskPlannerAgent: Analyzes natural language input and creates structured task plans
  • FeedbackAgent: Generates feedback requests and handles user approval loops
  • ImplementationPlannerAgent: Creates detailed technical implementation plans
  • ExecutionAgent: Executes implementation plans using available tools
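
The hand-off between these agents can be sketched as follows. This is an illustrative outline only: the class names match the list above, but the method names, signatures, and return shapes are assumptions, not the project's actual code.

```python
# Illustrative sketch of the hierarchical flow; the real agents wrap LLM calls.
class TaskPlannerAgent:
    def plan(self, request: str) -> dict:
        # Would call an LLM to turn the request into a structured task plan.
        return {"request": request, "tasks": ["analyze", "implement"]}

class FeedbackAgent:
    def request_approval(self, plan: dict) -> bool:
        # Would surface the plan to the user and await confirmation.
        return True

class ImplementationPlannerAgent:
    def detail(self, plan: dict) -> list[str]:
        # Would expand each task into concrete technical steps.
        return [f"step: {t}" for t in plan["tasks"]]

class ExecutionAgent:
    def execute(self, steps: list[str]) -> list[str]:
        # Would run each step using the registered tools.
        return [f"done {s}" for s in steps]

class OrchestratorAgent:
    """Coordinates the other agents and manages the overall workflow."""
    def __init__(self):
        self.planner = TaskPlannerAgent()
        self.feedback = FeedbackAgent()
        self.impl = ImplementationPlannerAgent()
        self.executor = ExecutionAgent()

    def handle(self, request: str) -> list[str]:
        plan = self.planner.plan(request)
        if not self.feedback.request_approval(plan):
            return []  # user rejected the plan; nothing is executed
        steps = self.impl.detail(plan)
        return self.executor.execute(steps)
```

The key point of the hierarchy is that only the orchestrator talks to every agent; the specialized agents never call each other directly.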

✨ Features

  • 🤖 Multi-Agent Coordination: Specialized agents working together seamlessly
  • 💬 Natural Language Processing: Convert plain English to actionable tasks
  • 🔄 Feedback Loop: Interactive approval process before execution
  • 🧠 Memory Management: Session-based conversation and context persistence
  • 🔧 Tool Integration: API calls, terminal commands, file operations
  • 🎯 Multiple LLM Support: Easy switching between Gemini, OpenAI, and other providers
  • 📊 RESTful API: Clean FastAPI interface for all operations

🚀 Quick Start

Prerequisites

  • Python 3.10+
  • uv package manager

Installation

```bash
# Clone the repository
git clone <your-repo-url>
cd orchestration_agent

# Install dependencies with uv
uv pip install -e .

# Copy environment template
cp .env.example .env

# Edit .env and add your API keys
# GEMINI_API_KEY=your_key_here
```

Running the Server

```bash
# Development mode
uv run python src/main.py

# Or using uvicorn directly
uv run uvicorn src.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000. Visit http://localhost:8000/docs for interactive API documentation.

📡 API Usage

Create a Session

```bash
curl -X POST http://localhost:8000/api/v1/sessions
```

Send a Chat Message

```bash
curl -X POST http://localhost:8000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Create a Python script that fetches weather data",
    "session_id": "<session_id_from_above>"
  }'
```

Provide Feedback

```bash
curl -X POST http://localhost:8000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Yes, proceed with the plan",
    "session_id": "<session_id>",
    "is_feedback": true
  }'
```

Get Session Details

```bash
curl http://localhost:8000/api/v1/sessions/<session_id>
```
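
The same calls can be made from Python. The sketch below uses only the standard library and mirrors the JSON bodies of the curl examples above; the helper names are illustrative, and the response fields are whatever the API actually returns.

```python
import json
from urllib import request as urlreq

BASE_URL = "http://localhost:8000/api/v1"  # the server started in "Running the Server"

def build_chat_payload(message: str, session_id: str, is_feedback: bool = False) -> dict:
    """Builds the JSON body used by the /chat endpoint (see curl examples)."""
    payload = {"message": message, "session_id": session_id}
    if is_feedback:
        payload["is_feedback"] = True
    return payload

def post_chat(message: str, session_id: str, is_feedback: bool = False) -> dict:
    """POSTs a chat message and returns the decoded JSON response."""
    req = urlreq.Request(
        f"{BASE_URL}/chat",
        data=json.dumps(build_chat_payload(message, session_id, is_feedback)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlreq.urlopen(req) as resp:
        return json.load(resp)
```

For example, `post_chat("Yes, proceed with the plan", session_id, is_feedback=True)` is equivalent to the feedback curl command above.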

🧩 Project Structure

```
orchestration_agent/
├── src/
│   ├── agents/           # Agent implementations
│   │   ├── base.py       # Base agent classes
│   │   └── orchestrator.py
│   ├── api/              # FastAPI routes and schemas
│   │   ├── routes.py
│   │   └── schemas.py
│   ├── config/           # Configuration management
│   │   ├── settings.py
│   │   └── logging_config.py
│   ├── models/           # LLM model factory
│   │   └── llm_factory.py
│   ├── tools/            # Tool definitions and registry
│   │   └── registry.py
│   ├── utils/            # Session and memory management
│   │   ├── session.py
│   │   └── manager.py
│   └── main.py           # Application entry point
├── prompts/              # Agent prompt templates
│   ├── agent_prompts.py
│   └── __init__.py
├── data/                 # Session storage
│   └── sessions/
├── pyproject.toml        # Project dependencies
├── .env.example          # Environment template
└── README.md
```

🔧 Configuration

Key environment variables:

  • GEMINI_API_KEY: Your Google Gemini API key (required)
  • OPENAI_API_KEY: OpenAI API key (optional)
  • DEFAULT_MODEL_PROVIDER: gemini or openai
  • DEFAULT_MODEL_NAME: Model name (e.g., gemini-1.5-pro)
  • LOG_LEVEL: Logging level (INFO, DEBUG, etc.)

🛠️ Development

Adding a New Model Provider

Edit src/models/llm_factory.py and add your provider:

```python
# Inside the provider dispatch in src/models/llm_factory.py.
# Requires the langchain-anthropic package and its import at the top of the file:
# from langchain_anthropic import ChatAnthropic
elif provider.lower() == "anthropic":
    return ChatAnthropic(
        model=model_name,
        api_key=settings.anthropic_api_key,
        temperature=temperature,
    )
```
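
For context, the factory dispatches on the provider string and returns a configured chat model. A stand-alone illustration of that shape, with placeholder return values standing in for the real chat-model classes (this is not the project's actual `llm_factory` code):

```python
def create_llm(provider: str, model_name: str, temperature: float = 0.0):
    """Dispatches on the provider name; each branch returns a configured model.

    The tuples below are placeholders for real constructor calls such as
    ChatGoogleGenerativeAI(...) or ChatOpenAI(...).
    """
    provider = provider.lower()
    if provider == "gemini":
        return ("gemini-client", model_name, temperature)
    elif provider == "openai":
        return ("openai-client", model_name, temperature)
    raise ValueError(f"Unknown model provider: {provider}")
```

Adding a provider means adding one `elif` branch (as shown above for Anthropic) plus the matching API-key setting; callers never need to change.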

Adding New Tools

Register tools in src/tools/registry.py:

```python
from typing import Any, Dict

async def my_custom_tool(param: str) -> Dict[str, Any]:
    """Tool description."""
    # Implementation
    return {"success": True, "result": "..."}

# Register it
tool_registry.register("my_custom_tool", my_custom_tool)
```
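
The register-then-invoke pattern can be exercised end to end as below. The `ToolRegistry` shown here is a minimal stand-in so the example is self-contained; the real class in src/tools/registry.py may expose a different interface.

```python
import asyncio
from typing import Any, Callable, Dict

class ToolRegistry:
    """Minimal stand-in for the project's tool registry."""
    def __init__(self):
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    async def invoke(self, name: str, **kwargs) -> Any:
        # Looks up the async tool by name and awaits it with the given arguments.
        return await self._tools[name](**kwargs)

async def my_custom_tool(param: str) -> Dict[str, Any]:
    """Echoes its input; a real tool would make an API call, run a command, etc."""
    return {"success": True, "result": param.upper()}

registry = ToolRegistry()
registry.register("my_custom_tool", my_custom_tool)
result = asyncio.run(registry.invoke("my_custom_tool", param="hello"))
```

Tools are async so the ExecutionAgent can await them without blocking the event loop that serves the API.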

Customizing Prompts

Edit prompts in prompts/agent_prompts.py to customize agent behavior.

📝 Workflow Example

  1. User Input: "Create a REST API for user management"
  2. Task Planning: Agent analyzes and creates structured task plan
  3. Feedback Request: System asks for user confirmation
  4. User Approval: User confirms or modifies the plan
  5. Implementation Planning: Detailed technical plan is generated
  6. Execution: Tools are used to implement the plan
  7. Completion: Results are reported back to user

🧪 Testing

```bash
# Run with pytest (when tests are added)
uv run pytest

# Run with coverage
uv run pytest --cov=src
```

📄 License

MIT License - See LICENSE file for details

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
