A sophisticated multi-agent orchestration system that converts natural language requests into executable tasks through intelligent planning, feedback loops, and autonomous implementation.
This system implements a hierarchical multi-agent architecture:
- OrchestratorAgent: Coordinates all agents and manages the overall workflow
- TaskPlannerAgent: Analyzes natural language input and creates structured task plans
- FeedbackAgent: Generates feedback requests and handles user approval loops
- ImplementationPlannerAgent: Creates detailed technical implementation plans
- ExecutionAgent: Executes implementation plans using available tools
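The hierarchy above can be sketched in plain Python. This is a condensed, hypothetical sketch (class and method names are assumptions, the ImplementationPlannerAgent is omitted for brevity, and the LLM calls are stubbed out); the real implementations live in `src/agents/`:

```python
from typing import Any, Dict


class BaseAgent:
    """Minimal interface every specialized agent implements."""

    def run(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError


class TaskPlannerAgent(BaseAgent):
    def run(self, payload):
        # The real agent calls an LLM; here we just wrap the request.
        return {"tasks": [f"plan for: {payload['message']}"]}


class FeedbackAgent(BaseAgent):
    def run(self, payload):
        # Gate execution on explicit user approval.
        return {"approved": bool(payload.get("user_approved"))}


class ExecutionAgent(BaseAgent):
    def run(self, payload):
        return {"results": [f"executed: {t}" for t in payload["tasks"]]}


class OrchestratorAgent(BaseAgent):
    """Coordinates the sub-agents and owns the overall workflow."""

    def __init__(self) -> None:
        self.planner = TaskPlannerAgent()
        self.feedback = FeedbackAgent()
        self.executor = ExecutionAgent()

    def run(self, payload):
        plan = self.planner.run(payload)
        gate = self.feedback.run({**payload, **plan})
        if not gate["approved"]:
            # Pause and surface the plan for user confirmation.
            return {"status": "awaiting_feedback", **plan}
        return {"status": "done", **self.executor.run(plan)}
```

The key design point is that only the orchestrator knows the workflow order; each sub-agent sees a payload and returns a result.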
- 🤖 Multi-Agent Coordination: Specialized agents working together seamlessly
- 💬 Natural Language Processing: Convert plain English to actionable tasks
- 🔄 Feedback Loop: Interactive approval process before execution
- 🧠 Memory Management: Session-based conversation and context persistence
- 🔧 Tool Integration: API calls, terminal commands, file operations
- 🎯 Multiple LLM Support: Easy switching between Gemini, OpenAI, and other providers
- 📊 RESTful API: Clean FastAPI interface for all operations
- Python 3.10+
- uv package manager
```bash
# Clone the repository
git clone <your-repo-url>
cd orchestration_agent

# Install dependencies with uv
uv pip install -e .
```

```bash
# Copy environment template
cp .env.example .env

# Edit .env and add your API keys
# GEMINI_API_KEY=your_key_here
```

```bash
# Development mode
uv run python src/main.py

# Or using uvicorn directly
uv run uvicorn src.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000. Visit http://localhost:8000/docs for interactive API documentation.
```bash
# Create a session
curl -X POST http://localhost:8000/api/v1/sessions
```

```bash
# Send a natural language request
curl -X POST http://localhost:8000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Create a Python script that fetches weather data",
    "session_id": "<session_id_from_above>"
  }'
```

```bash
# Approve the proposed plan
curl -X POST http://localhost:8000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Yes, proceed with the plan",
    "session_id": "<session_id>",
    "is_feedback": true
  }'
```

```bash
# Inspect a session
curl http://localhost:8000/api/v1/sessions/<session_id>
```

```
orchestration_agent/
├── src/
│   ├── agents/              # Agent implementations
│   │   ├── base.py          # Base agent classes
│   │   └── orchestrator.py
│   ├── api/                 # FastAPI routes and schemas
│   │   ├── routes.py
│   │   └── schemas.py
│   ├── config/              # Configuration management
│   │   ├── settings.py
│   │   └── logging_config.py
│   ├── models/              # LLM model factory
│   │   └── llm_factory.py
│   ├── tools/               # Tool definitions and registry
│   │   └── registry.py
│   ├── utils/               # Session and memory management
│   │   ├── session.py
│   │   └── manager.py
│   └── main.py              # Application entry point
├── prompts/                 # Agent prompt templates
│   ├── agent_prompts.py
│   └── __init__.py
├── data/                    # Session storage
│   └── sessions/
├── pyproject.toml           # Project dependencies
├── .env.example             # Environment template
└── README.md
```
Key environment variables:
- `GEMINI_API_KEY`: Your Google Gemini API key (required)
- `OPENAI_API_KEY`: OpenAI API key (optional)
- `DEFAULT_MODEL_PROVIDER`: `gemini` or `openai`
- `DEFAULT_MODEL_NAME`: Model name (e.g., `gemini-1.5-pro`)
- `LOG_LEVEL`: Logging level (`INFO`, `DEBUG`, etc.)
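A minimal sketch of how these variables could be loaded (the project's actual `src/config/settings.py` may differ; the field names and defaults here are assumptions):

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class Settings:
    gemini_api_key: str
    openai_api_key: Optional[str]
    default_model_provider: str
    default_model_name: str
    log_level: str


def load_settings() -> Settings:
    # Fail fast on the one required key; everything else has a default.
    gemini_key = os.environ.get("GEMINI_API_KEY")
    if not gemini_key:
        raise RuntimeError("GEMINI_API_KEY is required")
    return Settings(
        gemini_api_key=gemini_key,
        openai_api_key=os.environ.get("OPENAI_API_KEY"),
        default_model_provider=os.environ.get("DEFAULT_MODEL_PROVIDER", "gemini"),
        default_model_name=os.environ.get("DEFAULT_MODEL_NAME", "gemini-1.5-pro"),
        log_level=os.environ.get("LOG_LEVEL", "INFO"),
    )
```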
Edit `src/models/llm_factory.py` and add your provider:

```python
elif provider.lower() == "anthropic":
    return ChatAnthropic(
        model=model_name,
        api_key=settings.anthropic_api_key,
        temperature=temperature,
    )
```

Register tools in `src/tools/registry.py`:
```python
from typing import Any, Dict


async def my_custom_tool(param: str) -> Dict[str, Any]:
    """Tool description."""
    # Implementation
    return {"success": True, "result": "..."}


# Register it
tool_registry.register("my_custom_tool", my_custom_tool)
```

Edit prompts in `prompts/agent_prompts.py` to customize agent behavior.
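For context, the registry itself can be little more than a name-to-callable map. This is a sketch, not the actual contents of `src/tools/registry.py`:

```python
import asyncio
from typing import Any, Callable, Dict


class ToolRegistry:
    """Maps tool names to async callables the ExecutionAgent can invoke."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, func: Callable[..., Any]) -> None:
        if name in self._tools:
            raise ValueError(f"tool {name!r} already registered")
        self._tools[name] = func

    async def execute(self, name: str, **kwargs: Any) -> Any:
        try:
            tool = self._tools[name]
        except KeyError:
            raise KeyError(f"unknown tool {name!r}") from None
        return await tool(**kwargs)


tool_registry = ToolRegistry()


async def my_custom_tool(param: str) -> Dict[str, Any]:
    """Echo tool used here to demonstrate registration."""
    return {"success": True, "result": param}


tool_registry.register("my_custom_tool", my_custom_tool)
```

Keeping tools async lets the execution agent await slow operations (API calls, subprocesses) without blocking the FastAPI event loop.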
1. User Input: "Create a REST API for user management"
2. Task Planning: The TaskPlannerAgent analyzes the request and drafts a structured task plan
3. Feedback Request: The system asks the user to confirm the plan
4. User Approval: The user confirms or modifies the plan
5. Implementation Planning: A detailed technical plan is generated
6. Execution: Tools are used to carry out the plan
7. Completion: Results are reported back to the user
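The steps above amount to a small state machine over the session. A condensed sketch (the stage names are illustrative, not the project's actual enum):

```python
from enum import Enum, auto


class Stage(Enum):
    PLANNING = auto()
    AWAITING_FEEDBACK = auto()
    IMPLEMENTING = auto()
    EXECUTING = auto()
    DONE = auto()


def advance(stage: Stage, approved: bool = False) -> Stage:
    """Move a session one step through the workflow."""
    if stage is Stage.PLANNING:
        return Stage.AWAITING_FEEDBACK  # plan drafted, ask the user
    if stage is Stage.AWAITING_FEEDBACK:
        # The user either approves or sends the plan back for revision.
        return Stage.IMPLEMENTING if approved else Stage.PLANNING
    if stage is Stage.IMPLEMENTING:
        return Stage.EXECUTING  # technical plan handed to the tools
    if stage is Stage.EXECUTING:
        return Stage.DONE  # results reported back
    return stage
```

The feedback gate is the only branching point: nothing reaches the execution stage without an explicit approval.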
```bash
# Run with pytest (when tests are added)
uv run pytest

# Run with coverage
uv run pytest --cov=src
```

MIT License - See LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.