The Gradient™ Agent Development Kit (ADK) is a comprehensive toolkit for building, testing, deploying, and evaluating AI agents on DigitalOcean's Gradient™ AI Platform. It provides both a CLI for development workflows and a runtime environment for hosting agents with automatic trace capture.
- Local Development: Run and test your agents locally with hot-reload support
- Seamless Deployment: Deploy agents to DigitalOcean with a single command
- Evaluation Framework: Run comprehensive evaluations with custom metrics and datasets
- Observability: View traces and runtime logs directly from the CLI
- Framework Agnostic: Works with any Python framework for building AI agents
- Automatic LangGraph Integration: Built-in trace capture for LangGraph nodes and state transitions
- Custom Decorators: Capture traces from any framework using `@trace_*` decorators
- Streaming Support: Full support for streaming responses with trace capture
- Production Ready: Designed for seamless deployment to DigitalOcean infrastructure
```shell
pip install gradient-adk
```

🎥 Watch the Getting Started Video for a complete walkthrough.
```shell
gradient agent init
```

This creates a new agent project with:

- `main.py` - Agent entrypoint with example code
- `agents/` - Directory for agent implementations
- `tools/` - Directory for custom tools
- `config.yaml` - Agent configuration
- `requirements.txt` - Python dependencies
```shell
gradient agent run
```

Your agent will be available at http://localhost:8080 with automatic trace capture enabled.
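With the dev server running, you can invoke the agent over HTTP. The sketch below is illustrative only: the endpoint path and request shape are assumptions based on the `input["query"]` pattern used in the examples later in this README — check your generated project and the Gradient docs for the exact contract.

```python
import json
import urllib.request

# Default local address from `gradient agent run`
AGENT_URL = "http://localhost:8080"


def build_payload(query: str) -> bytes:
    """Serialize the agent input dict to a JSON request body.

    The {"query": ...} shape is an assumption for illustration; it mirrors
    the input.get("query") access in this README's entrypoint examples.
    """
    return json.dumps({"query": query}).encode("utf-8")


def invoke_local_agent(query: str, url: str = AGENT_URL) -> str:
    """POST a query to the locally running agent and return the raw response."""
    req = urllib.request.Request(
        url,
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```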
```shell
export DIGITALOCEAN_API_TOKEN=your_token_here
gradient agent deploy
```

Run an evaluation against your deployed agent:

```shell
gradient agent evaluate \
  --test-case-name "my-evaluation" \
  --dataset-file evaluation_dataset.csv \
  --categories correctness,context_quality
```

LangGraph agents automatically capture traces for all nodes and state transitions:
```python
from gradient_adk import entrypoint
from langgraph.graph import StateGraph
from typing import TypedDict

class State(TypedDict):
    input: str
    output: str

async def llm_call(state: State) -> State:
    # This node execution is automatically traced
    response = await llm.ainvoke(state["input"])
    state["output"] = response
    return state

@entrypoint
async def main(input: dict, context: dict):
    graph = StateGraph(State)
    graph.add_node("llm_call", llm_call)
    graph.set_entry_point("llm_call")
    graph = graph.compile()
    result = await graph.ainvoke({"input": input.get("query")})
    return result["output"]
```

For frameworks beyond LangGraph, use trace decorators to capture custom spans:
```python
from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever

@trace_retriever("vector_search")
async def search_knowledge_base(query: str):
    # Retriever spans capture search/lookup operations
    results = await vector_db.search(query)
    return results

@trace_llm("generate_response")
async def generate_response(prompt: str):
    # LLM spans capture model calls with token usage
    response = await llm.generate(prompt)
    return response

@trace_tool("calculate")
async def calculate(x: int, y: int):
    # Tool spans capture function execution
    return x + y

@entrypoint
async def main(input: dict, context: dict):
    docs = await search_knowledge_base(input["query"])
    result = await calculate(5, 10)
    response = await generate_response(f"Context: {docs}")
    return response
```

The runtime supports streaming responses with automatic trace capture:
```python
from gradient_adk import entrypoint

@entrypoint
async def main(input: dict, context: dict):
    # Stream text chunks
    async def generate_chunks():
        async for chunk in llm.stream(input["query"]):
            yield chunk

    # Return the async generator so the runtime can stream chunks
    return generate_chunks()
```

```shell
# Initialize new project
gradient agent init

# Configure existing project
gradient agent configure

# Run locally with hot-reload
gradient agent run --dev

# Deploy to DigitalOcean
gradient agent deploy

# View runtime logs
gradient agent logs

# Open traces UI
gradient agent traces
```

You can evaluate your deployed agent with a number of useful evaluation metrics. See the DigitalOcean docs for details on what belongs in a dataset.
```shell
# Run evaluation (interactive)
gradient agent evaluate

# Run evaluation (non-interactive)
gradient agent evaluate \
  --test-case-name "my-test" \
  --dataset-file data.csv \
  --categories correctness,safety_and_security \
  --star-metric-name "Correctness (general hallucinations)" \
  --success-threshold 80.0
```

The ADK provides comprehensive tracing capabilities to capture and analyze your agent's execution. You can use decorators to wrap functions, or call the programmatic functions for manual span creation.
- LangGraph Nodes: All node executions, state transitions, and edges (including LLM calls, tool calls, and DigitalOcean Knowledge Base calls)
- HTTP Requests: Request/response payloads for LLM API calls
- Errors: Full exception details and stack traces
- Streaming Responses: Individual chunks and aggregated outputs
Use decorators to automatically trace function executions:
```python
from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever

@trace_llm("model_call")
async def call_model(prompt: str):
    """LLM spans capture model calls with token usage."""
    response = await llm.generate(prompt)
    return response

@trace_tool("calculator")
async def calculate(x: int, y: int):
    """Tool spans capture function/tool execution."""
    return x + y

@trace_retriever("vector_search")
async def search_docs(query: str):
    """Retriever spans capture search/lookup operations."""
    results = await vector_db.search(query)
    return results

@entrypoint
async def main(input: dict, context: dict):
    docs = await search_docs(input["query"])
    result = await calculate(5, 10)
    response = await call_model(f"Context: {docs}")
    return response
```

For more control over span creation, use the programmatic functions. These are useful when you can't use decorators or need to add spans for code you don't control:
```python
from gradient_adk import entrypoint, add_llm_span, add_tool_span, add_agent_span

@entrypoint
async def main(input: dict, context: dict):
    # Add an LLM span with detailed metadata
    response = await external_llm_call(input["query"])
    add_llm_span(
        name="external_llm_call",
        input={"messages": [{"role": "user", "content": input["query"]}]},
        output={"response": response},
        model="gpt-4",
        num_input_tokens=100,
        num_output_tokens=50,
        temperature=0.7,
    )

    # Add a tool span
    tool_result = await run_tool(input["data"])
    add_tool_span(
        name="data_processor",
        input={"data": input["data"]},
        output={"result": tool_result},
        tool_call_id="call_abc123",
        metadata={"tool_version": "1.0"},
    )

    # Add an agent span for sub-agent calls
    agent_result = await call_sub_agent(input["task"])
    add_agent_span(
        name="research_agent",
        input={"task": input["task"]},
        output={"result": agent_result},
        metadata={"agent_type": "research"},
        tags=["sub-agent", "research"],
    )

    return {"response": response, "tool_result": tool_result, "agent_result": agent_result}
```

| Function | Description | Key Optional Fields |
|---|---|---|
| `add_llm_span()` | Record LLM/model calls | `model`, `temperature`, `num_input_tokens`, `num_output_tokens`, `total_tokens`, `tools`, `time_to_first_token_ns` |
| `add_tool_span()` | Record tool/function executions | `tool_call_id` |
| `add_agent_span()` | Record agent/sub-agent executions | — |

Common optional fields for all span functions: `duration_ns`, `metadata`, `tags`, `status_code`
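When recording spans after the fact, you often need to supply `duration_ns` yourself. A minimal sketch of how you might measure a call and assemble the common optional fields (the `timed` helper is hypothetical, not part of the ADK, and the field values here are placeholders):

```python
import time
from typing import Any, Callable


def timed(fn: Callable[..., Any], *args, **kwargs):
    """Run fn and return (result, elapsed nanoseconds) for use as duration_ns."""
    start = time.monotonic_ns()
    result = fn(*args, **kwargs)
    return result, time.monotonic_ns() - start


result, duration_ns = timed(sum, [1, 2, 3])

# Common optional fields, ready to splat into a span call, e.g.
# add_tool_span(name="sum", input=..., output=..., **common_fields)
common_fields = {
    "duration_ns": duration_ns,
    "metadata": {"tool_version": "1.0"},  # illustrative values
    "tags": ["example"],
    "status_code": 200,
}
```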
Traces are:
- Automatically sent to DigitalOcean's Gradient Platform
- Available in real-time through the web console
- Accessible via the `gradient agent traces` command
```shell
# Required for deployment and evaluations
export DIGITALOCEAN_API_TOKEN=your_do_api_token

# Required for Gradient serverless inference (if using)
export GRADIENT_MODEL_ACCESS_KEY=your_gradient_key

# Optional: Enable verbose trace logging
export GRADIENT_VERBOSE=1
```

```
my-agent/
├── main.py              # Agent entrypoint with @entrypoint decorator
├── .gradient/agent.yml  # Agent configuration (auto-generated)
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (not committed)
├── agents/              # Agent implementations
│   └── my_agent.py
└── tools/               # Custom tools
    └── my_tool.py
```
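A typical `.env` for the scaffolded project might hold the environment variables described above (values are placeholders; keep this file out of version control):

```shell
# .env — loaded for local development
DIGITALOCEAN_API_TOKEN=your_do_api_token
GRADIENT_MODEL_ACCESS_KEY=your_gradient_key
GRADIENT_VERBOSE=1
```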
The Gradient ADK is designed to work with any Python-based AI agent framework:
- ✅ LangGraph - Automatic trace capture (zero configuration)
- ✅ LangChain - Use trace decorators (`@trace_llm`, `@trace_tool`, `@trace_retriever`) for custom spans
- ✅ CrewAI - Use trace decorators for agent and task execution
- ✅ Custom Frameworks - Use trace decorators for any function
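Conceptually, a trace decorator just wraps a function, times it, and records a span. A rough, simplified sketch of that idea (this is *not* the ADK's actual implementation — real spans are sent to the Gradient Platform, and the real decorators also handle async functions; this is shown synchronous for brevity):

```python
import functools
import time

RECORDED_SPANS = []  # stand-in for the ADK's span exporter


def trace_span(name: str, span_type: str = "tool"):
    """Minimal illustration of what a trace decorator does under the hood."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic_ns()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                # Record the span whether the call succeeded or raised
                RECORDED_SPANS.append({
                    "name": name,
                    "type": span_type,
                    "duration_ns": time.monotonic_ns() - start,
                    "status": status,
                })
        return wrapper
    return decorator


@trace_span("calculate")
def calculate(x: int, y: int) -> int:
    return x + y
```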
- Templates/Examples: https://github.com/digitalocean/gradient-adk-templates
- Gradient Platform: https://www.digitalocean.com/products/gradient/platform
- Documentation: https://docs.digitalocean.com/products/gradient-ai-platform/
- API Reference: https://docs.digitalocean.com/reference/api
- Community: DigitalOcean Community Forums
Licensed under the Apache License 2.0. See the LICENSE file for details.