213 changes: 110 additions & 103 deletions public/sitemap.xml

Large diffs are not rendered by default.

54 changes: 54 additions & 0 deletions src/app/docs/kagent/examples/a2a-byo/page.mdx
@@ -240,3 +240,57 @@ You can use the A2A host CLI to invoke the agent. This CLI is part of the [A2A s
"taskId": "59d2b071-04e9-4fef-a0dd-e925dd13cceb"
}
```

## More options

Review more options that you might want to configure for your BYO ADK agents.

### Lifespan hooks

Lifespan hooks run initialization and cleanup code when your BYO agent starts up or shuts down. This feature is useful for tasks like connecting to external services, loading configuration, or cleaning up resources.

To use lifespan hooks in your ADK agent, create a lifespan function and pass it to `KAgentApp`.

1. Create a lifespan function in your agent code, such as the following Python example. The lifespan function is an async context manager that runs your startup and shutdown code.

- **Startup code** (before `yield`): Executes when the agent container starts.

- **Shutdown code** (after `yield`): Executes when the agent container stops.

```python
# basic/lifespan.py
import logging
from contextlib import asynccontextmanager
from typing import Any

@asynccontextmanager
async def lifespan(app: Any):
    # Startup: runs when the agent starts
    logging.info("Lifespan: setup - initializing resources")
    # Perform initialization tasks here
    # For example: connect to databases, load configuration, etc.

    try:
        yield  # Agent is running
    finally:
        # Shutdown: runs when the agent stops
        logging.info("Lifespan: teardown - cleaning up resources")
        # Perform cleanup tasks here
        # For example: close connections, save state, etc.
```

2. Import and pass the lifespan to `KAgentApp`.

```python
# main.py or similar
import os  # needed for os.getenv below

from kagent.adk import KAgentApp
from basic import agent, lifespan  # Import your agent and lifespan

app = KAgentApp(
    root_agent=agent.root_agent,
    agent_card=agent.agent_card,
    kagent_url=os.getenv("KAGENT_URL", "http://kagent-controller.kagent.svc.cluster.local:8083"),
    app_name="basic_agent",
    lifespan=lifespan.lifespan,  # Pass the lifespan function
)
```
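
As a concrete illustration, the following sketch manages a shared HTTP client in the lifespan. This is a hedged example: `httpx` is an assumed dependency, not part of the kagent API.

```python
# basic/lifespan.py - a sketch, assuming httpx is installed
import logging
from contextlib import asynccontextmanager
from typing import Any

import httpx

@asynccontextmanager
async def lifespan(app: Any):
    # Startup: create a shared HTTP client for the agent's tools
    client = httpx.AsyncClient(timeout=30.0)
    logging.info("Lifespan: HTTP client created")
    try:
        yield  # Agent is running
    finally:
        # Shutdown: close the client and release its connection pool
        await client.aclose()
        logging.info("Lifespan: HTTP client closed")
```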
8 changes: 8 additions & 0 deletions src/app/docs/kagent/examples/crewai-byo/page.mdx
@@ -45,6 +45,14 @@ The following example builds a research crew agent from the [kagent code reposit
A quickstart and detailed guide for adapting existing CrewAI crews and flows to work with kagent is available in the [package's README](https://github.com/kagent-dev/kagent/tree/main/python/packages/kagent-crewai).
This provides a simple way to set up an A2A server, tracing, and session-aware memory and state persistence.

The `kagent-crewai` package is published on PyPI.

To install it in your CrewAI project:

```bash
pip install kagent-crewai
```

Two complete examples are available in the `python/samples/crewai/` directory:

- [**Crew Example**](https://github.com/kagent-dev/kagent/tree/main/python/samples/crewai/research-crew): A multi-agent crew for web research and analysis
48 changes: 47 additions & 1 deletion src/app/docs/kagent/getting-started/local-development/page.mdx
@@ -10,7 +10,7 @@ export const metadata = {
author: "kagent.dev"
};

# Local developmeent with kagent CLI
# Local development with kagent CLI

In this guide, you'll learn how to develop, build and run an AI agent locally using kagent CLI, without a Kubernetes cluster. This guide is meant for developers familiar with Python. You can also create declarative agents without writing a single line of code, by following the [Your First Agent](/docs/kagent/getting-started/first-agent) guide.

@@ -263,6 +263,52 @@ mcpserver.kagent.dev/server-everything True 112s

You can now test the deployed agent and the MCP server through kagent UI.

## Using PostgreSQL for local development

By default, kagent uses SQLite for local development. For most local workflows, SQLite is sufficient and simpler to set up. However, you can also use PostgreSQL, which is useful if you want to test against the same database backend that you use in production.

The following steps show how to set up PostgreSQL with `docker-compose`.

1. Add PostgreSQL service to your `docker-compose.yaml`.

```yaml
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: kagent
      POSTGRES_PASSWORD: kagent
      POSTGRES_DB: kagent
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```

2. Configure your agent to use PostgreSQL by setting the database URL environment variable.

* Using the `export` command:

```bash
export DATABASE_URL=postgres://kagent:kagent@localhost:5432/kagent
```

* Adding it to your `.env` file:

```bash
echo "DATABASE_URL=postgres://kagent:kagent@localhost:5432/kagent" >> .env
```

3. Start PostgreSQL and your agent.

```bash
docker-compose up -d postgres
kagent run
```
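
If the agent fails to connect, you can confirm that PostgreSQL is ready to accept connections. A quick check, assuming the service name `postgres` from the compose file above:

```bash
# pg_isready ships in the postgres image and reports connection status
docker-compose exec postgres pg_isready -U kagent
```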

## Troubleshooting

If you run into any issues, start by checking the logs from the agent and MCP server containers. List the running containers with the `docker ps` command, and then view the logs of an individual container with the `docker logs [container_id]` command.
47 changes: 46 additions & 1 deletion src/app/docs/kagent/introduction/installation/page.mdx
@@ -151,7 +151,11 @@ Another way to install kagent is using Helm.

Review the following advanced configuration options that you might want to set up for your kagent installation.

### Configure the controller service name
### Configure controller environment variables

You can configure the controller by using environment variables for settings such as the controller service name and connection details.

#### Configure the controller service name

By default, kagent uses `kagent-controller` as the controller service name when constructing URLs for agent deployments. If you need to customize this name, set the `KAGENT_CONTROLLER_NAME` environment variable on the controller pod.

@@ -173,6 +177,47 @@ controller:
value: my-kagent
```
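
To confirm that the variable is set, you can inspect the rendered deployment; the deployment name `kagent-controller` is an assumption based on the default service name:

```bash
kubectl get deployment kagent-controller -n kagent \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```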

#### More environment variables

You can add custom environment variables to the controller by using the `controller.env` field.

**Helm `--set` flag:**

```bash
helm install kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
--set controller.env[0].name=KAGENT_CONTROLLER_NAME \
--set controller.env[0].value=my-kagent \
--set controller.env[1].name=LOG_LEVEL \
--set controller.env[1].value=debug
```

**Helm values file:**

```yaml
controller:
  env:
    - name: KAGENT_CONTROLLER_NAME
      value: my-kagent
    - name: LOG_LEVEL
      value: debug
    - name: CUSTOM_VAR
      value: custom-value
```

#### Using secrets for environment variables

You can also reference Kubernetes secrets for environment variables by using the `envFrom` field in Helm.

```yaml
controller:
  envFrom:
    - secretRef:
        name: controller-secrets
```

This example loads all key-value pairs from the `controller-secrets` secret as environment variables in the controller pod.
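
For reference, such a secret can be created with `kubectl`; the keys shown are illustrative:

```bash
kubectl create secret generic controller-secrets \
  --namespace kagent \
  --from-literal=LOG_LEVEL=debug \
  --from-literal=CUSTOM_VAR=custom-value
```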

## Uninstallation

Refer to the [Uninstall](/docs/kagent/operations/uninstall) guide.
@@ -0,0 +1,137 @@
@@ -0,0 +1,137 @@
---
title: "Operational considerations"
pageOrder: 1
description: "Important operational considerations when running kagent in production."
---

export const metadata = {
title: "Operational Considerations",
description: "Important operational considerations when running kagent in production.",
author: "kagent.dev"
};

# Operational considerations

Review the following operational considerations when running kagent in production environments, including database configuration, high availability, and secret management.

## Automatic agent restart on secret updates

Kagent automatically restarts agents when you update the secrets that the agents reference. This restart ensures that agents pick up new API keys, TLS certificates, and other secret values without manual intervention.

The following secret updates trigger automatic agent restarts:

- **API keys**: Secrets referenced in `ModelConfig` resources (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
- **TLS certificates**: Secrets referenced in `ModelConfig` TLS configuration (e.g., CA certificates)
- **Environment variables**: Any secrets referenced via `secretKeyRef` in agent deployment specifications
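
As an illustration, a `secretKeyRef` in an agent deployment spec looks like the following standard Kubernetes snippet; the secret and key names are placeholders:

```yaml
env:
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: openai-credentials # updating this secret triggers an agent restart
        key: OPENAI_API_KEY
```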

## Leader election when controller is scaled

When you scale the kagent controller to multiple replicas for high availability, leader election is automatically enabled. This ensures that only one controller instance actively reconciles resources at a time, preventing conflicts and duplicate operations.

### Leader election scenarios

- **Single replica**: No leader election needed; the single controller instance handles all operations
- **Multiple replicas**: Leader election is automatically enabled when `controller.replicas > 1`
- **Active leader**: Only the elected leader performs reconciliation operations
- **Standby replicas**: Other replicas remain ready but do not perform reconciliation until they become the leader
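
To observe the election, you can list the coordination leases in the kagent namespace; the `HOLDER` column shows the current leader. The exact lease name depends on your installation:

```bash
kubectl get leases -n kagent
```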

### Enable high availability

You can set the number of controller replicas to enable high availability.

**Helm `--set` flag:**

```bash
helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
--set controller.replicas=3
```

**Helm values file:**

```yaml
controller:
  replicas: 3
```

### More considerations for HA

- **Database requirement**: When using multiple controller replicas, you must use PostgreSQL as the database backend. SQLite cannot be used with multiple replicas (see [SQLite database scaling limitation](#sqlite-database-scaling-limitation)).
- **Leader election**: Leader election uses Kubernetes leases and is handled automatically.
- **Failover**: If the leader fails, another replica automatically becomes the leader.

## SQLite database scaling limitation

SQLite is the default database for kagent and works well for single-replica controller deployments. However, SQLite cannot be used when scaling the controller to multiple replicas.

SQLite is a file-based database that does not support concurrent writes from multiple processes. When you scale the controller to multiple replicas, each replica would try to access the same SQLite database file, causing conflicts and potential data corruption.

If you try to scale the controller with SQLite enabled, you see an error during the Helm installation or upgrade:

```bash
Error: cannot scale controller with SQLite database
```

### Use PostgreSQL for scaling

To scale the controller to multiple replicas, you must configure PostgreSQL as the database backend. You can enable PostgreSQL by using the Helm `--set` flag or a values file.

**Helm `--set` flag:**

```bash
helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
--set database.type=postgres \
--set database.postgres.url=postgres://user:password@postgres-host:5432/kagent \
--set controller.replicas=3
```

**Helm values file:**

```yaml
database:
  type: postgres
  postgres:
    url: postgres://user:password@postgres-host:5432/kagent

controller:
  replicas: 3
```

### Migrate from SQLite to PostgreSQL

If you're currently using SQLite and want to scale the controller, consider the following steps.

1. Back up your data as needed. For example, dump the SQLite database contents to a backup file that you can restore later (see the sketch after these steps).

```bash
kubectl exec -n kagent deployment/kagent-controller -c controller -- \
sqlite3 /var/lib/kagent/kagent.db .dump > backup.sql
```

2. Set up PostgreSQL in one of the following ways:
- Install PostgreSQL in your cluster.
- Use a managed PostgreSQL service.

3. Update your Helm values to use PostgreSQL.

```yaml
database:
  type: postgres
  postgres:
    url: postgres://user:password@postgres-host:5432/kagent
```

4. Upgrade the Helm release.

```bash
helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
-f values.yaml
```

5. Scale the controller.

```bash
helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
-f values.yaml \
--set controller.replicas=3
```
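
After the upgrade, you can restore the backup and confirm the rollout. A hedged sketch — the connection string matches the values example above, and SQLite dumps often need minor syntax adjustments before `psql` accepts them:

```bash
# Restore the SQLite dump into PostgreSQL (fix any SQLite-specific
# syntax in backup.sql first if psql reports errors)
psql postgres://user:password@postgres-host:5432/kagent < backup.sql

# Confirm that all controller replicas are running
kubectl get pods -n kagent
```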
1 change: 1 addition & 0 deletions src/app/docs/kagent/operations/page.mdx
@@ -19,6 +19,7 @@ import QuickLink from '@/components/quick-link';
</div>

<div className="grid grid-cols-1 md:grid-cols-2 gap-6 mb-12">
<QuickLink title="Operational considerations" description="Important operational considerations when running kagent in production." href="/docs/kagent/operations/operational-considerations" />
<QuickLink title="Debug kagent" description="Troubleshoot and debug issues with your kagent installation." href="/docs/kagent/operations/debug" />
<QuickLink title="Upgrade kagent" description="Keep your kagent installation up to date with the latest features and bug fixes." href="/docs/kagent/operations/upgrade" />
<QuickLink title="Uninstall kagent" description="Remove kagent from your cluster when you no longer need it." href="/docs/kagent/operations/uninstall" />
19 changes: 19 additions & 0 deletions src/app/docs/kagent/resources/cli/kagent-run/page.mdx
@@ -16,6 +16,7 @@ kagent run [project-directory] [flags]
- `project-directory` - The directory containing the agent project (default: current directory)

**Flags:**
- `--build` - Rebuild the Docker image before running
- `--project-dir` - Project directory (default: current directory)

**Global Flags:**
@@ -29,6 +30,16 @@

The `kagent run` command runs an agent project locally using `docker-compose` and launches an interactive chat session. This way, you can test and interact with your agent before deploying it to a Kubernetes cluster.

### Rebuilding before running

Use the `--build` flag to rebuild the Docker image before running the agent. This is useful when you've made changes to your agent code and want to test the updated version without manually running `kagent build` first.

```bash
kagent run --build
```

This command rebuilds the agent image and then starts the interactive chat interface. It's equivalent to running `kagent build` followed by `kagent run`.
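
For reference, the equivalent two-step workflow:

```bash
kagent build
kagent run
```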

## Example

Run an agent project from the current directory:
@@ -43,3 +54,11 @@ Run an agent project from a specific directory:
kagent run ./my-agent
```

Rebuild the image and run the agent:

```bash
kagent run --build
```

This is useful after making code changes to ensure you're testing the latest version of your agent.

5 changes: 5 additions & 0 deletions src/config/navigation.json
@@ -195,6 +195,11 @@
"href": "/docs/kagent/operations/debug",
"description": "Find solutions to common issues and troubleshooting tips for kagent."
},
{
"title": "Operational considerations",
"href": "/docs/kagent/operations/operational-considerations",
"description": "Important operational considerations when running kagent in production."
},
{
"title": "Upgrade kagent",
"href": "/docs/kagent/operations/upgrade",