26 changes: 26 additions & 0 deletions agent/Cargo.toml
@@ -0,0 +1,26 @@
[workspace]

[package]
name = "iii-agent"
version = "0.1.0"
edition = "2021"
publish = false

[[bin]]
name = "iii-agent"
path = "src/main.rs"

[dependencies]
iii-sdk = { version = "0.11.0-next.9", features = ["otel"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros", "sync", "signal"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_yaml = "0.9"
anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
clap = { version = "4", features = ["derive", "env"] }
chrono = { version = "0.4", features = ["serde"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "stream"] }
futures-util = "0.3"
uuid = { version = "1", features = ["v4"] }
66 changes: 66 additions & 0 deletions agent/README.md
@@ -0,0 +1,66 @@
# iii-agent

Linear, PostHog, Attio — they all shipped the same thing: a chat bar as the primary interface. iii-agent brings this to the iii console. It dynamically discovers every function registered by every connected worker, lets users ask questions in natural language, and the LLM decides which functions to call. "What's slow in my system?" triggers `eval::analyze_traces`. "Show me the topology" triggers `introspect::diagram`. The agent composes the answer from real data, not hallucinations.

**Plug and play:** Build with `cargo build --release`, set `ANTHROPIC_API_KEY` in your environment, then run `./target/release/iii-agent --url ws://your-engine:49134`. It registers 7 functions, discovers all available tools from other workers, and starts accepting chat via `agent::chat`. Connect more workers and they're automatically available — no restart needed.

## Functions

| Function ID | Description |
|---|---|
| `agent::chat` | Send a message and get a structured JSON-UI response |
| `agent::chat_stream` | Send a message with streaming response via iii Streams |
| `agent::discover` | List all available functions the agent can orchestrate |
| `agent::plan` | Generate an execution plan DAG without executing |
| `agent::session_create` | Create a new chat session |
| `agent::session_history` | Retrieve conversation history for a session |
| `agent::session_cleanup` | Clean up expired sessions (cron-triggered) |

## iii Primitives Used

- **State** -- session history, cached tool definitions
- **Streams** -- streaming chat responses via `agent:events:{session_id}` group
- **Cron** -- hourly session cleanup
- **HTTP** -- chat, discovery, planning, and session management endpoints

## Prerequisites

- Rust 1.75+
- Running iii engine on `ws://127.0.0.1:49134`
- `ANTHROPIC_API_KEY` environment variable set

## Build

```bash
cargo build --release
```

## Usage

```bash
ANTHROPIC_API_KEY=sk-ant-... ./target/release/iii-agent --url ws://127.0.0.1:49134 --config ./config.yaml
```
Comment on lines +40 to +42

⚠️ Potential issue | 🟡 Minor

Avoid documenting inline secret usage in command examples.

Line 41 encourages a pattern users often paste with real keys. Prefer a separate export/read step to reduce accidental secret exposure.

🔐 Suggested docs change

```diff
-ANTHROPIC_API_KEY=sk-ant-... ./target/release/iii-agent --url ws://127.0.0.1:49134 --config ./config.yaml
+export ANTHROPIC_API_KEY="<set-from-your-secret-manager>"
+./target/release/iii-agent --url ws://127.0.0.1:49134 --config ./config.yaml
```

```
Options:
--config <PATH> Path to config.yaml [default: ./config.yaml]
--url <URL> WebSocket URL of the iii engine [default: ws://127.0.0.1:49134]
--manifest Output module manifest as JSON and exit
-h, --help Print help
```

## Configuration

```yaml
anthropic_model: "claude-sonnet-4-20250514" # model to use for chat
max_tokens: 4096 # max tokens per LLM response
max_iterations: 10 # max tool-use loops per message
session_ttl_hours: 24 # session expiry
cron_session_cleanup: "0 0 * * * *" # hourly cleanup schedule
```

## Tests

```bash
cargo test
```
109 changes: 109 additions & 0 deletions agent/SPEC.md
@@ -0,0 +1,109 @@
# iii-agent Worker Specification

## Overview

The iii-agent is the chat orchestrator for the iii console. It dynamically discovers functions from connected workers and orchestrates them via LLM (Anthropic Claude) to answer user questions.

## Functions

| Function ID | Description |
|---|---|
| `agent::chat` | Send a message and get a structured JSON-UI response |
| `agent::chat_stream` | Send a message with streaming response via iii Streams |
| `agent::discover` | List all available functions the agent can orchestrate |
| `agent::plan` | Generate an execution plan DAG without executing |
| `agent::session_create` | Create a new chat session |
| `agent::session_history` | Retrieve conversation history for a session |
| `agent::session_cleanup` | Clean up expired sessions (cron-triggered) |

## State Scopes

| Scope | Key | Value |
|---|---|---|
| `agent:sessions` | `{session_id}` | Conversation history (messages array) |
| `agent:tools` | `cached` | Cached tool definitions from last discovery |

## Triggers

| Type | Config | Function |
|---|---|---|
| `http` POST | `agent/chat` | `agent::chat` |
| `http` POST | `agent/chat/stream` | `agent::chat_stream` |
| `http` GET | `agent/discover` | `agent::discover` |
| `http` POST | `agent/plan` | `agent::plan` |
| `http` POST | `agent/session` | `agent::session_create` |
| `http` POST | `agent/session/history` | `agent::session_history` |
| `cron` | `0 0 * * * *` | `agent::session_cleanup` |
| `engine::functions-available` | `{}` | Tool cache refresh |
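
For example, a request to the `agent/chat` HTTP trigger might carry a body like the following (the `session_id` and `message` field names are illustrative assumptions, not taken from this spec):

```json
{
  "session_id": "sess-123",
  "message": "What's slow in my system?"
}
```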

## Chat Flow

1. User sends message via `agent::chat` or `agent::chat_stream`
2. Agent discovers available tools via `iii.list_functions()`
3. Builds system prompt with capabilities summary
4. Loads conversation history from state
5. Sends message + tools to Anthropic Claude API
6. If LLM returns text -> done, return JSON-UI response
7. If LLM returns tool_use -> call `iii.trigger(function_id, payload)`
8. Feed tool result back to LLM as tool_result message
9. Repeat (max 10 iterations)
10. Save conversation to state
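
The loop in steps 5-9 can be sketched as below. `LlmReply`, `call_llm`, and `run_tool` are illustrative stand-ins, not the actual Anthropic client or iii SDK API; the stub LLM asks for one tool call and then answers.

```rust
// Illustrative stand-in for the two LLM reply shapes described above.
enum LlmReply {
    Text(String),                            // step 6: final answer
    ToolUse { name: String, input: String }, // step 7: function call request
}

// Stubbed LLM: requests a tool first, then answers once it sees a tool result.
// The real agent sends message + tools to the Anthropic Claude API here.
fn call_llm(history: &[String]) -> LlmReply {
    if history.iter().any(|m| m.starts_with("tool_result:")) {
        LlmReply::Text("p99 latency is dominated by eval::analyze_traces".into())
    } else {
        LlmReply::ToolUse { name: "eval::analyze_traces".into(), input: "{}".into() }
    }
}

// Stubbed tool execution; the real agent calls iii.trigger(function_id, payload).
fn run_tool(name: &str, _input: &str) -> String {
    format!("result of {name}")
}

// One chat turn: loop LLM -> tool -> LLM until a text answer or the budget runs out.
fn chat_turn(user_msg: &str, max_iterations: u32) -> Option<String> {
    let mut history = vec![format!("user:{user_msg}")];
    for _ in 0..max_iterations {
        match call_llm(&history) {
            LlmReply::Text(answer) => return Some(answer), // step 6
            LlmReply::ToolUse { name, input } => {
                let result = run_tool(&name, &input);          // step 7
                history.push(format!("tool_result:{result}")); // step 8
            }
        }
    }
    None // step 9: iteration budget exhausted
}
```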

## JSON-UI Response Format

```json
{
"elements": [
{"type": "text", "content": "..."},
{"type": "chart", "chart_type": "bar", "data": [...]},
{"type": "table", "headers": [...], "rows": [...]},
{"type": "diagram", "format": "mermaid", "content": "..."},
{"type": "action", "label": "...", "function_id": "...", "payload": {...}}
],
"usage": {
"input_tokens": 1234,
"output_tokens": 567
}
}
```

## Streaming Events

Stream group: `agent:events:{session_id}`

| Event Type | Fields |
|---|---|
| `text_delta` | `text` |
| `tool_use` | `name`, `input` |
| `tool_result` | `name`, `result` |
| `error` | `message` |
| `done` | (empty) |
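
The event shapes in the table map naturally onto a Rust enum; this is a sketch with a hand-rolled type discriminator (the real worker presumably serializes these with serde):

```rust
// Events published to the agent:events:{session_id} stream group, per the table.
enum StreamEvent {
    TextDelta { text: String },
    ToolUse { name: String, input: String },
    ToolResult { name: String, result: String },
    Error { message: String },
    Done,
}

impl StreamEvent {
    // Event type tag as it would appear on the wire.
    fn event_type(&self) -> &'static str {
        match self {
            StreamEvent::TextDelta { .. } => "text_delta",
            StreamEvent::ToolUse { .. } => "tool_use",
            StreamEvent::ToolResult { .. } => "tool_result",
            StreamEvent::Error { .. } => "error",
            StreamEvent::Done => "done",
        }
    }
}
```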

## Configuration

```yaml
anthropic_model: "claude-sonnet-4-20250514"
max_tokens: 4096
max_iterations: 10
session_ttl_hours: 24
cron_session_cleanup: "0 0 * * * *"
```
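
A sketch of how `session_ttl_hours` could drive the cron cleanup decision, using only the standard library (the helper name and timestamp handling are assumptions; the worker itself depends on chrono):

```rust
use std::time::{Duration, SystemTime};

// True when a session's last activity is older than the configured TTL.
fn session_expired(last_active: SystemTime, now: SystemTime, ttl_hours: u64) -> bool {
    let ttl = Duration::from_secs(ttl_hours * 3600);
    match now.duration_since(last_active) {
        Ok(age) => age > ttl,
        Err(_) => false, // last_active is in the future: treat as fresh
    }
}
```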

## Environment Variables

- `ANTHROPIC_API_KEY` - Required. Anthropic API key for Claude access.
- `RUST_LOG` - Optional. Log level filter (default: `info`).

## Discovery Filter

The agent excludes these function prefixes from LLM tool building:
- `agent::*` - prevents self-invocation loops
- `state::*` - internal state operations
- `stream::*` - internal stream operations
- `engine::*` - internal engine operations
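
The prefix filter above amounts to a simple string check; a minimal sketch (the actual filtering code in the worker may be structured differently):

```rust
// Function ID prefixes excluded from LLM tool building, per the rules above.
const INTERNAL_PREFIXES: [&str; 4] = ["agent::", "state::", "stream::", "engine::"];

// True when a function ID should be hidden from the LLM.
fn is_internal(function_id: &str) -> bool {
    INTERNAL_PREFIXES.iter().any(|p| function_id.starts_with(p))
}
```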

## Running

```bash
ANTHROPIC_API_KEY=sk-ant-... cargo run --release -- --url ws://127.0.0.1:49134
```
6 changes: 6 additions & 0 deletions agent/build.rs
@@ -0,0 +1,6 @@
fn main() {
    println!(
        "cargo:rustc-env=TARGET={}",
        std::env::var("TARGET").unwrap()
    );
}
5 changes: 5 additions & 0 deletions agent/config.yaml
@@ -0,0 +1,5 @@
anthropic_model: "claude-sonnet-4-20250514"
max_tokens: 4096
max_iterations: 10
session_ttl_hours: 24
cron_session_cleanup: "0 0 * * * *"
90 changes: 90 additions & 0 deletions agent/src/config.rs
@@ -0,0 +1,90 @@
use anyhow::Result;
use serde::Deserialize;

#[derive(Deserialize, Debug, Clone)]
pub struct AgentConfig {
    #[serde(default = "default_model")]
    pub anthropic_model: String,
    #[serde(default = "default_max_tokens")]
    pub max_tokens: u32,
    #[serde(default = "default_max_iterations")]
    pub max_iterations: u32,
    #[serde(default = "default_session_ttl_hours")]
    #[allow(dead_code)]
    pub session_ttl_hours: u64,
    #[serde(default = "default_cron_session_cleanup")]
    pub cron_session_cleanup: String,
}

fn default_model() -> String {
    "claude-haiku-4-5-20251001".to_string()
}

fn default_max_tokens() -> u32 {
    4096
}

fn default_max_iterations() -> u32 {
    10
}

fn default_session_ttl_hours() -> u64 {
    24
}

fn default_cron_session_cleanup() -> String {
    "0 0 * * * *".to_string()
}

impl Default for AgentConfig {
    fn default() -> Self {
        AgentConfig {
            anthropic_model: default_model(),
            max_tokens: default_max_tokens(),
            max_iterations: default_max_iterations(),
            session_ttl_hours: default_session_ttl_hours(),
            cron_session_cleanup: default_cron_session_cleanup(),
        }
    }
}

pub fn load_config(path: &str) -> Result<AgentConfig> {
    let contents = std::fs::read_to_string(path)?;
    let config: AgentConfig = serde_yaml::from_str(&contents)?;
    Ok(config)
}
Comment on lines +51 to +55

⚠️ Potential issue | 🟠 Major

Validate semantic bounds after deserialization.

`load_config` accepts structurally valid YAML but does not guard runtime-invalid values (e.g., `max_tokens == 0`, `max_iterations == 0`, empty cron string). This can break behavior without a clear startup failure.

🛠️ Proposed fix

```diff
-use anyhow::Result;
+use anyhow::{bail, Result};
@@
 pub fn load_config(path: &str) -> Result<AgentConfig> {
     let contents = std::fs::read_to_string(path)?;
     let config: AgentConfig = serde_yaml::from_str(&contents)?;
+    if config.max_tokens == 0 {
+        bail!("max_tokens must be > 0");
+    }
+    if config.max_iterations == 0 {
+        bail!("max_iterations must be > 0");
+    }
+    if config.session_ttl_hours == 0 {
+        bail!("session_ttl_hours must be > 0");
+    }
+    if config.cron_session_cleanup.trim().is_empty() {
+        bail!("cron_session_cleanup must not be empty");
+    }
     Ok(config)
 }
```

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_config_defaults() {
        let config = AgentConfig::default();
        assert_eq!(config.anthropic_model, "claude-haiku-4-5-20251001");
        assert_eq!(config.max_tokens, 4096);
        assert_eq!(config.max_iterations, 10);
        assert_eq!(config.session_ttl_hours, 24);
    }

    #[test]
    fn test_config_from_yaml() {
        let yaml = r#"
anthropic_model: "claude-sonnet-4-20250514"
max_tokens: 8192
max_iterations: 5
"#;
        let config: AgentConfig = serde_yaml::from_str(yaml).unwrap();
        assert_eq!(config.anthropic_model, "claude-sonnet-4-20250514");
        assert_eq!(config.max_tokens, 8192);
        assert_eq!(config.max_iterations, 5);
        assert_eq!(config.session_ttl_hours, 24);
    }

    #[test]
    fn test_config_empty_yaml() {
        let config: AgentConfig = serde_yaml::from_str("{}").unwrap();
        assert_eq!(config.anthropic_model, "claude-haiku-4-5-20251001");
        assert_eq!(config.max_tokens, 4096);
    }
}