File: `content/signoz/docs/llm-observability/python/DOC.md` (new file, 191 additions)
---
name: package
description: "Monitor LLM API calls (OpenAI, Anthropic, LangChain, LiteLLM) with SigNoz using OpenTelemetry and OpenInference instrumentation"
metadata:
languages: "python"
versions: "1.27.0"
revision: 1
updated-on: "2026-03-19"
source: community
tags: "signoz,llm,openai,anthropic,langchain,litellm,observability,opentelemetry,openinference,ai"
---

# SigNoz LLM Observability (Python)

## Golden Rule

SigNoz monitors LLM API calls through OpenTelemetry instrumentation. For OpenAI use `opentelemetry-instrumentation-openai-v2`. For Anthropic use `openinference-instrumentation-anthropic`. For LangChain and LiteLLM use OpenInference instrumentors. All integrations export spans, logs, and metrics to SigNoz over OTLP. Use auto-instrumentation (no code changes) as the default.

Enabling `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` captures prompts and completions—exercise caution in production due to sensitive data exposure.
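
The same opt-in discipline can be applied in application code before attaching prompt text to telemetry. This is an illustrative sketch, not SigNoz or instrumentor internals; `content_attribute` is a hypothetical helper:

```python
import os

# Only expose prompt text when the capture flag is explicitly "true";
# anything else (unset, "false", typos) falls back to redaction.
CAPTURE_CONTENT = (
    os.environ.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "false")
    .lower()
    == "true"
)

def content_attribute(prompt: str) -> str:
    """Return the prompt for a span attribute, or a redaction marker."""
    return prompt if CAPTURE_CONTENT else "<redacted>"
```

This mirrors the instrumentation default: content stays out of telemetry unless someone deliberately turns it on.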

## When To Use

- Tracing token usage, latency, and error rates for OpenAI, Anthropic, LangChain, or LiteLLM calls
- Correlating LLM spans with application traces in the same SigNoz service
- Monitoring agent reasoning steps, tool invocations, and chain executions in LangChain/LangGraph
- Tracking LiteLLM proxy or SDK traffic across multiple LLM providers

## Install

### OpenAI

```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp opentelemetry-instrumentation-openai-v2
opentelemetry-bootstrap --action=install
```

### Anthropic

```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp openinference-instrumentation-anthropic
opentelemetry-bootstrap --action=install
```

### LangChain / LangGraph

```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp \
  opentelemetry-instrumentation-httpx opentelemetry-instrumentation-system-metrics \
  langgraph langchain openinference-instrumentation-langchain
opentelemetry-bootstrap --action=install
```

### LiteLLM SDK

```bash
pip install opentelemetry-distro opentelemetry-exporter-otlp litellm
opentelemetry-bootstrap --action=install
```

## Authentication And Setup

### SigNoz Cloud

```bash
export OTEL_SERVICE_NAME="my-llm-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.us.signoz.cloud:443"
export OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_TRACES_EXPORTER=otlp
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_PYTHON_LOG_CORRELATION=true
export OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
```

### Self-Hosted SigNoz

Use the same variables, but set `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317` and omit `OTEL_EXPORTER_OTLP_HEADERS` (self-hosted SigNoz needs no ingestion key).

## Core Usage

### OpenAI (auto-instrumentation)

```bash
OTEL_SERVICE_NAME=openai-app \
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.us.signoz.cloud:443 \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true \
opentelemetry-instrument python app.py
```

No code changes needed. Each `openai.chat.completions.create()` call gets a span with model, token counts, finish reason, and optionally message content.

### Anthropic (auto-instrumentation)

Same pattern with `openinference-instrumentation-anthropic` installed:

```bash
opentelemetry-instrument python app.py
```

### LangChain (auto-instrumentation)

Set root logger level before running:

```python
import logging
logging.getLogger().setLevel(logging.INFO)
```

Then:

```bash
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
opentelemetry-instrument python app.py
```

### LangChain (code-based setup)

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from openinference.instrumentation.langchain import LangChainInstrumentor

resource = Resource.create({"service.name": "langchain-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://ingest.us.signoz.cloud:443/v1/traces",
            headers={"signoz-ingestion-key": "<your-ingestion-key>"},
        )
    )
)

# Instrument before any LangChain/LangGraph imports or calls, and pass the
# provider explicitly so spans go to the exporter configured above.
LangChainInstrumentor().instrument(tracer_provider=provider)
```

### LiteLLM SDK (one-line activation)

```python
import litellm
litellm.callbacks = ["otel"]
```

Then run with `opentelemetry-instrument` and the standard env vars.

### LiteLLM Proxy Server (config.yaml)

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o

general_settings:
  callbacks: ["otel"]
```

Set `OTEL_EXPORTER_OTLP_ENDPOINT`, `OTEL_EXPORTER_OTLP_HEADERS`, and `OTEL_EXPORTER_OTLP_PROTOCOL` before starting the proxy.

## What Gets Captured

| Signal | Data |
|---|---|
| Traces | Span per LLM call: model, token usage (input/output/total), finish reason, latency, errors |
| Logs (when enabled) | Structured log per call with message content, log level INFO/ERROR |
| Metrics | Duration, token count, request rate, error rate — OTel GenAI semantic conventions |
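
The attribute names on those spans follow the OTel GenAI semantic conventions. A sketch of what a single LLM-call span might carry (attribute names from the semconv; the values here are made up for illustration):

```python
# Illustrative span attributes per the OTel GenAI semantic conventions;
# values are hypothetical, not real telemetry.
span_attributes = {
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 128,
    "gen_ai.response.finish_reasons": ["stop"],
}

# Total token usage, the kind of aggregation the metrics signal reports.
total_tokens = (
    span_attributes["gen_ai.usage.input_tokens"]
    + span_attributes["gen_ai.usage.output_tokens"]
)
```

Because every instrumentor emits these same attribute names, dashboards and alerts built on them work across OpenAI, Anthropic, LangChain, and LiteLLM traffic.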

## Common Pitfalls

- **`LangChainInstrumentor().instrument()` must run first**: call before importing or using any LangChain/LangGraph code. Late instrumentation misses spans.
- **Message content is opt-in**: `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` is off by default. Review PII implications before enabling in production.
- **LiteLLM `callbacks = ["otel"]` requires env vars**: the callback alone does nothing without the OTLP endpoint configured.
- **OpenAI and Anthropic use different packages**: `opentelemetry-instrumentation-openai-v2` (official) vs `openinference-instrumentation-anthropic` (OpenInference). Do not mix them up.
- **Run `opentelemetry-bootstrap` after all deps are installed**.
- **Validate locally first**: `OTEL_TRACES_EXPORTER=console opentelemetry-instrument python app.py` prints spans to stderr.
- **Pre-built dashboards** are available in SigNoz for OpenAI, Anthropic, LiteLLM, and LangChain.

## Official Sources

- SigNoz OpenAI monitoring: https://signoz.io/docs/openai-monitoring/
- SigNoz Anthropic monitoring: https://signoz.io/docs/anthropic-monitoring/
- SigNoz LangChain observability: https://signoz.io/docs/langchain-observability/
- SigNoz LiteLLM observability: https://signoz.io/docs/litellm-observability/
- OpenInference GitHub: https://github.com/Arize-ai/openinference
- OTel GenAI semantic conventions: https://opentelemetry.io/docs/specs/semconv/gen-ai/
File: `content/signoz/docs/mcp/general/DOC.md` (new file, 191 additions)
---
name: package
description: "Set up the SigNoz MCP Server and Agent Skills to let AI assistants query observability data (traces, logs, metrics, alerts, dashboards) through natural language"
metadata:
languages: "general"
versions: "0.1.0"
revision: 1
updated-on: "2026-03-19"
source: community
tags: "signoz,mcp,model-context-protocol,ai,agent,claude,cursor,observability,skills"
---

# SigNoz MCP Server & Agent Skills

## Golden Rule

The SigNoz MCP Server exposes observability tools to AI assistants (Claude, Cursor, GitHub Copilot, Codex) through the Model Context Protocol. Agent Skills are lightweight Claude Code plugins that give AI agents domain-specific SigNoz knowledge without running a separate server. Use the MCP server when you need live data queries; use Agent Skills when you want auto-activated documentation and ClickHouse query assistance.

The MCP server is under active development—expect breaking changes between releases.

## When To Use

- **MCP Server**: Let your AI assistant search traces, query logs, list alerts, inspect dashboards, or call the SigNoz query builder using natural language.
- **Agent Skills**: Automatically surface SigNoz instrumentation docs or generate correct ClickHouse queries inside Claude Code without manual prompting.

## MCP Server: Install

### Build from source (Go)

```bash
git clone https://github.com/SigNoz/signoz-mcp-server.git
cd signoz-mcp-server
go build -o bin/signoz-mcp-server ./cmd/server/
```

Requirements: Go 1.25+. The `go build` command has no extra dependencies; `make build` requires `goimports`.

### Docker

```bash
git clone https://github.com/SigNoz/signoz-mcp-server.git
cd signoz-mcp-server
cat > .env <<'EOF'
SIGNOZ_URL=<your-signoz-url>
SIGNOZ_API_KEY=<your-api-key>
LOG_LEVEL=info
EOF
docker-compose up -d
```

Default HTTP port: 8000. Override with `MCP_SERVER_PORT=<port>`.

## MCP Server: API Key Setup

1. Log into your SigNoz instance
2. Navigate to **Settings → API Keys**
3. Click **Create API Key** and copy the value

Only users with the **Admin** role can create API keys. Never commit the key to version control—use environment variables or a secrets manager.
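
The "use environment variables" advice can be enforced with a fail-fast guard at startup. `require_env` is a hypothetical helper for illustration, not part of the MCP server:

```python
import os

def require_env(name: str) -> str:
    """Fail fast if a required secret is missing from the environment."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it or use a secrets manager")
    return value

# At startup: api_key = require_env("SIGNOZ_API_KEY") -- never hardcoded,
# never committed to version control.
```

Failing at startup beats a silent 401 later when the server first tries to reach SigNoz.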

## MCP Server: Client Configuration

### Claude Code (recommended: stdio)

```bash
# Global scope
claude mcp add --scope user signoz "<path-to-binary>/signoz-mcp-server" \
  -e SIGNOZ_URL="<your-signoz-url>" \
  -e SIGNOZ_API_KEY="<your-api-key>" \
  -e LOG_LEVEL=info

# Project scope (run from project root)
claude mcp add --scope project signoz "<path-to-binary>/signoz-mcp-server" \
  -e SIGNOZ_URL="<your-signoz-url>" \
  -e SIGNOZ_API_KEY="<your-api-key>" \
  -e LOG_LEVEL=info

# HTTP mode
claude mcp add --scope user --transport http signoz http://localhost:8000/mcp

# Manage
claude mcp list
claude mcp remove signoz
```

### Claude Desktop (~/.config/claude/claude_desktop_config.json)

```json
{
  "mcpServers": {
    "signoz": {
      "command": "<path-to-binary>/signoz-mcp-server",
      "args": [],
      "env": {
        "SIGNOZ_URL": "<your-signoz-url>",
        "SIGNOZ_API_KEY": "<your-api-key>",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

HTTP alternative: replace the body of the `"signoz"` entry with `{"url": "http://localhost:8000/mcp"}`.

### Cursor (.cursor/mcp.json)

Same JSON shape as Claude Desktop under `"mcpServers"`.

### GitHub Copilot (.vscode/mcp.json)

```json
{
  "servers": {
    "signoz": {
      "type": "stdio",
      "command": "<path-to-binary>/signoz-mcp-server",
      "args": [],
      "env": {
        "SIGNOZ_URL": "<your-signoz-url>",
        "SIGNOZ_API_KEY": "<your-api-key>",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

## MCP Server: Environment Variables

| Variable | Description | Default |
|---|---|---|
| `SIGNOZ_URL` | SigNoz instance URL | Required |
| `SIGNOZ_API_KEY` | Authentication key | Required (stdio mode) |
| `TRANSPORT_MODE` | `stdio` or `http` | `stdio` |
| `MCP_SERVER_PORT` | HTTP server port | `8000` |
| `LOG_LEVEL` | `debug`, `info`, `warn`, `error` | `info` |

HTTP server: `SIGNOZ_URL=<url> SIGNOZ_API_KEY=<key> TRANSPORT_MODE=http ./signoz-mcp-server`

## MCP Server: Available Tools

**Metrics**: List/search metric keys, get field values and available fields

**Alerts**: List alerts, get details, history, and related logs

**Logs**: Search error logs, search by service, aggregate logs, list/get saved views

**Dashboards**: List, get, create, and update dashboards

**Traces**: List services, top operations, search/aggregate traces, trace details, error analysis, span hierarchy

**Query**: General query builder for ad-hoc queries

Validate: ask your assistant *"List all alerts"* or *"Show me all available services"*.
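
Under the hood, MCP clients talk to the server with JSON-RPC 2.0; enumerating tools uses the standard MCP `tools/list` method (part of the MCP spec, not SigNoz-specific). A sketch of the request payload a client would send:

```python
import json

# Standard MCP JSON-RPC 2.0 request to enumerate the server's tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}
payload = json.dumps(request)
```

If the tool list comes back empty, check `SIGNOZ_URL`/`SIGNOZ_API_KEY` and bump `LOG_LEVEL` to `debug`.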

## Agent Skills: Install

```bash
# Via skills.sh
npx skills add SigNoz/agent-skills
npx skills add SigNoz/agent-skills --skill signoz-docs
npx skills add SigNoz/agent-skills --skill signoz-clickhouse-query

# Via Claude Code
/plugin marketplace add SigNoz/agent-skills
/reload-plugins
```

### Available Skills

| Skill | Purpose |
|---|---|
| `signoz-docs` | Guides AI agents to fetch and explore SigNoz docs for setup, config, and feature questions |
| `signoz-clickhouse-query` | Helps AI agents write and debug ClickHouse queries for SigNoz dashboards, alerts, and analysis |

## Common Pitfalls

- **Binary path must be absolute**: relative paths in JSON config fail silently. Use the full path to the compiled binary.
- **Restart AI client after config changes**: MCP config is read at startup; edits are not picked up without a restart.
- **HTTP header auth format**: `Bearer <your-api-key>` (with the `Bearer` prefix). Missing it causes 401 errors.
- **Admin-only API keys**: only Admin-role users can create keys. If key creation is grayed out, check your role.
- **Active development**: breaking changes between releases are expected. Pin to a specific commit or tag in production.
- **`LOG_LEVEL=debug`**: use this when tools are not appearing or returning unexpected errors to get detailed diagnostics.
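
The header-format pitfall above can be sketched in client code; `auth_headers` is a hypothetical helper, and the endpoint is the default HTTP-mode URL shown earlier:

```python
def auth_headers(api_key: str) -> dict:
    """Build the Authorization header the HTTP transport expects.

    The value must carry the `Bearer ` prefix; omitting it yields 401s.
    """
    return {"Authorization": f"Bearer {api_key}"}

# e.g. POST to http://localhost:8000/mcp with headers=auth_headers(api_key)
```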

## Official Sources

- SigNoz MCP Server docs: https://signoz.io/docs/ai/signoz-mcp-server/
- SigNoz Agent Skills docs: https://signoz.io/docs/ai/agent-skills/
- MCP Server GitHub: https://github.com/SigNoz/signoz-mcp-server
- Agent Skills GitHub: https://github.com/SigNoz/agent-skills