31 changes: 31 additions & 0 deletions examples/vtadmin/Makefile
@@ -0,0 +1,31 @@
# Makefile for VTAdmin demo cluster

.PHONY: up down restart logs clean reset

# Base command with all required files and environment variables
COMPOSE_CMD = docker compose -f ../compose/docker-compose.yml -f docker-compose.yml --env-file ../compose/template.env

# Start the cluster
up:
	$(COMPOSE_CMD) up -d --force-recreate

# Stop the cluster
down:
	$(COMPOSE_CMD) down --remove-orphans

# Restart running services
restart:
	$(COMPOSE_CMD) restart

# Perform a full reset (teardown, then fresh start; volumes are kept)
reset:
	$(COMPOSE_CMD) down --remove-orphans
	$(COMPOSE_CMD) up -d --force-recreate

# Stream logs from all services
logs:
	$(COMPOSE_CMD) logs -f

# Clean up all resources including volumes
clean:
	$(COMPOSE_CMD) down -v --remove-orphans
58 changes: 58 additions & 0 deletions examples/vtadmin/README.md
@@ -0,0 +1,58 @@
# VTAdmin Demo

This example provides a fully functional local Vitess cluster with **VTAdmin**, **Grafana**, and **Percona Monitoring and Management (PMM)** pre-configured and integrated.

It demonstrates how VTAdmin can serve as a single pane of glass for your database infrastructure, providing context-aware deep links to your monitoring dashboards.

## Quick Start

1. **Start the cluster**:
```bash
cd examples/vtadmin
make up
```
2. **Access VTAdmin**:
Open **http://localhost:5173** in your browser.

## Features

- **Unified Interface**: View cluster topology, tablet health, and vtgates.
- **Integrated Monitoring**:
- **Vitess Metrics**: Deep links to Grafana dashboards for clusters, tablets, and gates.
- **MySQL Metrics**: Deep links to PMM for database instance analysis.
- **Pre-configured Stack**: Includes Prometheus, Grafana, and PMM running alongside Vitess.

## Service Endpoints

| Service | URL | Credentials |
|---------|-----|-------------|
| **VTAdmin** | http://localhost:5173 | - |
| **Grafana** | http://localhost:3000 | admin / admin |
| **PMM** | http://localhost:8888 | admin / admin |
| **Prometheus** | http://localhost:9090 | - |

## Configuration

The dashboard links in VTAdmin are configured via environment variables in `docker-compose.yml`. You can modify these to point to your own external monitoring infrastructure:

```yaml
vtadmin-web:
  environment:
    VITE_VITESS_MONITORING_CLUSTER_TEMPLATE: "http://your-grafana/..."
    VITE_MYSQL_MONITORING_TEMPLATE: "http://your-pmm/..."
```

See `web/vtadmin/README.md` for all available environment variables.
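The `{alias}` and `{hostname}` tokens in these templates are filled in per-resource by vtadmin-web when it builds the deep links. The substitution is roughly the following sketch (the real logic lives in the vtadmin-web frontend; `render_template` here is a hypothetical helper, not part of Vitess):

```python
def render_template(template: str, **values: str) -> str:
    """Replace {token} placeholders in a monitoring URL template."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

vttablet_tmpl = (
    "http://localhost:3000/d/vitess_summary/vitess-summary?var-alias={alias}"
)
print(render_template(vttablet_tmpl, alias="test-0000000101"))
# → http://localhost:3000/d/vitess_summary/vitess-summary?var-alias=test-0000000101
```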

## Common Commands

### Cluster Control
```bash
make up # Start cluster with current configuration
make restart # Restart all services
make reset       # Full teardown and fresh start (keeps volumes)
make down # Stop all services
make clean # Stop and remove all data volumes
make logs # Stream logs from all services
```

25 changes: 25 additions & 0 deletions examples/vtadmin/config/discovery.json
@@ -0,0 +1,25 @@
{
  "clusters": {
    "local": {
      "name": "local",
      "discovery": "staticfile",
      "discovery-staticfile-path": "/app/discovery.json"
    }
  },
  "vtgates": [
    {
      "host": {
        "hostname": "vtgate:15999"
      },
      "tags": ["cell:test"]
    }
  ],
  "vtctlds": [
    {
      "host": {
        "hostname": "vtctld:15999"
      },
      "tags": ["cell:test"]
    }
  ]
}
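vtadmin-api loads this file at startup via `--cluster-config`. A self-contained shape check (a sketch only; the actual schema validation is done by the `vtadmin` binary itself):

```python
import json

# Shape copied from config/discovery.json above.
discovery = json.loads("""
{
  "clusters": {
    "local": {
      "name": "local",
      "discovery": "staticfile",
      "discovery-staticfile-path": "/app/discovery.json"
    }
  },
  "vtgates": [{"host": {"hostname": "vtgate:15999"}, "tags": ["cell:test"]}],
  "vtctlds": [{"host": {"hostname": "vtctld:15999"}, "tags": ["cell:test"]}]
}
""")

# Every vtgate/vtctld entry must name a reachable host:port.
for section in ("vtgates", "vtctlds"):
    for entry in discovery[section]:
        host = entry["host"]["hostname"]
        assert ":" in host, f"{section}: expected host:port, got {host!r}"
```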
15 changes: 15 additions & 0 deletions examples/vtadmin/config/grafana-datasource-prometheus.yaml
@@ -0,0 +1,15 @@
# Grafana datasource configuration
#
# Automatically provisions Prometheus as a datasource for Grafana.
# Used by the Grafana container to connect to Prometheus metrics.
#
# Access: http://localhost:3000 (admin/admin)

apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
17 changes: 17 additions & 0 deletions examples/vtadmin/config/prometheus.yml
@@ -0,0 +1,17 @@
# Prometheus configuration for VTAdmin demo cluster
#
# Collects metrics from all Vitess components:
# - vtctld: Cluster management daemon
# - vtgate: Query router
# - vttablets: Database tablets
#
# Scrape interval: 10 seconds
# Metrics accessible at: http://localhost:9090

global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'vitess'
    static_configs:
      - targets: ['vtctld:8080', 'vtgate:8080', 'vttablet101:8080', 'vttablet102:8080', 'vttablet201:8080', 'vttablet202:8080', 'vttablet301:8080', 'vttablet302:8080']
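The eight targets follow this example's tablet naming scheme: two tablets per shard, with uids 101/102, 201/202, and 301/302. If you add shards, the list can be generated instead of typed out (a sketch, not part of the example itself):

```python
# Two tablets per shard; shard s gets uids s01 and s02 (scheme used in this example).
shards = 3
tablets = [f"vttablet{s}0{n}:8080" for s in range(1, shards + 1) for n in (1, 2)]
targets = ["vtctld:8080", "vtgate:8080"] + tablets
print(targets)
```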
160 changes: 160 additions & 0 deletions examples/vtadmin/docker-compose.yml
@@ -0,0 +1,160 @@
# VTAdmin Demo - Docker Compose Overrides
#
# This file extends the base Vitess cluster (../compose/docker-compose.yml) with:
# - VTAdmin web UI (port 5173)
# - Grafana dashboards for Vitess metrics (port 3000)
# - PMM monitoring for MySQL instances (port 8888)
# - Prometheus metrics collection (port 9090)
#
# Usage:
# make up # Start cluster
# make logs # View logs
# make reset # Full cleanup and restart
#
# See README.md for detailed documentation.

services:
  vtadmin-web:
    image: node:22
    working_dir: /app
    volumes:
      - ${PWD}/../../web/vtadmin:/app
    environment:
      VITE_VTADMIN_API_ADDRESS: http://localhost:14200
      # VITE_VITESS_MONITORING_DASHBOARD_TITLE: Grafana
      # VITE_MYSQL_MONITORING_DASHBOARD_TITLE: PMM
      VITE_VITESS_MONITORING_CLUSTER_TEMPLATE: http://localhost:3000/d/vitess_summary/vitess-summary
      VITE_VITESS_MONITORING_VTTABLET_TEMPLATE: http://localhost:3000/d/vitess_summary/vitess-summary?var-alias={alias}
      VITE_VITESS_MONITORING_VTGATE_TEMPLATE: http://localhost:3000/d/vitess_summary/vitess-summary
      VITE_MYSQL_MONITORING_TEMPLATE: http://localhost:8888/graph/d/mysql-instance-overview/mysql-instances-overview?var-service_name={hostname}-mysql
    command:
      - /bin/sh
      - -c
      - npm install && npm run start -- --host
    ports:
      - "5173:5173"
    depends_on:
      - vtadmin-api

  vtadmin-api:
    image: vitess/lite:${VITESS_TAG:-latest}
    command:
      - vtadmin
      - --addr=:14200
      - --http-origin=*
      - --http-tablet-url-tmpl=http://{{ .Tablet.Hostname }}:80
      - --cluster-config=/app/discovery.json
      - --no-rbac
    volumes:
      - ${PWD}/config/discovery.json:/app/discovery.json
    ports:
      - "14200:14200"
    depends_on:
      - vtctld

  vttablet101:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  vttablet102:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  vttablet201:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  vttablet202:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  vttablet301:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  vttablet302:
    volumes:
      - ${PWD}/scripts/vttablet-up.sh:/script/vttablet-up.sh

  cluster-init:
    image: vitess/lite:${VITESS_TAG:-latest}
    command:
      - sh
      - -c
      - /script/init_cluster.sh
    volumes:
      - ${PWD}/scripts/init_cluster.sh:/script/init_cluster.sh
    depends_on:
      - vtctld
      - vttablet101
      - vttablet102
      - vttablet201
      - vttablet202
      - vttablet301
      - vttablet302

  pmm-server:
    image: percona/pmm-server:2
    ports:
      - "8888:80"
      - "8889:443"
    volumes:
      - pmm-data:/srv

  pmm-setup:
    image: percona/pmm-client:2
    entrypoint: /bin/sh
    depends_on:
      - pmm-server
      - vtgate
      - vttablet101
      - vttablet102
      - vttablet201
      - vttablet202
      - vttablet301
      - vttablet302
    command:
      - -c
      - |
        echo "Waiting for PMM Server..."
        until curl -k -s https://admin:admin@pmm-server/v1/ready; do sleep 5; done
        echo "PMM Server is ready."

        # Register PMM Client
        pmm-agent setup --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml --server-address=pmm-server:443 --server-insecure-tls --server-username=admin --server-password=admin

        # Start pmm-agent in background
        pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml &

        # Wait for agent to be ready
        sleep 10

        # Add vttablet MySQLs
        for i in 101 102 201 202 301 302; do
          echo "Adding vttablet$$i MySQL..."
          pmm-admin add mysql --username=pmm --password=pmm --host=vttablet$$i --port=3306 --service-name=vttablet$$i-mysql --query-source=perfschema || true
        done

        echo "Setup complete."

        # Keep container running
        wait

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ${PWD}/config/prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ${PWD}/config/grafana-datasource-prometheus.yaml:/etc/grafana/provisioning/datasources/prometheus.yaml
    depends_on:
      - prometheus

volumes:
  pmm-data:
44 changes: 44 additions & 0 deletions examples/vtadmin/scripts/init_cluster.sh
@@ -0,0 +1,44 @@
#!/bin/bash

# Copyright 2025 The Vitess Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -u

echo "Waiting for vtctld..."
until vtctldclient --server vtctld:15999 GetKeyspaces; do
  sleep 1
done

echo "Waiting for tablets..."
# We expect 6 tablets
while [ $(vtctldclient --server vtctld:15999 GetTablets | wc -l) -lt 6 ]; do
  sleep 1
done

echo "Initializing shards..."

# Initialize test_keyspace/-80
echo "Initializing test_keyspace/-80..."
vtctldclient --server vtctld:15999 PlannedReparentShard --new-primary test-0000000101 test_keyspace/-80 || echo "Failed to init test_keyspace/-80 or already initialized"

# Initialize test_keyspace/80-
echo "Initializing test_keyspace/80-..."
vtctldclient --server vtctld:15999 PlannedReparentShard --new-primary test-0000000201 test_keyspace/80- || echo "Failed to init test_keyspace/80- or already initialized"

# Initialize lookup_keyspace/-
echo "Initializing lookup_keyspace/-..."
vtctldclient --server vtctld:15999 PlannedReparentShard --new-primary test-0000000301 lookup_keyspace/- || echo "Failed to init lookup_keyspace/- or already initialized"

echo "Cluster initialization complete."
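Both polling loops above are the same wait-until-ready pattern. A generic sketch of it (`wait_until` is a hypothetical helper, not part of this script; the script itself uses plain shell `until`/`while` loops, which block forever, whereas this version adds an explicit timeout):

```python
import time

def wait_until(check, timeout_s=60.0, interval_s=1.0):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the `until vtctldclient ...; do sleep 1; done` loops in
    init_cluster.sh, but fails fast on a broken cluster instead of
    blocking forever.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Example: a probe that reports ready on its third call.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(probe, timeout_s=10, interval_s=0.01))  # → True
```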