This repository is the single source of truth for the active host. Each stack is self-contained, uses its own .env.example, and is operated directly with docker compose from its own directory. There are no repo-level helper scripts by design.
- Docker Engine with Docker Compose v2 is required.
- The stack is tuned for a low-resource host by default. Most services already have `cpus`, `mem_limit`, and log rotation configured in their compose files.
- `.env` files are local runtime files and are ignored by Git. Commit only `.env.example`.
- `archive/` contains legacy layouts and backups. It is not part of the active host.
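As a sketch, a per-service limit block in a compose file typically looks like this (the service name and all values are illustrative, not the repository's actual numbers):

```yaml
services:
  example-service:
    image: nginx:alpine   # placeholder image
    cpus: "0.50"          # cap at half a CPU core
    mem_limit: 256m       # hard memory ceiling
    logging:
      driver: json-file
      options:
        max-size: 10m     # rotate each log file at 10 MB
        max-file: "3"     # keep at most three rotated files
```

Check the actual values in each stack's compose file before changing them.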
- `databases/postgres`: shared PostgreSQL and idempotent bootstrap.
- `observability/signoz`: primary observability stack.
- `network/pihole`: DNS and ad blocking with Unbound recursion.
- `network/headscale`: private Tailscale control plane.
- `network/traefik`: local edge, TLS, and CrowdSec.
- `apps/site`: personal site container from GHCR.
- `ai-llms/ollama`: local model runtime.
- `ai-llms/liteLLM`: OpenAI-compatible gateway.
- `ai-llms/open-web-ui`: UI for Ollama.
- `ai-llms/libre-chat`: full chat UI routed through LiteLLM.
- `dashboards/homarr`: lightweight home dashboard.
- `observability/prometheus`: manual backup metrics stack.
- `observability/grafana`: manual backup UI for the backup Prometheus stack.
| Stack | Role | Main Host Access | Notes |
|---|---|---|---|
| `databases/postgres` | Shared PostgreSQL | Internal only on `database-network` | Start first |
| `observability/signoz` | Main telemetry and logs | 8080, 4317, 4318, signoz.localhost:8443 | Creates `signoz-net` |
| `network/pihole` | DNS and ad blocking | `${PIHOLE_BIND_IP}:53`, 8090, pihole.localhost:8443 | DNS bind IP must match a real host interface |
| `network/headscale` | Private VPN control plane | 127.0.0.1:8081, headscale.localhost:8443 | Local-first default domain |
| `apps/site` | Personal site | 8001, site.localhost:8443 | Sends OTEL telemetry to SigNoz |
| `ai-llms/ollama` | Local model runtime | 11434 | Uses `ai-llms-network` |
| `ai-llms/liteLLM` | LLM gateway | 4000 | Depends on Postgres and SigNoz |
| `ai-llms/open-web-ui` | Ollama UI | 3000, openwebui.localhost:8443 | Talks to `ollama` over `ai-llms-network` |
| `ai-llms/libre-chat` | Chat frontend | 3080, librechat.localhost:8443 | Uses LiteLLM as upstream |
| `dashboards/homarr` | Home dashboard | 80, homarr.localhost:8443 | Port 80 must be free on the host |
| `network/traefik` | Local edge and TLS | 8088, 8443, 127.0.0.1:8089 | Start last |
| Stack | Role | Main Host Access |
|---|---|---|
| `observability/prometheus` | Backup metrics stack | 9090, 9093 |
| `observability/grafana` | Backup dashboards | 3001 |
These are not the primary observability path. The active path is SigNoz.
- `database-network`: shared internal network for PostgreSQL consumers.
- `signoz-net`: observability network created by the SigNoz stack.
- `ai-llms-network`: shared network for Ollama, LiteLLM, Open WebUI, and the LibreChat API.
`ai-llms-network` is declared as external in the AI stacks, so create it once before starting them if it does not already exist:

```shell
docker network inspect ai-llms-network >/dev/null 2>&1 || docker network create ai-llms-network
```

- Copy `.env.example` to `.env` inside each stack directory you plan to run.
- Adjust secrets, ports, and host-specific values before starting anything.
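The copy step can be scripted ad hoc. This is a hypothetical helper (the function name is mine; the repository deliberately ships no helper scripts) that seeds a `.env` from `.env.example` wherever one is missing, without overwriting existing files:

```shell
# Hypothetical ad-hoc helper, run from the repository root:
# seed .env from .env.example in every stack directory that lacks one.
seed_env_files() {
  find "${1:-.}" -type f -name .env.example | while read -r example; do
    dir=$(dirname "$example")
    if [ ! -f "$dir/.env" ]; then
      cp "$example" "$dir/.env"
      echo "seeded $dir/.env"
    fi
  done
}

seed_env_files .
```

Existing `.env` files are never touched, so re-running is safe; still edit each seeded file by hand before starting the stack.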
If you want the full active host, start the stacks in this order:

1. `databases/postgres`
2. `observability/signoz`
3. `network/pihole`
4. `network/headscale`
5. `apps/site`
6. `ai-llms/ollama`
7. `ai-llms/liteLLM`
8. `ai-llms/open-web-ui`
9. `ai-llms/libre-chat`
10. `dashboards/homarr`
11. `network/traefik`
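The order above can be driven by a small ad-hoc loop (hypothetical, not a repo script; `DRY_RUN` defaults to printing the commands so nothing starts by accident):

```shell
# Bring up the full host in dependency order. With DRY_RUN=1 (the default
# here) the loop only prints what it would run; set DRY_RUN=0 to execute.
STACKS="databases/postgres observability/signoz network/pihole \
network/headscale apps/site ai-llms/ollama ai-llms/liteLLM \
ai-llms/open-web-ui ai-llms/libre-chat dashboards/homarr network/traefik"

for stack in $STACKS; do
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: (cd $stack && docker compose up -d)"
  else
    (cd "$stack" && docker compose up -d) || { echo "failed: $stack"; break; }
  fi
done
```

Run it from the repository root with `DRY_RUN=0` to actually start everything; the loop stops at the first failing stack so dependents are not started on a broken base.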
- Start each stack from its own directory:
```shell
docker compose config
docker compose up -d
docker compose ps
```

Run these from the stack directory you are working on:

```shell
docker compose config
docker compose up -d
docker compose ps
docker compose logs -f
docker compose restart <service>
docker compose down
```

Useful host-level commands:
```shell
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
docker network ls
docker volume ls
docker inspect <container>
```

Run these from the repository root when you update Markdown files:

```shell
uv run rumdl check . && uv run rumdl fmt .
```

GitHub Actions runs the same Markdown validation on pushes and pull requests, plus `docker compose config` validation for each stack with its local `.env.example`.
- CI is the fast and safe path. It runs on push and pull request, checks Markdown with `rumdl`, and validates every supported stack with `docker compose config`.
- The repository currently does not run runtime smoke tests in GitHub Actions.
- If you want a real runtime check, run `docker compose up -d`, `docker compose ps`, and a small `curl` or health probe from the stack directory on the host itself.
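The individual probes below can also be wrapped in a small sweep that reports per endpoint instead of stopping at the first failure (a sketch; the endpoints assume the default ports from the tables above):

```shell
# Report OK/FAIL per endpoint instead of aborting on the first error.
probe() {
  if curl -fsS --max-time 5 "$1" >/dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

probe http://127.0.0.1:8080/api/v1/health      # SigNoz
probe http://127.0.0.1:4000/health/liveliness  # LiteLLM
probe http://127.0.0.1:8001/health             # personal site
probe http://127.0.0.1:11434/api/tags          # Ollama
```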
```shell
curl -fsS http://127.0.0.1:8080/api/v1/health
curl -fsS http://127.0.0.1:4000/health/liveliness
curl -fsS http://127.0.0.1:8001/health
curl -s http://127.0.0.1:11434/api/tags
curl -I http://127.0.0.1:3080/
curl -I http://127.0.0.1:8090/admin/
curl http://127.0.0.1:8089/ping
```

Traefik routes can be checked with host headers:
```shell
curl -k -I -H 'Host: site.localhost' https://127.0.0.1:8443/
curl -k -I -H 'Host: signoz.localhost' https://127.0.0.1:8443/
curl -k -I -H 'Host: librechat.localhost' https://127.0.0.1:8443/
```

- Change exposed ports by editing the local `.env` for each stack.
- Change DNS binding for Pi-hole with `PIHOLE_BIND_IP`.
- Change Traefik edge ports with `TRAEFIK_HTTP_PORT`, `TRAEFIK_HTTPS_PORT`, and `TRAEFIK_DASHBOARD_PORT`.
- Extend SigNoz Docker log capture by editing `FORWARD_CONTAINERS` in `observability/signoz/.env.example`.
- Increase resource limits only where needed. The main hot spots are `signoz-clickhouse`, `ollama`, `open-web-ui`, `librechat-api`, and `litellm`.
- For real Headscale usage, update `network/headscale/config/config.yaml` before registering clients.
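For example, the Traefik edge ports can be pinned in `network/traefik/.env` (illustrative values that mirror the defaults in the table above; adjust to your host):

```shell
# network/traefik/.env
TRAEFIK_HTTP_PORT=8088
TRAEFIK_HTTPS_PORT=8443
TRAEFIK_DASHBOARD_PORT=8089
```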
- Check the stack status: `docker compose ps`
- Check the container logs: `docker compose logs -f <service>`
- Check the host port binding: `docker ps --format 'table {{.Names}}\t{{.Ports}}'`
- Confirm `postgres` is healthy.
- Confirm the service is attached to `database-network`.
- Confirm credentials in the local `.env` match the bootstrap values.
- Check `signoz-otel-collector`, `otel-bridge`, and `docker-log-forwarder`.
- Confirm the app is using `signoz-otel-collector:4317` or `:4318`.
- Confirm the container name is listed in `FORWARD_CONTAINERS` if you expect Docker log forwarding.
- Check `network/traefik/dynamic/routes.yml`.
- Confirm the backend port is already published on the host.
- Confirm `traefik` and `crowdsec` are both healthy.
- Confirm the chosen `PIHOLE_BIND_IP` exists on the host.
- Confirm port `53` on that IP is not already in use.
- Check `unbound` first. Pi-hole depends on it.
- This can be normal on first boot because Prisma migrations and schema sanity checks can take a while.
- Watch `docker logs litellm-proxy -f` before assuming the container is broken.
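A hypothetical wait helper (the function name is mine, not a repo script) that polls the LiteLLM liveliness endpoint instead of guessing when first boot has finished:

```shell
# Poll the LiteLLM liveliness endpoint until it answers, up to tries * 5s.
wait_for_litellm() {
  url="${1:-http://127.0.0.1:4000/health/liveliness}"
  tries="${2:-60}"   # 60 tries * 5s = up to 5 minutes
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 3 "$url" >/dev/null 2>&1; then
      echo "litellm is up"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "litellm did not come up in time"
  return 1
}
```

Typical usage: `wait_for_litellm || docker logs litellm-proxy --tail 100` to fall back to the logs only after the grace period has passed.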
- `archive/` keeps old layouts and reference material.
- Old runtime resources should be cleaned only after the replacement stack is confirmed healthy.
- The current active runtime has already been migrated away from the old `on-prem` layout.