TrackFlow is a logistics tracking platform built with Java 21, Spring Boot 3, and Apache Kafka. It demonstrates event-driven microservices communication across three independent services.
Tech stack:
- Java 21
- Spring Boot 3
- Apache Kafka (KRaft mode)
- PostgreSQL
- Gradle
- Grafana LGTM stack (Prometheus, Loki, Tempo, Grafana)
TrackFlow was built to demonstrate:
- Event-driven microservices architecture
- Kafka-based service communication
- Distributed system observability with Prometheus, Loki, Tempo, and Grafana
- Debugging and monitoring of asynchronous systems
| Service | Port | Database | Responsibility |
|---|---|---|---|
| order-service | 8081 | orders_db | Order lifecycle + Kafka producer |
| tracking-service | 8082 | tracking_db | Consumes events, stores tracking history |
| notification-service | 8083 | notifications_db | Consumes events, logs notifications, publishes to DLQ |
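The "publishes to DLQ" responsibility in the last row follows a consume-or-dead-letter pattern: an event that cannot be processed is routed to a dead-letter topic instead of blocking the consumer. The sketch below is illustrative plain Java, not the project's code; an in-memory deque stands in for the `shipment-events.DLQ` topic:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DlqSketch {
    // Stand-in for the shipment-events.DLQ topic.
    static final Deque<String> deadLetters = new ArrayDeque<>();

    // Try to handle an event; on failure, route it to the DLQ so the
    // consumer can keep processing subsequent events.
    static void consume(String event) {
        try {
            if (event.contains("bad")) {      // simulated processing failure
                throw new IllegalStateException("cannot notify");
            }
            System.out.println("notified: " + event);
        } catch (RuntimeException e) {
            deadLetters.add(event);           // stands in for publishing to the DLQ
            System.out.println("dead-lettered: " + event);
        }
    }

    public static void main(String[] args) {
        consume("ord-1:DELIVERED");
        consume("bad-payload");
        System.out.println("DLQ size: " + deadLetters.size());
    }
}
```

In the real services, Spring Kafka provides this pattern out of the box (e.g. via an error handler that republishes failed records), so the hand-rolled try/catch here is purely for illustration.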
```
[order-service] ──── shipment-events ────► [tracking-service]
                          └──► [notification-service] ──► shipment-events.DLQ
```
Communication is hybrid: synchronous HTTP for client-facing APIs, asynchronous Kafka for cross-service propagation.
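As a sketch of the asynchronous side, a shipment event payload might look like the following plain-Java record. The field names and JSON shape are assumptions for illustration, not the project's actual schema; the real services would serialize through Spring Kafka with a JSON serializer rather than by hand:

```java
import java.time.Instant;

public class ShipmentEventSketch {
    // Minimal assumed event shape: which order, its new status, and when.
    record ShipmentEvent(String orderId, String status, Instant occurredAt) {
        // Naive JSON rendering, purely for illustration.
        String toJson() {
            return "{\"orderId\":\"" + orderId + "\",\"status\":\"" + status
                    + "\",\"occurredAt\":\"" + occurredAt + "\"}";
        }
    }

    public static void main(String[] args) {
        ShipmentEvent event = new ShipmentEvent(
                "ord-123", "PICKED_UP", Instant.parse("2024-01-01T10:00:00Z"));
        System.out.println(event.toJson());
    }
}
```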
Minimal steps to run the system locally and validate it works.
1. Start Kafka

```bash
docker compose up -d
```

2. Start the services

```bash
cd services/order-service && ./gradlew bootRun
cd services/tracking-service && ./gradlew bootRun
cd services/notification-service && ./gradlew bootRun
```

3. Run the smoke test

```bash
./scripts/smoke.sh
```

Prerequisites:

- Java 21
- Docker + Docker Compose
- PostgreSQL running locally on `localhost:5432`
- direnv (optional, for `.envrc` support)
PostgreSQL runs locally. Kafka runs in Docker, isolated from other local stacks.
```bash
# Start Kafka
docker compose up -d

# Stop Kafka
docker compose down
```

Ports:
| Service | Port | Notes |
|---|---|---|
| PostgreSQL | 5432 | Local instance |
| Kafka | 9093 | Docker; isolated from observability stack on 9092 |
| order-service | 8081 | |
| tracking-service | 8082 | |
| notification-service | 8083 | |
| Prometheus | 9090 | Observability stack |
| Loki | 3100 | Observability stack |
| Tempo (OTLP HTTP) | 4318 | Observability stack |
| Grafana | 3000 | Observability stack |
Three databases on the local PostgreSQL instance:
```sql
CREATE DATABASE orders_db;
CREATE DATABASE tracking_db;
CREATE DATABASE notifications_db;
```

User: `trackflow` / `trackflow`
Each service uses a `.envrc` file (see `.envrc.example` in each service directory):
```bash
export DB_USERNAME=trackflow
export DB_PASSWORD=trackflow
export DB_URL=jdbc:postgresql://localhost:5432/{service}_db
export KAFKA_BOOTSTRAP_SERVERS=localhost:9093
export SERVER_PORT=808{1,2,3}
```

Each service runs locally via Gradle:

```bash
cd services/order-service && ./gradlew bootRun
cd services/tracking-service && ./gradlew bootRun
cd services/notification-service && ./gradlew bootRun
```

TrackFlow integrates with a local Grafana LGTM stack (Prometheus, Loki, Tempo, Grafana). The stack is not bundled in this repository and runs from a separate infrastructure project.
Start the observability stack:

```bash
# The path below is the author's local environment; adjust to your own infrastructure project location.
cd /path/to/your/infra/observability && docker compose up -d
```

Once running, each TrackFlow service automatically pushes metrics, logs, and traces to it. No configuration changes are needed.
Import the dashboard:

- Open Grafana at `http://localhost:3000`
- Go to Dashboards → Import
- Upload `docs/grafana/trackflow-dashboard.json`
The dashboard provides panels for order throughput, failed notifications, HTTP request rate, JVM heap memory, and live service logs.
Verify data is flowing:

```bash
# Metrics
curl -s http://localhost:8081/actuator/prometheus | grep http_server_requests

# Traces: check Tempo via Grafana Explore (datasource: Tempo)
# Logs: check Loki via Grafana Explore (datasource: Loki)
# Label filter: app=order-service
```

The simulation script generates a realistic load of orders using real Portuguese user data from the randomuser.me API and progresses each order through its full lifecycle.

```bash
./scripts/simulate.sh       # 50 orders (default)
./scripts/simulate.sh 20    # custom number
```

Each order is processed in parallel: created, then stepped through PICKED_UP → IN_TRANSIT → OUT_FOR_DELIVERY → DELIVERED with 100 ms between transitions. Progress is printed as orders are created and status events are fired. A summary of orders created, Kafka events published, and elapsed time is printed at the end.
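The lifecycle stepping above can be sketched as a simple state machine. The enum below mirrors the status names from this README, but the class itself is illustrative, not taken from the services:

```java
public class LifecycleSketch {
    // Status names as documented; declaration order defines the transitions.
    enum Status { CREATED, PICKED_UP, IN_TRANSIT, OUT_FOR_DELIVERY, DELIVERED }

    // Advance to the next status; DELIVERED is terminal.
    static Status next(Status s) {
        Status[] all = Status.values();
        return s.ordinal() + 1 < all.length ? all[s.ordinal() + 1] : s;
    }

    public static void main(String[] args) {
        Status s = Status.CREATED;
        while (s != Status.DELIVERED) {
            s = next(s);
            System.out.println(s);
        }
    }
}
```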
Requires `curl` and `jq`.
The smoke test validates the full end-to-end flow: order creation → Kafka event → tracking + notification propagation.

```bash
./scripts/smoke.sh
```

Flow:

- Create order in `order-service`
- Verify event propagation to `tracking-service` and `notification-service`
- Verify tracking history exists
- Verify notification log exists
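Because Kafka propagation is asynchronous, each "verify" step has to poll with a timeout rather than assert immediately after creating the order. A minimal plain-Java sketch of that polling pattern (all names here are illustrative, not from the script):

```java
public class PollSketch {
    interface Check { boolean ok(); }

    // Re-evaluate a condition up to `attempts` times, sleeping between tries;
    // returns true as soon as it holds, false if it never does.
    static boolean awaitTrue(Check check, int attempts, long delayMs) throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (check.ok()) return true;
            Thread.sleep(delayMs);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated condition: becomes true after ~30 ms, standing in for
        // "tracking history row exists" in the real smoke test.
        boolean ok = awaitTrue(() -> System.currentTimeMillis() - start > 30, 10, 10);
        System.out.println("propagated: " + ok);
    }
}
```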
A Bruno collection is available at `bruno/trackflow-api/`.
Open the collection in Bruno, select the local environment, and run requests in sequence starting from Create Order.
Health checks:

```bash
curl -fsS http://localhost:8081/actuator/health
curl -fsS http://localhost:8082/actuator/health
curl -fsS http://localhost:8083/actuator/health
```

Swagger UI:

```
http://localhost:8081/swagger-ui/index.html
http://localhost:8082/swagger-ui/index.html
http://localhost:8083/swagger-ui/index.html
```
This repository is intended as a learning and demonstration project for distributed backend architecture using Spring Boot and Kafka.
It focuses on:
- event-driven service communication
- microservice boundaries
- observability and debugging
- failure handling through retries and dead-letter queues