
Commit 3a189a2

Kamal Sai Devarapalli authored and committed

Rename booking service to taskprocessing service

- Renamed services/booking directory to services/taskprocessing
- Updated docker-compose.yml to use the taskprocessing service name and correct environment variables
- Updated infrastructure/docker/docker-compose.yml with correct env vars (TASKPROCESSING_SERVER_IPADDRESS/PORT)
- Fixed PROJECT_DESCRIPTION.md to reference the task service instead of booking
- Updated scripts/setup.sh and scripts/redis_dry_run.py to use taskprocessing paths
- Fixed the redis_dry_run.py test function to properly document the legacy BookingRedisHelper class name
- Updated REDIS_SETUP.md documentation paths
- All services tested and running correctly
1 parent d409173 commit 3a189a2


45 files changed: +264 −948 lines

PROJECT_DESCRIPTION.md

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1- (previous single-line version of the file; identical to the new version except that the architecture overview listed "Microservices (User, Booking, etc.)" and the project structure listed "booking/ # Booking service")
1+
# EventStreamMonitor - Project Description

## What is EventStreamMonitor?

**EventStreamMonitor** is a production-ready, real-time microservices monitoring platform that collects, streams, and visualizes application logs and errors across multiple services.

## Core Purpose

The project demonstrates modern DevOps practices by providing:

- **Real-time log collection** from multiple microservices
- **Event streaming** using Apache Kafka
- **Error monitoring and alerting** with automatic filtering
- **Live dashboard** for error visualization
- **Production-ready architecture** with microservices

## Key Features

### 1. Real-Time Log Streaming

- Collects logs from multiple microservices simultaneously
- Streams logs to Apache Kafka for processing
- Automatic error level detection (ERROR, CRITICAL)
- Service identification and metadata tagging

### 2. Error Monitoring & Filtering

- Automatically filters ERROR and CRITICAL level logs
- Stores errors for analysis and alerting
- Provides statistics by service and error type
- Real-time error tracking and visualization

### 3. Multi-Service Dashboard

- Live dashboard showing error statistics
- Real-time error feed with auto-refresh
- Service-level error breakdown
- Error details with timestamps, stack traces, and context

### 4. Microservices Architecture

- Independent, scalable services
- Event-driven communication via Kafka
- Redis caching for performance
- Session management across services
- Database per service (isolation)

### 5. Production-Ready Features

- Docker containerization
- Health checks and monitoring
- Grafana integration ready
- RESTful APIs for automation
- Comprehensive error handling

## Tech Stack

- **Backend**: Python, Flask
- **Message Broker**: Apache Kafka
- **Cache/Sessions**: Redis
- **Databases**: PostgreSQL (per service)
- **Containerization**: Docker, Docker Compose
- **Monitoring**: Custom Dashboard, Grafana-ready
- **API**: RESTful APIs

## Architecture Overview

```
Microservices (User, Task, Notification, etc.)
    - Generate logs and events
    - Stream to Kafka

Kafka Topics (Apache Kafka)
    - application-logs
    - application-logs-errors

Consumer filters errors

Log Monitor Service
    - Error filtering
    - Error storage
    - API endpoints

Dashboard / Grafana (Web UI) Integration
```

## Use Cases

### 1. DevOps Monitoring

- Monitor multiple microservices from one dashboard
- Track errors in real-time
- Identify problematic services quickly
- Set up alerts for critical errors

### 2. Development & Debugging

- Real-time error visibility
- Service-level error tracking
- Error context and stack traces
- Historical error analysis

### 3. Production Monitoring

- Production error monitoring
- Service health tracking
- Performance insights
- Grafana dashboards for operations

### 4. Learning & Portfolio

- Demonstrates microservices architecture
- Shows Kafka event streaming
- Redis caching patterns
- Docker containerization
- Real-time monitoring systems

## What Makes It Stand Out

### For Recruiters/Employers:

- **DevOps Skills**: Real-time monitoring, log aggregation
- **Microservices**: Scalable, distributed architecture
- **Event-Driven**: Kafka for event streaming
- **Production Ready**: Docker, health checks, error handling
- **Modern Stack**: Kafka, Redis, Python, Flask
- **Full-Stack**: Backend APIs + Dashboard UI

### Technical Highlights:

- Real-time processing with Kafka
- Scalable microservices architecture
- Caching and session management
- Error filtering and alerting
- Dashboard visualization
- Grafana integration ready

## Project Structure

```
EventStreamMonitor/
    services/            # Microservices
        usermanagement/  # User service
        task/            # Task service
        notification/    # Notification service
        logmonitor/      # Log monitoring service
    common/              # Shared libraries
    infrastructure/      # Docker, K8s configs
    docs/                # Documentation
    scripts/             # Utility scripts
```

## Quick Start

```bash
# Start all services
docker-compose up -d

# Access dashboard
open http://localhost:5004

# Stream test errors
python3 scripts/quick_stream_errors.py
```

## Perfect For

- **First GitHub Project**: Professional, complete, showcases skills
- **Portfolio Project**: Demonstrates real-world DevOps practices
- **Learning Project**: Covers microservices, Kafka, monitoring
- **Interview Project**: Shows system design and architecture skills

## Value Proposition

**EventStreamMonitor** demonstrates your ability to:

1. Build production-ready microservices
2. Implement real-time monitoring systems
3. Work with modern DevOps tools (Kafka, Redis, Docker)
4. Create scalable, event-driven architectures
5. Build monitoring and observability tools

This is exactly what recruiters look for: practical, production-ready code that solves real problems!
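The error-filtering step described above (consume application logs, keep only ERROR and CRITICAL records) can be sketched as a plain function. This is an illustrative sketch only: the record layout (`level`, `service`, `message` keys) is an assumption, not the project's actual log schema.

```python
# Hypothetical sketch of the log-monitor filtering step: keep only
# ERROR/CRITICAL records from a stream of log events. The record layout
# ("level", "service", "message") is assumed, not taken from the project.
from typing import Iterable

ERROR_LEVELS = {"ERROR", "CRITICAL"}

def filter_errors(records: Iterable[dict]) -> list[dict]:
    """Return only the records whose level marks them as errors."""
    return [r for r in records if r.get("level") in ERROR_LEVELS]

logs = [
    {"service": "usermanagement", "level": "INFO", "message": "login ok"},
    {"service": "taskprocessing", "level": "ERROR", "message": "db timeout"},
    {"service": "notification", "level": "CRITICAL", "message": "kafka down"},
]
errors = filter_errors(logs)
```

In the real pipeline the input would come from a Kafka consumer on `application-logs` and the output would be republished to `application-logs-errors`; the filter itself is the same predicate.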

REDIS_SETUP.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -57,7 +57,7 @@ user = redis_helper.get_cached_user(123)
 
 ### 3. Service-Specific Helpers
 - `services/usermanagement/app/redis_helper.py` - User Management Redis helper
-- `services/booking/app/redis_helper.py` - Booking Redis helper
+- `services/taskprocessing/app/redis_helper.py` - Task Processing Redis helper
 
 ### 4. Dependencies
 - Added `redis` to `requirements.txt`
```
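The service-specific helpers listed above share a common shape. A minimal sketch follows; the class and method names are illustrative, not the project's actual API, and the Redis client is injected so the pattern can be exercised without a running server.

```python
# Illustrative sketch of a service-specific Redis helper. The real
# helpers live in services/<name>/app/redis_helper.py and their API
# may differ; this class and its methods are assumptions.
import json

class TaskProcessingRedisHelperSketch:
    def __init__(self, client, prefix="taskprocessing"):
        self.client = client  # redis.Redis, or any object with get/set
        self.prefix = prefix

    def cache_task(self, task_id, payload):
        """Store a task payload as JSON under a namespaced key."""
        self.client.set(f"{self.prefix}:task:{task_id}", json.dumps(payload))

    def get_cached_task(self, task_id):
        """Return the cached payload, or None on a cache miss."""
        raw = self.client.get(f"{self.prefix}:task:{task_id}")
        return json.loads(raw) if raw else None

# Dict-backed stand-in for redis.Redis, for demonstration only.
class FakeRedis:
    def __init__(self):
        self.store = {}
    def set(self, key, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)

helper = TaskProcessingRedisHelperSketch(FakeRedis())
helper.cache_task(1, {"status": "queued"})
```

Injecting the client is what lets `scripts/redis_dry_run.py`-style checks instantiate helpers without depending on a live Redis instance.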

docker-compose.yml

Lines changed: 9 additions & 9 deletions
```diff
@@ -34,7 +34,7 @@ services:
       retries: 5
 
   # Task Processing Service Database
-  booking-db:
+  taskprocessing-db:
     image: postgres:latest
     environment:
       - POSTGRES_USER=airlineradmin
@@ -43,7 +43,7 @@ services:
     ports:
       - 3306:5432
     volumes:
-      - booking-db-data:/var/lib/postgresql/data
+      - taskprocessing-db-data:/var/lib/postgresql/data
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U airlineradmin"]
       interval: 10s
@@ -140,22 +140,22 @@ services:
       retries: 3
 
   # Task Processing Service
-  booking-service:
+  taskprocessing-service:
     build:
       context: .
-      dockerfile: services/booking/Dockerfile
+      dockerfile: services/taskprocessing/Dockerfile
     ports:
       - 5002:9092
     depends_on:
-      - booking-db
+      - taskprocessing-db
      - kafka
      - redis
     environment:
       - FLASK_ENV=production
       - DEBUG=false
-      - BOOKING_SERVER_IPADDRESS=0.0.0.0
-      - BOOKING_SERVER_PORT=9092
-      - DATABASE_URL=postgresql://airlineradmin:testeventstreammonitor#123@booking-db:5432/TASK_PROCESSING
+      - TASKPROCESSING_SERVER_IPADDRESS=0.0.0.0
+      - TASKPROCESSING_SERVER_PORT=9092
+      - DATABASE_URL=postgresql://airlineradmin:testeventstreammonitor#123@taskprocessing-db:5432/TASK_PROCESSING
       - KAFKA_BOOTSTRAP_SERVERS=kafka:29092
       - REDIS_HOST=redis
       - REDIS_PORT=6379
@@ -223,6 +223,6 @@ services:
 volumes:
   registration-db-data:
   auth-db-data:
-  booking-db-data:
+  taskprocessing-db-data:
   notification-db-data:
   redis-data:
```
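The compose change above renames the service's environment variables. A minimal sketch of how the service side might read them at startup follows; the variable names come from the compose file, but the loader itself is an assumption, since this commit does not show the service's config code.

```python
# Sketch of consuming the renamed variables inside the service.
# TASKPROCESSING_SERVER_IPADDRESS/PORT match docker-compose.yml; the
# rest (defaults, module layout) is assumed for illustration.
import os

# Defaults mirror the values set in docker-compose.yml.
os.environ.setdefault("TASKPROCESSING_SERVER_IPADDRESS", "0.0.0.0")
os.environ.setdefault("TASKPROCESSING_SERVER_PORT", "9092")

HOST = os.environ["TASKPROCESSING_SERVER_IPADDRESS"]
PORT = int(os.environ["TASKPROCESSING_SERVER_PORT"])  # env values are strings
```

Renaming both the compose keys and the reader in one commit avoids a window where the service falls back to defaults because it is still looking up the old `BOOKING_*` names.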

infrastructure/docker/docker-compose.yml

Lines changed: 12 additions & 12 deletions
```diff
@@ -33,17 +33,17 @@ services:
     timeout: 5s
     retries: 5
 
-  # Flight Booking Service Database
-  booking-db:
+  # Task Processing Service Database
+  taskprocessing-db:
     image: postgres:latest
     environment:
       - POSTGRES_USER=airlineradmin
       - POSTGRES_PASSWORD=testeventstreammonitor#123
-      - POSTGRES_DB=FLIGHT_BOOKINGS
+      - POSTGRES_DB=TASK_PROCESSING
     ports:
       - 3306:5432
     volumes:
-      - booking-db-data:/var/lib/postgresql/data
+      - taskprocessing-db-data:/var/lib/postgresql/data
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U airlineradmin"]
       interval: 10s
@@ -118,22 +118,22 @@ services:
     timeout: 10s
     retries: 3
 
-  # Flight Booking Service
-  booking-service:
+  # Task Processing Service
+  taskprocessing-service:
     build:
       context: .
-      dockerfile: services/booking/Dockerfile
+      dockerfile: services/taskprocessing/Dockerfile
     ports:
       - 5002:9092
     depends_on:
-      - booking-db
+      - taskprocessing-db
       - kafka
     environment:
       - FLASK_ENV=production
       - DEBUG=false
-      - BOOKING_SERVER_IPADDRESS=0.0.0.0
-      - BOOKING_SERVER_PORT=9092
-      - DATABASE_URL=postgresql://airlineradmin:testeventstreammonitor#123@booking-db:5432/FLIGHT_BOOKINGS
+      - TASKPROCESSING_SERVER_IPADDRESS=0.0.0.0
+      - TASKPROCESSING_SERVER_PORT=9092
+      - DATABASE_URL=postgresql://airlineradmin:testeventstreammonitor#123@taskprocessing-db:5432/TASK_PROCESSING
       - KAFKA_BOOTSTRAP_SERVERS=kafka:29092
     healthcheck:
       test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:9092/health', timeout=5)"]
@@ -167,5 +167,5 @@ services:
 volumes:
   registration-db-data:
   auth-db-data:
-  booking-db-data:
+  taskprocessing-db-data:
   notification-db-data:
```
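The compose healthcheck above calls `requests.get()` but never inspects the status code, so it only fails on connection errors or timeouts, not on a 500 from `/health`. A stricter stdlib-only probe is sketched below; the in-process `http.server` instance is just a stand-in for the real service, not part of the project.

```python
# Sketch: a health probe that, unlike the compose healthcheck above,
# fails unless /health returns HTTP 200. The embedded server is only
# a stand-in for the real taskprocessing service.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def healthy(url, timeout=5):
    """True only if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, timeouts
        return False

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

ok = healthy(f"http://127.0.0.1:{port}/health")
missing = healthy(f"http://127.0.0.1:{port}/nope")
server.shutdown()
```

Wired into compose, the same idea would be a one-liner that exits nonzero on a bad status, e.g. raising if `resp.status != 200`.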

scripts/redis_dry_run.py

Lines changed: 11 additions & 8 deletions
```diff
@@ -121,17 +121,20 @@ def test_user_management_helper():
         return False
 
 
-def test_booking_helper():
-    """Test Booking Redis helper"""
-    print("\n[TEST 5] Testing Booking Redis Helper...")
+def test_taskprocessing_helper():
+    """Test Task Processing Redis helper (BookingRedisHelper - legacy class name)"""
+    print("\n[TEST 5] Testing Task Processing Redis Helper...")
     try:
-        from services.booking.app.redis_helper import BookingRedisHelper
+        from services.taskprocessing.app.redis_helper import BookingRedisHelper
 
+        # Test import and instantiation
         helper = BookingRedisHelper()
-        print("  BookingRedisHelper imported successfully")
+        # Verify helper has the expected interface
+        assert hasattr(helper, 'redis_client'), "Helper missing redis_client"
+        print("  Task Processing Redis helper (BookingRedisHelper) imported successfully")
         return True
     except Exception as e:
-        print(f"  Failed to import BookingRedisHelper: {e}")
+        print(f"  Failed to import Task Processing Redis helper: {e}")
         return False
 
 
@@ -160,8 +163,8 @@ def main():
     # Test 4: User Management Helper
     results.append(("User Management Helper", test_user_management_helper()))
 
-    # Test 5: Booking Helper
-    results.append(("Booking Helper", test_booking_helper()))
+    # Test 5: Task Processing Helper (uses legacy BookingRedisHelper class)
+    results.append(("Task Processing Helper", test_taskprocessing_helper()))
 
     # Summary
     print("\n" + "=" * 60)
```
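The test above documents that the taskprocessing module still exposes the legacy `BookingRedisHelper` class name. A common way to finish such a rename later (a sketch, not the project's code; both class bodies here are stand-ins) is to introduce the new name and keep the old one as a deprecated alias:

```python
# Sketch of a rename-with-alias migration. The helper bodies are
# stand-ins; only the aliasing pattern is the point.
import warnings

class TaskProcessingRedisHelper:
    """New canonical name for the task-processing Redis helper."""
    def __init__(self):
        self.redis_client = None  # real code would build a redis.Redis here

class BookingRedisHelper(TaskProcessingRedisHelper):
    """Deprecated legacy name; warns callers to migrate."""
    def __init__(self):
        warnings.warn(
            "BookingRedisHelper is deprecated; use TaskProcessingRedisHelper",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__()
```

Existing imports keep working while the warning flags every remaining call site, so the legacy name can be deleted once nothing triggers it.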

scripts/setup.sh

Lines changed: 2 additions & 2 deletions
```diff
@@ -10,8 +10,8 @@ echo "Creating symlinks for common library..."
 if [ ! -L "services/usermanagement/common" ]; then
     ln -s ../../common services/usermanagement/common
 fi
-if [ ! -L "services/booking/common" ]; then
-    ln -s ../../common services/booking/common
+if [ ! -L "services/taskprocessing/common" ]; then
+    ln -s ../../common services/taskprocessing/common
 fi
 if [ ! -L "services/notification/common" ]; then
     ln -s ../../common services/notification/common
```
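The script repeats the same three-line symlink block per service, which is why the rename had to touch it. It could be collapsed into a loop; the sketch below runs in a throwaway temp workspace (the directory setup and the service list are assumptions for the demo, since the real script runs from the repo root).

```shell
# Sketch: generalize the per-service symlink blocks in scripts/setup.sh.
# Demonstrated in a temporary workspace; the service list is assumed.
set -eu
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p common services/usermanagement services/taskprocessing services/notification

for svc in usermanagement taskprocessing notification; do
    # Link the shared common/ library into each service directory.
    if [ ! -L "services/$svc/common" ]; then
        ln -s ../../common "services/$svc/common"
    fi
done
```

With a loop, a future service rename means editing one list entry instead of a dedicated `if`/`ln`/`fi` block.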

services/booking/app/__init__.py

Lines changed: 0 additions & 143 deletions
This file was deleted.

services/booking/app/app_configs.py

Lines changed: 0 additions & 20 deletions
This file was deleted.
