This document outlines the comprehensive testing infrastructure implemented for the StarkPulse backend application. The testing strategy covers unit tests, integration tests, end-to-end tests, performance tests, and automated testing pipelines.
- Testing Strategy
- Test Types
- Test Environment Setup
- Running Tests
- Test Data Management
- Performance Testing
- CI/CD Integration
- Coverage Reports
- Best Practices
- Troubleshooting
Our testing strategy follows the test pyramid approach with emphasis on:
- Unit Tests (70%): Fast, isolated tests for individual components
- Integration Tests (20%): Tests for module interactions and database operations
- End-to-End Tests (10%): Complete user journey testing
- Performance Tests: Load and stress testing for critical endpoints
- 90%+ Code Coverage: Enforced across all modules
- Performance Benchmarks: Response times < 500ms for 95% of requests
- Error Rate: < 1% for all test scenarios
- Zero Critical Security Vulnerabilities
Location: src/**/*.spec.ts
Purpose: Test individual components, services, and utilities in isolation.
Example:
npm run test:unit
Configuration: jest.config.js
Location: test/integration/*.integration.spec.ts
Purpose: Test interactions between modules, database operations, and external services.
Key Features:
- Database integration with TestContainers
- Redis integration testing
- Blockchain service mocking
- Cross-module communication testing
Example:
npm run test:integration
Configuration: test/jest-integration.json
Location: test/e2e/*.e2e-spec.ts
Purpose: Test complete user workflows and API endpoints.
Coverage:
- Portfolio management flows
- Transaction monitoring
- Notification systems
- Authentication workflows
- Analytics endpoints
Example:
npm run test:e2e
Configuration: test/jest-e2e.json
Location: test/load-testing/
Tools:
- k6: For load testing and performance monitoring
- Artillery: For complex scenario testing
Example:
npm run test:performance
npm run test:load
- Node.js 18+
- Docker: For TestContainers (PostgreSQL, Redis)
- k6: For performance testing
- Artillery: For load testing
Create a .env.test file:
# Database
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=starkpulse_test
DATABASE_USERNAME=postgres
DATABASE_PASSWORD=postgres
# Redis
REDIS_HOST=localhost
REDIS_PORT=6379
# Blockchain
STARKNET_RPC_URL=http://localhost:5050
BLOCKCHAIN_NETWORK=testnet
# JWT
JWT_SECRET=test-secret-key-for-testing-only
# API
API_PORT=3001
API_HOST=localhost
The testing infrastructure automatically manages test databases using TestContainers:
// Automatic database setup
const testEnvironment = new TestEnvironment();
await testEnvironment.setup(); // Creates PostgreSQL + Redis containers
If you prefer manual setup:
# Start PostgreSQL
docker run -d --name postgres-test -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres:15
# Start Redis
docker run -d --name redis-test -p 6379:6379 redis:7
# Run migrations
npm run migration:run
# Seed test data
npm run test:seed
# Run all tests
npm test
# Run specific test types
npm run test:unit
npm run test:integration
npm run test:e2e
npm run test:performance
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
# Run tests for specific module
npm run test -- portfolio
npm run test -- --testPathPattern=notifications
# Unit tests only
npm run test:unit
# Integration tests with database
npm run test:integration
# E2E tests (requires running application)
npm run test:e2e
# Performance tests with k6
npm run test:performance
# Load tests with Artillery
npm run test:load
# All tests with coverage report
npm run test:coverage
# Check coverage thresholds
npm run test:coverage:check
# Generate HTML coverage report
npm run coverage:report
1. Setup Phase:
   - Start TestContainers (PostgreSQL, Redis)
   - Run database migrations
   - Seed test data
2. Test Execution:
   - Run test suites in parallel
   - Collect coverage data
   - Generate reports
3. Cleanup Phase:
   - Clear test data
   - Stop containers
   - Generate final reports
Location: test/fixtures/test-data-factory.ts
Provides factory methods for creating test data:
// Create test user
const user = TestDataFactory.createUser();
// Create portfolio with assets
const { user, assets } = TestDataFactory.createUserWithAssets(userOverrides, 5);
// Create bulk data
const users = TestDataFactory.createBulkUsers(100);
Location: test/fixtures/database-seeder.ts
Manages test data lifecycle:
const seeder = new DatabaseSeeder(testEnvironment);
// Seed data
await seeder.seedUser();
await seeder.seedPortfolioAssets(10, { userId: user.id });
// Clear data
await seeder.clearAll();
Realistic Scenarios:
- Users with diverse portfolio compositions
- Transaction histories with various statuses
- Notification preferences and history
- Market data with historical trends
Edge Cases:
- Empty portfolios
- Failed transactions
- Network timeouts
- Invalid data formats
Configuration: test/load-testing/k6-load-test.js
Test Scenarios:
- Portfolio operations (40% traffic)
- Transaction monitoring (30% traffic)
- Notifications (20% traffic)
- Market data (10% traffic)
Performance Targets:
- Response time: 95% < 500ms
- Error rate: < 1%
- Throughput: 100+ RPS
- Concurrent users: 50+
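The targets above map directly onto k6 threshold expressions. `http_req_duration` and `http_req_failed` are k6 built-in metrics; the stage shape below is an illustrative ramp, not the actual contents of `k6-load-test.js`:

```typescript
// Sketch of k6 options encoding the performance targets.
// k6 aborts the run with a non-zero exit code when a threshold fails.
export const options = {
  stages: [
    { duration: "1m", target: 50 }, // ramp up to 50 concurrent users
    { duration: "3m", target: 50 }, // hold steady load
    { duration: "1m", target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ["p(95)<500"], // 95% of requests under 500ms
    http_req_failed: ["rate<0.01"],   // error rate below 1%
  },
};
```

Encoding the targets as thresholds turns the performance budget into a pass/fail gate rather than a number someone has to eyeball in a report.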
Run Performance Tests:
# Default load test
npm run test:performance
# Custom k6 test
k6 run test/load-testing/k6-load-test.js
# With environment variables
BASE_URL=http://localhost:3000 k6 run test/load-testing/k6-load-test.js
Configuration: test/load-testing/artillery-config.yml
Advanced Scenarios:
- Multi-phase load testing
- User journey simulation
- Performance regression testing
Run Artillery Tests:
# Default artillery test
npm run test:load
# Custom artillery test
artillery run test/load-testing/artillery-config.yml
# Generate HTML report
artillery run --output report.json test/load-testing/artillery-config.yml
artillery report report.json
Location: .github/workflows/ci-cd.yml
Pipeline Stages:
- Lint and Format: Code quality checks
- Unit Tests: Fast isolated tests
- Integration Tests: Database and service integration
- E2E Tests: Complete workflow testing
- Performance Tests: Load and stress testing (main branch only)
- Coverage Report: Aggregate coverage analysis
- Security Scan: Vulnerability assessment
- Build and Deploy: Production deployment (main branch only)
Parallel Execution: Tests run in parallel for faster feedback
Service Dependencies:
- PostgreSQL 15
- Redis 7
- Application runtime
Environment Variables: Configured per pipeline stage
Artifacts:
- Test reports
- Coverage reports
- Performance metrics
- Build artifacts
Merge Requirements:
- All tests must pass
- Coverage > 90%
- No lint errors
- Security scan pass
- Performance benchmarks met
Jest Configuration: Enforces 90%+ coverage across:
- Statements: 90%
- Branches: 90%
- Functions: 90%
- Lines: 90%
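These floors correspond to jest's `coverageThreshold` option; the fragment below is a sketch of how they would appear in `jest.config.js`, not a copy of the actual config:

```typescript
// jest fails the run when coverage dips below any of these
// global minimums, enforcing the 90% floor automatically.
const coverageThreshold = {
  global: {
    statements: 90,
    branches: 90,
    functions: 90,
    lines: 90,
  },
};

export default coverageThreshold;
```

Keeping the gate in the jest config (rather than a separate CI script) means local `npm run test:coverage` runs fail the same way the pipeline does.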
# Generate coverage report
npm run test:coverage
# Check coverage thresholds
npm run test:coverage:check
# Generate HTML report
npm run coverage:report
# View coverage in browser
open coverage/lcov-report/index.html
Codecov Integration: Automatic upload to Codecov for tracking
PR Comments: Coverage changes commented on pull requests
Badge: Coverage badge in README
- Follow AAA Pattern: Arrange, Act, Assert
- Descriptive Names: Use clear, descriptive test names
- Single Responsibility: One assertion per test when possible
- Mock External Dependencies: Use proper mocking for external services
- Clean Setup/Teardown: Proper test data lifecycle management
- Logical Grouping: Group related tests in describe blocks
- Shared Setup: Use beforeAll/beforeEach for common setup
- Test Isolation: Each test should be independent
- Resource Cleanup: Always clean up resources after tests
- Parallel Execution: Run tests in parallel when possible
- Database Optimization: Use transactions for faster rollback
- Mock Heavy Operations: Mock expensive operations in unit tests
- Resource Limits: Set appropriate timeouts and resource limits
- Deterministic Data: Use consistent test data
- Isolation: Isolate test data between test runs
- Realistic Scenarios: Create realistic test scenarios
- Edge Cases: Include edge cases and error conditions
# Check if PostgreSQL container is running
docker ps | grep postgres
# Restart test environment
npm run test:db:restart
# Check Redis container
docker ps | grep redis
# Test Redis connection
redis-cli -h localhost -p 6379 ping
# Check application startup
curl http://localhost:3000/health
# Verify test data
npm run test:seed
# Generate detailed coverage report
npm run coverage:report
# Identify uncovered code
open coverage/lcov-report/index.html
# Run specific test file
npm run test -- test/e2e/portfolio.e2e-spec.ts
# Run with debug output
DEBUG=true npm run test -- portfolio
# Run single test case
npm run test -- --testNamePattern="should create portfolio"
# Start test environment manually
npm run test:env:start
# Check container logs
docker logs $(docker ps -q --filter name=postgres-test)
docker logs $(docker ps -q --filter name=redis-test)
# Stop test environment
npm run test:env:stop
# Run tests with timing
npm run test -- --verbose
# Profile test execution
NODE_ENV=test npm run test -- --detectSlowTests
# Check database query performance
npm run test:integration -- --verbose
# Optimize test data
npm run test:data:optimize
For testing-related questions:
- Check this documentation
- Review existing test examples
- Create an issue in the repository
- Contact the development team
Last Updated: July 2025
Version: 1.0.0
Maintainer: StarkPulse Development Team