This guide covers deploying the Open FinOps Stack using Docker, eliminating the need for Python virtual environment setup.
```bash
# Make the wrapper executable
chmod +x finops-docker.sh

# Import AWS CUR data
./finops-docker.sh aws import-cur

# List available manifests
./finops-docker.sh aws list-manifests

# Show help
./finops-docker.sh --help
```

```bash
# Build the pipeline image
docker build -t finops-pipeline .

# Import AWS CUR data
docker run --rm \
  -v $(pwd)/data:/data \
  -v $(pwd)/config.toml:/app/config.toml:ro \
  finops-pipeline aws import-cur

# List available manifests
docker run --rm \
  -v $(pwd)/config.toml:/app/config.toml:ro \
  finops-pipeline aws list-manifests
```

```bash
# Start Metabase only
docker-compose up -d

# Include pipeline service
docker-compose -f docker-compose.yml -f docker-compose.pipeline.yml up -d

# Run one-off import job
docker-compose -f docker-compose.yml -f docker-compose.pipeline.yml run aws-import

# Run manifest listing
docker-compose -f docker-compose.yml -f docker-compose.pipeline.yml run aws-list
```

- Docker: Install Docker Desktop from docker.com
- Configuration: Create `config.toml` with your AWS settings (see SETUP.md)
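Before building anything, it is worth confirming Docker itself is usable. A quick check (the `docker_status` variable is purely illustrative):

```shell
# Confirm Docker is available before continuing
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker_status="present"
else
  echo "Docker not found - install Docker Desktop first"
  docker_status="missing"
fi
```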
The Docker deployment expects this directory structure:
```
/
├── config.toml                  # Your configuration (required)
├── data/                        # Database and output files (auto-created)
├── tmp/                         # Temporary test data (auto-created)
├── finops-docker.sh             # Docker wrapper script
├── docker-compose.yml           # Metabase service
├── docker-compose.pipeline.yml  # Pipeline service extension
└── Dockerfile                   # Pipeline image definition
```
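The two auto-created directories can also be prepared up front; `config.toml` itself has to come from your own settings (see SETUP.md), so this sketch only warns when it is missing:

```shell
# Prepare the host-side layout the containers expect
mkdir -p data tmp   # mounted as /data and /tmp in the container
[ -f config.toml ] || echo "config.toml is missing - see SETUP.md"
```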
| Host Path | Container Path | Purpose |
|---|---|---|
| `./data/` | `/data` | DuckDB database and output files |
| `./config.toml` | `/app/config.toml` | Configuration (read-only) |
| `./tmp/` | `/tmp` | Temporary test data |
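Putting the three mounts together: a small helper function (illustrative only, `finops_run` is not part of the project) that wires all of them into one `docker run`:

```shell
# Hypothetical wrapper: run any pipeline subcommand with all three mounts
finops_run() {
  docker run --rm \
    -v "$(pwd)/data":/data \
    -v "$(pwd)/config.toml":/app/config.toml:ro \
    -v "$(pwd)/tmp":/tmp \
    finops-pipeline "$@"
}
```

Usage: `finops_run aws list-manifests`.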
The Docker container supports these environment variables:
- `OPEN_FINOPS_DATA_DIR`: Data directory path (default: `/data`)
- `OPEN_FINOPS_AWS_*`: AWS configuration overrides
- `AWS_ACCESS_KEY_ID`: AWS credentials
- `AWS_SECRET_ACCESS_KEY`: AWS credentials
- `AWS_DEFAULT_REGION`: AWS region
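For example, credentials can be forwarded with bare `-e NAME` flags (Docker copies the value from the host environment). The `finops_env_run` helper name is illustrative:

```shell
# Forward AWS credentials from the host and pin the data directory
finops_env_run() {
  docker run --rm \
    -v "$(pwd)/data":/data \
    -v "$(pwd)/config.toml":/app/config.toml:ro \
    -e AWS_ACCESS_KEY_ID \
    -e AWS_SECRET_ACCESS_KEY \
    -e AWS_DEFAULT_REGION \
    -e OPEN_FINOPS_DATA_DIR=/data \
    finops-pipeline "$@"
}
```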
```bash
# Using wrapper script
./finops-docker.sh aws import-cur --start-date 2024-01 --end-date 2024-03

# Using docker directly
docker run --rm \
  -v $(pwd)/data:/data \
  -v $(pwd)/config.toml:/app/config.toml:ro \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  finops-pipeline aws import-cur --start-date 2024-01
```

```bash
# Run tests in container
docker run --rm \
  -v $(pwd):/app \
  -w /app \
  finops-pipeline python -m pytest tests/

# Interactive shell for debugging
docker run --rm -it \
  -v $(pwd)/data:/data \
  -v $(pwd)/config.toml:/app/config.toml:ro \
  finops-pipeline bash
```

```bash
# Start everything
docker-compose -f docker-compose.yml -f docker-compose.pipeline.yml up -d

# Check logs
docker-compose logs finops-pipeline
docker-compose logs metabase

# Import data
docker-compose -f docker-compose.yml -f docker-compose.pipeline.yml run aws-import

# Access Metabase
open http://localhost:3000
```

**"config.toml not found"**
- Create `config.toml` in the current directory
- See SETUP.md for configuration examples
**"Permission denied"**
- Make sure Docker has access to the current directory
- On Linux, you may need to adjust file permissions: `chmod 755 ./data`
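A root-owned `./data` directory is another common cause: the image runs as root by default, so files it writes are root-owned. One workaround (a sketch; it assumes the image tolerates a non-root user) is matching the container user to your own UID/GID:

```shell
# Run with the host user's UID/GID so files in ./data stay owned by you
run_as_host_user() {
  docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v "$(pwd)/data":/data \
    -v "$(pwd)/config.toml":/app/config.toml:ro \
    finops-pipeline "$@"
}
```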
**"AWS credentials not found"**
- Set AWS environment variables or configure AWS CLI
- Mount your AWS credentials: `-v ~/.aws:/root/.aws:ro`
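In full, the credentials mount looks like this (`with_aws_creds` is an illustrative helper name; `/root/.aws` matches the container's default root user):

```shell
# Reuse host AWS CLI credentials inside the container (read-only mount)
with_aws_creds() {
  docker run --rm \
    -v "$HOME/.aws":/root/.aws:ro \
    -v "$(pwd)/data":/data \
    -v "$(pwd)/config.toml":/app/config.toml:ro \
    finops-pipeline "$@"
}
```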
**"Database locked"**
- Stop any running Metabase containers: `docker-compose down`
- The DuckDB file can only be accessed by one process at a time
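To find out what is still holding the lock, a sketch (the `*.duckdb` glob is illustrative; check your config for the actual database path, and note `lsof` may need to be installed):

```shell
# Stop containers, then report any process still holding the database open
check_db_lock() {
  docker-compose down
  lsof ./data/*.duckdb 2>/dev/null || echo "no process holds the database file"
}
```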
**"Image not found"**
```bash
# Rebuild the image
docker build -t finops-pipeline . --no-cache
```

**"Dependencies failed to install"**
```bash
# Check requirements.txt exists and is valid
cat requirements.txt

# Build with verbose output
docker build -t finops-pipeline . --progress=plain
```

- Database Location: The `/data` volume should be on fast storage (SSD)
- Memory: DuckDB operations can be memory-intensive for large datasets
- Networking: AWS S3 access speed affects import performance
- Configuration file contains sensitive AWS credentials
- Use environment variables or AWS IAM roles in production
- The container runs as root by default; consider adding a `USER` directive for production
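A minimal sketch of that hardening (the user name and UID are illustrative) is to create an unprivileged user near the end of the Dockerfile:

```dockerfile
# Create an unprivileged user and drop root before the entrypoint runs
RUN useradd --create-home --uid 1000 finops
USER finops
```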
To use a different Python version or add system packages:
```dockerfile
# Use a different Python version
FROM python:3.12-slim

# Add custom system packages
RUN apt-get update && apt-get install -y \
    your-package-here \
    && rm -rf /var/lib/apt/lists/*
```

For development with code changes:
```bash
# Mount source code for live editing
docker run --rm -it \
  -v $(pwd):/app \
  -w /app \
  finops-pipeline bash
```

Example GitHub Actions workflow:
```yaml
name: Test Docker Build
on: [push, pull_request]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t finops-pipeline .
      - name: Test CLI
        run: docker run --rm finops-pipeline --help
```

- Import your data: Configure AWS credentials and run the import
- Explore with Metabase: Connect to your DuckDB database
- Build dashboards: Create custom visualizations
- Scale up: Consider production deployment options
For more information, see:
- SETUP.md - Initial configuration
- VISUALIZATION.md - Metabase setup
- README.md - Project overview