Infrastructure-as-code configurations for deploying Amazon EKS clusters with the Flox containerd shim pre-installed. This enables running Flox development environments directly inside Kubernetes pods.
See the Flox documentation for more details.
This repository provides four deployment options:
- `terraform/new-cluster/` - Complete EKS cluster with VPC and Flox-enabled node groups using Terraform
- `terraform/new-nodegroup/` - Add a Flox-enabled node group to an existing EKS cluster using Terraform
- `eksctl/new-cluster/` - Complete EKS cluster with Flox-enabled node groups using eksctl
- `eksctl/new-nodegroup/` - Add a Flox-enabled node group to an existing EKS cluster using eksctl
- Terraform >= 1.6 or OpenTofu (the examples below use the `tofu` CLI)
- AWS CLI configured with appropriate credentials
- AWS account with permissions to create VPC, EKS, and EC2 resources
Create a complete EKS cluster with VPC:
```bash
cd terraform/new-cluster

# Initialize Terraform
tofu init

# Review the planned changes
tofu plan

# Create the infrastructure
tofu apply

# Configure kubectl
aws eks update-kubeconfig --name flox-eks-tf --region us-east-1
```

Note: The default region in `main.tf` is `us-east-1`. Update `local.region` if needed.
Add a Flox-enabled node group to an existing Terraform-managed EKS cluster:
```bash
# Copy nodegroup.tf into your existing Terraform configuration directory
cp terraform/new-nodegroup/nodegroup.tf /path/to/your/cluster/terraform/

# Update nodegroup.tf to match your existing resource names
# (module names, local variables, etc.)
cd /path/to/your/cluster/terraform/

# Review the planned changes
tofu plan

# Create the node group
tofu apply

# Verify nodes
kubectl get nodes --show-labels | grep flox.dev/enabled
```

Note: The `nodegroup.tf` file references `module.eks`, `module.vpc`, and `local.name`; adjust these to match your existing Terraform configuration's resource names.
Create a new EKS cluster with Flox support:
```bash
# Create cluster
eksctl create cluster -f eksctl/new-cluster/cluster.yaml

# Verify nodes
kubectl get nodes --show-labels | grep flox.dev/enabled
```

Add a Flox-enabled node group to an existing cluster:
```bash
# Update the cluster name in eksctl/new-nodegroup/nodegroup.yaml
# to match your existing cluster

# Create the node group
eksctl create nodegroup -f eksctl/new-nodegroup/nodegroup.yaml

# Verify nodes
kubectl get nodes --show-labels | grep flox.dev/enabled
```

All configurations install and configure the Flox shim with:
- Flox Installation: Installs the Flox CLI via RPM during node bootstrap
- Shim Activation: Activates the `containerd-shim-flox-installer` environment
- Containerd Runtime: Configures a custom `flox` runtime in containerd
- Node Labels: Adds the `flox.dev/enabled: "true"` label for pod scheduling (see the RuntimeClass sketch below)
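Pods opt in to this runtime through a RuntimeClass. A minimal sketch, assuming the containerd handler is registered under the name `flox` as described above; verify the exact handler and class names against the repository's manifests or the Flox documentation:

```yaml
# RuntimeClass tying pods to the Flox containerd runtime (name/handler assumed to be "flox")
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: flox                 # referenced by pods via spec.runtimeClassName
handler: flox                # containerd runtime configured during node bootstrap
scheduling:
  nodeSelector:
    flox.dev/enabled: "true" # only place such pods on Flox-enabled nodes
```

With this in place, any pod that sets `runtimeClassName: flox` is automatically constrained to nodes carrying the label.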
Edit `terraform/new-cluster/main.tf` to customize:
- Region: Change `local.region` (default: `us-east-1`)
- Cluster name: Change `local.name` (default: `flox-eks-tf`)
- Instance type: Change `instance_types` (default: `t3.small`)
- Node capacity: Adjust `desired_size`, `min_size`, `max_size`
- Access CIDR: Update `endpoint_public_access_cidrs` to restrict access to the public API endpoint
Edit the eksctl YAML files (`cluster.yaml`, `nodegroup.yaml`) to customize (illustrated in the sketch after this list):
- Cluster name: `metadata.name`
- Region: `metadata.region`
- Instance type: `managedNodeGroups[].instanceType`
- Capacity: `desiredCapacity`, `minSize`, `maxSize`
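For orientation, these fields sit in an eksctl `ClusterConfig` roughly as follows. The values are illustrative placeholders, and the real files in this repository also carry the pre-bootstrap commands that install the shim:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-flox-cluster        # cluster name (placeholder)
  region: us-east-1            # AWS region
managedNodeGroups:
  - name: flox-nodes           # node group name (placeholder)
    instanceType: t3.small
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    labels:
      flox.dev/enabled: "true" # label used for scheduling onto shim-enabled nodes
```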
Pods not scheduling on Flox nodes: Ensure you're using the RuntimeClass and that nodes have the `flox.dev/enabled: "true"` label (see the pod sketch below).
Shim not found: Check that the pre-bootstrap commands completed successfully. Review cloud-init logs from Actions->Monitor and troubleshoot->Get system log in the EC2 console.
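For the scheduling issue above, a pod needs both the runtime class and a matching node selector (the selector is added automatically if the RuntimeClass carries one). A minimal test pod, assuming the RuntimeClass is named `flox`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flox-runtime-test      # throwaway pod for verifying scheduling
spec:
  runtimeClassName: flox       # must match the installed RuntimeClass
  nodeSelector:
    flox.dev/enabled: "true"   # target Flox-enabled nodes explicitly
  containers:
    - name: shell
      image: busybox:1.36      # any small image works; placeholder
      command: ["sleep", "3600"]
```

If the pod stays `Pending`, `kubectl describe pod flox-runtime-test` shows whether scheduling failed on the node selector or the runtime class.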
```bash
# Destroy the Terraform-managed cluster
cd terraform/new-cluster
tofu destroy

# Delete entire eksctl-managed cluster
eksctl delete cluster --name flox --region us-east-1

# Delete only node group
eksctl delete nodegroup --cluster flox-sandbox --name flox --region us-east-1
```