CAPT is a Cluster API provider that leverages Terraform to create and manage EKS clusters on AWS. It uses Crossplane's Terraform Provider to manage infrastructure components through Kubernetes-native resources.
CAPT implements a modular approach to EKS cluster management where each infrastructure component (VPC, Control Plane, Machine Resources) is managed through its own WorkspaceTemplate. This design enables:
- Clear separation of concerns between infrastructure components
- Reusable infrastructure templates
- Secure configuration management through Kubernetes secrets
- Terraform-based state management and drift detection
- ClusterClass support for standardized cluster deployments
- Independent compute resource management through Machine concept
Cluster creation is divided into four main components:
- VPC Infrastructure
- EKS Control Plane
- Compute Resources (Machine)
- Cluster Configuration
Each component is managed independently through WorkspaceTemplates and can be templated using ClusterClass. The controllers automatically create and manage WorkspaceTemplateApply resources for infrastructure provisioning, as shown in the diagram and the minimal wiring sketch below:
```mermaid
graph TD
    A[Cluster] --> B[CAPTCluster]
    A --> C[CAPTControlPlane]
    A --> D[CAPTMachineDeployment]
    B --> E[VPC WorkspaceTemplate]
    C --> F[EKS WorkspaceTemplate]
    D --> G[NodeGroup WorkspaceTemplate]
    B --> |Controller| H[VPC WorkspaceTemplateApply]
    C --> |Controller| I[EKS WorkspaceTemplateApply]
    D --> |Controller| J[NodeGroup WorkspaceTemplateApply]
    H --> E
    I --> F
    J --> G
    H --> K[VPC Infrastructure]
    I --> L[EKS Control Plane]
    I --> M[EKS Blueprints Addons]
    J --> N[Compute Resources]
    O[ClusterClass] --> A
    O --> P[CaptControlPlaneTemplate]
    P --> F
```
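Concretely, the wiring in the diagram comes down to CAPT resources referencing WorkspaceTemplates; the controllers then create and reconcile the corresponding WorkspaceTemplateApply resources. A minimal, abbreviated sketch of that relationship (the resource names here are illustrative; complete examples appear later in this document):

```yaml
# A WorkspaceTemplate wraps a Terraform module definition (abbreviated).
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: WorkspaceTemplate
metadata:
  name: vpc
spec:
  template:
    spec:
      module:
        source: "terraform-aws-modules/vpc/aws"
        version: "5.0.0"
---
# A CAPTCluster references the template; its controller creates the matching
# WorkspaceTemplateApply automatically, so you never apply one by hand.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: CAPTCluster
metadata:
  name: example
spec:
  region: us-west-2
  vpcTemplateRef:
    name: vpc
```

This approach provides: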
- Version control and tagging for clear configuration management
- State tracking for configuration drift detection
- Utilization of standard Terraform modules
- ClusterClass templates for standardized deployments
- Automatic WorkspaceTemplateApply management by controllers
- VPC retention capability for shared infrastructure scenarios
- Explicit dependency definition between components (e.g., VPC and EKS)
- Secure configuration propagation through secrets
- Independent lifecycle management for each component
- Template-based configuration with variable substitution
- Secure handling of sensitive information through Kubernetes secrets
- Automatic OIDC authentication and IAM role configuration
- Centralized security group and network policy management
- Secure configuration migration between environments
- Reusable infrastructure templates
- Customization through environment-specific variables and tags
- Automatic management of Helm charts and EKS addons
- Compatibility with existing Terraform modules
- ClusterClass for consistent cluster deployments
- Automatic Fargate profile configuration
- Efficient node scaling with Karpenter
- Integrated EKS addon management
- Extensibility through Custom Resource Definitions (CRDs)
- ClusterTopology support for advanced cluster management
This guide will help you get started with CAPT, deploy it on your Kubernetes cluster, and set up a basic integration with Cluster API.
Before you begin, ensure you have the following:
- A Kubernetes cluster (v1.19+)
- kubectl installed and configured to access your cluster
- Cluster API (v1.0+) installed on your cluster
- AWS credentials with appropriate permissions
- Crossplane with Terraform Provider installed
- Download the latest CAPT release:

  ```bash
  curl -LO https://github.com/appthrust/capt/releases/latest/download/capt.yaml
  ```

- Install CAPT:

  ```bash
  kubectl apply -f capt.yaml
  ```
  Note: The `capt.yaml` file includes all necessary Custom Resource Definitions (CRDs), RBAC settings, and the CAPT controller deployment.

- Verify the controller is running:

  ```bash
  kubectl get pods -n capt-system
  ```

  Important: The default installation uses the `controller:latest` image tag. For production use, it's recommended to use a specific version tag. You can modify the image tag in the `capt.yaml` file before applying it.
- Create a Kubernetes secret with your AWS credentials:

  ```bash
  kubectl create secret generic aws-credentials \
    --from-literal=AWS_ACCESS_KEY_ID=<your-access-key> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-key> \
    -n capt-system
  ```
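  If you prefer a declarative manifest over the imperative command, the equivalent Secret can be written as YAML (the values are placeholders to replace):

  ```yaml
  # Declarative equivalent of the kubectl create secret command above.
  apiVersion: v1
  kind: Secret
  metadata:
    name: aws-credentials
    namespace: capt-system
  type: Opaque
  stringData:
    AWS_ACCESS_KEY_ID: <your-access-key>
    AWS_SECRET_ACCESS_KEY: <your-secret-key>
  ```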
- Apply the Crossplane Terraform Provider configuration:

  ```bash
  kubectl apply -f crossplane-terraform-config/provider-config.yaml
  ```
- Create a VPC WorkspaceTemplate:

  ```yaml
  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: WorkspaceTemplate
  metadata:
    name: simple-vpc
  spec:
    template:
      metadata:
        description: "Simple VPC configuration"
      spec:
        module:
          source: "terraform-aws-modules/vpc/aws"
          version: "5.0.0"
        variables:
          name:
            value: "simple-vpc"
          cidr:
            value: "10.0.0.0/16"
  ```

  Save this as `simple-vpc.yaml` and apply it:

  ```bash
  kubectl apply -f simple-vpc.yaml
  ```
- Create a CAPTCluster resource:

  ```yaml
  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: CAPTCluster
  metadata:
    name: simple-cluster
  spec:
    region: us-west-2
    vpcTemplateRef:
      name: simple-vpc
  ```

  Save this as `simple-cluster.yaml` and apply it:

  ```bash
  kubectl apply -f simple-cluster.yaml
  ```
- Create a Cluster resource:

  ```yaml
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: simple-cluster
  spec:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: CAPTCluster
      name: simple-cluster
  ```

  Save this as `cluster.yaml` and apply it:

  ```bash
  kubectl apply -f cluster.yaml
  ```
- Check the status of your cluster:

  ```bash
  kubectl get clusters
  ```

- View the CAPTCluster resource:

  ```bash
  kubectl get captclusters
  ```

- Check the WorkspaceTemplateApply resources:

  ```bash
  kubectl get workspacetemplateapplies
  ```
Once the cluster is ready:
- Get the kubeconfig for your new EKS cluster:

  ```bash
  aws eks update-kubeconfig --name simple-cluster --region us-west-2 --kubeconfig ./kubeconfig
  ```

- Use the new kubeconfig to interact with your EKS cluster:

  ```bash
  kubectl --kubeconfig=./kubeconfig get nodes
  ```
ClusterClass provides a templated approach to cluster creation, enabling standardized deployments across your organization:
- Define ClusterClass:

  ```yaml
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: ClusterClass
  metadata:
    name: eks-class
  spec:
    controlPlane:
      ref:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: CaptControlPlaneTemplate
        name: eks-control-plane-template
    variables:
      - name: controlPlane.version
        required: true
        schema:
          openAPIV3Schema:
            type: string
            enum: ["1.27", "1.28", "1.29", "1.30", "1.31"]
  ```
- Create Cluster using ClusterClass:

  ```yaml
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: demo-cluster
  spec:
    topology:
      class: eks-class
      version: "1.31"
      variables:
        - name: controlPlane.version
          value: "1.31"
        - name: environment
          value: dev
  ```
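Because the ClusterClass captures the shared configuration, additional environments are simply new Cluster resources with different variable values. A sketch reusing the class above (the cluster name and the environment value are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster-prod
spec:
  topology:
    class: eks-class
    version: "1.31"
    variables:
      - name: controlPlane.version
        value: "1.31"
      - name: environment
        value: prod
```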
VPC infrastructure is defined through a reusable WorkspaceTemplate:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: WorkspaceTemplate
metadata:
  name: vpc-template
spec:
  template:
    metadata:
      description: "Standard VPC configuration"
    spec:
      module:
        source: "terraform-aws-modules/vpc/aws"
        version: "5.0.0"
      variables:
        name:
          value: "${var.name}"
        cidr:
          value: "10.0.0.0/16"
```
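As a sketch of the multi-AZ layout with public/private subnets and NAT Gateway described later in this document, the same template can pass additional inputs to the upstream terraform-aws-modules/vpc/aws module. The azs, private_subnets, public_subnets, and enable_nat_gateway names are standard variables of that module; the values here are illustrative, and how lists and booleans are quoted depends on the WorkspaceTemplate variable schema:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: WorkspaceTemplate
metadata:
  name: vpc-template-multi-az
spec:
  template:
    metadata:
      description: "Multi-AZ VPC with public/private subnets and NAT Gateway"
    spec:
      module:
        source: "terraform-aws-modules/vpc/aws"
        version: "5.0.0"
      variables:
        name:
          value: "${var.name}"
        cidr:
          value: "10.0.0.0/16"
        azs:
          value: ["us-west-2a", "us-west-2b", "us-west-2c"]
        private_subnets:
          value: ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
        public_subnets:
          value: ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
        enable_nat_gateway:
          value: "true"
```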
The CAPTCluster references the VPC template and can optionally retain the VPC when the cluster is deleted, which supports shared infrastructure scenarios:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: CAPTCluster
metadata:
  name: demo-cluster
spec:
  region: us-west-2
  vpcTemplateRef:
    name: vpc-template
    namespace: default
  retainVpcOnDelete: true  # VPC will be retained when the cluster is deleted
```
Compute resources are managed through a CAPTMachineDeployment that references a node group WorkspaceTemplate:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: CAPTMachineDeployment
metadata:
  name: demo-nodegroup
spec:
  replicas: 3
  template:
    spec:
      workspaceTemplateRef:
        name: nodegroup-template
      instanceType: t3.medium
      diskSize: 50
```
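Since each deployment has an independent lifecycle, additional node groups can be added by creating further CAPTMachineDeployment resources. A sketch reusing the schema above (the name, replica count, instance type, and disk size are illustrative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: CAPTMachineDeployment
metadata:
  name: demo-nodegroup-large
spec:
  replicas: 2
  template:
    spec:
      workspaceTemplateRef:
        name: nodegroup-template
      instanceType: t3.large
      diskSize: 100
```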
Each deployment references a node group WorkspaceTemplate that wraps the EKS node group module:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: WorkspaceTemplate
metadata:
  name: nodegroup-template
spec:
  template:
    metadata:
      description: "EKS Node Group configuration"
    spec:
      module:
        source: "./internal/tf_module/eks_node_group"
      variables:
        instance_types:
          value: ["${var.instance_type}"]
        disk_size:
          value: "${var.disk_size}"
```
Finally, the Cluster API Cluster resource wires the network configuration, infrastructure, and control plane together:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: CAPTCluster
    name: demo-cluster
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: CAPTControlPlane
    name: demo-cluster
```
Note: WorkspaceTemplateApply resources are automatically created and managed by the controllers. You do not need to create them manually.
- Manage related resources in the same namespace
- Use consistent naming conventions
- Define clear dependencies between components
- Check regularly for configuration drift
- Utilize ClusterClass for standardized deployments
- Let controllers manage WorkspaceTemplateApply resources
- Manage sensitive information as secrets
- Follow the principle of least privilege for IAM configuration
- Configure security groups appropriately
- Implement secure network policies
- Separate configurations per environment
- Utilize version control effectively
- Monitor and manage component lifecycles
- Regular security and compliance audits
- Use ClusterClass for consistent deployments
- Document template purposes and requirements
- Version templates appropriately
- Implement proper tagging strategies
- Maintain backward compatibility
- Leverage ClusterClass variables for flexibility
- Use WorkspaceTemplate for infrastructure definitions
- Let controllers handle WorkspaceTemplateApply lifecycle
- Standardized cluster templates
- Variable-based configuration
- Reusable control plane templates
- Consistent cluster deployments
- Environment-specific customization
- Infrastructure as code using Terraform
- Version control and metadata tracking
- Secure secret management
- Reusable infrastructure templates
- Automatic WorkspaceTemplateApply management by controllers
- Independent compute resource lifecycle
- Flexible node group configuration
- Support for multiple instance types
- Automated scaling configuration
- Integration with cluster autoscaling
- Template-based node group management
- Multi-AZ deployment
- Public and private subnets
- NAT Gateway configuration
- EKS and Karpenter integration
- VPC retention for shared infrastructure
- Independent VPC lifecycle management
- Fargate profiles for system workloads
- EKS Blueprints addons integration
- CoreDNS, VPC-CNI, and Kube-proxy configuration
- Karpenter setup for node management
- Template-based configuration with ClusterClass
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
When creating a new release:
- Update the version number in relevant files (e.g., `VERSION`, `Chart.yaml`, etc.)
- Update the CHANGELOG.md file with the new version and its changes
- Create a new tag with the version number (e.g., `v1.0.0`)
- Push the tag to the repository
- The CI/CD pipeline will automatically:
  - Build the project
  - Generate the `capt.yaml` file
  - Create a new GitHub release
  - Attach the `capt.yaml` file to the release

Users can then download and apply the `capt.yaml` file to install or upgrade CAPT.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.