Because of the way AWS ECR Docker registries work, credentials are short-lived (ECR authorization tokens expire after 12 hours) and have to be refreshed regularly.
This image pulls fresh credentials from ECR every hour and injects them into Kubernetes so that images can be pulled from a private ECR repository.
This makes it possible to use AWS ECR registries even when your Kubernetes cluster runs in another cloud provider, or when you don't want to set up EC2 instance roles for it.
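Under the hood, a Kubernetes image pull secret is just a secret of type `kubernetes.io/dockerconfigjson`; conceptually, ecrupdater keeps overwriting one with a fresh ECR token. A rough sketch of the shape it maintains (the registry URL and token are placeholders, and the exact fields are an assumption about the implementation):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ecr                     # matches K8S_PULL_SECRET_NAME
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded JSON like:
  # {"auths": {"<account>.dkr.ecr.<region>.amazonaws.com": {"username": "AWS", "password": "<fresh token>"}}}
  .dockerconfigjson: eyJhdXRocyI6IC4uLn0=   # base64 of {"auths": ...} (placeholder)
```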
Grab container images from: https://cloud.docker.com/u/trondhindenes/repository/docker/trondhindenes/k8s-ecrupdater
Configure with the following environment variables:

- `K8S_PULL_SECRET_NAME`: name of the Kubernetes pull secret to update
- `ECR_UPDATE_INTERVAL`: (optional) update interval, in seconds
- `ECR_CREATE_MISSING`: if this envvar is set to `true`, missing pull secrets will be created in all namespaces (there's a good chance this will fail on older (pre-1.11) clusters)
- `AWS_DEFAULT_REGION`: set to your AWS region
- `AWS_ACCESS_KEY_ID`: AWS credentials
- `AWS_SECRET_ACCESS_KEY`: AWS credentials

Note that if you're using an alternate method of providing the pod with AWS credentials (such as kube2iam or similar), you can skip the `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` configuration items. A sample `env` block is shown below.
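As an illustration, the `env` section of the updater container could look like this (the secret name and key names in the `secretKeyRef`s are assumptions; match them to your own credentials manifest):

```yaml
env:
  - name: K8S_PULL_SECRET_NAME
    value: ecr
  - name: ECR_UPDATE_INTERVAL
    value: "3600"                # refresh every hour
  - name: ECR_CREATE_MISSING
    value: "true"
  - name: AWS_DEFAULT_REGION
    value: eu-west-1
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: ecr-updater-aws-credentials   # assumed secret name
        key: aws-access-key-id              # assumed key name
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: ecr-updater-aws-credentials
        key: aws-secret-access-key
```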
It is assumed that you already have ECR set up, an IAM user with access to it, and that you have kubectl configured to communicate with your Kubernetes cluster.
You can also run the updater locally using kubectl proxy on your computer if you want to test things out. In that case, make sure the proxy listens on localhost:8001 (which is the kubectl proxy default).
- (This step is only required if `ECR_CREATE_MISSING` is not set to `true`.) Create a secret called `ecr`. This is the secret that this pod will update regularly. It doesn't matter what you put in it, as ecrupdater will update it; it just needs to exist:

  ```
  kubectl create secret docker-registry ecr --docker-username=smash --docker-password=lol --docker-email [email protected]
  ```

  NOTE: ecrupdater will look for secrets with the specified name across all your namespaces if you're using the authorization template below. So in this example, any secret named `ecr` in any namespace will be updated. If you want to separate them, you can run multiple instances of ecrupdater, optionally with tighter (namespace-isolated) security.
- Create the authorization objects that let kubectl-proxy (running in the same pod as the ecr-updater) interact with Kubernetes:

  ```
  kubectl apply -f example_deployment/01_authorization.yml
  ```
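  The file in the repo is authoritative, but as a rough sketch, the updater needs cluster-wide access to secrets along these lines (all object names here are illustrative):

  ```yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: ecr-updater
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ecr-updater
  rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "list", "create", "update", "patch"]
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["list"]            # needed to walk all namespaces
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: ecr-updater
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: ecr-updater
  subjects:
    - kind: ServiceAccount
      name: ecr-updater
      namespace: default
  ```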
- Create an IAM user that has read access to your registries. The access key and secret key need to be base64-encoded (remember to use the `-n` option):

  ```
  echo -n "PUT_ACCESSKEY_HERE" | base64
  echo -n "PUT_SECRETKEY_HERE" | base64
  ```

  Put this info in the file `example_deployment/01_aws_credentials.yml` in this repo. Now you can create the secret that will hold this info. This is how the ecr updater will log on to AWS:

  ```
  kubectl apply -f example_deployment/01_aws_credentials.yml
  ```
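  For reference, a credentials file like this is presumably a plain opaque Secret; a sketch, with assumed object and key names (the `data` values are the base64 strings produced by the commands above):

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: ecr-updater-aws-credentials   # assumed name; match your deployment's secretKeyRef
  type: Opaque
  data:
    aws-access-key-id: UFVUX0FDQ0VTU0tFWV9IRVJF       # base64 of "PUT_ACCESSKEY_HERE"
    aws-secret-access-key: UFVUX1NFQ1JFVEtFWV9IRVJF   # base64 of "PUT_SECRETKEY_HERE"
  ```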
- Deploy the pod. This contains both the ecr-updater and a "sidecar" container running kubectl-proxy. The proxy allows communication with the Kubernetes API in a simple manner. Make sure to set your correct AWS region in `example_deployment/02_deployment.yml` before deploying!

  ```
  kubectl apply -f example_deployment/02_deployment.yml
  ```
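  A minimal sketch of such a two-container deployment, under the assumption that the sidecar can be any image with kubectl as its entrypoint (the repo's `02_deployment.yml` is authoritative; image names and most values here are placeholders):

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ecr-updater
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ecr-updater
    template:
      metadata:
        labels:
          app: ecr-updater
      spec:
        serviceAccountName: ecr-updater          # from the authorization step
        containers:
          - name: ecr-updater
            image: trondhindenes/k8s-ecrupdater
            env:
              - name: K8S_PULL_SECRET_NAME
                value: ecr
              - name: AWS_DEFAULT_REGION
                value: eu-west-1                 # <- set your region here
          - name: kubectl-proxy                  # sidecar: exposes the API on localhost:8001
            image: lachlanevenson/k8s-kubectl    # assumed; any image that runs kubectl works
            args: ["proxy", "--port=8001"]
  ```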
- Test a deployment. Replace the container image with one from your own ECR registry, deploy it, and prosper! (Note that ecrupdater initially pauses for 60 seconds, so make sure enough time has passed between the ecr-updater pod coming online and running the next command.)

  ```
  kubectl apply -f example_deployment/03_pullsecret_test.yml
  ```
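  The test manifest presumably boils down to a pod that references the managed pull secret, along these lines (the image URI is a placeholder for one of your own ECR images):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: pullsecret-test
  spec:
    imagePullSecrets:
      - name: ecr               # the secret ecrupdater keeps refreshed
    containers:
      - name: app
        image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest  # placeholder
  ```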