
Cross-cloud AWS access using Otterize with cert-manager CSI Driver SPIFFE

Watch the CNCF webinar that this repository accompanies: https://www.youtube.com/watch?v=8scIIMRnp08

This demo sets up cert-manager and its CSI Driver SPIFFE in a non-AWS Kubernetes cluster. Otterize is set up in the same Kubernetes cluster and uses the cert-manager CSI Driver SPIFFE to authenticate to AWS and automate the creation of AWS roles and policies for the different workloads running in that non-AWS Kubernetes cluster.

This is a demo repository that builds upon the KubeCon EU 2023 talk by Josh van Leeuwen and Thomas Meadows, in which they presented how to leverage cert-manager with its CSI Driver SPIFFE to authenticate to AWS using IAM Roles Anywhere. Their prior work for the talk can be found in the following GitHub repository.

Prerequisites

The steps below assume you have:

    • A non-AWS Kubernetes cluster, with kubectl configured against it
    • helm
    • cmctl (the cert-manager CLI)
    • terraform
    • The AWS CLI, authenticated with permissions to manage IAM and S3

Setup

  1. Set up cert-manager and disable the automated CertificateRequest approver. We disable it because we want to let CSI Driver SPIFFE manage approval of SPIFFE certificates.

    helm repo add jetstack https://charts.jetstack.io --force-update
    
    helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager \
      --set extraArgs={--controllers='*\,-certificaterequests-approver'} \
      --set installCRDs=true \
      --create-namespace
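  Before moving on, you can verify that cert-manager is running and that the approver controller was indeed disabled:

    kubectl get pods -n cert-manager
    kubectl get deploy -n cert-manager cert-manager \
      -o jsonpath='{.spec.template.spec.containers[0].args}'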
  2. Set up a self-signed issuer and generate your own signing CA. This signing CA will need to be approved manually, as we disabled the automated approver in the previous step.

    kubectl apply -f issuer.yaml
    cmctl approve -n cert-manager \
      $(kubectl get cr -n cert-manager -ojsonpath='{.items[0].metadata.name}')
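  For reference, issuer.yaml follows the usual CSI Driver SPIFFE bootstrap pattern: a self-signed issuer signs an in-cluster CA certificate, which then backs a CA issuer. A minimal sketch of that pattern (resource names here are assumptions; the issuer.yaml in this repository is authoritative):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: selfsigned        # bootstrap issuer, only used to sign the CA
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: csi-driver-spiffe-ca
      namespace: cert-manager
    spec:
      isCA: true
      commonName: csi-driver-spiffe-ca
      secretName: csi-driver-spiffe-ca
      privateKey:
        algorithm: ECDSA
        size: 256
      issuerRef:
        name: selfsigned
        kind: ClusterIssuer
        group: cert-manager.io
    ---
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: csi-driver-spiffe-ca   # the CA issuer the CSI driver requests certificates from
    spec:
      ca:
        secretName: csi-driver-spiffe-ca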
  3. Install the cert-manager CSI Driver SPIFFE. This uses a modified version of the cert-manager CSI Driver SPIFFE that automatically authenticates to AWS; these changes will make their way upstream soon.

    helm upgrade -i -n cert-manager cert-manager-csi-driver-spiffe jetstack/cert-manager-csi-driver-spiffe --version v0.5.0 -f values.yaml
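  The keys below are the ones the upstream cert-manager-csi-driver-spiffe chart expects; a rough sketch of what values.yaml likely configures (the AWS-specific settings of the modified build are not shown here, and the issuer name is an assumption):

    app:
      # Must match the trust domain passed to the Otterize Helm chart below.
      trustDomain: spiffe.cert-manager.io
      issuer:
        # The CA issuer created in step 2 (assumed name).
        name: csi-driver-spiffe-ca
        kind: ClusterIssuer
        group: cert-manager.io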
  4. We need to prepare a few things directly on the AWS side to allow Otterize to connect from our Kubernetes cluster to AWS. The Terraform will set up the following:

    • Retrieve the public key of your CA from your Kubernetes cluster and define it as a trust anchor for IAM Roles Anywhere.
    • Set up an IAM policy, role, and IAM Roles Anywhere profile for both the Otterize credentials and intents operators.
    • Deploy all of this in the eu-west-2 AWS region (you can change it in the variables, but don't forget to do the same in later steps).
    cd otterize-aws
    terraform init
    terraform apply
    cd ..
  5. Capture the Terraform outputs (the trust anchor, profile, and role ARNs) for later use, as shown in the sketch below.
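  A sketch of capturing the outputs into shell variables; the output names here are assumptions, so run terraform output inside otterize-aws to see the names this repository actually uses:

    cd otterize-aws
    terraform output                 # list every output and its value
    # Example of capturing a single output into a variable (output name assumed):
    export TRUST_ANCHOR_ARN=$(terraform output -raw trust_anchor_arn)
    cd ..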

  6. Set up Otterize with the AWS integration. This uses the modified version of Otterize that supports AWS IAM Roles Anywhere.

    helm repo add otterize https://helm.otterize.com --force-update
    
    helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
      --set intentsOperator.operator.mode=defaultActive  \
      --set global.aws.enabled=true \
      --set global.aws.region=eu-west-2 \
      --set global.aws.rolesAnywhere.enabled=true \
      --set global.aws.rolesAnywhere.clusterName=otterize-csi-spiffe-demo \
      --set 'global.aws.rolesAnywhere.accounts[0].trustDomain=spiffe.cert-manager.io' \
      --set 'global.aws.rolesAnywhere.accounts[0].trustAnchorARN=<arn>' \
      --set 'global.aws.rolesAnywhere.accounts[0].id=353146681200' \
      --set 'global.aws.rolesAnywhere.accounts[0].intentsOperator.profileARN=<arn from terraform output>' \
      --set 'global.aws.rolesAnywhere.accounts[0].credentialsOperator.profileARN=<arn>' \
      --set 'global.aws.rolesAnywhere.accounts[0].intentsOperator.roleARN=<arn>' \
      --set 'global.aws.rolesAnywhere.accounts[0].credentialsOperator.roleARN=<arn>'
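  Once the chart is installed, the Otterize components (including the intents operator and credentials operator) should come up in the otterize-system namespace:

    kubectl get pods -n otterize-system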
  7. Create an S3 bucket and deploy the demo application.

    export BUCKET_NAME=otterize-tutorial-bucket-$(date +%s)
    echo $BUCKET_NAME
    aws s3api create-bucket --bucket $BUCKET_NAME --region eu-west-2 --create-bucket-configuration LocationConstraint=eu-west-2
    kubectl create namespace otterize-tutorial-iam
    kubectl apply -n otterize-tutorial-iam -f https://docs.otterize.com/code-examples/aws-iam-eks/client-and-server.yaml
    kubectl patch deployment -n otterize-tutorial-iam server --type='json' -p="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/env\", \"value\": [{\"name\": \"BUCKET_NAME\", \"value\": \"$BUCKET_NAME\"}]}]"
  8. Watch the server's logs and note the credentials errors. These errors are expected, as we haven't yet told Otterize to manage this workload's access to AWS.

    kubectl logs -f -n otterize-tutorial-iam deploy/server
  9. Allow the server deployment to create CertificateRequests so it can get a SPIFFE ID, and add the label that lets Otterize create the IAM role; a sketch of the patch follows the commands below.

    kubectl apply -f rbac.yaml
    kubectl patch deployment server -n otterize-tutorial-iam --patch-file server-patch.yaml
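  For reference, rbac.yaml grants the server's ServiceAccount permission to create CertificateRequests, and server-patch.yaml adds the Otterize label plus the CSI volume. A rough sketch of the patch (the label and CSI driver name are the documented Otterize and csi-driver-spiffe identifiers; the volume name, mount path, and container name are assumptions):

    spec:
      template:
        metadata:
          labels:
            # Tells the Otterize credentials operator to create an IAM role for this workload.
            credentials-operator.otterize.com/create-aws-role: "true"
        spec:
          volumes:
            - name: spiffe
              csi:
                driver: spiffe.csi.cert-manager.io
                readOnly: true
          containers:
            - name: server
              volumeMounts:
                - name: spiffe
                  mountPath: /var/run/secrets/spiffe.io
                  readOnly: true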
  10. Go to the AWS IAM console and look for an IAM role whose name starts with otr. The server pod will now also have the CSI Driver SPIFFE volume attached to it, which you can see by running kubectl describe on the pod.
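  For example (assuming the demo pods carry an app=server label):

    kubectl describe pod -n otterize-tutorial-iam -l app=server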

  11. Create an Otterize ClientIntents resource. Make sure to set the correct S3 bucket name in the ClientIntents.

    kubectl apply -f intent.yaml
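  A sketch of what intent.yaml looks like for this demo, following the ClientIntents AWS syntax from the Otterize docs (substitute the bucket name you created in step 7):

    apiVersion: k8s.otterize.com/v1alpha3
    kind: ClientIntents
    metadata:
      name: server
      namespace: otterize-tutorial-iam
    spec:
      service:
        name: server
      calls:
        # Replace with the ARN of your bucket, e.g. arn:aws:s3:::$BUCKET_NAME/*
        - name: arn:aws:s3:::otterize-tutorial-bucket-*/*
          type: aws
          awsActions:
            - "s3:PutObject"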
  12. Check the server's logs again and notice that the upload now succeeds. You can also validate the generated policy by going back to the IAM role in the AWS console.

    kubectl logs -f -n otterize-tutorial-iam deploy/server
