Commit f245df0

aws visibility tutorial (#203)

Co-authored-by: Ori Shoshan <[email protected]>
bglynn and orishoshan committed Mar 14, 2024
1 parent 6f211f4 commit f245df0

Showing 11 changed files with 466 additions and 92 deletions.
54 changes: 8 additions & 46 deletions docs/features/aws-iam/tutorials/aws-iam-eks.mdx
````diff
@@ -25,56 +25,18 @@ Before you start, you'll need an AWS EKS cluster. Any cluster will do; there are
 <summary>How to set up an AWS EKS cluster using eksctl</summary>
 
 
-Save this `yaml` as `cluster-config.yaml`:
+Run the following command to create your AWS cluster. [Don't have eksctl? Install it now.](https://eksctl.io/installation/)
 
-```yaml
-apiVersion: eksctl.io/v1alpha5
-kind: ClusterConfig
-
-metadata:
-  name: otterize-iam-eks-tutorial
-  region: us-west-2
-  version: "1.27"
-
-iam:
-  withOIDC: true
-
-vpc:
-  clusterEndpoints:
-    publicAccess: true
-    privateAccess: true
-
-addons:
-  - name: vpc-cni
-    version: 1.14.0
-    attachPolicyARNs: #optional
-      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
-    configurationValues: |-
-      enableNetworkPolicy: "true"
-  - name: coredns
-  - name: kube-proxy
-
-managedNodeGroups:
-  - name: small-on-demand
-    amiFamily: AmazonLinux2
-    instanceTypes: [ "t3.large" ]
-    minSize: 0
-    desiredCapacity: 2
-    maxSize: 6
-    privateNetworking: true
-    disableIMDSv1: true
-    volumeSize: 100
-    volumeType: gp3
-    volumeEncrypted: true
-    tags:
-      team: "eks"
-```
-
-Then run the following command to create your cluster. [Don't have eksctl? Install it now.](https://eksctl.io/installation/)
-
-```shell
-eksctl create cluster -f cluster-config.yaml
-```
+```bash
+curl ${ABSOLUTE_URL}/code-examples/aws-iam-eks/cluster-config.yaml | eksctl create cluster -f -
+```
+<details>
+<summary>Inspect eks-cluster.yaml contents</summary>
+
+```yaml
+{@include: ../../../../static/code-examples/aws-iam-eks/cluster-config.yaml}
+```
+</details>
 
 </details>
````
180 changes: 180 additions & 0 deletions docs/features/aws-iam/tutorials/aws-visibility.mdx
---
sidebar_position: 2
title: AWS resource mapping & IAM policy generation
image: /img/quick-tutorials/aws-iam-visibility/social.png
---


Many production Kubernetes workloads rely on cloud resources such as S3 buckets, RDS databases, and Lambda functions. This tutorial looks at how Otterize provides visibility into the AWS resources called by your workloads.

In this tutorial, we will:
* Set up an EKS cluster.
* Deploy two Lambda functions.
* Deploy a server pod that retrieves a joke (as in, a string containing a joke ;) from one Lambda, reviews it, and posts the review to the other Lambda.
* Automatically detect and view the Lambda function calls in Otterize.

By the end, you'll know how to map Kubernetes workloads alongside their dependent AWS resources using Otterize.

## Prerequisites

### CLI tools
We will need the following CLI tools to set up our cluster and deploy our scripts.

1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). You will also need credentials within the target account with permissions to work with EKS clusters, IAM, CloudFormation, and Lambda functions.
2. [eksctl](https://eksctl.io/installation/)
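
Before continuing, you can optionally verify both tools and your credentials (`aws sts get-caller-identity` prints the account and principal the CLI will use):

```bash
# Confirm the AWS CLI has working credentials for the target account
aws sts get-caller-identity

# Confirm eksctl is installed
eksctl version
```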

### Create an EKS cluster
Already have Otterize deployed with the IAM integration configured on your cluster? [Skip to the tutorial.](#tutorial)

Begin by creating an EKS cluster for pod deployment using **eksctl** with the YAML configuration below:
```bash
curl ${ABSOLUTE_URL}/code-examples/aws-visibility/eks-cluster.yaml | eksctl create cluster -f -
```
<details>
<summary>Inspect eks-cluster.yaml contents</summary>

```yaml
{@include: ../../../../static/code-examples/aws-visibility/eks-cluster.yaml}
```
</details>

Next, update **kubeconfig** to link it with the new cluster:
```bash
aws eks update-kubeconfig --name otterize-tutorial-aws-visibility --region 'us-west-2'
```
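
You can optionally confirm that kubectl now points at the new cluster by listing its worker nodes, which should report a `Ready` status:

```bash
kubectl get nodes
```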

### Enable AWS visibility with the Otterize installation
To provide visibility, we need to install Otterize in our cluster and enable AWS IAM Roles for Service Accounts (IRSA) on it. We can enable IRSA quickly using a CloudFormation template available from the Otterize Cloud Integrations page.

1. **Install Otterize**
If you don't have a connected Kubernetes cluster, connect one via the [Integrations page](https://app.otterize.com/integrations) and follow the setup instructions for Kubernetes. Skip this step if your cluster is already connected.

2. **Integrate AWS with Otterize Cloud**
To begin the integration with AWS, visit the [Integrations page](https://app.otterize.com/integrations). Once there, you will be asked for information to help populate a CloudFormation template we will use to set up roles and policies for the Otterize deployment in our cluster.

If you created the EKS cluster above, the cluster name is `otterize-tutorial-aws-visibility` and the region is `us-west-2`.

Once the information is provided, a *Launch CloudFormation* button will take you to the AWS Console to deploy the CloudFormation template. This template enables IRSA in your EKS cluster and allows Otterize Cloud to manage intents.

After IRSA is enabled in your cluster, redeploy Otterize with the AWS credentials operator and AWS visibility enabled. In Otterize Cloud, click the *Next* button to see the updated Helm commands. AWS visibility is not enabled by default, so before running the revised command, you will need to append an additional flag:

```bash
--set networkMapper.aws.visibility.enabled=true
```
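
For orientation, the resulting command has roughly the shape below. This is a sketch only: the release name, namespace, and chart reference are assumptions here, and Otterize Cloud generates cluster-specific flags (such as credentials) that you must keep from the command it shows you.

```bash
# Sketch only -- run the exact Helm command generated by Otterize Cloud,
# appending the AWS visibility flag at the end:
helm upgrade --install otterize otterize/otterize-kubernetes \
  --namespace otterize-system --create-namespace \
  --set networkMapper.aws.visibility.enabled=true
```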

## Tutorial

Having configured our environment, we'll deploy AWS resources, authorize pod access using ClientIntents, and monitor access in Otterize Cloud.

### Deploy two Lambda functions

First, we will deploy two Lambda functions, `DadJokeLambdaFunction` and `FeedbackLambdaFunction`. Together with our server pod, they generate a humor training dataset: the server pod receives a joke from `DadJokeLambdaFunction`, reviews it, and sends the feedback to `FeedbackLambdaFunction`.

We can deploy the Lambda functions and their required roles with the following command:
```bash
curl ${ABSOLUTE_URL}/code-examples/aws-visibility/cloudformation.yaml -o template.yaml && \
aws cloudformation deploy --template-file template.yaml --stack-name OtterizeTutorialJokeTrainingStack --capabilities CAPABILITY_IAM --region 'us-west-2'
```
<details>
<summary>Inspect CloudFormation YAML</summary>

```yaml
{@include: ../../../../static/code-examples/aws-visibility/cloudformation.yaml}
```
</details>
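
Optionally, confirm the stack finished deploying before moving on; a status of `CREATE_COMPLETE` indicates success:

```bash
aws cloudformation describe-stacks --region 'us-west-2' \
  --stack-name OtterizeTutorialJokeTrainingStack \
  --query "Stacks[0].StackStatus" --output text
```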

### Deploy a pod with access to the Lambda functions

Now that our Lambdas are deployed, we can deploy our server pod in the cluster and point it at the two Lambda functions. In the commands below, we create a ConfigMap holding the functions' ARNs and pass it into our deployment YAML.

```bash

kubectl create namespace otterize-tutorial-aws-visibility

DAD_JOKE_LAMBDA_ARN=$(aws cloudformation describe-stacks --region 'us-west-2' --stack-name OtterizeTutorialJokeTrainingStack --query "Stacks[0].Outputs[?OutputKey=='DadJokeLambdaFunction'].OutputValue" --output text)
FEEDBACK_LAMBDA_ARN=$(aws cloudformation describe-stacks --region 'us-west-2' --stack-name OtterizeTutorialJokeTrainingStack --query "Stacks[0].Outputs[?OutputKey=='FeedbackLambdaFunction'].OutputValue" --output text)

kubectl create configmap lambda-arns \
--from-literal=dadJokeLambdaArn=$DAD_JOKE_LAMBDA_ARN \
--from-literal=feedbackLambdaArn=$FEEDBACK_LAMBDA_ARN \
-n otterize-tutorial-aws-visibility

kubectl apply -n otterize-tutorial-aws-visibility -f ${ABSOLUTE_URL}/code-examples/aws-visibility/all.yaml
```
<details>
<summary>Inspect deployment YAML</summary>

```yaml
{@include: ../../../../static/code-examples/aws-visibility/all.yaml}
```
</details>

Inspecting our deployment YAML, you will see that we have added two labels to our pod. The first, `network-mapper.otterize.com/aws-visibility`, tells the network mapper to identify AWS API calls made by this pod; the second, `credentials-operator.otterize.com/create-aws-role`, tells the credentials operator to create an IAM role specifically for this pod, which our intents will use.
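
For reference, here is a minimal sketch of how those labels appear on the pod template (only the two labels are taken from this tutorial; the surrounding fields are as defined in `all.yaml`):

```yaml
# Fragment of the deployment's pod template metadata (see all.yaml for the full manifest):
metadata:
  labels:
    network-mapper.otterize.com/aws-visibility: "true"
    credentials-operator.otterize.com/create-aws-role: "true"
```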

Once the pod is deployed, we can inspect its logs and see that it cannot yet access the Lambda functions.

```bash
kubectl logs -f -n otterize-tutorial-aws-visibility deploy/joketrainer
```

Sample output:
```
invoke error, operation error Lambda: Invoke, https response error StatusCode: 403, RequestID: a3bab063-dfb0-49e3-b466-0069807c56fa, api error AccessDeniedException: User: arn:aws:sts::12345678910:assumed-role/otr-otterize-tutorial-aws-visibility.default@otterize-tut-ecfd9d/12345678910 is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-west-2:12345678910:function:OtterizeTutorialJokeTrainingStack-DadJokeLambda-dnNYqlwipYxG because no identity-based policy allows the lambda:InvokeFunction action
```

### Applying Intents

We can apply an intent that allows our pod to call the Lambda functions created by our CloudFormation stack.

```bash
kubectl apply -n otterize-tutorial-aws-visibility -f ${ABSOLUTE_URL}/code-examples/aws-visibility/intents.yaml
```

```yaml
{@include: ../../../../static/code-examples/aws-visibility/intents.yaml}
```

We can now recheck the logs to confirm that the pod can invoke the Lambda functions:
```bash
kubectl logs -f -n otterize-tutorial-aws-visibility deploy/joketrainer
```

Example output:
```
Joke: People saying 'boo! to their friends has risen by 85% in the last year.... That's a frightening statistic.
Sending Feedback of Funny?: Yes
Joke: Have you ever heard of a music group called Cellophane? They mostly wrap.
Sending Feedback of Funny?: Yes
Joke: What did Yoda say when he saw himself in 4K? "HDMI"
Sending Feedback of Funny?: No
```

### Visualize Relationships
The Otterize network mapper inspects pods carrying the `network-mapper.otterize.com/aws-visibility: true` label. For labeled pods, the network mapper identifies the AWS API calls each pod makes and determines which resources and actions are being used. This information is shown on the [Access graph](https://app.otterize.com/access-graph).

In the Access graph screenshot below, you'll see four AWS nodes associated with our *joketrainer* pod: *DadJokeLambdaFunction*, *FeedbackLambdaFunction*, the role assumed by our server pod, and our wildcard intent definition, which matches any Lambda created by our CloudFormation stack. Wildcard definitions like this are helpful for AWS resources with dynamic ARN names as you move across staging and production deployments, but they open up a permission space that may be overly broad for some environments. Otterize makes it easy to deploy with a wildcard definition first and then apply more stringent authorization later, without disrupting any services.
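
For illustration, a wildcard AWS target inside a ClientIntents resource looks roughly like the fragment below. This is a hedged sketch rather than the tutorial's actual definition (see `intents.yaml` above); the ARN pattern and field layout are assumptions chosen to match any Lambda created by the stack.

```yaml
# Hypothetical fragment: allow invoking any Lambda created by the tutorial stack.
calls:
  - name: arn:aws:lambda:us-west-2:*:function:OtterizeTutorialJokeTrainingStack-*
    type: aws
    awsActions:
      - "lambda:InvokeFunction"
```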

![Otterize Cloud AWS Visibility Example](/img/quick-tutorials/aws-iam-visibility/aws-iam-visibility.png)


### What's Next

Now that we've discovered the AWS resources used within a Kubernetes workload, you can learn how to manage access to those resources with Otterize in the [Automate AWS IAM for EKS](/features/aws-iam/tutorials/aws-iam-eks) tutorial.

## Cleanup

To remove the deployed example:
```bash
kubectl delete namespace otterize-tutorial-aws-visibility
```

To remove the Lambda functions:
```bash
aws cloudformation delete-stack --stack-name OtterizeTutorialJokeTrainingStack --region 'us-west-2'
```

To remove the EKS cluster:
```bash
eksctl delete cluster --name otterize-tutorial-aws-visibility --region us-west-2
```

````diff
@@ -23,57 +23,22 @@ This tutorial will walk you through deploying an AWS EKS cluster with the AWS VPC CNI
 
 Before you start, you'll need an AWS Kubernetes cluster. Having a cluster with a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is required for this tutorial.
 
-Save this `yaml` as `cluster-config.yaml`:
-
-```yaml
-apiVersion: eksctl.io/v1alpha5
-kind: ClusterConfig
-
-metadata:
-  name: np-ipv4-127
-  region: us-west-2
-  version: "1.27"
-
-iam:
-  withOIDC: true
-
-vpc:
-  clusterEndpoints:
-    publicAccess: true
-    privateAccess: true
-
-addons:
-  - name: vpc-cni
-    version: 1.14.0
-    attachPolicyARNs: #optional
-      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
-    configurationValues: |-
-      # highlight-next-line
-      enableNetworkPolicy: "true"
-  - name: coredns
-  - name: kube-proxy
-
-managedNodeGroups:
-  - name: small-on-demand
-    amiFamily: AmazonLinux2
-    instanceTypes: [ "t3.large" ]
-    minSize: 0
-    desiredCapacity: 2
-    maxSize: 6
-    privateNetworking: true
-    disableIMDSv1: true
-    volumeSize: 100
-    volumeType: gp3
-    volumeEncrypted: true
-    tags:
-      team: "eks"
-```
-
-Then run the following command to create your AWS cluster. [Don't have eksctl? Install it now.](https://eksctl.io/installation/)
-
-```shell
-eksctl create cluster -f cluster-config.yaml
-```
+Run the following command to create your AWS cluster. [Don't have eksctl? Install it now.](https://eksctl.io/installation/)
+
+```bash
+curl ${ABSOLUTE_URL}/code-examples/aws-eks-mini/cluster-config.yaml | eksctl create cluster -f -
+```
+<details>
+<summary>Inspect eks-cluster.yaml contents</summary>
+
+```yaml
+{@include: ../../../../static/code-examples/aws-eks-mini/cluster-config.yaml}
+```
+</details>
 
 Once your AWS EKS has finished deploying the control pane and node group, the next step is deploying Otterize as well as a couple of clients and a server to see how they are affected by network policies.
````
41 changes: 41 additions & 0 deletions static/code-examples/aws-eks-mini/cluster-config.yaml
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: np-ipv4-127
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      # highlight-next-line
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: small-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "t3.large" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```
40 changes: 40 additions & 0 deletions static/code-examples/aws-iam-eks/cluster-config.yaml
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: otterize-iam-eks-tutorial
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: small-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "t3.large" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```