AWS EKS CNI Mini tutorial #128

Merged: 18 commits, Aug 30, 2023
1 change: 0 additions & 1 deletion docs/_common/cluster-setup.md
@@ -133,7 +133,6 @@ managedNodeGroups:
```

For guides that deploy the larger set of services, Kafka and ZooKeeper are also deployed, and you will also need the EBS CSI driver to accommodate their storage needs. [Follow the AWS guide for the EBS CSI add-on to do so.](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
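
For example, you can add the driver as an EKS add-on with `eksctl`. This is a minimal sketch; the cluster name, account ID, and IAM role name are placeholders, and the role must be created first as described in the linked AWS guide:

```shell
eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name> \
  --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-driver-role> --force
```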

If you're not using the VPC CNI, you can set up the Calico network policy controller using the following instructions:

<a href="https://docs.aws.amazon.com/eks/latest/userguide/calico.html">Visit the official documentation</a>, or follow the instructions below:
169 changes: 169 additions & 0 deletions docs/quick-tutorials/aws-eks-cni-mini.mdx
@@ -0,0 +1,169 @@
---
sidebar_position: 8
title: Network policies on AWS EKS with the VPC CNI
---
import CodeBlock from "@theme/CodeBlock";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This tutorial will walk you through deploying an AWS EKS cluster with the AWS VPC CNI add-on, enabling its new built-in network policy support, and managing those policies with Otterize.

## Prerequisites

* An EKS cluster with the AWS VPC CNI add-on installed and with the new built-in network policy support enabled. See [Installing the AWS VPC CNI add-on](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) for more information, or follow the instructions below.
* An Otterize account. See [Getting Started](https://docs.otterize.com/getting-started) for more information.
* The [Otterize CLI](https://docs.otterize.com/cli/installation).
* The [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html).
* The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool.
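
Before moving on, you can quickly confirm that the tools are installed and on your `PATH` (a minimal sanity check; version output will vary):

```shell
aws --version
eksctl version
kubectl version --client
otterize version
```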

## Step one: Create an AWS EKS cluster with the AWS VPC CNI plugin

Before you start, you'll need an AWS Kubernetes cluster. Having a cluster with a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is required for this tutorial.

Save this `yaml` as `cluster-config.yaml`:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: np-ipv4-127
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      # highlight-next-line
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```

Then run the following command to create your AWS cluster:

```shell
eksctl create cluster -f cluster-config.yaml
```
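
Cluster creation takes several minutes. Once `eksctl` finishes, you can optionally confirm that the nodes have joined and that the VPC CNI's network policy agent is running. This is a quick sketch; the exact agent container name depends on the VPC CNI version, but it typically appears alongside the CNI container in the `aws-node` DaemonSet:

```shell
kubectl get nodes
# List the containers in the aws-node DaemonSet; with network policy support
# enabled, a node agent container should be listed next to the CNI container.
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```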

Once AWS EKS has finished deploying the control plane and node group, the next step is to deploy Otterize, along with a server and a couple of clients, to see how they are affected by network policies.

## Step two: Install the Otterize agents

### Install Otterize on your cluster

You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forgo browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>without</b> Otterize Cloud</summary>

{@include: ../_common/install-otterize.md}

</details>

Or choose to include browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>with</b> Otterize Cloud</summary>

#### Create an Otterize Cloud account

{@include: ../_common/create-account.md}

#### Install Otterize OSS, connected to Otterize Cloud

{@include: ../_common/install-otterize-from-cloud.md}

</details>

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

<details>
<summary>Install the Otterize CLI</summary>

{@include: ../_common/install-otterize-cli.md}

</details>

### Deploy a server and two clients

So that you have some pods to look at (and protect), install the simple client-and-server demo app, which deploys one server and two clients.

```bash
kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
```
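
Before continuing, you can verify that the demo pods are running (the demo deploys into the `otterize-tutorial-npol` namespace, as referenced later in this tutorial):

```shell
kubectl get pods -n otterize-tutorial-npol
```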

Once you have that installed and running, your Otterize access graph should look something like this:

![Access Graph](/img/quick-tutorials/aws-eks-mini/access-graph.png)

## Step three: Create an intent

Now that you have Otterize installed, the next step is to create an intent, which will enable access to the server from the client. If you enable protection on the server without declaring an intent, the client will be blocked.

```shell
otterize network-mapper export --server server.otterize-tutorial-npol | kubectl apply -f -
```
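
If you'd like to review the generated intents before applying them, you can run the export on its own; it prints `ClientIntents` resources, one per client the network mapper has observed calling the server:

```shell
otterize network-mapper export --server server.otterize-tutorial-npol
```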

You should then see that the `server` service is ready to be protected:

![One intent applied](/img/quick-tutorials/aws-eks-mini/one-intent.png)

You can then protect the `server` service by applying a `ProtectedService` resource. Apply it directly with the following command:

```bash
kubectl apply -f https://docs.otterize.com/code-examples/aws-eks-mini/protect-server.yaml
```

Alternatively, save the same resource to a local file called `protect-server.yaml` and run:

```shell
kubectl apply -f protect-server.yaml
```
And you should see your access graph showing the service as protected:

![Protected Service](/img/quick-tutorials/aws-eks-mini/protected.png)
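
You can also confirm the result on the cluster itself by listing the resources involved. This is a sketch; exact resource names can vary between Otterize versions:

```shell
# The ProtectedService resource you just applied:
kubectl get protectedservices -n otterize-tutorial-npol
# The network policies Otterize generated to enforce the declared intents:
kubectl get networkpolicies -n otterize-tutorial-npol
```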

## What's next

Have a look at the [guide](https://docs.otterize.com/guides/protect-1-service-network-policies) on how to deploy protection to a larger, more complex application, one step at a time.

## Teardown

To remove the deployed examples, run:
```bash
kubectl delete -f protect-server.yaml
otterize network-mapper export --server server.otterize-tutorial-npol | kubectl delete -f -
kubectl delete -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
helm uninstall otterize -n otterize-system
eksctl delete cluster -f cluster-config.yaml
```
8 changes: 8 additions & 0 deletions static/code-examples/aws-eks-mini/protect-server.yaml
@@ -0,0 +1,8 @@
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: server
  namespace: otterize-tutorial-npol

spec:
  name: server