Merge branch 'main' into sapir/k8s-kafka-mtls-tutorial-screenshots-update
sapirwo committed Sep 3, 2023
2 parents 3153a28 + 739ec0e commit 67b6fe9
Showing 50 changed files with 17,330 additions and 3,984 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -20,3 +20,4 @@ yarn-debug.log*
yarn-error.log*

.idea
package-lock.json
61 changes: 61 additions & 0 deletions docs/_common/cluster-setup.md
@@ -74,6 +74,67 @@ gcloud container clusters update CLUSTER_NAME --enable-network-policy
</Tabs>
</TabItem>
<TabItem value="eks" label="AWS EKS">

Starting August 29, 2023, [you can configure the built-in VPC CNI add-on to enable network policy support](https://aws.amazon.com/blogs/containers/amazon-vpc-cni-now-supports-kubernetes-network-policies).

To spin up a new cluster, save the following `eksctl` `ClusterConfig` to a file called `cluster.yaml`, then run `eksctl create cluster -f cluster.yaml`. This creates a cluster called `network-policy-demo` in `us-west-2`.

The important part is the configuration for the VPC CNI add-on:

```yaml
configurationValues: |-
  # highlight-next-line
  enableNetworkPolicy: "true"
```
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: network-policy-demo
  version: "1.27"
  region: us-west-2

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      # highlight-next-line
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```
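If you already have a cluster running the VPC CNI add-on, you may be able to enable network policy support in place instead of creating a new cluster. Here's a minimal sketch using the AWS CLI; the version requirement is an assumption, so check what your cluster actually runs first:

```shell
# Sketch: enable network policy support on an existing cluster's VPC CNI add-on.
# Assumes the vpc-cni add-on is installed and at version 1.14.0 or later.
aws eks update-addon \
  --cluster-name network-policy-demo \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}'
```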
Guides that deploy the larger set of services also deploy Kafka and ZooKeeper, so you will need the EBS CSI driver to provide their storage. [Follow the AWS guide for the EBS CSI add-on to set it up.](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
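As a rough sketch of what that setup looks like with `eksctl` (the role name and account ID placeholder below are illustrative; the linked AWS guide is authoritative):

```shell
# Sketch: create an IAM role for the EBS CSI controller, then install the add-on.
# Assumes the cluster was created with OIDC enabled (withOIDC: true above).
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster network-policy-demo \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --approve

eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster network-policy-demo \
  --service-account-role-arn arn:aws:iam::<ACCOUNT_ID>:role/AmazonEKS_EBS_CSI_DriverRole
```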
If you're not using the VPC CNI, you can instead set up the Calico network policy controller: see the [official documentation](https://docs.aws.amazon.com/eks/latest/userguide/calico.html), or follow the instructions below:
1. Spin up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) using the console, AWS CLI or `eksctl`.
@@ -4,7 +4,7 @@ If no Kubernetes clusters are connected to your account, click the "connect your cluster" button
1. Follow the instructions to install Otterize <b>with enforcement on</b> (not in shadow mode) for this tutorial. In other words, <b>omit</b> the following flag in the Helm command: `--set intentsOperator.operator.mode=defaultShadow`
2. <b>Add</b> the following flags to the Helm command:
```
--set intentsOperator.operator.enableNetworkPolicyCreation=false \
--set networkMapper.kafkawatcher.enable=true \
--set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}
```
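Once installed, you can sanity-check that the Kafka watcher is running; a sketch, where the label selector is an assumption to adjust against what `kubectl get pods` actually shows:

```shell
# Sketch: confirm the network mapper's Kafka watcher came up.
kubectl get pods -n otterize-system
# The label below is an assumption; use the pod name from the listing if it differs.
kubectl logs -n otterize-system -l app=otterize-kafka-watcher --tail=20
```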
4 changes: 3 additions & 1 deletion docs/_common/install-otterize-kafka.md
@@ -3,7 +3,9 @@ Use Helm to install the latest version of Otterize:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install -n otterize-system --create-namespace \
-  --set intentsOperator.operator.enableNetworkPolicyCreation=false otterize otterize/otterize-kubernetes
+  --set intentsOperator.operator.mode=defaultShadow --set intentsOperator.operator.enableNetworkPolicyCreation=false \
+  --set global.deployment.spire=true --set global.deployment.credentialsOperator=true \
+  otterize otterize/otterize-kubernetes
```

You can add the `--wait` flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be `Ready` using `kubectl get pods -n otterize-system -w`.
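For example, a sketch of the same install that blocks until all pods are ready (the timeout value is arbitrary):

```shell
helm install -n otterize-system --create-namespace \
  --set intentsOperator.operator.mode=defaultShadow --set intentsOperator.operator.enableNetworkPolicyCreation=false \
  --set global.deployment.spire=true --set global.deployment.credentialsOperator=true \
  --wait --timeout 10m \
  otterize otterize/otterize-kubernetes
```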
10 changes: 10 additions & 0 deletions docs/_common/install-otterize-mtls.md
@@ -0,0 +1,10 @@
Use Helm to install the latest version of Otterize:
```shell
helm repo add otterize https://helm.otterize.com
helm repo update
helm install -n otterize-system --create-namespace \
--set global.deployment.spire=true --set global.deployment.credentialsOperator=true \
otterize otterize/otterize-kubernetes
```

You can add the `--wait` flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be `Ready` using `kubectl get pods -n otterize-system -w`.
3 changes: 1 addition & 2 deletions docs/_common/install-otterize-network-policies.md
@@ -2,7 +2,6 @@ Use Helm to install the latest version of Otterize:
```shell
helm repo add otterize https://helm.otterize.com
helm repo update
-helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace --set deployment.spire=false \
-  --set deployment.credentialsOperator=false --set intentsOperator.operator.autoGenerateTLSUsingCredentialsOperator=false
+helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
```
You can add the `--wait` flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be `Ready` using `kubectl get pods -n otterize-system -w`.
11 changes: 0 additions & 11 deletions docs/_common/install-otterize-no-netpols-with-kafka-watcher.md

This file was deleted.

197 changes: 197 additions & 0 deletions docs/quick-tutorials/aws-eks-cni-mini.mdx
@@ -0,0 +1,197 @@
---
sidebar_position: 8
title: Network policies on AWS EKS with the VPC CNI
---
import CodeBlock from "@theme/CodeBlock";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This tutorial will walk you through deploying an AWS EKS cluster with the AWS VPC CNI add-on, enabling the add-on's new built-in network policy support, and managing those network policies with Otterize.

## Prerequisites

* An EKS cluster with the AWS VPC CNI add-on installed and with the new built-in network policy support enabled. See [Installing the AWS VPC CNI add-on](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) for more information, or follow the instructions below.
* The [Otterize CLI](https://docs.otterize.com/installation#install-the-otterize-cli).
* The [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html).
* The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool.

## Step one: Create an AWS EKS cluster with the AWS VPC CNI plugin

Before you start, you'll need an AWS EKS cluster with a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports Kubernetes [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).

Save the following YAML as `cluster-config.yaml`:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: np-ipv4-127
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      # highlight-next-line
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```
Then run the following command to create your AWS cluster:
```shell
eksctl create cluster -f cluster-config.yaml
```

Once your AWS EKS cluster has finished deploying the control plane and node group, the next step is to deploy Otterize, along with a couple of clients and a server, to see how they are affected by network policies.
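Before moving on, you can verify that network policy support was actually enabled. A sketch; the container name below matches AWS's documentation at the time of writing, but treat it as an assumption:

```shell
# Sketch: the node agent that enforces network policies runs as an extra
# container in the aws-node DaemonSet.
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
# Expected output includes: aws-node aws-network-policy-agent
```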

## Step two: Install the Otterize agents

### Install Otterize on your cluster

You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forego browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>without</b> Otterize Cloud</summary>

{@include: ../_common/install-otterize.md}

</details>

Or choose to include browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>with</b> Otterize Cloud</summary>

#### Create an Otterize Cloud account

{@include: ../_common/create-account.md}

#### Install Otterize OSS, connected to Otterize Cloud

{@include: ../_common/install-otterize-from-cloud.md}

</details>

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

<details>
<summary>Install the Otterize CLI</summary>

{@include: ../_common/install-otterize-cli.md}

</details>

### Deploy a server and two clients

So that we have some pods to look at (and protect), install our simple clients-and-server demo app, which deploys one server and two clients.

```bash
kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
```
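If you want to see the traffic that feeds the access graph, a quick sketch (the `client` deployment name is an assumption based on the intents shown below; use the pod names from the listing if it differs):

```shell
# Sketch: list the demo pods and tail one client's calls to the server.
kubectl get pods -n otterize-tutorial-eks
kubectl logs -n otterize-tutorial-eks deploy/client --tail=10 -f
```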

Once you have that installed and running, your Otterize access graph should look something like this:

![Access Graph](/img/quick-tutorials/aws-eks-mini/access-graph.png)

## Step three: Create an intent

Now that you have Otterize installed, the next step is to create an intent which will enable access to the server from the client. If you enable protection on the server without declaring an intent, the client will be blocked.

```shell
otterize network-mapper export --server server.otterize-tutorial-eks | kubectl apply -f -
```

Running this command generates the following `ClientIntents` resources, one for each client connected to `server`, and applies them to your cluster. You could also place them in a Helm chart or apply them some other way, instead of piping them directly to kubectl.
```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client
  calls:
    - name: server
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client-other
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client-other
  calls:
    - name: server
```
At this point you should see that the `server` service is ready to be protected:

![One intent applied](/img/quick-tutorials/aws-eks-mini/one-intent.png)
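You can also confirm this from the cluster side by listing the applied intents and any network policies Otterize generated from them (policy names are generated, so the exact output shape will vary):

```shell
# Sketch: the ClientIntents resources you just applied ...
kubectl get clientintents -n otterize-tutorial-eks
# ... and the network policies created from them.
kubectl get networkpolicies -n otterize-tutorial-eks
```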

You can then protect the `server` service by applying the following resource:

```yaml
{@include: ../../static/code-examples/aws-eks-mini/protect-server.yaml}
```

Save it to a file called `protect-server.yaml` and apply it:

```shell
kubectl apply -f protect-server.yaml
```

Or apply it directly from the docs site:

```bash
kubectl apply -f https://docs.otterize.com/code-examples/aws-eks-mini/protect-server.yaml
```

You should then see your access graph showing the service as protected:

![Protected Service](/img/quick-tutorials/aws-eks-mini/protected.png)
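To verify enforcement directly, one option is to call the server from an ad-hoc pod that has no declared intent; with the server protected, the connection should time out. A sketch, where port 80 is an assumption to replace with the port shown by `kubectl get svc`:

```shell
# Find the server service's actual port first.
kubectl get svc -n otterize-tutorial-eks
# Sketch: an ad-hoc pod with no ClientIntents should now be blocked.
kubectl run tmp-probe --rm -it --restart=Never --image=busybox \
  -n otterize-tutorial-eks -- wget -qO- -T 3 http://server:80
```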

## What's next

Have a look at the [guide on how to deploy protection to a larger, more complex application one step at a time](https://docs.otterize.com/guides/protect-1-service-network-policies).

## Teardown

To remove the deployed examples, run:
```bash
kubectl delete -f protect-server.yaml
otterize network-mapper export --server server.otterize-tutorial-eks | kubectl delete -f -
kubectl delete -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
helm uninstall otterize -n otterize-system
eksctl delete cluster -f cluster-config.yaml
```