Commit: Merge branch 'main' into sapir/k8s-kafka-mtls-tutorial-screenshots-update

Showing 50 changed files with 17,330 additions and 3,984 deletions.
```diff
@@ -20,3 +20,4 @@ yarn-debug.log*
 yarn-error.log*

+.idea
 package-lock.json
```
Use Helm to install the latest version of Otterize:

```shell
helm repo add otterize https://helm.otterize.com
helm repo update
helm install -n otterize-system --create-namespace \
  --set global.deployment.spire=true --set global.deployment.credentialsOperator=true \
  otterize otterize/otterize-kubernetes
```

You can add the `--wait` flag to have Helm wait for the deployment to complete and all pods to be `Ready`, or watch for all pods to become `Ready` yourself using `kubectl get pods -n otterize-system -w`.
docs/_common/install-otterize-no-netpols-with-kafka-watcher.md: this file was deleted (11 deletions).
---
sidebar_position: 8
title: Network policies on AWS EKS with the VPC CNI
---
import CodeBlock from "@theme/CodeBlock";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This tutorial will walk you through deploying an AWS EKS cluster with the AWS VPC CNI add-on, enabling the new built-in network policy support on EKS, and managing those policies with Otterize.

## Prerequisites

* An EKS cluster with the AWS VPC CNI add-on installed and the new built-in network policy support enabled. See [Installing the AWS VPC CNI add-on](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) for more information, or follow the instructions below.
* The [Otterize CLI](https://docs.otterize.com/installation#install-the-otterize-cli).
* The [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html).
* The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool.

## Step one: Create an AWS EKS cluster with the AWS VPC CNI plugin

Before you start, you'll need an AWS Kubernetes cluster. A cluster with a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is required for this tutorial.

Save this `yaml` as `cluster-config.yaml`:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: np-ipv4-127
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: # optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      # highlight-next-line
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
```
Then run the following command to create your AWS cluster:

```shell
eksctl create cluster -f cluster-config.yaml
```

Once AWS EKS has finished deploying the control plane and node group, the next step is deploying Otterize, as well as a couple of clients and a server, to see how they are affected by network policies.

## Step two: Install the Otterize agents

### Install Otterize on your cluster

You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forego browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>without</b> Otterize Cloud</summary>

{@include: ../_common/install-otterize.md}

</details>

Or choose to include browser visualization and:

<details>
<summary>Install Otterize in your cluster, <b>with</b> Otterize Cloud</summary>

#### Create an Otterize Cloud account

{@include: ../_common/create-account.md}

#### Install Otterize OSS, connected to Otterize Cloud

{@include: ../_common/install-otterize-from-cloud.md}

</details>

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

<details>
<summary>Install the Otterize CLI</summary>

{@include: ../_common/install-otterize-cli.md}

</details>

### Deploy a server and two clients

So that we have some pods to look at (and protect), install our simple demo app, which deploys one server and two clients:

```bash
kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
```

Once it is installed and running, your Otterize access graph should look something like this:

![Access Graph](/img/quick-tutorials/aws-eks-mini/access-graph.png)

## Step three: Create an intent

Now that you have Otterize installed, the next step is to create intents, which enable access to the server from its clients. If you enable protection on the server without declaring intents, the clients will be blocked.

```shell
otterize network-mapper export --server server.otterize-tutorial-eks | kubectl apply -f -
```

Running this command generates the following `ClientIntents` resources, one for each client connected to `server`, and applies them to your cluster. You could also place them in a Helm chart or apply them some other way, instead of piping them directly to kubectl.
```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client
  calls:
    - name: server
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client-other
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client-other
  calls:
    - name: server
```
At which point you should see that the `server` service is ready to be protected:

![One intent applied](/img/quick-tutorials/aws-eks-mini/one-intent.png)

You can then protect the `server` service by applying the following `yaml` file:

```yaml
{@include: ../../static/code-examples/aws-eks-mini/protect-server.yaml}
```
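
The included file is not rendered on this page, so as a rough guide: a resource of the kind it declares generally follows the shape of the sketch below, based on the Otterize CRDs used elsewhere in this tutorial. The resource name here is an illustrative assumption, not necessarily what the included file uses.

```yaml
# Sketch only: a ProtectedService resource marking the demo server as
# protected, so that traffic not covered by a declared intent is denied.
# The metadata.name is an assumption for illustration.
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: server-protection
  namespace: otterize-tutorial-eks
spec:
  name: server
```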

You can apply it directly from the docs site:

```bash
kubectl apply -f https://docs.otterize.com/code-examples/aws-eks-mini/protect-server.yaml
```

Or save the `yaml` above to a file called `protect-server.yaml` and run:

```shell
kubectl apply -f protect-server.yaml
```
You should then see your access graph showing the service as protected:

![Protected Service](/img/quick-tutorials/aws-eks-mini/protected.png)

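For intuition: protecting a service in this way amounts to pairing a default-deny ingress policy on the server's pods with allow policies derived from the declared intents. The sketch below shows only the default-deny half, and the `app: server` pod label is an assumption about the demo app; it is not the exact policy or labels the Otterize operator emits.

```yaml
# Illustrative default-deny ingress policy for the server's pods.
# Selecting pods with no ingress rules denies all inbound traffic;
# separate allow policies (generated from ClientIntents) then permit
# the declared clients. The app: server label is an assumption.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress-server
  namespace: otterize-tutorial-eks
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
    - Ingress
```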
## What's next

Have a look at the [guide on how to deploy protection to a larger, more complex application one step at a time](https://docs.otterize.com/guides/protect-1-service-network-policies).

## Teardown

To remove the deployed examples, run:

```bash
kubectl delete -f protect-server.yaml
otterize network-mapper export --server server.otterize-tutorial-eks | kubectl delete -f -
kubectl delete -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
helm uninstall otterize -n otterize-system
eksctl delete cluster -f cluster-config.yaml
```