
Commit

Remove clusters, update integrations docs
Yarden Refaeli committed Feb 15, 2024
1 parent 0ed2b68 commit 2df5aa4
Showing 13 changed files with 58 additions and 157 deletions.
Original file line number Diff line number Diff line change
@@ -1,11 +1,11 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:

1. Under `mTLS and Kafka support` choose `Otterize Cloud`.
2. Enable enforcement. The configuration tab should look like this:
![Cluster connection guide](/img/configure-cluster/connect-cluster-cloud-with-enforcement.png)

3. Copy the Helm command and <b>add</b> the following flag:
```
--set intentsOperator.operator.enableDatabaseReconciler=true
```
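For context, the flag added in this hunk gets appended to the install command copied from the Cloud UI. A minimal sketch, assuming typical release/namespace names (the authoritative command, with your credentials inlined, comes from the Cloud UI):

```shell
# Sketch only -- copy the real command from the Cloud UI; the release name,
# namespace, and credential values below are placeholders/assumptions.
helm repo add otterize https://helm.otterize.com
helm repo update
helm upgrade --install otterize otterize/otterize-kubernetes \
  --namespace otterize-system --create-namespace \
  --set global.otterizeCloud.credentials.clientId=<client-id> \
  --set global.otterizeCloud.credentials.clientSecret=<client-secret> \
  --set intentsOperator.operator.enableDatabaseReconciler=true
```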
@@ -1,8 +1,9 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide &rarr;" link and running the Helm commands shown there.
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then click the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
1. Follow the instructions to install Otterize <b>with enforcement on</b> (use the toggle to make `Enforcement mode: active`)
2. And <b>add</b> the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`

<details>
<summary>More details, if you're curious</summary>
@@ -13,4 +14,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster &mdash; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.

</details>
10 changes: 6 additions & 4 deletions docs/_common/install-otterize-from-cloud-with-enforcement.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide &rarr;" link and running the Helm commands shown there.
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then click the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
1. Follow the instructions to install Otterize <b>with enforcement on</b> (use the toggle to make `Enforcement mode: active`)

<details>
<summary>More details, if you're curious</summary>
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster &mdash; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.

</details>
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide &rarr;" link and running the Helm commands shown there.
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then click the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
1. Follow the instructions to install Otterize <b>with enforcement on</b> (use the toggle to make `Enforcement mode: active`)

<details>
<summary>More details, if you're curious</summary>
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster &mdash; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it created network policies to restrict pod-to-pod traffic, and created Kafka ACLs to control access to Kafka topics. While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.

</details>
14 changes: 8 additions & 6 deletions docs/_common/install-otterize-from-cloud-with-istiowatcher.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide &rarr;" link and running the Helm commands shown there.
-1. Follow the instructions to install OtterizeAnd <b>add</b> the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then click the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+1. Follow the instructions to install Otterize and <b>add</b> the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`

<details>
<summary>More details, if you're curious</summary>
@@ -11,7 +12,8 @@ Connecting your cluster simply entails installing Otterize OSS via Helm, using c
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell.
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster &mdash; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode,"
meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.).
Later in this tutorial, we'll turn on enforcement, but for now we'll leave it in shadow mode.

</details>
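The istiowatcher flag above is appended the same way as the other flags in this commit. A minimal sketch, assuming typical release/namespace names (the real command, with credentials inlined, comes from the Cloud UI):

```shell
# Sketch only: enable the Istio watcher on top of the command copied from the
# Cloud UI. Release name and namespace below are assumptions.
helm upgrade --install otterize otterize/otterize-kubernetes \
  --namespace otterize-system --create-namespace \
  --set networkMapper.istiowatcher.enable=true
```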
@@ -1,13 +1,13 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:

1. Under `mTLS and Kafka support` choose `cert-manager`.
2. Note that enforcement is disabled; we will enable it later. The configuration tab should look like this:
![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud-cert-manager.png)

3. Copy the Helm command and <b>add</b> the following flags:
```
--set intentsOperator.operator.enableNetworkPolicyCreation=false \
--set networkMapper.kafkawatcher.enable=true \
--set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}
```
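One practical note on the flags above: the brace syntax in the Kafka server list is easy for a shell to mangle. A hedged sketch of the combined command (release name and namespace are assumptions; copy the real base command from the Cloud UI):

```shell
# Sketch only: append these flags to the command copied from the Cloud UI.
# Quoting the kafkaServers value keeps the braces from being interpreted by
# the shell (brace expansion would split a multi-server list like {a,b}).
helm upgrade --install otterize otterize/otterize-kubernetes \
  --namespace otterize-system --create-namespace \
  --set intentsOperator.operator.enableNetworkPolicyCreation=false \
  --set networkMapper.kafkawatcher.enable=true \
  --set 'networkMapper.kafkawatcher.kafkaServers={kafka-0.kafka}'
```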
@@ -1,13 +1,13 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:

1. Under `mTLS and Kafka support` choose `Otterize Cloud`.
2. Note that enforcement is disabled; we will enable it later. The configuration tab should look like this:
![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud.png)

3. Copy the Helm command and <b>add</b> the following flags:
```
--set intentsOperator.operator.enableNetworkPolicyCreation=false \
--set networkMapper.kafkawatcher.enable=true \
--set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}
```
10 changes: 6 additions & 4 deletions docs/_common/install-otterize-from-cloud.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "Connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide &rarr;" link and running the Helm commands shown there.
-Choose `Enfocement mode: disabled` to apply shadow mode on every server until you're ready to protect it.
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then click the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+Choose `Enforcement mode: disabled` to apply shadow mode on every server until you're ready to protect it.

<details>
<summary>More details, if you're curious</summary>
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster &mdash; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what **would** happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.

</details>
2 changes: 1 addition & 1 deletion docs/guides/protect-1-service-network-policies.mdx
@@ -127,7 +127,7 @@ Go ahead and browse to the URL above to "shop" and get a feel for the demo's beh

## Seeing the access graph

-In the Otterize Cloud UI, your [cluster](https://app.otterize.com/clusters) should now show all 3 Otterize OSS operators &mdash; the network mapper, intents operator, and credentials operator &mdash; as connected, with a green status.
+In the Otterize Cloud UI, your [integration](https://app.otterize.com/integrations) should now show all 3 Otterize OSS operators &mdash; the network mapper, intents operator, and credentials operator &mdash; as connected, with a green status.

<img src="/img/guides/protect-1-service-network-policies/otterize-oss-connected.png" alt="Access graph - Otterize OSS connected" width="600"/>

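To cross-check the green status from the cluster side, one option is to list the operator pods directly (the namespace name is an assumption; it depends on where the chart was installed, and the Cloud UI remains the authoritative check):

```shell
# List the Otterize components; the network mapper, intents operator, and
# credentials operator pods should all be Running/Ready.
kubectl get pods -n otterize-system
```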
4 changes: 2 additions & 2 deletions docs/installation/README.mdx
@@ -24,9 +24,9 @@ Before you start, you need to have a Kubernetes cluster with a [CNI](https://kub
{@include: ../_common/upgrade-otterize.md}

## Connect Otterize OSS to Otterize Cloud, or install Otterize with Otterize Cloud
-To connect Otterize OSS to Otterize Cloud you will need to [login](https://app.otterize.com), create a cluster, and follow the instructions.
+To connect Otterize OSS to Otterize Cloud you will need to [login](https://app.otterize.com), go to [integrations](https://app.otterize.com/integrations), create a Kubernetes integration, and follow the instructions.

-In a nutshell, you need to `helm upgrade` the same Helm chart, but provide Otterize Cloud credentials. Upon creating a cluster, a guide will appear that walks you through doing this with the new credentials jut created.
+In a nutshell, you need to `helm upgrade` the same Helm chart, but provide Otterize Cloud credentials. Upon creating a Kubernetes integration, a guide will appear that walks you through doing this with the new credentials just created.

## Install just the Otterize network mapper
{@include: ../_common/install-otterize-network-mapper.md}
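The "`helm upgrade` the same chart with Cloud credentials" step described above might look like this sketch (release name, namespace, and value keys are assumptions; the guide in the Cloud UI shows the exact command with your credentials already inlined):

```shell
# Sketch only: re-run the install as an upgrade, keeping existing values and
# adding the Cloud credentials. Placeholders must be replaced with real values.
helm upgrade otterize otterize/otterize-kubernetes \
  --namespace otterize-system --reuse-values \
  --set global.otterizeCloud.credentials.clientId=<client-id> \
  --set global.otterizeCloud.credentials.clientSecret=<client-secret>
```

`--reuse-values` keeps whatever configuration the original install set, so only the credentials change.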
18 changes: 7 additions & 11 deletions docs/otterize-cloud/object-model.mdx
@@ -25,20 +25,16 @@ In Otterize Cloud, services are _inferred_ from the intents reported to the Clou
A service name is unique within a namespace in a cluster, but not in general unique across the cluster or across clusters.

{@include: _environments_and_namespaces.mdx}
-## Clusters
-
-When a Kubernetes cluster is connected to Otterize Cloud, it is represented in the Cloud by a **cluster** object. You'll name it when you add it in the UI or through the API/CLI, or when you create the integration directly (in the UI or API/CLI).
-
-The Otterize operators -- intents operator, network mapper, and/or credentials operator -- running in your cluster will inform the Cloud about the intents, services, and credentials within this cluster, and will also convey their configuration (e.g. shadow or enforcement mode) within this cluster. Thus the cluster object in Otterize Cloud contains a lot of useful information about your Kubernetes cluster -- information used to deliver insights when you view your cluster through the lens of the access graph.
-
-Note that, while a cluster and its namespaces and services could be in a single environment, and an environment could contain multiple clusters, many other combinations are possible. For example, a cluster could contain namespaces in multiple environments. Or, environments may contain some namespaces in one cluster and other namespaces in another cluster. Use whatever mappings make sense for your situation.
-
-Cluster names must be unique within an organization.
-
## Integrations

Otterize Cloud currently supports two types of integrations: **Kubernetes integrations** and **generic integrations**. In the future, many other types of integrations will be added, allowing Otterize Cloud to work seamlessly with all your infrastructures and systems.

-A Kubernetes integration is used to connect a Kubernetes cluster with Otterize Cloud via any or all of the Otterize operators: the intents operator, the network mapper, and the credentials operator. When a Kubernetes-type integration is created, it is always linked to an Otterize Cloud cluster object. It contains the credentials needed by the operators running in the Kubernetes cluster to communicate with the Cloud on behalf of that cluster, i.e., it ties together the physical Kubernetes cluster with its representation in Otterize Cloud. The integration also determines the environment to which namespaces in that cluster will be associated by default. The name of a Kubernetes integration is derived from the name of the cluster; since cluster names are unique per organization, so are Kubernetes-type integration names.
+A Kubernetes integration is used to connect a Kubernetes cluster with Otterize Cloud via any or all of the Otterize operators: the intents operator, the network mapper, and the credentials operator. It contains the credentials needed by the operators running in the Kubernetes cluster to communicate with the Cloud on behalf of that cluster, i.e., it ties together the physical Kubernetes cluster with its representation in Otterize Cloud. The integration also determines the environment to which namespaces in that cluster will be associated by default. The names of Kubernetes-type integrations must be unique within an organization.

A generic integration is used to connect generically an external system to Otterize Cloud. It provides that system credentials to access the Otterize API/CLI, in a way that doesn't involve any specific Otterize user. That makes it ideal for building automations on top of the Otterize API. For example, new clusters provisioned for the development team could be automatically connected to Otterize Cloud, or a CI/CD system could automatically look in the access graph for services that would be blocked or intents that were not declared and applied and fail the build. The name of the integration should reflect the way it will be used. The names of generic-type integrations must be unique within an organization.

+When a Kubernetes cluster is connected to Otterize Cloud, it is represented in the Cloud by a **Kubernetes integration** object. You'll name it when you add a Kubernetes integration in the UI or through the API/CLI.
+
+The Otterize operators -- intents operator, network mapper, and/or credentials operator -- running in your cluster will inform the Cloud about the intents, services, and credentials within this cluster, and will also convey their configuration (e.g. shadow or enforcement mode) within this cluster.
+
+Note that, while a cluster and its namespaces and services could be in a single environment, and an environment could contain multiple clusters, many other combinations are possible. For example, a cluster could contain namespaces in multiple environments. Or, environments may contain some namespaces in one cluster and other namespaces in another cluster. Use whatever mappings make sense for your situation.
