diff --git a/docs/_common/install-otterize-from-cloud-with-enforcement-postgresql.md b/docs/_common/install-otterize-from-cloud-with-enforcement-postgresql.md
index bf059cc10..6d358afc3 100644
--- a/docs/_common/install-otterize-from-cloud-with-enforcement-postgresql.md
+++ b/docs/_common/install-otterize-from-cloud-with-enforcement-postgresql.md
@@ -1,11 +1,11 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:
1. Under `mTLS and Kafka support` choose `Otterize Cloud`.
2. Enable enforcement. The configuration tab should look like this:
-![Cluster connection guide](/img/configure-cluster/connect-cluster-cloud-with-enforcement.png)
+ ![Cluster connection guide](/img/configure-cluster/connect-cluster-cloud-with-enforcement.png)
3. Copy the Helm command and add the following flag:
```
--set intentsOperator.operator.enableDatabaseReconciler=true
- ```
\ No newline at end of file
+ ```
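+
+For example, the final command might look like the following. This is only a sketch, assuming the standard `otterize/otterize-kubernetes` chart: the actual command shown in the Cloud UI includes your own credentials and may use different flag names for them.
+
+```
+helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
+  --set global.otterizeCloud.credentials.clientId=<your-client-id> \
+  --set global.otterizeCloud.credentials.clientSecret=<your-client-secret> \
+  --set intentsOperator.operator.enableDatabaseReconciler=true
+```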
diff --git a/docs/_common/install-otterize-from-cloud-with-enforcement-with-istiowatcher.md b/docs/_common/install-otterize-from-cloud-with-enforcement-with-istiowatcher.md
index 9e65d1d50..bdac9cf58 100644
--- a/docs/_common/install-otterize-from-cloud-with-enforcement-with-istiowatcher.md
+++ b/docs/_common/install-otterize-from-cloud-with-enforcement-with-istiowatcher.md
@@ -1,8 +1,9 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
- 2. And add the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+ 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
+ 2. Add the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`
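+
+For example, assuming the standard `otterize/otterize-kubernetes` chart, the end of the Helm command would then look something like this (a sketch only; the real command from the Cloud UI also carries your credentials and enforcement flags):
+
+```
+helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system \
+  ...the flags from the connection guide... \
+  --set networkMapper.istiowatcher.enable=true
+```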
More details, if you're curious
@@ -13,4 +14,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
+
diff --git a/docs/_common/install-otterize-from-cloud-with-enforcement.md b/docs/_common/install-otterize-from-cloud-with-enforcement.md
index a35ba79f4..66f46f19e 100644
--- a/docs/_common/install-otterize-from-cloud-with-enforcement.md
+++ b/docs/_common/install-otterize-from-cloud-with-enforcement.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+ 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
More details, if you're curious
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
+
diff --git a/docs/_common/install-otterize-from-cloud-with-istio-enforcement.md b/docs/_common/install-otterize-from-cloud-with-istio-enforcement.md
index f7ec08751..99215b6a2 100644
--- a/docs/_common/install-otterize-from-cloud-with-istio-enforcement.md
+++ b/docs/_common/install-otterize-from-cloud-with-istio-enforcement.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+ 1. Follow the instructions to install Otterize with enforcement on (use the toggle to make `Enforcement mode: active`)
More details, if you're curious
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what **would** happen if it created network policies to restrict pod-to-pod traffic, and created Kafka ACLs to control access to Kafka topics. While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
+
diff --git a/docs/_common/install-otterize-from-cloud-with-istiowatcher.md b/docs/_common/install-otterize-from-cloud-with-istiowatcher.md
index 004c2d667..a6f5f5fda 100644
--- a/docs/_common/install-otterize-from-cloud-with-istiowatcher.md
+++ b/docs/_common/install-otterize-from-cloud-with-istiowatcher.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- 1. Follow the instructions to install OtterizeAnd add the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+ 1. Follow the instructions to install Otterize and add the following flag to the Helm command: `--set networkMapper.istiowatcher.enable=true`
More details, if you're curious
@@ -11,7 +12,8 @@ Connecting your cluster simply entails installing Otterize OSS via Helm, using c
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell.
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
-The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode,"
-meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.).
+The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode,"
+meaning that it will show you what **would** happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.).
Later in this tutorial, we'll turn on enforcement, but for now we'll leave it in shadow mode.
+
diff --git a/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher-and-cert-manager.md b/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher-and-cert-manager.md
index 094d15dae..4a41c82cf 100644
--- a/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher-and-cert-manager.md
+++ b/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher-and-cert-manager.md
@@ -1,13 +1,13 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:
1. Under `mTLS and Kafka support` choose `cert-manager`.
2. Note that enforcement is disabled, we will enable it later. The configuration tab should look like this:
-![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud-cert-manager.png)
+ ![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud-cert-manager.png)
3. Copy the Helm command and add the following flags:
```
--set intentsOperator.operator.enableNetworkPolicyCreation=false \
--set networkMapper.kafkawatcher.enable=true \
--set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}
- ```
\ No newline at end of file
+ ```
diff --git a/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher.md b/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher.md
index 08cf24631..43d1e9e68 100644
--- a/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher.md
+++ b/docs/_common/install-otterize-from-cloud-with-shadow-mode-and-kafka-watcher.md
@@ -1,13 +1,13 @@
-Head over to the [Clusters page](https://app.otterize.com/clusters) and create a cluster.
+Head over to the [Integrations page](https://app.otterize.com/integrations) and create a Kubernetes integration.
Follow the connection guide that opens to connect your cluster, and make the following changes:
1. Under `mTLS and Kafka support` choose `Otterize Cloud`.
2. Note that enforcement is disabled, we will enable it later. The configuration tab should look like this:
-![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud.png)
+ ![Cluster connection guide](/img/configure-cluster/connect-cluster-kafka-mtls-with-otterize-cloud.png)
3. Copy the Helm command and add the following flags:
```
--set intentsOperator.operator.enableNetworkPolicyCreation=false \
--set networkMapper.kafkawatcher.enable=true \
--set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}
- ```
\ No newline at end of file
+ ```
diff --git a/docs/_common/install-otterize-from-cloud.md b/docs/_common/install-otterize-from-cloud.md
index e850fcfeb..3e9b59cc1 100644
--- a/docs/_common/install-otterize-from-cloud.md
+++ b/docs/_common/install-otterize-from-cloud.md
@@ -1,7 +1,8 @@
-If no Kubernetes clusters are connected to your account, click the "Connect your cluster" button to:
-1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
-2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
-Choose `Enfocement mode: disabled` to apply shadow mode on every server until you're ready to protect it.
+If no Kubernetes clusters are connected to your account, click the "Create integration" button and then the "Add integration" button to:
+
+1. Create a Kubernetes integration, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
+2. Connect it with your actual Kubernetes cluster, by running the Helm commands shown on the screen after creating the integration.
+ Choose `Enforcement mode: disabled` to apply shadow mode on every server until you're ready to protect it.
More details, if you're curious
@@ -12,4 +13,5 @@ The credentials will already be inlined into the Helm command shown in the Cloud
If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what **would** happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.
+
diff --git a/docs/guides/protect-1-service-network-policies.mdx b/docs/guides/protect-1-service-network-policies.mdx
index 13b485874..b0060cb4e 100644
--- a/docs/guides/protect-1-service-network-policies.mdx
+++ b/docs/guides/protect-1-service-network-policies.mdx
@@ -127,7 +127,7 @@ Go ahead and browse to the URL above to "shop" and get a feel for the demo's beh
## Seeing the access graph
-In the Otterize Cloud UI, your [cluster](https://app.otterize.com/clusters) should now show all 3 Otterize OSS operators — the network mapper, intents operator, and credentials operator — as connected, with a green status.
+In the Otterize Cloud UI, your [integration](https://app.otterize.com/integrations) should now show all 3 Otterize OSS operators — the network mapper, intents operator, and credentials operator — as connected, with a green status.
diff --git a/docs/installation/README.mdx b/docs/installation/README.mdx
index 102d628a5..61dcb078a 100644
--- a/docs/installation/README.mdx
+++ b/docs/installation/README.mdx
@@ -24,9 +24,9 @@ Before you start, you need to have a Kubernetes cluster with a [CNI](https://kub
{@include: ../_common/upgrade-otterize.md}
## Connect Otterize OSS to Otterize Cloud, or install Otterize with Otterize Cloud
-To connect Otterize OSS to Otterize Cloud you will need to [login](https://app.otterize.com), create a cluster, and follow the instructions.
+To connect Otterize OSS to Otterize Cloud you will need to [login](https://app.otterize.com), go to [integrations](https://app.otterize.com/integrations), create a Kubernetes integration, and follow the instructions.
-In a nutshell, you need to `helm upgrade` the same Helm chart, but provide Otterize Cloud credentials. Upon creating a cluster, a guide will appear that walks you through doing this with the new credentials jut created.
+In a nutshell, you need to `helm upgrade` the same Helm chart, but provide Otterize Cloud credentials. Upon creating a Kubernetes integration, a guide will appear that walks you through doing this with the new credentials just created.
## Install just the Otterize network mapper
{@include: ../_common/install-otterize-network-mapper.md}
diff --git a/docs/otterize-cloud/object-model.mdx b/docs/otterize-cloud/object-model.mdx
index f9dad7fab..c002d5d31 100644
--- a/docs/otterize-cloud/object-model.mdx
+++ b/docs/otterize-cloud/object-model.mdx
@@ -25,20 +25,16 @@ In Otterize Cloud, services are _inferred_ from the intents reported to the Clou
A service name is unique within a namespace in a cluster, but not in general unique across the cluster or across clusters.
{@include: _environments_and_namespaces.mdx}
-## Clusters
-
-When a Kubernetes cluster is connected to Otterize Cloud, it is represented in the Cloud by a **cluster** object. You'll name it when you add it in the UI or through the API/CLI, or when you create the integration directly (in the UI or API/CLI).
-
-The Otterize operators -- intents operator, network mapper, and/or credentials operator -- running in your cluster will inform the Cloud about the intents, services, and credentials within this cluster, and will also convey their configuration (e.g. shadow or enforcement mode) within this cluster. Thus the cluster object in Otterize Cloud contains a lot of useful information about your Kubernetes cluster -- information used to deliver insights when you view your cluster in through the lens of the access graph.
-
-Note that, while a cluster and its namespaces and services could be in a single environment, and an environment could contain multiple clusters, many other combinations are possible. For example, a cluster could contain namespaces in multiple environments. Or, environments may contain some namespaces in one cluster and other namespaces in another cluster. Use whatever mappings make sense for your situation.
-
-Cluster names must be unique within an organization.
-
## Integrations
Otterize Cloud currently supports two types of integrations: **Kubernetes integrations** and **generic integrations**. In the future, many other types of integrations will be added, allowing Otterize Cloud to work seamlessly with all your infrastructures and systems.
-A Kubernetes integration is used to connect a Kubernetes cluster with Otterize Cloud via any or all of the Otterize operators: the intents operator, the network mapper, and the credentials operator. When a Kubernetes-type integration is created, it is always linked to an Otterize Cloud cluster object. It contains the credentials needed by the operators running in the Kubernetes cluster to communicate with the Cloud on behalf of that cluster, i.e., it ties together the physical Kubernetes cluster with its representation in Otterize Cloud. The integration also determines the environment to which namespaces in that clusters will be associated by default. The name of a Kubernetes integration is derived from the name of the cluster; since cluster names are unique per organization, so are Kubernetes-type integration names.
+A Kubernetes integration is used to connect a Kubernetes cluster with Otterize Cloud via any or all of the Otterize operators: the intents operator, the network mapper, and the credentials operator. It contains the credentials needed by the operators running in the Kubernetes cluster to communicate with the Cloud on behalf of that cluster, i.e., it ties together the physical Kubernetes cluster with its representation in Otterize Cloud. The integration also determines the environment with which namespaces in that cluster will be associated by default. The names of Kubernetes-type integrations must be unique within an organization.
A generic integration is used to connect generically an external system to Otterize Cloud. It provides that system credentials to access the Otterize API/CLI, in a way that doesn't involve any specific Otterize user. That makes it ideal for building automations on top of the Otterize API. For example, new clusters provisioned for the development team could be automatically connected to Otterize Cloud, or a CI/CD system could automatically look in the access graph for services that would be blocked or intents that were not declared and applied and fail the build. The name of the integration should reflect the way it will be used. The names of generic-type integrations must be unique within an organization.
+
+When a Kubernetes cluster is connected to Otterize Cloud, it is represented in the Cloud by a **Kubernetes integration** object. You'll name it when you add the integration in the UI or through the API/CLI.
+
+The Otterize operators -- intents operator, network mapper, and/or credentials operator -- running in your cluster will inform the Cloud about the intents, services, and credentials within this cluster, and will also convey their configuration (e.g. shadow or enforcement mode) within this cluster.
+
+Note that, while a cluster and its namespaces and services could be in a single environment, and an environment could contain multiple clusters, many other combinations are possible. For example, a cluster could contain namespaces in multiple environments. Or, environments may contain some namespaces in one cluster and other namespaces in another cluster. Use whatever mappings make sense for your situation.
\ No newline at end of file
diff --git a/docs/quickstart/access-control/aws-iam-eks.mdx b/docs/quickstart/access-control/aws-iam-eks.mdx
index 9938266f6..372a54184 100644
--- a/docs/quickstart/access-control/aws-iam-eks.mdx
+++ b/docs/quickstart/access-control/aws-iam-eks.mdx
@@ -4,9 +4,6 @@ title: Automate AWS IAM for EKS
image: /img/quick-tutorials/aws-iam-eks/social.png
---
-import CodeBlock from "@theme/CodeBlock";
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
Otterize automates AWS IAM roles and policies for your AWS EKS workloads, all in Kubernetes.
@@ -89,7 +86,7 @@ aws eks update-kubeconfig --region us-west-2 --name otterize-iam-eks-tutorial
#### 2. Deploy Otterize for AWS IAM
To deploy Otterize, head over to [Otterize Cloud](https://app.otterize.com) and:
-1. Create a Kubernetes cluster on the [Clusters page](https://app.otterize.com/clusters), and follow the instructions. *Make sure to enable enforcement mode for this tutorial.* If you already have a Kubernetes cluster connected, skip this step.
+1. Create a Kubernetes integration on the [Integrations page](https://app.otterize.com/integrations), and follow the instructions. *Make sure to enable enforcement mode for this tutorial.* If you already have a Kubernetes cluster connected, skip this step.
2. Create an AWS IAM integration on the [Integrations page](https://app.otterize.com/integrations).
diff --git a/docs/reference/cli/README.mdx b/docs/reference/cli/README.mdx
index 79fe6fe0f..ef863a693 100644
--- a/docs/reference/cli/README.mdx
+++ b/docs/reference/cli/README.mdx
@@ -17,7 +17,7 @@ The following are the commands offered by the Otterize CLI.
## Command structure
-Most CLI commands are of the form `otterize NOUN VERB` where the `NOUN` is the type of object (e.g. `intents`, `clusters`) and the `VERB` is the operation being performed.
+Most CLI commands are of the form `otterize NOUN VERB` where the `NOUN` is the type of object (e.g. `intents`, `integrations`) and the `VERB` is the operation being performed.
Putting the `NOUN` first makes the auto-completion and documentation built into the CLI easier to consume: you first choose the type of object you are interested in,
then the built-in auto-completion or documentation shows you the options on that type of object.
@@ -254,109 +254,6 @@ Some Cloud CLI commands may fail.
Upgrade your CLI to the latest build to resolve this issue.
```
-## Clusters
-
-### `otterize clusters list [-n ]`
-
-List all clusters, optionally filtered by name.
-
-Clusters in Otterize Cloud represent Kubernetes clusters that are integrated with Otterize Cloud. The list also includes clusters that were not integrated or are no longer integrated, as long as they were not deleted.
-
-#### Options
-
-| Name | Default | Description |
-| --- | --- | --- |
-| `-n` or `--name` | | Filter the list by cluster name. Since cluster names are unique, this will always return at most one cluster. |
-
-#### Returns
-
-Returns a table of information about the clusters.
-
-```shell
-id name status namespace count service count configuration.globalDefaultDeny
-────────────────── ─────────── ──────────── ─────────────── ───────────── ───────────────────────────────
-cluster_64fetu3i2t my-cluster1 CONNECTED 4 50 false
-cluster_7g4tre319g my-cluster2 DISCONNECTED 12 127 true
-
-```
-
-### `otterize clusters get `
-
-Returns information about a single cluster, specified by its id.
-
-#### Returns
-
-Returns a table of information about the cluster.
-
-```shell
-id name status namespace count service count configuration.globalDefaultDeny
-────────────────── ─────────── ──────────── ─────────────── ───────────── ───────────────────────────────
-cluster_64fetu3i2t my-cluster1 CONNECTED 4 50 false
-```
-
-### `otterize clusters create -n `
-
-Creates a cluster in Otterize Cloud with the given name.
-
-To connect it to a Kubernetes cluster, i.e. to the Otterize OSS operators in that cluster:
-- You'll need an environment to which all namespaces in the cluster are assigned by default. If needed, create an environment with `otterize environments create -n `.
-- Create an integration with `otterize integrations create kubernetes --cluster-id --env-id `.
-- Follow [the guide to installing or upgrading Otterize OSS](/installation) and use the integration's credentials.
-
-#### Options
-
-| Name | Default | Description |
-| --- | --- | --- |
-| `-n` or `--name` | | The name to give the cluster, in Otterize Cloud. |
-
-#### Returns
-
-Returns a table of information about the newly-created cluster.
-
-```shell
-id name default environment id integration id namespace count service count configuration intents operator credentials operator network mapper
-────────────────── ──────── ────────────────────── ────────────── ─────────────── ───────────── ──────────────────────────────────────────────────────────────────── ───────────────────────────────── ───────────────────────────────── ─────────────────────────────────
-cluster_kbw9ex7dfa cluster1 8 0 {GlobalDefaultDeny:false UseNetworkPoliciesInAccessGraphStates:true} NOT_INTEGRATED (last seen: never) NOT_INTEGRATED (last seen: never) NOT_INTEGRATED (last seen: never)
-
-```
-
-### `otterize clusters update --global-default-deny=[true|false]`
-
-Updates the user-settable properties of the cluster in Otterize Cloud.
-
-#### Options
-
-| Name | Default | Description |
-| --- | --- | --- |
-| `--global-default-deny` | false | Whether a global default deny network policy is in effect. |
-| `use-network-policies-in-access-graph-states` | true | If false, the access graph will not take network policies into account when calculating service and intents states. |
-
-#### Returns
-
-Returns a confirmation and a table of information about the updated cluster.
-
-```shell
-Cluster updated
-id name default environment id integration id namespace count service count configuration intents operator credentials operator network mapper
-────────────────── ──────── ────────────────────── ────────────── ─────────────── ───────────── ───────────────────────────────────────────────────────────────────── ───────────────────────────────── ───────────────────────────────── ─────────────────────────────────
-cluster_kbw9ex7dfa cluster1 8 0 {GlobalDefaultDeny:false UseNetworkPoliciesInAccessGraphStates:false} NOT_INTEGRATED (last seen: never) NOT_INTEGRATED (last seen: never) NOT_INTEGRATED (last seen: never)
-
-```
-
-### `otterize clusters delete `
-
-Deletes the given cluster, if it doesn't have an integration.
-
-To delete a cluster with an integration, first delete its integration.
-
-#### Returns
-
-Returns a confirmation of the deletion.
-
-```shell
-Deleted cluster cluster_kbw9ex7dfa
-```
-
## Environments
### `otterize environments list [-l label1,label2] [-n name]`
@@ -577,7 +474,7 @@ id type name cluster id default environment id intents operator
int_nx2twtcqwt GENERIC my-bot cli_csn8mb55b3 7adba16e331bdf9fe9d2c1ec7d2d78a1c7e56df34sc2c5ff98e4dec4aa232c3c
```
-### `otterize integrations create kubernetes --cluster-id= --env-id=`
+### `otterize integrations create kubernetes --name= --env-id=`
Creates a new Kubernetes-type integration for the specified cluster and with the specified default environment.
@@ -589,7 +486,7 @@ The default environment must also have already been created. All namespaces in t
| Name | Default | Description |
| --- | --- | --- |
-| `--cluster-id` | | The id of the cluster to use with this integration. |
+| `-n` or `--name` | | The name to give the new integration. |
| `--env-id` | | The id of the environment to use as the default environment for this integration. |
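+
+For example, to create an integration named `my-cluster` with a default environment (the environment id below is hypothetical):
+
+```shell
+otterize integrations create kubernetes --name=my-cluster --env-id=env_1234abcd
+```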
#### Returns
@@ -622,7 +519,7 @@ id type name cluster id default environment id intents operato
int_nx2twtcqwt GENERIC my-bot2 cli_csn8mb55b3 7adba16e331bdf9fe9d2c1ec7d2d78a1c7e56df34sc2c5ff98e4dec4aa232c3c
```
-### `otterize integrations update kubernetes [--env-id ]`
+### `otterize integrations update kubernetes [--env-id ] [--name=]`
Updates the specified Kubernetes-type integration with a new environment to use as the default environment for new namespaces.
@@ -630,6 +527,7 @@ Updates the specified Kubernetes-type integration with a new environment to use
| Name | Default | Description |
| --- | --- | --- |
+| `-n` or `--name` | | The new name to use for the integration. |
| `--env-id` | | The id of the environment to use as the default for this integration. |
#### Returns