diff --git a/docs/_common/install-otterize-cli.md b/docs/_common/install-otterize-cli.md
index e82eb663f..ba11a7224 100644
--- a/docs/_common/install-otterize-cli.md
+++ b/docs/_common/install-otterize-cli.md
@@ -13,7 +13,7 @@ brew install otterize/otterize/otterize-cli
 ```bash
-curl -LJO https://get.otterize.com/otterize-cli/v1.0.2/otterize_macOS_arm64_notarized.zip
+curl -LJO https://get.otterize.com/otterize-cli/v1.0.3/otterize_macOS_arm64_notarized.zip
 tar xf otterize_macOS_arm64_notarized.zip
 sudo cp otterize /usr/local/bin # optionally move to PATH
 ```
@@ -21,7 +21,7 @@ sudo cp otterize /usr/local/bin # optionally move to PATH
 ```bash
-curl -LJO https://get.otterize.com/otterize-cli/v1.0.2/otterize_macOS_x86_64_notarized.zip
+curl -LJO https://get.otterize.com/otterize-cli/v1.0.3/otterize_macOS_x86_64_notarized.zip
 tar xf otterize_macOS_x86_64_notarized.zip
 sudo cp otterize /usr/local/bin # optionally move to PATH
 ```
@@ -42,7 +42,7 @@ scoop install otterize-cli
 ```PowerShell
-Invoke-WebRequest -Uri https://get.otterize.com/otterize-cli/v1.0.2/otterize_windows_x86_64.zip -OutFile otterize_Windows_x86_64.zip
+Invoke-WebRequest -Uri https://get.otterize.com/otterize-cli/v1.0.3/otterize_windows_x86_64.zip -OutFile otterize_Windows_x86_64.zip
 Expand-Archive otterize_Windows_x86_64.zip -DestinationPath . # optionally move to PATH
 ```
@@ -54,7 +54,7 @@ Expand-Archive otterize_Windows_x86_64.zip -DestinationPath .
 ```bash
-wget https://get.otterize.com/otterize-cli/v1.0.2/otterize_linux_x86_64.tar.gz
+wget https://get.otterize.com/otterize-cli/v1.0.3/otterize_linux_x86_64.tar.gz
 tar xf otterize_linux_x86_64.tar.gz
 sudo cp otterize /usr/local/bin # optionally move to PATH
 ```
diff --git a/docs/quickstart/access-control/postgres.mdx b/docs/quickstart/access-control/postgres.mdx
new file mode 100644
index 000000000..0b71187c2
--- /dev/null
+++ b/docs/quickstart/access-control/postgres.mdx
@@ -0,0 +1,176 @@
+---
+sidebar_position: 2
+title: Just-in-time PostgreSQL access
+image: /img/quick-tutorials/aws-iam-eks/social.png
+---
+
+import CodeBlock from "@theme/CodeBlock";
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+export const Terminal = ({children}) => (
+  <div>
+    {children}
+  </div>
+);
+
+
+# Overview
+This tutorial will deploy an example cluster to highlight Otterize's PostgreSQL capabilities. Within that cluster is a client service that hits an endpoint on a server, which then connects to a database. The server runs two different database operations:
+1. An `INSERT` operation to append a row to a table within the database
+2. A `SELECT` operation to validate the updates.
+
+The server needs appropriate permissions to access the database. You could use one admin user for all services, but that is insecure and a common cause of security breaches. With Otterize, you can declare the access each service requires, and have Otterize create users and apply correctly scoped SQL GRANTs just in time, as the service spins up and down.
+
+In this tutorial, we will:
+* Deploy an example cluster
+* Make our database accessible to Otterize Cloud
+* Connect our cluster and database to Otterize Cloud
+* Declare a ClientIntents resource for the server, specifying required access
+* See that the required access has been granted
+
+# Prerequisites
+
+#### 1. Minikube Cluster
+<details>
+<summary>Prepare a Kubernetes cluster with Minikube</summary>
+
+For this tutorial you'll need a local Kubernetes cluster. Having a cluster with a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.
+
+If you don't have the Minikube CLI, first [install it](https://minikube.sigs.k8s.io/docs/start/).
+
+Then start your Minikube cluster with Calico, in order to enforce network policies.
+
+```shell
+minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico
+```
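+
+Once Minikube is up, you can optionally confirm that the nodes and the Calico pods are ready before continuing. This is only a sanity check and not part of the tutorial; the `k8s-app=calico-node` label is an assumption and may vary between Calico versions:
+
+```shell
+# All nodes should report a Ready status
+kubectl get nodes
+# Calico pods should be Running (label may differ across Calico versions)
+kubectl get pods -n kube-system -l k8s-app=calico-node
+```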
+</details>
+
+#### 2. ngrok
+For the purposes of this tutorial, we will use ngrok to create a proxy that connects our locally running database to Otterize Cloud. Once you have an [ngrok account](https://dashboard.ngrok.com/signup), install the ngrok CLI by following the instructions here: [ngrok install](https://ngrok.com/download)
+
+# Tutorial
+
+### Deploy the cluster
+
+This will set up the namespace we will use for our tutorial and deploy the cluster with our client, server, and database.
+
+```shell
+kubectl create namespace otterize-tutorial-postgres
+kubectl apply -n otterize-tutorial-postgres -f ${ABSOLUTE_URL}/code-examples/postgres/client-server-database.yaml
+```
+
+### Make the database accessible to Otterize Cloud
+
+We need to allow Otterize Cloud to access the database server so that it can configure on-demand credentials for our server. This tutorial exposes our database port to the local environment and then proxies it to Otterize Cloud using ngrok. We will need both of these processes up and running for the rest of this tutorial.
+
+In a new terminal window, run the following command to forward the database port from the cluster into your local environment:
+```shell
+kubectl port-forward svc/database 5432:5432 -n otterize-tutorial-postgres
+```
+
+Now that your database port is accessible from your local environment, we will use ngrok to make it available to Otterize Cloud. In production, this would typically be done through firewall configuration instead.
+
+In a new terminal window, run:
+```shell
+ngrok tcp 5432
+```
+
+Once ngrok is running, make note of the *Forwarding* host and port. We will need this for the next step.
+
+### Integrate the database with Otterize Cloud
+
+To add the database, head over to the [Integrations page](https://app.otterize.com/integrations):
+
+1. Click *Add Integration*
+2. Select Integration Type: *Database*
+3. Provide a name for the integration: *otterize-tutorial-postgres*
+4. Leave the database type set to *PostgreSQL*
+5. Copy your *Forwarding* host and port from ngrok into the *Address* field. This will look something like `0.tcp.us-cal-1.ngrok.io:14192`. Be sure to remove the `tcp://` portion of the URL.
+6. *Username*: otterize-tutorial, *Password*: jeffdog523
+   1. Note that this is a superuser, which allows Otterize to create unique credentials for each service. For production, it is recommended to create a privileged user for Otterize’s exclusive use. This user should have the necessary permissions to GRANT access to any databases and tables you want it to manage.
+7. Hit *Test Connection*, and you should see an “OK” status.
+8. Hit the *Add* button to complete the integration.
+
+### Integrate the cluster with Otterize Cloud
+Create a Kubernetes cluster on the [Clusters page](https://app.otterize.com/clusters), and follow the instructions.
+
+After providing a cluster name and environment, choose the following for this tutorial:
+
+1. mTLS and Kafka Support: None
+2. Enforcement mode: Enabled.
+3. Copy and run the Helm upgrade command.
+4. You should see the Connection status change.
+
+### View logs for the server
+After the client, server, and database are up and running, we can see that the server does not have the appropriate access to the database.
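+
+Before digging into the error, you can optionally confirm that all three workloads started. This is only a sanity check; the exact pod names will differ because of generated suffixes:
+
+```shell
+# Expect client, server and database pods in the Running state
+kubectl get pods -n otterize-tutorial-postgres
+```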
+Inspect the server logs with the following command:
+
+```shell
+kubectl logs -f -n otterize-tutorial-postgres deploy/server
+```
+
+Example log:
+
+    pq: password authentication failed for user "svc_9cigb2qemv_otterize_tutorial_postgres_server"
+
+### Define your ClientIntents
+
+ClientIntents are Otterize’s way of defining access through unique client-to-service relationships, which leads to precisely scoped access. In this example, we give our `server` service the ability to insert and select records, allowing it to access the database.
+
+Below is our `intents.yaml` file. As you can see, it is scoped to our database named `otterize-tutorial` and our `public.example` table. We have also limited the access to just the `SELECT` and `INSERT` operations. We could add more databases, tables, or operations if our service required more access.
+
+Specifying the table and operations is optional. If you don't specify the table, access will be granted to all tables in the specified database. If you don't specify the operations, all operations will be allowed.
+```yaml
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: client-intents-for-server
+  namespace: otterize-tutorial-postgres
+spec:
+  service:
+    name: server
+  calls:
+    - name: otterize-tutorial-postgres # Same name as our integration
+      type: database
+      databaseResources:
+        - databaseName: otterize-tutorial
+          table: public.example
+          operations:
+            - SELECT
+            - INSERT
+```
+
+We can now apply our intents. Behind the scenes, Otterize Cloud runs `CREATE USER` and `GRANT` queries on the database, making our `SELECT` and `INSERT` errors disappear.
+
+```shell
+kubectl apply -f intents.yaml
+```
+
+Example log:
+
+    Successfully INSERTED into our table
+    Successfully SELECTED, most recent value: 2024-01-22T18:48:43Z
+
+That’s it! If your service’s functionality changes, adding or removing access is as simple as updating your ClientIntents definitions. For fun, try altering the `operations` to just `SELECT` or `INSERT`.
+
+# Teardown
+To remove the deployed examples, run:
+```shell
+kubectl delete namespace otterize-tutorial-postgres
+```
+
+End the ngrok and port-forwarding processes by closing their terminal windows or pressing Ctrl-C in each.
\ No newline at end of file
diff --git a/docs/quickstart/access-control/postgresql.mdx b/docs/quickstart/access-control/postgresql.mdx
deleted file mode 100644
index 4b4fc543b..000000000
--- a/docs/quickstart/access-control/postgresql.mdx
+++ /dev/null
@@ -1,75 +0,0 @@
----
-sidebar_position: 2
-title: Just-in-time PostgreSQL users & access
-image: /img/quick-tutorials/postgresql/social.png
----
-
-import CodeBlock from "@theme/CodeBlock";
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-Otterize automates PostgreSQL access management and secrets for your workloads, all in Kubernetes.
-
-![](/img/quick-tutorials/postgresql/cloud.png)
-
-Connect to [Otterize Cloud](https://app.otterize.com) to get started!
-
-## Deploy Otterize for PostgreSQL
-
-### Install Otterize
-
-{@include: ../../_common/install-otterize-from-cloud-with-enforcement-postgresql.md}
-
-### Create database integration
-Create a _Database_ integration of type _PostgreSQL_ on the [Integrations page](https://app.otterize.com/integrations).
-
-## Configure your workloads
-
-### Pod annotaion
-Annotate a pod, requesting a user and a password to be provisioned and bound to the pod.
-Annotate the pod with this annotation:
-
-`credentials-operator.otterize.com/user-password-secret-name: booking-service-secret`
-
-Otterize then provisions credentials for this specific workload in this namespace in this cluster, that is not shared with other workloads.
-
-### ClientIntents
-Declare your workload’s ClientIntents, specifying desired permissions.
-
-```yaml
-apiVersion: k8s.otterize.com/v1alpha3
-kind: ClientIntents
-metadata:
-  name: booking-service
-  namespace: flight-search
-spec:
-  service:
-    name: booking-service
-  calls:
-    - name: bookings
-      type: database
-      databaseResources:
-        - table: users
-          databaseName: bookings-db
-          operations:
-            - SELECT
-        - table: products
-          databaseName: bookings-db
-          operations:
-            - ALL
-```
-
-Otterize then creates a user and matching grants on the target database.
-
-### Can I also map SQL calls?
-
-:::info Coming soon
-Capture SQL calls for pods in your cluster, automatically generating the required least-privilege permissions, or ClientIntents, for each workload. Zero-friction in development, zero-trust in production. It’s coming.
-:::
-
-If you want to learn more, and meet other Otterize users, please [Join our Community](https://joinslack.otterize.com/) and chat with us!
\ No newline at end of file
diff --git a/docs/quickstart/visualization/k8s-network-mapper.mdx b/docs/quickstart/visualization/k8s-network-mapper.mdx
index af3fce832..b703b3cf7 100644
--- a/docs/quickstart/visualization/k8s-network-mapper.mdx
+++ b/docs/quickstart/visualization/k8s-network-mapper.mdx
@@ -8,7 +8,7 @@ import Tabs from "@theme/Tabs";
 import TabItem from "@theme/TabItem";
 import styles from "/src/css/styles.module.css";
 
-The network mapper allows you to map pod-to-pod traffic within your K8s cluster.
+The network mapper allows you to map network traffic for your K8s cluster.
 Once mapped you can export it as an image, json, list, or view it within Otterize Cloud.
 
 In this tutorial, we will:
@@ -29,7 +29,7 @@ Before you start, you'll need a Kubernetes cluster. Having a cluster with a [CNI
 
 You can now install Otterize in your cluster (if it's not already installed), and optionally connect to Otterize Cloud. Connecting to Cloud lets you:
 1. See what's happening visually in your browser, through the "access graph";
-2. Avoid using SPIRE (which can be installed with Otterize) for issuing certificates, as Otterize Cloud provides a certificate service.
+2. View pod public internet egress traffic.
 
 So either forego browser visualization and:
@@ -73,6 +73,15 @@ Deploy the following simple example — `client`, `client2` and `server`, co
 kubectl apply -n otterize-tutorial-mapper -f ${ABSOLUTE_URL}/code-examples/network-mapper/all.yaml
 ```
 
+<details>
+<summary>Expand to see the deployment YAML</summary>
+
+```yaml
+{@include: ../../../static/code-examples/network-mapper/all.yaml}
+```
+
+</details>
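+
+Before moving on to mapping, you can optionally check that the example pods reached the Running state. This is only a sanity check; the generated pod name suffixes will differ in your cluster:
+
+```shell
+# Expect client, client2 and server pods in the Running state
+kubectl get pods -n otterize-tutorial-mapper
+```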
+
 ## Map the cluster
 
 The network mapper starts to sniff traffic and build an in-memory network map as soon as it's installed.
@@ -94,11 +103,11 @@ If you've attached Otterize OSS to Otterize Cloud, you can now also see the [acc
 The access graph reveals several types of information and insights, such as:
 
 1. Seeing the network map for different clusters, seeing the subset of the map for a given namespace, or even — according to how you've mapped namespaces to environments — seeing the subset of the map for a specific environment.
-2. Filtering the map to include recently-seen traffic, since some date in the past. That way you can eliminate calls that are no longer relevant, without having to reset the network mapper and start building a new map.
-3. If the intents operator is also connected, the access graph now reveals more specifics about access: understand which services are protected or would be protected, and which client calls are being blocked or would be blocked. We'll see more of that in the next couple of tutorials
-
-Note, for example, that the `client` → `server` arrow is yellow. Clicking on it shows:
+2. Viewing the public internet egress traffic for each pod, including the DNS name and the IPs associated with each outbound request.
+3. Filtering the map to include recently-seen traffic, since some date in the past. That way you can eliminate calls that are no longer relevant, without having to reset the network mapper and start building a new map.
+4. If the intents operator is also connected, the access graph now reveals more specifics about access: understand which services are protected or would be protected, and which client calls are being blocked or would be blocked. We'll see more of that in the next couple of tutorials.
+Note, for example, that the `client` → `server` arrow is yellow. Clicking on it shows the automatically generated intents for both the client-to-server call and the client's egress to the public internet. If we take a closer look, the ClientIntents YAML specifies that the `client` can call the `server` on the internal network, and that it can reach the IP address `142.250.189.174`. We can see from the comment that this IP belongs to google.com.
 [Image: Client to server edge info]
Signup for free",
+              html: "",
               position: "right",
             },
             {
               href: "https://calendly.com/otterize-team/kubecon-na",
-              html: "
Request a demoDemo
", + html: "
Request a demoDemo
", position: "right", }, ], @@ -355,7 +363,12 @@ const config = { }, { html: `