diff --git a/docs/features/network-mapping-network-policies/reference/README.mdx b/docs/features/network-mapping-network-policies/reference/README.mdx
index 18e9d5962..dacfa4710 100644
--- a/docs/features/network-mapping-network-policies/reference/README.mdx
+++ b/docs/features/network-mapping-network-policies/reference/README.mdx
@@ -1,10 +1,10 @@
 ---
 sidebar_position: 3
 title: Reference
-hide_table_of_contents: true
+hide_table_of_contents: false
 ---

-### ClientIntents example (YAML)
+## ClientIntents example (YAML)

 ```yaml
 apiVersion: k8s.otterize.com/v1alpha3
@@ -28,8 +28,22 @@ spec:
     methods: [ get, post ]
 ```

+### ClientIntents and DNS values

-### Helm Chart options
+When a ClientIntent specifies a DNS identifier, such as a domain name, a sequence of operations integrates the resolved IP addresses into the corresponding NetworkPolicies:
+
+1. The Network Mapper maintains a DNS cache layer that records every resolved DNS name together with its IPv4 and IPv6 addresses.
+2. If no ClientIntent is associated with a given domain or its IP addresses, Otterize suggests a policy tailored to the observed traffic.
+3. When a ClientIntent is applied whose domain name is present in the cache, the Network Mapper updates the intent’s `status` section at one-second intervals with any newly identified IP addresses. Note that Otterize retains all previously identified IP addresses for backward compatibility.
+4. The Intents Operator watches for changes in the `status` section and amends the associated NetworkPolicy to include the newly discovered IP addresses.
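+To make step 3 concrete, here is a sketch of a `status` section populated for a DNS-based intent. The field names are inferred from the operator's `kubectl describe` output; the domain and address are illustrative:
+
+```yaml
+# Illustrative status for a ClientIntent declaring a call to api.adviceslip.com.
+# Written by the Network Mapper, read by the Intents Operator.
+status:
+  observedGeneration: 2
+  upToDate: true
+  resolvedIPs:
+    - dns: api.adviceslip.com
+      ips:
+        - 185.53.57.80
+```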
+ + + + + + + +## Helm Chart options | Key | Description | Default | |-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|---------| @@ -39,12 +53,12 @@ spec: View the [Helm chart reference](/reference/configuration/otterize-chart) for all other options -### Network mapper parameters +## Network mapper parameters All configurable parameters of the network mapper can be configured under the alias `networkMapper`. Further information about network mapper parameters can be found [in the network mapper's chart](https://github.com/otterize/helm-charts/tree/main/network-mapper). -### CLI: Network mapper commands +## CLI: Network mapper commands All `otterize network-mapper` commands share a set of optional flags which will not be repeated in the documentation for each command. diff --git a/docs/features/network-mapping-network-policies/tutorials/k8s-egress-access-control-tutorial.mdx b/docs/features/network-mapping-network-policies/tutorials/k8s-egress-access-control-tutorial.mdx new file mode 100644 index 000000000..390793a85 --- /dev/null +++ b/docs/features/network-mapping-network-policies/tutorials/k8s-egress-access-control-tutorial.mdx @@ -0,0 +1,316 @@ +--- +sidebar_position: 5 +title: Egress NetworkPolicy Automation +image: /img/quick-tutorials/egress-access-control/social.png +--- + +import Tabs from "@theme/Tabs"; +import TabItem from "@theme/TabItem"; + +Let’s learn how Otterize automates egress access control with network policies. + +In this tutorial, we will: + +- Deploy an example cluster consisting of an example frontend for a personal advice application and a server with an external dependency to retrieve wisdom. +- Declare ClientIntents for each service, including public internet and internal network egress intents. 
+- See that network policies were autogenerated to allow exactly the declared calls and to block undeclared ones.
+- Revise our intents to use DNS records, tying our network policies to domain names.
+
+## Prerequisites
+### Install Otterize on your cluster
+To deploy Otterize, head over to [Otterize Cloud](https://app.otterize.com/), and to integrate your cluster, navigate to the [Integrations page](https://app.otterize.com/integrations) and follow the instructions, but be sure to add the flag below.
+
+**Note:** Egress policy creation is off by default, so we must add the following flag when installing Otterize to enable it:
+
+```bash
+--set intentsOperator.operator.enableEgressNetworkPolicyCreation=true
+```
+
+## Tutorial
+### Deploy the cluster
+
+This will set up the namespace we will use for our tutorial and deploy the cluster containing our frontend and backend pods. Upon deployment, our cluster will have no network policies in place.
+
+```bash
+kubectl create namespace otterize-tutorial-egress-access
+kubectl apply -n otterize-tutorial-egress-access -f ${ABSOLUTE_URL}/code-examples/egress-access-control/all.yaml
+```
+
+### About Network Policies
+
+By default, pods in Kubernetes are non-isolated: they accept traffic from any source and can send traffic to any destination. Once a policy selects a pod, that pod becomes isolated for the policy's direction, and any connection not explicitly allowed is rejected. When an ingress policy is introduced, incoming traffic that does not match a rule is rejected; similarly, when an egress policy is introduced, traffic that does not match a rule is not allowed out of the pod.
+
+Stringent policies can be essential in certain sectors, such as healthcare, finance, and government. Implementing egress policies is crucial in minimizing attack surfaces by concealing services or restricting the exposure of any compromised services.
However, challenges may emerge when egress policies are applied to services that depend on external communications that were not initially accounted for. These could include DNS, time synchronization, package repositories, logging, telemetry, cloud services, authentication, or other external dependencies that, while not directly related to a pod's primary functionality, are vital to its operation.
+
+Otterize helps alleviate these issues by capturing and mapping the ingress and egress connections used by your pods and then suggesting policies that preserve your observed traffic. You can also enable [shadow enforcement](/reference/shadow-vs-active-enforcement) to see which connections would be blocked without committing to active enforcement.
+
+### Defining our intents
+
+We aim to secure the network in our example cluster by introducing a default deny policy for the entire network, plus policies for each pod's appropriate ingress and egress needs.
+
+* Frontend - Needs to retrieve advice from our backend. This will result in an egress policy on our frontend and an ingress policy on our backend.
+* Backend - Needs to accept requests from our frontend and communicate with an external API. This will result in an ingress policy on our backend and an egress policy for the external API.
+
+As previously mentioned, the pods will be non-isolated by default, and everything will work. Check the logs for the frontend service to see the free advice flowing:
+
+```bash
+kubectl logs -f -n otterize-tutorial-egress-access deploy/frontend
+```
+Example log output:
+```
+The answer to all your problems is to:
+The sun always shines above the clouds.
+
+The answer to all your problems is to:
+Stop procrastinating.
+
+The answer to all your problems is to:
+Don't feed Mogwais after midnight.
+```
+
+View of our non-isolated cluster within Otterize Cloud
+
+### Applying our intents
+
+Given that this is a serious advice application, we want to lock down our pods to ensure no outside interference can occur.
+
+To enforce strict communication rules for our services, we will start by applying a default deny policy, ensuring that only explicitly defined connections are allowed. You’ll see that we are allowing UDP on port 53 to support any DNS lookups we need. Without DNS support, our pods could not resolve their in-cluster names (*frontend*, *backend*) to internal IP addresses, nor resolve the domain names used by our external advice API service.
+
+```bash
+kubectl apply -n otterize-tutorial-egress-access -f ${ABSOLUTE_URL}/code-examples/egress-access-control/default-deny-policy.yaml
+```
+
+*Default Deny Policy*
+```yaml
+{@include: ../../../../static/code-examples/egress-access-control/default-deny-policy.yaml}
+```
+
+You can now see in the logs that the pods are isolated from each other and the public internet:
+
+```bash
+kubectl logs -f -n otterize-tutorial-egress-access deploy/frontend
+```
+Example log output from the *frontend* pod:
+```
+Unable to connect to the backend
+Unable to connect to the backend
+Unable to connect to the backend
+```
+
+Now that we have secured our broader network, we will apply the following ClientIntents to enable traffic for our services.
+
+```bash
+kubectl apply -n otterize-tutorial-egress-access -f ${ABSOLUTE_URL}/code-examples/egress-access-control/intents.yaml
+```
+
+```yaml
+{@include: ../../../../static/code-examples/egress-access-control/intents.yaml}
+```
+
+Now our network and our services can only open connections to the internal and external resources they explicitly need. Below, we inspect the five NetworkPolicies generated by Otterize and look at the inline comments to see how each policy selects its pods and defines its traffic rules.
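+Before walking through each policy, you can list them yourself. Assuming the tutorial namespace used above, the manually applied default-deny policy and the Otterize-generated policies should all appear:
+
+```bash
+kubectl get networkpolicies -n otterize-tutorial-egress-access
+```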
+ + + + +```yaml +Name: access-to-backend-from-otterize-tutorial-egress-access +Namespace: otterize-tutorial-egress-access +Created on: 2024-02-25 12:20:52 -0800 PST +Labels: intents.otterize.com/network-policy=backend-otterize-tutorial-eg-00531a +Annotations: none +Spec: + # Selector specifying pods to which the policy applies. In this case, it targets pod labeled as backend. + # Otterize automatically adds these labels, ensuring they persist across deployments and multiple instances. + PodSelector: intents.otterize.com/server=backend-otterize-tutorial-eg-00531a + Allowing ingress traffic: + # Specifies that the policy allows traffic to any port on the selected pods. + To Port: any (traffic allowed to all ports) + From: + # Specifying the namespace for our pod selector + NamespaceSelector: kubernetes.io/metadata.name=otterize-tutorial-egress-access + # Further refines the allowed sources of ingress to only those pods with the Otterize managed label + PodSelector: intents.otterize.com/access-backend-otterize-tutorial-eg-00531a=true + Not affecting egress traffic + # Specifies that this policy only applies to incoming traffic to the selected pods. + Policy Types: Ingress +``` + + + +```yaml +Name: access-to-frontend-from-otterize-tutorial-egress-access +Namespace: otterize-tutorial-egress-access +Created on: 2024-02-25 12:20:52 -0800 PST +Labels: intents.otterize.com/network-policy=frontend-otterize-tutorial-eg-2bb536 +Annotations: none +Spec: + # This label identifies the NetworkPolicy as relating to the frontend pod + # Otterize automatically adds these labels, ensuring they persist across deployments and multiple instances. 
+ PodSelector: intents.otterize.com/server=frontend-otterize-tutorial-eg-2bb536 + Allowing ingress traffic: + # Specifies that the policy permits traffic to any port on the selected pods + To Port: any (traffic allowed to all ports) + From: + # Specifying the namespace for our pod selector + NamespaceSelector: kubernetes.io/metadata.name=otterize-tutorial-egress-access + # Further refines the allowed sources of ingress to only those pods with the Otterize managed label + PodSelector: intents.otterize.com/access-frontend-otterize-tutorial-eg-2bb536=true + Not affecting egress traffic + # Specifies that this policy only applies to incoming traffic to the selected pods. + Policy Types: Ingress +``` + + + +```yaml +Name: egress-to-backend.otterize-tutorial-egress-access-from-frontend +Namespace: otterize-tutorial-egress-access +Created on: 2024-02-25 12:20:52 -0800 PST +Labels: intents.otterize.com/egress-network-policy=frontend-otterize-tutorial-eg-2bb536 + intents.otterize.com/egress-network-policy-target=backend-otterize-tutorial-eg-00531a +Annotations: none +Spec: + # This selector targets the pods to which the policy applies. Here, it specifically targets pods labeled as "client" of the "frontend" + # Otterize automatically adds these labels, ensuring they persist across deployments and multiple instances. + PodSelector: intents.otterize.com/client=frontend-otterize-tutorial-eg-2bb536 + Not affecting ingress traffic + Allowing egress traffic: + # Specifies that the policy allows egress traffic to any port + To Port: any (traffic allowed to all ports) + To: + # Specifying the namespace for our pod selector + NamespaceSelector: kubernetes.io/metadata.name=otterize-tutorial-egress-access + # Further refines the allowed sources of egress to only those pods with the Otterize managed label + PodSelector: intents.otterize.com/server=backend-otterize-tutorial-eg-00531a + # Specifies that this policy only applies to outbound traffic to the selected pods. 
+ Policy Types: Egress +``` + + + +```yaml +Name: egress-to-frontend.otterize-tutorial-egress-access-from-backend +Namespace: otterize-tutorial-egress-access +Created on: 2024-02-25 12:20:52 -0800 PST +Labels: intents.otterize.com/egress-network-policy=backend-otterize-tutorial-eg-00531a + intents.otterize.com/egress-network-policy-target=frontend-otterize-tutorial-eg-2bb536 +Annotations: none +Spec: + # This selector targets the pods to which the policy applies. Here, it specifically targets pods labeled as "client" of the "backend" + # Otterize automatically adds these labels, ensuring they persist across deployments and multiple instances. + PodSelector: intents.otterize.com/client=backend-otterize-tutorial-eg-00531a + Not affecting ingress traffic + Allowing egress traffic: + # Specifies that the policy allows egress traffic to any port + To Port: any (traffic allowed to all ports) + To: + # Specifying the namespace for our pod selector + NamespaceSelector: kubernetes.io/metadata.name=otterize-tutorial-egress-access + # Further refines the allowed sources of egress to only those pods with the Otterize managed label + PodSelector: intents.otterize.com/server=frontend-otterize-tutorial-eg-2bb536 + # Specifies that this policy only applies to outbound traffic to the selected pods. + Policy Types: Egress + +``` + + + +```yaml +Name: egress-to-internet-from-backend +Namespace: otterize-tutorial-egress-access +Created on: 2024-02-25 12:20:52 -0800 PST +Labels: intents.otterize.com/egress-internet-network-policy=backend-otterize-tutorial-eg-00531a +Annotations: none +Spec: + # This selector targets the pods to which the policy applies. Here, it specifically targets pods labeled as "client" of the "backend" + # Otterize automatically adds these labels, ensuring they persist across deployments and multiple instances. 
PodSelector: intents.otterize.com/client=backend-otterize-tutorial-eg-00531a
+  Not affecting ingress traffic
+  Allowing egress traffic:
+    # Specifies that the policy allows egress traffic to any port
+    To Port: any (traffic allowed to all ports)
+    To:
+      # Specifies the IP address range to which the policy allows egress traffic. Here, /32 CIDR notation indicates a single IP address: the address of our advice API
+      IPBlock:
+        CIDR: 185.53.57.80/32
+        # The 'Except' field allows excluding IP addresses within the defined CIDR range, but it is empty in this case.
+        Except:
+  # Specifies that this policy only applies to outbound traffic from the selected pods.
+  Policy Types: Egress
+```
+
+The protected network can be seen on Otterize Cloud:
+
+### Using DNS and domain names in ClientIntents
+
+In the preceding example, we used a static IP address to define our intents. In practice, however, an external service usually sits behind a domain name rather than a static IP address. NetworkPolicies offer no direct way to allow traffic by domain name, which makes policies difficult to write for services whose IP addresses change. Because ClientIntents can reference domain names and DNS records as well as static IP addresses, they offer a flexible way around this limitation. Below is a revised version of our ClientIntents that uses a domain name.
+
+```yaml
+{@include: ../../../../static/code-examples/egress-access-control/domain-intents.yaml}
+```
+
+In the above YAML file, we have replaced the IP address with our service’s domain name.
+
+Otterize will now track the resolved IP addresses for `api.adviceslip.com` and add them to NetworkPolicies within your clusters.
Let’s deploy the revised intents with the command below:
+
+```bash
+kubectl apply -n otterize-tutorial-egress-access -f ${ABSOLUTE_URL}/code-examples/egress-access-control/domain-intents.yaml
+```
+
+Once the updated definition takes effect, we can view the ClientIntent again with the command below and find some additions.
+
+```bash
+kubectl describe clientintents -n otterize-tutorial-egress-access backend --show-events=false
+```
+View Updated ClientIntent Description + +```yaml + +Name: backend +Namespace: otterize-tutorial-egress-access +Labels: none +Annotations: none +API Version: k8s.otterize.com/v1alpha3 +Kind: ClientIntents +Metadata: + Creation Timestamp: 2024-03-08T18:53:40Z + Finalizers: + intents.otterize.com/client-intents-finalizer + Generation: 2 + Resource Version: 2122 + UID: c93d8cc4-2a8f-404e-b06c-1935176f1dc8 +Spec: + Calls: + Internet: + Domains: + api.adviceslip.com + Type: internet + Name: frontend + Service: + Name: backend +Status: + Observed Generation: 2 + Resolved I Ps: + Dns: api.adviceslip.com + Ips: + 185.53.57.80 + Up To Date: true +``` + +
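+As a sketch of what this means for the generated internet egress policy: if `api.adviceslip.com` later resolved to a second address (185.53.57.81 below is hypothetical), the policy's `ipBlock` list would grow while retaining the original entry:
+
+```yaml
+# Sketch (hypothetical second IP): the egress rule of the generated
+# policy after another address is resolved for the domain.
+egress:
+  - to:
+      - ipBlock:
+          cidr: 185.53.57.80/32   # originally resolved IP, retained
+      - ipBlock:
+          cidr: 185.53.57.81/32   # hypothetical newly resolved IP
+```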
Otterize identifies the IP addresses associated with the domain and records them in a new section within the `status` block. This tells the [Intents Operator](/reference/configuration/intents-operator) to dynamically adjust the network policies whenever new IP addresses are detected for the specified domain name. For services with dynamic IP addresses, each newly discovered IP address is added to the network policy. Further details on DNS intents can be found in the [Reference](/features/network-mapping-network-policies/reference#clientintents-and-dns-values).
+
+## Teardown
+
+To remove the deployed examples, run the following:
+
+```bash
+kubectl delete namespace otterize-tutorial-egress-access
+```
diff --git a/package.json b/package.json
index ac65d4afb..7c15e67e3 100644
--- a/package.json
+++ b/package.json
@@ -30,7 +30,7 @@
     "react": "^17.0.2",
     "react-dom": "^17.0.2",
     "react-loadable": "^5.5.0",
-    "vercel": "^33.4.0"
+    "vercel": "^33.5.4"
   },
   "devDependencies": {
     "@docusaurus/module-type-aliases": "^2.4.3",
diff --git a/static/code-examples/egress-access-control/all.yaml b/static/code-examples/egress-access-control/all.yaml
new file mode 100644
index 000000000..6d2bcca67
--- /dev/null
+++ b/static/code-examples/egress-access-control/all.yaml
@@ -0,0 +1,57 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: otterize-tutorial-egress-access
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+  namespace: otterize-tutorial-egress-access
+spec:
+  selector:
+    matchLabels:
+      app: frontend
+  template:
+    metadata:
+      labels:
+        app: frontend
+    spec:
+      containers:
+        - name: frontend
+          imagePullPolicy: Always
+          image: 'otterize/egress-tutorial-frontend'
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: backend
+  namespace: otterize-tutorial-egress-access
+spec:
+  selector:
+    matchLabels:
+      app: backend
+  template:
+    metadata:
+      labels:
+        app: backend
+ spec: + containers: + - name: backend + imagePullPolicy: Always + image: 'otterize/egress-tutorial-backend' + ports: + - containerPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: backend + namespace: otterize-tutorial-egress-access +spec: + selector: + app: backend + ports: + - protocol: TCP + port: 8080 + targetPort: 8080 \ No newline at end of file diff --git a/static/code-examples/egress-access-control/default-deny-policy.yaml b/static/code-examples/egress-access-control/default-deny-policy.yaml new file mode 100644 index 000000000..a24890287 --- /dev/null +++ b/static/code-examples/egress-access-control/default-deny-policy.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny +spec: + podSelector: {} + policyTypes: + - Egress + - Ingress + egress: + - ports: + - protocol: UDP + port: 53 \ No newline at end of file diff --git a/static/code-examples/egress-access-control/dns-enabled-npol.yaml b/static/code-examples/egress-access-control/dns-enabled-npol.yaml new file mode 100644 index 000000000..1a7728bd2 --- /dev/null +++ b/static/code-examples/egress-access-control/dns-enabled-npol.yaml @@ -0,0 +1,22 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-dns-access +spec: + egress: + - ports: + - port: 53 + protocol: UDP + to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: kube-system + podSelector: + matchLabels: + k8s-app: kube-dns + podSelector: + matchExpressions: + - key: intents.otterize.com/service + operator: Exists + policyTypes: + - Egress \ No newline at end of file diff --git a/static/code-examples/egress-access-control/domain-intents.yaml b/static/code-examples/egress-access-control/domain-intents.yaml new file mode 100644 index 000000000..d29e36b54 --- /dev/null +++ b/static/code-examples/egress-access-control/domain-intents.yaml @@ -0,0 +1,26 @@ +apiVersion: k8s.otterize.com/v1alpha3 +kind: ClientIntents +metadata: + 
name: frontend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: frontend
+  calls:
+    - name: backend
+---
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: backend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: backend
+  calls:
+    - type: internet
+      internet:
+        domains:
+          # Domain name for our advice service
+          - api.adviceslip.com
+    - name: frontend
\ No newline at end of file
diff --git a/static/code-examples/egress-access-control/intents-no-external-api.yaml b/static/code-examples/egress-access-control/intents-no-external-api.yaml
new file mode 100644
index 000000000..179dceb65
--- /dev/null
+++ b/static/code-examples/egress-access-control/intents-no-external-api.yaml
@@ -0,0 +1,21 @@
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: frontend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: frontend
+  calls:
+    - name: backend
+---
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: backend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: backend
+  calls:
+    - name: frontend
\ No newline at end of file
diff --git a/static/code-examples/egress-access-control/intents.yaml b/static/code-examples/egress-access-control/intents.yaml
new file mode 100644
index 000000000..b77e53134
--- /dev/null
+++ b/static/code-examples/egress-access-control/intents.yaml
@@ -0,0 +1,25 @@
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: frontend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: frontend
+  calls:
+    - name: backend
+---
+apiVersion: k8s.otterize.com/v1alpha3
+kind: ClientIntents
+metadata:
+  name: backend
+  namespace: otterize-tutorial-egress-access
+spec:
+  service:
+    name: backend
+  calls:
+    - type: internet
+      internet:
+        ips:
+          - 185.53.57.80 # IP address of our external API
+    - name: frontend
\ No newline at end of file
diff --git
a/static/img/icons/Postgresql-no-word-mark.svg b/static/img/icons/Postgresql-no-word-mark.svg
new file mode 100644
index 000000000..d98e3659c
--- /dev/null
+++ b/static/img/icons/Postgresql-no-word-mark.svg
@@ -0,0 +1,22 @@
[SVG markup not recoverable from extraction]
\ No newline at end of file
diff --git a/static/img/quick-tutorials/egress-access-control/cluster-intents-applied.png b/static/img/quick-tutorials/egress-access-control/cluster-intents-applied.png
new file mode 100644
index 000000000..384f81aaa
Binary files /dev/null and b/static/img/quick-tutorials/egress-access-control/cluster-intents-applied.png differ
diff --git a/static/img/quick-tutorials/egress-access-control/social.png b/static/img/quick-tutorials/egress-access-control/social.png
new file mode 100644
index 000000000..43fea749e
Binary files /dev/null and b/static/img/quick-tutorials/egress-access-control/social.png differ
diff --git a/static/img/quick-tutorials/egress-access-control/unprotected-network-egress-tutorial.png b/static/img/quick-tutorials/egress-access-control/unprotected-network-egress-tutorial.png
new file mode 100644
index 000000000..7a2bc2986
Binary files /dev/null and b/static/img/quick-tutorials/egress-access-control/unprotected-network-egress-tutorial.png differ
diff --git a/yarn.lock b/yarn.lock
index f679995a9..184f56ebb 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -8841,7 +8841,7 @@ vary@~1.1.2:
   resolved "https://registry.yarnpkg.com/vary/-/vary-1.1.2.tgz#2299f02c6ded30d4a5961b0b9f74524a18f634fc"
   integrity sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==

-vercel@^33.4.0:
+vercel@^33.5.4:
   version "33.5.5"
   resolved "https://registry.yarnpkg.com/vercel/-/vercel-33.5.5.tgz#77848b78d7535d436ecd884b61a2910709a677cf"
   integrity sha512-MsuUq6JCPGtRhrzHQ2MVRh8bxNkhVWDaYGPk3LGSEWKbJ0dkB1ic97s5uMEBSsp6QgUB8ZaGuosPDTDGgmPxXw==