feat: audit scanner docs.
Adds documentation about the new Kubewarden component, the Audit
Scanner. The docs explain what is it, how to use it and why it is
useful.

Signed-off-by: José Guilherme Vanz <[email protected]>
jvanz committed Jul 11, 2023
1 parent 33f16bd commit c1f010c
Showing 3 changed files with 287 additions and 1 deletion.
235 changes: 235 additions & 0 deletions docs/explanations/audit-scanner.md
@@ -0,0 +1,235 @@
---
sidebar_label: "Audit Scanner"
title: "Kubewarden Audit Scanner"
---

# Kubewarden Audit Scanner

Starting with version `v1.7.0`, Kubewarden introduces a new component called
the Audit Scanner. This scanner evaluates whether the resources running in the
cluster comply with the deployed policies. It is particularly useful when
policies are deployed after the resources: the Audit Scanner allows operators
to detect pre-existing resources that violate the validations performed by the
policies.

To illustrate the usage of the audit scanner in Kubewarden, let's consider the
following scenario:

Assume you have deployed a set of policies to enforce specific rules and
validations on your Kubernetes cluster. However, some resources were deployed
before these policies were implemented, and you want to ensure that these
resources comply with the newly deployed policies.

To address this need, the audit scanner periodically evaluates the resources
running in your cluster and checks whether they comply with the deployed
policies. By doing so, it helps operators identify resources that violate
policy validations, even if those resources were deployed before the policies
themselves.

See [how to enable the Audit Scanner](../howtos/audit-scanner).

### Policy Reports

When using the Kubewarden Audit Scanner, the results of the policy scans are
stored in [PolicyReport](https://htmlpreview.github.io/?https://github.com/kubernetes-sigs/wg-policy-prototypes/blob/master/policy-report/docs/index.html) Custom Resource Definitions (CRDs).

:::caution
Note that the PolicyReport CRDs are under development in the `wg-policy`
Kubernetes group. Therefore, this documentation may become outdated when a new
version of the CRDs is released.

Check out the `wg-policy` group
[repository](https://github.com/kubernetes-sigs/wg-policy-prototypes) for
more information about the CRDs.
:::

The audit scanner stores the audit results in the Policy Report Custom Resource
Definitions (CRDs) created by the `wg-policy` Kubernetes group. These CRDs
provide a structured way to store and manage the audit results. Each namespace
scanned by the audit scanner has a separate report associated with it, and
cluster-wide resources have a separate report as well.

To use the audit scanner, you need to install the Policy Report CRDs manually
or use the version installed by the `kubewarden-crds` chart. These CRDs define
the structure and schema of the audit reports generated by the scanner.

The audit results generated by the scanner include information such as the
policy that was evaluated, the resource that was scanned, the result of the
evaluation (pass, fail, or skip), and a timestamp indicating when the
evaluation took place. Additionally, you can optionally define severity and
category annotations for your policies in the `metadata.yml` file.

You can also leverage the optional UI provided by the
[policy-reporter](https://github.com/kyverno/policy-reporter) tool for
monitoring and observability of the PolicyReport CRDs. Furthermore, operators
can access the reports via ordinary `kubectl` commands.
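
For instance, once the scanner has produced reports, they can be inspected with
standard `kubectl` commands. This sketch assumes the PolicyReport CRDs are
installed; the resource names come from the `wg-policy` CRDs:

```console
# List the per-namespace reports across all namespaces
kubectl get policyreports -A

# List the report covering cluster-wide resources
kubectl get clusterpolicyreports

# Inspect the full content of the default namespace's report
kubectl get policyreport -n default -o yaml
```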


Let's take a look at some example audit results generated by the audit scanner:

### Cluster-Wide Audit Results example

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
kind: ClusterPolicyReport
metadata:
  creationTimestamp: "2023-07-10T19:25:40Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: kubewarden
  ...
results:
- policy: cap-testing-cap-policy
  ...
  resourceSelector: {}
  resources:
  - apiVersion: v1
    kind: Namespace
    name: kube-system
  ...
  result: pass
  rule: testing-cap-policy
  source: kubewarden
  timestamp:
    nanos: 0
    seconds: 1689017140
- policy: cap-testing-cap-policy
  ...
  resourceSelector: {}
  resources:
  - apiVersion: v1
    kind: Namespace
    name: default
  ...
  result: pass
  rule: testing-cap-policy
  source: kubewarden
  timestamp:
    nanos: 0
    seconds: 1689017140
...
summary:
  error: 0
  fail: 0
  pass: 6
  skip: 0
  warn: 0
```

In the above example, the audit scanner has evaluated the
`cap-testing-cap-policy` on multiple namespaces in the cluster. The results
indicate that all the namespaces passed the policy validation. The `summary`
section provides a summary of the audit results, showing that there were no
errors, failures, or warnings.

### Namespace-Specific Audit Results example

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
kind: PolicyReport
metadata:
  creationTimestamp: "2023-07-10T19:28:05Z"
  generation: 4
  labels:
    app.kubernetes.io/managed-by: kubewarden
  ...
results:
- message: one of the containers has privilege escalation enabled
  policy: cap-no-privilege-escalation
  ...
  resourceSelector: {}
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
    namespace: default
  ...
  result: fail
  rule: no-privilege-escalation
  source: kubewarden
  timestamp:
    nanos: 0
    seconds: 1689017383
- policy: cap-do-not-run-as-root
  ...
  resourceSelector: {}
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
    namespace: default
  ...
  result: pass
  rule: do-not-run-as-root
  source: kubewarden
  timestamp:
    nanos: 0
    seconds: 1689017383
...
summary:
  error: 0
  fail: 8
  pass: 10
  skip: 0
  warn: 0
```

In this example, the audit scanner has evaluated multiple policies on resources
within the `default` namespace. The results indicate that some of the resources
failed the validation for the `cap-no-privilege-escalation` policy, while
others passed the validation for the `cap-do-not-run-as-root` policy. The
`summary` section shows a summary of the audit results, indicating the number
of failures and passes.
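
To quickly spot the non-compliant resources, a report can also be filtered
client-side. A minimal sketch using `jq` (the field names follow the
PolicyReport schema shown above):

```console
# Print only the failing results from the default namespace's reports
kubectl get policyreport -n default -o json \
  | jq '.items[].results[] | select(.result == "fail") | {policy, resources}'
```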

By using the audit scanner and analyzing the generated audit reports, you can
gain insights into the compliance of your resources with the deployed policies,
helping you ensure that your cluster remains secure and adheres to the defined
rules and validations.

---

## Policies

Policies are used by the audit scanner by default. Operators who want to skip
a policy's evaluation in the audit scanner should set the `backgroundAudit`
field in the policy spec to `false`. Furthermore, policies in Kubewarden now
support two optional annotations:
`io.kubewarden.policy.severity` and `io.kubewarden.policy.category`. These
annotations can be defined in the `metadata.yml` file associated with a policy.

- The `io.kubewarden.policy.severity` annotation allows you to specify the
severity level of the policy violation, such as "critical", "high", "medium",
or "low".
- The `io.kubewarden.policy.category` annotation allows you to categorize the
policy based on a specific domain or purpose, such as "security",
"compliance", or "performance".
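
For illustration, a `metadata.yml` carrying both annotations could look like
the following sketch; the policy title, rules, and annotation values are
hypothetical:

```yaml
rules:
- apiGroups: [""]
  apiVersions: ["v1"]
  resources: ["pods"]
  operations: ["CREATE"]
mutating: false
annotations:
  io.kubewarden.policy.title: example-policy   # hypothetical policy
  io.kubewarden.policy.severity: medium
  io.kubewarden.policy.category: security
```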

See an example of the `metadata.yml` file in one of the [template
projects](https://github.com/kubewarden/rust-policy-template/blob/main/metadata.yml).

## Permissions and ServiceAccounts

The audit scanner in Kubernetes requires specific RBAC configurations to
function effectively. These configurations grant necessary permissions for
accessing Kubernetes resources. During the installation process of the audit
scanner, the required `ServiceAccounts`, `ClusterRole`, and `ClusterRoleBindings` are
set up automatically.

To access most Kubernetes resources, the audit scanner utilizes the default
`view` `ClusterRole` provided by the API Server. This `ClusterRole` allows
read-only access to a wide range of Kubernetes resources within a namespace.
You can find more details about this role in the [Kubernetes
documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).

In addition to the `view` `ClusterRole`, the audit scanner also requires a custom
`ClusterRole`. This custom `ClusterRole` grants read access to Kubewarden resource
types and read-write access to the `PolicyReport` CRDs. These permissions enable
the scanner to fetch resources for conducting audit evaluations and create
policy reports based on the evaluation results.
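
As a sketch, the permissions granted by such a custom `ClusterRole` might look
as follows. The role name is illustrative and the rules are an assumption based
on the description above; the actual manifests ship with the Helm chart:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: audit-scanner-cluster-role   # illustrative name
rules:
# Read-write access to the report CRDs, to create and update reports
- apiGroups: ["wgpolicyk8s.io"]
  resources: ["policyreports", "clusterpolicyreports"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# Read access to Kubewarden resource types, to fetch the policies to evaluate
- apiGroups: ["policies.kubewarden.io"]
  resources: ["clusteradmissionpolicies", "admissionpolicies"]
  verbs: ["get", "list", "watch"]
```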






49 changes: 49 additions & 0 deletions docs/howtos/audit-scanner.md
@@ -0,0 +1,49 @@
---
sidebar_label: "Audit Scanner Installation"
title: "Kubewarden Audit Scanner Installation"
---

# Kubewarden Audit Scanner installation

Starting with version `v1.7.0`, Kubewarden introduces a new component called
the Audit Scanner. This scanner evaluates whether the resources running in the
cluster comply with the deployed policies. It is particularly useful when
policies are deployed after the resources: the Audit Scanner allows operators
to detect pre-existing resources that violate the validations performed by the
policies.

## Installation

To install and use the Kubewarden Audit Scanner, please follow these steps:

1. Install the `kubewarden-controller` Helm chart, ensuring you have version
`v1.7.0` or higher. Remember to enable the audit scanner component.

```console
helm install kubewarden-crds kubewarden/kubewarden-crds
helm install --set auditScanner.enable=true kubewarden-controller kubewarden/kubewarden-controller
```

For more information about installing Kubewarden, see the [Quick Start guide](../quick-start.md).

:::caution
The PolicyReport Custom Resource Definitions must be available to store the
policy report results. Therefore, if the `kubewarden-crds` chart is installed
with the `installPolicyReportCRDs` value set to `false`, the cluster operator
must install the CRDs manually. Note that the `kubewarden-crds` chart installs
the CRDs by default.

See more information about the CRDs at the [policy work group
repository](https://github.com/kubernetes-sigs/wg-policy-prototypes).
:::

By default, the Audit Scanner is implemented as a cron job that is triggered
every 60 minutes. You can adjust this and other audit scanner settings by
changing the chart values. Check the
[values.yaml](https://github.com/kubewarden/helm-charts/blob/main/charts/kubewarden-controller/values.yaml)
file for more options. Please note that the successful installation and
functioning of the Audit Scanner depend on the proper installation of the
Policy Report CRDs. Without the CRDs, the scanner is unable to store and
retrieve the audit results.
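
For example, the scan interval could be adjusted at install or upgrade time.
The value path below is an assumption about the chart layout; verify it against
the `values.yaml` of the chart version you deploy:

```console
# Run the audit scanner every 30 minutes instead of the default 60
# (auditScanner.cronJob.schedule is assumed; check your chart's values.yaml)
helm upgrade --set auditScanner.cronJob.schedule="*/30 * * * *" \
  kubewarden-controller kubewarden/kubewarden-controller
```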

See the [Audit Scanner explanation](../explanations/audit-scanner) for more information.
4 changes: 3 additions & 1 deletion sidebars.js
@@ -117,7 +117,8 @@ module.exports = {
'testing-policies/cluster-operators',
],
},
"explanations/context-aware-policies"
"explanations/context-aware-policies",
"explanations/audit-scanner"
],
collapsed: true,
},
@@ -146,6 +147,7 @@ module.exports = {
},
],
},
"howtos/audit-scanner"
],
collapsed: true,
},
