Merge pull request #264 from jhkrug/refresh/audit-scanner
Review and refresh of audit scanner overview documents.
jhkrug authored Oct 13, 2023
2 parents ed5c14b + 887be5c commit dc41df7
Showing 3 changed files with 124 additions and 138 deletions.
129 changes: 59 additions & 70 deletions docs/explanations/audit-scanner/audit-scanner.md
---
sidebar_label: "What's it?"
title: "Audit Scanner"
sidebar_label: "What is the Audit Scanner?"
title: "What is the Audit Scanner?"
description: An overview of the Kubewarden Audit Scanner.
keywords: [kubewarden, audit scanner, kubernetes]
---

# Audit Scanner

:::note

The Audit Scanner feature is available starting from the Kubewarden 1.7.0 release

:::

The `audit-scanner` component constantly checks resources in the cluster.
It flags the ones that don't adhere to the Kubewarden policies deployed in the cluster.

Policies evolve over time.
New ones are deployed, existing ones are updated.
Versions and configuration settings change.
This can lead to situations where resources already inside the cluster are no longer compliant.
The audit scanning feature provides Kubernetes administrators with a tool that constantly verifies the compliance state of their clusters.

To explain the use of the audit scanner in Kubewarden, consider the following scenario.

Assume Bob is deploying a WordPress Pod in the cluster.
Bob is new to Kubernetes, makes a mistake and deploys the Pod running as a privileged container.
At this point there's no policy preventing that, so the Pod is successfully created in the cluster.

Some days later, Alice, the Kubernetes administrator, enforces a Kubewarden policy that prohibits the creation of privileged containers.
The Pod deployed by Bob keeps running in the cluster as it already exists.

A report generated by the audit scanner lets Alice identify all the workloads that are violating her policies.
This includes the WordPress Pod created by Bob.

The audit scanner operates by:

- identifying all the resources to audit
- building a synthetic admission request from each resource's data
- sending each admission request to a policy server endpoint used only for audit requests

For the policy evaluating the request, there is no difference between a real and an audit request.
The data received is the same.
This audit endpoint of the policy server is instrumented to collect evaluation data, just like the endpoint that validates requests from the control plane.
So, users can use their monitoring tools to analyze audit scanner data as well.
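
For illustration, here is a minimal sketch of the kind of synthetic admission request the scanner can build for an existing Pod. The exact payload and the audit endpoint path are internal details of the audit scanner; the field values below are made up.

```yaml
# Illustrative only: a synthetic AdmissionReview built from an existing Pod.
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  uid: "3c1f3c5a-0000-0000-0000-000000000000" # made-up request ID
  operation: CREATE                           # the scanner simulates a CREATE event
  kind:
    group: ""
    version: v1
    kind: Pod
  resource:
    group: ""
    version: v1
    resource: pods
  namespace: default
  name: wordpress
  object:                                     # the resource as it exists in the cluster
    apiVersion: v1
    kind: Pod
    metadata:
      name: wordpress
      namespace: default
    spec:
      containers:
        - name: wordpress
          image: wordpress:6.2                # illustrative image tag
          securityContext:
            privileged: true                  # the setting a policy would flag
```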

## Enable audit scanner

You can enable the audit scanner starting from the Kubewarden 1.7.0 release.

Detailed installation instructions are in the
[audit scanner How-to](../howtos/audit-scanner).

## Policies

By default, the audit scanner evaluates every policy.
Operators that want to skip a policy evaluation in the audit scanner must set the `spec.backgroundAudit` field to `false` in the policy definition.

Also, policies in Kubewarden now support two optional annotations:

- The `io.kubewarden.policy.severity` annotation lets you specify the severity level of the policy violation, such as `critical`, `high`, `medium`, `low` or `info`.
- The `io.kubewarden.policy.category` annotation lets you categorize the policy based on a specific domain or purpose, such as `PSP`, `compliance`, or `performance`.

See the policy authors [documentation](../../writing-policies/index.md) for more information.
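
For example, here is a sketch of a policy definition that opts out of the audit scanner and uses the optional annotations. The policy name, module reference, and rules are illustrative, not a recommendation.

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: example-privileged-pods                  # illustrative name
  annotations:
    io.kubewarden.policy.severity: high          # optional severity annotation
    io.kubewarden.policy.category: PSP           # optional category annotation
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.2 # illustrative reference
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  mutating: false
  backgroundAudit: false                         # skip this policy during audit scans
```

Setting `spec.backgroundAudit` to `false` only affects the audit scanner; the policy is still enforced on regular admission requests.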

## Permissions and ServiceAccounts

The audit scanner in Kubernetes requires specific Role-Based Access Control (RBAC) configurations to be able to scan Kubernetes resources and save the results.
A default ServiceAccount with those permissions is created during the installation.
The user can create and configure their own ServiceAccount to fine-tune access to resources.

The default audit scanner `ServiceAccount` is bound to the `view` `ClusterRole` provided by Kubernetes.
This `ClusterRole` allows read-only access to a wide range of Kubernetes resources within a namespace.
You can find more details about this role in the [Kubernetes documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).

Also, the audit scanner is bound to a `ClusterRole` that grants read access to Kubewarden resource types and read-write access to the `PolicyReport` [CRDs](policy-reports.md).
These permissions let the scanner fetch resources for conducting audit evaluations and create policy reports based on the evaluation results.
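
As a sketch, a custom `ServiceAccount` bound to the built-in `view` `ClusterRole` could look like the following. The names and namespace are hypothetical, and pointing the audit scanner at the custom `ServiceAccount` is an installation-time setting not shown here.

```yaml
# Hypothetical custom ServiceAccount for the audit scanner,
# bound to the built-in read-only "view" ClusterRole.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: audit-scanner-custom          # hypothetical name
  namespace: kubewarden               # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: audit-scanner-custom-view     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                          # the built-in read-only ClusterRole
subjects:
  - kind: ServiceAccount
    name: audit-scanner-custom
    namespace: kubewarden
```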
40 changes: 20 additions & 20 deletions docs/explanations/audit-scanner/limitations.md
---
sidebar_label: "Limitations"
title: "Audit Scanner - Limitations"
description: The limitations of the audit scanner
keywords: [kubewarden, kubernetes, audit scanner]
---

## Supported event types

Policies can inspect `CREATE`, `UPDATE`, and `DELETE` events.

The audit scanner cannot simulate `UPDATE` events, as it doesn't know which part of the resource needs to be changed.

So, a policy concerned only with `UPDATE` events is ignored by the audit scanner.

:::note
The audit-scanner v1.7.0 release supports only `CREATE` events.
`DELETE` events will be handled in the near future.

:::

## Policies relying on user and user group information

Each Kubernetes admission request object has information about which user (or ServiceAccount) initiated the event,
and to which group they belong.

All the events simulated by the audit scanner originate from the same hard-coded user and group.
Because of that, policies that rely on these values to make their decisions will not produce meaningful results.

For these cases, the policy should be configured to be skipped from the audit checks.

## Policies relying on external data

Policies can request and use external data when performing an evaluation.
These policies can be evaluated by the audit checks,
but their outcomes can change over time depending on the external data.

## Usage of `*` by policies

The `apiGroups`, `apiVersions` and `resources` attributes in a policy's `rules` can use the wildcard `*`.
This wildcard symbol causes the policy to match all the values used in the field.
The audit scanner ignores policies that make use of the `*` symbol.
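
For instance, a policy registered with wildcard rules like the following sketch is skipped by the audit scanner. The policy name and module reference are illustrative.

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: example-wildcard-policy                 # illustrative name
spec:
  module: registry://ghcr.io/kubewarden/policies/example:v0.1.0 # illustrative reference
  rules:
    - apiGroups: ["*"]       # wildcard: matches every API group
      apiVersions: ["*"]     # wildcard: matches every API version
      resources: ["*"]       # wildcard: matches every resource
      operations: ["CREATE", "UPDATE"]
  mutating: false
```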
93 changes: 45 additions & 48 deletions docs/explanations/audit-scanner/policy-reports.md
---
sidebar_label: "Policy Reports"
title: "Audit Scanner - Policy Reports"
description: The policy reports that the Audit Scanner produces.
keywords: [kubewarden, kubernetes, audit scanner]
---

# Policy Reports

When using the Kubewarden Audit Scanner, the results of the policy scans are stored using the
[PolicyReport](https://htmlpreview.github.io/?https://github.com/kubernetes-sigs/wg-policy-prototypes/blob/045372e558b896695b2daae92e8c7a04d4d40282/policy-report/docs/index.html)
Custom Resource.

:::caution
The PolicyReport CRDs are under development in the `wg-policy` Kubernetes group.
Therefore, this documentation can become out of date if a new version of the CRDs is released.

Check the `wg-policy` group
[repository](https://github.com/kubernetes-sigs/wg-policy-prototypes)
for more information about the CRDs.

:::

These CRDs offer a structured way to store and manage the audit results.

Each namespace scanned by the audit scanner has a dedicated `PolicyReport` resource defined in it.

The results of Cluster wide resources are found in a `ClusterPolicyReport` object.
There will be only one `ClusterPolicyReport` per cluster.

The audit results generated by the scanner include:

- the policy that was evaluated
- the resource being scanned
- the result of the evaluation (pass, fail, or skip)
- a timestamp indicating when the evaluation took place.

You can also define severity and category annotations for your policies.

Operators can access the reports via ordinary `kubectl` commands.
They can also use the optional UI provided by the
[policy-reporter](https://kyverno.github.io/policy-reporter)
open source project for monitoring and observability of the PolicyReport CRDs.

## Policy Reporter UI

The Policy Reporter is shipped as a subchart of `kubewarden-controller`.
Refer to the [Audit Scanner Installation](../../howtos/audit-scanner) page for more information.

The Policy Reporter UI provides a dashboard showcasing all violations
from `PolicyReports` and the `ClusterPolicyReport`.
This is shown below.

![Policy Reporter dashboard example](/img/policy-reporter_dashboard.png)

As shown below, it also provides a tab for PolicyReports and a tab for ClusterPolicyReports, with expanded information.

![Policy Reporter PolicyReports example](/img/policy-reporter_policyreports.png)

Other features of Policy Reporter include forwarding of results to different clients
(like Grafana Loki, Elasticsearch, chat applications),
metrics endpoints, and so on.
See the [policy-reporter community docs](https://kyverno.github.io/policy-reporter)
for more information.

## Cluster-Wide Audit Results example

In the next example, the audit scanner has evaluated the `cap-testing-cap-policy` on many namespaces in the cluster.
The results indicate that all the namespaces passed the policy validation.
The `summary` section summarizes the audit results, showing there were no errors, failures, or warnings.

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
# ...
```
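
As a minimal illustrative sketch, such a `ClusterPolicyReport` can look like this; the resource, timestamp, and counts are made up for illustration.

```yaml
# Illustrative sketch of a ClusterPolicyReport; values are made up.
apiVersion: wgpolicyk8s.io/v1beta1
kind: ClusterPolicyReport
metadata:
  name: polr-clusterwide               # illustrative name
results:
  - policy: cap-testing-cap-policy
    result: pass
    resources:
      - apiVersion: v1
        kind: Namespace
        name: default                  # illustrative resource
    timestamp:
      seconds: 1697184000              # illustrative timestamp
      nanos: 0
summary:
  pass: 1
  fail: 0
  warn: 0
  error: 0
  skip: 0
```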
## Namespace-Specific Audit Results example
In this example, the audit scanner evaluated many policies on resources within the `default` namespace.
The results indicate that certain resources failed the validation for the `cap-no-privilege-escalation` policy.
Others passed the validation for the `cap-do-not-run-as-root` policy.
The `summary` section shows the number of failures and passes.

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
# ...
```
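
Similarly, a minimal illustrative sketch of a namespaced `PolicyReport`; the resource names and counts are made up for illustration.

```yaml
# Illustrative sketch of a namespaced PolicyReport; values are made up.
apiVersion: wgpolicyk8s.io/v1beta1
kind: PolicyReport
metadata:
  name: polr-ns-default                # illustrative name
  namespace: default
results:
  - policy: cap-no-privilege-escalation
    result: fail
    resources:
      - apiVersion: v1
        kind: Pod
        name: wordpress                # illustrative resource
        namespace: default
  - policy: cap-do-not-run-as-root
    result: pass
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx                    # illustrative resource
        namespace: default
summary:
  pass: 1
  fail: 1
  warn: 0
  error: 0
  skip: 0
```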
