PlacementRule ==> Placement #908

Open · wants to merge 1 commit into base: main
9 changes: 5 additions & 4 deletions Makefile
@@ -242,7 +242,6 @@ kind-delete-cluster:
.PHONY: install-crds
install-crds:
	@echo installing crds on hub
	kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/api/$(OCM_API_COMMIT)/cluster/v1/0000_00_clusters.open-cluster-management.io_managedclusters.crd.yaml --kubeconfig=$(PWD)/kubeconfig_$(HUB_CLUSTER_NAME)
	kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/api/$(OCM_API_COMMIT)/cluster/v1beta1/0000_02_clusters.open-cluster-management.io_placements.crd.yaml --kubeconfig=$(PWD)/kubeconfig_$(HUB_CLUSTER_NAME)
	kubectl apply -f https://raw.githubusercontent.com/open-cluster-management-io/api/$(OCM_API_COMMIT)/cluster/v1beta1/0000_03_clusters.open-cluster-management.io_placementdecisions.crd.yaml --kubeconfig=$(PWD)/kubeconfig_$(HUB_CLUSTER_NAME)
@@ -269,15 +268,17 @@ e2e-dependencies:
K8SCLIENT ?= oc
GINKGO = $(LOCAL_BIN)/ginkgo
IS_HOSTED ?= false
PATCH_DECISIONS ?= true
MANAGED_CLUSTER_NAMESPACE ?= $(MANAGED_CLUSTER_NAME)

.PHONY: e2e-test
e2e-test: e2e-dependencies
	$(GINKGO) -v $(TEST_ARGS) test/e2e -- -cluster_namespace=$(MANAGED_CLUSTER_NAMESPACE) -k8s_client=$(K8SCLIENT) -is_hosted=$(IS_HOSTED) -patch_decisions=$(PATCH_DECISIONS) -cluster_namespace_on_hub=$(CLUSTER_NAMESPACE_ON_HUB)

.PHONY: e2e-test-hosted
e2e-test-hosted: CLUSTER_NAMESPACE_ON_HUB=cluster2
e2e-test-hosted: IS_HOSTED=true
e2e-test-hosted: PATCH_DECISIONS=false
e2e-test-hosted: MANAGED_CLUSTER_NAMESPACE=cluster2-hosted
e2e-test-hosted: e2e-test
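For context, the `?=` defaults and target-specific variables in this hunk follow standard GNU make semantics. The mini-Makefile below is a hypothetical sketch of that mechanism, not the project's real Makefile:

```shell
# Sketch of the GNU make semantics assumed above: `?=` sets an
# overridable default, and a target-specific variable
# (PATCH_DECISIONS = false) also applies to the target's
# prerequisites, so e2e-test-hosted reuses the e2e-test recipe
# with hosted-mode settings.
tmpdir=$(mktemp -d)
printf '%s\n' \
  'PATCH_DECISIONS ?= true' \
  'e2e-test: ; @echo patch_decisions=$(PATCH_DECISIONS)' \
  'e2e-test-hosted: PATCH_DECISIONS = false' \
  'e2e-test-hosted: e2e-test' > "$tmpdir/Makefile"
out_default=$(make -s --no-print-directory -C "$tmpdir" e2e-test)
out_hosted=$(make -s --no-print-directory -C "$tmpdir" e2e-test-hosted)
echo "$out_default"   # patch_decisions=true
echo "$out_hosted"    # patch_decisions=false
```

With the real Makefile, invoking `PATCH_DECISIONS=false make e2e-test` should similarly override the default from the environment, since `?=` only assigns when the variable is not already set.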

103 changes: 61 additions & 42 deletions README.md
@@ -1,53 +1,60 @@
[comment]: # " Copyright Contributors to the Open Cluster Management project "

# Governance Policy Framework

[![GRC Integration Test](https://github.com/stolostron/governance-policy-framework/actions/workflows/integration.yml/badge.svg)](https://github.com/stolostron/governance-policy-framework/actions/workflows/integration.yml)

Open Cluster Management - Governance Policy Framework

The policy framework provides governance capabilities to gain visibility into, and drive remediation for, various security and configuration aspects to help meet enterprise standards.

## What it does

View the following functions of the policy framework:

- Distributes policies from the hub cluster to managed clusters.
- Collects policy execution results from managed clusters to the hub cluster.
- Supports multiple policy engines and policy languages.
- Provides an extensible mechanism to bring your own policy.

## Architecture

![architecture](images/policy-framework-architecture-diagram.png)

The governance policy framework consists of the following components:

- Governance policy framework: A framework to distribute various supported policies to managed clusters and collect results to be sent to the hub cluster.
  - [Policy propagator](https://github.com/stolostron/governance-policy-propagator)
  - [Governance policy framework addon](https://github.com/stolostron/governance-policy-framework-addon)
- Policy controllers: Policy engines that run on managed clusters to evaluate policy rules distributed by the policy framework and generate results.
  - [Configuration policy controller](https://github.com/stolostron/config-policy-controller)
    - [Usage examples](./doc/configuration-policy/README.md)
  - [Certificate policy controller](https://github.com/stolostron/cert-policy-controller)
  - Third-party
    - [Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
    - [Kyverno](https://github.com/kyverno/kyverno/)

## The Policy CRDs

The `Policy` Custom Resource Definition (CRD) is created for policy framework controllers to monitor. It acts as a vehicle to deliver policies to managed clusters and collect results to send to the hub cluster.

View the following example specification of a `Policy` object:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-pod
spec:
  remediationAction: inform # [inform/enforce] If set, it defines the remediationAction globally.
  disabled: false # [true/false] If true, the policy will not be distributed to the managed cluster.
  policy-templates:
    - objectDefinition: # Use `objectDefinition` to wrap the policy resource to be distributed to the managed cluster
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
@@ -58,57 +65,69 @@ spec:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Pod
                metadata:
                  name: sample-nginx-pod
                  namespace: default
                spec:
                  containers:
                    - image: nginx:1.7.9
                      name: nginx
                      ports:
                        - containerPort: 80
```

The `PlacementBinding` CRD binds a `Policy` to a `Placement`. Only a bound `Policy` is distributed to a managed cluster by the policy framework.

View the following example specification of a `PlacementBinding` object:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod
placementRef:
  name: placement-policy-pod
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-pod
    kind: Policy
    apiGroup: policy.open-cluster-management.io
```

The `Placement` CRD is used to determine the target clusters to distribute policies to.

View the following example specification of a `Placement` object:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-pod
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - { key: environment, operator: In, values: ["dev"] }
  tolerations:
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
```
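Which clusters a bound `Policy` actually reaches is recorded in the `PlacementDecision` resources that the Placement controller generates for each `Placement`, and which the policy framework reads when fanning out policies. A hypothetical decision for the placement above — the resource name and cluster entries are illustrative, since the controller fills them in at runtime:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: PlacementDecision
metadata:
  name: placement-policy-pod-decision-1
  labels:
    # ties the decision back to its Placement
    cluster.open-cluster-management.io/placement: placement-policy-pod
status:
  decisions:
    # illustrative entries; the controller selects the actual clusters
    - clusterName: cluster1
      reason: ""
```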

## How to install it

You can find installation instructions on the [Open Cluster Management](https://open-cluster-management.io/) website.

## More policies

You can find more policies or contribute to the open repository, [policy-collection](https://github.com/stolostron/policy-collection).

<!---
Date: 09/18/2024
-->
11 changes: 5 additions & 6 deletions build/clean-up-cluster.sh
@@ -31,7 +31,6 @@ function hub() {
echo "Hub: clean up"
oc delete policies.policy.open-cluster-management.io --all-namespaces --all --ignore-not-found
oc delete placementbindings.policy.open-cluster-management.io --all-namespaces --all --ignore-not-found
# Don't clean up in all namespaces because the global-set placement shouldn't be deleted
for ns in default policy-test e2e-rbac-test-1 e2e-rbac-test-2
do
@@ -50,8 +49,8 @@ function managed() {
oc delete secret -n default rsa-ca-sample-secret --ignore-not-found
oc delete clusterrolebinding -l e2e=true --ignore-not-found
oc delete subscriptions.operators.coreos.com container-security-operator -n openshift-operators --ignore-not-found
oc delete csv -n openshift-operators "$(oc get -n openshift-operators csv -o jsonpath='{.items[?(@.spec.displayName=="Quay Container Security")].metadata.name}')" --ignore-not-found || true # csv might not exist
oc delete csv -n openshift-operators "$(oc get -n openshift-operators csv -o jsonpath='{.items[?(@.spec.displayName=="Red Hat Quay Container Security Operator")].metadata.name}')" --ignore-not-found || true # csv might not exist
oc delete crd imagemanifestvulns.secscan.quay.redhat.com --ignore-not-found
oc delete operatorgroup awx-resource-operator-operatorgroup -n default --ignore-not-found
oc delete subscriptions.operators.coreos.com awx-resource-operator -n default --ignore-not-found
@@ -71,23 +70,23 @@ function managed() {
delete_all_and_wait pods openshift-gatekeeper-system 0
oc delete ns openshift-gatekeeper-system gatekeeper-system --ignore-not-found
oc delete subscriptions.operators.coreos.com gatekeeper-operator-product -n openshift-operators --ignore-not-found
oc delete csv -n openshift-operators "$(oc get -n openshift-operators csv -o jsonpath='{.items[?(@.spec.displayName=="Gatekeeper Operator")].metadata.name}')" --ignore-not-found || true # csv might not exist
oc delete ns openshift-gatekeeper-operator --ignore-not-found
oc delete crd gatekeepers.operator.gatekeeper.sh --ignore-not-found
oc delete validatingwebhookconfigurations.admissionregistration.k8s.io gatekeeper-validating-webhook-configuration --ignore-not-found
oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io gatekeeper-mutating-webhook-configuration --ignore-not-found
# Compliance Operator clean up
oc delete ScanSettingBinding -n openshift-compliance --all --ignore-not-found || true # ScanSettingBinding CRD might not exist
RESOURCES=(ComplianceSuite ComplianceCheckResult ComplianceScan)
for RESOURCE in "${RESOURCES[@]}"; do
delete_all_and_wait $RESOURCE openshift-compliance 0
done
# only three pods should be left
delete_all_and_wait pods openshift-compliance 3 "true"
delete_all_and_wait ProfileBundle openshift-compliance 0
oc delete subscriptions.operators.coreos.com compliance-operator -n openshift-compliance --ignore-not-found
oc delete operatorgroup compliance-operator -n openshift-compliance --ignore-not-found
oc delete csv -n openshift-compliance "$(oc get -n openshift-compliance csv -o jsonpath='{.items[?(@.spec.displayName=="Compliance Operator")].metadata.name}')" --ignore-not-found || true # csv might not exist
oc delete ns openshift-compliance --ignore-not-found
oc delete crd -l operators.coreos.com/compliance-operator.openshift-compliance --ignore-not-found
# Clean up events in cluster ns
13 changes: 7 additions & 6 deletions doc/configuration-policy/audit/audit-pod-kind-field-filter.yaml
@@ -37,17 +37,18 @@ metadata:
  name: binding-policy-pod-kind-field-filter
placementRef:
  name: placement-policy-pod-kind-field-filter
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-pod-kind-field-filter
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-pod-kind-field-filter
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []
19 changes: 13 additions & 6 deletions doc/configuration-policy/audit/audit-pod-kind.yaml
@@ -35,17 +35,24 @@ metadata:
  name: binding-policy-pod-kind
placementRef:
  name: placement-policy-pod-kind
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-pod-kind
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-pod-kind
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []
  tolerations:
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
19 changes: 13 additions & 6 deletions doc/configuration-policy/audit/audit-role-multiple-ns.yaml
@@ -45,17 +45,24 @@ metadata:
  name: binding-policy-role-audit-multiple-ns
placementRef:
  name: placement-policy-role-audit-multiple-ns
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-role-audit-multiple-ns
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role-audit-multiple-ns
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []
  tolerations:
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
19 changes: 13 additions & 6 deletions doc/configuration-policy/audit/audit-role-single-ns.yaml
@@ -39,17 +39,24 @@ metadata:
  name: binding-policy-role-audit-single-ns
placementRef:
  name: placement-policy-role-audit-single-ns
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-role-audit-single-ns
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role-audit-single-ns
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []
  tolerations:
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
19 changes: 13 additions & 6 deletions doc/configuration-policy/create/create-role-multiple-ns.yaml
@@ -50,17 +50,24 @@ metadata:
  name: binding-policy-role
placementRef:
  name: placement-policy-role
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
  - name: policy-role
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions: []
  tolerations:
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists