- Take me to the Lab
Solutions for Lab - OPA in Kubernetes:
- Which is not a function of kube-mgmt?
Reveal
Manage Kubernetes objects via OPA is not a function of kube-mgmt.
- What needs to be done to enable kube-mgmt to automatically identify policies defined in Kubernetes and load them into OPA?
- Create configmaps on Kubernetes with the label openpolicyagent.org/policy set to rego
- Create configmaps on Kubernetes with the label set to OPA
- Create secrets on Kubernetes with a name starting with opa-
- Create configmaps on Kubernetes with a name starting with opa-
Reveal
Creating configmaps on Kubernetes with the label openpolicyagent.org/policy set to rego is what enables kube-mgmt to automatically identify policies defined in Kubernetes and load them into OPA.
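For reference, here is a minimal sketch of what such a ConfigMap could look like; the name, namespace, and policy content below are placeholders rather than anything taken from the lab:

```yaml
# Minimal sketch: a ConfigMap that kube-mgmt would discover via its policy label.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-policy                 # placeholder name
  namespace: opa                       # adjust to wherever OPA/kube-mgmt runs in your cluster
  labels:
    openpolicyagent.org/policy: rego   # the label kube-mgmt looks for
data:
  example.rego: |
    package kubernetes.admission
    # rego rules go here
```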
- We have placed rego policies under /root/untrusted-registry.rego and /root/unique-host.rego. View the contents of these files and identify which Kubernetes resources will be validated by these rego policies.
Check input.request.kind.kind in the rego policy files.
- untrusted-registry.rego : pod ; unique-host.rego : pod
- untrusted-registry.rego : pod ; unique-host.rego : ingress
- untrusted-registry.rego : ingress ; unique-host.rego : ingress
- untrusted-registry.rego : ingress ; unique-host.rego : pod
Reveal
untrusted-registry.rego : pod ; unique-host.rego : ingress
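To make the hint concrete: with kube-mgmt, input is the AdmissionReview document the API server sends to OPA, so input.request.kind.kind is the kind of the resource being admitted. A trimmed, illustrative example (shown as YAML for readability; all values are placeholders):

```yaml
# Trimmed AdmissionReview sketch - input.request.kind.kind is highlighted below.
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  kind:
    group: ""
    version: v1
    kind: Pod          # <- this is what input.request.kind.kind evaluates to
  operation: CREATE
  namespace: default
  object:              # the resource being created, e.g. the Pod spec
    apiVersion: v1
    kind: Pod
    metadata:
      name: example
```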
- If we were to implement the policy under /root/untrusted-registry.rego and create a pod as defined in /root/test.yaml, which of the 2 containers will error out?
The untrusted-registry.rego policy denies pods with an image name that does not start with hooli.com/.
- both
- mysql-backend
- nginx-frontend
- none
Reveal
nginx-frontend
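The actual /root/test.yaml is not reproduced here, but a hypothetical pod like the sketch below illustrates the reasoning: the container whose image does not start with hooli.com/ is the one the policy rejects (image values are illustrative only):

```yaml
# Hypothetical manifest for illustration - not the real /root/test.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: mysql-backend
    image: hooli.com/mysql    # starts with hooli.com/ - allowed by the policy
  - name: nginx-frontend
    image: nginx              # does not start with hooli.com/ - denied by the policy
```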
- Create a configmap for OPA using the untrusted-registry.rego policy.
Use the below files:
- configmap file: /root/untrusted-registry.rego
- configmap name: untrusted-registry
Remember from CKA how to create a configmap from a file imperatively.
Reveal
kubectl create configmap untrusted-registry --from-file=/root/untrusted-registry.rego
- Create the pod defined under /root/test.yaml in the namespace dev. Fix the OPA validation issue while creating the pod.
NOTE: The pod is expected to be in a created state but not up and running.
Recall that the untrusted-registry policy denies creation of pods whose images are not from the registry named in the policy.
Try applying the manifest as-is and observe the error:
kubectl apply -n dev -f /root/test.yaml
Reveal
- Edit test.yaml and ensure all container images start with hooli.com/
- Apply the edited manifest:
kubectl apply -n dev -f /root/test.yaml
- As per the policy in /root/unique-host.rego, which ingress resources will be denied creation?
- multiple ingress resources with same namespace
- multiple ingress resources with same service
- multiple ingress resources with same host
- multiple ingress resources with same image name
Check the Ingress object being compared in the /root/unique-host.rego policy.
Reveal
The following two lines in the policy give away the answer
host := input.request.object.spec.rules[_].host
msg := sprintf("invalid ingress host %q (conflicts with %v/%v)", [host, other_ns, other_ingress])
The object being compared is the ingress host, and the policy prevents you from creating two ingresses that refer to the same host - which is a good thing, as duplicate hosts would confuse the ingress controller.
multiple ingress resources with same host
- Create a configmap named unique-host using the rego file /root/unique-host.rego for OPA.
Do the same as for Q5, but with the other rego file.
Reveal
kubectl create configmap unique-host --from-file=/root/unique-host.rego
- Create 2 ingress resources using the below files. Check whether you can create both resources.
/root/ingress-test-1.yaml
/root/ingress-test-2.yaml
Reveal
Create namespaces as needed.
kubectl apply -f /root/ingress-test-1.yaml
kubectl apply -f /root/ingress-test-2.yaml
ingress-test-2 fails with an error. If you examine both YAML files, you will see that they both refer to the same host initech.com, which is in violation of the policy.
Note also that if you created #2 first, it would be created and #1 would fail with the same error.
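The ingress-test files themselves are not reproduced here, but the conflicting field is spec.rules[].host. A hypothetical ingress that repeats a host already claimed by another ingress would look roughly like the sketch below and be denied by the unique-host policy (name, namespace, and backend are placeholders; only the host matters):

```yaml
# Hypothetical manifest for illustration - only spec.rules[].host is relevant to the policy.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test-2      # placeholder name
  namespace: qa             # placeholder namespace
spec:
  rules:
  - host: initech.com       # same host as an existing ingress - denied by unique-host.rego
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # placeholder backend service
            port:
              number: 80
```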