IMPORTANT: Network Policies do not work on Kubernetes on Docker Desktop
- kubernetes.io > Reference > Accessing the API > Controlling Access to the Kubernetes API
- kubernetes.io > Reference > Accessing the API > Authenticating
- kubernetes.io > Tasks > Configure Pods and Containers > Configure a Security Context for a Pod or Container
- kubernetes.io > Concepts > Services, Load Balancing, and Networking > Network Policies
A pod that runs with user ID 101:
As a one-line command:
kubectl run nginx --image=nginx --restart=Never --overrides='{"spec": {"securityContext": {"runAsUser": 101}}}'
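To confirm the override was applied, a quick check with `jsonpath` (field path from the Pod spec):
kubectl get pod nginx -o jsonpath='{.spec.securityContext.runAsUser}'
# expected output: 101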
Or do it manually:
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > nginx.yaml
Insert `Pod.spec.securityContext.runAsUser`:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsUser: 101
  containers:
  ...
To give it the `NET_ADMIN` and `SYS_TIME` capabilities, add them to the `securityContext` inside the container (`pod.spec.containers[0].securityContext.capabilities.add`):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsUser: 101
  containers:
  - name: nginx
    image: nginx
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]
...
If there is an error, check the events and logs:
kubectl get events
kubectl logs POD [ -c CONTAINER ]
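You can also describe the pod to see its status, last state, and recent events in one place:
kubectl describe pod POD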
For example, the lines:
securityContext:
  runAsUser: 1000
at the `Pod.spec` or container level may cause a `CrashLoopBackOff` error when executing `kubectl get pods`. Checking the events and the logs you'll get the event:
Warning BackOff pod/app2 Back-off restarting failed container
And the logs:
2019/12/03 21:09:56 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2 nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2019/12/03 21:09:56 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
All capabilities are here: https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h
- `NET_ADMIN` allows interface, routing, and other network configuration: https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h#L203
- `SYS_TIME` allows system clock configuration: https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h#L311
Use the `capsh` command to decode the kernel capabilities of a process:
grep Cap /proc/1/status
capsh --decode=00000000aa0435fb
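The same check works from inside a running container; `mypod` is a placeholder pod name here:
kubectl exec mypod -- grep Cap /proc/1/status
# then decode the CapEff (effective capabilities) value:
capsh --decode=<CapEff value>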
Get the Name, Short Name, and Kind of each resource with:
kubectl api-resources
- Create the Service Account. Name: `serviceaccounts`, Short Name: `sa`, Kind: `ServiceAccount`
apiVersion: v1
kind: ServiceAccount
metadata:
  name: SA_NAME
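The same Service Account can be created imperatively:
kubectl create serviceaccount SA_NAME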
- Create the Cluster Role. Name: `clusterroles`, Short Name: none, Kind: `ClusterRole`, API Group: `rbac.authorization.k8s.io`
To create a Cluster Role, copy from a similar one or from one with all the permissions:
kubectl get clusterroles admin -o yaml > myclusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: CR_NAME
rules:
- apiGroups:
  - "" # "" means the core API group
  resources:
  - secrets
  verbs:
  - get
  - list
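Or imperatively:
kubectl create clusterrole CR_NAME --verb=get,list --resource=secrets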
- Bind the role to the account. Name: `rolebindings`, Short Name: none, Kind: `RoleBinding`, API Group: `rbac.authorization.k8s.io`
To create it, use an existing Role Binding as a blueprint:
kubectl get rolebindings --all-namespaces
kubectl get rolebindings kube-proxy -n kube-system -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: RB_NAME
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: CR_NAME
subjects:
- kind: ServiceAccount
  name: SA_NAME
  namespace: default # required for ServiceAccount subjects
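Or imperatively; the namespace prefix (`default:` here) is the Service Account's namespace:
kubectl create rolebinding RB_NAME --clusterrole=CR_NAME --serviceaccount=default:SA_NAME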
- Assign the Service Account to the Pod or Deployment. Add the property `Pod.spec.serviceAccountName` with the Service Account name.
For a new pod, you can set the Service Account at creation time:
kubectl run nginx --image=nginx --restart=Never --serviceaccount=secret-access-sa
For an existing Deployment (also rc, daemonset, job, rs & statefulset) you can set the Service Account with:
kubectl set serviceaccount deployment nginx secret-access-sa
For an existing Pod, just delete and re-create it.
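To verify which Service Account the pod is using (a quick check):
kubectl get pod nginx -o jsonpath='{.spec.serviceAccountName}'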
List of network providers that support Network Policy:
kubernetes.io > Tasks > Administer a Cluster > Install a Network Policy Provider > Declare Network Policy > Before you begin
Network policy to deny all traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
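Verify the policy was created and which pods it selects:
kubectl describe networkpolicy deny-default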
Network policy to allow access to the pod only from pods labeled with `access: "true"`:
kubectl run nginx --image=nginx --replicas=2 --port=80 --expose
kubectl get svc nginx -o yaml # get the pod selector label: 'run: nginx' or use:
kubectl get po --show-labels | grep nginx # check with:
kubectl get po -l run=nginx
Network Policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access
spec:
  podSelector:
    matchLabels:
      run: nginx # label of the pod the rule applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
Check:
# Access denied
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80
# Access granted
kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=true -- wget -O- http://nginx:80
Use `curl` from your host to check ingress access to the service/pod from the outside world:
curl http://${ClusterIP}:${ServicePort}
Use `wget` from your host or from a pod in your cluster (temporary or existing) to check ingress access to the service/pod from inside the cluster:
# Test from a new testing Pod
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://<Service Name | Pod IP>:${ServicePort}
# Test from an existing Pod with `wget`
kubectl exec mypod -it -- wget -O- http://<Service Name | Pod IP>:${ServicePort}
Use `nc -zv` from the pod to check egress access to other services/pods and to the outside world:
kubectl exec -it app2 -- nc -vz 127.0.0.1 ${ServicePort}
kubectl exec -it app2 -- nc -vz <Pod IP> ${ServicePort}
kubectl exec -it app2 -- nc -vz www.google.com 80
In the previous `kubectl exec` commands, use `-c CONTAINER` if there is more than one container running in the Pod.
Example of an ABAC policy that gives user bob read-only access to pods in the foobar namespace:
{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "bob",
    "namespace": "foobar",
    "resource": "pods",
    "readonly": true
  }
}
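ABAC has to be enabled on the API server, with one JSON policy object per line in the policy file (the file path here is a placeholder):
kube-apiserver --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl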
More examples in:
kubernetes.io > Reference > Accessing the API > Using ABAC Authorization
Example of a Pod that requires its container to run as a non-root user:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - image: nginx
    name: nginx
Learn more about Security Context.
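Note that the stock nginx image runs as root, so with `runAsNonRoot: true` the kubelet should refuse to start the container (a `CreateContainerConfigError`, assuming default behavior); check with:
kubectl describe pod nginx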
Example of a restricted PodSecurityPolicy that forces pods to run as a non-root user:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
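PodSecurityPolicy is only enforced if its admission controller is enabled on the API server:
kube-apiserver --enable-admission-plugins=PodSecurityPolicy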
More examples of PodSecurityPolicy
Example of a Network Policy with both ingress and egress rules:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-egress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Example of a complex match expression:
podSelector:
  matchExpressions:
  - {key: inns, operator: In, values: ["yes"]}
More examples of Network Policies