$ minikube start
If you're following a tutorial on Minikube, instead of pushing your Docker image to a registry, you can simply build the image using the same Docker host as the Minikube VM, so that the images are automatically present. To do so, make sure you are using the Minikube Docker daemon:
$ eval $(minikube docker-env)
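You can then build images directly into the Minikube VM's Docker daemon, e.g. (the image name here is illustrative):
$ docker build -t hello-node:v1 .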
To detach:
$ eval $(minikube docker-env -u)
$ minikube stop
$ minikube delete
$ kubectl version
$ kubectl get componentstatuses
$ kubectl get nodes
$ kubectl describe nodes <NODE NAME>
Kubernetes proxy:
$ kubectl get daemonSets --namespace=kube-system kube-proxy
DNS:
$ kubectl get deployments --namespace=kube-system kube-dns
Load-balancing service for DNS:
$ kubectl get services --namespace=kube-system kube-dns
Kubernetes UI:
$ kubectl get deployments --namespace=kube-system kubernetes-dashboard
Load-balancing service for the dashboard:
$ kubectl get services --namespace=kube-system kubernetes-dashboard
$ kubectl proxy
Context can be used to set the default namespace, authentication, and other settings used by kubectl.
Create a context with a different default namespace for your kubectl commands using:
$ kubectl config set-context my-context --namespace=mystuff
then activate it with:
$ kubectl config use-context my-context
General syntax:
$ kubectl get [output-options] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags] [options]
e.g.:
$ kubectl get pods [<pod-name>]
Output options can be specified with -o or --output. Syntax:
[(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...]
Option | Description |
---|---|
-o wide | More detailed output |
-o json | JSON output |
-o yaml | YAML output |
-o jsonpath | Get a specific field, e.g. -o jsonpath --template={.status.podIP} |
-o go-template | Get a specific field via a Go template, e.g. -o go-template --template={{.status.podIP}} |
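For example, to print just a pod's IP address (the pod name is illustrative):
$ kubectl get pods hello-node -o jsonpath --template={.status.podIP}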
More detailed info:
$ kubectl describe <resource-type> [<object-name>]
Valid resource types include:
- all
- certificatesigningrequests (aka 'csr')
- clusterrolebindings
- clusterroles
- componentstatuses (aka 'cs')
- configmaps (aka 'cm')
- controllerrevisions
- cronjobs
- customresourcedefinition (aka 'crd')
- daemonsets (aka 'ds')
- deployments (aka 'deploy')
- endpoints (aka 'ep')
- events (aka 'ev')
- horizontalpodautoscalers (aka 'hpa')
- ingresses (aka 'ing')
- jobs
- limitranges (aka 'limits')
- namespaces (aka 'ns')
- networkpolicies (aka 'netpol')
- nodes (aka 'no')
- persistentvolumeclaims (aka 'pvc')
- persistentvolumes (aka 'pv')
- poddisruptionbudgets (aka 'pdb')
- podpreset
- pods (aka 'po')
- podsecuritypolicies (aka 'psp')
- podtemplates
- replicasets (aka 'rs')
- replicationcontrollers (aka 'rc')
- resourcequotas (aka 'quota')
- rolebindings
- roles
- secrets
- serviceaccounts (aka 'sa')
- services (aka 'svc')
- statefulsets (aka 'sts')
- storageclasses (aka 'sc')
To create an object from the object manifest in obj.yaml:
$ kubectl apply -f obj.yaml
Editing Kubernetes object interactively:
$ kubectl edit <resource-type> <object-name>
Deleting:
$ kubectl delete -f obj.yaml
or
$ kubectl delete <resource-type> <object-name>
Add label:
$ kubectl label pods bar color=red
Use --overwrite to change an existing label.
Delete the label color by appending a dash to the key:
$ kubectl label pods bar "color-"
$ kubectl logs <pod-name>
Use -f to continuously stream the logs back to the terminal without exiting.
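For example:
$ kubectl logs -f <pod-name>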
$ kubectl exec -it <pod-name> -- bash
$ kubectl cp <pod-name>:/path/to/remote/file /path/to/local/file
$ kubectl run hello-node --image=gcr.io/<username>/hello-node:v1
$ kubectl get pods
$ kubectl describe pods hello-node
$ kubectl delete deployments/hello-node
Create hello-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: hello-node
  labels:
    app: hello-node
spec:
  containers:
  - image: docker.io/bennylp/hello-node:v1
    name: hello-node
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
$ kubectl apply -f hello-pod.yaml
By name:
$ kubectl delete pods/hello-node
or using the manifest:
$ kubectl delete -f hello-pod.yaml
$ kubectl port-forward hello-node 5555:5000
You can then access the pod's port 5000 from http://localhost:5555.
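For example (assuming the app serves plain HTTP):
$ curl http://localhost:5555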
Liveness probe is a health check to see if the container is alive. If the liveness probe fails, the container will be restarted.
Say we create hello-pod-health.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - image: docker.io/bennylp/hello-node:v1
    name: hello-node
    livenessProbe:
      httpGet:
        path: /healthy
        port: 5000
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
Create the pod with:
$ kubectl apply -f hello-pod-health.yaml
Note: you need to handle the /healthy HTTP URI path in the application.
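A minimal sketch of such a handler, assuming the app is a Python/Flask service listening on port 5000 (the real hello-node app may differ):
# Hypothetical handler sketch; requires Flask (pip install flask).
from flask import Flask

app = Flask(__name__)

@app.route("/healthy")
def healthy():
    # A 2xx/3xx response means the liveness probe passes;
    # anything else counts as a probe failure.
    return "OK", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)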
Readiness probe indicates that the container is ready to serve requests. If it fails, the load balancer will not send new requests to the container.
Same syntax as the liveness probe, but the name is readinessProbe instead of livenessProbe.
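For example, in the container's spec (the /ready path is illustrative; use whatever path your app exposes):
readinessProbe:
  httpGet:
    path: /ready
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10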
Resource requests are added in the container's spec in the YAML:
...
spec:
  containers:
  - image: ...
    name: ...
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
...
To also cap resource usage, add limits:
...
spec:
  containers:
  - image: ...
    name: ...
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "1000m"
        memory: "256Mi"
...
spec:
  volumes:
  - name: "my-data"
    hostPath:
      path: "/var/lib/hello-node"
  containers:
  - image: ..
    name: ..
    volumeMounts:
    - mountPath: "/data"
      name: "my-data"
Both labels and annotations are key/value pairs that can be attached to objects. Labels are identifying information, while annotations are not.
Label names can be prefixed by a DNS subdomain, e.g. acme.com/app-version.
Applying when creating deployment (note: this label is applied to the Deployment object and not the Pods):
$ kubectl run alpaca-test ... --labels="ver=1,color=red,env=prod"
Adding to already running object:
$ kubectl label pods bar color=red
In object definition:
...
metadata:
  labels:
    app: wordpress
...
$ kubectl get pods --show-labels
$ kubectl get deployments -L color
Query if a label is set at all:
$ kubectl get deployments --selector="canary"
Query for a particular value:
$ kubectl get pods --selector="ver=2"
Use comma to separate labels:
$ kubectl get pods --selector="app=bandicoot,ver=2"
Query for multiple values:
$ kubectl get pods --selector="app in (alpaca,bandicoot)"
Operators:
Operator | Description |
---|---|
key = value | Equality |
key != value | Inequality |
key in (value1, value2) | Having any of these values |
key notin (value1, value2) | Not having any of these values |
key | This key is set |
!key | This key is not set |
In object definition:
...
metadata:
  annotations:
    example.com/icon-url: "https://example.com/icon.png"
...
$ kubectl expose deployment hello-node
Or define the service declaratively:
kind: Service
apiVersion: v1
metadata:
  name: hello-svc
spec:
  type: NodePort
  selector:
    app: hello-node
  ports:
  - protocol: TCP
    port: 5555
    targetPort: 5000
When defined without a selector, a service can be used to target other kinds of backends (such as database servers); see the sketch below.
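A minimal sketch (the name and address are illustrative): a selector-less Service plus a manually managed Endpoints object with the same name, pointing at an external database:
kind: Service
apiVersion: v1
metadata:
  name: external-db
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  # must match the Service name
  name: external-db
subsets:
- addresses:
  - ip: 10.0.0.42
  ports:
  - port: 5432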
Type | Description |
---|---|
ClusterIP | (default) Exposes the service on a cluster-internal IP, making it reachable only from within the cluster. |
NodePort | Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is automatically created. You can contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>. |
LoadBalancer | Builds on NodePort and creates an external load balancer (if supported by the current cloud) which routes to the ClusterIP. |
ExternalName | Maps the service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up. |
$ kubectl describe service hello-svc
Because the cluster IP address of a service is virtual, it is stable, and the cluster's built-in DNS service (kube-dns) maps the service name to it. For example, the service above can be addressed as hello-svc.default.svc.cluster.local, where:
- hello-svc: the service name
- default: the namespace
- svc: indicates that this is a service
- cluster.local: the default base domain
An ExternalName service maps the service to an external DNS name via a CNAME record:
kind: Service
apiVersion: v1
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: "database.company.com"
The docs recommend creating a Deployment instead of a ReplicaSet in most cases.
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  # Unique key of the ReplicaSet instance
  name: replicaset-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      # Run the nginx image
      - name: nginx
        image: nginx:1.10
The template field is the specification for the pods managed by this ReplicaSet; its labels are used as the selector.
To create: use kubectl apply.
To inspect: use kubectl describe.
To find which ReplicaSet created a pod:
$ kubectl get pods <pod-name> -o yaml
Check the kubernetes.io/created-by annotation.
To find the pods matching a ReplicaSet, use the --selector or -l flag:
$ kubectl get pods -l app=nginx
$ kubectl scale rs replicaset-example --replicas=4
Change the replicas field in the manifest and do kubectl apply.
For example based on CPU usage:
$ kubectl autoscale rs replicaset-example --min=2 --max=5 --cpu-percent=80
$ kubectl delete rs replicaset-example
Add --cascade=false to prevent pod deletion.
DaemonSet is used to specify pods to be run on each node.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  # Unique key of the DaemonSet instance
  name: daemonset-example
spec:
  template:
    metadata:
      labels:
        app: daemonset-example
    spec:
      containers:
      # This container is run once on each Node in the cluster
      - name: daemonset-example
        image: ubuntu:trusty
        command:
        - /bin/sh
        args:
        - -c
        # This script is run through `sh -c <script>`
        - >-
          while [ true ]; do
            echo "DaemonSet running on $(hostname)" ;
            sleep 10 ;
          done
Use kubectl apply.
Set the appropriate label on the nodes and use spec.template.spec.nodeSelector to select them:
spec:
  template:
    metadata:
      labels:
        app: nginx
        ssd: "true"
    spec:
      nodeSelector:
        ssd: "true"
The manual way is to update the DaemonSet, then delete its pods one by one so they are recreated with the new configuration:
PODS=$(kubectl get pods -o jsonpath --template='{.items[*].metadata.name}')
for x in $PODS; do
  kubectl delete pods ${x}
  sleep 60
done
Alternatively, configure the update strategy by setting the spec.updateStrategy.type field to RollingUpdate. With this, any change to the spec.template field (or subfields) in the DaemonSet will initiate a rolling update.
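For example, in the DaemonSet manifest:
spec:
  updateStrategy:
    type: RollingUpdate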
$ kubectl delete -f daemonset-example.yaml
Add --cascade=false to prevent pod deletion.
Jobs are short-lived pods that execute some task.
Running a one-shot job interactively (the -i option):
$ kubectl run -i example-job --image=.. --restart=OnFailure -- arg1 arg2 ..
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  # Run up to 5 pods at a time...
  parallelism: 5
  # ...until 10 pods have completed successfully
  completions: 10
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure
$ kubectl describe jobs example-job
$ kubectl get pod -l job-name=example-job -a
$ kubectl delete jobs example-job
Suppose we have a config file my-config.txt with the following content:
# This is a sample config file that I might use to configure an application
parameter1 = value1
parameter2 = value2
Here's how to create a ConfigMap:
$ kubectl create configmap my-config \
--from-file=my-config.txt \
--from-literal=extra-param=extra-value \
--from-literal=another-param=another-value
$ kubectl get configmaps my-config -o yaml
apiVersion: v1
data:
  another-param: another-value
  extra-param: extra-value
  my-config.txt: |
    # This is a sample config file that I might use to configure an application
    parameter1 = value1
    parameter2 = value2
kind: ConfigMap
metadata:
  creationTimestamp: 2018-06-03T01:13:31Z
  name: my-config
  namespace: default
  resourceVersion: "211863"
  selfLink: /api/v1/namespaces/default/configmaps/my-config
  uid: 5458f599-66cb-11e8-b3a2-08002738b79a
$ kubectl get configmaps
Three ways:
- mount as a file system
- environment variables
- command-line arguments
All three are shown in the example below (the command-line argument references the EXTRA_PARAM environment variable):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: test-container
    image: gcr.io/kuar-demo/kuard-amd64:1
    imagePullPolicy: Always
    command:
    - "/kuard"
    - "$(EXTRA_PARAM)"
    env:
    - name: ANOTHER_PARAM
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: another-param
    - name: EXTRA_PARAM
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: extra-param
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: my-config
  restartPolicy: Never
If you have the YAML:
$ kubectl replace -f <filename>
If previously applied using kubectl apply:
$ kubectl apply -f <filename>
$ kubectl edit configmap my-config
Putting example.crt and example.key in the secret:
$ kubectl create secret generic example-secret --from-file=example.crt --from-file=example.key
To update an existing secret, regenerate the manifest and replace it:
$ kubectl create secret generic example-secret \
--from-file=example.crt --from-file=example.key \
--dry-run -o yaml | \
kubectl replace -f -
$ kubectl get secrets
$ kubectl describe secrets example-secret
Name: example-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
example.crt: 1679 bytes
example.key: 1050 bytes
Mount as volume:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-pod
    image: ...
    imagePullPolicy: Always
    volumeMounts:
    - name: tls-certs
      mountPath: "/tls"
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: example-secret
$ kubectl create secret docker-registry my-docker-credential \
--docker-username=<username> \
--docker-password=<password> \
--docker-email=<email-address>
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-pod
    image: docker...
    imagePullPolicy: Always
  imagePullSecrets:
  - name: my-docker-credential
$ kubectl run nginx --image=nginx:1.7.12
$ kubectl get deployments nginx
Getting the ReplicaSet that the deployment manages:
$ kubectl get replicasets --selector=run=nginx
$ kubectl scale deployments nginx --replicas=2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  # Keep record of 10 revisions for rollback
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: "20%"
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
      annotations:
        kubernetes.io/change-cause: "Update nginx to 1.10"
    spec:
      containers:
      - name: nginx
        # Run this image
        image: nginx:1.10
Use kubectl describe.
Change the YAML and use kubectl apply -f <filename>.
You should also update the kubernetes.io/change-cause annotation to record the reason for the update.
$ kubectl rollout status deployments nginx
You can pause the rollout if something weird happens:
$ kubectl rollout pause deployments nginx
And to resume it:
$ kubectl rollout resume deployments nginx
$ kubectl rollout history deployment nginx
Detailed info for a particular revision:
$ kubectl rollout history deployment nginx --revision=2
$ kubectl rollout undo deployments nginx
But this is probably a bad idea; it's better to change the YAML and apply it.
To tune a rollout, use:
- maxUnavailable: how many pods may be unavailable during the rollout
- minReadySeconds: how long a newly created pod must be ready before it counts as available
- progressDeadlineSeconds: how long to wait for progress before the rollout is marked as failed
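A sketch of where these fields live in the Deployment manifest (the values are illustrative):
spec:
  minReadySeconds: 60
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1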