certified kubernetes administrator:
core concepts:
kubernetes cluster architecture:
control plane - manager
- api server
  communication hub for all cluster components.
- scheduler
  assigns an app and the pods within an app to a worker node
- controller manager
  maintains the cluster's desired state by running the controllers
- etcd
  data store storing the cluster configuration
worker node
- kubelet
  runs and manages containers - talks to the api server and to the container runtime.
- kube-proxy
  load balances traffic between app components
- container runtime
  docker, rkt, containerd, etc.
kubernetes has a declarative intent.
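as a small sketch of what "declarative" means (the deployment name, file name, and replica count below are just illustrative), you describe the desired state in a manifest and let the controllers reconcile the cluster toward it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
then apply it:
kubectl apply -f nginx-deployment.yaml
changing replicas in the file and re-applying is enough; the controller manager reconciles the actual state to match.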
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
$ kc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 7m12s 10.200.0.2 chaitanyah3684c.mylabserver.com <none> <none>
curlpod 1/1 Running 0 4m49s 10.200.0.10 chaitanyah3683c.mylabserver.com <none> <none>
nginx 1/1 Running 0 15s 10.200.0.8 chaitanyah3683c.mylabserver.com <none> <none>
cloud_user@chaitanyah3685c:~/kubernetes-code$ kc exec -it curlpod -- curl --head 10.200.0.8
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 22 Feb 2020 02:27:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
Connection: keep-alive
ETag: "5e26fe48-264"
Accept-Ranges: bytes
api primitives:
cloud_user@chaitanyah3681c:~$ kubectl get componentstatus
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
kubectl get deployments nginx-deployment -o wide
kubectl get deployments nginx-deployment -o yaml
$ kc get deploy
$ kc get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 16m
curlpod 1/1 Running 0 13m
nginx-deployment-5f7df8d587-h4dj4 1/1 Running 0 2m23s
cloud_user@chaitanyah3685c:~/kubernetes-code$ kc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 9d
nginx-service ClusterIP 10.32.0.250 <none> 80/TCP 30s
cloud_user@chaitanyah3685c:~/kubernetes-code$
cloud_user@chaitanyah3685c:~/kubernetes-code$
cloud_user@chaitanyah3685c:~/kubernetes-code$ kc exec -it curlpod -- curl --head 10.32.0.250
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 22 Feb 2020 02:36:23 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
Connection: keep-alive
ETag: "5e26fe48-264"
Accept-Ranges: bytes
kubectl get pods --show-labels
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 7m51s app=nginx,pod-template-hash=5c689d88bb
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 7m51s app=nginx,pod-template-hash=5c689d88bb
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl label deployments nginx-deployment env=prod
deployment.extensions/nginx-deployment labeled
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 2 2 2 2 9m5s
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get deployments --show-labels
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
nginx-deployment 2 2 2 2 9m10s env=prod
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods -L env
NAME READY STATUS RESTARTS AGE ENV
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 9m21s
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 9m21s
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl annotate deployments nginx-deployment mycompany.com/tempannotation="chad"
deployment.extensions/nginx-deployment annotated
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods --field-selector=status.phase=Running
NAME READY STATUS RESTARTS AGE
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 10m
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 10m
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods --field-selector=status.phase=Running,metadata.namespace=default
NAME READY STATUS RESTARTS AGE
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 10m
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 10m
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods --field-selector=status.phase==Running,metadata.namespace==default
NAME READY STATUS RESTARTS AGE
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 11m
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 11m
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods --field-selector=status.phase==Running,metadata.namespace!=default
No resources found.
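label selectors and field selectors can also be combined in a single query (a sketch reusing the app=nginx label from the deployment above):
kubectl get pods -l app=nginx --field-selector=status.phase=Running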
kubernetes services and network primitives:
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: nginx
$ kc explain svc.spec.type
KIND: Service
VERSION: v1
FIELD: type <string>
DESCRIPTION:
type determines how the Service is exposed. Defaults to ClusterIP. Valid
options are ExternalName, ClusterIP, NodePort, and LoadBalancer.
"ExternalName" maps to the specified externalName. "ClusterIP" allocates a
cluster-internal IP address for load-balancing to endpoints. Endpoints are
determined by the selector or if that is not specified, by manual
construction of an Endpoints object. If clusterIP is "None", no virtual IP
is allocated and the endpoints are published as a set of endpoints rather
than a stable IP. "NodePort" builds on ClusterIP and allocates a port on
every node which routes to the clusterIP. "LoadBalancer" builds on NodePort
and creates an external load-balancer (if supported in the current cloud)
which routes to the clusterIP. More info:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
cloud_user@chaitanyah3685c:~/kubernetes-code$ kc get ep
NAME ENDPOINTS AGE
kubernetes 172.31.118.87:6443,172.31.119.25:6443 9d
nginx-service 10.200.0.8:80 5m57s
To create the busybox pod to run commands from:
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl
    args:
    - sleep
    - "1000"
EOF
$ kubectl run busybox --generator=run-pod/v1 --image=radial/busyboxplus:curl -- sleep 1000
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
busybox 1/1 Running 0 75s 10.244.2.21 chaitanyah3683c.mylabserver.com <none>
nginx-deployment-5c689d88bb-2ddg5 1/1 Running 0 22m 10.244.2.20 chaitanyah3683c.mylabserver.com <none>
nginx-deployment-5c689d88bb-m4gnw 1/1 Running 0 22m 10.244.1.20 chaitanyah3682c.mylabserver.com <none>
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24h
nginx-nodeport NodePort 10.111.199.152 <none> 80:30080/TCP 6m22s
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl exec busybox -- curl 10.111.199.152:80
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 158k 0 --:--:-- --:--:-- --:--:-- 199k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
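since nginx-nodeport is a NodePort service, it is also reachable from outside the cluster on any node's IP at the node port (the node address below is a placeholder):
curl --head http://[node-ip]:30080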
install, config and validate:
install master and nodes:
Get the Docker gpg key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
Get the Kubernetes gpg key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes repository:
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Update your packages:
sudo apt-get update
Install Docker, kubelet, kubeadm, and kubectl:
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.16.0-00 kubeadm=1.16.0-00 kubectl=1.16.0-00
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.19.0-00 kubeadm=1.19.0-00 kubectl=1.19.0-00 --allow-unauthenticated
20.04 focal:
# cat /etc/issue
Ubuntu 20.04.1 LTS \n \l
sudo apt-get install -y docker-ce=5:20.10.3~3-0~ubuntu-focal kubelet=1.20.2-00 kubeadm=1.20.2-00 kubectl=1.20.2-00 --allow-unauthenticated
Hold them at the current version:
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
Add the iptables rule to sysctl.conf:
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
Enable iptables immediately:
sudo sysctl -p
Initialize the cluster (run only on the master):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Set up local kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
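a quick sanity check that the copied kubeconfig works (run on the master after kubeadm init):
kubectl cluster-info
kubectl get nodes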
** new for 1.17.8-00: the calico cni is used here (the cluster is upgraded to 1.18.12-00 at the end).
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Apply Flannel CNI network overlay:
**different CNIs require different pod CIDRs to be used with kubeadm init --pod-network-cidr.
also, kubeadm automatically allocates a non-overlapping CIDR to each of the worker nodes in the cluster.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
**when creating a cluster using kubeadm,
all the configuration is stored as ConfigMaps in the kube-system namespace, like so:
$ kubectl get cm --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 29m
kube-system coredns 1 29m
kube-system extension-apiserver-authentication 6 29m
kube-system kube-flannel-cfg 2 15m
kube-system kube-proxy 2 29m
kube-system kubeadm-config 2 29m
kube-system kubelet-config-1.13 1 29m
**how to create and use new tokens to join nodes to the cluster:
$ kubeadm token create --print-join-command
kubeadm join 172.31.101.11:6443 --token qrqkkt.v6r0bx3oly6d8ngh --discovery-token-ca-cert-hash sha256:b82f56a3c53c7e88462c76d1b1b7f835ff7752b43c888aca04d1ca7428be6804
cloud_user@chaitanyah3681c:~$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
i9fipl.h3wi0d046y0gx0cp 23h 2020-01-02T00:05:00Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
qrqkkt.v6r0bx3oly6d8ngh 23h 2020-01-02T00:38:23Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
cloud_user@chaitanyah3681c:~$
**kubeadm join 172.31.101.11:6443 --token qrqkkt.v6r0bx3oly6d8ngh --discovery-token-ca-cert-hash sha256:b82f56a3c53c7e88462c76d1b1b7f835ff7752b43c888aca04d1ca7428be6804
this command to join a worker node to the cluster needs to be run as root.
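if you have a token but not the CA cert hash, it can be recomputed on the master from the cluster CA (the standard openssl pipeline from the kubeadm docs):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'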
Join the worker nodes to the cluster:
kubeadm join [your unique string from the kubeadm init command]
Verify the worker nodes have joined the cluster successfully:
kubectl get nodes
Compare this result of the kubectl get nodes command:
NAME STATUS ROLES AGE VERSION
chadcrowell1c.mylabserver.com Ready master 4m18s v1.13.5
chadcrowell2c.mylabserver.com Ready none 82s v1.13.5
chadcrowell3c.mylabserver.com Ready none 69s v1.13.5
**everything kubeadm init does:
The "init" command executes the following phases:
```
preflight Run master pre-flight checks
kubelet-start Writes kubelet settings and (re)starts the kubelet
certs Certificate generation
/etcd-ca Generates the self-signed CA to provision identities for etcd
/apiserver-etcd-client Generates the client apiserver uses to access etcd
/etcd-healthcheck-client Generates the client certificate for liveness probes to healtcheck etcd
/etcd-server Generates the certificate for serving etcd
/etcd-peer Generates the credentials for etcd nodes to communicate with each other
/front-proxy-ca Generates the self-signed CA to provision identities for front proxy
/front-proxy-client Generates the client for the front proxy
/ca Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
/apiserver Generates the certificate for serving the Kubernetes API
/apiserver-kubelet-client Generates the Client certificate for the API server to connect to kubelet
/sa Generates a private key for signing service account tokens along with its public key
kubeconfig Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
/admin Generates a kubeconfig file for the admin to use and for kubeadm itself
/kubelet Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
/controller-manager Generates a kubeconfig file for the controller manager to use
/scheduler Generates a kubeconfig file for the scheduler to use
control-plane Generates all static Pod manifest files necessary to establish the control plane
/apiserver Generates the kube-apiserver static Pod manifest
/controller-manager Generates the kube-controller-manager static Pod manifest
/scheduler Generates the kube-scheduler static Pod manifest
etcd Generates static Pod manifest file for local etcd.
/local Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config Uploads the kubeadm and kubelet configuration to a ConfigMap
/kubeadm Uploads the kubeadm ClusterConfiguration to a ConfigMap
/kubelet Uploads the kubelet component config to a ConfigMap
mark-control-plane Mark a node as a control-plane
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
addon Installs required addons for passing Conformance tests
/coredns Installs the CoreDNS addon to a Kubernetes cluster
/kube-proxy Installs the kube-proxy addon to a Kubernetes cluster
```
high availability and fault tolerance:
cloud_user@chaitanyah3681c:~/cert-kube$ kubectl get endpoints kube-scheduler -n kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"chaitanyah3681c.mylabserver.com_a219b562-6f65-11e9-9969-0ae96c07b466","leaseDurationSeconds":15,"acquireTime":"2019-05-05T18:43:58Z","renewTime":"2019-05-05T20:20:43Z","leaderTransitions":1}'
  creationTimestamp: 2019-05-04T19:28:45Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "41221"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: d55b2153-6ea2-11e9-ab2f-0ae96c07b466
the holderIdentity field in that annotation holds the identity of the current leader.
View the pods in the default namespace with a custom view:
kubectl get pods -o custom-columns=POD:metadata.name,NODE:spec.nodeName --sort-by spec.nodeName -n kube-system
View the kube-scheduler YAML:
kubectl get endpoints kube-scheduler -n kube-system -o yaml
Create a stacked etcd topology using kubeadm:
kubeadm init --config=kubeadm-config.yaml
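a minimal sketch of what such a kubeadm-config.yaml might contain for a stacked (etcd co-located) control plane - the apiVersion depends on the kubeadm release, the load balancer address is a placeholder, and the podSubnet matches the flannel CIDR used earlier:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "[load-balancer-dns]:6443"
networking:
  podSubnet: 10.244.0.0/16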
Watch as pods are created in the kube-system namespace:
kubectl get pods -n kube-system -w
securing cluster communications:
cat .kube/config | more
View the service account token:
kubectl get secrets
Create a new namespace named my-ns:
kubectl create ns my-ns
Run the kube-proxy pod in the my-ns namespace:
kubectl run test --image=chadmcrowell/kubectl-proxy -n my-ns
List the pods in the my-ns namespace:
kubectl get pods -n my-ns
Run a shell in the newly created pod:
kubectl exec -it <name-of-pod> -n my-ns sh
List the services in the namespace via API call:
curl localhost:8001/api/v1/namespaces/my-ns/services
View the token file from within a pod:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
**the curl above fails because the default service account in that namespace doesn't have permission to list services.
let me add a role that can read services and bind it to that service account, and then::
$ kc create role myns-service-read -n myns --verb=get,list --resource=service
role.rbac.authorization.k8s.io/myns-service-read created
$ kc create rolebinding myns-serivce-read-myns-default -n myns --role=myns-service-read --serviceaccount=myns:default
rolebinding.rbac.authorization.k8s.io/myns-serivce-read-myns-default created
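declaratively, those two imperative commands correspond to roughly this role and rolebinding (a sketch of the generated objects):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myns-service-read
  namespace: myns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myns-serivce-read-myns-default
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myns-service-read
subjects:
- kind: ServiceAccount
  name: default
  namespace: myns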
now, it works::
# curl localhost:8001/api/v1/namespaces/myns/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/myns/services",
    "resourceVersion": "87523"
  },
  "items": []
List the service account resources in your cluster:
kubectl get serviceaccounts
end to end tests on your cluster:
Run a simple nginx deployment:
kubectl run nginx --image=nginx
View the deployments in your cluster:
kubectl get deployments
View the pods in the cluster:
kubectl get pods
Use port forwarding to access a pod directly:
kubectl port-forward $pod_name 8081:80
kubectl port-forward deployment/nginx-deployment 8888:80
Get a response from the nginx pod directly:
curl --head http://127.0.0.1:8081
View the logs from a pod:
kubectl logs $pod_name
Run a command directly from the container:
kubectl exec -it <pod name> -- nginx -v
Create a service by exposing port 80 of the nginx deployment:
kubectl expose deployment nginx --port 80 --type NodePort
List the services in your cluster:
kubectl get services
Get a response from the service:
curl -I localhost:$node_port
List the nodes' status:
kubectl get nodes
View detailed information about the nodes:
kubectl describe nodes
View detailed information about the pods:
kubectl describe pods
cluster:
upgrading the kubernetes cluster:
View the version of the server and client on the master node:
kubectl version --short
View the version of the scheduler and controller manager:
kubectl get pods -n kube-system kube-controller-manager-chadcrowell1c.mylabserver.com -o yaml
View the name of the kube-controller pod:
kubectl get pods -n kube-system
Set the VERSION variable to the latest stable release of Kubernetes:
export VERSION=v1.14.1
Set the ARCH variable to the amd64 system:
export ARCH=amd64
View the latest stable version of Kubernetes using the variable:
echo $VERSION
Curl the latest stable version of Kubernetes:
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > kubeadm
Install the latest version of kubeadm:
sudo install -o root -g root -m 0755 ./kubeadm /usr/bin/kubeadm
Check the version of kubeadm:
sudo kubeadm version
Plan the upgrade:
sudo kubeadm upgrade plan
Apply the upgrade to 1.14.1:
kubeadm upgrade apply v1.14.1
View the differences between the old and new manifests:
diff kube-controller-manager.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
Curl the latest version of kubelet:
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet
Install the latest version of kubelet:
sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet
Restart the kubelet service:
sudo systemctl restart kubelet.service
Watch the nodes as they change version:
kubectl get nodes -w
Curl the latest version of kubectl:
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > kubectl
Install the latest version of kubectl:
sudo install -o root -g root -m 0755 ./kubectl /usr/bin/kubectl
**if you get this message when attempting to upgrade using kubeadm, upgrade kubelet first and then attempt the upgrade again:
- There are kubelets in this cluster that are too old that have these versions [v1.13.5]
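if kubelet was installed from the apt repo and version-held as shown earlier, a sketch of upgrading it on a node first (the target version here is just an example):
sudo apt-mark unhold kubelet
sudo apt-get update && sudo apt-get install -y kubelet=1.14.1-00
sudo apt-mark hold kubelet
sudo systemctl restart kubelet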
performing upgrades to os within a kubernetes cluster:
See which pods are running on which nodes:
kubectl get pods -o wide
Evict the pods on a node:
kubectl drain [node_name] --ignore-daemonsets
Watch as the node changes status:
kubectl get nodes -w
Schedule pods to the node after maintenance is complete:
kubectl uncordon [node_name]
Remove a node from the cluster:
kubectl delete node [node_name]
Generate a new token:
sudo kubeadm token generate
List the tokens:
sudo kubeadm token list
Print the kubeadm join command to join a node to the cluster:
sudo kubeadm token create [token_name] --ttl 2h --print-join-command
*node draining/token creation stuff*:
kc get no
NAME STATUS ROLES AGE VERSION
5575e104891c.mylabserver.com Ready <none> 68m v1.18.12
7e393cfe541c.mylabserver.com Ready master 71m v1.18.12
faa65cd3621c.mylabserver.com Ready <none> 68m v1.18.12
Chaitanya.Kolluru@9XFNN53 MINGW64 ~/Desktop/AA-new-2/tuna1/kubernetes-helm-istio/kube-practice/kube-12-7 (master)
$ kc drain 5575e104891c.mylabserver.com --ignore-daemonsets
node/5575e104891c.mylabserver.com cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-d89mx, kube-system/kube-proxy-zzbxc
node/5575e104891c.mylabserver.com drained
Chaitanya.Kolluru@9XFNN53 MINGW64 ~/Desktop/AA-new-2/tuna1/kubernetes-helm-istio/kube-practice/kube-12-7 (master)
$ kc get no
NAME STATUS ROLES AGE VERSION
5575e104891c.mylabserver.com Ready,SchedulingDisabled <none> 69m v1.18.12
7e393cfe541c.mylabserver.com Ready master 72m v1.18.12
faa65cd3621c.mylabserver.com Ready <none> 69m v1.18.12
Chaitanya.Kolluru@9XFNN53 MINGW64 ~/Desktop/AA-new-2/tuna1/kubernetes-helm-istio/kube-practice/kube-12-7 (master)
$ kc delete node 5575e104891c.mylabserver.com
node "5575e104891c.mylabserver.com" deleted
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
ouwm4l.lxwwbv1ryaqqmwrp 22h 2020-12-09T03:27:02Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
cloud_user@7e393cfe541c:~/.kube$ kubeadm token generate
cloud_user@7e393cfe541c:~/.kube$ date
Tue Dec 8 04:40:52 UTC 2020
cloud_user@7e393cfe541c:~/.kube$ kubeadm token generate
xmuxk7.n1y0lfl91fdb7ntv
cloud_user@7e393cfe541c:~/.kube$ kubeadm token create xmuxk7.n1y0lfl91fdb7ntv --ttl 2h --print-join-command
W1208 04:41:52.452903 2682 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 172.31.99.166:6443 --token xmuxk7.n1y0lfl91fdb7ntv --discovery-token-ca-cert-hash sha256:486a789182cf85aa3e2fb0399b63afed48448089f3241d678766ecb7a4883320
and i ran this to ignore pre-flight checks and re-join the previously removed node to the cluster using the new token:
sudo kubeadm join 172.31.99.166:6443 --token xmuxk7.n1y0lfl91fdb7ntv --discovery-token-ca-cert-hash sha256:486a789182cf85aa3e2fb0399b63afed48448089f3241d678766ecb7a4883320 --ignore-preflight-errors=FileAvailable--etc-kubernetes-kubelet.conf,Port-10250,FileAvailable--etc-kubernetes-pki-ca.crt
$ kubectl get node
NAME STATUS ROLES AGE VERSION
5575e104891c.mylabserver.com NotReady <none> 4s v1.18.12
7e393cfe541c.mylabserver.com Ready master 77m v1.18.12
faa65cd3621c.mylabserver.com Ready <none> 74m v1.18.12
backing up and restoring cluster:
backing up etcd:
Get the etcd binaries:
wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
Unzip the compressed binaries:
tar xvf etcd-v3.3.12-linux-amd64.tar.gz
Move the files into /usr/local/bin:
sudo mv etcd-v3.3.12-linux-amd64/etcd* /usr/local/bin
Take a snapshot of the etcd datastore using etcdctl:
sudo ETCDCTL_API=3 etcdctl snapshot save snapshot.db --endpoints=https://127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
View the help page for etcdctl:
ETCDCTL_API=3 etcdctl --help
Browse to the folder that contains the certificate files:
cd /etc/kubernetes/pki/etcd/
View that the snapshot was successful:
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
Zip up the contents of the etcd directory:
**the cert directory
sudo tar -zcvf etcd.tar.gz /etc/kubernetes/pki/etcd
Copy the etcd directory to another server:
scp etcd.tar.gz [user]@[destination-server]:~/
copy over the etcd cert directory and snapshot.db, then restore using the etcdctl snapshot restore command.
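a minimal sketch of the restore side (the data directory is an arbitrary new path; etcd, or the etcd static pod manifest, then has to be pointed at it):
sudo ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd-restore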
networking:
pod and node networking:
See which node our pod is on:
kubectl get pods -o wide
Log in to the node:
ssh [node_name]
View the node's virtual network interfaces:
ifconfig
View the containers in the pod:
docker ps
Get the process ID for the container:
docker inspect --format '{{ .State.Pid }}' [container_id]
Use nsenter to run a command in the process's network namespace:
nsenter -t [container_pid] -n ip addr
sudo nsenter -t 24690 -n ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 1a:4f:d1:9c:10:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.219.4/32 scope global eth0
valid_lft forever preferred_lft forever
** 4: eth0@if10 here means the pod's eth0 is one end of a veth pair whose other end is interface index 10 in the node's ip addr output:
10: cali4eb4bc0a4fb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
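to confirm the pairing from the host side, list the node's interfaces and look for the matching index (the cali* names come from this calico setup and will differ per cni):
ip addr show
ip link show | grep cali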
container network interface (cni):
the cni plugin provides the pod network overlay - essentially a tunnel between nodes.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
diagram on how inter-node communication occurs:
node1 node2
pod1 pod2
ip 1 ip 2
eth0 eth1
veth0 veth1
eth0->veth0->bridge->eth0(node1) ====CNI ======= eth0(node2)->bridge->veth1->eth1
**the cni in use is hinted at by the pod cidr passed to kubeadm init --pod-network-cidr, since each cni expects its own pod cidr.
$ kubectl get pods -n kube-system -o wide | grep h3681c | awk '{print $1}'
etcd-chaitanyah3681c.mylabserver.com
kube-apiserver-chaitanyah3681c.mylabserver.com
kube-controller-manager-chaitanyah3681c.mylabserver.com
kube-flannel-ds-amd64-dwdwz
kube-proxy-ntmhx
kube-scheduler-chaitanyah3681c.mylabserver.com
$ kubectl get pods -n kube-system -o wide | grep h3682c | awk '{print $1}'
coredns-5d4dd4b4db-gpl5x
kube-flannel-ds-amd64-vw7xp
kube-proxy-l4md5
$ kubectl get pods -n kube-system -o wide | grep h3683c | awk '{print $1}'
coredns-5d4dd4b4db-p7zxv
kube-flannel-ds-amd64-f24p6
kube-proxy-p8vtp
kubeadm creates etcd, the apiserver, the controller manager, and the scheduler as pods in the kube-system namespace, instead of
running them as services on the controller nodes the way the hard-way guide does.
service networking:
YAML for the nginx NodePort service:
apiVersion: v1
kind: Service
metadata:
name: nginx-nodeport
spec:
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
selector:
app: nginx
Get the services YAML output for all the services in your cluster:
kubectl get services -o yaml
Try and ping the clusterIP service IP address:
ping 10.96.0.1
View the list of services in your cluster:
kubectl get services
View the list of endpoints in your cluster that get created with a service:
kubectl get endpoints
Look at the iptables rules for your services:
sudo iptables-save | grep KUBE | grep nginx
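the proxy mode (iptables vs ipvs) that generates those rules can be checked in the kube-proxy ConfigMap that kubeadm creates (seen earlier in kube-system):
kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode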
ingress rules and load balancers:
View the list of services:
kubectl get services
The load balancer YAML spec:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
Create a new deployment:
kubectl run kubeserve2 --image=chadmcrowell/kubeserve2
View the list of deployments:
kubectl get deployments
Scale the deployments to 2 replicas:
kubectl scale deployment/kubeserve2 --replicas=2
View which pods are on which nodes:
kubectl get pods -o wide
Create a load balancer from a deployment:
kubectl expose deployment kubeserve2 --port 80 --target-port 8080 --type LoadBalancer
View the services in your cluster:
kubectl get services
Watch as an external ip is created for a service:
kubectl get services -w
Look at the YAML for a service:
kubectl get services kubeserve2 -o yaml
Curl the external IP of the load balancer:
curl http://[external-ip]
View the annotation associated with a service:
kubectl describe services kubeserve2
Set the annotation to route load balancer traffic local to the node:
kubectl annotate service kubeserve2 externalTrafficPolicy=Local
**local vs cluster externalTrafficPolicy:
Local means traffic is distributed evenly across the nodes backing the service, not across all pods in the cluster.
so if you have 5 pods on node1 and 10 pods on node2, each node still receives half of the requests,
and the pods on node1 each handle more load because node1 has fewer pods.
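externalTrafficPolicy is a field on the Service spec, so it can also be set with a patch (a sketch against the kubeserve2 service above):
kubectl patch svc kubeserve2 -p '{"spec":{"externalTrafficPolicy":"Local"}}'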
The YAML for an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: first.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: second.foo.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  - http:
      paths:
      - backend:
          serviceName: service3
          servicePort: 80
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
kubeserve2-ingress foo.bar.com 80 8s
cloud_user@chaitanyah3681c:~/practice-5$
cloud_user@chaitanyah3681c:~/practice-5$
cloud_user@chaitanyah3681c:~/practice-5$
cloud_user@chaitanyah3681c:~/practice-5$
cloud_user@chaitanyah3681c:~/practice-5$ kubectl describe ingress kubeserve2-ingress
Name: kubeserve2-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
kubeserve2:80 (10.244.1.128:8080,10.244.2.148:8080)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kubeserve2-ingress","namespace":"default"},"spec":{"rules":[{"host":"foo.bar.com","http":{"paths":[{"backend":{"serviceName":"kubeserve2","servicePort":80}}]}}]}}
Events: <none>
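to test name-based routing through the ingress controller, pass the host header explicitly (the controller address is a placeholder):
curl -H "Host: foo.bar.com" http://[ingress-controller-ip]/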
**ingress explained:
https://kubernetes.io/docs/concepts/services-networking/ingress/
also,
An Ingress is an object that only provides a configuration, not an active component (such as a Pod or a Service). As coreypobrien said, you need to deploy an Ingress controller, which will read the ingresses you deployed in your cluster and change its configuration accordingly.
At this page you can find the documentation of the official kubernetes ingress controller, based on nginx https://github.com/kubernetes/ingress-nginx/blob/master/README.md
**more info below:
$ cat ingress-controller.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-controller
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx-service
          servicePort: 80
      - path: /kubeserve
        backend: