
Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Architecture

(Image: use cases & architecture diagram)

K8S Pod Networking

(Image: pod network diagram)

Kubernetes Cluster setup

We set up the Kubernetes cluster with the following prerequisites:

One master node (Ubuntu Desktop OS, 64-bit) and two worker nodes (Ubuntu Server OS, 64-bit)

Worker nodes are VM instances on the same machine (for externally hosted instances, they must be reachable over the same network interface the master node uses: eth0, eth1, or Wi-Fi)

Each worker node should have a minimum of 2 GB RAM, 2 CPU cores, and 30 GB of disk space

Note: verify connectivity among all nodes (e.g. via ssh or ping) before proceeding

Setup steps

Execute the following four steps on every node (both master and worker nodes):

  • sudo apt-get update && sudo apt-get install -qy docker.io
  • sudo apt-get update && sudo apt-get install -y apt-transport-https && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update
  • sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
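Step 3 above is easy to get wrong: without the pipe, echo's output never reaches tee and the repository line is silently lost. The pattern can be checked on a scratch file first (the path below is a stand-in for the real sources list, not the actual file):

```shell
# Stand-in for /etc/apt/sources.list.d/kubernetes.list
LIST=/tmp/kubernetes.list.sample
rm -f "$LIST"

# echo's stdout must be piped into tee for the line to land in the file
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee -a "$LIST"

cat "$LIST"
```

With sudo, the same `echo … | sudo tee -a` shape is what allows an unprivileged shell to append to a root-owned file, which a plain `>>` redirect cannot do.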

Configure cgroup driver used by kubelet on Master Node

Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config

The following statements should be executed only on the master node.

Double-check that the Docker cgroup driver and the kubelet cgroup driver are equal:

  1. docker info | grep -i cgroup
  2. cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  3. sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
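As a quick illustration, the sed substitution above can be tried on a throwaway copy first (the sample line below is an assumption about the drop-in file's contents, not its exact text):

```shell
# Create a sample drop-in line -- illustrative content, not the real file
printf 'Environment="KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd"\n' > /tmp/10-kubeadm.conf.sample

# Same substitution as above, applied to the sample copy
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /tmp/10-kubeadm.conf.sample

# The driver flag now reads cgroupfs
grep -o 'cgroup-driver=[a-z]*' /tmp/10-kubeadm.conf.sample
```

After changing the real file, the kubelet must be restarted (`sudo systemctl daemon-reload && sudo systemctl restart kubelet`) for the new driver to take effect.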

Cluster Creation

  1. sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master-node-ip> (store the init output securely; it contains the join token the worker nodes will need)
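Because the join command printed by init is easy to lose, one option is to save the output to a file and grep it back later. A minimal sketch using simulated output (the file name, token, and hash below are placeholders, not real values):

```shell
# Simulated tail of `kubeadm init` output -- placeholder values only
cat > /tmp/kubeadm-init.out <<'EOF'
Your Kubernetes master has initialized successfully!
You can now join any number of machines by running the following on each node:
  kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
EOF

# Recover the join command later without re-reading the console
grep 'kubeadm join' /tmp/kubeadm-init.out
```

On a real cluster, `kubeadm token create --print-join-command` can also regenerate a join command if the original output was lost.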

kubectl on the master should be run as a non-root user:

  1. sudo useradd -G sudo -m -s /bin/bash <username>
  2. sudo passwd <username>
  3. sudo su <username>
  4. cd $HOME
  5. sudo cp /etc/kubernetes/admin.conf $HOME/
  6. sudo chown $(id -u):$(id -g) $HOME/admin.conf
  7. export KUBECONFIG=$HOME/admin.conf
  8. echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
  9. source ~/.bashrc
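Step 8 appends the export line unconditionally, so re-running the setup duplicates it in ~/.bashrc. A guarded variant, sketched here (the BASHRC variable is introduced only for illustration, so the snippet can be pointed at a scratch file):

```shell
# Target file is parameterised so the pattern can be tried safely
BASHRC="${BASHRC:-$HOME/.bashrc}"

# Append the export only if an identical line is not already present
grep -qxF "export KUBECONFIG=$HOME/admin.conf" "$BASHRC" 2>/dev/null ||
  echo "export KUBECONFIG=$HOME/admin.conf" >> "$BASHRC"
```

Running this any number of times leaves exactly one copy of the line in the file.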

Apply your pod network (flannel)

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  2. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Before executing the join command on each worker node, swap must be disabled:

  • sudo swapoff -a
  • sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  1. sudo kubeadm join --token token# master-node-ip:6443 --discovery-token-ca-cert-hash sha256:hash#
  2. kubectl get nodes (run on the master; should list every node that has joined the cluster via kubeadm join)
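The fstab edit above can be sanity-checked on a throwaway copy before touching the real file (the sample entries below are illustrative, not taken from a real machine):

```shell
# Sample fstab with one swap entry -- illustrative content only
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same edit as above, against the sample: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample

cat /tmp/fstab.sample
```

Only the swap entry gains a leading `#`; the root filesystem line is untouched, so the machine still boots normally with swap permanently off.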

If anything goes wrong, for example worker nodes not appearing in the node list, or problems creating pods and containers on a worker node, the process can be reset.

Execute the following on every node, including the master node, then start again from step 1:

  • sudo kubeadm reset

Kubectl basic commands

  • kubectl get nodes

  • kubectl cluster-info

  • kubectl config view

  • kubectl get pods -o wide

  • kubectl get deployments

  • kubectl describe pods

  • kubectl logs pod-name

  • kubectl run pod-name --image=image#:tag (pod creation)

  • kubectl get services/svc

  • kubectl describe service service-name

  • kubectl scale deployment name --replicas=3

  • kubectl delete service service-name

  • kubectl expose deployment/pod-name --type=LoadBalancer/NodePort --port=service-port

  • kubectl get ingress/ing

  • kubectl describe ingress ingress-name#

Examples:

  • kubectl run webserver --image=nginx:alpine --replicas=2
  • kubectl expose deployment webserver --type=LoadBalancer --port=80

  • kubectl run camunda --image=camunda/camunda-bpm-platform:latest --replicas=2
  • kubectl expose deployment/camunda --type=LoadBalancer --port=8080

  • kubectl run wso2apim --image=isim/wso2apim
  • kubectl expose deployment/wso2apim --type=LoadBalancer --port=9443

  • kubectl run wso2esb --image=isim/wso2esb
  • kubectl expose deployment/wso2esb --type=LoadBalancer --port=9443