LAB: K8s Daemon Sets

This scenario shows how K8s DaemonSets work on Minikube by adding new nodes to the cluster.

Steps

  • Copy the manifest below and save it on your PC as daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logdaemonset
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:                                                 # the label selector must match the labels in the pod template (template > metadata > labels)
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master                      # this toleration lets the daemonset run on master nodes
        effect: NoSchedule                                       # remove it if your masters should not run pods
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2      # runs fluentd-elasticsearch on every node
        resources:
          limits:
            memory: 200Mi                                        # resource limits and requests for each pod
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:                                            # volume mounts for each pod
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:                                                   # hostPath volumes on each node
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
  • Create the DaemonSet on Minikube:
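
Assuming the manifest was saved as daemonset.yaml (as in the first step), the DaemonSet can be created with:

kubectl apply -f daemonset.yaml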

image

  • Run a watch command. On Linux: "watch kubectl get daemonset"; on Windows: "kubectl get daemonset -w".

image

  • Add a new node to the cluster:
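
With a Minikube version that supports multi-node clusters, a worker node can be added with the command below (the node name Minikube assigns may differ on your setup):

minikube node add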

image

  • See that the app automatically runs on the new node:
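
A generic way to check which node each DaemonSet pod landed on:

kubectl get pods -o wide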

image

  • Add another node (3rd):

image

  • Now the DaemonSet also runs a pod on the 3rd node:

image

  • Delete one of the pods:
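
A sketch of the delete command; <pod-name> is a placeholder for one of the pod names listed by "kubectl get pods":

kubectl delete pod <pod-name>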

image

  • The pod deletion can be seen here:

image

  • The DaemonSet automatically creates a new pod:
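
Listing the pods again should show the replacement pod with a fresh AGE:

kubectl get pods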

image

  • See the nodes' resources on the dashboard:
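
The Minikube dashboard can be opened with:

minikube dashboard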

image

  • Delete the nodes and delete the DaemonSet:
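
A possible cleanup sequence; the node names are the ones Minikube assigned (listed by "minikube node list"):

kubectl delete -f daemonset.yaml
minikube node delete <node-name>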

image