
CI-CD-for-Docker-Kubernetes-using-Jenkins

This project documents the implementation of a CI/CD pipeline using Jenkins for Docker and Kubernetes.

Overview

The project focuses on implementing continuous delivery for Docker containers 🐳. The aim is to continuously build Docker images and deploy them to a Kubernetes cluster 🚀. This approach is commonly used in microservice architecture, but it can be applied anywhere containers are used.

With continuous code changes, there needs to be a continuous build and test process 🔨, as well as regular deployment of the containers 🚛. The deployment process is typically handled by the operations team 👷‍♂️, who manage the container orchestration tool like Kubernetes 🌐. However, manual deployment can create dependencies 🔗 and be time-consuming ⏰.

To address this, the project aims to automate the build and release process of container images, allowing for fast and continuous deployment as soon as code changes are made by developers 💻. This will be achieved through the implementation of a continuous delivery or deployment pipeline for Docker containers 📦.

Services used

Jenkins, SonarQube Scanner, Maven, Docker, Kubernetes, Helm, AWS EC2, AWS S3, AWS Route 53, AWS ELB, AWS IAM, GitHub

Project Architecture

The following events happen serially:

  • A developer makes a code change and pushes it to GitHub 💻.
  • Jenkins fetches the code, including the Dockerfile, Jenkinsfile, and Helm charts 📥.
  • The code is tested and analyzed using Checkstyle and SonarQube scanner 🔍, with results uploaded to SonarQube Server 📈.
  • If the code passes all quality gates, an artifact is built with Maven 🔨.
  • A Docker build process starts to build the Docker image 🐳.
  • If everything passes, the Docker image is pushed to Docker Hub 🚀.
  • Jenkins uses Helm to deploy the Helm charts to the Kubernetes Cluster 🌐.
  • The Helm chart deployment creates all necessary resources, such as pods, services, secrets, and volumes 📦.
  • If any changes are made, such as a new image tag for an application pod, they are implemented 🔧.

Implementation Details

Continuous Integration Setup 🛠️

Follow the README.md file in the https://github.com/SumitM01/CI-using-Jenkins–Nexus-and-Sonaqube repository to create and set up the Continuous Integration pipeline. Create instances for Jenkins and the SonarQube scanner only. Do not create an instance for Nexus artifact storage, as it is not required here.

Setup Jenkins server

  • Install these additional plugins on Jenkins.
    • Docker pipeline 🐳
  • Log in to the Jenkins instance using SSH and install openjdk-11-jdk and openjdk-8-jdk using the following commands 🔧
sudo apt update
sudo apt install openjdk-8-jdk -y
sudo apt install openjdk-11-jdk -y

configure-jenkins-maven configure-jenkins-credentials

  • Configure JDK installation on Jenkins by providing Java_Home path. configure-jenkins-jdk-8 configure-jenkins-jdk-11

  • Configure Sonarqube scanner and sonarqube server with sonarqube token 🔍 configure-jenkins-sonarqube-scanner configure-jenkins-system-sonarqube-server

  • SSH to the instance and install docker engine in it using the following commands 🐳

#!/bin/bash
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
  • Add the jenkins user to the docker group 🔧
sudo usermod -aG docker jenkins
# log out and back in (or restart the Jenkins service) for the group change to take effect

Complete kOps pre-requisites

  • Create a domain in GoDaddy and a subdomain in Route 53.🌐

    • Create a Domain on GoDaddy like this.🖱️ domain-created

    • Create a sub-domain/hosted zone on AWS Route 53 like this.🖱️ sub-domain-created

  • Copy the nameserver (NS) records 📋 from the subdomain's hosted zone to the GoDaddy DNS manager.

    • After creation of hosted zone, copy the displayed records 📋. sub-domain-ns-recs

    • On Godaddy DNS Manager

      • Under the Nameservers section, click on Change Nameservers and paste the copied records individually. domain-ns-recs-replaced
  • Launch the kOps server instance with the following specifications 🚀:

    • AMI: Ubuntu 20/18
    • Instance type: t2.micro
    • Security Group Inbound Rules: 22 allowed from your IP kops-server-created
  • Create an S3 bucket 🪣 on the same region as the server.

    • During creation of the bucket, ensure that it is in the same region as the kOps server. s3-bucket-created
  • Create an IAM 🔐 user for awscli access and store its credentials. iam-user-created

  • Install awscli on kOps server and configure it with the IAM credentials 🔐. configure-awscli

  • Run the following commands to install ⬇ awscli :

sudo apt update
sudo apt install awscli -y
  • Install kubectl and kOps from the Kubernetes site 🌐:
    • Install kubectl using the following commands 🔧:
     curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
     sudo install -m 0755 kubectl /usr/local/bin/kubectl
    • Install kOps from the Kubernetes site using the following commands 🔧:
    curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
    chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops

Configure SSH key login to github remote:

  • Generate the ssh keys on the kops server using ssh-keygen 🔑
  • Go to account settings -> SSH and GPG keys -> add key -> paste the contents of the public ssh key -> save 💾 git-ssh-keys-created
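The key generation step can be run non-interactively; a minimal sketch (the key type, comment, and temporary path are illustrative assumptions — on the kops server you would normally write to ~/.ssh/id_ed25519):

```shell
# Generate a demo key pair non-interactively; the temp path is for
# illustration only -- on the server use ~/.ssh/id_ed25519 instead.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -C "kops-server" -f "$keydir/id_ed25519" -N "" -q
# The .pub file is what gets pasted into GitHub's SSH keys page.
cat "$keydir/id_ed25519.pub"
```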

Create a separate repository for the project 📁:

  • Clone the created repo into your kops machine using SSH link 🔗.
git clone git@github.com:git_username/git_repository_name.git
  • IMPORTANT: Cloning the repository over SSH validates the authentication using the created SSH keys🔑.
  • Copy the contents 📋 of the vprofile-project/cicd-kube branch.
  • Clone 🔗 the vprofile-project repo onto the kops machine.
git clone https://github.com/devopshydclub/vprofile-project.git
  • Checkout to cicd-kube branch🔀.
git checkout cicd-kube
  • Copy all the files in the root to the created repository folder 📋.
cp -r * ../your_created_repo/ 
  • Delete files inside your created repo that are not required: docker-db, docker-web, ansible,compose 🗑️.
rm -rf docker-db docker-web ansible compose
  • Copy the Dockerfile from inside the Dockerapp folder to the root and delete the Dockerapp folder 📋.
cp Dockerapp/Dockerfile .
rm -rf Dockerapp
  • Delete 🗑️ the contents of the helm/vprofilecharts/templates folder 📁 and replace them with the contents of the Deploying-an-application-on-Kubernetes-cluster/Setupfiles folder 📁.
cd helm/vprofilecharts/templates
rm -rf *
cd ~
git clone https://github.com/SumitM01/Deploying-an-application-on-Kubernetes-cluster.git
cp -r Deploying-an-application-on-Kubernetes-cluster/Setupfiles/* your_created_repo/helm/vprofilecharts/templates/
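After all this copying and deleting it is easy to miss a file; a small hypothetical helper (the expected paths are assumptions based on the steps above) that checks the restructured repo before pushing:

```shell
# Hypothetical sanity check: verify the restructured repo contains the
# files the pipeline expects; prints what is missing and returns non-zero.
check_repo_layout() {
  repo="$1"
  missing=0
  for p in Dockerfile Jenkinsfile helm/vprofilecharts/templates; do
    if [ ! -e "$repo/$p" ]; then
      echo "missing: $p"
      missing=1
    fi
  done
  return $missing
}
```

Run it as `check_repo_layout your_created_repo` before committing.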

Create an EC2 volume and configure it 💽:

  • Create an EC2 volume using the following command🔧:
aws ec2 create-volume --availability-zone=your_preferred_zone --size=3 --volume-type=gp2

ec2-volume-created

  • Note down the volume ID as displayed after volume creation📝.
  • Replace the volume ID in the vprodbdep.yml file with the copied ID 🔧.
  • On AWS console
    • Go to EC2 management console🖱️.
    • On Navigation Pane, go to volumes 🖱️.
    • Search for the created volume with the volume ID and select it🔍.
    • Click on Manage tags and add the following tag to it🔧:
      • Key : KubernetesCluster
      • Value : your_subdomain_name
      • IMPORTANT: This is necessary because, without the cluster tag, the volume won't get attached to the instance that needs it for database storage⚡.
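The same tag can also be applied from the CLI instead of the console; a sketch assuming placeholder values (vol-xxxxxxxx and your_subdomain_name stand in for your actual volume ID and cluster name):

```shell
# Hypothetical CLI alternative to the console steps above; replace the
# placeholders with the volume ID noted earlier and your subdomain name.
aws ec2 create-tags \
  --resources vol-xxxxxxxx \
  --tags Key=KubernetesCluster,Value=your_subdomain_name
```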

Create Kubernetes Cluster using kops🌐:

  • Run the following command to create a Kubernetes cluster using kOps:🚀
kops create cluster --name=your_subdomain_name --state=s3://your_bucket_name --zones=your_preferred_zone --node-count=2 --node-size=t2.small --master-size=t3.medium --dns-zone=your_subdomain_name

Launch Kubernetes cluster using kops:🚢

  • Run the following command to launch the created cluster using kOps:🚀
kops update cluster --name vprofile.sumitmishra.info --state=s3://vprofile-kube-project --yes --admin
  • Wait for 10-15 minutes for the cluster to launch fully.⏳

Install Helm on the instance☸️

  • While you wait for the cluster to be launched, you can install Helm on the kops server using the following commands
cd
wget https://get.helm.sh/helm-v3.12.2-linux-amd64.tar.gz 
tar -zxvf helm-v3.12.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm --help

Installing-helm

Check health of the cluster using kops:

  • Run the following command to validate the created cluster using kOps:✅
kops validate cluster --name=vprofile.sumitmishra.info --state=s3://vprofile-kube-project

cluster-validation

Configure nodes of kubernetes cluster:⚙️

  • Run the following command to remove the NoSchedule taint from the control-plane node so that pods can be scheduled on it.
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-
  • IMPORTANT: This is very important because the latest Kubernetes versions do not allow pods to be scheduled on control-plane nodes, which causes various errors during deployment. To avoid that, we allow scheduling on the control-plane node.
  • Run the following command to check whether all nodes have the same zone.⚡
kubectl get nodes -L zone
  • If there are no records under Zone, assign zones to individual nodes by running the following command for each node.
kubectl label nodes <node-name> zone=your_preferred_zone
  • IMPORTANT: This is necessary because each node must be in the same zone as the created volume for the volume to attach, and in the zone specified in the deployment files so that deployment does not fail.⚡
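If several nodes need the label, the per-node command above can be wrapped in a loop; a sketch assuming kubectl is already configured against the new cluster (the zone value is a placeholder):

```shell
# Hypothetical: apply the same zone label to every node in one pass.
for n in $(kubectl get nodes -o name); do
  kubectl label "$n" zone=your_preferred_zone --overwrite
done
```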

Configure Node on Jenkins and kops server: 🖥️

  • On kops server
  • Connect to kops-server using SSH client.
  • Using the ubuntu user, install openjdk-11-jdk on the server.📦
sudo apt update
sudo apt install openjdk-11-jdk -y
  • Create a folder at /opt/jenkins-slave and give the ubuntu user ownership of it.📁
sudo mkdir /opt/jenkins-slave
sudo chown ubuntu:ubuntu /opt/jenkins-slave
  • On Jenkins server
  • Configure a node with the following settings:⚙️
    • Remote root directory: /opt/jenkins-slave
    • Labels: KOPS
    • Usage: Only build jobs with label expressions matching this node
    • Launch method: Launch agents via SSH
    • Host: private kops IP
    • Credentials: kops instance private login key
    • Host key verification strategy: Non verifying verification strategy
    • Availability: Keep this agent online as much as possible

Write the Jenkinsfile ✍️

  • On local_machine
  • Write a Jenkinsfile inside your-created-repository by referring to the Jenkinsfile present in vprofile-project/cicd-kube directory.
  • Push the contents to github remote repo.
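As a rough orientation while writing it, the stages usually mirror the architecture described earlier. A minimal declarative sketch — the stage names, image name, credential IDs, SonarQube server name, and chart path here are illustrative assumptions, not the project's actual Jenkinsfile:

```groovy
pipeline {
    agent any
    environment {
        // Assumed names: replace with your Docker Hub repo and Jenkins credential ID
        IMAGE = "your_dockerhub_user/vprofile-app"
        REGISTRY_CREDS = "dockerhub-credentials"
    }
    stages {
        stage('Build Artifact') {
            steps { sh 'mvn clean install -DskipTests' }
        }
        stage('Sonar Analysis') {
            steps {
                withSonarQubeEnv('sonarserver') { sh 'mvn sonar:sonar' }
            }
        }
        stage('Build & Push Image') {
            steps {
                script {
                    def img = docker.build("${IMAGE}:${BUILD_NUMBER}")
                    docker.withRegistry('', REGISTRY_CREDS) { img.push() }
                }
            }
        }
        stage('Helm Deploy') {
            agent { label 'KOPS' }
            steps {
                sh "helm upgrade --install vprofile-stack helm/vprofilecharts --set appimage=${IMAGE}:${BUILD_NUMBER}"
            }
        }
    }
}
```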

Configure and Run the pipeline:🚀

  • On jenkins

    • Create a pipeline
    • Choose Poll SCM and provide * * * * *
    • Choose Pipeline script from SCM and provide your github repository, branch and Jenkinsfile path then save.💾
    • Now commit to the repository then see that the pipeline gets automatically triggered after the commit.🚨
    • Wait for the pipeline to be completed successfully.✅ Pipeline-success
  • After the successful completion of the pipeline do the following

Create a Route53 record 📝

  • On kops server
  • SSH to kops-server.
  • Run the following command to list all the running services in the project.📋
kubectl get svc
  • Copy the load balancer DNS name from the displayed services.
  • Create a new record in the hosted zone of route 53 with the value as the dns of load balancer.🌐
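Creating the record can also be scripted; a sketch assuming placeholder values for the hosted zone ID, the record name, and the load balancer DNS name copied above:

```shell
# Hypothetical CLI alternative: ZONE_ID, the record name, and the ELB DNS
# name are placeholders for your actual values.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "vprofileapp.your_subdomain_name",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "your_elb_dns_name"}]
      }
    }]
  }'
```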

Results🎉

After everything is done, wait for 5-10 mins⏳ then validate the services by accessing the website using the URL.🔗

  • We can see that the website is up and the frontend services are running fine. Login Page login-page

Welcome Page welcome-page

  • Here we can see that the backend services have been created and configured and are also running fine. User details Page (Database Validation) db-validation

User details Page (Cache Validation) cache-validation

Rabbitmq status Page rabbitmq-page

Cleanup🧹

  • Cleanup the services one by one.
  • Delete the cluster in the kops vm using the following command
kops delete cluster --name your_subdomain_name --state=s3://your_bucket_name --yes
  • Take a snapshot of the entire stack and store it in an s3 bucket for future use.🔮
  • Poweroff/terminate the instances
  • Delete security groups.🗑️
  • Delete S3 buckets if you don't require them.🗑️
  • Delete the hosted zone on AWS Route53 if not required.🗑️

Conclusion

This project implemented a complete Continuous Integration and Continuous Deployment pipeline using Jenkins for production deployment on Docker and a Kubernetes cluster. This ensures an efficient and streamlined development and maintenance process for the application.

As documented in this README file, I have invested MANY MANY HOURS of my time in researching 🔎, learning 📖, debugging 👨‍💻 to implement this project. If you appreciate this document please give it a ⭐, share with friends and do give it a try. Thank you for reading this! 😊

References