This Hello World application uses Docker with Node.js and includes a DevOps toolchain that is preconfigured for continuous delivery with Vulnerability Advisor, source control, issue tracking, and online editing, as well as deployment to the IBM Kubernetes Service.
Application code is stored in source control, along with its Dockerfile and its Kubernetes deployment script. The target cluster is configured during toolchain setup (using an IBM Cloud API key and cluster name). You can later change these by altering the Delivery Pipeline configuration. Any code change to the Git repo will automatically be built, validated and deployed into the Kubernetes cluster.
It implements the following best practices:
- Separate Continuous Integration (CI) and Continuous Delivery (CD) pipelines.
- Support for different deployment strategies (rolling, blue/green, and canary).
- Sanity-check the Dockerfile before attempting to build the image.
- Build a container image on every Git commit, setting a tag based on build number, timestamp, and commit ID for traceability.
- Use a private image registry to store the built image, and automatically configure access permissions for target cluster deployment using API tokens that can be revoked.
- Check the container image for security vulnerabilities.
- Insert the built image tag into the deployment manifest automatically.
- Use an explicit namespace in the cluster to insulate each deployment (and make it easy to clear with `kubectl delete namespace`).
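The traceable image tag described above can be sketched in shell. The build number and commit ID below are illustrative placeholders; in the real pipeline the build number comes from the pipeline environment and the commit ID from Git:

```shell
# Compose an image tag of the form <build>-<timestamp>-<commit> for traceability.
# BUILD_NUMBER and COMMIT_ID are placeholders; a real pipeline supplies the build
# number and derives the commit with `git rev-parse --short HEAD`.
BUILD_NUMBER=42
TIMESTAMP=$(date -u +%Y%m%d%H%M%S)   # UTC build timestamp
COMMIT_ID=abc1234
IMAGE_TAG="${BUILD_NUMBER}-${TIMESTAMP}-${COMMIT_ID}"
echo "$IMAGE_TAG"
```

Because the tag embeds all three values, any running pod can be traced back to the exact commit and build that produced it.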
- Toolchain Name: a unique name to identify your toolchain.
- Region: the region where the toolchain is to be deployed (for example, `us-south`).
- Application repo: provide your application repo details, or use the default repo.
- Inventory repo: used to capture the build and artifact metadata.
A successful CI build uploads the artifact to IBM Container Registry (ICR) and commits the build metadata, in JSON format, to the inventory repository. The CD pipeline listens for changes in the inventory, then triggers a pipeline run that fetches the artifact from IBM Container Registry and deploys it to your instances.
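As a rough sketch, an inventory entry might look like the JSON below. The field names and values are illustrative assumptions, not the exact schema used by the toolchain:

```shell
# Write a hypothetical inventory entry (field names are assumptions, not the real schema).
cat > inventory-entry.json <<'EOF'
{
  "repository_url": "https://example.com/org/hello-world",
  "artifact": "us.icr.io/mynamespace/hello-world:42-20240101120000-abc1234",
  "commit_sha": "abc1234",
  "build_number": 42
}
EOF
# In the pipeline, committing a file like this to the inventory repo is the
# change the CD pipeline listens for.
cat inventory-entry.json
```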
Refer to Secrets Manager in Detail for more information.
- Identify your secrets store.
- Select the secrets store instance that you want to use in the toolchain.
Select the IBM Kubernetes cluster on which you want to deploy your application.
Select the deployment strategy for releasing your application to the IBM Kubernetes cluster. Refer to Deployment Strategies in Detail for more information.
Select the optional tools as required in your toolchain.
This section lists any issues with the inputs provided in the previous steps.
Several tools in this toolchain, and possibly in your customizable scripts, require secrets to access privileged resources. An IBM Cloud API key is an example of such a secret. These secrets must be securely stored within an IBM-recommended secrets management tool, such as IBM Key Protect for IBM Cloud, IBM Cloud Secrets Manager, or HashiCorp Vault. The secrets management tool can be integrated into the toolchain so that you can easily reference the secrets in your Tekton pipeline.
key-protect
: Key Protect is a cloud-based security service that provides life cycle management for encryption keys that are used in IBM Cloud services or customer-built applications.

secrets-manager
: With Secrets Manager, you can create, lease, and centrally manage secrets that are used in IBM Cloud services or your custom-built applications.
In Kubernetes there are several ways to release an application; choosing the right strategy is necessary to keep your infrastructure reliable during an application update.
The toolchain supports the following deployment strategies.
Rolling updates allow a deployment to take place with zero downtime by incrementally replacing pod instances with new ones. The new pods are scheduled on nodes with available resources. As with application scaling, if a deployment is exposed publicly, the service load-balances traffic only to available pods during the update. An available pod is an instance that is available to the users of the application.
Rolling updates allow the following actions:
- Promote an application from one environment to another (via container image updates)
- Roll back to previous versions
- Continuous Integration and Continuous Delivery of applications with zero downtime
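These actions map onto standard `kubectl` commands. A minimal sketch, assuming a live cluster and a deployment named `hello-world` with a container of the same name (both names are illustrative):

```shell
# Promote: update the container image; Kubernetes replaces pods incrementally.
kubectl set image deployment/hello-world hello-world=us.icr.io/mynamespace/hello-world:newtag

# Watch the rolling update until all new pods are available.
kubectl rollout status deployment/hello-world

# Roll back to the previous version if the new one misbehaves.
kubectl rollout undo deployment/hello-world
```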
For the first deployment:
1. Check whether the Ingress controller exists.
2. If not, label the current deployment as blue and perform the deployment.
For subsequent deployments:
1. Identify the service that the Ingress controller points to.
2. If it points to the blue service, create the new deployment as the green deployment; if it points to green, create it as the blue deployment.
3. Perform the new deployment. (This updates the old deployment if an older version exists.)
4. Run the acceptance test against the latest deployment.
5. If the acceptance test fails, fail the pipeline. The developer can debug the latest deployment, because live traffic is not affected.
6. If the acceptance test passes, point the Ingress controller to the new deployment.
7. The old deployment stays as is for backup and debugging purposes.
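The blue/green alternation in the steps above can be sketched in shell. `CURRENT_TARGET` stands in for the service name read from the Ingress spec, and the service names are illustrative:

```shell
# Decide which color to deploy next, based on where the ingress currently points.
CURRENT_TARGET="hello-world-blue"   # in practice, read from the Ingress backend
if [ "$CURRENT_TARGET" = "hello-world-blue" ]; then
  NEXT_DEPLOYMENT="hello-world-green"
else
  NEXT_DEPLOYMENT="hello-world-blue"
fi
echo "next deployment: $NEXT_DEPLOYMENT"
```

Because the two colors strictly alternate, the previous release is always left running untouched until the new one passes its acceptance test.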
For the first deployment:
1. Check whether the Ingress controller exists.
2. If not, label the current deployment as prod and perform the deployment.
For subsequent deployments:
1. Deploy the latest build as the canary deployment.
2. A percentage (`step-size`) of incoming traffic is routed to the canary deployment, and tests are run against it.
3. If the tests pass, increase the `step-size`, which routes more incoming traffic to the canary deployment.
4. Once the `step-size` reaches 100 percent and all tests pass, the existing production deployment is updated with the changes that were tested in the canary deployment.
5. The canary deployment is then removed, and incoming traffic is routed back to production.
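The traffic ramp in the steps above can be sketched as a loop; the 25 percent `step-size` is an illustrative value:

```shell
# Simulate the canary ramp: grow traffic to the canary by step-size until 100%.
STEP_SIZE=25
CANARY_WEIGHT=0
while [ "$CANARY_WEIGHT" -lt 100 ]; do
  CANARY_WEIGHT=$((CANARY_WEIGHT + STEP_SIZE))
  if [ "$CANARY_WEIGHT" -gt 100 ]; then CANARY_WEIGHT=100; fi
  echo "canary weight: ${CANARY_WEIGHT}%"   # tests run at each step
done
echo "all tests passed: promote canary to production"
```

A smaller `step-size` gives more test checkpoints before full rollout, at the cost of a slower release.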