This repo contains a Terraform module for provisioning a Kubernetes cluster for Jenkins X on Google Cloud.
A Terraform "module" refers to a self-contained package of Terraform configurations that are managed as a group. For more information about modules, refer to the Terraform documentation.
To make use of this module, you need a Google Cloud project. Instructions on how to set up such a project can be found in the Google Cloud Installation and Setup guide. You need your Google Cloud project ID as an input variable for using this module.
You also need to install the Cloud SDK, in particular `gcloud`. You can find instructions on how to install and authenticate in the Google Cloud Installation and Setup guide as well.

Once you have `gcloud` installed, you need to create Application Default Credentials by running:
```sh
gcloud auth application-default login
```
Alternatively, you can export the environment variable `GOOGLE_APPLICATION_CREDENTIALS` referencing the path to a Google Cloud service account key file.
Last but not least, ensure you have the following binaries installed:

- `gcloud`
- `kubectl` ~> 1.14.0 (`kubectl` comes bundled with the Cloud SDK)
- `terraform` ~> 0.12.0 (Terraform installation instructions can be found here)
A default Jenkins X ready cluster can be provisioned by creating a file `main.tf` in an empty directory with the following content:
```hcl
module "jx" {
  source      = "jenkins-x/jx/google"
  gcp_project = "<my-gcp-project-id>"
}
```
You can then apply this Terraform configuration via:
```sh
terraform init
terraform apply
```
This creates a cluster within the specified Google Cloud project with all possible configuration options defaulted.
On completion of `terraform apply` there will be a `jx-requirements.yml` in the working directory which can be used as input to `jx boot`. Refer to Running `jx boot` for more information.
In the default configuration, no custom domain is used. DNS resolution occurs via nip.io. For more information on how to configure and use a custom domain, refer to Using a custom domain.
If you just want to experiment with Jenkins X, you can set `force_destroy` to `true`. This allows you to remove all generated resources when running `terraform destroy`, including any generated buckets and their content.
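For example, a throwaway experimental cluster could enable this flag directly in the module block; a minimal sketch, where the project ID is a placeholder:

```hcl
module "jx" {
  source      = "jenkins-x/jx/google"
  gcp_project = "<my-gcp-project-id>"

  # Allow `terraform destroy` to delete the generated buckets even if
  # they still contain objects; only suitable for experimental clusters.
  force_destroy = true
}
```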
The following two tables provide the full list of input and output variables of this Terraform module.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| bucket_location | Bucket location for storage | string | `"US"` | no |
| cluster_location | The location (region or zone) in which the cluster master will be created. If you specify a zone (such as us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region | string | `"us-central1-a"` | no |
| cluster_name | Name of the Kubernetes cluster to create | string | `""` | no |
| dev_env_approvers | List of git users allowed to approve pull requests for the dev environment repository | list(string) | `[]` | no |
| force_destroy | Flag to determine whether storage buckets get forcefully destroyed | bool | `false` | no |
| gcp_project | The name of the GCP project to use | string | n/a | yes |
| git_owner_requirement_repos | The git id of the owner for the requirement repositories | string | `""` | no |
| jenkins_x_namespace | Kubernetes namespace to install Jenkins X in | string | `"jx"` | no |
| lets_encrypt_production | Flag to determine whether or not to use the Let's Encrypt production server | bool | `true` | no |
| max_node_count | Maximum number of cluster nodes | number | `5` | no |
| min_node_count | Minimum number of cluster nodes | number | `3` | no |
| node_disk_size | Node disk size in GB | string | `"100"` | no |
| node_machine_type | Node type for the Kubernetes cluster | string | `"n1-standard-2"` | no |
| parent_domain | The parent domain to be allocated to the cluster | string | `""` | no |
| resource_labels | Set of labels to be applied to the cluster | map | `{}` | no |
| tls_email | Email used by Let's Encrypt. Required for TLS when parent_domain is specified | string | `""` | no |
| velero_namespace | Kubernetes namespace for Velero | string | `"velero"` | no |
| velero_schedule | The Velero backup schedule in cron notation to be set in the Velero Schedule CRD (see default-backup.yaml) | string | `"0 * * * *"` | no |
| velero_ttl | The lifetime of a Velero backup to be set in the Velero Schedule CRD (see default-backup.yaml) | string | `"720h0m0s"` | no |
| version_stream_ref | The git ref for the version stream to use when booting Jenkins X. See https://jenkins-x.io/docs/concepts/version-stream/ | string | `"master"` | no |
| version_stream_url | The URL for the version stream to use when booting Jenkins X. See https://jenkins-x.io/docs/concepts/version-stream/ | string | `"https://github.com/jenkins-x/jenkins-x-versions.git"` | no |
| webhook | Jenkins X webhook handler for git provider | string | `"lighthouse"` | no |
| zone | Zone in which to create the cluster (deprecated, use cluster_location instead) | string | `""` | no |
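As an illustration of the inputs above, a module block overriding a few common defaults might look like this; all values are placeholders, not recommendations:

```hcl
module "jx" {
  source      = "jenkins-x/jx/google"
  gcp_project = "<my-gcp-project-id>"

  # A region (rather than a zone) yields a regional cluster with
  # masters spread across the region's zones.
  cluster_location  = "us-west1"
  cluster_name      = "my-jx-cluster"
  node_machine_type = "n1-standard-4"
  min_node_count    = 3
  max_node_count    = 10

  resource_labels = {
    team = "platform"
  }
}
```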
| Name | Description |
|------|-------------|
| backup_bucket_url | The URL to the bucket for backup storage |
| cluster_location | The location of the created Kubernetes cluster |
| cluster_name | The name of the created Kubernetes cluster |
| gcp_project | The GCP project in which the resources got created |
| log_storage_url | The URL to the bucket for log storage |
| report_storage_url | The URL to the bucket for report storage |
| repository_storage_url | The URL to the bucket for artifact storage |
| vault_bucket_url | The URL to the bucket for secret storage |
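These outputs can be referenced from the surrounding configuration in the usual Terraform way, for example to re-export values for other tooling; a hypothetical snippet:

```hcl
output "cluster_name" {
  description = "Name of the cluster created by the jx module"
  value       = module.jx.cluster_name
}

output "log_storage_url" {
  description = "Bucket URL for build log storage"
  value       = module.jx.log_storage_url
}
```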
An output of applying this Terraform module is a jx-requirements.yml file in the current directory. This file can be used as input to Jenkins X Boot (`jx boot`), which is responsible for installing all the required Jenkins X components into the cluster created by this module.

During the first run of `jx boot` a git repository containing the source code for Jenkins X Boot is created. This repository contains the jx-requirements.yml used by successive runs of `jx boot`.
Change into an empty directory and execute:
```sh
jx boot --requirements <path-to-jx-requirements.yml>
```
You are prompted for any further required configuration. The number of prompts depends on how much you have pre-configured via your Terraform variables.
If you want to use a custom domain with your Jenkins X installation, you need to provide values for the variables `parent_domain` and `tls_email`. `parent_domain` is the fully qualified domain name you want to use and `tls_email` is the email address you want to use for issuing Let's Encrypt TLS certificates.
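A sketch of a module block with a custom domain, where the domain and email address are placeholders:

```hcl
module "jx" {
  source      = "jenkins-x/jx/google"
  gcp_project = "<my-gcp-project-id>"

  # Custom domain and the email used for Let's Encrypt certificates.
  parent_domain = "example.jenkins-x.rocks"
  tls_email     = "admin@example.jenkins-x.rocks"
}
```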
Before you apply the Terraform configuration, you also need to create a Cloud DNS managed zone, with the DNS name in the managed zone matching your custom domain name, for example in the case of example.jenkins-x.rocks as the domain.
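The managed zone can be created in the Cloud Console, with gcloud, or with Terraform itself; a sketch using the google provider's `google_dns_managed_zone` resource, where the zone name and description are placeholders:

```hcl
resource "google_dns_managed_zone" "jx_parent" {
  name        = "jx-parent-zone"
  dns_name    = "example.jenkins-x.rocks." # note the trailing dot
  description = "Parent zone for the Jenkins X cluster"
}
```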
When creating the managed zone, a set of DNS servers get created which you need to specify in the DNS settings of your DNS registrar.
It is essential that your DNS server settings have propagated and your domain resolves before you run `jx boot`.
You can use DNS checker to check whether your domain settings have propagated.
When a custom domain is provided, Jenkins X uses ExternalDNS together with cert-manager to create A record entries in your managed zone for the various exposed applications.
If `parent_domain` is not set, your cluster will use nip.io in order to create publicly resolvable URLs of the form http://<app-name>-<environment-name>.<cluster-ip>.nip.io.
The configuration as seen in Cluster provisioning is not suited for creating and maintaining a production Jenkins X cluster. The following is a list of considerations for a production use case.
- Specify the version attribute of the module, for example:

  ```hcl
  module "jx" {
    source  = "jenkins-x/jx/google"
    version = "1.2.4"
    # insert your configuration
  }
  ```

  Specifying the version ensures that you are using a fixed version and that version upgrades cannot occur unintentionally.

- Keep the Terraform configuration under version control, by creating a dedicated repository for your cluster configuration or by adding it to an already existing infrastructure repository.

- Set up a Terraform backend to securely store and share the state of your cluster. For more information refer to Configuring a Terraform backend.
A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. By default, Terraform uses the local backend which keeps the state of the created resources on the local file system. This is problematic since sensitive information will be stored on disk and it is not possible to share state across a team. When working with Google Cloud a good choice for your Terraform backend is the gcs backend which stores the Terraform state in a Google Cloud Storage bucket. The examples directory of this repository contains configuration examples for using the gcs backend with and without an optionally configured customer-supplied encryption key.
To use the gcs backend you will need to create the bucket upfront. You can use `gsutil` to create the bucket:

```sh
gsutil mb gs://<my-bucket-name>/
```

It is also recommended to enable versioning on the bucket as an additional safety net in case of state corruption:

```sh
gsutil versioning set on gs://<my-bucket-name>
```

You can verify whether a bucket has versioning enabled via:

```sh
gsutil versioning get gs://<my-bucket-name>
```
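Once the bucket exists, the backend can be configured with a `terraform` block; a minimal sketch, where the bucket name and prefix are placeholders:

```hcl
terraform {
  backend "gcs" {
    bucket = "<my-bucket-name>"
    prefix = "jx/state"
  }
}
```

After adding the backend block, run `terraform init` so Terraform can initialize the backend and migrate any existing local state into the bucket.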
To upgrade the providers used by your configuration to the newest versions allowed by the version constraints, run:

```sh
terraform init -upgrade
```

You can pin the provider versions explicitly in your configuration:

```hcl
provider "google" {
  version = "~> 2.12.0"
  project = var.gcp_project
}

provider "google-beta" {
  version = "~> 2.12.0"
  project = var.gcp_project
}
```
The recommended way to authenticate to the Google Cloud API is by using a service account.
This allows for authentication regardless of where your code runs.
This Terraform module expects authentication via a service account key.
You can either specify the path to this key directly using the `GOOGLE_APPLICATION_CREDENTIALS` environment variable or you can run `gcloud auth application-default login`. In the latter case `gcloud` obtains user access credentials via a web flow and puts them in the well-known location for Application Default Credentials (ADC), usually ~/.config/gcloud/application_default_credentials.json.
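If you prefer not to rely on the environment, the google provider can also be pointed at a key file explicitly via its `credentials` argument; a sketch, where the key file path is a placeholder:

```hcl
provider "google" {
  # Read the service account key from an explicit path instead of ADC.
  credentials = file("/path/to/service-account-key.json")
  project     = var.gcp_project
}
```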
At the moment there is no release pipeline defined in jenkins-x.yml. A Terraform release does not require building an artifact; only a tag needs to be created and pushed. To make this task easier, there is a helper script `release.sh` which simplifies this process and creates the changelog as well:

```sh
./scripts/release.sh
```

This can be executed on demand whenever a release is required. For the script to work, the environment variable $GH_TOKEN must be exported and reference a valid GitHub API token.
Contributions are very welcome! Check out the Contribution Guidelines for instructions.