Merge branch 'terraform-google-modules:master' into fix/cluster-dns-provider
54nd20 authored Sep 25, 2024
2 parents c873b75 + 3a23cd4 commit 3fe6c07
Showing 276 changed files with 10,155 additions and 1,593 deletions.
1 change: 1 addition & 0 deletions .github/workflows/lint.yaml
@@ -18,6 +18,7 @@
name: 'lint'

on:
workflow_dispatch:
pull_request:
branches:
- master
7 changes: 4 additions & 3 deletions .github/workflows/stale.yml
@@ -1,4 +1,4 @@
# Copyright 2022-2023 Google LLC
# Copyright 2022-2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -25,9 +25,10 @@ jobs:
if: github.repository_owner == 'GoogleCloudPlatform' || github.repository_owner == 'terraform-google-modules'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
- uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days'
exempt-issue-labels: triaged,dependencies
exempt-issue-labels: 'triaged'
exempt-pr-labels: 'dependencies,autorelease: pending'
7 changes: 7 additions & 0 deletions .kitchen.yml
@@ -87,6 +87,13 @@ suites:
systems:
- name: simple_regional_with_gateway_api
backend: local
- name: "simple_regional_with_ipv6"
driver:
root_module_directory: test/fixtures/simple_regional_with_ipv6
verifier:
systems:
- name: simple_regional_with_ipv6
backend: local
- name: "simple_regional_with_kubeconfig"
driver:
root_module_directory: test/fixtures/simple_regional_with_kubeconfig
245 changes: 245 additions & 0 deletions CHANGELOG.md

Large diffs are not rendered by default.

10 changes: 9 additions & 1 deletion CODEOWNERS
@@ -1,4 +1,12 @@
# NOTE: This file is automatically generated from values at:
# https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit/blob/master/infra/terraform/test-org/org/locals.tf

* @terraform-google-modules/cft-admins @ericyz
* @terraform-google-modules/cft-admins @ericyz @gtsorbo

# NOTE: GitHub CODEOWNERS locations:
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#codeowners-and-branch-protection

CODEOWNERS @terraform-google-modules/cft-admins
.github/CODEOWNERS @terraform-google-modules/cft-admins
docs/CODEOWNERS @terraform-google-modules/cft-admins

2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -21,6 +21,8 @@ must be refreshed if the module interfaces are changed.

To more cleanly handle cases where desired functionality would require complex duplication of Terraform resources (i.e. [PR 51](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/51)), this repository is largely generated from the [`autogen`](/autogen) directory.

This uses [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/) under the hood for templating.

The root module is generated by running `make build`. Changes to this repository should be made in the [`autogen`](/autogen) directory where appropriate.
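
As an illustrative aside (not part of this change), the templates under [`autogen`](/autogen) interleave Jinja2 tags with ordinary Terraform; the beta-only addon flags that appear in the generated README excerpt later in this commit come from a conditional roughly like this sketch (`beta_cluster` and `autopilot_cluster` are template flags, not Terraform variables):

```hcl
  # Sketch of an autogen template fragment: plain HCL plus Jinja2 control tags.
{% if beta_cluster and autopilot_cluster != true %}
  istio    = true
  cloudrun = true
{% endif %}
  dns_cache = false
```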

Note: The correct sequence to update the repo using autogen functionality is to run
2 changes: 1 addition & 1 deletion Makefile
@@ -18,7 +18,7 @@
# Make will use bash instead of sh
SHELL := /usr/bin/env bash

DOCKER_TAG_VERSION_DEVELOPER_TOOLS := 1.18
DOCKER_TAG_VERSION_DEVELOPER_TOOLS := 1.22
DOCKER_IMAGE_DEVELOPER_TOOLS := cft/developer-tools
REGISTRY_URL := gcr.io/cloud-foundation-cicd
DOCKER_BIN ?= docker
98 changes: 73 additions & 25 deletions README.md

Large diffs are not rendered by default.

71 changes: 43 additions & 28 deletions autogen/main/README.md
@@ -90,33 +90,38 @@ module "gke" {
{% if beta_cluster and autopilot_cluster != true %}
istio = true
cloudrun = true
dns_cache = false
{% endif %}
dns_cache = false
{% if autopilot_cluster != true %}
node_pools = [
{
name = "default-node-pool"
machine_type = "e2-medium"
node_locations = "us-central1-b,us-central1-c"
min_count = 1
max_count = 100
local_ssd_count = 0
spot = false
name = "default-node-pool"
machine_type = "e2-medium"
node_locations = "us-central1-b,us-central1-c"
min_count = 1
max_count = 100
local_ssd_count = 0
spot = false
{% if beta_cluster %}
local_ssd_ephemeral_count = 0
local_ssd_ephemeral_count = 0
{% endif %}
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "COS_CONTAINERD"
enable_gcfs = false
enable_gvnic = false
logging_variant = "DEFAULT"
auto_repair = true
auto_upgrade = true
service_account = "project-service-account@<PROJECT ID>.iam.gserviceaccount.com"
preemptible = false
initial_node_count = 80
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "COS_CONTAINERD"
enable_gcfs = false
enable_gvnic = false
logging_variant = "DEFAULT"
auto_repair = true
auto_upgrade = true
service_account = "project-service-account@<PROJECT ID>.iam.gserviceaccount.com"
preemptible = false
initial_node_count = 80
accelerator_count = 1
accelerator_type = "nvidia-l4"
gpu_driver_version = "LATEST"
gpu_sharing_strategy = "TIME_SHARING"
max_shared_clients_per_gpu = 2
},
]
@@ -192,34 +197,39 @@ The node_pools variable takes the following parameters:
| autoscaling | Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage | true | Optional |
| auto_upgrade | Whether the nodes will be automatically upgraded | true (if cluster is regional) | Optional |
| boot_disk_kms_key | The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. | " " | Optional |
{% if beta_cluster %}
| cpu_manager_policy | The CPU manager policy on the node. One of "none" or "static". | "static" | Optional |
| cpu_cfs_quota | Enforces the Pod's CPU limit. Setting this value to false means that the CPU limits for Pods are ignored | null | Optional |
| cpu_cfs_quota_period | The CPU CFS quota period value, which specifies the period of how often a cgroup's access to CPU resources should be reallocated | null | Optional |
| enable\_confidential\_nodes | An optional flag to enable confidential node config. | `bool` | `false` | no |
{% endif %}
| pod_pids_limit | Controls the maximum number of processes allowed to run in a pod. The value must be greater than or equal to 1024 and less than 4194304. | null | Optional |
| enable_confidential_nodes | An optional flag to enable confidential node config. | false | Optional |
| disk_size_gb | Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB | 100 | Optional |
| disk_type | Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd') | pd-standard | Optional |
| effect | Effect for the taint | | Required |
| enable_gcfs | Google Container File System (gcfs) has to be enabled for image streaming to be active. Needs image_type to be set to COS_CONTAINERD. | false | Optional |
| enable_gvnic | gVNIC (GVE) is an alternative to the virtIO-based ethernet driver. Needs a Container-Optimized OS node image. | false | Optional |
| enable_integrity_monitoring | Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created. | true | Optional |
| enable_secure_boot | Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. | false | Optional |
| gpu_driver_version | Mode for how the GPU driver is installed | null | Optional |
| gpu_partition_size | Size of partitions to create on the GPU | null | Optional |
| image_type | The image type to use for this node. Note that changing the image type will delete and recreate all nodes in the node pool | COS_CONTAINERD | Optional |
| initial_node_count | The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. Defaults to the value of min_count | " " | Optional |
| key | The key required for the taint | | Required |
| logging_variant | The type of logging agent that is deployed by default for newly created node pools in the cluster. Valid values include DEFAULT and MAX_THROUGHPUT. | DEFAULT | Optional |
| local_ssd_count | The amount of local SSD disks that will be attached to each cluster node and may be used as a `hostpath` volume or a `local` PersistentVolume. | 0 | Optional |
| local_ssd_ephemeral_storage_count | The amount of local SSD disks that will be attached to each cluster node and assigned as scratch space as an `emptyDir` volume. If unspecified, ephemeral storage is backed by the cluster node boot disk. | 0 | Optional |
{% if beta_cluster %}
| local_ssd_ephemeral_count | The amount of local SSD disks that will be attached to each cluster node and assigned as scratch space as an `emptyDir` volume. If unspecified, ephemeral storage is backed by the cluster node boot disk. | 0 | Optional |
{% endif %}
| local_nvme_ssd_count | Number of raw-block local NVMe SSD disks to be attached to the node. Each local SSD is 375 GB in size. If zero, no raw-block local NVMe SSD disks are attached to the node. | 0 | Optional |
| machine_type | The name of a Google Compute Engine machine type | e2-medium | Optional |
| min_cpu_platform | Minimum CPU platform to be used by the nodes in the pool. The nodes may be scheduled on the specified or newer CPU platform. | " " | Optional |
| enable_confidential_storage | Enabling Confidential Storage will create the boot disk with confidential mode. | false | Optional |
| max_count | Maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with total limits. | 100 | Optional |
| total_max_count | Total maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with per zone limits. | null | Optional |
| max_pods_per_node | The maximum number of pods per node in this cluster | null | Optional |
| strategy | The upgrade strategy to be used for upgrading the nodes. Valid values are: `SURGE` or `BLUE_GREEN` | "SURGE" | Optional |
| threads_per_core | The number of threads per physical core. To disable simultaneous multithreading (SMT), set this to 1. If unset, the maximum number of threads supported per core by the underlying processor is assumed. | null | Optional |
| enable_nested_virtualization | Whether the node should have nested virtualization | null | Optional |
| max_surge | The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater. Only works with `SURGE` strategy. | 1 | Optional |
| max_unavailable | The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater. Only works with `SURGE` strategy. | 0 | Optional |
| node_pool_soak_duration | Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. By default, it is set to one hour (3600 seconds). The maximum length of the soak time is 7 days (604,800 seconds). Only works with `BLUE_GREEN` strategy. | "3600s" | Optional |
@@ -229,13 +239,11 @@ The node_pools variable takes the following parameters:
| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with total limits. | 1 | Optional |
| total_min_count | Total minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with per zone limits. | null | Optional |
| name | The name of the node pool | | Required |
{% if beta_cluster %}
| placement_policy | Placement type to set for nodes in a node pool. Can be set as [COMPACT](https://cloud.google.com/kubernetes-engine/docs/how-to/compact-placement#overview) if desired | Optional |
| placement_policy | Placement type to set for nodes in a node pool. Can be set as [COMPACT](https://cloud.google.com/kubernetes-engine/docs/how-to/compact-placement#overview) if desired | | Optional |
| pod_range | The name of the secondary range for pod IPs. | | Optional |
{% if not private_cluster %}
| enable_private_nodes | Whether nodes have internal IP addresses only. | | Optional |
{% endif %}
{% endif %}
| node_count | The number of nodes in the nodepool when autoscaling is false. Otherwise defaults to 1. Only valid for non-autoscaling clusters | | Required |
| node_locations | The list of zones in which the cluster's nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. Defaults to cluster level node locations if nothing is specified | " " | Optional |
| node_metadata | Options to expose the node metadata to the workload running on the node | | Optional |
Expand All @@ -249,6 +257,13 @@ The node_pools variable takes the following parameters:
| value | The value for the taint | | Required |
| version | The Kubernetes version for the nodes in this pool. Should only be set if auto_upgrade is false | " " | Optional |
| location_policy | [Location policy](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_node_pool#location_policy) specifies the algorithm used when scaling-up the node pool. Location policy is supported only in 1.24.1+ clusters. | " " | Optional |
| secondary_boot_disk | Image of a secondary boot disk to preload container images and data on new nodes. For details, see the [documentation](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#nested_secondary_boot_disks). `gcfs_config` must be `enabled=true` for this feature to work. | | Optional |
| queued_provisioning | Makes nodes obtainable through the ProvisioningRequest API exclusively. | | Optional |
| gpu_sharing_strategy | The type of GPU sharing strategy to enable on the GPU node. Accepted values are: "TIME_SHARING" and "MPS". | | Optional |
| max_shared_clients_per_gpu | The maximum number of containers that can share a GPU. | | Optional |
| consume_reservation_type | The type of reservation consumption. Accepted values are: "UNSPECIFIED": Default value (should not be specified). "NO_RESERVATION": Do not consume from any reserved capacity, "ANY_RESERVATION": Consume any reservation available, "SPECIFIC_RESERVATION": Must consume from a specific reservation. Must specify key value fields for specifying the reservations. | | Optional |
| reservation_affinity_key | The label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, specify "compute.googleapis.com/reservation-name" as the key and specify the name of your reservation as its value. | | Optional |
| reservation_affinity_values | The list of label values of reservation resources. For example: the name of the specific reservation when using a key of "compute.googleapis.com/reservation-name". This should be passed as comma separated string. | | Optional |
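
For example, a hedged sketch of a `node_pools` entry that combines the GPU-sharing and reservation-affinity parameters documented above (the machine type and reservation name are placeholders, not values from this commit):

```hcl
node_pools = [
  {
    name                        = "gpu-time-shared-pool"
    machine_type                = "g2-standard-4"          # placeholder
    accelerator_count           = 1
    accelerator_type            = "nvidia-l4"
    gpu_driver_version          = "LATEST"
    gpu_sharing_strategy        = "TIME_SHARING"
    max_shared_clients_per_gpu  = 2
    consume_reservation_type    = "SPECIFIC_RESERVATION"
    reservation_affinity_key    = "compute.googleapis.com/reservation-name"
    reservation_affinity_values = "my-reservation"         # placeholder, passed as a comma-separated string
  },
]
```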

## windows_node_pools variable
The windows_node_pools variable takes the same parameters as [node_pools](#node\_pools-variable) but is reserved for provisioning Windows based node pools only. This variable is introduced to satisfy a [specific requirement](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows#create_a_cluster_and_node_pools) for the presence of at least one linux based node pool in the cluster before a windows based node pool can be created.
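
As a hedged illustration, a minimal `windows_node_pools` entry reuses the same parameter names; the Windows image type below is an assumption and should be checked against the GKE Windows node images supported in your cluster version:

```hcl
windows_node_pools = [
  {
    name         = "windows-pool"
    machine_type = "n1-standard-4"             # placeholder
    image_type   = "WINDOWS_LTSC_CONTAINERD"   # assumed Windows node image type
    min_count    = 1
    max_count    = 3
  },
]
```
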
@@ -272,9 +287,9 @@ The [project factory](https://github.com/terraform-google-modules/terraform-goog
#### Terraform and Plugins
- [Terraform](https://www.terraform.io/downloads.html) 1.3+
{% if beta_cluster %}
- [Terraform Provider for GCP Beta][terraform-provider-google-beta] v5
- [Terraform Provider for GCP Beta][terraform-provider-google-beta] v5.9+
{% else %}
- [Terraform Provider for GCP][terraform-provider-google] v5
- [Terraform Provider for GCP][terraform-provider-google] v5.9+
{% endif %}
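
As a hedged sketch, the requirements above translate into a root-module constraint along these lines (the exact bounds used by the module live in its `versions.tf`; the beta submodules pin `hashicorp/google-beta` instead):

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.9"
    }
  }
}
```
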
#### gcloud
Some submodules use the [terraform-google-gcloud](https://github.com/terraform-google-modules/terraform-google-gcloud) module. By default, this module assumes you already have gcloud installed in your $PATH.