
Managed Service for Prometheus is enabled when creating the GKE clusters with monitoring_enable_managed_prometheus set to false #2039

Closed
awx-fuyuanchu opened this issue Aug 14, 2024 · 2 comments
Labels
bug (Something isn't working), Stale

Comments

@awx-fuyuanchu

awx-fuyuanchu commented Aug 14, 2024

TL;DR

The version we are using is beta-private-cluster v31.0.0.

Recently, we created several GKE clusters with monitoring_enable_managed_prometheus set to false.

However, Managed Service for Prometheus was enabled, ignoring the monitoring_enable_managed_prometheus flag.

This is the monitoring_config block from the Terraform state:

    monitoring_config {
        enable_components = [
            "SYSTEM_COMPONENTS",
            "HPA",
            "POD",
            "DAEMONSET",
            "DEPLOYMENT",
            "STATEFULSET",
            "STORAGE",
            "CADVISOR",
            "KUBELET",
        ]

        advanced_datapath_observability_config {
            enable_metrics = false
            enable_relay   = false
            relay_mode     = "DISABLED"
        }

        managed_prometheus {
            enabled = true
        }
    }
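
For reference, the block above was pulled from the state of the cluster resource. The exact resource address depends on the module internals, so the address below is an assumption (the module instance is named "gke" in the configuration further down):

# List resource addresses to locate the cluster resource created by the module.
> terraform state list | grep google_container_cluster

# Dump that resource and inspect its monitoring_config / managed_prometheus block.
> terraform state show 'module.gke.google_container_cluster.primary'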

Expected behavior

Managed Service for Prometheus should not be enabled when monitoring_enable_managed_prometheus is set to false.
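
In other words, after an apply with the flag set to false, the monitoring_config in state should look roughly like this (same schema as the block quoted above, component list elided):

    monitoring_config {
        # enable_components as configured for the cluster

        managed_prometheus {
            enabled = false
        }
    }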

Observed behavior

Managed Service for Prometheus is enabled when creating new clusters with monitoring_enable_managed_prometheus set to false.

Terraform Configuration

module "gke" {
  source = "github.com/terraform-google-modules/terraform-google-kubernetes-engine.git//modules/beta-private-cluster?ref=v31.0.0"
  regional                 = true
  enable_private_nodes     = true
  network_policy           = var.network_policy
  create_service_account   = false
  remove_default_node_pool = true
  node_metadata            = "GKE_METADATA_SERVER"
  # This is to support workload identity
  enable_pod_security_policy = var.enable_pod_security_policy
  project_id                 = var.project_id
  name                       = local.cluster_name
  service_account            = local.gke_service_account
  identity_namespace         = local.identity_namespace
  region                     = var.region
  network                    = var.network
  subnetwork                 = var.subnetwork
  network_project_id         = var.network_project_id
  ip_range_pods              = var.ip_range_pods
  ip_range_services          = var.ip_range_services
  enable_private_endpoint    = var.enable_private_endpoint
  node_pools                 = var.node_pools
  node_pools_labels          = var.node_pools_labels
  node_pools_oauth_scopes    = var.node_pools_oauth_scopes
  node_pools_metadata        = var.node_pools_metadata
  node_pools_taints          = var.node_pools_taints
  node_pools_tags            = local.node_pools_tags
  kubernetes_version         = var.kubernetes_version
  master_ipv4_cidr_block     = var.master_cidr_block
  default_max_pods_per_node  = var.default_max_pods_per_node
  # Whether L4ILB Subsetting is enabled for this cluster.
  enable_l4_ilb_subsetting = var.enable_l4_ilb_subsetting
  # disable external access if we use the master's internal IP as the endpoint of the cluster
  master_authorized_networks          = local.master_authorized_networks
  istio                               = var.enable_istio
  istio_auth                          = "AUTH_MUTUAL_TLS"
  cloudrun                            = var.enable_cloudrun
  release_channel                     = local.release_channel
  maintenance_start_time              = var.maintenance_start_time
  maintenance_end_time                = var.maintenance_end_time
  maintenance_recurrence              = var.maintenance_recurrence
  maintenance_exclusions              = var.maintenance_exclusions
  authenticator_security_group        = var.authenticator_security_group
  logging_service                     = var.logging_service
  logging_enabled_components          = var.logging_enabled_components
  cluster_autoscaling                 = local.cluster_autoscaling
  notification_config_topic           = var.notification_config_topic
  workload_config_audit_mode          = var.workload_config_audit_mode
  network_tags                        = var.auto_provisioning_network_tags
  security_posture_vulnerability_mode = var.security_posture_vulnerability_mode
  security_posture_mode               = var.security_posture_mode
  firewall_priority                    = var.firewall_priority
  firewall_inbound_ports               = var.firewall_inbound_ports
  cluster_resource_labels              = local.cluster_resource_labels
  gce_pd_csi_driver                    = var.gce_pd_csi_driver
  datapath_provider                    = var.datapath_provider
  dns_cache                            = var.dns_cache
  monitoring_enable_managed_prometheus = false
  enable_resource_consumption_export   = var.enable_resource_consumption_export
  resource_usage_export_dataset_id     = var.resource_usage_export_dataset_id
  # set the output endpoint to the master's internal IP
  deploy_using_private_endpoint = var.deploy_using_private_endpoint
  # for workload cost insights
  enable_cost_allocation = var.enable_cost_allocation
  gateway_api_channel    = var.gateway_api_channel
  deletion_protection    = var.deletion_protection
}
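
To double-check what actually got applied, both Terraform and the GKE API can be queried. The cluster name and region below are placeholders and the monitoringConfig field name is assumed from the GKE cluster API, so treat this as a sketch:

# Re-plan and see whether Terraform wants to turn managed_prometheus back off.
> terraform plan | grep -A 2 managed_prometheus

# Ask the GKE API what the cluster actually ended up with.
> gcloud container clusters describe CLUSTER_NAME --region REGION --format="yaml(monitoringConfig)"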

Terraform Version

> terraform version
Terraform v1.4.0
on darwin_amd64

Additional information

No response

@awx-fuyuanchu awx-fuyuanchu added the bug Something isn't working label Aug 14, 2024
@awx-fuyuanchu awx-fuyuanchu changed the title from "Managed Service for Prometheus was enabled even monitoring_enable_managed_prometheus set to false" to "Managed Service for Prometheus is enabled when creating the GKE clusters with monitoring_enable_managed_prometheus set to false" Aug 14, 2024
@DrFaust92
Contributor

Related to #1894.

I'll try to create a fresh PR for this + test
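
For context, the fix presumably just needs to wire the module variable through to the provider block. A minimal sketch of the intended wiring inside the module (not the module's actual source; the variable name is taken from the inputs above):

    monitoring_config {
        # enable_components wiring omitted

        managed_prometheus {
            # should follow the module input instead of being forced on
            enabled = var.monitoring_enable_managed_prometheus
        }
    }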


This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.

@github-actions github-actions bot added the Stale label Oct 14, 2024
@github-actions github-actions bot closed this as not planned (Won't fix, can't repro, duplicate, stale) Oct 21, 2024