
disable-legacy-endpoints forces new resource #3230

Closed
brettcurtis opened this issue Mar 13, 2019 · 2 comments

@brettcurtis

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.10

  • provider.google v2.1.0
  • provider.google-beta v2.1.0
  • provider.kubernetes v1.3.0
  • provider.local v1.1.0
  • provider.null v1.0.0
  • provider.random v2.0.0
  • provider.template v1.0.0

Affected Resource(s)

google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "k8s_cluster" {
  provider = "google-beta"
  name     = "k8s-cluster-${var.region}"
  project  = "${google_project.k8s_project.project_id}"
  region   = "${var.region}"

  min_master_version = "${var.kubernetes_version}"
  node_version       = "${var.kubernetes_version}"
  logging_service    = "${var.kubernetes_logging_service}"
  monitoring_service = "${var.kubernetes_monitoring_service}"

  addons_config {
    istio_config {
      disabled = false
    }
  }

  node_pool {
    name = "default-pool"

    node_config {
      machine_type = "${var.machine_type}"

      oauth_scopes = [
        "https://www.googleapis.com/auth/cloud-platform",
        "https://www.googleapis.com/auth/compute",
        "https://www.googleapis.com/auth/devstorage.read_write",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring",
      ]
    }

    initial_node_count = "${var.node_count}"

    autoscaling {
      min_node_count = "${var.min_node_count}"
      max_node_count = "${var.max_node_count}"
    }

    management {
      auto_repair  = "true"
      auto_upgrade = "false"
    }
  }

  timeouts {
    update = "20m"
  }

  depends_on = ["google_project_service.k8s_service"]
}

Expected Behavior

A new cluster using the above configuration creates fine; however, if you run plan again, Terraform wants to recreate the cluster because of the following:

    node_pool.0.node_config.0.metadata.%:                        "1" => "0" (forces new resource)
    node_pool.0.node_config.0.metadata.disable-legacy-endpoints: "true" => "" (forces new resource)

The problem with simply adding the metadata is that, when upgrading existing clusters, they are all forced to be new resources. There seems to be no way to get the old behavior, which offered an in-place upgrade option.

Specifically, I'm going from 1.11.5-gke.5 to 1.12.5-gke.10.
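One possible mitigation (my own untested sketch, not something confirmed in this thread) would be telling Terraform to ignore drift on the node pool metadata via a lifecycle block, so the server-added entry no longer forces replacement. The attribute path below is an assumption based on the plan output above:

```hcl
resource "google_container_cluster" "k8s_cluster" {
  # ... existing arguments as in the configuration above ...

  lifecycle {
    # Assumption: ignoring the node pool metadata map suppresses the
    # "forces new resource" diff shown in the plan output. Untested here.
    ignore_changes = ["node_pool.0.node_config.0.metadata"]
  }
}
```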

Steps to Reproduce

  1. terraform apply
  2. Run terraform plan again
@petervandenabeele

This is a possible work-around ...

Because I am using https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway, which fails with the google 2.x provider due to GoogleCloudPlatform/terraform-google-nat-gateway#112, I have to stay on google provider v1.20.0 for now.

$ terraform version
Terraform v0.11.13
+ provider.google v1.20.0  ## OLD VERSION
+ provider.kubernetes v1.5.2
+ provider.null v1.0.0
+ provider.template v1.0.0

Combined with a fresh GKE v1.12.5 cluster, I was able to add this to the google_container_cluster definition; it works for creating new clusters, and terraform plan on such a newly created cluster no longer proposes a recreation.

  node_config {
    metadata {
      disable-legacy-endpoints = "true"
    }
  ...

In my experience, this issue:

  • only impacted a fresh 1.12.5-gke.5 cluster that I made this morning
  • did not impact older clusters that I had upgraded from 1.11.7 to 1.12.5, but not recreated
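For completeness, the commenter's fix applied to the original google-beta 2.x configuration would look roughly like this (a sketch assuming Terraform 0.11 map-block syntax; per the report above, adding this to an existing cluster may still force recreation rather than an in-place upgrade):

```hcl
node_pool {
  name = "default-pool"

  node_config {
    machine_type = "${var.machine_type}"

    # Explicitly pin the metadata entry that GKE 1.12+ adds server-side,
    # so a subsequent `terraform plan` no longer sees it as drift.
    metadata {
      disable-legacy-endpoints = "true"
    }
  }
}
```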

@ghost

ghost commented Apr 13, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 13, 2019