
google_compute_backend_service: changing CDN policy cache_mode from CACHE_ALL_STATIC to FORCE_CACHE_ALL fails to apply due to max_ttl configuration #20661

Open
raffaelenewesis opened this issue Dec 11, 2024 · 1 comment
Labels: forward/linked · persistent-bug · plugin-framework · service/compute-l7-load-balancer

Comments


raffaelenewesis commented Dec 11, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to a user, that user is claiming responsibility for the issue.
  • Customers working with a Google Technical Account Manager or Customer Engineer can ask them to reach out internally to expedite investigation and resolution of this issue.

Terraform Version & Provider Version(s)

Terraform v1.9.8
on darwin_arm64

  • provider registry.terraform.io/hashicorp/google v6.11.1

Affected Resource(s)

google_compute_backend_service

Terraform Configuration

resource "google_compute_backend_service" "this" {
  name                    = local.backend_service_name
  custom_request_headers  = var.custom_request_headers
  custom_response_headers = var.custom_response_headers
  description             = var.description
  enable_cdn              = var.is_cdn_enabled
  load_balancing_scheme   = var.load_balancing_scheme
  compression_mode        = var.compression_mode
  protocol                = var.protocol

  cdn_policy {
    cache_mode        = var.cache.mode
    default_ttl       = var.cache.default_ttl
    client_ttl        = var.cache.client_ttl
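    # max_ttl is only valid with CACHE_ALL_STATIC, so it is dropped for any other cache mode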
    max_ttl           = var.cache.mode == "CACHE_ALL_STATIC" ? var.cache.max_ttl : null
    serve_while_stale = var.serve_while_stale_in_seconds
    cache_key_policy {
      include_host           = var.cache.include_host
      include_protocol       = var.cache.include_protocol
      include_query_string   = var.cache.include_query_string
      query_string_whitelist = var.cache.query_string_whitelist
    }
  }
  ...
}

Debug Output

│ Error: Error updating BackendService "projects/[REDACTED]/global/backendServices/backend-svc-secondary-page-builder-fe-api-liv-golf-prd-ue4": googleapi: Error 400: Invalid value for field 'resource.cdnPolicy.maxTtl': '600'. max_ttl must be specified with CACHE_ALL_STATIC cache_mode only., invalid
│
│   with module.secondary_backend_service["secondary-page-builder-fe-api"].google_compute_backend_service.this,
│   on ../modules/network/backend_service/resource-backend_service.tf line 1, in resource "google_compute_backend_service" "this":
│    1: resource "google_compute_backend_service" "this" {

│ Error: Error updating BackendService "projects/[REDACTED]/global/backendServices/backend-svc-primary-page-builder-fe-api-liv-golf-prd-ue1": googleapi: Error 400: Invalid value for field 'resource.cdnPolicy.maxTtl': '600'. max_ttl must be specified with CACHE_ALL_STATIC cache_mode only., invalid
│
│   with module.primary_backend_service["primary-page-builder-fe-api"].google_compute_backend_service.this,
│   on ../modules/network/backend_service/resource-backend_service.tf line 1, in resource "google_compute_backend_service" "this":
│    1: resource "google_compute_backend_service" "this" {

Expected Behavior

Terraform should be able to apply CDN policy changes to the backend service.

Actual Behavior

The update fails, stating that a max_ttl value (600s) has been provided, even though, as the configuration above shows, the value is set to null whenever cache_mode is not "CACHE_ALL_STATIC".

Steps to reproduce

  1. Apply the configuration with cache_mode = "CACHE_ALL_STATIC" and a max_ttl set.
  2. Change the cache mode to "FORCE_CACHE_ALL", which makes the conditional resolve max_ttl to null.
  3. Run terraform apply (see the sketch below).
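
A minimal sketch of the variable change that triggers the error, assuming var.cache is an object variable; all values are illustrative except the 600-second max_ttl, which matches the debug output above:

# First apply: cache_mode is CACHE_ALL_STATIC, so max_ttl (600) is sent and stored.
cache = {
  mode                   = "CACHE_ALL_STATIC"
  default_ttl            = 300
  client_ttl             = 300
  max_ttl                = 600
  include_host           = true
  include_protocol       = true
  include_query_string   = true
  query_string_whitelist = []
}

# Second apply: the conditional in cdn_policy evaluates max_ttl to null, yet the
# provider still sends the previously stored 600, and the API rejects the update.
cache = {
  mode                   = "FORCE_CACHE_ALL"
  default_ttl            = 300
  client_ttl             = 300
  max_ttl                = null
  include_host           = true
  include_protocol       = true
  include_query_string   = true
  query_string_whitelist = []
}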

Important Factoids

No response

References

See issue #10560, which is somewhat similar in behavior.

b/383901730

c2thorn (Collaborator) commented Dec 13, 2024

max_ttl has the provider property of Optional + Computed, which lets it default to the API's value when it is not set in the configuration. Unfortunately, this comes with the side effect that the provider SDK cannot tell whether you want to remove a value or let it be managed by the API default. This side effect is a known problem of the SDK and may not be fixed until we migrate the provider over to a newer framework.
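
A sketch of the ambiguity in terms of the reporter's configuration (the comments restate the SDK behavior described above):

cdn_policy {
  # After the change, cache_mode is FORCE_CACHE_ALL and the conditional below
  # evaluates to null. To the SDK, null is indistinguishable from "unset, defer
  # to the API default", so it falls back to the 600 stored in state and sends
  # it anyway, which the API rejects for this cache mode.
  cache_mode = var.cache.mode
  max_ttl    = var.cache.mode == "CACHE_ALL_STATIC" ? var.cache.max_ttl : null
}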

This is exactly the problem described in #14903.

One option would be to add a custom encoder to the resource that checks whether this specific field has been explicitly removed from the user's configuration. This is less preferable than the general migration mentioned above and should only be done in the interim for fields where users frequently run into this.

@raffaelenewesis, since this is an update to an existing resource, would you be able to remove the max_ttl value outside of Terraform so that this error doesn't appear when you perform the update?
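
For reference, a workaround sometimes suggested for Optional + Computed conflicts like this one (not proposed in this thread, and untested here) is to pin the field to its zero value rather than null; whether the provider then omits the field from the request, and whether the API accepts the update, are assumptions you would need to verify:

cdn_policy {
  cache_mode = var.cache.mode
  # Hypothetical workaround: send 0 instead of null so the SDK does not fall
  # back to the stale value stored in state. Untested; verify against the API.
  max_ttl = var.cache.mode == "CACHE_ALL_STATIC" ? var.cache.max_ttl : 0
}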

c2thorn added the plugin-framework and persistent-bug labels and removed the forward/review label on Dec 13, 2024.
c2thorn removed the bug label on Dec 13, 2024.