
Notice: Users of aws_s3_bucket_lifecycle_configuration should update provider version #41126

Open
breathingdust opened this issue Jan 28, 2025 · 4 comments
Labels
service/s3 Issues and PRs that pertain to the s3 service.

Comments

@breathingdust
Member

breathingdust commented Jan 28, 2025

Notice

NOTE: This is advance notice of an upcoming provider version, expected in the next few days, that will resolve this issue. This issue will be updated with the confirmed version number once it is ready.

Users relying on the aws_s3_bucket_lifecycle_configuration resource should update to the yet-to-be-published provider version and ensure that the transition_default_minimum_object_size attribute is explicitly set in their configurations.
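
For those who pin provider versions, a minimal sketch of the constraint to update is below. The version constraint is a placeholder until the fixed release number is posted in this issue; 5.70.0 is only the release that introduced the attribute.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Placeholder: replace with the release containing the fix once its
      # version number is announced in this issue.
      version = ">= 5.70.0"
    }
  }
}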

Description

In September 2024, Amazon S3 updated the default transition behavior for small objects. Full details can be found here. The Terraform AWS Provider was modified in version 5.70.0 via #39578 to support this new behavior and to let end users opt into the legacy behavior if required.
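
As an illustration of what the opt-in looks like in practice, a configuration that explicitly pins the legacy behavior might resemble the sketch below (bucket and rule names are placeholders, not taken from this issue):

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  # Explicitly choose one of the two supported values:
  #   "varies_by_storage_class"  - the pre-September-2024 default
  #   "all_storage_classes_128K" - the new S3 default
  transition_default_minimum_object_size = "varies_by_storage_class"

  rule {
    id     = "tiering-example"
    status = "Enabled"

    # An empty filter applies the rule to all objects in the bucket.
    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
  }
}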

Unfortunately, the current implementation of this feature can produce unexpected results in some specific situations: object transitions may be enabled for smaller objects even when not explicitly configured, which can result in increased and unplanned costs.

This issue arises because of an interaction between the S3 API and the terraform-plugin-sdk upon which this resource is built. This leads to the configuration using the existing setting returned by the AWS API rather than the new default when not explicitly configured. This is an atypical interaction, and not something the terraform-plugin-sdk allows us to handle correctly.

To remediate this, we will be reimplementing the aws_s3_bucket_lifecycle_configuration resource using the new terraform-plugin-framework, which allows us to handle this behavior correctly. We expect this to be ready in this week's release.

For more information see this detailed bug report: #41073

References

Affected Resource(s) and/or Data Source(s)

  • aws_s3_bucket_lifecycle_configuration
@breathingdust breathingdust added the service/s3 Issues and PRs that pertain to the s3 service. label Jan 28, 2025

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@breathingdust breathingdust pinned this issue Jan 28, 2025
@breathingdust breathingdust changed the title Notice: Users of aws_s3_bucket_lifecycle_configuration should update to the latest provider version. Notice: Users of aws_s3_bucket_lifecycle_configuration will need to update provider version Jan 28, 2025
@wellsiau-aws
Contributor

If you are affected by this issue, the best workaround right now (while waiting for the new provider update) is to explicitly set the behavior according to your intent.

For example, if you wish to keep the previous default transition behavior:

resource "aws_s3_bucket_lifecycle_configuration" "bucket-config" {
  bucket = aws_s3_bucket.this.bucket
  transition_default_minimum_object_size = "varies_by_storage_class"
  . . .
}

Or, if you wish to use the updated default transition behavior:

resource "aws_s3_bucket_lifecycle_configuration" "bucket-config" {
  bucket = aws_s3_bucket.this.bucket
  transition_default_minimum_object_size = "all_storage_classes_128K"
  . . .
}

@breathingdust breathingdust changed the title Notice: Users of aws_s3_bucket_lifecycle_configuration will need to update provider version Notice: Users of aws_s3_bucket_lifecycle_configuration should update provider version Jan 29, 2025
jameshochadel added a commit to cloud-gov/terraform-provision that referenced this issue Feb 10, 2025
AWS changed the default for this property from varies_by_storage_class to all_storage_classes_128K. For now we are setting this field explicitly so we keep the old behavior.

- The AWS provider has an issue describing this change here: hashicorp/terraform-provider-aws#41126
- AWS docs here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-lifecycleconfiguration.html
- Slack thread discussing the change here: https://gsa-tts.slack.com/archives/C0ENP71UG/p1739218411084849
@Michagogo
Contributor

Re: #41329 and #41335, AWS changed the default setting for newly created resources (whether that option was first introduced at that point, I haven’t followed, since I haven’t used transition rules recently), but very specifically didn’t change anything proactively for existing resources. Terraform should behave the same way: it shouldn’t be making changes unilaterally to configurations that haven’t been touched.

In some cases (e.g. default route tables) there are intentional, documented cases of the provider making changes by default, or setting certain default values differently from the service’s defaults. When that’s not the case, though, an argument that is not provided should not cause the provider to attempt to manage that property; it should let the current situation stand. Just because a certain optional API parameter generally has a particular default behavior doesn’t mean that parameter should always be sent with that default value, precisely because of situations like this one, where a change in the default does not mean that all existing resources should be forcibly updated to the new default as soon as anything anywhere in the configuration (including completely unrelated sections/modules) is touched.

@Michagogo
Contributor

Michagogo commented Feb 11, 2025

From the original issue text:

This issue arises because of an interaction between the S3 API and the terraform-plugin-sdk upon which this resource is built. This leads to the configuration using the existing setting returned by the AWS API rather than the new default when not explicitly configured. This is an atypical interaction, and not something the terraform-plugin-sdk allows us to handle correctly.

Unless I’m misunderstanding this paragraph, that’s precisely the desired behavior. When not explicitly configured, it absolutely should use the existing setting returned by the API. After all, newly created configurations will, if not otherwise specified, inherently be created using the new default, and older ones will be left untouched, which is the service’s behavior for good reason.

And regarding the later comment:

If you are affected by this issue, the best workaround right now (while waiting for the new provider update) is to explicitly set the behavior according to your intent.

In my present case, I have no intent one way or the other, as the feature isn’t applicable to the environment I’m working on at the moment (which doesn’t use transition lifecycle rules). My requirement is to avoid any changes arising (both unexpected changes in the plan and code changes that aren’t part of my current tasks) that I would then need to justify and explain unnecessarily.
