[Bug]: Converting DynamoDB from On Demand to Provisioned capacity while ignoring read/write capacity sets a default of 1 #38100
Comments
I'm not sure what the ideal behavior is for this, but it definitely shouldn't be to set a default magic value that is extremely low. I think it should either result in an error, or it should query the latest consumed read/write capacity units and use that as the initial value.
Big +1

We also caught this in production recently on a very big table.

PR 22630, a bad call?

I may be biased, but I am of the opinion this was a bad call (or things were different back then...). Reason being, Terraform is not only assuming that (1) changes are being ignored, but also that (2) the app_autoscaling resource is there to save the day. I agree (1) may be true in most cases, but I don't agree that (2) should be so naively assumed.

Scenario

I'll give you a scenario. Let's assume:
Impact in the real world

About documentation

Another point I feel is worth bringing up: in the documentation for write_capacity and read_capacity, it can be read:
But in reality there is nothing "requiring" the field, because the linked PR made it "safe" by defaulting the value to 1. At the very least, the documentation should be updated to say:
But then again...

Conclusion

I think this is a bad call, and yes, Terraform should be more pragmatic and just fail if those values are not passed (or are ignored) when switching billing_mode to PROVISIONED. WDYT? 😄
I also wonder if anything has changed recently in the way Terraform "unblocks" from the Modifying of the table.

Another path to potentially preventing this issue would be to make the error in app-autoscaling target creation a retryable error for cases when the "target resource doesn't exist". This would allow us to define the name of the table manually, which would remove the dependency in the graph and create the appautoscaling resource ASAP, without waiting for the DDB table to be ready.
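To illustrate that last idea, here is a minimal sketch (the table name and capacity numbers are hypothetical, not taken from this issue) of an autoscaling target whose resource_id is a hardcoded string rather than a reference to the aws_dynamodb_table resource:

```hcl
# Hypothetical sketch: hardcoding the table name (instead of
# interpolating aws_dynamodb_table.example.name) removes the implicit
# graph dependency, so Terraform could create this target without
# waiting for the slow billing-mode update on the table to finish.
resource "aws_appautoscaling_target" "table_read" {
  service_namespace  = "dynamodb"
  resource_id        = "table/my-big-table" # hardcoded, no reference
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  min_capacity       = 25
  max_capacity       = 1000
}
```

This only helps, of course, if the provider treats the "target resource doesn't exist" failure as retryable, which is exactly what this comment proposes.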
Terraform Core Version
1.7.1
AWS Provider Version
5.44
Affected Resource(s)
aws_dynamodb_table
Expected Behavior
When changing a DynamoDB table from On Demand to Provisioned capacity, an initial capacity value is required for reads and writes. If no value is provided when switching to provisioned capacity, it should result in an error.
Actual Behavior
The terraform-provider-aws docs state that these are required values:
However, the docs also state:
So the guidance is to ignore changes to read/write capacity, which makes sense. However, on the initial change to `PROVISIONED`, if these are ignored, the code seems to default the value to 1. On a very busy table, this will result in throttling until autoscaling kicks in.

This bit me hard today. It hurt.
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
Steps to Reproduce
billing_mode = "PAY_PER_REQUEST"
billing_mode = "PROVISIONED"
and setread_capacity
andwrite_capacity
to some value > 1lifecycle { ignore_changes = [read_capacity, write_capacity] }
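A minimal configuration sketch of that change (the table name, key schema, and capacity numbers are illustrative, not taken from this report):

```hcl
resource "aws_dynamodb_table" "example" {
  name     = "example-table"
  hash_key = "pk"

  attribute {
    name = "pk"
    type = "S"
  }

  # Step 1 used billing_mode = "PAY_PER_REQUEST" with no capacity set.
  # Step 2 switches to PROVISIONED with explicit capacities:
  billing_mode   = "PROVISIONED"
  read_capacity  = 100
  write_capacity = 100

  # Per the docs' guidance for autoscaled tables, capacity drift is
  # ignored. According to this report, the provider then falls back
  # to its default of 1 during the billing-mode switch.
  lifecycle {
    ignore_changes = [read_capacity, write_capacity]
  }
}
```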
Debug Output
No response
Panic Output
No response
Important Factoids
I believe the culprit is here:
Where `provisionedThroughputMinValue` is a constant equal to 1.
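Until this is addressed, one possible workaround (a sketch, assuming you can tolerate a single apply without the lifecycle block) is to stop ignoring the capacity attributes for the one apply that performs the switch, so the explicit values are sent instead of the default:

```hcl
resource "aws_dynamodb_table" "example" {
  name     = "example-table"
  hash_key = "pk"

  attribute {
    name = "pk"
    type = "S"
  }

  billing_mode   = "PROVISIONED"
  read_capacity  = 500 # sized to recent consumed capacity, not 1
  write_capacity = 500

  # ignore_changes is deliberately omitted for this one apply; restore
  # lifecycle { ignore_changes = [read_capacity, write_capacity] }
  # afterwards so autoscaling can manage the values again.
}
```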
References
No response
Would you like to implement a fix?
None