4 changes: 4 additions & 0 deletions gems/aws-sdk-core/CHANGELOG.md
@@ -1,9 +1,13 @@
Unreleased Changes
------------------

* Feature - Add `AWS_NEW_RETRIES_2026` environment variable to opt-in to updated `standard` retry mode with reduced backoff intervals.
Contributor
Nice, have we considered adding a similar changelog entry for DynamoDB, since it has service-specific retryable behavior, or is that not needed?

Now that I think about it - are service-specific behaviors documented anywhere?

Contributor
I also have a question about users who are already on `standard` for their retry_mode - do they see any changes from this update?

Contributor Author

@richardwang1124 May 15, 2026
Can I add a DynamoDB changelog entry even though only core was updated? We could extend the core changelog entry to mention that DynamoDB defaults change when the new retry behavior is enabled. I don't believe any service-specific behaviors are documented anywhere yet. Externally, I think the blog post should mention this new DynamoDB behavior; internally, I could add more comments or documentation.

Customers who are already on standard and opt in to the new retries will feel a difference. Due to the updated backoff timing, retries will be much faster. Throttling behavior stays the same, and due to the updated retry quota draining, customers will fail faster during sustained service errors - but this is intentional, to help services recover faster.
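A rough sketch of the "fail faster" quota math, using the token costs from this PR's RetryQuota diff (reading the costs as a worst-case consecutive-retry count is my own illustration, not something the PR states):

```ruby
# Retry-quota constants taken from this PR's RetryQuota changes.
INITIAL_RETRY_TOKENS = 500
RETRY_COST           = 14 # new standard mode, non-throttling errors
LEGACY_RETRY_COST    = 5  # old standard mode

# With no successful responses refilling the token bucket, the quota
# supports roughly this many consecutive retries before the client
# starts failing fast:
legacy_retries = INITIAL_RETRY_TOKENS / LEGACY_RETRY_COST # => 100
new_retries    = INITIAL_RETRY_TOKENS / RETRY_COST        # => 35

puts "legacy: #{legacy_retries} retries, new: #{new_retries} retries"
```

So under sustained non-throttling errors, the new costs drain the same 500-token bucket roughly three times faster, which matches the "fail faster during sustained service errors" description above.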

Contributor
DynamoDB

I personally feel that a blog post is not enough documentation. Not everyone reads blog post release notes or aws-sdk-core's CHANGELOG entries.

Rethinking this...
This might be a good use case for service-specific plugins. With a plugin, you can be specific about this behavior and its documentation. Now that I think about it - we might also need to do something about the autogenerated config. See:

:max_attempts (Integer) — default: 3 — An integer representing the maximum number attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

Above is what I see when I run codegen. Let's talk offline.

Customers who are already on standard ... will feel a difference.

We should probably add a separate entry about this. The way I read the above entry is: "OK, so if I don't use that env var, I'm still on the old standard retries mechanism."

Contributor Author

Sure, I can try adding a DynamoDB plugin for retries instead.

We should probably add a separate entry about this. The way I read the above entry is like: "Ok so if I don't use that env var, i'm still on old standard retries mechanism"

Your original understanding is correct: customers who are already on standard and opt in to the new retries will feel a difference. If they do not set the environment variable, there will be no change. The new retry behavior is disabled by default and opt-in only.
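To make the opt-in described above concrete, a minimal sketch - the `AWS_NEW_RETRIES_2026` variable name comes from the changelog entry, but the accepted value (`'true'` here) and the read-at-client-construction timing are assumptions on my part:

```ruby
# Opt in to the updated 'standard' retry mode. Only the variable name
# is confirmed by the changelog; the value 'true' is an assumed
# truthy setting for illustration.
ENV['AWS_NEW_RETRIES_2026'] = 'true'

# Without the variable, existing 'standard' users see no change; with
# it, the same mode uses the reduced backoff intervals and updated
# retry-quota costs. A client would then be built as usual, e.g.:
#   client = Aws::DynamoDB::Client.new(retry_mode: 'standard')
puts ENV.fetch('AWS_NEW_RETRIES_2026')
```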


3.247.0 (2026-05-13)
------------------

* Feature - Add YJIT & ZJIT tracking to user agent.

* Issue - Fix error messaging in SSO OIDC.

3.246.0 (2026-04-23)
@@ -8,9 +8,11 @@ module Retries
# Used in 'standard' and 'adaptive' retry modes.
class RetryQuota
INITIAL_RETRY_TOKENS = 500
RETRY_COST = 5
RETRY_COST = 14
LEGACY_RETRY_COST = 5 # TODO: Remove when new retries become default
NO_RETRY_INCREMENT = 1
TIMEOUT_RETRY_COST = 10
THROTTLING_RETRY_COST = 5
TIMEOUT_RETRY_COST = 10 # TODO: Remove when new retries become default

def initialize(opts = {})
@mutex = Mutex.new
@@ -19,15 +21,16 @@ def initialize(opts = {})
end

# check if there is sufficient capacity to retry
# and return it. If there is insufficient capacity
# and return it. If there is insufficient capacity
# return 0
# @return [Integer] The amount of capacity checked out
def checkout_capacity(error_inspector)
@mutex.synchronize do
capacity_amount = if error_inspector.networking?
TIMEOUT_RETRY_COST
# TODO: Remove gate and keep only the new_retries branch
capacity_amount = if RetryErrors.new_retries?
error_inspector.throttling_error? ? THROTTLING_RETRY_COST : RETRY_COST
else
RETRY_COST
error_inspector.networking? ? TIMEOUT_RETRY_COST : LEGACY_RETRY_COST
end

# unable to acquire capacity
@@ -39,8 +42,8 @@ def checkout_capacity(error_inspector)
end

# capacity_amount refers to the amount of capacity requested from
# the last retry. It can either be RETRY_COST, TIMEOUT_RETRY_COST,
# or unset.
# the last retry. It can either be RETRY_COST,
# THROTTLING_RETRY_COST/TIMEOUT_RETRY_COST, or unset.
def release(capacity_amount)
# Implementation note: The release() method is called for
# every API call. In the common case where the request is
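The gated checkout logic in the diff above can be exercised with a minimal, self-contained sketch. Here the error inspector is reduced to two booleans, and a keyword flag stands in for the `RetryErrors.new_retries?` gate - both simplifications are mine:

```ruby
# Constants from this PR's RetryQuota diff.
RETRY_COST            = 14 # new standard mode, generic retryable error
LEGACY_RETRY_COST     = 5  # old standard mode
THROTTLING_RETRY_COST = 5  # new standard mode, throttling error
TIMEOUT_RETRY_COST    = 10 # old standard mode, networking error

# Stand-in for the SDK's error inspector, reduced to two flags.
Inspector = Struct.new(:throttling, :networking) do
  def throttling_error? = throttling
  def networking?       = networking
end

# Mirrors the branch added in checkout_capacity: the new-retries gate
# selects between throttling/generic costs, the legacy path between
# networking/generic costs.
def capacity_cost(inspector, new_retries:)
  if new_retries
    inspector.throttling_error? ? THROTTLING_RETRY_COST : RETRY_COST
  else
    inspector.networking? ? TIMEOUT_RETRY_COST : LEGACY_RETRY_COST
  end
end

puts capacity_cost(Inspector.new(false, false), new_retries: true)  # 14
puts capacity_cost(Inspector.new(true,  false), new_retries: true)  # 5
puts capacity_cost(Inspector.new(false, true),  new_retries: false) # 10
puts capacity_cost(Inspector.new(false, false), new_retries: false) # 5
```

Note that under the new gate a networking error no longer gets its own cost: it is charged the generic `RETRY_COST` of 14 unless it is a throttling error, which is exactly why `TIMEOUT_RETRY_COST` carries a removal TODO in the diff.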