Edited 2026-04-20: this issue originally attributed the churn to AES-GCM's random IV. That diagnosis was wrong — the AES-GCM nonce is a fixed zero. See ground truth below.
Summary
SST v4 writes one resource-{FunctionID}.enc file per Lambda (after #6750) into the user-provided bundle: directory. When many Lambdas share the same pre-built bundle: path (e.g. one worker/<svc>/dist feeding a dozen workers), the zip for each Lambda picks up every sibling's resource-*.enc. Any time a sibling is added, renamed, or removed, the file set in the shared dir changes, sha256(zip) changes for every sharer, Pulumi uploads a new S3 object, and UpdateFunctionCode fires on every function in the group — even when none of their code actually changed. At scale this hits AWS's hard 15 TPS Lambda control-plane cap (non-raiseable) and stretches every deploy.
The random-IV theory (original post) was wrong
resource.enc is already deterministic for a given plaintext. The AES-GCM nonce is a fixed zero, not random:
packages/sst/src/resource.ts:29-38 shows the zero nonce; platform/src/components/aws/nextjs.ts:743 uses the same zero nonce.
The file layout is [ciphertext][16-byte authTag] — no nonce prefix. For decryption to succeed, encryption must also use zero nonce, so identical plaintext produces identical ciphertext+tag. Empirically confirmed: two Lambdas in the same stage with link: [] have byte-identical 18-byte resource-*.enc.
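The determinism claim can be checked with a short Node.js sketch. This is not SST's actual code; it only mirrors the layout described above (AES-256-GCM, fixed all-zero nonce, no nonce prefix), with a throwaway key and a 2-byte plaintext standing in for the serialized empty link set:

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Illustrative sketch of the described format: AES-256-GCM with a fixed
// all-zero 12-byte nonce, output laid out as [ciphertext][16-byte authTag].
function encryptResource(key: Buffer, plaintext: string): Buffer {
  const cipher = createCipheriv("aes-256-gcm", key, Buffer.alloc(12)); // zero nonce
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([ciphertext, cipher.getAuthTag()]); // no nonce prefix
}

const key = randomBytes(32);
const a = encryptResource(key, "{}"); // assumed 2-byte plaintext for link: []
const b = encryptResource(key, "{}");
console.log(a.length);    // 2-byte ciphertext + 16-byte tag = 18
console.log(a.equals(b)); // true: same key, nonce, and plaintext give same bytes
```

With a fixed nonce, identical plaintext always yields identical ciphertext and tag, which matches the byte-identical 18-byte files seen in practice.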
Ground truth — what actually changes across a "no-op" deploy
Diffed two consecutive main-stage zips for WatchGoogle_Worker_Main uploaded 12 minutes apart:
files in newer zip: 171
files in older zip: 170
per-file sha diffs among shared files: 0
.mjs content hashes shared: 74/74
only in newer: resource-ExternalApiGoogle_Worker_Slow.enc (18 bytes)
One sibling Lambda was added to the same bundle: dir → its resource-*.enc landed in the shared directory → every sharer's zip hash changed → every sharer got UpdateFunctionCode. The Lambda's own code (all 74 .mjs chunks) was byte-identical. This is a pure sibling-contamination churn, not a plaintext or nonce churn.
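The churn mechanism can be modeled in a few lines. This is a toy model, not Pulumi's actual asset hashing (which digests the real zip bytes), but the dependence on the full file set is the same:

```typescript
import { createHash } from "node:crypto";

// Toy model: the bundle hash covers every file name and its contents,
// so any change to the shared file set flips the hash for all sharers.
function bundleHash(files: Record<string, string>): string {
  const h = createHash("sha256");
  for (const name of Object.keys(files).sort()) {
    h.update(name).update("\0").update(files[name]).update("\0");
  }
  return h.digest("hex");
}

const shared = {
  "chunks/app.mjs": "export const handler = () => {};",
  "resource-Worker_Main.enc": "<18 bytes>",
};
const before = bundleHash(shared);
// A sibling Lambda is added to the same bundle: dir, so its .enc appears:
const after = bundleHash({ ...shared, "resource-Sibling.enc": "<18 bytes>" });
console.log(before === after); // false: every sharer's hash flips
```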
Reproduction
App with N sst.aws.Function components, several of them sharing the same bundle: path (pre-built with esbuild + Turbo cache).
Commit touches only one of those Lambdas' source, or adds a new Lambda sharing the bundle, or touches nothing at all.
Run pnpm sst deploy --stage <name>.
Expected: only the changed Lambda gets UpdateFunctionCode (or none for a true no-op).
Actual: every Lambda that shares the bundle dir is re-zipped with the new sibling set, gets a new s3Key, and receives UpdateFunctionCode. On our 117-Lambda app we saw ~54/117 no-op updates.
Consumer-side fix (what we shipped)
A $transform(sst.aws.Function, …) that copies each Lambda's bundle: into a per-function private directory under $cli.paths.work/artifacts/<name>-scoped-bundle/ and re-points args.bundle there, filtering resource-*.enc out of the copy. Runtime.Build then writes only the current Lambda's resource-<name>.enc into the private copy. Source: scope-bundle.ts.
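The core of that transform can be sketched as a standalone copy-with-filter. Paths and names below are illustrative (the shipped version runs inside $transform(sst.aws.Function, …) and targets $cli.paths.work/artifacts/<name>-scoped-bundle/; this demo uses a temp dir instead):

```typescript
import { cpSync, existsSync, mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { basename, join } from "node:path";

// Copy the shared bundle into a per-function private dir, dropping every
// resource-*.enc so only this Lambda's own .enc can end up in its zip.
function scopeBundle(sharedBundle: string, workDir: string, name: string): string {
  const dest = join(workDir, `${name}-scoped-bundle`);
  cpSync(sharedBundle, dest, {
    recursive: true,
    filter: (src) => !/^resource-.*\.enc$/.test(basename(src)),
  });
  return dest; // re-point args.bundle here; Runtime.Build then writes resource-<name>.enc
}

// Demo on a throwaway directory:
const src = mkdtempSync(join(tmpdir(), "bundle-"));
writeFileSync(join(src, "index.mjs"), "export const handler = () => {};");
writeFileSync(join(src, "resource-Sibling.enc"), "x");
const scoped = scopeBundle(src, mkdtempSync(join(tmpdir(), "work-")), "MyFn");
console.log(existsSync(join(scoped, "index.mjs")));            // true
console.log(existsSync(join(scoped, "resource-Sibling.enc"))); // false
```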
After deploying the fix, two back-to-back no-op main deploys (non-infra-touching PR merges) uploaded 0/117 new Lambda zips to S3 — down from ~54/117 pre-fix.
Suggested SST-side fixes
In rough preference order:
Write resource-{FunctionID}.enc into SST's own per-Lambda staging dir ($cli.paths.work/artifacts/{name}-src/) instead of the user-provided bundle: path, then copy only the current Lambda's resource.enc into the zip alongside the user's bundle contents. Keeps the within-deploy-race fix from #6750 (Fix resource.enc race when functions share a bundle) and removes the shared-bundle contamination entirely.
Scope the zip glob in createZipAsset to ** plus an ignore pattern that excludes resource-*.enc files whose name doesn't match the current {FunctionID}. Minimal diff, no change to the Go Runtime.Build layout.
Opt-in scopeBundle: true flag on sst.aws.Function that triggers the per-Lambda copy behavior. Conservative path that pushes the work onto users who hit the issue.
Happy to open a PR for option 1 or 2.
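For option 2, the filtering could reduce to a small include predicate on each zip entry. The names below are illustrative, not SST's real createZipAsset API:

```typescript
import { basename } from "node:path";

// Keep every entry except a resource-*.enc that belongs to a sibling
// (i.e. whose embedded name is not the current FunctionID).
function shouldZip(relPath: string, functionID: string): boolean {
  const m = /^resource-(.+)\.enc$/.exec(basename(relPath));
  return m === null || m[1] === functionID;
}

console.log(shouldZip("chunks/app.mjs", "Worker_Main"));           // true
console.log(shouldZip("resource-Worker_Main.enc", "Worker_Main")); // true
console.log(shouldZip("resource-Sibling.enc", "Worker_Main"));     // false
```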
Environment
SST v4 (Pulumi-based), 117 Lambdas in a single app
Pre-built esbuild outputs (Turbo-cached) with shared bundle: paths for worker families
Related

#6750 (Fix resource.enc race when functions share a bundle): resource-{FunctionID}.enc naming (root of the sharing interaction).