feat: add Amazon Bedrock as inference provider #963
mvincentbb wants to merge 5 commits into NVIDIA:main
Conversation
Add Bedrock's OpenAI-compatible endpoint as a first-class provider,
enabling NemoClaw to run on AWS without GPU infrastructure.
- Add bedrock provider config, endpoint URL, and 7 curated models
- Add bedrock to non-interactive mode and interactive menu
- Skip endpoint validation (Bedrock API key format incompatible with probe)
- Add bedrock case to inference-config.js
Tested models (all support tool calling):
nvidia.nemotron-super-3-120b, nvidia.nemotron-nano-3-30b,
deepseek.v3.2, openai.gpt-oss-120b, mistral.devstral-2-123b,
moonshotai.kimi-k2.5, minimax.minimax-m2.5
Usage:
```shell
export NEMOCLAW_PROVIDER=bedrock
export OPENAI_API_KEY=<bedrock-api-key>
export BEDROCK_REGION=us-west-2
nemoclaw onboard --non-interactive
```
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough
Adds Amazon Bedrock as a first-class remote inference provider: provider selection now recognizes `bedrock`.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant User
    participant CLI as Nemoclaw CLI
    participant Onboard as onboard.js
    participant Config as inference-config.js
    participant Bedrock as Bedrock Endpoint
    User->>CLI: run `nemoclaw onboard` (interactive/non-interactive)
    CLI->>Onboard: provider = "bedrock" + env vars (BEDROCK_REGION/AWS_REGION, BEDROCK_API_KEY)
    Onboard->>Config: getProviderSelectionConfig("bedrock", model?)
    Config-->>Onboard: openai-compatible custom route config (INFERENCE_ROUTE_URL, DEFAULT_ROUTE_PROFILE, BEDROCK_API_KEY, default model)
    Onboard->>Bedrock: construct/probe BEDROCK_ENDPOINT_URL (probe may be skipped)
    Bedrock-->>Onboard: probe/validation result
    Onboard->>CLI: persist provider + inference route + selected model
    CLI-->>User: onboarding complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Line 50: The BEDROCK_ENDPOINT_URL currently falls back to "us-west-2" when
neither process.env.BEDROCK_REGION nor process.env.AWS_REGION is set, which can
silently misconfigure region-bound Bedrock API keys; update the onboarding logic
around BEDROCK_ENDPOINT_URL to require an explicit region by checking
process.env.BEDROCK_REGION (and optionally process.env.AWS_REGION) and, if
neither is present, abort with a clear error message (throw or process.exit with
logging) instructing the user to set BEDROCK_REGION instead of defaulting to
"us-west-2"; locate the constant BEDROCK_ENDPOINT_URL and the surrounding
initialization code to add this validation and error handling.
- Around line 116-126: The Bedrock provider entry currently sets skipVerify:
true and bypasses model checks; change it to perform the same validation used by
other OpenAI-compatible providers by removing or setting skipVerify to false on
the bedrock provider object and invoking validateOpenAiLikeModel(...) for the
selectedModel (the same call used for OpenAI/Gemini) inside the provider
selection/validation flow rather than breaking out early; also remove the early
break that skips verification in the provider selection loop and add calls to
validateOpenAiLikeModel at the other two places where Bedrock was skipped so
onboarding validates model availability and region/model mismatches up front.
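The first prompt above amounts to a fail-fast region guard before the endpoint URL is ever built. A minimal standalone sketch of that idea, assuming the env var names from this PR (the helper name `requireBedrockRegion` is illustrative, not code from the branch):

```javascript
// Illustrative sketch, not from the PR: refuse to fall back to "us-west-2"
// and build the Mantle endpoint URL only once a region is known.
function requireBedrockRegion(env) {
  const region = env.BEDROCK_REGION || env.AWS_REGION;
  if (!region) {
    // Fail fast with an actionable message instead of a silent default.
    throw new Error(
      "BEDROCK_REGION or AWS_REGION must be set for Amazon Bedrock."
    );
  }
  return `https://bedrock-mantle.${region}.api.aws/v1`;
}

console.log(requireBedrockRegion({ BEDROCK_REGION: "us-west-2" }));
```

In the actual onboarding flow this would run before any probe or manual-model prompt, so a misconfigured CI run exits immediately rather than hitting a malformed URL.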
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fa5f150e-6144-4eff-b46f-9a1a5b867aff
📒 Files selected for processing (2)
bin/lib/inference-config.js, bin/lib/onboard.js
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (2)
bin/lib/onboard.js (2)
50-50: Consider computing the Bedrock endpoint URL lazily. When neither `BEDROCK_REGION` nor `AWS_REGION` is set, this creates an invalid URL (`https://bedrock-mantle..api.aws/v1`) at module load time. While the runtime validation at lines 1778-1786 prevents this from being used, the malformed URL stored in `REMOTE_PROVIDER_CONFIG.bedrock.endpointUrl` could complicate debugging.
💡 Suggested approach
Compute the URL when the bedrock provider is actually selected rather than at module load:
```diff
-const BEDROCK_ENDPOINT_URL = `https://bedrock-mantle.${process.env.BEDROCK_REGION || process.env.AWS_REGION || ""}.api.aws/v1`;
+function getBedrockEndpointUrl() {
+  const region = process.env.BEDROCK_REGION || process.env.AWS_REGION;
+  if (!region) return null;
+  return `https://bedrock-mantle.${region}.api.aws/v1`;
+}
```
Then update the config to use a getter or compute it at selection time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` at line 50, The constant BEDROCK_ENDPOINT_URL is computed at module load and can become a malformed URL when neither BEDROCK_REGION nor AWS_REGION is set; change REMOTE_PROVIDER_CONFIG.bedrock.endpointUrl to be computed lazily (either replace the top-level BEDROCK_ENDPOINT_URL const with a function/getter like getBedrockEndpointUrl() that builds and validates the URL at selection/runtime, or set endpointUrl when the bedrock provider is chosen) and ensure the code that currently references BEDROCK_ENDPOINT_URL uses this lazy getter/function so the endpoint is only constructed when needed and with proper validation.
2040-2040: Consider refactoring the provider check for readability. The condition is growing long with each new provider. A set-based lookup would be cleaner and easier to maintain:
💡 Optional refactor
```diff
+const REMOTE_PROVIDERS = new Set([
+  "nvidia-prod", "nvidia-nim", "openai-api", "anthropic-prod",
+  "compatible-anthropic-endpoint", "gemini-api", "compatible-endpoint", "bedrock"
+]);
+
 async function setupInference(sandboxName, model, provider, endpointUrl = null, credentialEnv = null) {
   step(4, 7, "Setting up inference provider");
   runOpenshell(["gateway", "select", GATEWAY_NAME], { ignoreError: true });
-  if (provider === "nvidia-prod" || provider === "nvidia-nim" || provider === "openai-api" || provider === "anthropic-prod" || provider === "compatible-anthropic-endpoint" || provider === "gemini-api" || provider === "compatible-endpoint" || provider === "bedrock") {
+  if (REMOTE_PROVIDERS.has(provider)) {
```
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` at line 2040, Replace the long OR-chain provider check with a set membership test to improve readability and maintainability: create a Set of allowed provider strings (e.g., supportedProviders) and replace the condition in the if that references provider with supportedProviders.has(provider); update any surrounding logic in the same block (the if that currently checks provider === "nvidia-prod" || ...) to use that Set-based lookup so adding/removing providers is a one-line change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Around line 116-126: The bedrock provider block in onboard.js uses the same
credentialEnv ("OPENAI_API_KEY") as the OpenAI provider which will cause
saveCredential to overwrite one credential with the other; change the bedrock
provider's credentialEnv to a distinct key (e.g., "BEDROCK_API_KEY") in the
bedrock object and update the code path that calls upsertProvider so the
provider config receives the expected environment key (map BEDROCK_API_KEY →
OPENAI_API_KEY or otherwise inject the correct header/param) when creating the
Bedrock provider; verify saveCredential (bin/lib/credentials.js) and
upsertProvider usage are aligned so both credentials are stored under unique
keys and the Bedrock provider is passed the correct credential at runtime.
---
Nitpick comments:
In `@bin/lib/onboard.js`:
- Line 50: The constant BEDROCK_ENDPOINT_URL is computed at module load and can
become a malformed URL when neither BEDROCK_REGION nor AWS_REGION is set; change
REMOTE_PROVIDER_CONFIG.bedrock.endpointUrl to be computed lazily (either replace
the top-level BEDROCK_ENDPOINT_URL const with a function/getter like
getBedrockEndpointUrl() that builds and validates the URL at selection/runtime,
or set endpointUrl when the bedrock provider is chosen) and ensure the code that
currently references BEDROCK_ENDPOINT_URL uses this lazy getter/function so the
endpoint is only constructed when needed and with proper validation.
- Line 2040: Replace the long OR-chain provider check with a set membership test
to improve readability and maintainability: create a Set of allowed provider
strings (e.g., supportedProviders) and replace the condition in the if that
references provider with supportedProviders.has(provider); update any
surrounding logic in the same block (the if that currently checks provider ===
"nvidia-prod" || ...) to use that Set-based lookup so adding/removing providers
is a one-line change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e9b16a09-658a-4ff7-a114-e66e7496d2fb
📒 Files selected for processing (1)
bin/lib/onboard.js
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai help
Chat: There are 3 ways to chat with CodeRabbit:
CodeRabbit commands
Other keywords and placeholders
Status, support, documentation and community
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (1)
bin/lib/inference-config.js (1)
110-120: ⚠️ Potential issue | 🟠 Major: Keep Bedrock's sandbox credential key aligned with onboarding.
`setupNim()` now stores/passes the Bedrock secret as `BEDROCK_API_KEY` in `bin/lib/onboard.js` (Line 120 and Line 1705), and `setupOpenclaw()` persists the object returned here into `~/.nemoclaw/config.json` (Line 2097). Returning `OPENAI_API_KEY` in this config points sandbox-side consumers at the wrong key and can fall back to a user's OpenAI credential if both are present.
🔧 Minimal fix
```diff
 case "bedrock":
   return {
     endpointType: "custom",
     endpointUrl: INFERENCE_ROUTE_URL,
     ncpPartner: null,
     model: model || "nvidia.nemotron-super-3-120b",
     profile: DEFAULT_ROUTE_PROFILE,
-    credentialEnv: "OPENAI_API_KEY",
+    credentialEnv: "BEDROCK_API_KEY",
     provider,
     providerLabel: "Amazon Bedrock (OpenAI-compatible)",
   };
```
Verify each finding against the current code and only fix it if needed. In `@bin/lib/inference-config.js` around lines 110 - 120, The Bedrock branch returns credentialEnv: "OPENAI_API_KEY", which misaligns with onboarding's setupNim() that stores Bedrock credentials as BEDROCK_API_KEY; update the Bedrock case (the "bedrock" switch branch that constructs the object using INFERENCE_ROUTE_URL and model default) to set credentialEnv to "BEDROCK_API_KEY" so sandbox consumers read the correct key instead of falling back to OpenAI credentials.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Around line 1778-1799: The Bedrock probe can run unauthenticated because
validateOpenAiLikeSelection() only uses getCredential(credentialEnv) which may
not read process.env; update the credential resolution so the probe uses the
env-supplied key when present. Concretely, modify either getCredential(symbol
credentialEnv) or validateOpenAiLikeSelection(remoteConfig.label, endpointUrl,
model, credentialEnv, ...) to fall back to process.env[credentialEnv] (e.g.,
BEDROCK_API_KEY) when getCredential returns falsy before attempting the probe,
ensuring non-interactive/CI runs that only export BEDROCK_API_KEY succeed and
that setupInference() still persists the provider afterward.
---
Duplicate comments:
In `@bin/lib/inference-config.js`:
- Around line 110-120: The Bedrock branch returns credentialEnv:
"OPENAI_API_KEY", which misaligns with onboarding's setupNim() that stores
Bedrock credentials as BEDROCK_API_KEY; update the Bedrock case (the "bedrock"
switch branch that constructs the object using INFERENCE_ROUTE_URL and model
default) to set credentialEnv to "BEDROCK_API_KEY" so sandbox consumers read the
correct key instead of falling back to OpenAI credentials.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e7836778-4485-4cb6-98ef-4c272b18866b
📒 Files selected for processing (2)
bin/lib/inference-config.js, bin/lib/onboard.js
```js
} else if (selected.key === "bedrock") {
  // Bedrock API keys are region-bound — require an explicit region.
  if (!process.env.BEDROCK_REGION && !process.env.AWS_REGION) {
    console.error("  BEDROCK_REGION or AWS_REGION must be set for Amazon Bedrock.");
    if (isNonInteractive()) {
      process.exit(1);
    }
    continue selectionLoop;
  }
  // Bedrock Mantle exposes an OpenAI-compatible API
  const retryMessage = "Please choose a provider/model again.";
  preferredInferenceApi = await validateOpenAiLikeSelection(
    remoteConfig.label,
    endpointUrl,
    model,
    credentialEnv,
    retryMessage
  );
  if (preferredInferenceApi) {
    break;
  }
  continue selectionLoop;
```
Fresh non-interactive Bedrock runs will probe without the env-supplied key.
Line 1740 accepts BEDROCK_API_KEY from process.env, but validateOpenAiLikeSelection() resolves the probe secret via getCredential(credentialEnv) only (Line 712). The rest of this file treats getCredential() as store-backed and falls back to process.env when env values are needed, e.g. Line 1522, so a CI run that only exports BEDROCK_API_KEY will hit Bedrock unauthenticated and exit before setupInference() persists the provider.
💡 One way to keep env-only CI runs working
```diff
+function resolveCredentialValue(envName) {
+  return envName ? (process.env[envName] || getCredential(envName) || "") : "";
+}
+
 async function validateOpenAiLikeSelection(
   label,
   endpointUrl,
   model,
   credentialEnv = null,
   retryMessage = "Please choose a provider/model again."
 ) {
-  const apiKey = credentialEnv ? getCredential(credentialEnv) : "";
+  const apiKey = resolveCredentialValue(credentialEnv);
   const probe = probeOpenAiLikeEndpoint(endpointUrl, model, apiKey);
```
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` around lines 1778 - 1799, The Bedrock probe can run
unauthenticated because validateOpenAiLikeSelection() only uses
getCredential(credentialEnv) which may not read process.env; update the
credential resolution so the probe uses the env-supplied key when present.
Concretely, modify either getCredential(symbol credentialEnv) or
validateOpenAiLikeSelection(remoteConfig.label, endpointUrl, model,
credentialEnv, ...) to fall back to process.env[credentialEnv] (e.g.,
BEDROCK_API_KEY) when getCredential returns falsy before attempting the probe,
ensuring non-interactive/CI runs that only export BEDROCK_API_KEY succeed and
that setupInference() still persists the provider afterward.
getCredential() in credentials.js:28 already checks process.env[key] before the credential store, so validateOpenAiLikeSelection() at line 802 calling getCredential("BEDROCK_API_KEY") will pick up the env var. The probe runs authenticated.
🧩 Analysis chain
Let me verify by checking the getCredential() implementation directly.
🏁 Script executed:
```shell
#!/bin/bash
fd -e js credentials -g '*credentials*' | xargs grep -n "getCredential" | head -20
fd -e js -g '*credentials*' | head -5
```
Length of output: 148
🏁 Script executed:
```shell
#!/bin/bash
cat bin/lib/credentials.js
```
Length of output: 6749
@mvincentbb, you're absolutely right — I apologize for the noise. getCredential() in credentials.js (Line 30–33) is:
```js
function getCredential(key) {
  if (process.env[key]) return process.env[key]; // ← env-first
  const creds = loadCredentials();
  return creds[key] || null;
}
```
It checks process.env[key] before the file-backed credential store, so getCredential("BEDROCK_API_KEY") will correctly pick up a CI-exported BEDROCK_API_KEY. The probe runs authenticated and my concern was a false positive. No change needed here.
Could you do another review then?
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
echo "=== getSandboxInferenceConfig switch cases ==="
grep -n "case\|providerKey\|primaryModelRef\|inferenceCompat\|providerName" bin/lib/onboard.js | head -60
echo ""
echo "=== printDashboard providerLabel ==="
sed -n '/function printDashboard/,/^}/p' bin/lib/onboard.js | grep -n "providerLabel\|bedrock"
echo ""
echo "=== REMOTE_PROVIDER_CONFIG bedrock credentialEnv ==="
grep -n "credentialEnv\|bedrock" bin/lib/onboard.js | head -30
echo ""
echo "=== inference-config.js bedrock case ==="
grep -n "bedrock\|credentialEnv" bin/lib/inference-config.js | head -20
```
Length of output: 6160
@mvincentbb, here's a fresh look at the Bedrock additions. I found three issues beyond the already-resolved credential concern:
🔴 1. getSandboxInferenceConfig is missing a "bedrock" case — wrong Dockerfile ARGs baked into every sandbox
The switch statement in getSandboxInferenceConfig() (Lines 484–516) has explicit cases for openai-api, anthropic-prod, gemini-api, compatible-endpoint, etc., but no case for "bedrock". Since REMOTE_PROVIDER_CONFIG.bedrock.providerName is "bedrock", it silently falls into the default branch alongside nvidia-prod, producing:
```js
providerKey = "inference";
primaryModelRef = `inference/${model}`; // ← wrong; should be openai/${model} or similar
inferenceCompat = null;                 // ← supportsStore flag missing
```
This gets baked directly into the sandbox Dockerfile as ARG NEMOCLAW_PROVIDER_KEY / ARG NEMOCLAW_PRIMARY_MODEL_REF, so the agent inside the sandbox would be configured as if it were talking to an NVIDIA endpoint rather than an OpenAI-compatible one. The fix should mirror the compatible-endpoint or gemini-api case:
```diff
+      case "bedrock":
       case "compatible-endpoint":
         providerKey = "inference";
         primaryModelRef = `inference/${model}`;
         inferenceCompat = {
           supportsStore: false,
         };
         break;
```
🟠 2. `credentialEnv` is inconsistent across onboard, sandbox config, and docs
| Location | Value |
|---|---|
| `REMOTE_PROVIDER_CONFIG.bedrock.credentialEnv` (Line 120, onboard.js) | `"BEDROCK_API_KEY"` |
| `inference-config.js` `case "bedrock":` (Line 117) | `"OPENAI_API_KEY"` |
| PR description usage example | `export OPENAI_API_KEY=<bedrock-api-key>` |
During onboarding, the wizard checks/stores BEDROCK_API_KEY. Inside the sandbox, the config references OPENAI_API_KEY. A user following the PR docs (export OPENAI_API_KEY=...) will hit the onboarding credential prompt because BEDROCK_API_KEY is unset — and vice versa in CI. Pick one env var name and use it consistently in all three places.
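One hedged way to sketch the "pick one env var" direction: resolve the Bedrock credential through a single helper so every consumer agrees on precedence. The helper name `resolveBedrockKey` is illustrative, not code from the branch:

```javascript
// Illustrative helper, not from the PR: prefer the dedicated Bedrock key,
// falling back to the OpenAI-shaped key the current PR docs mention.
function resolveBedrockKey(env) {
  return env.BEDROCK_API_KEY || env.OPENAI_API_KEY || null;
}

console.log(resolveBedrockKey({ BEDROCK_API_KEY: "abc" })); // uses the dedicated key
```

Whatever precedence is chosen, encoding it in one place keeps onboarding, the sandbox config, and the docs from drifting apart again.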
🟡 3. printDashboard has no label for "bedrock" — shows raw provider name
Lines 6–13 of printDashboard map each known providerName to a human-readable string, but "bedrock" is absent, so the dashboard line reads Model nvidia.nemotron-super-3-120b (bedrock) instead of something like Amazon Bedrock. One-liner fix:
```diff
 else if (provider === "ollama-local") providerLabel = "Local Ollama";
+else if (provider === "bedrock") providerLabel = "Amazon Bedrock";
```
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
bin/lib/onboard.js (2)
502-508: ⚠️ Potential issue | 🟠 Major: `getSandboxInferenceConfig` is missing a `"bedrock"` case — sandbox will be misconfigured.
When `provider === "bedrock"`, this falls into the `default` branch (intended for NVIDIA endpoints), setting `inferenceCompat = null`. Bedrock is OpenAI-compatible and should behave like `gemini-api` or `compatible-endpoint` with `supportsStore: false`.
This affects the Dockerfile ARGs (`NEMOCLAW_INFERENCE_COMPAT_B64`) baked into the sandbox image via `patchStagedDockerfile()`.
🔧 Proposed fix — add bedrock case alongside similar providers
```diff
 case "gemini-api":
   providerKey = "inference";
   primaryModelRef = `inference/${model}`;
   inferenceCompat = {
     supportsStore: false,
   };
   break;
+case "bedrock":
 case "compatible-endpoint":
   providerKey = "inference";
   primaryModelRef = `inference/${model}`;
   inferenceCompat = {
     supportsStore: false,
   };
   break;
```
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 502 - 508, The sandbox inference mapping is missing a "bedrock" branch causing Bedrock to fall through to the default NVIDIA path; update the switch in getSandboxInferenceConfig (the same switch that contains case "compatible-endpoint") to add a case "bedrock" that mirrors the compatible-endpoint/gemini-api behavior: set providerKey = "inference", primaryModelRef = `inference/${model}`, and inferenceCompat = { supportsStore: false }; this will ensure patchStagedDockerfile picks up the correct NEMOCLAW_INFERENCE_COMPAT_B64 value for Bedrock.
2321-2329: ⚠️ Potential issue | 🟡 Minor: `printDashboard` missing label for `"bedrock"` — will display raw provider name.
The dashboard will show `Model nvidia.nemotron-super-3-120b (bedrock)` instead of a human-readable label like "Amazon Bedrock".
🔧 Proposed fix
```diff
 else if (provider === "vllm-local") providerLabel = "Local vLLM";
 else if (provider === "ollama-local") providerLabel = "Local Ollama";
+else if (provider === "bedrock") providerLabel = "Amazon Bedrock";
```
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 2321 - 2329, The provider label mapping is missing an entry for the "bedrock" provider so printDashboard (the code that sets providerLabel from provider) will show the raw provider string; add a branch that sets providerLabel = "Amazon Bedrock" when provider === "bedrock" alongside the other else-if branches (i.e., update the providerLabel assignment logic where provider and providerLabel are used).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/inference-config.js`:
- Around line 110-120: The Bedrock branch in getProviderSelectionConfig (case
"bedrock") returns credentialEnv: "OPENAI_API_KEY", which mismatches
REMOTE_PROVIDER_CONFIG.bedrock in onboard.js and causes the sandbox config to
reference the wrong env var; update the Bedrock case in the
getProviderSelectionConfig function to set credentialEnv to "BEDROCK_API_KEY" so
it matches REMOTE_PROVIDER_CONFIG.bedrock and ensure
getProviderSelectionConfig("bedrock", model) writes the correct environment
variable name to the config.
---
Outside diff comments:
In `@bin/lib/onboard.js`:
- Around line 502-508: The sandbox inference mapping is missing a "bedrock"
branch causing Bedrock to fall through to the default NVIDIA path; update the
switch in getSandboxInferenceConfig (the same switch that contains case
"compatible-endpoint") to add a case "bedrock" that mirrors the
compatible-endpoint/gemini-api behavior: set providerKey = "inference",
primaryModelRef = `inference/${model}`, and inferenceCompat = { supportsStore:
false }; this will ensure patchStagedDockerfile picks up the correct
NEMOCLAW_INFERENCE_COMPAT_B64 value for Bedrock.
- Around line 2321-2329: The provider label mapping is missing an entry for the
"bedrock" provider so printDashboard (the code that sets providerLabel from
provider) will show the raw provider string; add a branch that sets
providerLabel = "Amazon Bedrock" when provider === "bedrock" alongside the other
else-if branches (i.e., update the providerLabel assignment logic where provider
and providerLabel are used).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2ed9f5ee-acfe-48cc-a25b-b23bb2a35047
📒 Files selected for processing (2)
bin/lib/inference-config.js, bin/lib/onboard.js
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Line 50: The code constructs BEDROCK_ENDPOINT_URL even when no region is set
which allows promptManualModelId() to attempt calls to an invalid URL; before
attaching the manual-model validator or adding the "Other..." Bedrock choice
(referencing BEDROCK_ENDPOINT_URL and promptManualModelId()), check whether
process.env.BEDROCK_REGION || process.env.AWS_REGION is truthy and if not
skip/disable the manual-model option and instead surface a clear interactive
error prompting the user to set BEDROCK_REGION/AWS_REGION; alternatively, only
build BEDROCK_ENDPOINT_URL after the region check so any code that calls
promptManualModelId() never sees an empty-region URL.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d378d519-65bc-46f1-8d6e-aeb35284c270
📒 Files selected for processing (2)
bin/lib/inference-config.js, bin/lib/onboard.js
♻️ Duplicate comments (1)
bin/lib/onboard.js (1)
50-50: ⚠️ Potential issue | 🟡 Minor: Region check runs after model selection — "Other..." option can hit invalid URL.
`BEDROCK_ENDPOINT_URL` is constructed at module load with a potentially empty region, producing `https://bedrock-mantle..api.aws/v1`. The region guard at Lines 1779-1787 runs after model selection (Line 1760). If a user selects "Other..." from the curated model list, `promptManualModelId` invokes the validator (`validateOpenAiLikeModel`) against this malformed URL before the region check executes.
Move the region check earlier in the Bedrock flow — immediately after `selected.key === "bedrock"` is confirmed (around Line 1747) — so it runs before any endpoint access:
🔧 Proposed fix
```diff
 const defaultModel = requestedModel || remoteConfig.defaultModel;
 let modelValidator = null;
 if (selected.key === "openai" || selected.key === "gemini" || selected.key === "bedrock") {
+  // Bedrock API keys are region-bound — require an explicit region before any endpoint access.
+  if (selected.key === "bedrock" && !process.env.BEDROCK_REGION && !process.env.AWS_REGION) {
+    console.error("  BEDROCK_REGION or AWS_REGION must be set for Amazon Bedrock.");
+    if (isNonInteractive()) {
+      process.exit(1);
+    }
+    continue selectionLoop;
+  }
   modelValidator = (candidate) =>
     validateOpenAiLikeModel(remoteConfig.label, endpointUrl, candidate, getCredential(credentialEnv));
```
Then remove the duplicate check at Lines 1779-1787.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` at line 50, The module-level BEDROCK_ENDPOINT_URL is being built with a possibly empty region which leads to invalid URLs when users pick "Other..." and promptManualModelId calls validateOpenAiLikeModel; fix by moving the region validation to run immediately after checking selected.key === "bedrock" (the Bedrock branch) so that a valid region exists before any calls to promptManualModelId or validateOpenAiLikeModel, and then remove the later duplicate region-check block that currently runs after model selection; reference BEDROCK_ENDPOINT_URL, selected.key, promptManualModelId, and validateOpenAiLikeModel when locating the code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@bin/lib/onboard.js`:
- Line 50: The module-level BEDROCK_ENDPOINT_URL is being built with a possibly
empty region which leads to invalid URLs when users pick "Other..." and
promptManualModelId calls validateOpenAiLikeModel; fix by moving the region
validation to run immediately after checking selected.key === "bedrock" (the
Bedrock branch) so that a valid region exists before any calls to
promptManualModelId or validateOpenAiLikeModel, and then remove the later
duplicate region-check block that currently runs after model selection;
reference BEDROCK_ENDPOINT_URL, selected.key, promptManualModelId, and
validateOpenAiLikeModel when locating the code to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 56aacbe1-4a87-4c38-a442-de513260ce0a
📒 Files selected for processing (2)
bin/lib/inference-config.js, bin/lib/onboard.js
🧹 Nitpick comments (2)
bin/lib/onboard.js (2)
50-50: Derive the Bedrock base URL lazily.
Line 50 snapshots `BEDROCK_REGION`/`AWS_REGION` when this module is loaded. The onboarding path overwrites it later, but `REMOTE_PROVIDER_CONFIG.bedrock.endpointUrl` still carries a module-load value, which makes direct callers and future refactors easier to break. A small helper/getter keeps every path on the current env.
💡 One way to keep the endpoint current
```diff
-const BEDROCK_ENDPOINT_URL = `https://bedrock-mantle.${process.env.BEDROCK_REGION || process.env.AWS_REGION || ""}.api.aws/v1`;
+function getBedrockEndpointUrl() {
+  const region = (process.env.BEDROCK_REGION || process.env.AWS_REGION || "").trim();
+  return region ? `https://bedrock-mantle.${region}.api.aws/v1` : "";
+}
```
```diff
 bedrock: {
   label: "Amazon Bedrock (OpenAI-compatible endpoint)",
   providerName: "bedrock",
   providerType: "openai",
   credentialEnv: "BEDROCK_API_KEY",
-  endpointUrl: BEDROCK_ENDPOINT_URL,
+  get endpointUrl() {
+    return getBedrockEndpointUrl();
+  },
```
Also applies to: 116-121
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` at line 50, Replace the module-level BEDROCK_ENDPOINT_URL constant with a small getter so the Bedrock base URL is derived from process.env at call-time: add a function like getBedrockEndpointUrl() that returns `https://bedrock-mantle.${process.env.BEDROCK_REGION || process.env.AWS_REGION || ""}.api.aws/v1` and use that function wherever BEDROCK_ENDPOINT_URL was referenced (including where REMOTE_PROVIDER_CONFIG.bedrock.endpointUrl is populated) so the value reflects the current environment when accessed rather than the env snapshot at module load. Ensure all call sites (references to BEDROCK_ENDPOINT_URL) are updated to call getBedrockEndpointUrl() (or access it via a getter) and keep the function name unique in the file.
123-125: Guard the curated/default pairing with an invariant.

Line 1006 falls back to option `0` when `defaultModel` is missing from the curated list. Since the Bedrock default and catalog are both hand-maintained here, a tiny invariant check or unit test would stop a future catalog edit from silently changing the default selection.

Also applies to: 149-157
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 123 - 125, Add an explicit invariant that when modelMode is "curated" the configured defaultModel must exist in the curated models list: locate the code that reads modelMode and defaultModel (symbols: modelMode, defaultModel) and add a guard like an assertion or runtime check that curatedModels.includes(defaultModel) and fail fast (throw or process.exit after logging) if it does not; update the other similar block (the one referenced around the second occurrence) the same way and add or adjust a unit test to verify the invariant so a catalog edit cannot silently change the default selection.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 38f7219d-3232-45a6-86ce-d9e0bb7097ca
📒 Files selected for processing (1)
bin/lib/onboard.js
@cv can you review my PR? It is to add Bedrock as a provider.
Add Bedrock's OpenAI-compatible endpoint as a first-class provider,
enabling NemoClaw to run on AWS without GPU infrastructure.
Tested models (all support tool calling):
nvidia.nemotron-super-3-120b, nvidia.nemotron-nano-3-30b, deepseek.v3.2, openai.gpt-oss-120b, mistral.devstral-2-123b, moonshotai.kimi-k2.5, minimax.minimax-m2.5
Usage:
```shell
export NEMOCLAW_PROVIDER=bedrock
export OPENAI_API_KEY=<bedrock-api-key>
export BEDROCK_REGION=us-west-2
nemoclaw onboard --non-interactive
```

Summary
Add Amazon Bedrock's OpenAI-compatible endpoint as a first-class inference provider, enabling NemoClaw to run on AWS without GPU infrastructure. Follows the same pattern as existing providers (Gemini, OpenAI, Anthropic).
Related Issue
Closes #962
Changes
- `bin/lib/inference-config.js`: Add `bedrock` case to `getProviderSelectionConfig()`
- `bin/lib/onboard.js`:
  - Add `BEDROCK_ENDPOINT_URL` constant (uses `BEDROCK_REGION` or `AWS_REGION` env var)
  - Add `bedrock` to `REMOTE_PROVIDER_CONFIG` (type: openai, skipVerify: true)
  - Add 7 curated models to `REMOTE_MODEL_OPTIONS` (all verified to work with NemoClaw)
  - Add `bedrock` to `getNonInteractiveProvider()` valid providers set
  - Add `bedrock` option to interactive menu in `setupNim()`
  - Add `bedrock` to provider check in `setupInference()`

Type of Change
Testing
Manually tested all 7 curated models via Telegram bridge on EC2 (t4g.large Graviton ARM64, us-west-2):
Checklist
General
Code Changes
`npx prek run --all-files` auto-fixes formatting (or `make format` for targeted runs).

Doc Changes
Use the `update-docs` agent skill to draft changes while complying with the style guide. For example, prompt your agent with "/update-docs catch up the docs for the new changes I made in this PR."

Summary by CodeRabbit