feat: runtime config mutability via OpenClaw shim + OpenShell v0.0.15 #940
Conversation
…0.0.15 EXPERIMENTAL — POC branch to validate three-tier config resolution:
1. Frozen openclaw.json (gateway.auth.token, CORS — always immutable)
2. Policy defaults (config_overrides in openclaw-sandbox.yaml)
3. User runtime overrides (nemoclaw config-set → overrides file → hot-reload)

OpenClaw shim patch (patches/openclaw-config-overrides.patch):
- Adds OPENCLAW_CONFIG_OVERRIDES_FILE env var support to the config loader
- Deep-merges overrides onto the frozen config, stripping gateway.* for security
- Adds the overrides file to the chokidar watcher for hot-reload

OpenShell minimum bumped to v0.0.15:
- Auto-TLS termination (PR #544) — removes the need for tls: terminate
- Security hardening SEC-002–010 (PR #548)
- Runtime settings channel (PR #474)
- Version check now enforced in onboard preflight

Policy changes:
- Remove 35 deprecated tls: terminate annotations (base + all presets)
- Remove permissive wildcard L7 rules from claude_code/nvidia endpoints
- Add config_overrides section defining mutable fields + defaults

New commands:
- nemoclaw <sandbox> config-set --key <path> --value <value>
- nemoclaw <sandbox> config-get [--key <path>]
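The three-tier merge described in the commit message can be sketched as follows. This is an illustrative model of the shim's behavior, not the actual patch code; the function names are hypothetical:

```javascript
// Sketch: overrides are deep-merged onto the frozen config, but any
// top-level "gateway" key is stripped first so gateway.auth.token and
// CORS settings stay immutable (the frozen tier always wins there).
function deepMerge(base, overlay) {
  const out = { ...base };
  for (const [k, v] of Object.entries(overlay)) {
    if (v && typeof v === "object" && !Array.isArray(v) &&
        out[k] && typeof out[k] === "object" && !Array.isArray(out[k])) {
      out[k] = deepMerge(out[k], v);
    } else {
      out[k] = v;
    }
  }
  return out;
}

function applyOverrides(frozenConfig, overrides) {
  const safe = { ...overrides };
  delete safe.gateway; // strip gateway.* before merging
  return deepMerge(frozenConfig, safe);
}
```

Under this model, a user override of `agent.model` takes effect while an attempted override of `gateway.auth.token` is silently dropped.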
Patch file for OpenShell server + TUI that extends the PolicyChunk approval flow to handle config-change requests (config: prefix on rule_name). Applied the same way as the OpenClaw shim — at build time, not pushed upstream. Server: skip network policy merge for config: chunks on approval. TUI: show CONFIG badge, display config key instead of endpoint.
Updated openshell-config-approval.patch now includes:
Sandbox side (lib.rs, grpc_client.rs):
- Config request scanner: polls /sandbox/.openclaw-data/config-requests/
for JSON request files, submits as config: PolicyChunks via existing
SubmitPolicyAnalysis gRPC (same pattern as network denial submission)
- Config apply loop: in the policy poll loop, checks for approved config:
chunks and writes merged overrides to config-overrides.json5
- get_draft_policy client method for fetching approved chunks
- gateway.* blocked at submission time (defense in depth)
Server side (grpc.rs):
- approve_draft_chunk: skip network merge for config: chunks
- submit_policy_analysis: relax proposed_rule for config: chunks
TUI side (sandbox_draft.rs):
- CONFIG badge for config: chunks
- Panel renamed "Rules & Config"
Round-trip flow:
1. Agent writes {"key":"...","value":"..."} to config-requests/*.json
2. Sandbox proxy scans, creates PolicyChunk, submits to gateway
3. TUI shows CONFIG chunk with key name, user approves with [a]
4. Sandbox poll loop detects approval, writes overrides file
5. OpenClaw hot-reloads via chokidar watcher
…atch
The policy YAML uses deny_unknown_fields — extension fields are not allowed. The mutable config field allow-list lives in NemoClaw code (config-set.js), not in the policy YAML. The filesystem_policy already controls what's writable via the read_only/read_write path lists. Also: detect dev-build OpenShell and use the local image tag instead of a non-existent GHCR tag.
…quired L7 rules
- config-set.js: use `openshell sandbox connect` with stdin piping instead of the nonexistent `openshell exec` command
- onboard.js: same fix for the overrides file write
- openclaw-sandbox.yaml: restore required wildcard rules on endpoints with protocol: rest + enforcement: enforce (the proxy validates their presence)

Tested: config-set writes overrides, config-get reads them back, gateway.* is blocked.
Root cause: OpenClaw bundler duplicates resolveConfigForRead into 6 dist chunks. Previous patch only hit config-CO7zBdn8.js but gateway runs through daemon-cli.js. New approach patches ALL files. Also: sandbox connect can't write files (different mount namespace). Switched to openshell sandbox upload/download.
Gateway log shows: agent model: inference/SHIM-TEST-WORKS
The OpenClaw config overrides shim successfully deep-merges the
overrides file onto the frozen openclaw.json at config load time.
Pre-seeded override in entrypoint, verified via gateway log download.
Entrypoint temporarily hardcodes a test override for verification —
revert to empty {} default after confirming.
Resolve 6 merge conflicts to combine main's security/structural changes with the runtime config mutability feature:
- Dockerfile: use base image FROM, apply the openclaw shim on the pre-installed CLI, keep all new build args and security hardening, add the OPENCLAW_CONFIG_OVERRIDES_FILE env var
- nemoclaw.js: import both config-set and inference-config modules
- onboard.js: use main's formatEnvAssignment/patchStagedDockerfile, add OPENCLAW_CONFIG_OVERRIDES_FILE to sandbox env args
- npm.yaml/pypi.yaml: take main's access:full + binaries structure
- nemoclaw-start.sh: keep main's privilege separation (gosu/gateway user), add overrides file creation in both root and non-root paths
- Remove config_overrides tests (section no longer in policy YAML)
- Convert config-set.test.js to ESM/vitest
- All 338 tests pass
📝 Walkthrough
Adds sandbox-scoped runtime config mutability: CLI commands and tests, an allow-listed config-set/get flow, a sandbox scanner that submits config-change draft chunks to OpenShell, server-side approval handling, and automatic application of approved overrides into …
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User / CLI
    participant Sandbox as Sandbox Process
    participant Scanner as Sandbox Scanner
    participant OpenShellCLI as OpenShell CLI
    participant OpenShellServer as OpenShell Server
    participant FS as Sandbox FS
    participant OpenClaw as OpenClaw Runtime
    User->>Sandbox: place /sandbox/.openclaw-data/config-requests/<id>.json
    Scanner->>FS: read request file
    FS-->>Scanner: return { key, value }
    Scanner->>Scanner: validate key (reject gateway.*)
    Scanner->>OpenShellCLI: submit_policy_analysis(config:{key} draft)
    OpenShellCLI->>OpenShellServer: create draft chunk
    OpenShellServer-->>OpenShellServer: admin approves draft → status="approved"
    OpenShellServer->>OpenShellCLI: approved draft visible
    OpenShellCLI->>Sandbox: deliver approved config chunk
    Sandbox->>FS: deep-merge approved rationale → /sandbox/.openclaw-data/config-overrides.json5
    OpenClaw->>FS: read OPENCLAW_CONFIG_OVERRIDES_FILE
    OpenClaw->>OpenClaw: deep-merge overrides (gateway keys ignored)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 13
🧹 Nitpick comments (5)
patches/openshell-config-approval.patch (1)
214-218: Non-atomic file write may cause race with chokidar watcher.
`std::fs::write(CONFIG_OVERRIDES_PATH, &json)` is not atomic. The watcher (from openclaw-config-overrides.patch:47) may trigger during the write, causing the shim to read a truncated or empty file. The shim handles parse errors gracefully by returning the original config, but this could cause:
- Spurious error logs
- Missed config updates until the next watcher trigger
🔧 Suggested fix: atomic write via temp file + rename
```rust
use std::io::Write;

let tmp_path = format!("{}.tmp", CONFIG_OVERRIDES_PATH);
let mut file = std::fs::File::create(&tmp_path)?;
file.write_all(json.as_bytes())?;
file.sync_all()?;
std::fs::rename(&tmp_path, CONFIG_OVERRIDES_PATH)?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@patches/openshell-config-approval.patch` around lines 214-218: The current non-atomic write using std::fs::write(CONFIG_OVERRIDES_PATH, &json) can race with the chokidar watcher and produce truncated reads; replace it with an atomic write: write the JSON to a temporary file (e.g., tmp_path = format!("{}.tmp", CONFIG_OVERRIDES_PATH)) using File::create and write_all, call file.sync_all(), then std::fs::rename(tmp_path, CONFIG_OVERRIDES_PATH) and handle/report errors from each operation instead of returning early after the non-atomic write; touch the same surrounding function where CONFIG_OVERRIDES_PATH and the warn! call live to locate the change.

bin/lib/onboard.js (2)
1627-1637: Duplicate `setNestedValue` function - also exists in bin/lib/config-set.js:106-116.
Both implementations are identical. Extract to a shared utility module to avoid duplication.
♻️ Suggested refactor
Create a shared utility, e.g.,
bin/lib/utils.js:

```javascript
function setNestedValue(obj, dottedPath, value) {
  const parts = dottedPath.split(".");
  let current = obj;
  for (let i = 0; i < parts.length - 1; i++) {
    if (!(parts[i] in current) || typeof current[parts[i]] !== "object") {
      current[parts[i]] = {};
    }
    current = current[parts[i]];
  }
  current[parts[parts.length - 1]] = value;
}

module.exports = { setNestedValue };
```

Then import it in both onboard.js and config-set.js.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 1627 - 1637, The duplicate setNestedValue implementation should be extracted to a shared utility and imported where needed: create a new module (e.g., utils.js) that exports the setNestedValue function, remove the local setNestedValue definitions in the modules containing the duplicates, and replace them with a require/import of the shared setNestedValue; ensure the exported function name matches the current usage sites (setNestedValue) so callers in onboard.js and config-set.js continue to work without other changes.
1569-1580: `writeConfigOverridesFromPolicy` silently returns if the policy file lacks a `config_overrides` section.
The function searches for "\nconfig_overrides:\n" in nemoclaw-blueprint/policies/openclaw-sandbox.yaml, but this section does not exist in the file. The function returns early without writing anything and without logging output, making the behavior opaque. If no policy-derived defaults are expected at this stage, add debug logging to clarify intent:
Debug logging suggestion
```diff
   const startIdx = yaml.indexOf("\nconfig_overrides:\n");
-  if (startIdx === -1) return;
+  if (startIdx === -1) {
+    console.log("  ⓘ No config_overrides section in policy — skipping initial overrides file");
+    return;
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 1569-1580: The function writeConfigOverridesFromPolicy currently returns silently when the policy file is missing or when the "\nconfig_overrides:\n" section isn't found; update writeConfigOverridesFromPolicy to emit a clear debug/info log in both early-return cases (when fs.existsSync(policyPath) is false and when startIdx === -1) that includes the policyPath and a short message (e.g., "policy file not found" or "no config_overrides section present") so callers can understand why no overrides were written; locate the logic around policyPath, fs.existsSync(policyPath), and the startIdx === -1 check and add the logging there (use the existing logging mechanism or console if none exists).

patches/openclaw-config-overrides.patch (1)
10-10: Consider removing synchronous debug logging before production.
The `appendFileSync` calls on lines 10, 14, 26, and 30 are helpful for POC debugging but add synchronous I/O overhead in the config resolution path. Consider gating these behind a debug flag or removing them for production.
Verify each finding against the current code and only fix it if needed. In `@patches/openclaw-config-overrides.patch` at line 10: The synchronous debug logging calls using fs$1.appendFileSync (seen writing to "/sandbox/.openclaw-data/nemoclaw-shim.log") should not run unconditionally in the config resolution path; change the code to either remove these appendFileSync calls or gate them behind a debug flag (e.g. an environment variable like OPENCLAW_DEBUG or a module-level debug constant) and/or switch to asynchronous logging (fs.appendFile) so they don't block execution; update every occurrence (the appendFileSync calls that mention OPENCLAW_CONFIG_OVERRIDES_FILE and any similar lines) to check the debug flag before logging or replace with non-blocking I/O.

bin/lib/config-set.js (1)
44-46: Redundant `require()` calls inside functions.
`fs` is already required at line 8, and `os` is required multiple times inside functions (lines 45, 63, 88). Move these to the top of the file for clarity.
Verify each finding against the current code and only fix it if needed. In `@bin/lib/config-set.js` around lines 44 - 46, The function sandboxRun contains redundant require() calls for "os" and "fs" (fs is already required earlier and os is required multiple times inside functions); refactor by moving const os = require("os") and const fs = require("fs") to the top-level module scope and remove the duplicate requires inside sandboxRun and any other functions (e.g., where os is required again), keeping the rest of sandboxRun's logic unchanged and referencing the top-level os and fs variables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/config-set.js`:
- Around line 168-172: The gateway check is case-sensitive and can be bypassed
(e.g., "Gateway" or "GATEWAY"); update the condition that guards key to use a
case-insensitive comparison by lowercasing the key (e.g., call
key.toLowerCase()) and then test startsWith("gateway.") or === "gateway" so keys
like "Gateway.auth.token" are refused; apply this change where the variable key
is evaluated in the existing block that currently checks
key.startsWith("gateway.") || key === "gateway".
- Around line 73-74: The code reads the overrides file into raw (const raw =
fs.readFileSync(dlFile, "utf-8")) and then calls JSON.parse(raw), which fails
for JSON5 features; update this to use a JSON5 parser (e.g., require/import
json5 and call JSON5.parse(raw)) so comments/trailing commas are accepted, and
add the json5 dependency to package.json; ensure you reference the same
dlFile/raw variables and replace the JSON.parse call in config-set.js
accordingly.
In `@Dockerfile`:
- Around line 73-74: The env variable OPENCLAW_CONFIG_OVERRIDES_FILE currently
points to a .json5 file but the shim in patches/apply-openclaw-shim.js uses
JSON.parse (which only accepts strict JSON); fix by making the formats
consistent — either rename OPENCLAW_CONFIG_OVERRIDES_FILE to end with .json
(update the Dockerfile value) or update patches/apply-openclaw-shim.js to parse
JSON5 (import/require the json5 package and replace JSON.parse with JSON5.parse
when reading the file) and ensure the dependency is available at build/runtime.
- Around line 29-31: The apply-openclaw-shim step can silently succeed if fewer
files are patched than expected; update patches/apply-openclaw-shim.js to
enforce a hard failure by defining an EXPECTED_PATCHES constant (set to 6) and,
after the patch loop that increments patched, compare patched to
EXPECTED_PATCHES and call process.exit(1) after logging an error if they differ
so the Docker RUN fails when the shim no longer matches upstream bundles.
In `@patches/apply-openclaw-shim.js`:
- Around line 17-18: The shim currently reads the override file into _raw and
uses JSON.parse to produce _ov, which fails for JSON5 files; change the parse
step to use a JSON5 parser (replace JSON.parse(_raw) with JSON5.parse(_raw)) and
ensure you require the JSON5 module available in the OpenClaw bundle (e.g.
require the existing deps.json5 or require('json5') depending on the bundle
context) so the code uses JSON5.parse; update the code that sets _ov to call
JSON5.parse and add the corresponding require/import for JSON5 inside
apply-openclaw-shim.js.
- Around line 19-20: The code currently only deletes the lowercase property
using delete _ov.gateway which allows keys like "Gateway" or "GATEWAY" to
remain; update the logic that works with the overrides object (_ov) to normalize
key casing by iterating Object.keys(_ov) and deleting any property whose
key.toLowerCase() === 'gateway' (i.e., remove all case variants), ensuring you
still check _ov exists and is an object before iterating.
In `@patches/apply-openclaw-shim.sh`:
- Around line 1-62: This script patches OpenClaw files but is unused—either
remove patches/apply-openclaw-shim.sh from the repo or add clear documentation
about when/how it is invoked; if you intend to keep it, fix
_nemoClawMergeOverrides in the SHIM: replace JSON.parse with a JSON5 parser
(require("json5").parse or similar) to match the "JSON5 overlay" claim, and
instead of the case-sensitive delete _ov.gateway perform a case-insensitive
removal of any top-level keys named "gateway" (e.g., iterate Object.keys(_ov)
and delete keys where key.toLowerCase() === "gateway") before the deep-merge
implementation (_dm) to ensure gateway.* keys are stripped regardless of casing.
In `@patches/openclaw-config-overrides.patch`:
- Around line 16-17: The current deletion only removes the lowercase key (delete
_ov.gateway) and misses case variants; update the cleanup to remove any key
whose case-insensitive name equals "gateway" (e.g., iterate Object.keys(_ov) and
delete keys where key.toLowerCase() === "gateway") so both "gateway" and
"Gateway" (and other variants) are removed; apply the same pattern used in
apply-openclaw-shim.js for consistency with the security model.
In `@patches/openshell-config-approval.patch`:
- Around line 110-114: The gateway block check in the Rust scanner (the if using
key.starts_with("gateway.") || key == "gateway" in the shown snippet) is
case-sensitive and can be bypassed; change the check to normalize case (e.g.,
use key.to_lowercase() and then starts_with("gateway.") or == "gateway") and
keep the existing behavior of warn!(...) and processed_files.push(path) /
continue; also apply the same case-normalization fix to the client-side check in
bin/lib/config-set.js so keys like "Gateway.auth.token" or "GATEWAY" are
correctly blocked.
- Line 213: The current config applier only deletes the exact lowercase
"gateway" key via merged.remove("gateway"), which misses variants like "Gateway"
or "GATEWAY"; update the removal logic to perform a case-insensitive removal by
scanning the merged map's keys for any that equalIgnoreCase("gateway") (collect
matches and remove them) so all casing variants are deleted; locate the usage of
merged.remove("gateway") and replace it with the case-insensitive key
discovery-and-remove approach to ensure the security constraint is enforced for
any key casing.
In `@scripts/nemoclaw-start.sh`:
- Around line 257-260: The current block writes and chowns
OPENCLAW_CONFIG_OVERRIDES_FILE even if it's a symlink into sandbox-writable
areas; to fix, before creating the file in the script, validate the path: ensure
OPENCLAW_CONFIG_OVERRIDES_FILE is not a symlink (test -L) and does not resolve
to a location under /sandbox/.openclaw-data (check prefix or use readlink -f and
reject paths starting with /sandbox/.openclaw-data); if either check fails, do
not create or chown the file and emit a clear error/log message; otherwise
safely create the file (echo '{}' > ...) and chown as currently done.
In `@scripts/poc-round-trip-test.sh`:
- Around line 21-31: The script defines SANDBOX_NAME="poc-test" but then calls
nemoclaw onboard (and relies on interactive wait_enter), which will create a
sandbox with the default name instead of using $SANDBOX_NAME; update the call
site so the onboarding command uses the intended sandbox name (e.g., pass the
name to the nemoclaw onboard command or use the CLI's non-interactive
flag/parameter to set the sandbox to SANDBOX_NAME), or alternatively replace the
interactive flow (wait_enter / nemoclaw onboard) with a prompt that instructs
the user to enter "poc-test" when asked and verify the onboarding
returned/created SANDBOX_NAME before proceeding. Ensure changes reference
SANDBOX_NAME, wait_enter, and the nemoclaw onboard invocation so later steps
that use $SANDBOX_NAME refer to the actual created sandbox.
In `@test/config-set.test.js`:
- Around line 18-22: The allow-list test currently only ensures no keys start
with "gateway.", but must also reject the exact "gateway" key; update the test
that iterates over allowList (variable allowList in the "does NOT include
gateway paths" it block) to assert each key is neither equal to "gateway" nor
startsWith("gateway."), e.g. replace the single
assert.ok(!key.startsWith("gateway."), ...) with a combined check that fails if
key === "gateway" or key.startsWith("gateway."), preserving the existing failure
message and context.
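Several of the inline findings above request the same case-insensitive gateway guard in different places (config-set.js, the shim, and the Rust scanner). A minimal JavaScript sketch of the intended check, to be mirrored in each location (the function name is illustrative):

```javascript
// Reject "gateway" and any "gateway.*" path regardless of casing,
// so "Gateway.auth.token" or "GATEWAY" cannot bypass the block.
function isGatewayKey(key) {
  const k = key.toLowerCase();
  return k === "gateway" || k.startsWith("gateway.");
}
```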
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 53a2a906-0e41-42b0-813e-5e35426124e2
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (22)
- Dockerfile
- bin/lib/config-set.js
- bin/lib/onboard.js
- bin/nemoclaw.js
- nemoclaw-blueprint/blueprint.yaml
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
- nemoclaw-blueprint/policies/presets/discord.yaml
- nemoclaw-blueprint/policies/presets/docker.yaml
- nemoclaw-blueprint/policies/presets/huggingface.yaml
- nemoclaw-blueprint/policies/presets/jira.yaml
- nemoclaw-blueprint/policies/presets/outlook.yaml
- nemoclaw-blueprint/policies/presets/slack.yaml
- nemoclaw-blueprint/policies/presets/telegram.yaml
- patches/apply-openclaw-shim.js
- patches/apply-openclaw-shim.sh
- patches/openclaw-config-overrides.patch
- patches/openshell-config-approval.patch
- scripts/install-openshell.sh
- scripts/nemoclaw-start.sh
- scripts/poc-round-trip-test.sh
- test/config-set.test.js
- test/policies.test.js
💤 Files with no reviewable changes (7)
- nemoclaw-blueprint/policies/presets/telegram.yaml
- nemoclaw-blueprint/policies/presets/huggingface.yaml
- nemoclaw-blueprint/policies/presets/docker.yaml
- nemoclaw-blueprint/policies/presets/discord.yaml
- nemoclaw-blueprint/policies/presets/jira.yaml
- nemoclaw-blueprint/policies/presets/outlook.yaml
- nemoclaw-blueprint/policies/presets/slack.yaml
Resolve Dockerfile conflict: keep both the hardening step from main (#830) and the config overrides shim from this branch.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Around line 1651-1657: The YAML read from policyPath may contain CRLFs or the
config_overrides block at byte 0, so normalize newlines before searching:
convert yaml to use "\n" line endings and then search for "config_overrides:\n"
allowing it to match at the file start or after a newline (e.g. check
yaml.startsWith("config_overrides:\n") or use a regex that accepts
start-of-string or "\n" before "config_overrides:"); update the startIdx logic
that currently uses yaml.indexOf("\nconfig_overrides:\n") to use the normalized
string and the adjusted search so the helper doesn't silently return on CRLF
checkouts or when the block is the first line.
- Around line 1259-1287: Call getInstalledOpenshellVersion() exactly once and
use its single result to derive both the parsed openshellVersion and the raw
output used to set isDevBuild, returning/propagating both pieces from
getInstalledOpenshellVersion(); if the function cannot parse a version or
returns null/undefined, exit with an error (fail-closed) instead of silently
skipping the gate. Remove the extra runCapture() call and any second invocation
of the openshell binary; update callers (the version check block that currently
references installedVersion and the later logic that computes isDevBuild) to
consume the single returned object (version and fullOutput) so isDevBuild is
computed from the same output string as openshellVersion.
- Around line 1693-1697: The success message is printed even when the `run` call
used to upload `scriptFile` to the sandbox is executed with `ignoreError: true`,
which hides failures; change the logic around the `run` invocation (the call
that uses `run(..., { ignoreError: true })` with variables `script`,
`scriptFile`, `sandboxName` and helper `writeSandboxConfigSyncFile`) to capture
and check the result/exit code or let errors surface instead of swallowing them,
only print "✓ Config overrides file written to sandbox" when the `run` completed
successfully, and still perform the best-effort cleanup of `scriptFile` via
`fs.unlinkSync` in a finally-like block.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3935db83-d5ea-4fc1-881f-909a65c62186
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (6)
- Dockerfile
- bin/lib/onboard.js
- bin/nemoclaw.js
- nemoclaw-blueprint/policies/presets/discord.yaml
- nemoclaw-blueprint/policies/presets/slack.yaml
- scripts/nemoclaw-start.sh
💤 Files with no reviewable changes (2)
- nemoclaw-blueprint/policies/presets/discord.yaml
- nemoclaw-blueprint/policies/presets/slack.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- scripts/nemoclaw-start.sh
- bin/nemoclaw.js
```javascript
// Enforce min_openshell_version from blueprint.yaml
const installedVersion = getInstalledOpenshellVersion();
if (installedVersion) {
  const blueprintPath = path.join(ROOT, "nemoclaw-blueprint", "blueprint.yaml");
  if (fs.existsSync(blueprintPath)) {
    const blueprintRaw = fs.readFileSync(blueprintPath, "utf-8");
    const minMatch = blueprintRaw.match(/min_openshell_version:\s*"([^"]+)"/);
    if (minMatch) {
      const minRequired = minMatch[1];
      const vGte = (a, b) => {
        const pa = a.split(".").map(Number);
        const pb = b.split(".").map(Number);
        for (let i = 0; i < 3; i++) {
          if ((pa[i] || 0) > (pb[i] || 0)) return true;
          if ((pa[i] || 0) < (pb[i] || 0)) return false;
        }
        return true;
      };
      if (!vGte(installedVersion, minRequired)) {
        console.error("");
        console.error(`  !! OpenShell ${installedVersion} is below the minimum required version ${minRequired}.`);
        console.error(`     Please upgrade: https://github.com/NVIDIA/OpenShell/releases`);
        console.error("");
        process.exit(1);
      }
      console.log(`  ✓ openshell version ${installedVersion} meets minimum ${minRequired}`);
    }
  }
}
```
🧩 Analysis chain
🏁 Scripts executed (Repository: NVIDIA/NemoClaw):

```shell
# First, find the file and check its size
find . -name "onboard.js" -path "*/bin/lib/*" -type f

# Find and read the onboard.js file
wc -l bin/lib/onboard.js

# Read the code at lines 1259-1287
sed -n '1259,1287p' bin/lib/onboard.js

# Read the code at lines 1397-1413
sed -n '1397,1413p' bin/lib/onboard.js

# Find the getInstalledOpenshellVersion function
rg -A 15 "function getInstalledOpenshellVersion|const getInstalledOpenshellVersion" bin/lib/onboard.js

# Find resolveOpenshell function
rg -A 10 "function resolveOpenshell|const resolveOpenshell" bin/lib/onboard.js

# Find runCaptureOpenshell function
rg -A 10 "function runCaptureOpenshell|const runCaptureOpenshell" bin/lib/onboard.js

# Get more context around line 1397 to understand the scope and variable usage
sed -n '1380,1420p' bin/lib/onboard.js

# Look for where openshellVersion is defined/set
rg -B 10 "openshellVersion" bin/lib/onboard.js | head -60
```
Consolidate redundant OpenShell version invocations and enforce fail-closed behavior.
The version check at lines 1259–1287 is fail-open: if getInstalledOpenshellVersion() returns null, the entire gate is silently skipped. Additionally, lines 1397–1398 redundantly invoke openshell -V twice—once inside getInstalledOpenshellVersion() and again via raw runCapture() to determine isDevBuild. This creates a gap: openshellVersion is parsed from the first invocation's output, while isDevBuild is checked against the second invocation's output. If the outputs differ (due to caching, binary drift, or environment changes), the dev/stable image selection will be inconsistent with the version tag.
Call getInstalledOpenshellVersion() once, extract both the version and the full output string from that single result, and fail explicitly when the output is unparseable.
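A minimal sketch of the recommended shape — the function name, version format, and dev-build marker here are illustrative, not the actual onboard.js API: parse one `openshell -V` output into both fields, and throw when it is unparseable so the gate fails closed.

```js
// Illustrative fail-closed parser: both fields come from the SAME output
// string, so the dev/stable image choice can never drift from the version tag.
function parseOpenshellVersion(fullOutput) {
  const m = /v?(\d+\.\d+\.\d+)/.exec(fullOutput || "");
  if (!m) {
    // Fail closed: an unparseable version aborts onboarding instead of
    // silently skipping the version gate.
    throw new Error(`Cannot parse openshell version from: ${JSON.stringify(fullOutput)}`);
  }
  return {
    version: m[1],
    // "dev"/"dirty" markers are an assumption about the binary's -V output.
    isDevBuild: /dev|dirty/.test(fullOutput),
  };
}

console.log(parseOpenshellVersion("openshell v0.0.15"));     // { version: '0.0.15', isDevBuild: false }
console.log(parseOpenshellVersion("openshell v0.0.15-dev")); // { version: '0.0.15', isDevBuild: true }
```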
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` around lines 1259 - 1287, Call
getInstalledOpenshellVersion() exactly once and use its single result to derive
both the parsed openshellVersion and the raw output used to set isDevBuild,
returning/propagating both pieces from getInstalledOpenshellVersion(); if the
function cannot parse a version or returns null/undefined, exit with an error
(fail-closed) instead of silently skipping the gate. Remove the extra
runCapture() call and any second invocation of the openshell binary; update
callers (the version check block that currently references installedVersion and
the later logic that computes isDevBuild) to consume the single returned object
(version and fullOutput) so isDevBuild is computed from the same output string
as openshellVersion.
```js
const yaml = fs.readFileSync(policyPath, "utf-8");

// Simple YAML extraction of config_overrides section.
// For a POC we parse the defaults with a lightweight approach rather than
// pulling in a full YAML parser at this layer (pyyaml is only in Docker).
const startIdx = yaml.indexOf("\nconfig_overrides:\n");
if (startIdx === -1) return;
```
Normalize YAML newlines before searching for config_overrides.
indexOf("\nconfig_overrides:\n") only matches LF line endings and also misses the block when it starts at byte 0. On a CRLF checkout this helper silently returns without seeding the overrides file.
🔧 Suggested fix

```diff
-const yaml = fs.readFileSync(policyPath, "utf-8");
+const yaml = fs.readFileSync(policyPath, "utf-8").replace(/\r\n?/g, "\n");
-const startIdx = yaml.indexOf("\nconfig_overrides:\n");
+const startIdx = yaml.search(/(?:^|\n)config_overrides:\n/);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` around lines 1651 - 1657, The YAML read from policyPath
may contain CRLFs or the config_overrides block at byte 0, so normalize newlines
before searching: convert yaml to use "\n" line endings and then search for
"config_overrides:\n" allowing it to match at the file start or after a newline
(e.g. check yaml.startsWith("config_overrides:\n") or use a regex that accepts
start-of-string or "\n" before "config_overrides:"); update the startIdx logic
that currently uses yaml.indexOf("\nconfig_overrides:\n") to use the normalized
string and the adjusted search so the helper doesn't silently return on CRLF
checkouts or when the block is the first line.
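A small standalone check of the two cases the review flags — CRLF checkouts and a block starting at byte 0 (the YAML strings here are invented for illustration):

```js
// Made-up fixtures: a file whose config_overrides block starts at byte 0,
// and a CRLF file where plain indexOf("\nconfig_overrides:\n") finds nothing.
const lf = "config_overrides:\n  agents.defaults.model.primary:\n";
const crlf = "top:\r\nconfig_overrides:\r\n  key:\r\n";

// Normalize newlines, then match at start-of-string or after "\n".
const find = (yaml) =>
  yaml.replace(/\r\n?/g, "\n").search(/(?:^|\n)config_overrides:\n/);

console.log(find(lf));   // 0  — matches at the very start of the file
console.log(find(crlf)); // 4  — matches after CRLF normalization
console.log("no such block".search(/(?:^|\n)config_overrides:\n/)); // -1
```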
```js
const script = `cat > /sandbox/.openclaw-data/config-overrides.json5 <<'EOF_OVERRIDES'\n${json}\nEOF_OVERRIDES\nexit\n`;
const scriptFile = writeSandboxConfigSyncFile(script);
run(`openshell sandbox connect "${sandboxName}" < ${shellQuote(scriptFile)}`, { ignoreError: true });
try { fs.unlinkSync(scriptFile); } catch { /* cleanup best-effort */ }
console.log("  ✓ Config overrides file written to sandbox");
```
🧩 Analysis chain

🏁 Scripts executed:

```sh
cat -n bin/lib/onboard.js | sed -n '1690,1700p'
rg -n "function run\(" bin/lib/onboard.js | head -20
rg -n "import.*run\|require.*run" bin/lib/onboard.js | head -20
head -50 bin/lib/onboard.js
rg -n "function run\|const run.*=|exports.run" bin/lib/runner.js | head -30
cat -n bin/lib/runner.js | head -100
rg -n "openshellShellCommand" bin/lib/
cat -n bin/lib/onboard.js | sed -n '362,365p'
cat -n bin/lib/onboard.js | sed -n '1680,1700p'
cat -n bin/lib/onboard.js | sed -n '1650,1700p'
cat -n bin/lib/onboard.js | sed -n '1640,1650p'
```
Check the result of sandbox connect before reporting success.
The write is best-effort today with ignoreError: true, which swallows a failed upload but still prints a success message. This leaves the sandbox running without the policy defaults and gives the user no indication of the failure.
🔧 Suggested fix

```diff
-  run(`openshell sandbox connect "${sandboxName}" < ${shellQuote(scriptFile)}`, { ignoreError: true });
+  const result = run(
+    `${openshellShellCommand(["sandbox", "connect", sandboxName])} < ${shellQuote(scriptFile)}`,
+    { ignoreError: true }
+  );
+  if (result.status !== 0) {
+    throw new Error(`Failed to write config overrides for sandbox '${sandboxName}'`);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` around lines 1693 - 1697, The success message is printed
even when the `run` call used to upload `scriptFile` to the sandbox is executed
with `ignoreError: true`, which hides failures; change the logic around the
`run` invocation (the call that uses `run(..., { ignoreError: true })` with
variables `script`, `scriptFile`, `sandboxName` and helper
`writeSandboxConfigSyncFile`) to capture and check the result/exit code or let
errors surface instead of swallowing them, only print "✓ Config overrides file
written to sandbox" when the `run` completed successfully, and still perform the
best-effort cleanup of `scriptFile` via `fs.unlinkSync` in a finally-like block.
- Prefix unused sandboxRun with _ for no-unused-vars
- Add comment to empty catch block for no-empty
- Add SPDX license header to poc-round-trip-test.sh
- Regex and shfmt auto-fixes from pre-commit hooks

Covers the full chain:
- Shim injection patches OpenClaw dist files correctly
- Shim deep-merges overrides at runtime, strips gateway.*
- config-set refuses gateway.* keys (security enforcement)
- config-set refuses missing --key/--value args
- Round-trip: write overrides → shim reads and applies them
- Defense in depth: gateway.* in overrides file stripped by shim
- Graceful handling of malformed JSON, empty overrides, arrays

Replaces the JS-only unit tests with a TypeScript test that covers:
- Full E2E (Docker + sandbox): onboard → config-set → verify overrides in sandbox → verify gateway.* refused → verify shim applies override → cleanup (skipped when Docker/API key unavailable)
- Shim unit verification: injection, deep-merge, gateway stripping, malformed JSON, array replacement (always runs)
- config-set security: gateway.* refusal, missing args (always runs)
- apply-openclaw-shim.js: multi-file patching, no-match exit (always runs)

- Detect Colima Docker socket via `docker context inspect` so openshell finds Docker when the default socket points to Docker Desktop
- Use background+wait pattern from test-full-e2e.sh to prevent install.sh's background port-forward from blocking execSync
- Deep cleanup in beforeAll: kill stale port-forwards, remove Docker containers/volumes from previous failed runs
- Fix sandbox upload destination (directory, not file path)
- Increase timeout to 20 min for full install.sh Docker image builds
- Use openshell 0.0.15 release (not dev build) per blueprint minimum

28/28 tests pass including full install.sh → gateway → sandbox → config-set → config-get → gateway.* refusal → shim defense-in-depth → cleanup.

- E2E test clones OpenShell, applies config-approval patch, builds via mise run cluster, creates sandbox with staged build context
- Setup script (scripts/setup-e2e-demo.sh) does full from-zero setup and prints instructions for the two-terminal interactive demo
- POC script (scripts/poc-round-trip-test.sh) walks through the TUI approval flow: config request → scanner → PolicyChunk → TUI approve → overrides file written
- OpenShell patch fixes: sandbox_name (not UUID) passed to get_draft_policy, chmod 777 on config-requests dir so sandbox user can write, all output visible with stdio inherit

Known issue: approved config chunks keep rewriting the overrides file on every poll cycle (should skip when unchanged or clear after apply).
…into feat/runtime-config-mutability
Three items:
1. Approved config chunks rewrite overrides every poll cycle (need mark-consumed or skip-if-unchanged)
2. TUI detail view is empty for CONFIG chunks (need to show key + proposed value from rationale field)
3. E2E should test system prompt change, not just model — that's the actual use case (agent self-modification with operator approval)
Actionable comments posted: 3
🧹 Nitpick comments (10)
test/config-mutability-e2e.test.ts (4)
294-304: Registry entry structure duplicated from setup script and registry.js.

The sandbox registry entry structure is now duplicated in three places: `bin/lib/registry.js`, `scripts/setup-e2e-demo.sh`, and here. Consider extracting a shared test utility or using the `registerSandbox` function directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/config-mutability-e2e.test.ts` around lines 294 - 304, Duplicate sandbox registry entry creation should be replaced with the shared utility to avoid drift: remove the manual construction of the registry object and calls that write to the sandboxes.json in test/config-mutability-e2e.test.ts and instead call the existing registerSandbox function (from bin/lib/registry.js or the test utilities) or import a shared helper used by scripts/setup-e2e-demo.sh; ensure you pass SANDBOX_NAME and the same fields (name, createdAt, model, nimContainer, provider, gpuEnabled, policies) to registerSandbox so the registry is written consistently and delete the in-test JSON read/write block.
469-477: Missing null check before accessing parsed.gateway.

`sandboxDownload` returns an empty string on error (line 119), but `JSON.parse("")` throws. If the download fails, this test will throw with an unhelpful parse error rather than a clear assertion failure.

♻️ Suggested improvement

```diff
   const raw = sandboxDownload("/sandbox/.openclaw-data/config-overrides.json5");
+  expect(raw).toBeTruthy(); // Fail early with clear message if download failed
   const parsed = JSON.parse(raw);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/config-mutability-e2e.test.ts` around lines 469 - 477, The test currently calls JSON.parse(raw) without guarding against sandboxDownload returning an empty string, so add a null/empty check before parsing: verify the raw result from sandboxDownload("/sandbox/.openclaw-data/config-overrides.json5") is non-empty (e.g. expect(raw).toBeTruthy() or fail with a clear message) and only then call JSON.parse(raw) to produce parsed; this ensures parsing errors are reported as a clear test assertion rather than an uncaught JSON.parse("") exception when checking parsed.gateway.
388-391: JSON.parse on .json5 file content may fail.

Similar to the issue in `bin/lib/config-set.js`, this test uses `JSON.parse(content)` on a file named `config-overrides.json5`. If the file contains JSON5 features (comments, trailing commas), this will throw.

Since this is a test validating content written by `writeOverrides` (which uses `JSON.stringify`), it will work for now, but the test would break if the shim or Rust code ever writes JSON5 syntax.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/config-mutability-e2e.test.ts` around lines 388 - 391, The test reads a .json5 file via sandboxDownload and uses JSON.parse(content), which will fail on JSON5 syntax; update the test to parse with a JSON5-aware parser instead (replace JSON.parse(content) with JSON5.parse(content) and add the JSON5 import) so the assertion on parsed.agents.defaults.model.primary still works even if the file contains JSON5 features written by writeOverrides or the Rust shim.
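A quick illustration of the failure mode using only Node's built-in `JSON` (the config strings below are invented): strict `JSON.parse` rejects JSON5 niceties such as unquoted keys and trailing commas, so any writer that emits JSON5 syntax breaks a `JSON.parse`-based test.

```js
// Strict JSON — parses fine.
const strictJson = '{"agents": {"defaults": {"model": {"primary": "x"}}}}';
// JSON5-style — unquoted keys and a trailing comma are legal JSON5 but not JSON.
const json5ish = '{agents: {defaults: {model: {primary: "x"}}},}';

console.log(JSON.parse(strictJson).agents.defaults.model.primary); // prints: x

let rejected = false;
try {
  JSON.parse(json5ish);
} catch (e) {
  rejected = e instanceof SyntaxError;
}
console.log("strict parse rejects JSON5 syntax:", rejected); // prints: true
```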
316-317: Blocking sleep in async context.

`execSync("sleep 2")` blocks the entire Node.js event loop. In a Vitest context, this prevents proper timeout handling and signal processing. Consider using an async sleep utility.

♻️ Suggested improvement

```diff
+const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
+
 // In beforeAll (make it async):
-execSync("sleep 2");
+await sleep(2000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/config-mutability-e2e.test.ts` around lines 316 - 317, The test currently uses execSync("sleep 2") which blocks the Node event loop; replace this blocking call (execSync) with an asynchronous sleep (e.g., an awaitable timeout/promisified setTimeout) and await it from the surrounding test/setup so the event loop remains responsive; also ensure the enclosing test or hook is declared async so awaiting the non-blocking sleep works correctly.

bin/lib/config-set.js (3)
105-115: setNestedValue overwrites non-object intermediate values without warning.

If `current[parts[i]]` exists but is not an object (e.g., a string or number), it will be silently replaced with `{}`. Consider logging a warning or throwing an error to alert the user that an existing value is being clobbered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/config-set.js` around lines 105 - 115, The setNestedValue function currently silently replaces non-object intermediate values; update setNestedValue to detect when current[parts[i]] exists and typeof current[parts[i]] !== "object" and then either throw a descriptive Error or emit a clear warning instead of overwriting—e.g., in setNestedValue, when iterating parts, if a non-object is encountered at an intermediate key (current and parts[i]), throw (or console.warn) with the path and existing value so the caller knows the value would be clobbered; keep the rest of the traversal logic the same and ensure the check references setNestedValue, obj, and dottedPath so it triggers before assigning current[parts[i]] = {}.
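The requested guard can be sketched as follows — this `setNestedValue` variant is illustrative, not the actual implementation in bin/lib/config-set.js:

```js
// Illustrative setNestedValue that refuses to clobber an existing
// non-object intermediate value instead of silently replacing it with {}.
function setNestedValue(obj, dottedPath, value) {
  const parts = dottedPath.split(".");
  let current = obj;
  for (let i = 0; i < parts.length - 1; i++) {
    const existing = current[parts[i]];
    if (existing !== undefined && (typeof existing !== "object" || existing === null)) {
      throw new Error(
        `Refusing to overwrite non-object value at '${parts.slice(0, i + 1).join(".")}' (${JSON.stringify(existing)})`
      );
    }
    if (existing === undefined) current[parts[i]] = {};
    current = current[parts[i]];
  }
  current[parts[parts.length - 1]] = value;
}

const cfg = { agents: { defaults: "compact" } };
try {
  setNestedValue(cfg, "agents.defaults.model.primary", "llama");
} catch (e) {
  console.log(e.message); // Refusing to overwrite non-object value at 'agents.defaults' ("compact")
}
```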
18-38: Fragile YAML parsing via regex may break on edge cases.

The manual regex-based YAML extraction (`entryPattern = /^ {2}([\w.]+):/gm`) assumes exactly 2-space indentation and will fail if the YAML uses tabs, different indent levels, or if `config_overrides:` appears in a comment. Consider using a proper YAML parser like `js-yaml`.

♻️ Suggested improvement

```diff
+const yaml = require("js-yaml");
+
 function loadAllowList() {
   const policyPath = path.join(ROOT, "nemoclaw-blueprint", "policies", "openclaw-sandbox.yaml");
   if (!fs.existsSync(policyPath)) return new Set();
-  const yaml = fs.readFileSync(policyPath, "utf-8");
-  // Extract everything after "config_overrides:" to end of file
-  const startIdx = yaml.indexOf("\nconfig_overrides:\n");
-  if (startIdx === -1) return new Set();
-  const block = yaml.slice(startIdx);
-
+  const raw = fs.readFileSync(policyPath, "utf-8");
+  const doc = yaml.load(raw);
+  if (!doc || !doc.config_overrides) return new Set();
+
   const keys = new Set();
-  // Match top-level entries: exactly 2-space indent, dotted path, colon
-  const entryPattern = /^ {2}([\w.]+):/gm;
-  let m;
-  while ((m = entryPattern.exec(block)) !== null) {
-    // Skip "default:" which is a value key, not an entry key
-    if (m[1] === "default") continue;
-    keys.add(m[1]);
+  for (const key of Object.keys(doc.config_overrides)) {
+    if (key === "default") continue;
+    keys.add(key);
   }
   return keys;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/config-set.js` around lines 18 - 38, The loadAllowList function uses fragile regex to parse YAML; replace the manual parsing with a proper YAML parse using a library like js-yaml: read the file as now, call yaml.load(...) to get an object, locate the config_overrides property on the root object, iterate its own keys (skipping "default") and add them to the Set; keep the existing behavior of returning an empty Set if file missing or config_overrides is absent and retain the function name loadAllowList and the returned Set so callers continue to work.
46-47: Temp file created with mode 0o600 but script content is not sanitized.

If `script` contains shell metacharacters from untrusted input, this could lead to command injection. While the current callers appear controlled, adding a comment or validation would improve safety.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/config-set.js` around lines 46 - 47, The temp file write uses fs.writeFileSync(tmpFile, script + "\nexit\n", { mode: 0o600 }) with script coming from callers; validate or sanitize the script string before writing to prevent shell metacharacter/command-injection risks (or explicitly reject/escape unsafe characters), and add an inline comment near tmpFile/fs.writeFileSync explaining why sanitization is required; reference the variables tmpFile, script, and the fs.writeFileSync call when implementing the checks so callers producing untrusted input are guarded.

scripts/setup-e2e-demo.sh (1)
185-210: Inline Node.js for registry manipulation duplicates registry.js logic.

The registry entry structure here must stay in sync with `bin/lib/registry.js` (per context snippet). Consider invoking the existing `registerSandbox` function instead of duplicating the structure.

♻️ Suggested improvement

```diff
-node -e "
-  const fs = require('fs');
-  const r = JSON.parse(fs.readFileSync('$REGISTRY', 'utf8'));
-  r.sandboxes = r.sandboxes || {};
-  r.sandboxes['$SANDBOX_NAME'] = {
-    name: '$SANDBOX_NAME',
-    createdAt: new Date().toISOString(),
-    model: null, nimContainer: null, provider: null, gpuEnabled: false, policies: []
-  };
-  fs.writeFileSync('$REGISTRY', JSON.stringify(r, null, 2));
-"
+node -e "
+  const { registerSandbox } = require('$ROOT/bin/lib/registry');
+  registerSandbox({ name: '$SANDBOX_NAME' });
+"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-e2e-demo.sh` around lines 185 - 210, Replace the inline Node.js JSON manipulation with a call to the existing registerSandbox function exported by registry.js instead of duplicating the registry object shape; import the module that exports registerSandbox, call registerSandbox({ name: SANDBOX_NAME, createdAt: new Date().toISOString(), model: null, nimContainer: null, provider: null, gpuEnabled: false, policies: [] }) and ensure you also set the default sandbox when creating the first entry using the same helper so the registry shape remains consistent with registry.js.

patches/openshell-config-approval.patch (2)
62-68: World-writable directory (0o777) for config-requests.

Creating `/sandbox/.openclaw-data/config-requests` with mode `0o777` allows any process in the sandbox to write config request files. While this is intentional to allow the sandbox user to create requests, consider documenting this security trade-off or restricting to the sandbox user's group if possible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@patches/openshell-config-approval.patch` around lines 62 - 68, The new code sets CONFIG_REQUESTS_DIR to 0o777 via std::fs::set_permissions which makes the directory world-writable; instead either document this security trade-off in a comment near the PermissionsExt/set_permissions call or restrict permissions and ownership: change the permission to group-writable (e.g., 0o770) and ensure the directory is chowned/chgrp'd to the sandbox user/group (use std::fs::set_permissions with Permissions::from_mode(0o770) and call std::fs::set_owner/chown equivalent or use nix::unistd::chown) so only the sandbox user/group can write, referencing CONFIG_REQUESTS_DIR, set_permissions, and PermissionsExt in your change.
301-330: Config approval returns empty policy_version and policy_hash.

When approving a `config:` chunk, the response returns `policy_version: 0` and `policy_hash: String::new()`. Clients that rely on these fields for cache invalidation or version tracking may behave unexpectedly. Consider returning the current policy version/hash even for config chunks, or documenting this behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@patches/openshell-config-approval.patch`:
- Around line 170-191: The early returns after
CachedOpenShellClient::connect(...) and client.submit_policy_analysis(...) cause
processed_files cleanup to be skipped; ensure processed_files are deleted
regardless of submission outcome by moving the for path in processed_files {
std::fs::remove_file(&path); } loop to run before any return or by using an RAII
guard/cleanup function that removes files in processed_files in all code paths
(references: submit_policy_analysis, CachedOpenShellClient::connect,
processed_files).
In `@scripts/setup-e2e-demo.sh`:
- Around line 99-101: The cleanup commands are unsafe/non-portable: avoid
blindly killing any PID output by `lsof -ti :8080,:18789` and remove GNU-only
`xargs -r` used with `docker rmi`; instead, check that `lsof` returns PIDs
before calling `kill` (e.g., capture/verify non-empty PID list from `lsof -t`
and then call `kill` on that list) and for the Docker cleanup, test that `docker
images --format '{{.Repository}}:{{.Tag}}' | grep openshell` finds results
before piping to `xargs docker rmi -f` (or omit `-r` and guard with an if/test
to avoid invoking `docker rmi` with empty args) so the `lsof -ti :8080,:18789 |
xargs kill` and `docker images ... | grep openshell | xargs -r docker rmi -f`
invocations are replaced with guarded, portable checks.
In `@test/config-mutability-e2e.test.ts`:
- Around line 99-105: The osh helper is vulnerable to shell injection because it
builds a command string with manual single-quote quoting and passes it to
execSync; replace the execSync string invocation with Node's execFileSync to
pass arguments safely (call execFileSync('openshell', args, { encoding: 'utf-8',
timeout: TIMEOUT_MED, env: baseEnv })) so the args array is handled without
shell interpolation, keep the function signature osh(...args: string[]): string
and trim the returned output as before.
---
Nitpick comments:
In `@bin/lib/config-set.js`:
- Around line 105-115: The setNestedValue function currently silently replaces
non-object intermediate values; update setNestedValue to detect when
current[parts[i]] exists and typeof current[parts[i]] !== "object" and then
either throw a descriptive Error or emit a clear warning instead of
overwriting—e.g., in setNestedValue, when iterating parts, if a non-object is
encountered at an intermediate key (current and parts[i]), throw (or
console.warn) with the path and existing value so the caller knows the value
would be clobbered; keep the rest of the traversal logic the same and ensure the
check references setNestedValue, obj, and dottedPath so it triggers before
assigning current[parts[i]] = {}.
- Around line 18-38: The loadAllowList function uses fragile regex to parse
YAML; replace the manual parsing with a proper YAML parse using a library like
js-yaml: read the file as now, call yaml.load(...) to get an object, locate the
config_overrides property on the root object, iterate its own keys (skipping
"default") and add them to the Set; keep the existing behavior of returning an
empty Set if file missing or config_overrides is absent and retain the function
name loadAllowList and the returned Set so callers continue to work.
- Around line 46-47: The temp file write uses fs.writeFileSync(tmpFile, script +
"\nexit\n", { mode: 0o600 }) with script coming from callers; validate or
sanitize the script string before writing to prevent shell
metacharacter/command-injection risks (or explicitly reject/escape unsafe
characters), and add an inline comment near tmpFile/fs.writeFileSync explaining
why sanitization is required; reference the variables tmpFile, script, and the
fs.writeFileSync call when implementing the checks so callers producing
untrusted input are guarded.
In `@patches/openshell-config-approval.patch`:
- Around line 62-68: The new code sets CONFIG_REQUESTS_DIR to 0o777 via
std::fs::set_permissions which makes the directory world-writable; instead
either document this security trade-off in a comment near the
PermissionsExt/set_permissions call or restrict permissions and ownership:
change the permission to group-writable (e.g., 0o770) and ensure the directory
is chowned/chgrp'd to the sandbox user/group (use std::fs::set_permissions with
Permissions::from_mode(0o770) and call std::fs::set_owner/chown equivalent or
use nix::unistd::chown) so only the sandbox user/group can write, referencing
CONFIG_REQUESTS_DIR, set_permissions, and PermissionsExt in your change.
In `@scripts/setup-e2e-demo.sh`:
- Around line 185-210: Replace the inline Node.js JSON manipulation with a call
to the existing registerSandbox function exported by registry.js instead of
duplicating the registry object shape; import the module that exports
registerSandbox, call registerSandbox({ name: SANDBOX_NAME, createdAt: new
Date().toISOString(), model: null, nimContainer: null, provider: null,
gpuEnabled: false, policies: [] }) and ensure you also set the default sandbox
when creating the first entry using the same helper so the registry shape
remains consistent with registry.js.
In `@test/config-mutability-e2e.test.ts`:
- Around line 294-304: Duplicate sandbox registry entry creation should be
replaced with the shared utility to avoid drift: remove the manual construction
of the registry object and calls that write to the sandboxes.json in
test/config-mutability-e2e.test.ts and instead call the existing registerSandbox
function (from bin/lib/registry.js or the test utilities) or import a shared
helper used by scripts/setup-e2e-demo.sh; ensure you pass SANDBOX_NAME and the
same fields (name, createdAt, model, nimContainer, provider, gpuEnabled,
policies) to registerSandbox so the registry is written consistently and delete
the in-test JSON read/write block.
- Around line 469-477: The test currently calls JSON.parse(raw) without guarding
against sandboxDownload returning an empty string, so add a null/empty check
before parsing: verify the raw result from
sandboxDownload("/sandbox/.openclaw-data/config-overrides.json5") is non-empty
(e.g. expect(raw).toBeTruthy() or fail with a clear message) and only then call
JSON.parse(raw) to produce parsed; this ensures parsing errors are reported as a
clear test assertion rather than an uncaught JSON.parse("") exception when
checking parsed.gateway.
- Around line 388-391: The test reads a .json5 file via sandboxDownload and uses
JSON.parse(content), which will fail on JSON5 syntax; update the test to parse
with a JSON5-aware parser instead (replace JSON.parse(content) with
JSON5.parse(content) and add the JSON5 import) so the assertion on
parsed.agents.defaults.model.primary still works even if the file contains JSON5
features written by writeOverrides or the Rust shim.
- Around line 316-317: The test currently uses execSync("sleep 2") which blocks
the Node event loop; replace this blocking call (execSync) with an asynchronous
sleep (e.g., an awaitable timeout/promisified setTimeout) and await it from the
surrounding test/setup so the event loop remains responsive; also ensure the
enclosing test or hook is declared async so awaiting the non-blocking sleep
works correctly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3e6e2dc7-4794-4272-b3f0-dc6a7a65ffb1
📒 Files selected for processing (5)
- bin/lib/config-set.js
- patches/openshell-config-approval.patch
- scripts/poc-round-trip-test.sh
- scripts/setup-e2e-demo.sh
- test/config-mutability-e2e.test.ts
✅ Files skipped from review due to trivial changes (1)
- scripts/poc-round-trip-test.sh
```diff
+    if !chunks.is_empty() {
+        match CachedOpenShellClient::connect(endpoint).await {
+            Ok(client) => {
+                if let Err(e) = client
+                    .submit_policy_analysis(sandbox_name, vec![], chunks, "config")
+                    .await
+                {
+                    warn!(error = %e, "Failed to submit config change requests");
+                    return;
+                }
+            }
+            Err(e) => {
+                warn!(error = %e, "Failed to connect to gateway for config submission");
+                return;
+            }
+        }
+    }
+
+    for path in processed_files {
+        let _ = std::fs::remove_file(&path);
+    }
+}
```
Processed files not deleted when chunk submission fails.
If submit_policy_analysis fails (line 177), the function returns early without deleting the files in processed_files. This means the same files will be reprocessed on the next poll cycle, potentially causing repeated submission attempts.
♻️ Suggested fix: Move file deletion before the early return or use RAII pattern

```diff
+    // Delete processed files regardless of submission success to avoid
+    // infinite retry loops for malformed or gateway.* requests.
+    for path in &processed_files {
+        let _ = std::fs::remove_file(path);
+    }
+
     if !chunks.is_empty() {
         match CachedOpenShellClient::connect(endpoint).await {
             Ok(client) => {
                 if let Err(e) = client
                     .submit_policy_analysis(sandbox_name, vec![], chunks, "config")
                     .await
                 {
                     warn!(error = %e, "Failed to submit config change requests");
                     return;
                 }
             }
             Err(e) => {
                 warn!(error = %e, "Failed to connect to gateway for config submission");
                 return;
             }
         }
     }
-
-    for path in processed_files {
-        let _ = std::fs::remove_file(&path);
-    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    // Delete processed files regardless of submission success to avoid
    // infinite retry loops for malformed or gateway.* requests.
    for path in &processed_files {
        let _ = std::fs::remove_file(path);
    }

    if !chunks.is_empty() {
        match CachedOpenShellClient::connect(endpoint).await {
            Ok(client) => {
                if let Err(e) = client
                    .submit_policy_analysis(sandbox_name, vec![], chunks, "config")
                    .await
                {
                    warn!(error = %e, "Failed to submit config change requests");
                    return;
                }
            }
            Err(e) => {
                warn!(error = %e, "Failed to connect to gateway for config submission");
                return;
            }
        }
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@patches/openshell-config-approval.patch` around lines 170 - 191, The early
returns after CachedOpenShellClient::connect(...) and
client.submit_policy_analysis(...) cause processed_files cleanup to be skipped;
ensure processed_files are deleted regardless of submission outcome by moving
the for path in processed_files { std::fs::remove_file(&path); } loop to run
before any return or by using an RAII guard/cleanup function that removes files
in processed_files in all code paths (references: submit_policy_analysis,
CachedOpenShellClient::connect, processed_files).
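The cleanup-in-all-paths pattern the prompt asks for can be sketched with `try`/`finally` — shown here in JavaScript as an analog of the suggested RAII guard; all names below are illustrative, not the Rust code's:

```javascript
// Run cleanup in a finally block so processed files are removed on every
// exit path, including early exits when submission fails. Illustrative only.
function submitAndCleanup(chunks, processedFiles, submit, removeFile) {
  try {
    if (chunks.length > 0) {
      submit(chunks); // may throw, like the early-return error paths above
    }
  } finally {
    for (const path of processedFiles) {
      try {
        removeFile(path); // best-effort, mirroring `let _ = remove_file(...)`
      } catch {
        // ignore, matching the ignored Result in the Rust code
      }
    }
  }
}

// Cleanup runs even when submission throws.
const removed = [];
try {
  submitAndCleanup(
    [{ id: 1 }],
    ["/tmp/a.json", "/tmp/b.json"],
    () => { throw new Error("gateway unreachable"); },
    (p) => removed.push(p)
  );
} catch {
  // submission failure still propagates, but the files were removed
}
```

The `finally` block plays the role of `Drop` in the RAII variant: cleanup is tied to scope exit rather than to any particular return path.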
```sh
lsof -ti :8080,:18789 2>/dev/null | xargs kill 2>/dev/null || true
docker buildx prune -af 2>/dev/null || true
docker images --format '{{.Repository}}:{{.Tag}}' | grep openshell | xargs -r docker rmi -f 2>/dev/null || true
```
Potential portability and safety issues in cleanup commands.
- Line 99: `lsof -ti :8080,:18789 | xargs kill` could kill unrelated processes using those ports.
- Line 101: `xargs -r` is a GNU extension not available in macOS's default `xargs`. This will cause an "illegal option -- r" error on stock macOS.
♻️ Proposed fix for macOS compatibility

```diff
-lsof -ti :8080,:18789 2>/dev/null | xargs kill 2>/dev/null || true
+# Kill only openshell-related processes on these ports
+lsof -ti :8080,:18789 2>/dev/null | while read pid; do
+  ps -p "$pid" -o comm= 2>/dev/null | grep -q openshell && kill "$pid" 2>/dev/null
+done || true

-docker images --format '{{.Repository}}:{{.Tag}}' | grep openshell | xargs -r docker rmi -f 2>/dev/null || true
+docker images --format '{{.Repository}}:{{.Tag}}' | grep openshell | xargs docker rmi -f 2>/dev/null || true
```

Note: Without `-r`, an empty input to `xargs` will still run `docker rmi -f` with no arguments, but Docker handles that gracefully with an error message.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup-e2e-demo.sh` around lines 99 - 101, The cleanup commands are
unsafe/non-portable: avoid blindly killing any PID output by `lsof -ti
:8080,:18789` and remove GNU-only `xargs -r` used with `docker rmi`; instead,
check that `lsof` returns PIDs before calling `kill` (e.g., capture/verify
non-empty PID list from `lsof -t` and then call `kill` on that list) and for the
Docker cleanup, test that `docker images --format '{{.Repository}}:{{.Tag}}' |
grep openshell` finds results before piping to `xargs docker rmi -f` (or omit
`-r` and guard with an if/test to avoid invoking `docker rmi` with empty args)
so the `lsof -ti :8080,:18789 | xargs kill` and `docker images ... | grep
openshell | xargs -r docker rmi -f` invocations are replaced with guarded,
portable checks.
```ts
function osh(...args: string[]): string {
  return execSync(`openshell ${args.map((a) => `'${a}'`).join(" ")}`, {
    encoding: "utf-8",
    timeout: TIMEOUT_MED,
    env: baseEnv,
  }).trim();
}
```
Shell injection risk in osh() helper.
The helper uses template string interpolation with single quotes: `openshell ${args.map((a) => `'${a}'`).join(" ")}`. If any argument contains a single quote, this breaks out of the quoting and allows shell injection.
🛡️ Proposed fix: Use execFileSync for safe argument passing

```diff
 function osh(...args: string[]): string {
-  return execSync(`openshell ${args.map((a) => `'${a}'`).join(" ")}`, {
+  return execFileSync("openshell", args, {
     encoding: "utf-8",
     timeout: TIMEOUT_MED,
     env: baseEnv,
   }).trim();
 }
```

📝 Committable suggestion

```ts
function osh(...args: string[]): string {
  return execFileSync("openshell", args, {
    encoding: "utf-8",
    timeout: TIMEOUT_MED,
    env: baseEnv,
  }).trim();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/config-mutability-e2e.test.ts` around lines 99 - 105, The osh helper is
vulnerable to shell injection because it builds a command string with manual
single-quote quoting and passes it to execSync; replace the execSync string
invocation with Node's execFileSync to pass arguments safely (call
execFileSync('openshell', args, { encoding: 'utf-8', timeout: TIMEOUT_MED, env:
baseEnv })) so the args array is handled without shell interpolation, keep the
function signature osh(...args: string[]): string and trim the returned output
as before.
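The breakage can be shown at the string level without executing anything — naive single-quote wrapping splits on an embedded quote. The `naiveQuote` helper below is illustrative, reproducing the test helper's quoting logic outside the test:

```javascript
// POSIX single quotes cannot contain a single quote, so wrapping each
// argument in '...' splits any argument that itself contains a quote,
// letting the remainder reach the shell unquoted.
function naiveQuote(args) {
  return `openshell ${args.map((a) => `'${a}'`).join(" ")}`;
}

// A benign argument is quoted as intended...
const ok = naiveQuote(["config-get"]);

// ...but an argument containing a single quote breaks out of the quoting,
// turning attacker-controlled text into shell syntax.
const bad = naiveQuote(["x'; rm -rf /tmp/demo; echo '"]);
```

With `execFileSync`, the args array is passed to the process directly and no shell ever parses these strings, so the question of quoting disappears entirely.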
…utability

# Conflicts:
#	bin/lib/onboard.js
config-mutability-remaining-work.md is not a user-facing doc page. Sphinx -W fails because it isn't in any toctree.
Actionable comments posted: 1
🧹 Nitpick comments (1)
config-mutability-remaining-work.md (1)
5-19: Well-analyzed bug with sound recommendation. The three fix options are clearly presented with appropriate trade-offs. Option A (mark consumed) is the right choice for preserving audit trails while stopping repeated writes.
Optional consideration: Option A introduces an "applied" state in the chunk lifecycle. Consider documenting whether applied chunks should be retained indefinitely for audit purposes or eventually archived/purged to prevent table growth.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config-mutability-remaining-work.md` around lines 5 - 19, apply_approved_config_chunks in crates/openshell-sandbox/src/lib.rs currently rewrites config-overrides.json5 every poll because approved chunks remain approved; implement Option A: after successfully merging and writing the overrides file in apply_approved_config_chunks, call a new gRPC method (e.g., AcknowledgeConfigChunks or AcknowledgeDraftPolicyChunks) on the gateway service to update the affected draft_policy_chunks rows from status "approved" to "applied" (or equivalent), ensure the RPC accepts the chunk IDs returned by the original query, only invoke it after a successful file write, surface/log and handle RPC errors, and add the corresponding server-side RPC handler to update the database so future polls won't return those chunks.
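Option A's lifecycle can be sketched as a guarded status transition — the status names and shape below are assumptions based on the prompt above, not the gateway's actual schema:

```javascript
// Hypothetical sketch of Option A: an approved chunk is applied once,
// then acknowledged as "applied" so later polls skip it. The allowed
// transitions form a one-way pending -> approved -> applied pipeline.
const VALID = { pending: ["approved"], approved: ["applied"], applied: [] };

function transition(chunk, next) {
  if (!VALID[chunk.status]?.includes(next)) {
    throw new Error(`invalid transition ${chunk.status} -> ${next}`);
  }
  return { ...chunk, status: next };
}

let chunk = { id: "cfg-1", status: "approved" };
chunk = transition(chunk, "applied"); // after a successful overrides write

// An applied chunk can no longer be re-applied, which is what stops
// the repeated rewrites on every poll.
let rejected = false;
try {
  transition(chunk, "applied");
} catch {
  rejected = true;
}
```

Whether "applied" rows are retained indefinitely or eventually purged is the open audit-retention question the nitpick raises.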
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: aeef804b-3129-4184-a590-9303cc9ef431
📒 Files selected for processing (3)
- bin/lib/onboard.js
- bin/nemoclaw.js
- config-mutability-remaining-work.md
🚧 Files skipped from review as they are similar to previous changes (1)
- bin/nemoclaw.js
```js
// Each entry looks like:
//   agents.defaults.model.primary:
//     default: "inference/nvidia/nemotron-3-super-120b-a12b"
const entryPattern = /^ {2}([\w.]+):\s*\n\s+default:\s*(.*)/gm;
```
Regex key pattern excludes hyphens.
The pattern [\w.]+ only matches word characters and dots. YAML keys can include hyphens (e.g., my-feature.enabled), which this regex would skip.
🔧 Suggested fix

```diff
-  const entryPattern = /^ {2}([\w.]+):\s*\n\s+default:\s*(.*)/gm;
+  const entryPattern = /^ {2}([\w.\-]+):\s*\n\s+default:\s*(.*)/gm;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` at line 1955, The regex assigned to entryPattern
currently uses [\w.]+ which omits hyphens; update the pattern to allow hyphens
in YAML keys by changing the character class to include '-' (for example use
[\w.-]+ or [A-Za-z0-9_.-]+) so entryPattern (const entryPattern = /^
{2}([...\n])/gm) will match keys like "my-feature.enabled"; keep the rest of the
pattern the same and ensure the hyphen is either first or escaped inside the
character class to avoid range semantics.
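The range-semantics point is easy to check directly — with the hyphen placed last in the class it is a literal, and the widened pattern matches hyphenated keys that the original silently skips:

```javascript
// Original pattern: [\w.]+ stops at a hyphen, so hyphenated keys never match.
const narrow = /^ {2}([\w.]+):\s*\n\s+default:\s*(.*)/gm;
// Widened pattern: hyphen last in the class cannot form a range, so it is literal.
const wide = /^ {2}([\w.-]+):\s*\n\s+default:\s*(.*)/gm;

// A two-space-indented YAML entry with a hyphenated key.
const yaml = '  my-feature.enabled:\n    default: "true"';

const narrowMatch = narrow.exec(yaml); // null — the hyphen breaks the match
const wideMatch = wide.exec(yaml);     // captures the full key and its default
```

Placing the hyphen first (`[-\w.]`) or escaping it (`[\w.\-]`) works equally well; only a hyphen between two characters is treated as a range.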
1. Stop repeated config chunk rewrites — apply_approved_config_chunks
now tracks applied chunk IDs in a HashSet and skips already-applied
chunks on subsequent poll cycles.
2. TUI config detail view — pressing Enter on a CONFIG chunk now shows
the config key and pretty-printed proposed JSON override instead of
the empty network rule detail view.
3. Demo uses ui.assistant.name — POC script and E2E test exercise a
non-inference user-preference field ("Lew Alcindor" → "Kareem
Abdul-Jabbar"). POC script is now fully self-contained: builds
everything from source, creates sandbox, then runs interactive demo.
4. Dockerfile bakes in ui.assistant.name: "Lew Alcindor" as the
baseline display name for the demo scenario.
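The dedup in item 1 can be sketched in JavaScript (the actual implementation is Rust using a `HashSet<String>`; the field names and chunk shape here are illustrative):

```javascript
// Track already-applied chunk IDs so a poll cycle skips chunks it has
// processed before, instead of rewriting the overrides file every 10s.
const appliedChunkIds = new Set();

function chunksToApply(approvedChunks) {
  const fresh = [];
  for (const chunk of approvedChunks) {
    if (appliedChunkIds.has(chunk.id)) continue; // applied on a previous poll
    appliedChunkIds.add(chunk.id);
    fresh.push(chunk);
  }
  return fresh;
}

// The first poll applies the chunk; the second poll sees it as already applied.
const poll1 = chunksToApply([{ id: "cfg-1", key: "ui.assistant.name" }]);
const poll2 = chunksToApply([{ id: "cfg-1", key: "ui.assistant.name" }]);
```

Note this in-memory set resets on process restart, which is why the remaining-work doc discusses acknowledging applied chunks server-side as the durable fix.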
…into feat/runtime-config-mutability
Summary
Runtime config mutability — allows changing OpenClaw config fields (model, agent preferences, display name) without sandbox recreation.
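A minimal sketch of the merge behavior the shim is described as performing — strip `gateway.*`, then deep-merge the remaining overrides onto the frozen config. The function names and data shapes are assumptions for illustration, not the shim's actual code:

```javascript
// Drop the always-immutable gateway subtree from user overrides, then
// deep-merge what remains onto the frozen config: objects merge
// recursively, leaves from the overrides win.
function deepMergeOverrides(frozen, overrides) {
  const { gateway: _ignored, ...safe } = overrides; // gateway.* is immutable
  return merge(frozen, safe);
}

function merge(base, patch) {
  const out = { ...base };
  for (const [key, value] of Object.entries(patch)) {
    out[key] =
      value && typeof value === "object" && !Array.isArray(value)
        ? merge(base[key] ?? {}, value)
        : value;
  }
  return out;
}

const frozen = {
  gateway: { auth: { token: "secret" } },
  ui: { assistant: { name: "Lew Alcindor" } },
};
const merged = deepMergeOverrides(frozen, {
  gateway: { auth: { token: "evil" } }, // stripped, never reaches the config
  ui: { assistant: { name: "Kareem Abdul-Jabbar" } },
});
```

The stripping happens even if a `gateway.*` request slips past the CLI and scanner checks, which is the defense-in-depth point made below.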
- `patches/apply-openclaw-shim.js` injects `_nemoClawMergeOverrides()` into all 6 dist entry points. Reads `OPENCLAW_CONFIG_OVERRIDES_FILE`, deep-merges onto frozen `openclaw.json`, strips `gateway.*` for security.
- OpenShell minimum bumped to v0.0.15 (removes 35 deprecated `tls: terminate` annotations), security hardening SEC-002–010, version check enforced in onboard preflight.
- `nemoclaw <sandbox> config-set --key <path> --value <value>` and `config-get` via `openshell sandbox upload/download`.
- `patches/openshell-config-approval.patch` extends PolicyChunk approval flow for `config:` prefixed chunks (TUI shows CONFIG badge with detail view, server skips network merge on approval, sandbox proxy scans for request files and applies approved overrides). Includes dedup logic to prevent repeated writes after approval.

What changed in final commits
- `apply_approved_config_chunks` now tracks applied chunk IDs in a `HashSet<String>` — stops the 10-second rewrite loop after approval.
- `ui.assistant.name` ("Lew Alcindor" → "Kareem Abdul-Jabbar") — a non-inference user-preference field that demonstrates the core value prop.
- `scripts/poc-round-trip-test.sh` now builds everything from source (Docker, OpenShell, CLI, sandbox) with zero assumed prerequisites.

How to reproduce the demo
Prerequisites
- `mise` installed (`curl https://mise.run | sh`)
- Rust toolchain (`rustup`)
- `bash` 4+ (`brew install bash` on macOS)
- `NVIDIA_API_KEY` set
- `GITHUB_TOKEN` set (or `gh auth login`)

Full build + demo (from scratch)
The script handles everything: starts Docker, cleans state, clones OpenShell v0.0.15, applies patches, compiles from Rust source, deploys k3s cluster, builds CLI, creates sandbox.
```sh
# Terminal 1 — run the bootstrap (takes ~15 min on first run)
bash scripts/poc-round-trip-test.sh
```

When bootstrap completes, it prints instructions. Open a second terminal:
```sh
# Terminal 2 — TUI (leave running)
openshell term -g openshell-source
```

Back in Terminal 1, press Enter to start the interactive demo. The script walks through:
- `openclaw.json` with `ui.assistant.name: "Lew Alcindor"`
- `[a]` to approve
- `config-get` shows the new name, and the OpenClaw chat UI reflects "Kareem Abdul-Jabbar"
- `gateway.auth.token` change request is blocked at the scanner level
- `config-set` works without TUI approval (operator path)
- `gateway.*` is also blocked from the host side
If the sandbox is already running from a previous build:
Verified
gateway.*blocked at CLI level + stripped by shim (defense in depth)Test plan
- `nemoclaw onboard` with patched OpenShell cluster
- `nemoclaw <sandbox> config-set --key ui.assistant.name --value "Kareem Abdul-Jabbar"`
- `config-get`
- `gateway.auth.token` change is refused