From 40d505085a0c74c60c7fc1a7ce800a362163ffb2 Mon Sep 17 00:00:00 2001
From: peteryuqin
Date: Thu, 26 Mar 2026 10:29:18 -0400
Subject: [PATCH 1/2] docs: explain Linux Ollama bind requirement

---
 README.md                            | 2 +-
 docs/reference/inference-profiles.md | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5c4b5cadf..d28c4a870 100644
--- a/README.md
+++ b/README.md
@@ -220,7 +220,7 @@ During onboarding, NemoClaw validates the selected provider and model before it
 
 Credentials stay on the host in `~/.nemoclaw/credentials.json`. The sandbox only sees the routed `inference.local` endpoint, not your raw provider key.
 
-Local Ollama is supported in the standard onboarding flow. Local vLLM remains experimental, and local host-routed inference on macOS still depends on OpenShell host-routing support in addition to the local service itself being reachable on the host.
+Local Ollama is supported in the standard onboarding flow. On Linux Docker hosts, Ollama must listen on `0.0.0.0:11434` so sandboxes can reach `http://host.openshell.internal:11434`; a loopback-only `127.0.0.1:11434` bind will fail validation. Local vLLM remains experimental, and local host-routed inference on macOS still depends on OpenShell host-routing support in addition to the local service itself being reachable on the host.
 
 ---
 
diff --git a/docs/reference/inference-profiles.md b/docs/reference/inference-profiles.md
index f1c1a4f49..08517501e 100644
--- a/docs/reference/inference-profiles.md
+++ b/docs/reference/inference-profiles.md
@@ -76,6 +76,14 @@ Ollama gets additional onboarding help:
 - it warms the model
 - it validates the model before continuing
 
+On Linux hosts that run NemoClaw with Docker, the sandbox reaches Ollama through `http://host.openshell.internal:11434`, not the host shell's `localhost` socket. If Ollama is already running, make sure it listens on `0.0.0.0:11434` instead of `127.0.0.1:11434`, for example:
+
+```bash
+OLLAMA_HOST=0.0.0.0:11434 ollama serve
+```
+
+If Ollama only binds loopback, NemoClaw can detect it on the host but the sandbox-side validation step will fail because containers cannot reach it.
+
 ## Experimental Local Providers
 
 The following local providers require `NEMOCLAW_EXPERIMENTAL=1`:

From 644f52c12ea8b43c743a8f354e482b11c1d2238b Mon Sep 17 00:00:00 2001
From: peteryuqin
Date: Thu, 26 Mar 2026 16:09:41 -0400
Subject: [PATCH 2/2] docs: align Ollama example formatting

---
 docs/reference/inference-profiles.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/reference/inference-profiles.md b/docs/reference/inference-profiles.md
index 08517501e..1b6b32d9d 100644
--- a/docs/reference/inference-profiles.md
+++ b/docs/reference/inference-profiles.md
@@ -76,13 +76,15 @@ Ollama gets additional onboarding help:
 - it warms the model
 - it validates the model before continuing
 
-On Linux hosts that run NemoClaw with Docker, the sandbox reaches Ollama through `http://host.openshell.internal:11434`, not the host shell's `localhost` socket. If Ollama is already running, make sure it listens on `0.0.0.0:11434` instead of `127.0.0.1:11434`, for example:
+On Linux hosts that run NemoClaw with Docker, the sandbox reaches Ollama through `http://host.openshell.internal:11434`, not the host shell's `localhost` socket.
+If Ollama is already running, make sure it listens on `0.0.0.0:11434` instead of `127.0.0.1:11434`.
+For example:
 
-```bash
-OLLAMA_HOST=0.0.0.0:11434 ollama serve
+```console
+$ OLLAMA_HOST=0.0.0.0:11434 ollama serve
 ```
 
-If Ollama only binds loopback, NemoClaw can detect it on the host but the sandbox-side validation step will fail because containers cannot reach it.
+If Ollama only binds loopback, NemoClaw can detect it on the host but the sandbox-side validation step fails because containers cannot reach it.
 
 ## Experimental Local Providers
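
Note on the bind requirement these patches document: the difference between a `127.0.0.1` and a `0.0.0.0` bind can be demonstrated outside NemoClaw. The sketch below is a standalone illustration, not NemoClaw or Ollama code, and it assumes a Linux host, where the entire `127/8` block is loopback: a listener bound specifically to `127.0.0.1` refuses connections addressed to any other local address (here `127.0.0.2` stands in for the container-visible host address), while a wildcard `0.0.0.0` bind accepts them.

```python
import socket

def reachable_from_other_addr(bind_addr: str) -> bool:
    """Bind a throwaway listener to bind_addr, then try to connect to it
    via a *different* local address (127.0.0.2), mimicking a sandbox that
    cannot use the host's own loopback address."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))          # port 0: let the kernel pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    try:
        # On Linux, 127.0.0.2 is also loopback, but it only matches a
        # listener bound to 0.0.0.0 (or to 127.0.0.2 itself).
        with socket.create_connection(("127.0.0.2", port), timeout=1):
            return True
    except OSError:                   # loopback-only bind: connection refused
        return False
    finally:
        srv.close()

print(reachable_from_other_addr("127.0.0.1"))  # False: like OLLAMA_HOST unset
print(reachable_from_other_addr("0.0.0.0"))    # True: like OLLAMA_HOST=0.0.0.0:11434
```

This mirrors the sandbox's situation: `host.openshell.internal` resolves to a non-loopback address of the host, so a `127.0.0.1:11434` bind refuses the sandbox's connection even though Ollama is detectable on the host itself.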