
fix: offer Ollama install fallback when cloud API is unavailable #380

Open
futhgar wants to merge 3 commits into NVIDIA:main from futhgar:fix/skip-api-key-local-inference

Conversation

@futhgar futhgar commented Mar 19, 2026

Summary

  • Always show an "Install Ollama" option during onboarding on Linux (not just macOS), so users have a local inference fallback when build.nvidia.com is overloaded or down
  • Use the official curl -fsSL https://ollama.com/install.sh | sh installer on Linux (Homebrew on macOS)
  • Add install-ollama as a valid NEMOCLAW_PROVIDER value for non-interactive/CI mode
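The platform split described above can be sketched as follows. This is illustrative only: the two command strings come from this PR's description, but the helper name and structure are assumptions, not the actual code in bin/lib/onboard.js.

```javascript
// Sketch: choose the Ollama install command per platform. The two
// command strings are the ones named in this PR; the function itself
// is illustrative, not the PR's actual code.
function ollamaInstallCommand(platform = process.platform) {
  if (platform === "darwin") {
    return "brew install ollama"; // Homebrew on macOS
  }
  if (platform === "linux") {
    return "curl -fsSL https://ollama.com/install.sh | sh"; // official installer
  }
  throw new Error(`No Ollama install path for platform: ${platform}`);
}
```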

Problem

When Ollama is not installed and the NVIDIA API server is unavailable, nemoclaw onboard presents only the cloud option on Linux. Users can't get an API key and onboarding fails with no fallback path (issue #301).

Fix

The install-ollama option was already implemented for macOS but gated behind process.platform === "darwin". This PR extends it to Linux with the appropriate installer command. The change is minimal — no restructuring of the onboard flow.

Fixes #301

Test plan

  • New test: verifies Install Ollama (Linux) option appears when Ollama is not installed on Linux
  • New test: verifies the curl installer (not Homebrew) is invoked on Linux
  • Existing onboard-selection.test.js tests still pass
  • All onboard/credentials/inference-config tests pass
  • npm test shows no new failures (pre-existing failures in runtime-shell.test.js and install-preflight.test.js are unrelated)

Summary by CodeRabbit

  • New Features

    • Platform-specific Ollama installation during setup: macOS uses Homebrew, other platforms use the official installer script.
    • The Ollama install option is offered on macOS and Linux when Ollama is not present or running.
  • Bug Fixes

    • Non-interactive provider selection accepts the install option and silently falls back to an existing local Ollama provider if available.
  • Tests

    • Added a test verifying the Linux install option and installer behavior.


coderabbitai bot commented Mar 19, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ea2ae4d3-cbbd-45f3-8a79-f356bd4d493e

📥 Commits

Reviewing files that changed from the base of the PR and between d40b6fc and 8e84410.

📒 Files selected for processing (1)
  • test/onboard-selection.test.js
✅ Files skipped from review due to trivial changes (1)
  • test/onboard-selection.test.js

📝 Walkthrough

Adds an install-ollama non-interactive provider option, offers it on macOS and Linux when Ollama is missing, implements platform-specific install commands, falls back to an existing Ollama if it appears, and adds a Linux onboarding test validating the curl installer flow.

Changes

Onboarding Logic: bin/lib/onboard.js
Adds install-ollama as a valid NEMOCLAW_PROVIDER; offers install-ollama on darwin and linux when Ollama is absent; if Ollama becomes available, selects ollama silently; uses brew install ollama on macOS and `curl -fsSL https://ollama.com/install.sh | sh` on Linux.

Test Coverage: test/onboard-selection.test.js
New Vitest test that spawns bin/lib/onboard.js, forces process.platform = 'linux', stubs detection to simulate no local Ollama/vLLM, selects the install-ollama option, and asserts the UI, the final provider (ollama-local), and that the curl installer (not brew) is used.
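Forcing process.platform in a test, as the walkthrough describes, is a common Node trick; one way to sketch it (the real test's structure is not shown in this thread, so the helper below is an assumption):

```javascript
// Sketch: temporarily override process.platform for a test, then
// restore the original descriptor. This works because `platform`
// is a configurable own property of `process`.
function withPlatform(platform, fn) {
  const original = Object.getOwnPropertyDescriptor(process, "platform");
  Object.defineProperty(process, "platform", { value: platform });
  try {
    return fn();
  } finally {
    Object.defineProperty(process, "platform", original);
  }
}
```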

Sequence Diagram

sequenceDiagram
    participant User
    participant Onboard as Onboarding
    participant Platform as PlatformDetector
    participant Installer as PackageManager
    participant Ollama

    User->>Onboard: start onboarding
    Onboard->>Onboard: check if ollama installed & running
    alt not available
        Onboard->>User: present "Install Ollama" option
        User->>Onboard: choose install
        Onboard->>Platform: detect platform
        Platform-->>Onboard: darwin / linux
        alt darwin
            Onboard->>Installer: run "brew install ollama"
        else linux
            Onboard->>Installer: run "curl -fsSL https://ollama.com/install.sh | sh"
        end
        Installer->>Ollama: install
        Onboard->>Ollama: start "OLLAMA_HOST=0.0.0.0:11434" + "ollama serve"
    else available
        Onboard->>Onboard: select existing "ollama" provider
    end
    Onboard->>User: continue to model selection

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰
I hopped through code with eager nose,
When servers fail, a local path grows,
Brew for Mac, a curl for Linux land,
Ollama rises by a rabbit's hand,
Now models hum where I can stand!

🚥 Pre-merge checks | ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
Title Check: ✅ Passed. The PR title 'fix: offer Ollama install fallback when cloud API is unavailable' directly summarizes the main change: adding a local Ollama installation option as a fallback when the cloud API is unavailable, matching the primary objectives.
Linked Issues Check: ✅ Passed. The PR fully addresses the objectives of issue #301 by adding an 'Install Ollama' onboarding option on Linux as a fallback when the NVIDIA cloud API is unavailable, enabling local inference installation without the cloud.
Out of Scope Changes Check: ✅ Passed. All changes are directly scoped to the linked issue #301: the onboard.js modifications add Linux Ollama installation support and the test validates this behavior; no unrelated changes detected.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bin/lib/onboard.js`:
- Around line 607-613: The installer calls that invoke run("brew install
ollama", { ignoreError: true }) and run("curl -fsSL
https://ollama.com/install.sh | sh", { ignoreError: true }) must not swallow
failures; update the onboarding logic so the run calls return and/or throw on
error (remove ignoreError: true) and explicitly check the result or catch
exceptions in the surrounding function (the installer branch in onboard.js) and
on failure log a clear error and abort onboarding (process.exit(1) or rethrow)
before proceeding to switch provider/model to Ollama.
- Around line 502-509: The current options builder omits the "install-ollama"
option when Ollama is present (hasOllama/ollamaRunning), causing
CI/non-interactive runs with NEMOCLAW_PROVIDER=install-ollama to fail; modify
the logic that populates options (the block referencing hasOllama, ollamaRunning
and pushing to options) to special-case the environment variable
NEMOCLAW_PROVIDER === "install-ollama" so the "install-ollama" entry is added
unconditionally when that env var is set, or alternatively relax the
non-interactive provider validation to accept "install-ollama" even if the
option wasn't pushed; update references in the same function that validate the
chosen provider to check process.env.NEMOCLAW_PROVIDER as an allowed provider.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 69e50485-aeb9-43e5-8f18-c3ea67223c15

📥 Commits

Reviewing files that changed from the base of the PR and between 9513eca and feac390.

📒 Files selected for processing (2)
  • bin/lib/onboard.js
  • test/onboard-selection.test.js

@wscurran wscurran added the labels enhancement: feature, Local Models, and priority: medium on Mar 19, 2026
futhgar added 2 commits March 28, 2026 17:09
When Ollama is not installed, the onboard wizard previously only offered
the NVIDIA Cloud API option on Linux. If build.nvidia.com was overloaded
or down, users couldn't obtain an API key and onboarding would fail with
no fallback.

Now, an "Install Ollama" option is always shown on both Linux and macOS
when Ollama is not already present. On Linux, the official curl installer
(ollama.com/install.sh) is used instead of Homebrew. This ensures users
always have a local inference path available regardless of NVIDIA API
server availability.

Also adds install-ollama as a valid NEMOCLAW_PROVIDER value for
non-interactive mode.

Fixes NVIDIA#301

Signed-off-by: Josue Gomez <josue@guatulab.com>
…tall errors

Address CodeRabbit review:
- In non-interactive mode, fall back to the existing 'ollama' option
  when NEMOCLAW_PROVIDER=install-ollama but Ollama is already installed
- Remove ignoreError from Ollama install commands so failures are
  caught immediately instead of silently continuing

Signed-off-by: Josue Gomez <josue@guatulab.com>
Signed-off-by: futhgar <jmaldonado.rosa@gmail.com>
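The non-interactive fallback described in this commit message might be sketched like this; the function name and the options-array shape are assumptions for illustration, not the PR's actual code.

```javascript
// Sketch: non-interactive provider resolution that tolerates
// NEMOCLAW_PROVIDER=install-ollama when Ollama is already installed
// (so the install option was never offered), falling back to the
// existing "ollama" option instead of failing.
function resolveNonInteractiveProvider(requested, options) {
  const keys = options.map((o) => o.key);
  if (
    requested === "install-ollama" &&
    !keys.includes("install-ollama") &&
    keys.includes("ollama")
  ) {
    return "ollama"; // Ollama is already present; skip the install step
  }
  if (!keys.includes(requested)) {
    throw new Error(`Invalid NEMOCLAW_PROVIDER: ${requested}`);
  }
  return requested;
}
```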
@futhgar futhgar force-pushed the fix/skip-api-key-local-inference branch from d94ddea to d40b6fc on March 28, 2026 at 21:09
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

2216-2260: Minor indentation inconsistency at Line 2226.

The sleep(2) call has 8-space indentation instead of 6, breaking alignment with the surrounding code. This doesn't affect functionality but harms readability.

🧹 Fix indentation
       run("OLLAMA_HOST=0.0.0.0:11434 ollama serve > /dev/null 2>&1 &", { ignoreError: true });
-        sleep(2);
+      sleep(2);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 2216 - 2260, In the selected.key ===
"install-ollama" branch adjust the indentation of the sleep(2); line so it
aligns with the surrounding console.log and run calls (same indentation level as
the preceding run("OLLAMA_HOST=...") and console.log lines) — replace the
8-space indent with the 6-space indent used by the block to restore consistent
formatting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/onboard-selection.test.js`:
- Around line 1327-1333: The test is selecting option "2" (OpenAI) instead of
Install Ollama; update the credentials.prompt stub in the test so the first
prompt returns "7" (Install Ollama) instead of "2" to exercise the ollama flow
(reference credentials.prompt and setupNim()), and add a fake curl binary into a
temporary bin and ensure the spawnSync call used by the code under test sees
that bin by updating the spawnSync/env used in the test to include the temp bin
at the front of PATH (reference spawnSync and PATH) so the install-ollama path
can detect curl and proceed.

---

Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 2216-2260: In the selected.key === "install-ollama" branch adjust
the indentation of the sleep(2); line so it aligns with the surrounding
console.log and run calls (same indentation level as the preceding
run("OLLAMA_HOST=...") and console.log lines) — replace the 8-space indent with
the 6-space indent used by the block to restore consistent formatting.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 722fed8a-52cc-48e1-a9c7-c02618cb0873

📥 Commits

Reviewing files that changed from the base of the PR and between d94ddea and d40b6fc.

📒 Files selected for processing (2)
  • bin/lib/onboard.js
  • test/onboard-selection.test.js

…test

The test was selecting option 2 (OpenAI) instead of option 7
(install-ollama). The setupNim() options array puts install-ollama at
position 7 when Ollama is not installed: build, openai, custom,
anthropic, anthropicCompatible, gemini, install-ollama.

Also fix __dirname → import.meta.dirname to match the ES module format
used by all other tests in this file.

Labels

enhancement: feature (requests for new capabilities in NemoClaw)
Local Models (running NemoClaw with local models)
priority: medium (issue that should be addressed in upcoming releases)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Install fails because API server fails and there is no graceful fallback

2 participants