Public-safe OpenClaw starter for AI SaaS work on NVIDIA-hosted models.
Maintained by Autopilot AI Tech
Website: autopilotaitech.com
If this starter saves you time, leave a tip here:
Special thanks to Peter Steinberger and NVIDIA AI for the technology, ideas, and open ecosystem work that helped make this starter possible.
What this repo does:
- runs OpenClaw in Docker
- uses NVIDIA's hosted API models instead of local GPU inference
- keeps the entire repo mounted into the container as a single shared mount at /share
- includes local skills for audit-first wiring, architecture, and enterprise SaaS delivery
- includes starter prompts for auditing, planning, fixing, and architecture work
This starter is built around the newer NVIDIA-hosted coding stack rather than a generic OpenClaw setup.
What was incorporated:
- NVIDIA API Catalog / build.nvidia.com as the model backend
- nvidia/nemotron-3-super-120b-a12b as the default deep reasoning model
- moonshotai/kimi-k2-instruct-0905 as the faster implementation model
- openai/gpt-oss-20b as a lightweight fallback and repo-ops model
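Models on build.nvidia.com are typically reached through the API Catalog's OpenAI-compatible chat endpoint. A minimal Python sketch of building such a request (the `integrate.api.nvidia.com` base URL is an assumption based on NVIDIA's public integration API; the model id is the starter's default listed above):

```python
import json
import urllib.request

# Assumed OpenAI-compatible base URL for NVIDIA's hosted API Catalog models.
NVIDIA_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for a hosted model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        f"{NVIDIA_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The starter's default deep-reasoning model; the key placeholder is illustrative.
req = build_chat_request(
    "nvidia/nemotron-3-super-120b-a12b", "Audit this route handler.", "nvapi-..."
)
```

Sending the request with `urllib.request.urlopen(req)` (or any OpenAI-compatible client pointed at the same base URL) is left out here, since the bootstrap script wires this up for you.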
Why this matters:
- no local GPU hosting burden
- free prototyping path through NVIDIA's hosted API access
- stronger fit for coding, planning, agentic execution, and multi-step SaaS work
- better separation between deep architecture tasks and faster implementation loops
How it is wired:
- the OpenClaw bootstrap script configures the NVIDIA provider automatically
- deep-coder defaults to Nemotron 3 Super
- fast-coder uses Kimi K2
- repo-ops uses GPT-OSS 20B
- all three are available through the same NVIDIA-backed OpenClaw config
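The role-to-model wiring above can be pictured as a simple lookup. This is an illustrative sketch only (the role and model names come from this starter; OpenClaw's actual config format is not shown here):

```python
# Role names and model ids as wired by this starter's bootstrap script.
ROLE_MODELS = {
    "deep-coder": "nvidia/nemotron-3-super-120b-a12b",  # deep reasoning default
    "fast-coder": "moonshotai/kimi-k2-instruct-0905",   # faster implementation loops
    "repo-ops": "openai/gpt-oss-20b",                   # lightweight fallback / repo ops
}

def model_for(role: str) -> str:
    """Resolve a starter role to its NVIDIA-backed model id, falling back to repo-ops."""
    return ROLE_MODELS.get(role, ROLE_MODELS["repo-ops"])
```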
- for NVIDIA-hosted NVIDIA models, the starter keeps the provider-side model id as nvidia/nemotron-3-super-120b-a12b but maps the OpenClaw-selected ref to nvidia/nvidia/nemotron-3-super-120b-a12b so Nemotron does not 404 and fall back to Kimi
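The double-prefix workaround amounts to stripping one provider prefix from the OpenClaw-selected ref to recover the provider-side model id. A hypothetical sketch of that mapping (OpenClaw's real resolution logic may differ):

```python
def provider_model_id(openclaw_ref: str, provider: str = "nvidia") -> str:
    """Strip one provider prefix from a double-prefixed OpenClaw ref.

    nvidia/nvidia/nemotron-3-super-120b-a12b -> nvidia/nemotron-3-super-120b-a12b
    Refs without the doubled prefix pass through unchanged.
    """
    prefix = provider + "/"
    if openclaw_ref.startswith(prefix + prefix):
        return openclaw_ref[len(prefix):]
    return openclaw_ref
```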
What this repo does not include:
- personal API keys
- your local drive letters
- your existing project repos
- private tokens or OpenClaw state
This repo is meant to be public-safe.
- .env is local-only and must not be committed
- upstream/ is fetched during setup and is not committed
- vendor/superpowers and vendor/context-hub are fetched during setup and are not committed
- projects/* is ignored so your actual app repos stay local
- OPENCLAW_LOCAL_CONTROL_UI_RELAXED=true is the local-dev default so the Control UI works on localhost without pairing. Do not use that setting on a VPS/public deployment without replacing it with proper auth hardening.
Public defaults live in .env.example.
Local machine overrides belong in .env. For example:
OPENCLAW_IMAGE=<your-prebuilt-openclaw-image>
That local override is not the public default and should not be documented as required for other users.
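The layering of public defaults under local overrides can be sketched as a simple merge: values from .env win over values from .env.example. This parser is a deliberate simplification for illustration (real dotenv loaders also handle quoting, export prefixes, and multiline values):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def effective_env(example_text: str, local_text: str) -> dict[str, str]:
    """Public defaults from .env.example, overridden by the local-only .env."""
    return {**parse_env(example_text), **parse_env(local_text)}

# Hypothetical contents: a public default plus a local-only image override.
cfg = effective_env(
    "APP_PREVIEW_PORT=4310\n# public defaults\n",
    "OPENCLAW_IMAGE=my-prebuilt-image\n",
)
```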
The stack:
- OpenClaw
- Docker Compose
- NVIDIA API Catalog / build.nvidia.com
- Nemotron 3 Super
- Kimi K2 Instruct
- GPT-OSS 20B
- optional Cloudflare tunnel for preview apps
- ClawScope operator dashboard for background activity and stall detection
- optional Linuxbrew for skill dependencies
This starter now defaults to the safest baseline we could apply without breaking normal local use:
- token auth enabled on the OpenClaw gateway
- auth rate limiting enabled
- no allowInsecureAuth
- no dangerouslyDisableDeviceAuth in the starter bootstrap
- Docker named volumes for OpenClaw state instead of a world-writable Windows host mount
The live stack used during development may still carry temporary local-only break-glass settings for browser pairing. Those are not the public default in this starter.
This repo is a local development starter, not a hardened internet-facing deployment.
Do not copy this setup to a VPS and expose it to the public internet without hardening it first.
Minimum things to fix before any public exposure:
- do not use break-glass Control UI auth flags
- use real HTTPS and a trusted reverse proxy
- lock down gateway auth and rate limiting
- review model/tool exposure for untrusted input
- harden filesystem permissions for OpenClaw state
- restrict origin/access instead of assuming localhost-style trust
If you want a VPS-safe deployment, treat this repo as a starting point only and harden it before exposing it.
Repo layout:
- projects/ -> clone or copy the app you want OpenClaw to work on
- prompts/ -> starter prompts for common AI SaaS workflows
- skills/ -> local workspace skills included in this starter
- vendor/ -> external skill repos cloned by the helper script
- scripts/ -> setup and run helpers
Clone the repo and move into it:
git clone https://github.com/autopilotaitech/clawforge-saas-starter.git
Set-Location .\clawforge-saas-starter
Prepare local files:
.\scripts\prepare-host.ps1
Edit .env and set NVIDIA_API_KEY.
Get your API key from the NVIDIA API Catalog (build.nvidia.com).
The starter pins OpenClaw source to a stable upstream tag by default:
OPENCLAW_GIT_REF=v2026.3.13-1
Override that only if you intentionally want to test a newer upstream snapshot.
Fetch OpenClaw source and the extra skill repos:
.\scripts\fetch-openclaw.ps1
.\scripts\fetch-skill-repos.ps1
Build the local image:
.\scripts\build-image.ps1
Linuxbrew is installed automatically during bootstrap so skill installers like brew are available without a separate manual step.
If you ever need to repair or refresh the Linuxbrew install manually:
.\scripts\install-homebrew.ps1
Bootstrap the OpenClaw config:
.\scripts\bootstrap-openclaw.ps1
Start the stack:
.\scripts\start-stack.ps1
Start only the monitor:
.\scripts\start-monitor.ps1
Attach the monitor to an already-running OpenClaw gateway without starting the starter stack:
.\scripts\start-live-monitor.ps1
Caveat:
- if there is no local .openclaw state directory and no detectable gateway container, you must pass -StateDir or -Container explicitly
Show the tokenized dashboard URL:
.\scripts\dashboard-url.ps1
Show the ClawScope operator URL:
.\scripts\monitor-url.ps1
Keep the starter repo mounted as /share.
Put real app repos under:
projects/<your-repo>
Examples:
- projects/arcline.pro-master
- projects/my-saas-app
That keeps everything visible on the host while still using one mount only.
Generated apps should listen on APP_PREVIEW_PORT, which defaults to 4310.
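A generated app running inside the container can resolve that port with an ordinary environment lookup. A small sketch (env var name and default come from this starter):

```python
import os

def preview_port(environ=os.environ) -> int:
    """Resolve the preview port, defaulting to 4310 when APP_PREVIEW_PORT is unset."""
    return int(environ.get("APP_PREVIEW_PORT", "4310"))
```

A generated app would then bind its HTTP server to `0.0.0.0:preview_port()` so the tunnel scripts below can reach it.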
Start a free Quick Tunnel:
.\scripts\start-preview-tunnel.ps1
.\scripts\preview-url.ps1
For a stable hostname, set CLOUDFLARE_TUNNEL_TOKEN in .env and use:
.\scripts\start-named-tunnel.ps1
ClawScope is a lightweight command-center layer included in this starter.
It polls OpenClaw's real CLI surfaces and gives you:
- gateway health
- active sessions
- stale-session detection
- recent error signal
- event timeline from recent gateway logs
- local/public preview URL visibility
It is intentionally read-only. It does not drive OpenClaw or mutate state.
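Stale-session detection of this kind boils down to a pure check over session activity timestamps. A hypothetical sketch (the session shape and 15-minute threshold are assumptions for illustration; ClawScope's real heuristics are not documented here):

```python
from dataclasses import dataclass

@dataclass
class Session:
    id: str
    last_activity: float  # epoch seconds of the last observed activity

def stale_sessions(sessions, now: float, threshold_s: float = 900.0):
    """Return sessions with no activity for longer than the threshold.

    Read-only, like ClawScope: it reports, it does not mutate state.
    """
    return [s for s in sessions if now - s.last_activity > threshold_s]

# One session idle for 1000 s, one idle for 50 s, against a 900 s threshold.
stale = stale_sessions([Session("a", 1000.0), Session("b", 1950.0)], now=2000.0)
```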
If you already have an OpenClaw gateway running elsewhere on the same machine and do not want to restart it, use start-live-monitor.ps1. That mode reads from the existing openclaw-gateway Docker container and serves ClawScope on http://127.0.0.1:18880/.
Local skills included in this starter:
- task-execution-guardrails
- autonomous-saas-delivery
- wiring-audit
- principal-architect
External skill repos expected by bootstrap:
- vendor/superpowers
- vendor/context-hub
They are fetched by setup scripts and intentionally not committed into the public repo by default.
Starter prompts:
- prompts/00-openclaw-workflow.md
- prompts/01-beta-audit.md
- prompts/02-implementation-plan.md
- prompts/03-single-fix.md
- prompts/04-wiring-audit.md
- prompts/05-principal-architecture.md
- prompts/01-route-audit-batch.md
- prompts/02-consolidate-findings.md
- prompts/03-implement-one-fix.md
- prompts/04-post-fix-review.md
Useful commands:
.\scripts\start-shell.ps1
.\scripts\attach-shell.ps1
.\scripts\start-gateway.ps1
.\scripts\models-status.ps1
.\scripts\stop-stack.ps1
docker compose ps
docker compose logs -f openclaw-gateway
- GitHub org: github.com/autopilotaitech
- Website: autopilotaitech.com
- Support: buy.stripe.com/cNi28q3bZ4Vf3gpdXoenS03
This project is released under the MIT License. See LICENSE.
