diff --git a/.agents/skills/nemoclaw-configure-inference/SKILL.md b/.agents/skills/nemoclaw-configure-inference/SKILL.md deleted file mode 100644 index 4e971d1af..000000000 --- a/.agents/skills/nemoclaw-configure-inference/SKILL.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -name: "nemoclaw-configure-inference" -description: "Lists all inference providers offered during NemoClaw onboarding. Use when explaining which providers are available, what the onboard wizard presents, or how inference routing works. Changes the active inference model without restarting the sandbox. Use when switching inference providers, changing the model runtime, or reconfiguring inference routing. Connects NemoClaw to a local inference server. Use when setting up Ollama, vLLM, TensorRT-LLM, NIM, or any OpenAI-compatible local model server with NemoClaw." ---- - -# NemoClaw Configure Inference - -Lists all inference providers offered during NemoClaw onboarding. Use when explaining which providers are available, what the onboard wizard presents, or how inference routing works. - -## Context - -NemoClaw supports multiple inference providers. -During onboarding, the `nemoclaw onboard` wizard presents a numbered list of providers to choose from. -Your selection determines where the agent's inference traffic is routed. - -## How Inference Routing Works - -The agent inside the sandbox talks to `inference.local`. -It never connects to a provider directly. -OpenShell intercepts inference traffic on the host and forwards it to the provider you selected. - -Provider credentials stay on the host. -The sandbox does not receive your API key. - -## Provider Options - -The onboard wizard presents the following provider options by default. -The first six are always available. -Ollama appears when it is installed or running on the host. 
- -| Option | Description | Curated models | -|--------|-------------|----------------| -| NVIDIA Endpoints | Routes to models hosted on [build.nvidia.com](https://build.nvidia.com). You can also enter any model ID from the catalog. Set `NVIDIA_API_KEY`. | Nemotron 3 Super 120B, Kimi K2.5, GLM-5, MiniMax M2.5, GPT-OSS 120B | -| OpenAI | Routes to the OpenAI API. Set `OPENAI_API_KEY`. | `gpt-5.4`, `gpt-5.4-mini`, `gpt-5.4-nano`, `gpt-5.4-pro-2026-03-05` | -| Other OpenAI-compatible endpoint | Routes to any server that implements `/v1/chat/completions`. If the endpoint also supports `/responses` with OpenClaw-style tool calling, NemoClaw can use that path; otherwise it falls back to `/chat/completions`. The wizard prompts for a base URL and model name. Works with OpenRouter, LocalAI, llama.cpp, or any compatible proxy. Set `COMPATIBLE_API_KEY`. | You provide the model name. | -| Anthropic | Routes to the Anthropic Messages API. Set `ANTHROPIC_API_KEY`. | `claude-sonnet-4-6`, `claude-haiku-4-5`, `claude-opus-4-6` | -| Other Anthropic-compatible endpoint | Routes to any server that implements the Anthropic Messages API (`/v1/messages`). The wizard prompts for a base URL and model name. Set `COMPATIBLE_ANTHROPIC_API_KEY`. | You provide the model name. | -| Google Gemini | Routes to Google's OpenAI-compatible endpoint. NemoClaw prefers `/responses` only when the endpoint proves it can handle tool calling in a way OpenClaw uses; otherwise it falls back to `/chat/completions`. Set `GEMINI_API_KEY`. | `gemini-3.1-pro-preview`, `gemini-3.1-flash-lite-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite` | -| Local Ollama | Routes to a local Ollama instance on `localhost:11434`. NemoClaw detects installed models, offers starter models if none are present, pulls and warms the selected model, and validates it. | Selected during onboarding. 
For more information, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill). | - -## Experimental Options - -The following local inference options require `NEMOCLAW_EXPERIMENTAL=1` and, when prerequisites are met, appear in the onboarding selection list. - -| Option | Condition | Notes | -|--------|-----------|-------| -| Local NVIDIA NIM | NIM-capable GPU detected | Pulls and manages a NIM container. | -| Local vLLM | vLLM running on `localhost:8000` | Auto-detects the loaded model. | - -For setup instructions, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill). - -## Validation - -NemoClaw validates the selected provider and model before creating the sandbox. -If validation fails, the wizard returns to provider selection. - -| Provider type | Validation method | -|---|---| -| OpenAI | Tries `/responses` first, then `/chat/completions`. | -| NVIDIA Endpoints | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Google Gemini | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Other OpenAI-compatible endpoint | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Anthropic-compatible | Tries `/v1/messages`. | -| NVIDIA Endpoints (manual model entry) | Validates the model name against the catalog API. | -| Compatible endpoints | Sends a real inference request because many proxies do not expose a `/models` endpoint. For OpenAI-compatible endpoints, the probe includes tool calling before NemoClaw favors `/responses`. | - -## Prerequisites - -- A running NemoClaw sandbox. -- The OpenShell CLI on your `PATH`. -- NemoClaw installed. 
-- A local model server running, or Ollama installed. The NemoClaw onboard wizard can also start Ollama for you. - -Change the active inference model while the sandbox is running. -No restart is required. - -## Step 1: Switch to a Different Model - -Switching happens through the OpenShell inference route. -Use the provider and model that match the upstream you want to use. - -### NVIDIA Endpoints - -```console -$ openshell inference set --provider nvidia-prod --model nvidia/nemotron-3-super-120b-a12b -``` - -### OpenAI - -```console -$ openshell inference set --provider openai-api --model gpt-5.4 -``` - -### Anthropic - -```console -$ openshell inference set --provider anthropic-prod --model claude-sonnet-4-6 -``` - -### Google Gemini - -```console -$ openshell inference set --provider gemini-api --model gemini-2.5-flash -``` - -### Compatible Endpoints - -If you onboarded a custom compatible endpoint, switch models with the provider created for that endpoint: - -```console -$ openshell inference set --provider compatible-endpoint --model -``` - -```console -$ openshell inference set --provider compatible-anthropic-endpoint --model -``` - -If the provider itself needs to change, rerun `nemoclaw onboard`. - -## Step 2: Verify the Active Model - -Run the status command to confirm the change: - -```console -$ nemoclaw status -``` - -Add the `--json` flag for machine-readable output: - -```console -$ nemoclaw status --json -``` - -The output includes the active provider, model, and endpoint. - -## Step 3: Notes - -- The host keeps provider credentials. -- The sandbox continues to use `inference.local`. -- Runtime switching changes the OpenShell route. It does not rewrite your stored credentials. - ---- - -NemoClaw can route inference to a model server running on your machine instead of a cloud API. -This page covers Ollama, compatible-endpoint paths for other servers, and two experimental options for vLLM and NVIDIA NIM. 
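If you switch models from a script, the `--json` status output shown in Step 2 can be checked mechanically. The sketch below is a minimal example; the payload is a hypothetical illustration of the status fields (`provider`, `model`, `endpoint`), not a documented schema:

```shell
# Hypothetical example of `nemoclaw status --json` output; the field names
# below are assumptions for illustration, not a confirmed schema.
status_json='{"provider": "openai-api", "model": "gpt-5.4", "endpoint": "inference.local"}'

# Pull the active model out of the payload with POSIX sed (no jq required).
active_model=$(printf '%s' "$status_json" | sed -n 's/.*"model": *"\([^"]*\)".*/\1/p')

# Fail loudly if the switch did not take effect.
[ "$active_model" = "gpt-5.4" ] || { echo "unexpected model: $active_model" >&2; exit 1; }
echo "active model: $active_model"
```

Run as written against the sample payload, this prints `active model: gpt-5.4`; in a real script you would capture the payload from `nemoclaw status --json` instead.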
- -All approaches use the same `inference.local` routing model. -The agent inside the sandbox never connects to your model server directly. -OpenShell intercepts inference traffic and forwards it to the local endpoint you configure. - -## Step 4: Ollama - -Ollama is the default local inference option. -The onboard wizard detects Ollama automatically when it is installed or running on the host. - -If Ollama is not running, NemoClaw starts it for you. -On macOS, the wizard also offers to install Ollama through Homebrew if it is not present. - -Run the onboard wizard. - -```console -$ nemoclaw onboard -``` - -Select **Local Ollama** from the provider list. -NemoClaw lists installed models or offers starter models if none are installed. -It pulls the selected model, loads it into memory, and validates it before continuing. - -### Linux with Docker - -On Linux hosts that run NemoClaw with Docker, the sandbox reaches Ollama through -`http://host.openshell.internal:11434`, not the host shell's `localhost` socket. -If Ollama is already running, make sure it listens on `0.0.0.0:11434` instead of -`127.0.0.1:11434`. - -```console -$ OLLAMA_HOST=0.0.0.0:11434 ollama serve -``` - -If Ollama only binds loopback, NemoClaw can detect it on the host, but the -sandbox-side validation step fails because containers cannot reach it. - -### Non-Interactive Setup - -```console -$ NEMOCLAW_PROVIDER=ollama \ - NEMOCLAW_MODEL=qwen2.5:14b \ - nemoclaw onboard --non-interactive -``` - -If `NEMOCLAW_MODEL` is not set, NemoClaw selects a default model based on available memory. - -| Variable | Purpose | -|---|---| -| `NEMOCLAW_PROVIDER` | Set to `ollama`. | -| `NEMOCLAW_MODEL` | Ollama model tag to use. Optional. | - -## Step 5: OpenAI-Compatible Server - -This option works with any server that implements `/v1/chat/completions`, including vLLM, TensorRT-LLM, llama.cpp, LocalAI, and others. 
-If the server also supports `/v1/responses`, NemoClaw only favors that path when onboarding can verify tool-calling behavior that matches what OpenClaw actually sends. -Otherwise NemoClaw falls back to `/v1/chat/completions`. - -Start your model server. -The examples below use vLLM, but any OpenAI-compatible server works. - -```console -$ vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000 -``` - -Run the onboard wizard. - -```console -$ nemoclaw onboard -``` - -When the wizard asks you to choose an inference provider, select **Other OpenAI-compatible endpoint**. -Enter the base URL of your local server, for example `http://localhost:8000/v1`. - -The wizard prompts for an API key. -If your server does not require authentication, enter any non-empty string (for example, `dummy`). - -NemoClaw validates the endpoint by sending a test inference request before continuing. -For OpenAI-compatible endpoints, the validation prefers `/responses` only when the probe produces a compatible function or tool call. -Endpoints that return `200 OK` on `/responses` but do not format tool calls the way OpenClaw expects are configured to use `/chat/completions` instead. - -### Non-Interactive Setup - -Set the following environment variables for scripted or CI/CD deployments. - -```console -$ NEMOCLAW_PROVIDER=custom \ - NEMOCLAW_ENDPOINT_URL=http://localhost:8000/v1 \ - NEMOCLAW_MODEL=meta-llama/Llama-3.1-8B-Instruct \ - COMPATIBLE_API_KEY=dummy \ - nemoclaw onboard --non-interactive -``` - -| Variable | Purpose | -|---|---| -| `NEMOCLAW_PROVIDER` | Set to `custom` for an OpenAI-compatible endpoint. | -| `NEMOCLAW_ENDPOINT_URL` | Base URL of the local server. | -| `NEMOCLAW_MODEL` | Model ID as reported by the server. | -| `COMPATIBLE_API_KEY` | API key for the endpoint. Use any non-empty value if authentication is not required. 
| - -## Step 6: Anthropic-Compatible Server - -If your local server implements the Anthropic Messages API (`/v1/messages`), choose **Other Anthropic-compatible endpoint** during onboarding instead. - -```console -$ nemoclaw onboard -``` - -For non-interactive setup, use `NEMOCLAW_PROVIDER=anthropicCompatible` and set `COMPATIBLE_ANTHROPIC_API_KEY`. - -```console -$ NEMOCLAW_PROVIDER=anthropicCompatible \ - NEMOCLAW_ENDPOINT_URL=http://localhost:8080 \ - NEMOCLAW_MODEL=my-model \ - COMPATIBLE_ANTHROPIC_API_KEY=dummy \ - nemoclaw onboard --non-interactive -``` - -## Step 7: vLLM Auto-Detection (Experimental) - -When vLLM is already running on `localhost:8000`, NemoClaw can detect it automatically and query the `/v1/models` endpoint to determine the loaded model. - -Set the experimental flag and run onboard. - -```console -$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard -``` - -Select **Local vLLM [experimental]** from the provider list. -NemoClaw detects the running model and validates the endpoint. - -> **Note:** NemoClaw forces the `chat/completions` API path for vLLM. -> The vLLM `/v1/responses` endpoint does not run the `--tool-call-parser`, so tool calls arrive as raw text. - -### Non-Interactive Setup - -```console -$ NEMOCLAW_EXPERIMENTAL=1 \ - NEMOCLAW_PROVIDER=vllm \ - nemoclaw onboard --non-interactive -``` - -NemoClaw auto-detects the model from the running vLLM instance. -To override the model, set `NEMOCLAW_MODEL`. - -## Step 8: NVIDIA NIM (Experimental) - -NemoClaw can pull, start, and manage a NIM container on hosts with a NIM-capable NVIDIA GPU. - -Set the experimental flag and run onboard. - -```console -$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard -``` - -Select **Local NVIDIA NIM [experimental]** from the provider list. -NemoClaw filters available models by GPU VRAM, pulls the NIM container image, starts it, and waits for it to become healthy before continuing. - -> **Note:** NIM uses vLLM internally. 
-> The same `chat/completions` API path restriction applies. - -### Non-Interactive Setup - -```console -$ NEMOCLAW_EXPERIMENTAL=1 \ - NEMOCLAW_PROVIDER=nim \ - nemoclaw onboard --non-interactive -``` - -To select a specific model, set `NEMOCLAW_MODEL`. - -## Step 9: Verify the Configuration - -After onboarding completes, confirm the active provider and model. - -```console -$ nemoclaw status -``` - -The output shows the provider label (for example, "Local vLLM" or "Other OpenAI-compatible endpoint") and the active model. - -## Step 10: Switch Models at Runtime - -You can change the model without re-running onboard. -Refer to Switch Inference Models (see the `nemoclaw-configure-inference` skill) for the full procedure. - -For compatible endpoints, the command is: - -```console -$ openshell inference set --provider compatible-endpoint --model -``` - -If the provider itself needs to change (for example, switching from vLLM to a cloud API), rerun `nemoclaw onboard`. - -## Related Skills - -- `nemoclaw-get-started` β€” Quickstart for first-time installation diff --git a/.agents/skills/nemoclaw-configure-inference/references/inference-options.md b/.agents/skills/nemoclaw-configure-inference/references/inference-options.md deleted file mode 100644 index 09f131aa6..000000000 --- a/.agents/skills/nemoclaw-configure-inference/references/inference-options.md +++ /dev/null @@ -1,61 +0,0 @@ -# Inference Options - -NemoClaw supports multiple inference providers. -During onboarding, the `nemoclaw onboard` wizard presents a numbered list of providers to choose from. -Your selection determines where the agent's inference traffic is routed. - -## How Inference Routing Works - -The agent inside the sandbox talks to `inference.local`. -It never connects to a provider directly. -OpenShell intercepts inference traffic on the host and forwards it to the provider you selected. - -Provider credentials stay on the host. -The sandbox does not receive your API key. 
- -## Provider Options - -The onboard wizard presents the following provider options by default. -The first six are always available. -Ollama appears when it is installed or running on the host. - -| Option | Description | Curated models | -|--------|-------------|----------------| -| NVIDIA Endpoints | Routes to models hosted on [build.nvidia.com](https://build.nvidia.com). You can also enter any model ID from the catalog. Set `NVIDIA_API_KEY`. | Nemotron 3 Super 120B, Kimi K2.5, GLM-5, MiniMax M2.5, GPT-OSS 120B | -| OpenAI | Routes to the OpenAI API. Set `OPENAI_API_KEY`. | `gpt-5.4`, `gpt-5.4-mini`, `gpt-5.4-nano`, `gpt-5.4-pro-2026-03-05` | -| Other OpenAI-compatible endpoint | Routes to any server that implements `/v1/chat/completions`. If the endpoint also supports `/responses` with OpenClaw-style tool calling, NemoClaw can use that path; otherwise it falls back to `/chat/completions`. The wizard prompts for a base URL and model name. Works with OpenRouter, LocalAI, llama.cpp, or any compatible proxy. Set `COMPATIBLE_API_KEY`. | You provide the model name. | -| Anthropic | Routes to the Anthropic Messages API. Set `ANTHROPIC_API_KEY`. | `claude-sonnet-4-6`, `claude-haiku-4-5`, `claude-opus-4-6` | -| Other Anthropic-compatible endpoint | Routes to any server that implements the Anthropic Messages API (`/v1/messages`). The wizard prompts for a base URL and model name. Set `COMPATIBLE_ANTHROPIC_API_KEY`. | You provide the model name. | -| Google Gemini | Routes to Google's OpenAI-compatible endpoint. NemoClaw prefers `/responses` only when the endpoint proves it can handle tool calling in a way OpenClaw uses; otherwise it falls back to `/chat/completions`. Set `GEMINI_API_KEY`. | `gemini-3.1-pro-preview`, `gemini-3.1-flash-lite-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite` | -| Local Ollama | Routes to a local Ollama instance on `localhost:11434`. 
NemoClaw detects installed models, offers starter models if none are present, pulls and warms the selected model, and validates it. | Selected during onboarding. For more information, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill). | - -## Experimental Options - -The following local inference options require `NEMOCLAW_EXPERIMENTAL=1` and, when prerequisites are met, appear in the onboarding selection list. - -| Option | Condition | Notes | -|--------|-----------|-------| -| Local NVIDIA NIM | NIM-capable GPU detected | Pulls and manages a NIM container. | -| Local vLLM | vLLM running on `localhost:8000` | Auto-detects the loaded model. | - -For setup instructions, refer to Use a Local Inference Server (see the `nemoclaw-configure-inference` skill). - -## Validation - -NemoClaw validates the selected provider and model before creating the sandbox. -If validation fails, the wizard returns to provider selection. - -| Provider type | Validation method | -|---|---| -| OpenAI | Tries `/responses` first, then `/chat/completions`. | -| NVIDIA Endpoints | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Google Gemini | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Other OpenAI-compatible endpoint | Tries `/responses` first with a tool-calling probe that matches OpenClaw behavior. Falls back to `/chat/completions` if the endpoint does not return a compatible tool call. | -| Anthropic-compatible | Tries `/v1/messages`. | -| NVIDIA Endpoints (manual model entry) | Validates the model name against the catalog API. | -| Compatible endpoints | Sends a real inference request because many proxies do not expose a `/models` endpoint. 
For OpenAI-compatible endpoints, the probe includes tool calling before NemoClaw favors `/responses`. | - -## Next Steps - -- Use a Local Inference Server (see the `nemoclaw-configure-inference` skill) for Ollama, vLLM, NIM, and compatible-endpoint setup details. -- Switch Inference Models (see the `nemoclaw-configure-inference` skill) for changing the model at runtime without re-onboarding. diff --git a/.agents/skills/nemoclaw-configure-security/SKILL.md b/.agents/skills/nemoclaw-configure-security/SKILL.md deleted file mode 100644 index 784783766..000000000 --- a/.agents/skills/nemoclaw-configure-security/SKILL.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -name: "nemoclaw-configure-security" -description: "Presents a risk framework for every configurable security control in NemoClaw. Use when evaluating security posture, reviewing sandbox security defaults, or assessing control trade-offs. Explains where NemoClaw stores provider credentials, the file permissions it applies, and the operational security trade-offs of plaintext local storage. Use when reviewing credential handling or advising users how to secure stored API keys." ---- - -# NemoClaw Configure Security - -Presents a risk framework for every configurable security control in NemoClaw. Use when evaluating security posture, reviewing sandbox security defaults, or assessing control trade-offs. - -## Context - -NemoClaw ships with deny-by-default security controls across four layers: network, filesystem, process, and inference. -You can tune every control, but each change shifts the risk profile. -This page documents every configurable knob, its default, what it protects, the concrete risk of relaxing it, and a recommendation for common use cases. - -For background on how the layers fit together, refer to How It Works (see the `nemoclaw-overview` skill). - - - -## Protection Layers at a Glance - -NemoClaw enforces security at four layers. 
-NemoClaw locks some controls when it creates the sandbox and requires a restart to change them. -You can hot-reload others while the sandbox runs. - -The following diagram shows the default posture immediately after `nemoclaw onboard`, before you approve any endpoints or apply any presets. - -```mermaid -flowchart TB - subgraph HOST["Your Machine: default posture after nemoclaw onboard"] - direction TB - - YOU["πŸ‘€ Operator"] - - subgraph NC["NemoClaw + OpenShell"] - direction TB - - subgraph SB["Sandbox: the agent's isolated world"] - direction LR - PROC["βš™οΈ Process Layer<br/>Controls what the agent can execute"] - FS["πŸ“ Filesystem Layer<br/>Controls what the agent can read and write"] - AGENT["πŸ€– Agent"] - end - - subgraph GW["Gateway: the gatekeeper"] - direction LR - NET["🌐 Network Layer<br/>Controls where the agent can connect"] - INF["🧠 Inference Layer<br/>Controls which AI models the agent can use"] - end - end - end - - OUTSIDE["🌍 Outside World<br/>Internet Β· AI Providers Β· APIs"] - - AGENT -- "all requests" --> GW - GW -- "approved only" --> OUTSIDE - YOU -. "approve / deny" .-> GW - - classDef agent fill:#76b900,stroke:#5a8f00,color:#fff,stroke-width:2px,font-weight:bold - classDef locked fill:#1a1a1a,stroke:#76b900,color:#fff,stroke-width:2px - classDef hot fill:#333,stroke:#76b900,color:#e6f2cc,stroke-width:2px - classDef external fill:#f5f5f5,stroke:#ccc,color:#1a1a1a,stroke-width:1px - classDef operator fill:#fff,stroke:#76b900,color:#1a1a1a,stroke-width:2px,font-weight:bold - - class AGENT agent - class PROC,FS locked - class NET,INF hot - class OUTSIDE external - class YOU operator - - style HOST fill:none,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style NC fill:none,stroke:#76b900,stroke-width:1px,stroke-dasharray:5 5,color:#1a1a1a - style SB fill:#f5faed,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style GW fill:#2a2a2a,stroke:#76b900,stroke-width:2px,color:#fff -``` - -*Full details in `references/best-practices.md`.* - -## Reference - -- [NemoClaw Credential Storage](references/credential-storage.md) diff --git a/.agents/skills/nemoclaw-configure-security/references/best-practices.md b/.agents/skills/nemoclaw-configure-security/references/best-practices.md deleted file mode 100644 index 0046da5c7..000000000 --- a/.agents/skills/nemoclaw-configure-security/references/best-practices.md +++ /dev/null @@ -1,487 +0,0 @@ -# Security Best Practices - -NemoClaw ships with deny-by-default security controls across four layers: network, filesystem, process, and inference. -You can tune every control, but each change shifts the risk profile. -This page documents every configurable knob, its default, what it protects, the concrete risk of relaxing it, and a recommendation for common use cases. - -For background on how the layers fit together, refer to How It Works (see the `nemoclaw-overview` skill). - - - -## Protection Layers at a Glance - -NemoClaw enforces security at four layers. 
-NemoClaw locks some controls when it creates the sandbox and requires a restart to change them. -You can hot-reload others while the sandbox runs. - -The following diagram shows the default posture immediately after `nemoclaw onboard`, before you approve any endpoints or apply any presets. - -```mermaid -flowchart TB - subgraph HOST["Your Machine: default posture after nemoclaw onboard"] - direction TB - - YOU["πŸ‘€ Operator"] - - subgraph NC["NemoClaw + OpenShell"] - direction TB - - subgraph SB["Sandbox: the agent's isolated world"] - direction LR - PROC["βš™οΈ Process Layer<br/>Controls what the agent can execute"] - FS["πŸ“ Filesystem Layer<br/>Controls what the agent can read and write"] - AGENT["πŸ€– Agent"] - end - - subgraph GW["Gateway: the gatekeeper"] - direction LR - NET["🌐 Network Layer<br/>Controls where the agent can connect"] - INF["🧠 Inference Layer<br/>Controls which AI models the agent can use"] - end - end - end - - OUTSIDE["🌍 Outside World<br/>Internet Β· AI Providers Β· APIs"] - - AGENT -- "all requests" --> GW - GW -- "approved only" --> OUTSIDE - YOU -. "approve / deny" .-> GW - - classDef agent fill:#76b900,stroke:#5a8f00,color:#fff,stroke-width:2px,font-weight:bold - classDef locked fill:#1a1a1a,stroke:#76b900,color:#fff,stroke-width:2px - classDef hot fill:#333,stroke:#76b900,color:#e6f2cc,stroke-width:2px - classDef external fill:#f5f5f5,stroke:#ccc,color:#1a1a1a,stroke-width:1px - classDef operator fill:#fff,stroke:#76b900,color:#1a1a1a,stroke-width:2px,font-weight:bold - - class AGENT agent - class PROC,FS locked - class NET,INF hot - class OUTSIDE external - class YOU operator - - style HOST fill:none,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style NC fill:none,stroke:#76b900,stroke-width:1px,stroke-dasharray:5 5,color:#1a1a1a - style SB fill:#f5faed,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style GW fill:#2a2a2a,stroke:#76b900,stroke-width:2px,color:#fff -``` - -:::{list-table} -:header-rows: 1 -:widths: 20 30 20 30 - -* - Layer - - What it protects - - Enforcement point - - Changeable at runtime - -* - Network - - Unauthorized outbound connections and data exfiltration. - - OpenShell gateway - - Yes. Use `openshell policy set` or operator approval. - -* - Filesystem - - System binary tampering, credential theft, config manipulation. - - Landlock LSM + container mounts - - No. Requires sandbox re-creation. - -* - Process - - Privilege escalation, fork bombs, syscall abuse. - - Container runtime (Docker/K8s `securityContext`) - - No. Requires sandbox re-creation. - -* - Inference - - Credential exposure, unauthorized model access, cost overruns. - - OpenShell gateway - - Yes. Use `openshell inference set`. - -::: - -## Network Controls - -NemoClaw controls which hosts, ports, and HTTP methods the sandbox can reach, and lets operators approve or deny requests in real time. 
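Putting the network-layer fields described on this page together (`binaries`, `protocol: rest`, method and path `rules`), a single endpoint entry in `nemoclaw-blueprint/policies/openclaw-sandbox.yaml` might look like the following. This is a hypothetical sketch: the field names come from this page, but the exact schema is an assumption.

```yaml
# Hypothetical endpoint entry; field names follow this page's descriptions,
# but the precise schema is an assumption, not a verified policy file.
endpoints:
  - host: api.github.com
    port: 443
    binaries:
      - /usr/bin/gh
      - /usr/bin/git
    protocol: rest          # enable L7 inspection of method and path
    rules:
      - methods: [GET]      # read-only access to repository metadata
        paths: ["/repos/**"]
      - methods: [POST]     # write access only where the task needs it
        paths: ["/repos/*/issues"]
```

Scoping the entry to `gh` and `git`, and the write rule to one path, keeps the blast radius small if the agent is compromised.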
- - - -### Deny-by-Default Egress - -The sandbox blocks all outbound connections unless you explicitly list the endpoint in the policy file `nemoclaw-blueprint/policies/openclaw-sandbox.yaml`. - -| Aspect | Detail | -|---|---| -| Default | All egress denied. Only endpoints in the baseline policy can receive traffic. | -| What you can change | Add endpoints to the policy file (static) or with `openshell policy set` (dynamic). | -| Risk if relaxed | Each allowed endpoint is a potential data exfiltration path. The agent can send workspace content, credentials, or conversation history to any reachable host. | -| Recommendation | Add only endpoints the agent needs for its task. Prefer operator approval for one-off requests over permanently widening the baseline. | - -### Binary-Scoped Endpoint Rules - -Each network policy entry restricts which executables can reach the endpoint using the `binaries` field. - -OpenShell identifies the calling binary by reading `/proc//exe` (the kernel-trusted executable path, not `argv[0]`), walking the process tree for ancestor binaries, and computing a SHA256 hash of each binary on first use. -If someone replaces a binary while the sandbox runs, the hash mismatch triggers an immediate deny. - -| Aspect | Detail | -|---|---| -| Default | Each endpoint restricts access to specific binaries. For example, only `/usr/bin/gh` and `/usr/bin/git` can reach `github.com`. Binary paths support glob patterns (`*` matches one path component, `**` matches recursively). | -| What you can change | Add binaries to an endpoint entry, or omit the `binaries` field to allow any executable. | -| Risk if relaxed | Removing binary restrictions lets any process in the sandbox reach the endpoint. An agent could use `curl`, `wget`, or a Python script to exfiltrate data to an allowed host, bypassing the intended usage pattern. | -| Recommendation | Always scope endpoints to the binaries that need them. 
If the agent needs a host from a new binary, add that binary explicitly rather than removing the restriction. | - -### Path-Scoped HTTP Rules - -Endpoint rules restrict allowed HTTP methods and URL paths. - -| Aspect | Detail | -|---|---| -| Default | Most endpoints allow GET and POST on `/**`. Some allow GET only (read-only), such as `docs.openclaw.ai`. | -| What you can change | Add methods (PUT, DELETE, PATCH) or restrict paths to specific prefixes. | -| Risk if relaxed | Allowing all methods on an API endpoint gives the agent write and delete access. For example, allowing DELETE on `api.github.com` lets the agent delete repositories. | -| Recommendation | Use GET-only rules for endpoints that the agent only reads. Add write methods only for endpoints where the agent must create or modify resources. Restrict paths to specific API routes when possible. | - -### L4-Only vs L7 Inspection (`protocol` Field) - -All sandbox egress goes through OpenShell's CONNECT proxy. -The `protocol` field on an endpoint controls whether the proxy also inspects individual HTTP requests inside the tunnel. - -| Aspect | Detail | -|---|---| -| Default | Endpoints without a `protocol` field use L4-only enforcement: the proxy checks host, port, and binary identity, then relays the TCP stream without inspecting payloads. Setting `protocol: rest` enables L7 inspection: the proxy auto-detects and terminates TLS, then evaluates each HTTP request's method and path against the endpoint's `rules` or `access` preset. | -| What you can change | Add `protocol: rest` to an endpoint to enable per-request HTTP inspection. Use the `access` preset (`full`, `read-only`, `read-write`) or explicit `rules` to control allowed methods and paths. | -| Risk if relaxed | L4-only endpoints (no `protocol` field) allow the agent to send any data through the tunnel after the initial connection is permitted. The proxy cannot see or filter the HTTP method, path, or body. 
The `access: full` preset with `protocol: rest` enables inspection but allows all methods and paths, so it does not restrict what the agent can do at the HTTP level. | -| Recommendation | Use `protocol: rest` with specific `rules` for REST APIs where you want method and path control. Use `protocol: rest` with `access: read-only` for read-only endpoints. Omit `protocol` only for non-HTTP protocols (WebSocket, gRPC streaming) or endpoints that do not need HTTP inspection. | - -### Operator Approval Flow - -When the agent reaches an unlisted endpoint, OpenShell blocks the request and prompts the operator in the TUI. - -| Aspect | Detail | -|---|---| -| Default | Enabled. The gateway blocks all unlisted endpoints and requires approval. | -| What you can change | The system merges approved endpoints into the sandbox's policy as a new durable revision. They persist across sandbox restarts within the same sandbox instance. However, when you destroy and recreate the sandbox (for example, by running `nemoclaw onboard`), the policy resets to the baseline defined in the blueprint. | -| Risk if relaxed | Approving an endpoint permanently widens the running sandbox's policy. If you approve a broad domain (such as a CDN that hosts arbitrary content), the agent can fetch anything from that domain until you destroy and recreate the sandbox. | -| Recommendation | Review each blocked request before approving. If you find yourself approving the same endpoint repeatedly, add it to the baseline policy with appropriate binary and path restrictions. To reset approved endpoints, destroy and recreate the sandbox. | - -### Policy Presets - -NemoClaw ships preset policy files in `nemoclaw-blueprint/policies/presets/` for common integrations. - -| Preset | What it enables | Key risk | -|---|---|---| -| `discord` | Discord REST API, WebSocket gateway, CDN. | CDN endpoint (`cdn.discordapp.com`) allows GET to any path. WebSocket uses `access: full` (no inspection). 
| -| `docker` | Docker Hub, NVIDIA container registry. | Allows pulling arbitrary container images into the sandbox. | -| `huggingface` | Hugging Face model registry. | Allows downloading arbitrary models and datasets. | -| `jira` | Atlassian Jira API. | Gives agent read/write access to project issues and comments. | -| `npm` | npm and Yarn registries. | Allows installing arbitrary npm packages, which may contain malicious code. | -| `outlook` | Microsoft 365, Outlook. | Gives agent access to email. | -| `pypi` | Python Package Index. | Allows installing arbitrary Python packages, which may contain malicious code. | -| `slack` | Slack API, Socket Mode, webhooks. | WebSocket uses `access: full`. Agent can post to any channel the bot token has access to. | -| `telegram` | Telegram Bot API. | Agent can send messages to any chat the bot token has access to. | - -**Recommendation:** Apply presets only when the agent's task requires the integration. Review the preset's YAML file before applying to understand the endpoints, methods, and binary restrictions it adds. - -## Filesystem Controls - -NemoClaw restricts which paths the agent can read and write, protecting system binaries, configuration files, and gateway credentials. - - - -### Read-Only System Paths - -The container mounts system directories read-only to prevent the agent from modifying binaries, libraries, or configuration files. - -| Aspect | Detail | -|---|---| -| Default | `/usr`, `/lib`, `/proc`, `/dev/urandom`, `/app`, `/etc`, `/var/log` are read-only. | -| What you can change | Add or remove paths in the `filesystem_policy.read_only` section of the policy file. | -| Risk if relaxed | Making `/usr` or `/lib` writable lets the agent replace system binaries (such as `curl` or `node`) with trojanized versions. Making `/etc` writable lets the agent modify DNS resolution, TLS trust stores, or user accounts. | -| Recommendation | Never make system paths writable. 
If the agent needs a writable location for generated files, use a subdirectory of `/sandbox`. | - -### Read-Only `.openclaw` Config - -The `/sandbox/.openclaw` directory contains the OpenClaw gateway configuration, including auth tokens and CORS settings. -The container mounts it read-only while writable agent state (plugins, agent data) lives in `/sandbox/.openclaw-data` through symlinks. - -Multiple defense layers protect this directory: - -- **DAC permissions.** Root owns the directory and `openclaw.json` with `chmod 444`, so the sandbox user cannot write to them. -- **Immutable flag.** The entrypoint applies `chattr +i` to the directory and all symlinks, preventing modification even if other controls fail. -- **Symlink validation.** At startup, the entrypoint verifies every symlink in `.openclaw` points to the expected `.openclaw-data` target. If any symlink points elsewhere, the container refuses to start. -- **Config integrity hash.** The build process pins a SHA256 hash of `openclaw.json`. The entrypoint verifies it at startup and refuses to start if the hash does not match. - -| Aspect | Detail | -|---|---| -| Default | The container mounts `/sandbox/.openclaw` as read-only, root-owned, immutable, and integrity-verified at startup. `/sandbox/.openclaw-data` remains writable. | -| What you can change | Move `/sandbox/.openclaw` from `read_only` to `read_write` in the policy file. | -| Risk if relaxed | A writable `.openclaw` directory lets the agent modify its own gateway config: disabling CORS, changing auth tokens, or redirecting inference to an attacker-controlled endpoint. This is the single most dangerous filesystem change. | -| Recommendation | Never make `/sandbox/.openclaw` writable. | - -### Writable Paths - -The agent has read-write access to `/sandbox`, `/tmp`, and `/dev/null`. - -| Aspect | Detail | -|---|---| -| Default | `/sandbox` (agent workspace), `/tmp` (temporary files), `/dev/null`. 
| -| What you can change | Add additional writable paths in `filesystem_policy.read_write`. | -| Risk if relaxed | Each additional writable path expands the agent's ability to persist data and potentially modify system behavior. Adding `/var` lets the agent write to log directories. Adding `/home` gives access to other user directories. | -| Recommendation | Keep writable paths to `/sandbox` and `/tmp`. If the agent needs a persistent working directory, create a subdirectory under `/sandbox`. | - -### Landlock LSM Enforcement - -Landlock is a Linux Security Module that enforces filesystem access rules at the kernel level. - -| Aspect | Detail | -|---|---| -| Default | `compatibility: best_effort`. The entrypoint applies Landlock rules when the kernel supports them and silently skips them on older kernels. | -| What you can change | This is a NemoClaw default, not a user-facing knob. | -| Risk if relaxed | On kernels without Landlock support (pre-5.13), filesystem restrictions rely solely on container mount configuration, which is less granular. | -| Recommendation | Run on a kernel that supports Landlock (5.13+). Ubuntu 22.04 LTS and later include Landlock support. | - -## Process Controls - -NemoClaw limits the capabilities, user privileges, and resource quotas available to processes inside the sandbox. - - - -### Capability Drops - -The entrypoint drops dangerous Linux capabilities from the bounding set at startup using `capsh`. -This limits what capabilities any child process (gateway, sandbox, agent) can ever acquire. - -The entrypoint drops these capabilities: `cap_net_raw`, `cap_dac_override`, `cap_sys_chroot`, `cap_fsetid`, `cap_setfcap`, `cap_mknod`, `cap_audit_write`, `cap_net_bind_service`. -The entrypoint keeps these because it needs them for privilege separation using gosu: `cap_chown`, `cap_setuid`, `cap_setgid`, `cap_fowner`, `cap_kill`. 
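-
-One way to see the effect is to inspect the capability bounding set from a shell inside the sandbox. This is an illustrative check, assuming `capsh` (from `libcap2-bin`, which the NemoClaw image includes) is on the PATH:
-
-```console
-$ capsh --print | grep Bounding
-```
-
-Capabilities dropped by the entrypoint should be absent from the printed bounding set.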
- -This is best-effort: if `capsh` is not available or `CAP_SETPCAP` is not in the bounding set, the entrypoint logs a warning and continues with the default capability set. -For additional protection, pass `--cap-drop=ALL` with `docker run` or Compose (see Sandbox Hardening (see the `nemoclaw-deploy-remote` skill)). - -| Aspect | Detail | -|---|---| -| Default | The entrypoint drops dangerous capabilities at startup using `capsh`. Best-effort. | -| What you can change | When launching with `docker run` directly, pass `--cap-drop=ALL --cap-add=NET_BIND_SERVICE` for stricter enforcement. In the standard NemoClaw flow (with `nemoclaw onboard`), the entrypoint handles capability dropping automatically. | -| Risk if relaxed | `CAP_NET_RAW` allows raw socket access for network sniffing. `CAP_DAC_OVERRIDE` bypasses filesystem permission checks. Attackers can use `CAP_SYS_CHROOT` in container escape chains. If `capsh` is unavailable, the container runs with the default Docker capability set. | -| Recommendation | Run on an image that includes `capsh` (the NemoClaw image includes it through `libcap2-bin`). For defense-in-depth, also pass `--cap-drop=ALL` at the container runtime level. | - -### Gateway Process Isolation - -The OpenClaw gateway runs as a separate `gateway` user, not as the `sandbox` user that runs the agent. - -| Aspect | Detail | -|---|---| -| Default | The entrypoint starts the gateway process using `gosu gateway`, isolating it from the agent's `sandbox` user. | -| What you can change | This is not a user-facing knob. The entrypoint enforces it when running as root. In non-root mode (when OpenShell sets `no-new-privileges`), gateway process isolation does not work because `gosu` cannot change users. | -| Risk if relaxed | If the gateway and agent run as the same user, the agent can kill the gateway process and restart it with a tampered configuration (the "fake-HOME" attack). | -| Recommendation | No action needed. 
The entrypoint handles this automatically. Be aware that non-root mode disables this isolation. | - -### No New Privileges - -The `no-new-privileges` flag prevents processes from gaining additional privileges through setuid binaries or capability inheritance. - -| Aspect | Detail | -|---|---| -| Default | OpenShell sets `PR_SET_NO_NEW_PRIVS` using `prctl()` inside the sandbox process as part of the seccomp filter setup. The NemoClaw Compose example also shows the equivalent `security_opt: no-new-privileges:true` setting. | -| What you can change | OpenShell's seccomp path enforces this inside the sandbox. It is not a user-facing knob. | -| Risk if relaxed | Without this flag, a compromised process could execute a setuid binary to escalate to root inside the container, then attempt container escape techniques. | -| Recommendation | No action needed. OpenShell enforces this automatically when the sandbox network policy is active. This flag prevents `gosu` from switching users, so non-root mode disables gateway process isolation in the NemoClaw entrypoint. | - -### Process Limit - -A process limit caps the number of processes the sandbox user can spawn. -The entrypoint sets both soft and hard limits using `ulimit -u 512`. -This is best-effort: if the container runtime restricts `ulimit` modification, the entrypoint logs a security warning and continues without the limit. - -| Aspect | Detail | -|---|---| -| Default | 512 processes (`ulimit -u 512`), best-effort. | -| What you can change | Increase or decrease the limit with `--ulimit nproc=N:N` in `docker run` or the `ulimits` section in Compose. The runtime-level ulimit takes precedence over the entrypoint's setting. | -| Risk if relaxed | Removing or raising the limit makes the sandbox vulnerable to fork-bomb attacks, where a runaway process spawns children until the host runs out of resources. 
If the entrypoint cannot set the limit (logs `[SECURITY] Could not set soft/hard nproc limit`), the container runs without process limits. | -| Recommendation | Keep the default at 512. If the agent runs workloads that spawn many child processes (such as parallel test runners), increase to 1024 and monitor host resource usage. If the entrypoint logs a warning about ulimit restrictions, set the limit through the container runtime instead. | - -### Non-Root User - -The sandbox runs agent processes as a dedicated `sandbox` user and group. -The entrypoint starts as root for privilege separation, then drops to the `sandbox` user for all agent commands. - -| Aspect | Detail | -|---|---| -| Default | `run_as_user: sandbox`, `run_as_group: sandbox`. A separate `gateway` user runs the gateway process. | -| What you can change | Change the `process` section in the policy file to run as a different user. | -| Risk if relaxed | Running as `root` inside the container gives the agent access to modify any file in the container filesystem and increases the impact of container escape vulnerabilities. | -| Recommendation | Never run as root. Keep the `sandbox` user. | - -### PATH Hardening - -The entrypoint locks the `PATH` environment variable to system directories, preventing the agent from injecting malicious binaries into command resolution. - -| Aspect | Detail | -|---|---| -| Default | The entrypoint sets `PATH` to `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin` at startup. | -| What you can change | This is not a user-facing knob. The entrypoint enforces it. | -| Risk if relaxed | Without PATH hardening, the agent could create an executable named `curl` or `git` in a writable directory earlier in the PATH, intercepting commands run by the entrypoint or other processes. | -| Recommendation | No action needed. The entrypoint handles this automatically. | - -### Build Toolchain Removal - -The Dockerfile removes compilers and network probes from the runtime image. 
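-
-You can spot-check the removal from a shell inside the sandbox; each purged tool should fail to resolve. The tool names here are an illustrative subset of the purge list:
-
-```console
-$ for t in gcc g++ make nc ncat; do command -v "$t" || echo "$t: not found"; done
-```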
- -| Aspect | Detail | -|---|---| -| Default | The Dockerfile purges `gcc`, `gcc-12`, `g++`, `g++-12`, `cpp`, `cpp-12`, `make`, `netcat-openbsd`, `netcat-traditional`, and `ncat` from the sandbox image. | -| What you can change | Modify the Dockerfile to keep these tools, or install them at runtime if package manager access is allowed. | -| Risk if relaxed | A compiler lets the agent build arbitrary native code, including kernel exploits or custom network tools. `netcat` enables arbitrary TCP connections that bypass HTTP-level policy enforcement. | -| Recommendation | Keep build tools removed. If the agent needs to compile code, run the build in a separate, purpose-built container and copy artifacts into the sandbox. | - -## Gateway Authentication Controls - -The OpenClaw gateway authenticates devices that connect to the Control UI dashboard. -NemoClaw hardens these defaults at image build time. - -### Device Authentication - -Device authentication requires each connecting device to go through a pairing flow before it can interact with the gateway. - -| Aspect | Detail | -|---|---| -| Default | Enabled. The gateway requires device pairing for all connections. | -| What you can change | Set `NEMOCLAW_DISABLE_DEVICE_AUTH=1` as a Docker build argument to disable device authentication. This is a build-time setting baked into `openclaw.json` and verified by hash at startup. | -| Risk if relaxed | Disabling device auth allows any device on the network to connect to the gateway without proving identity. This is dangerous when combined with LAN-bind changes or cloudflared tunnels in remote deployments, resulting in an unauthenticated, publicly reachable dashboard. | -| Recommendation | Keep device auth enabled (the default). Only disable it for headless or development environments where no untrusted devices can reach the gateway. | - -### Insecure Auth Derivation - -The `allowInsecureAuth` setting controls whether the gateway permits non-HTTPS authentication. 
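-
-Because the value is derived from `CHAT_UI_URL` at image build time, the way to force secure auth is to export an `https://` origin before setup runs. A sketch, with a hypothetical hostname:
-
-```console
-$ export CHAT_UI_URL="https://assistant.example.com"
-$ nemoclaw onboard
-```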
- -| Aspect | Detail | -|---|---| -| Default | Derived from the `CHAT_UI_URL` scheme at build time. When the URL uses `http://` (local development), insecure auth is allowed. When it uses `https://` (remote or production), insecure auth is blocked. | -| What you can change | This is derived automatically from `CHAT_UI_URL`. Set `CHAT_UI_URL` to an `https://` URL to enforce secure auth. | -| Risk if relaxed | Allowing insecure auth over HTTPS defeats the purpose of TLS, because authentication tokens transit in cleartext. | -| Recommendation | Use `https://` for any deployment accessible beyond `localhost`. The default local URL (`http://127.0.0.1:18789`) correctly allows insecure auth for local development. | - -### Auto-Pair Client Allowlist - -The auto-pair watcher automatically approves device pairing requests from recognized clients, so you do not need to manually approve the Control UI. - -| Aspect | Detail | -|---|---| -| Default | The watcher approves devices with `clientId` set to `openclaw-control-ui` or `clientMode` set to `webchat`. All other clients are rejected and logged. | -| What you can change | This is not a user-facing knob. The allowlist is defined in the entrypoint script. | -| Risk if relaxed | Approving all device types without validation lets rogue or unexpected clients pair with the gateway unchallenged. | -| Recommendation | No action needed. The entrypoint handles this automatically. If you see `[auto-pair] rejected unknown client=...` in the logs, investigate the source of the unexpected connection. | - -### CLI Secret Redaction - -The CLI automatically redacts secret patterns (API keys, bearer tokens, provider credentials) from command output and error messages before logging them. - -| Aspect | Detail | -|---|---| -| Default | Enabled. The runner redacts secrets from stdout, stderr, and thrown error messages. | -| What you can change | This is not a user-facing knob. The CLI enforces it on all command output paths. 
| -| Risk if relaxed | Without redaction, secrets could appear in terminal scrollback, log files, or debug output shared in bug reports. | -| Recommendation | No action needed. If you share `nemoclaw debug` output, verify that no secrets appear in the collected diagnostics. | - -## Inference Controls - -OpenShell routes all inference traffic through the gateway to isolate provider credentials from the sandbox. - -### Routed Inference through `inference.local` - -The OpenShell gateway intercepts all inference requests from the agent and routes them to the configured provider. -The agent never receives the provider API key. - -| Aspect | Detail | -|---|---| -| Default | The agent talks to `inference.local`. The host owns the credential and upstream endpoint. | -| What you can change | You cannot configure this architecture. The system always enforces it. | -| Risk if bypassed | If the agent could reach an inference endpoint directly (by adding it to the network policy), it would need an API key. Since the sandbox does not contain credentials, this acts as defense-in-depth. However, adding an inference provider's host to the network policy without going through OpenShell routing could let the agent use a stolen or hardcoded key. | -| Recommendation | Do not add inference provider hosts (such as `api.openai.com` or `api.anthropic.com`) to the network policy. Use OpenShell inference routing instead. | - -### Provider Trust Tiers - -Different inference providers have different trust and cost profiles. - -| Provider | Trust level | Cost risk | Data handling | -|---|---|---|---| -| NVIDIA Endpoints | High. Hosted on `build.nvidia.com`. | Pay-per-token with an API key. Unattended agents can accumulate cost. | NVIDIA infrastructure processes requests. | -| OpenAI | High. Commercial API. | Pay-per-token. Same cost risk as NVIDIA Endpoints. | Subject to OpenAI data policies. | -| Anthropic | High. Commercial API. | Pay-per-token. Same cost risk as NVIDIA Endpoints. 
| Subject to Anthropic data policies. | -| Google Gemini | High. Commercial API. | Pay-per-token. Same cost risk as NVIDIA Endpoints. | Subject to Google data policies. | -| Local Ollama | Self-hosted. No data leaves the machine. | No per-token cost. GPU/CPU resource cost. | Data stays local. | -| Custom compatible endpoint | Varies. Depends on the proxy or gateway. | Varies. | Depends on the endpoint operator. | - -**Recommendation:** For sensitive workloads, use local Ollama to keep data on-premise. For general use, NVIDIA Endpoints provide a good balance of capability and trust. Review the data policies of any cloud provider you use. - -### Experimental Providers - -The `NEMOCLAW_EXPERIMENTAL=1` environment variable gates local NVIDIA NIM and local vLLM. - -| Aspect | Detail | -|---|---| -| Default | Disabled. The onboarding wizard does not show these providers. | -| What you can change | Set `NEMOCLAW_EXPERIMENTAL=1` before running `nemoclaw onboard`. | -| Risk if relaxed | NemoClaw has not fully validated these providers. NIM requires a NIM-capable GPU. vLLM must already be running on `localhost:8000`. Misconfiguration can cause failed inference or unexpected behavior. | -| Recommendation | Use experimental providers only for evaluation. Do not rely on them for always-on assistants. | - -## Posture Profiles - -The following profiles describe how to configure NemoClaw for different use cases. -These are not separate policy files. -They provide guidance on which controls to keep tight or relax. - -### Locked-Down (Default) - -Use for always-on assistants with minimal external access. - -- Keep all defaults. Do not add presets. -- Use operator approval for any endpoint the agent requests. -- Use NVIDIA Endpoints or local Ollama for inference. -- Monitor the TUI for unexpected network requests. - -### Development - -Use when the agent needs package registries, Docker Hub, or broader GitHub access during development tasks. 
- -- Apply the `pypi` and `npm` presets for package installation. -- Apply the `docker` preset if the agent builds or pulls container images. -- Keep binary restrictions on all presets. -- Review the agent's network activity periodically with `openshell term`. -- Use operator approval for any endpoint not covered by a preset. - -### Integration Testing - -Use when the agent talks to internal APIs or third-party services during testing. - -- Add custom endpoint entries with tight path and method restrictions. -- Use `protocol: rest` for all HTTP APIs to maintain inspection. -- Use operator approval for unknown endpoints during test runs. -- Review and clean up the baseline policy after testing. Remove endpoints that are no longer needed. - -## Common Mistakes - -The following patterns weaken security without providing meaningful benefit. - -| Mistake | Why it matters | What to do instead | -|---------|---------------|-------------------| -| Omitting `protocol: rest` on REST API endpoints | Endpoints without a `protocol` field use L4-only enforcement. The proxy allows the TCP stream through after checking host, port, and binary, but cannot see or filter individual HTTP requests. | Add `protocol: rest` with explicit `rules` to enable per-request method and path control on REST APIs. | -| Adding endpoints to the baseline policy for one-off requests | Adding an endpoint to the baseline policy makes it permanently reachable across all sandbox instances. | Use operator approval. Approved endpoints persist within the sandbox instance but reset when you destroy and recreate the sandbox. | -| Relying solely on the entrypoint for capability drops | The entrypoint drops dangerous capabilities using `capsh`, but this is best-effort. If `capsh` is unavailable or `CAP_SETPCAP` is not in the bounding set, the container runs with the default capability set. | Pass `--cap-drop=ALL` at the container runtime level as defense-in-depth. 
| -| Granting write access to `/sandbox/.openclaw` | This directory contains the OpenClaw gateway configuration. A writable `.openclaw` lets the agent modify auth tokens, disable CORS, or redirect inference routing. | Store agent-writable state in `/sandbox/.openclaw-data`. | -| Adding inference provider hosts to the network policy | Direct network access to an inference host bypasses credential isolation and usage tracking. | Use OpenShell inference routing instead of adding hosts like `api.openai.com` or `api.anthropic.com` to the network policy. | -| Disabling device auth for remote deployments | Without device auth, any device on the network can connect to the gateway without pairing. Combined with a cloudflared tunnel, this makes the dashboard publicly accessible and unauthenticated. | Keep `NEMOCLAW_DISABLE_DEVICE_AUTH` at its default (`0`). Only set it to `1` for local headless or development environments. | - -## Related Topics - -- Network Policies (see the `nemoclaw-reference` skill) for the full baseline policy reference. -- Customize the Network Policy (see the `nemoclaw-manage-policy` skill) for static and dynamic policy changes. -- Approve or Deny Network Requests (see the `nemoclaw-manage-policy` skill) for the operator approval flow. -- Sandbox Hardening (see the `nemoclaw-deploy-remote` skill) for container-level security measures. -- Inference Options (see the `nemoclaw-configure-inference` skill) for provider configuration details. -- How It Works (see the `nemoclaw-overview` skill) for the protection layer architecture. 
- diff --git a/.agents/skills/nemoclaw-configure-security/references/credential-storage.md b/.agents/skills/nemoclaw-configure-security/references/credential-storage.md deleted file mode 100644 index 553f91b63..000000000 --- a/.agents/skills/nemoclaw-configure-security/references/credential-storage.md +++ /dev/null @@ -1,111 +0,0 @@ -# Credential Storage - -NemoClaw stores operator-provided host-side credentials under `~/.nemoclaw/`. -These credentials are used during onboarding and host-side lifecycle operations. -They are not encrypted at rest by NemoClaw. -Instead, NemoClaw relies on local filesystem ownership and Unix permissions to limit access. - -## Location and Permissions - -By default, NemoClaw stores credentials in: - -```text -~/.nemoclaw/credentials.json -``` - -When NemoClaw creates this state directory, it uses owner-only permissions: - -- `~/.nemoclaw/` is created with mode `0700` -- `~/.nemoclaw/credentials.json` is written with mode `0600` - -That means only the local account that owns the files should be able to read or modify them. - -NemoClaw also refuses to use obviously unsafe `HOME` paths such as `/tmp`, `/var/tmp`, `/dev/shm`, or `/` for credential storage. -If `HOME` points to one of those locations, onboarding exits with an error instead of writing secrets there. - -## Plaintext Storage Warning - -The credential file is plaintext JSON. -NemoClaw does **not** currently encrypt the file or integrate with the host operating system keychain. - -A typical file looks like this: - -```json -{ - "NVIDIA_API_KEY": "nvapi-...", - "GITHUB_TOKEN": "ghp_...", - "OPENAI_API_KEY": "sk-..." -} -``` - -Treat this file like any other local secret material. -Anyone who can read it can reuse those credentials with the upstream provider. - -## Precedence and Scope - -When NemoClaw looks up a credential, it checks environment variables first. -If the corresponding environment variable is set, NemoClaw uses that value instead of the stored file. 
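-
-For example, a one-off run can take a key from the environment without touching the stored file (the key value here is a placeholder):
-
-```console
-$ NVIDIA_API_KEY="nvapi-placeholder" nemoclaw onboard
-```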
- -This behavior is useful for: - -- CI or automation where you do not want to persist secrets to disk -- temporary overrides during testing -- short-lived or rotated credentials - -For interactive local use, `nemoclaw onboard` can save credentials into `~/.nemoclaw/credentials.json` so future runs do not prompt again. - -## Security Recommendations - -Use the following practices to reduce the risk of credential exposure. - -1. Keep your home directory private and owned by your user account. -2. Exclude `~/.nemoclaw/` from cloud-sync folders, shared folders, and broad backup exports unless those systems are already approved for secret storage. -3. Prefer short-lived or low-scope provider credentials where the upstream service supports them. -4. Rotate keys after suspected exposure, machine transfer, or account changes. -5. Prefer environment variables for ephemeral automation instead of persisting long-lived secrets locally. -6. Do not copy `credentials.json` into container images, Git repositories, bug reports, or support bundles. - -## Inspect and Repair Permissions - -To inspect the current permissions: - -```console -$ ls -ld ~/.nemoclaw ~/.nemoclaw/credentials.json -``` - -Expected output should show a private directory and file, for example: - -```text -drwx------ ... ~/.nemoclaw --rw------- ... ~/.nemoclaw/credentials.json -``` - -If the permissions are broader than expected, tighten them: - -```console -$ chmod 700 ~/.nemoclaw -$ chmod 600 ~/.nemoclaw/credentials.json -``` - -## Rotate or Remove Stored Credentials - -The simplest way to replace a stored provider key is to rerun onboarding and provide the new value when prompted: - -```console -$ nemoclaw onboard -``` - -To remove the stored file entirely: - -```console -$ rm -f ~/.nemoclaw/credentials.json -``` - -On the next run, NemoClaw prompts again unless the credential is supplied through the environment. 
- -## Related Files - -Other NemoClaw host-side state also lives under `~/.nemoclaw/`, such as sandbox registry metadata. -These files are operational state, not provider secrets, but they should still remain in a user-owned home directory. - -For the broader sandbox security model and operational trade-offs, see Security Best Practices (see the `nemoclaw-configure-security` skill) and Architecture (see the `nemoclaw-reference` skill). diff --git a/.agents/skills/nemoclaw-deploy-remote/SKILL.md b/.agents/skills/nemoclaw-deploy-remote/SKILL.md deleted file mode 100644 index 95aaeed01..000000000 --- a/.agents/skills/nemoclaw-deploy-remote/SKILL.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -name: "nemoclaw-deploy-remote" -description: "Explains how to run NemoClaw on a remote GPU instance, including the deprecated Brev compatibility path and the preferred installer plus onboard flow. Describes security hardening measures applied to the NemoClaw sandbox container image. Use when reviewing container security, Docker capabilities, process limits, or sandbox hardening controls. Explains how Telegram reaches the sandboxed OpenClaw agent through OpenShell-managed processes and onboarding-time channel configuration. Use when setting up Telegram, a chat interface, or messaging integration without relying on nemoclaw start for bridges." ---- - -# NemoClaw Deploy Remote - -Explains how to run NemoClaw on a remote GPU instance, including the deprecated Brev compatibility path and the preferred installer plus onboard flow. - -## Prerequisites - -- The [Brev CLI](https://brev.nvidia.com) installed and authenticated. -- A provider credential for the inference backend you want to use during onboarding. -- NemoClaw installed locally if you plan to use the deprecated `nemoclaw deploy` wrapper. Otherwise, install NemoClaw directly on the remote host after provisioning it. -- A machine where you can run `nemoclaw onboard` (local or remote host that runs the gateway and sandbox). 
-- A Telegram bot token from [BotFather](https://t.me/BotFather).
-
-Run NemoClaw on a remote GPU instance through [Brev](https://brev.nvidia.com).
-The preferred path is to provision the VM, run the standard NemoClaw installer on that host, and then run `nemoclaw onboard`.
-
-## Step 1: Quick Start
-
-If your Brev instance is already up and has been onboarded with a sandbox, start with the standard sandbox chat flow:
-
-```console
-$ nemoclaw my-assistant connect
-$ openclaw tui
-```
-
-This gets you into the sandbox shell first and opens the OpenClaw chat UI right away.
-If the VM is fresh, run the standard installer on that host and then run `nemoclaw onboard` before trying `nemoclaw my-assistant connect`.
-
-If you are connecting from your local machine and still need to provision the remote VM, you can still use `nemoclaw deploy <instance-name>` as the legacy compatibility path described below.
-
-## Step 2: Deploy the Instance
-
-> **Warning:** The `nemoclaw deploy` command is deprecated.
-> Prefer provisioning the remote host separately, then running the standard NemoClaw installer and `nemoclaw onboard` on that host.
-
-Create a Brev instance and run the legacy compatibility flow:
-
-```console
-$ nemoclaw deploy <instance-name>
-```
-
-Replace `<instance-name>` with a name for your remote instance, for example `my-gpu-box`.
-
-The legacy compatibility flow performs the following steps on the VM:
-
-1. Installs Docker and the NVIDIA Container Toolkit if a GPU is present.
-2. Installs the OpenShell CLI.
-3. Runs `nemoclaw onboard` (the setup wizard) to create the gateway, register providers, and launch the sandbox.
-4. Starts optional host auxiliary services (for example the cloudflared tunnel) when `cloudflared` is available. Channel messaging is configured during onboarding and runs through OpenShell-managed processes, not through `nemoclaw start`.
-
-By default, the compatibility wrapper asks Brev to provision on `gcp`.
Override this with `NEMOCLAW_BREV_PROVIDER` if you need a different Brev cloud provider.
-
-## Step 3: Connect to the Remote Sandbox
-
-After deployment finishes, the deploy command opens an interactive shell inside the remote sandbox.
-To reconnect after closing the session, run the command again:
-
-```console
-$ nemoclaw deploy <instance-name>
-```
-
-## Step 4: Monitor the Remote Sandbox
-
-SSH to the instance and run the OpenShell TUI to monitor activity and approve network requests:
-
-```console
-$ ssh <instance-name> 'cd /home/ubuntu/nemoclaw && set -a && . .env && set +a && openshell term'
-```
-
-## Step 5: Verify Inference
-
-Run a test agent prompt inside the remote sandbox:
-
-```console
-$ openclaw agent --agent main --local -m "Hello from the remote sandbox" --session-id test
-```
-
-## Step 6: Remote Dashboard Access
-
-The NemoClaw dashboard validates the browser origin against an allowlist baked
-into the sandbox image at build time. By default the allowlist only contains
-`http://127.0.0.1:18789`. When accessing the dashboard from a remote browser
-(for example through a Brev public URL or an SSH port-forward), set
-`CHAT_UI_URL` to the origin the browser will use **before** running setup:
-
-```console
-$ export CHAT_UI_URL="https://openclaw0-<instance-name>.brevlab.com"
-$ nemoclaw deploy <instance-name>
-```
-
-For SSH port-forwarding, the origin is typically `http://127.0.0.1:18789` (the
-default), so no extra configuration is needed.
-
-> **Warning:** On Brev, set `CHAT_UI_URL` in the launchable environment configuration so it is
-> available when the installer builds the sandbox image. If `CHAT_UI_URL` is not
-> set on a headless host, the compatibility wrapper prints a warning.
->
-> `NEMOCLAW_DISABLE_DEVICE_AUTH` is also evaluated at image build time.
-> If you disable device auth for a remote deployment, any device that can reach the dashboard origin can connect without pairing.
-> Avoid this on internet-reachable or shared-network deployments.
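-
-If you prefer SSH port-forwarding over a public URL, a tunnel such as the following keeps the dashboard reachable at the default origin (user and hostname are illustrative):
-
-```console
-$ ssh -N -L 18789:127.0.0.1:18789 ubuntu@<instance-hostname>
-```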
-
-## Step 7: GPU Configuration
-
-The deploy script uses the `NEMOCLAW_GPU` environment variable to select the GPU type.
-The default value is `a2-highgpu-1g:nvidia-tesla-a100:1`.
-Set this variable before running `nemoclaw deploy` to use a different GPU configuration:
-
-```console
-$ export NEMOCLAW_GPU="a2-highgpu-1g:nvidia-tesla-a100:2"
-$ nemoclaw deploy <instance-name>
-```
-
----
-
-Telegram, Discord, and Slack reach your agent through OpenShell-managed processes and gateway constructs.
-NemoClaw configures those channels during `nemoclaw onboard`. Tokens are registered with OpenShell providers, channel configuration is baked into the sandbox image, and runtime delivery stays under OpenShell control.
-
-`nemoclaw start` does not start Telegram (or other chat bridges). It only starts optional host services such as the cloudflared tunnel when that binary is present.
-For details, refer to Commands (see the `nemoclaw-reference` skill).
-
-## Step 8: Create a Telegram Bot
-
-Open Telegram and send `/newbot` to [@BotFather](https://t.me/BotFather).
-Follow the prompts to create a bot and copy the bot token.
-
-## Step 9: Provide the Bot Token and Optional Allowlist
-
-Onboarding reads Telegram credentials from either host environment variables or the NemoClaw credential store (`getCredential` / `saveCredential` in the onboard flow). You do not have to export variables if you enter the token when the wizard asks.
-
-### Option A: Environment variables (CI, scripts, or before you start the wizard)
-
-```console
-$ export TELEGRAM_BOT_TOKEN=<bot-token>
-```
-
-Optional comma-separated allowlist (maps to the wizard field “Telegram User ID (for DM access)”):
-
-```console
-$ export TELEGRAM_ALLOWED_IDS="123456789,987654321"
-```
-
-### Option B: Interactive `nemoclaw onboard`
-
-When the wizard reaches **Messaging channels**, it lists Telegram, Discord, and Slack.
-Press **1** to toggle Telegram on or off, then **Enter** when done.
-If the token is not already in the environment or credential store, the wizard prompts for it and saves it to the store.
-If `TELEGRAM_ALLOWED_IDS` is not set, the wizard can prompt for allowed sender IDs for Telegram DMs (you can leave this blank and rely on OpenClaw pairing instead).
-
-## Step 10: Run `nemoclaw onboard`
-
-Complete the rest of the wizard so the blueprint can create OpenShell providers (for example `<instance-name>-telegram-bridge`), bake channel configuration into the image (`NEMOCLAW_MESSAGING_CHANNELS_B64`), and start the sandbox.
-
-Channel entries in `/sandbox/.openclaw/openclaw.json` are fixed at image build time. Landlock keeps that path read-only at runtime, so you cannot patch messaging config inside a running sandbox.
-
-If you add or change `TELEGRAM_BOT_TOKEN` (or toggle channels) after a sandbox already exists, you typically need to run `nemoclaw onboard` again so the image and provider attachments are rebuilt with the new settings.
-
-For a full first-time flow, refer to Quickstart (see the `nemoclaw-get-started` skill).
-
-## Step 11: Confirm Delivery
-
-After the sandbox is running, send a message to your bot in Telegram.
-If something fails, use `openshell term` on the host, check gateway logs, and verify network policy allows the Telegram API (see Customize the Network Policy (see the `nemoclaw-manage-policy` skill) and the `telegram` preset).
-
-## Step 12: `nemoclaw start` (cloudflared Only)
-
-`nemoclaw start` starts cloudflared when it is installed, which can expose the dashboard with a public URL.
-It does not affect Telegram connectivity.
- -```console -$ nemoclaw start -``` - -## Reference - -- [Sandbox Image Hardening](references/sandbox-hardening.md) - -## Related Skills - -- `nemoclaw-monitor-sandbox` β€” Monitor Sandbox Activity for sandbox monitoring tools -- `nemoclaw-reference` β€” Commands for the full `deploy` command reference diff --git a/.agents/skills/nemoclaw-deploy-remote/references/sandbox-hardening.md b/.agents/skills/nemoclaw-deploy-remote/references/sandbox-hardening.md deleted file mode 100644 index dc0ad59fa..000000000 --- a/.agents/skills/nemoclaw-deploy-remote/references/sandbox-hardening.md +++ /dev/null @@ -1,68 +0,0 @@ -# Sandbox Image Hardening - -The NemoClaw sandbox image applies several security measures to reduce attack -surface and limit the blast radius of untrusted workloads. - -## Removed Unnecessary Tools - -Build toolchains (`gcc`, `g++`, `make`) and network probes (`netcat`) are -explicitly purged from the runtime image. These tools are not needed at runtime -and would unnecessarily widen the attack surface. - -If you need a compiler during build, use the existing multi-stage build -(the `builder` stage has full Node.js tooling) and copy only artifacts into the -runtime stage. - -## Process Limits - -The container ENTRYPOINT sets `ulimit -u 512` to cap the number of processes -a sandbox user can spawn. This mitigates fork-bomb attacks. The startup script -(`nemoclaw-start.sh`) applies the same limit. - -Adjust the value via the `--ulimit nproc=512:512` flag if launching with -`docker run` directly. 
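The effect of that limit is easy to sketch in a plain shell. Lowering the per-user process cap never requires privileges, so the same `ulimit -u 512` call the entrypoint uses works outside the container too:

```shell
# Mirror the ENTRYPOINT's cap in a subshell, then read the effective limit back.
# Children of this subshell inherit the cap, which is what blunts fork bombs.
bash -c 'ulimit -u 512; ulimit -u'
```

Note that lowering the limit is one-way for the process and its children; a subshell is used here so your interactive shell keeps its original limit.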
- -## Dropping Linux Capabilities - -When running the sandbox container, drop all Linux capabilities and re-add only -what is strictly required: - -```console -$ docker run --rm \ - --cap-drop=ALL \ - --ulimit nproc=512:512 \ - nemoclaw-sandbox -``` - -### Docker Compose Example - -```yaml -services: - nemoclaw-sandbox: - image: nemoclaw-sandbox:latest - cap_drop: - - ALL - cap_add: - - NET_BIND_SERVICE - ulimits: - nproc: - soft: 512 - hard: 512 - security_opt: - - no-new-privileges:true - read_only: true - tmpfs: - - /tmp:size=64m -``` - -> **Note:** The `Dockerfile` itself cannot enforce `--cap-drop`. That is a -> runtime concern controlled by the container orchestrator. Always configure -> capability dropping in your `docker run` flags, Compose file, or Kubernetes -> `securityContext`. - -## References - -- [#807](https://github.com/NVIDIA/NemoClaw/issues/807): gcc in sandbox image -- [#808](https://github.com/NVIDIA/NemoClaw/issues/808): netcat in sandbox image -- [#809](https://github.com/NVIDIA/NemoClaw/issues/809): No process limit -- [#797](https://github.com/NVIDIA/NemoClaw/issues/797): Drop Linux capabilities diff --git a/.agents/skills/nemoclaw-get-started/SKILL.md b/.agents/skills/nemoclaw-get-started/SKILL.md deleted file mode 100644 index f3559bdf5..000000000 --- a/.agents/skills/nemoclaw-get-started/SKILL.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -name: "nemoclaw-get-started" -description: "Installs NemoClaw, launches a sandbox, and runs the first agent prompt. Use when onboarding, installing, or launching a NemoClaw sandbox for the first time." ---- - -# NemoClaw Get Started - -Installs NemoClaw, launches a sandbox, and runs the first agent prompt. Use when onboarding, installing, or launching a NemoClaw sandbox for the first time. - -## Prerequisites - -Before getting started, check the prerequisites to ensure you have the necessary software and hardware to run NemoClaw. 
- -> **Alpha software:** NemoClaw is in alpha, available as an early preview since March 16, 2026. -> APIs, configuration schemas, and runtime behavior are subject to breaking changes between releases. -> Do not use this software in production environments. -> File issues and feedback through the GitHub repository as the project continues to stabilize. - -Follow these steps to get started with NemoClaw and your first sandboxed OpenClaw agent. - -## Step 1: Install NemoClaw and Onboard OpenClaw Agent - -Download and run the installer script. -The script installs Node.js if it is not already present, then runs the guided onboard wizard to create a sandbox, configure inference, and apply security policies. - -> **Note:** NemoClaw creates a fresh OpenClaw instance inside the sandbox during the onboarding process. - -```bash -curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash -``` - -If you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH. -If `nemoclaw` is not found after install, run `source ~/.bashrc` (or `source ~/.zshrc` for zsh) or open a new terminal. - -> **Note:** The onboard flow builds the sandbox image with `NEMOCLAW_DISABLE_DEVICE_AUTH=1` so the dashboard is immediately usable during setup. -> This is a build-time setting baked into the sandbox image, not a runtime knob. -> If you export `NEMOCLAW_DISABLE_DEVICE_AUTH` after onboarding finishes, it has no effect on an existing sandbox. 
- -When the install completes, a summary confirms the running environment: - -```text -────────────────────────────────────────────────── -Sandbox my-assistant (Landlock + seccomp + netns) -Model nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints) -────────────────────────────────────────────────── -Run: nemoclaw my-assistant connect -Status: nemoclaw my-assistant status -Logs: nemoclaw my-assistant logs --follow -────────────────────────────────────────────────── - -[INFO] === Installation complete === -``` - -## Step 2: Chat with the Agent - -Connect to the sandbox, then chat with the agent through the TUI or the CLI. - -```bash -nemoclaw my-assistant connect -``` - -In the sandbox shell, open the OpenClaw terminal UI and start a chat: - -```bash -openclaw tui -``` - -Alternatively, send a single message and print the response: - -```bash -openclaw agent --agent main --local -m "hello" --session-id test -``` - -## Step 3: Uninstall - -To remove NemoClaw and all resources created during setup, run the uninstall script: - -```bash -curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash -``` - -| Flag | Effect | -|--------------------|-----------------------------------------------------| -| `--yes` | Skip the confirmation prompt. | -| `--keep-openshell` | Leave the `openshell` binary installed. | -| `--delete-models` | Also remove NemoClaw-pulled Ollama models. | - -For troubleshooting installation or onboarding issues, see the Troubleshooting guide (see the `nemoclaw-reference` skill). 
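One common way to pass flags such as `--yes` to a script piped from `curl` is `bash -s --`; whether the NemoClaw uninstaller also reads environment variables is not specified here, but the shell mechanic itself looks like this (with `printf` standing in for the real script):

```shell
# Arguments after `bash -s --` become the piped script's "$@", e.g.:
#   curl -fsSL .../uninstall.sh | bash -s -- --yes --keep-openshell
# Here a stub script just prints the flags it received, one per line.
echo 'printf "%s\n" "$@"' | bash -s -- --yes --keep-openshell
# prints: --yes
#         --keep-openshell
```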
- -## Related Skills - -- `nemoclaw-configure-inference` β€” Switch inference providers to use a different model or endpoint -- `nemoclaw-manage-policy` β€” Approve or deny network requests when the agent tries to reach external hosts -- `nemoclaw-deploy-remote` β€” Deploy to a remote GPU instance for always-on operation -- `nemoclaw-monitor-sandbox` β€” Monitor sandbox activity through the OpenShell TUI diff --git a/.agents/skills/nemoclaw-manage-policy/SKILL.md b/.agents/skills/nemoclaw-manage-policy/SKILL.md deleted file mode 100644 index bac54ae58..000000000 --- a/.agents/skills/nemoclaw-manage-policy/SKILL.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -name: "nemoclaw-manage-policy" -description: "Reviews and approves blocked agent network requests in the TUI. Use when approving or denying sandbox egress requests, managing blocked network calls, or using the approval TUI. Adds, removes, or modifies allowed endpoints in the sandbox policy. Use when customizing network policy, changing egress rules, or configuring sandbox endpoint access." ---- - -# NemoClaw Manage Policy - -Reviews and approves blocked agent network requests in the TUI. Use when approving or denying sandbox egress requests, managing blocked network calls, or using the approval TUI. - -## Prerequisites - -- A running NemoClaw sandbox. -- The OpenShell CLI on your `PATH`. -- A running NemoClaw sandbox for dynamic changes, or the NemoClaw source repository for static changes. - -Review and act on network requests that the agent makes to endpoints not listed in the sandbox policy. -OpenShell intercepts these requests and presents them in the TUI for operator approval. - -## Step 1: Open the TUI - -Start the OpenShell terminal UI to monitor sandbox activity: - -```console -$ openshell term -``` - -For a remote sandbox, pass the instance name: - -```console -$ ssh my-gpu-box 'cd /home/ubuntu/nemoclaw && . 
.env && openshell term' -``` - -The TUI displays the sandbox state, active inference provider, and a live feed of network activity. - -## Step 2: Trigger a Blocked Request - -When the agent attempts to reach an endpoint that is not in the baseline policy, OpenShell blocks the connection and displays the request in the TUI. -The blocked request includes the following details: - -- **Host and port** of the destination. -- **Binary** that initiated the request. -- **HTTP method** and path, if available. - -## Step 3: Approve or Deny the Request - -The TUI presents an approval prompt for each blocked request. - -- **Approve** the request to add the endpoint to the running policy for the current session. -- **Deny** the request to keep the endpoint blocked. - -Approved endpoints remain in the running policy until the sandbox stops. -They are not persisted to the baseline policy file. - -## Step 4: Run the Walkthrough - -To observe the approval flow in a guided session, run the walkthrough script: - -```console -$ ./scripts/walkthrough.sh -``` - -This script opens a split tmux session with the TUI on the left and the agent on the right. -The walkthrough requires tmux and the `NVIDIA_API_KEY` environment variable. - ---- - -Add, remove, or modify the endpoints that the sandbox is allowed to reach. - -The sandbox policy is defined in a declarative YAML file in the NemoClaw repository and enforced at runtime by [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell). -NemoClaw supports both static policy changes that persist across restarts and dynamic updates applied to a running sandbox through the OpenShell CLI. - -## Step 5: Static Changes - -Static changes modify the baseline policy file and take effect after the next sandbox creation. - -### Edit the Policy File - -Open `nemoclaw-blueprint/policies/openclaw-sandbox.yaml` and add or modify endpoint entries. 
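For orientation, a hypothetical entry might look like the following. The authoritative schema is the OpenShell policy reference; the hostname, binary path, and rule values here are made up for illustration.

```yaml
network:
  - endpoints:
      - host: pypi.org        # destination the sandbox may reach
        port: 443
    binaries:
      - /usr/bin/python3      # executables allowed to use this endpoint
    rules:
      - methods: [GET, HEAD]  # permitted HTTP methods
        paths: ["/simple/*"]  # permitted paths
```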
-
-Each entry in the `network` section defines an endpoint group with the following fields:
-
-`endpoints`
-: Host and port pairs that the sandbox can reach.
-
-`binaries`
-: Executables allowed to use this endpoint.
-
-`rules`
-: HTTP methods and paths that are permitted.
-
-### Re-Run Onboard
-
-Apply the updated policy by re-running the onboard wizard:
-
-```console
-$ nemoclaw onboard
-```
-
-The wizard picks up the modified policy file and applies it to the sandbox.
-
-### Verify the Policy
-
-Check that the sandbox is running with the updated policy:
-
-```console
-$ nemoclaw status
-```
-
-## Step 6: Dynamic Changes
-
-Dynamic changes apply a policy update to a running sandbox without restarting it.
-
-### Create a Policy File
-
-Create a YAML file with the endpoints to add.
-Follow the same format as the baseline policy in `nemoclaw-blueprint/policies/openclaw-sandbox.yaml`.
-
-### Apply the Policy
-
-Use the OpenShell CLI to apply the policy update:
-
-```console
-$ openshell policy set <policy-file>
-```
-
-The change takes effect immediately.
-
-### Scope of Dynamic Changes
-
-Dynamic changes apply only to the current session.
-When the sandbox stops, the running policy resets to the baseline defined in the policy file.
-To make changes permanent, update the static policy file and re-run `nemoclaw onboard`.
-
-## Step 7: Policy Presets
-
-NemoClaw ships preset policy files for common integrations in `nemoclaw-blueprint/policies/presets/`.
-Apply a preset as-is or use it as a starting template for a custom policy.
- -Available presets: - -| Preset | Endpoints | -|--------|-----------| -| `discord` | Discord webhook API | -| `docker` | Docker Hub, NVIDIA container registry | -| `huggingface` | Hugging Face model registry | -| `jira` | Atlassian Jira API | -| `npm` | npm and Yarn registries | -| `outlook` | Microsoft 365 and Outlook | -| `pypi` | Python Package Index | -| `slack` | Slack API and webhooks | -| `telegram` | Telegram Bot API | - -To apply a preset to a running sandbox, pass it as a policy file: - -```console -$ openshell policy set nemoclaw-blueprint/policies/presets/pypi.yaml -``` - -To include a preset in the baseline, merge its entries into `openclaw-sandbox.yaml` and re-run `nemoclaw onboard`. - -## Related Skills - -- `nemoclaw-reference` β€” Network Policies for the full baseline policy reference -- `nemoclaw-monitor-sandbox` β€” Monitor Sandbox Activity for general sandbox monitoring -- OpenShell [Policy Schema](https://docs.nvidia.com/openshell/latest/reference/policy-schema.html) for the full YAML policy schema reference. -- OpenShell [Sandbox Policies](https://docs.nvidia.com/openshell/latest/sandboxes/policies.html) for applying, iterating, and debugging policies at the OpenShell layer. diff --git a/.agents/skills/nemoclaw-monitor-sandbox/SKILL.md b/.agents/skills/nemoclaw-monitor-sandbox/SKILL.md deleted file mode 100644 index 4c54a3ace..000000000 --- a/.agents/skills/nemoclaw-monitor-sandbox/SKILL.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -name: "nemoclaw-monitor-sandbox" -description: "Inspects sandbox health, traces agent behavior, and diagnoses problems. Use when monitoring a running sandbox, debugging agent issues, or checking sandbox logs." ---- - -# NemoClaw Monitor Sandbox - -Inspects sandbox health, traces agent behavior, and diagnoses problems. Use when monitoring a running sandbox, debugging agent issues, or checking sandbox logs. - -## Prerequisites - -- A running NemoClaw sandbox. -- The OpenShell CLI on your `PATH`. 
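Before monitoring, a quick sanity check that the required CLIs resolve on `PATH` can save a confusing session (a minimal sketch; binary names as used throughout this guide):

```shell
# Report whether each CLI this guide relies on is reachable on PATH.
for bin in nemoclaw openshell; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: missing"
  fi
done
```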
- -Use the NemoClaw status, logs, and TUI tools together to inspect sandbox health, trace agent behavior, and diagnose problems. - -## Step 1: Check Sandbox Health - -Run the status command to view the sandbox state, blueprint run information, and active inference configuration: - -```console -$ nemoclaw status -``` - -Key fields in the output include the following: - -- Sandbox state, which indicates whether the sandbox is running, stopped, or in an error state. -- Blueprint run ID, which is the identifier for the most recent blueprint execution. -- Inference provider, which shows the active provider, model, and endpoint. - -Run `nemoclaw status` on the host to check sandbox state. -Use `openshell sandbox list` for the underlying sandbox details. - -## Step 2: View Blueprint and Sandbox Logs - -Stream the most recent log output from the blueprint runner and sandbox: - -```console -$ nemoclaw logs -``` - -To follow the log output in real time: - -```console -$ nemoclaw logs --follow -``` - -## Step 3: Monitor Network Activity in the TUI - -Open the OpenShell terminal UI for a live view of sandbox network activity and egress requests: - -```console -$ openshell term -``` - -For a remote sandbox, SSH to the instance and run `openshell term` there. - -The TUI shows the following information: - -- Active network connections from the sandbox. -- Blocked egress requests awaiting operator approval. -- Inference routing status. - -Refer to Approve or Deny Agent Network Requests (see the `nemoclaw-manage-policy` skill) for details on handling blocked requests. - -## Step 4: Test Inference - -Run a test inference request to verify that the provider is responding: - -```console -$ nemoclaw my-assistant connect -$ openclaw agent --agent main --local -m "Test inference" --session-id debug -``` - -If the request fails, check the following: - -1. Run `nemoclaw status` to confirm the active provider and endpoint. -2. 
Run `nemoclaw logs --follow` to view error messages from the blueprint runner. -3. Verify that the inference endpoint is reachable from the host. - -## Related Skills - -- `nemoclaw-reference` β€” Troubleshooting for common issues and resolution steps -- `nemoclaw-manage-policy` β€” Approve or Deny Agent Network Requests for the operator approval flow -- `nemoclaw-configure-inference` β€” Switch Inference Providers to change the active provider diff --git a/.agents/skills/nemoclaw-overview/SKILL.md b/.agents/skills/nemoclaw-overview/SKILL.md deleted file mode 100644 index 56078e7ab..000000000 --- a/.agents/skills/nemoclaw-overview/SKILL.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -name: "nemoclaw-overview" -description: "Explains how OpenClaw, OpenShell, and NemoClaw form the ecosystem, NemoClaw’s position in the stack, and when to prefer NemoClaw versus integrating OpenShell and OpenClaw directly. Use when users ask about the relationship between OpenClaw, OpenShell, and NemoClaw, or when to use NemoClaw versus OpenShell. Describes how NemoClaw works internally: CLI, plugin, blueprint runner, OpenShell orchestration, inference routing, and protection layers. Use for sandbox lifecycle and architecture mechanics; not for product definition (Overview) or multi-project placement (Ecosystem). Explains what NemoClaw covers: onboarding, lifecycle management, and management of OpenClaw within OpenShell containers, plus capabilities and why it exists. Use when users ask what NemoClaw is or what the project provides. For ecosystem placement or OpenShell-only paths, use the Ecosystem page; for internal mechanics, use How It Works. Lists changelogs and feature history for NemoClaw releases. Use when checking what changed in a releas..." ---- - -# NemoClaw Overview - -Explains how OpenClaw, OpenShell, and NemoClaw form the ecosystem, NemoClaw’s position in the stack, and when to prefer NemoClaw versus integrating OpenShell and OpenClaw directly. 
Use when users ask about the relationship between OpenClaw, OpenShell, and NemoClaw, or when to use NemoClaw versus OpenShell. - -## Context - -NemoClaw provides onboarding, lifecycle management, and management of OpenClaw within OpenShell containers. - -This page describes how the ecosystem is formed across projects, where NemoClaw sits relative to [OpenShell](https://github.com/NVIDIA/OpenShell) and [OpenClaw](https://openclaw.ai), and how to choose between NemoClaw and OpenShell. - -## How the Stack Fits Together - -Three pieces usually appear together in a NemoClaw deployment, each with a distinct scope: - -| Project | Scope | -|---------|--------| -| [OpenClaw](https://openclaw.ai) | The assistant: runtime, tools, memory, and behavior inside the container. It does not define the sandbox or the host gateway. | -| [OpenShell](https://github.com/NVIDIA/OpenShell) | The execution environment: sandbox lifecycle, network and filesystem policy, inference routing, and the operator-facing `openshell` CLI for those primitives. | -| NemoClaw | The NVIDIA reference stack that implements the definition above on the host: `nemoclaw` CLI and plugin, versioned blueprint, channel messaging configured for OpenShell-managed delivery, and state migration helpers so OpenClaw runs inside OpenShell in a documented, repeatable way. | - -NemoClaw sits above OpenShell in the operator workflow. -It drives OpenShell APIs and CLI to create and configure the sandbox that runs OpenClaw. -Models and endpoints sit behind OpenShell’s inference routing. -NemoClaw onboarding wires provider choice into that routing. - -```mermaid -flowchart TB - NC["🦞 NVIDIA NemoClaw
CLI, plugin, blueprint"] - OS["🐚 NVIDIA OpenShell
Gateway, policy, inference routing"] - OC["🦞 OpenClaw
Assistant in sandbox"] - - NC -->|orchestrates| OS - OS -->|isolates and runs| OC - - classDef nv fill:#76b900,stroke:#333,color:#fff - classDef nvLight fill:#e6f2cc,stroke:#76b900,color:#1a1a1a - classDef nvDark fill:#333,stroke:#76b900,color:#fff - - class NC nv - class OS nv - class OC nvDark - - linkStyle 0 stroke:#76b900,stroke-width:2px - linkStyle 1 stroke:#76b900,stroke-width:2px -``` - -## NemoClaw Path versus OpenShell Path - -Both paths assume OpenShell can sandbox a workload. -The difference is who owns the integration work. - -| Path | What it means | -|------|---------------| -| **NemoClaw path** | You adopt the reference stack. NemoClaw’s blueprint encodes a hardened image, default policies, and orchestration so `nemoclaw onboard` can stand up a known-good OpenClaw-on-OpenShell setup with less custom glue. | -| **OpenShell path** | You use OpenShell as the platform and supply your own container, install steps for OpenClaw, policy YAML, provider setup, and any host bridges. OpenShell stays the sandbox and policy engine; nothing requires NemoClaw’s blueprint or CLI. | - -## When to Use Which - -Use the following table to decide when to use NemoClaw versus OpenShell. - -| Situation | Prefer | -|-----------|--------| -| You want OpenClaw with minimal assembly, NVIDIA defaults, and the documented install and onboard flow. | NemoClaw | -| You need maximum flexibility: custom images, a layout that does not match the NemoClaw blueprint, or a workload outside this reference stack. | OpenShell with your own integration | -| You are standardizing on the NVIDIA reference for always-on assistants with policy and inference routing. | NemoClaw | -| You are building internal platform abstractions where the NemoClaw CLI or blueprint is not the right fit. 
| OpenShell (and your orchestration) | - -*Full details in `references/ecosystem.md`.* - -This page explains how NemoClaw operates, which parts run where, how the blueprint drives OpenShell, and how inference and policy attach to the sandbox. - -## How the Pieces Connect - -The `nemoclaw` CLI is the primary entrypoint for setting up and managing sandboxed OpenClaw agents. -It delegates heavy lifting to a versioned blueprint, a Python artifact that orchestrates sandbox creation, policy application, and inference provider setup through the OpenShell CLI. - -Between your shell and the running sandbox, NemoClaw contributes these integration layers: - -| Layer | Role in the flow | -|-------|------------------| -| Onboarding | `nemoclaw onboard` validates credentials, selects providers, and drives blueprint execution until the sandbox is ready. | -| Blueprint | Supplies the hardened image definition, default policies, capability posture, and orchestration steps the runner applies through OpenShell. | -| State management | Migrates agent state across machines with credential stripping and integrity checks. | -| Channel messaging | OpenShell-managed processes connect Telegram, Discord, Slack, and similar platforms to the agent. NemoClaw enables this through onboarding and blueprint wiring; delivery is not a separate NemoClaw host daemon. | - -For repository layout, file paths, and deeper diagrams, see Architecture (see the `nemoclaw-reference` skill). 
- -```mermaid -flowchart TB - subgraph Host - CMD["nemoclaw onboard"] - PLUGIN[nemoclaw plugin] - BLUEPRINT[blueprint runner] - CLI["openshell CLI sandbox Β· gateway Β· inference Β· policy"] - - CMD --> PLUGIN - PLUGIN --> BLUEPRINT - BLUEPRINT --> CLI - end - - subgraph Sandbox["OpenShell Sandbox"] - AGENT[OpenClaw agent] - INF[NVIDIA inference, routed] - NET[default network policy] - FS[filesystem isolation] - - AGENT --- INF - AGENT --- NET - AGENT --- FS - end - - PLUGIN --> AGENT - - classDef nv fill:#76b900,stroke:#333,color:#fff - classDef nvLight fill:#e6f2cc,stroke:#76b900,color:#1a1a1a - classDef nvDark fill:#333,stroke:#76b900,color:#fff - - class CMD,PLUGIN,BLUEPRINT nvDark - class CLI nv - class AGENT nv - class INF,NET,FS nvLight - - style Host fill:none,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style Sandbox fill:#f5faed,stroke:#76b900,stroke-width:2px,color:#1a1a1a -``` - -## Design Principles - -NemoClaw architecture follows the following principles. - -*Full details in `references/how-it-works.md`.* - -NVIDIA NemoClaw is an open source reference stack that simplifies running [OpenClaw](https://openclaw.ai) always-on assistants. -NemoClaw provides onboarding, lifecycle management, and management of OpenClaw within OpenShell containers. -It incorporates policy-based privacy and security guardrails, giving you control over your agents’ behavior and data handling. -This enables self-evolving claws to run more safely in clouds, on prem, RTX PCs and DGX Spark. - -NemoClaw pairs open source and hosted models (for example [NVIDIA Nemotron](https://build.nvidia.com)) with a hardened sandbox, routed inference, and declarative egress policy so deployment stays safer and more repeatable. -The sandbox runtime comes from [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell); NemoClaw adds the blueprint, `nemoclaw` CLI, onboarding, and related tooling as the reference way to run OpenClaw there. 
- -| Capability | Description | -|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sandbox OpenClaw | Creates an OpenShell sandbox pre-configured for OpenClaw, with filesystem and network policies applied from the first boot. | -| Route inference | Configures OpenShell inference routing so agent traffic goes to the provider and model you chose during onboarding (NVIDIA Endpoints, OpenAI, Anthropic, Gemini, compatible endpoints, local Ollama, and others). The agent uses `inference.local` inside the sandbox; credentials stay on the host. | -| Manage the lifecycle | Handles blueprint versioning, digest verification, and sandbox setup. | - -## Key Features - -NemoClaw provides the following product capabilities. - -| Feature | Description | -|---------|-------------| -| Guided onboarding | Validates credentials, selects providers, and creates a working sandbox in one command. | -| Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. | -| State management | Safe migration of agent state across machines with credential stripping and integrity verification. | -| Channel messaging | OpenShell-managed processes connect Telegram, Discord, Slack, and similar platforms to the sandboxed agent. NemoClaw configures channels during onboarding; OpenShell supplies the native constructs, credential flow, and runtime supervision. | -| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. | -| Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. 
| - -## Challenge - -Autonomous AI agents like OpenClaw can make arbitrary network requests, access the host filesystem, and call any inference endpoint. Without guardrails, this creates security, cost, and compliance risks that grow as agents run unattended. - -## Benefits - -NemoClaw provides the following benefits. - -| Benefit | Description | -|----------------------------|------------------------------------------------------------------------------------------------------------------------| -| Sandboxed execution | Every agent runs inside an OpenShell sandbox with Landlock, seccomp, and network namespace isolation. No access is granted by default. | -| Routed inference | Model traffic is routed through the OpenShell gateway to your selected provider, transparent to the agent. You can switch providers or models. Refer to Inference Options (see the `nemoclaw-configure-inference` skill). | -| Declarative network policy | Egress rules are defined in YAML. Unknown hosts are blocked and surfaced to the operator for approval. | -| Single CLI | The `nemoclaw` command orchestrates the full stack: gateway, sandbox, inference provider, and network policy. | -| Blueprint lifecycle | Versioned blueprints handle sandbox creation, digest verification, and reproducible setup. | - -## Use Cases - -You can use NemoClaw for various use cases including the following. - -| Use Case | Description | -|---------------------------|----------------------------------------------------------------------------------------------| -| Always-on assistant | Run an OpenClaw assistant with controlled network access and operator-approved egress. | -| Sandboxed testing | Test agent behavior in a locked-down environment before granting broader permissions. | -| Remote GPU deployment | Deploy a sandboxed agent to a remote GPU instance for persistent operation. 
| - -*Full details in `references/overview.md`.* - -## Reference - -- [NemoClaw Release Notes](references/release-notes.md) - -## Related Skills - -- `nemoclaw-get-started` β€” Quickstart to install NemoClaw and run your first agent -- `nemoclaw-configure-inference` β€” Switch Inference Providers to configure the inference provider -- `nemoclaw-manage-policy` β€” Approve or Deny Network Requests to manage egress approvals diff --git a/.agents/skills/nemoclaw-overview/references/ecosystem.md b/.agents/skills/nemoclaw-overview/references/ecosystem.md deleted file mode 100644 index dc160445c..000000000 --- a/.agents/skills/nemoclaw-overview/references/ecosystem.md +++ /dev/null @@ -1,70 +0,0 @@ -# Ecosystem - -NemoClaw provides onboarding, lifecycle management, and management of OpenClaw within OpenShell containers. - -This page describes how the ecosystem is formed across projects, where NemoClaw sits relative to [OpenShell](https://github.com/NVIDIA/OpenShell) and [OpenClaw](https://openclaw.ai), and how to choose between NemoClaw and OpenShell. - -## How the Stack Fits Together - -Three pieces usually appear together in a NemoClaw deployment, each with a distinct scope: - -| Project | Scope | -|---------|--------| -| [OpenClaw](https://openclaw.ai) | The assistant: runtime, tools, memory, and behavior inside the container. It does not define the sandbox or the host gateway. | -| [OpenShell](https://github.com/NVIDIA/OpenShell) | The execution environment: sandbox lifecycle, network and filesystem policy, inference routing, and the operator-facing `openshell` CLI for those primitives. | -| NemoClaw | The NVIDIA reference stack that implements the definition above on the host: `nemoclaw` CLI and plugin, versioned blueprint, channel messaging configured for OpenShell-managed delivery, and state migration helpers so OpenClaw runs inside OpenShell in a documented, repeatable way. | - -NemoClaw sits above OpenShell in the operator workflow. 
-It drives OpenShell APIs and CLI to create and configure the sandbox that runs OpenClaw.
-Models and endpoints sit behind OpenShell's inference routing.
-NemoClaw onboarding wires provider choice into that routing.
-
-```mermaid
-flowchart TB
-    NC["🦞 NVIDIA NemoClaw
CLI, plugin, blueprint"] - OS["🐚 NVIDIA OpenShell
Gateway, policy, inference routing"] - OC["🦞 OpenClaw
Assistant in sandbox"]
-
-    NC -->|orchestrates| OS
-    OS -->|isolates and runs| OC
-
-    classDef nv fill:#76b900,stroke:#333,color:#fff
-    classDef nvLight fill:#e6f2cc,stroke:#76b900,color:#1a1a1a
-    classDef nvDark fill:#333,stroke:#76b900,color:#fff
-
-    class NC nv
-    class OS nv
-    class OC nvDark
-
-    linkStyle 0 stroke:#76b900,stroke-width:2px
-    linkStyle 1 stroke:#76b900,stroke-width:2px
-```
-
-## NemoClaw Path versus OpenShell Path
-
-Both paths assume OpenShell can sandbox a workload.
-The difference is who owns the integration work.
-
-| Path | What it means |
-|------|---------------|
-| **NemoClaw path** | You adopt the reference stack. NemoClaw's blueprint encodes a hardened image, default policies, and orchestration so `nemoclaw onboard` can stand up a known-good OpenClaw-on-OpenShell setup with less custom glue. |
-| **OpenShell path** | You use OpenShell as the platform and supply your own container, install steps for OpenClaw, policy YAML, provider setup, and any host bridges. OpenShell stays the sandbox and policy engine; nothing requires NemoClaw's blueprint or CLI. |
-
-## When to Use Which
-
-Use the following table to decide when to use NemoClaw versus OpenShell.
-
-| Situation | Prefer |
-|-----------|--------|
-| You want OpenClaw with minimal assembly, NVIDIA defaults, and the documented install and onboard flow. | NemoClaw |
-| You need maximum flexibility: custom images, a layout that does not match the NemoClaw blueprint, or a workload outside this reference stack. | OpenShell with your own integration |
-| You are standardizing on the NVIDIA reference for always-on assistants with policy and inference routing. | NemoClaw |
-| You are building internal platform abstractions where the NemoClaw CLI or blueprint is not the right fit. | OpenShell (and your orchestration) |
-
-## Related topics
-
-| Page | View |
-|------|------|
-| Overview (see the `nemoclaw-overview` skill) | What NemoClaw is: capabilities, benefits, and use cases.
| -| How It Works (see the `nemoclaw-overview` skill) | How NemoClaw runs: plugin, blueprint, sandbox creation, routing, protection layers. | -| Architecture (see the `nemoclaw-reference` skill) | Repository structure and technical diagrams. | diff --git a/.agents/skills/nemoclaw-overview/references/how-it-works.md b/.agents/skills/nemoclaw-overview/references/how-it-works.md deleted file mode 100644 index 42c6d0004..000000000 --- a/.agents/skills/nemoclaw-overview/references/how-it-works.md +++ /dev/null @@ -1,129 +0,0 @@ -# How NemoClaw Works - -This page explains how NemoClaw operates, which parts run where, how the blueprint drives OpenShell, and how inference and policy attach to the sandbox. - -## How the Pieces Connect - -The `nemoclaw` CLI is the primary entrypoint for setting up and managing sandboxed OpenClaw agents. -It delegates heavy lifting to a versioned blueprint, a Python artifact that orchestrates sandbox creation, policy application, and inference provider setup through the OpenShell CLI. - -Between your shell and the running sandbox, NemoClaw contributes these integration layers: - -| Layer | Role in the flow | -|-------|------------------| -| Onboarding | `nemoclaw onboard` validates credentials, selects providers, and drives blueprint execution until the sandbox is ready. | -| Blueprint | Supplies the hardened image definition, default policies, capability posture, and orchestration steps the runner applies through OpenShell. | -| State management | Migrates agent state across machines with credential stripping and integrity checks. | -| Channel messaging | OpenShell-managed processes connect Telegram, Discord, Slack, and similar platforms to the agent. NemoClaw enables this through onboarding and blueprint wiring; delivery is not a separate NemoClaw host daemon. | - -For repository layout, file paths, and deeper diagrams, see Architecture (see the `nemoclaw-reference` skill). 
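The credential stripping and integrity checks named in the state-management row above can be sketched in shell. The file names and JSON shape here are illustrative assumptions, not NemoClaw's actual snapshot layout:

```shell
# Illustrative sketch only: strip a secret field from a state snapshot,
# then record a digest so the receiving machine can verify integrity.
printf '{"model":"x","api_key":"secret"}' > state.json

# Hypothetical stripping step: drop the api_key field before transfer.
sed 's/,"api_key":"[^"]*"//' state.json > state.stripped.json

# Integrity check: hash on the sending side, verify before restore.
sha256sum state.stripped.json > state.stripped.json.sha256
sha256sum -c state.stripped.json.sha256
```

A real implementation would parse JSON properly (for example with `jq`) rather than pattern-matching, but the shape is the same: remove credentials, then verify a digest before restoring.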
- -```mermaid -flowchart TB - subgraph Host - CMD["nemoclaw onboard"] - PLUGIN[nemoclaw plugin] - BLUEPRINT[blueprint runner] - CLI["openshell CLI sandbox Β· gateway Β· inference Β· policy"] - - CMD --> PLUGIN - PLUGIN --> BLUEPRINT - BLUEPRINT --> CLI - end - - subgraph Sandbox["OpenShell Sandbox"] - AGENT[OpenClaw agent] - INF[NVIDIA inference, routed] - NET[default network policy] - FS[filesystem isolation] - - AGENT --- INF - AGENT --- NET - AGENT --- FS - end - - PLUGIN --> AGENT - - classDef nv fill:#76b900,stroke:#333,color:#fff - classDef nvLight fill:#e6f2cc,stroke:#76b900,color:#1a1a1a - classDef nvDark fill:#333,stroke:#76b900,color:#fff - - class CMD,PLUGIN,BLUEPRINT nvDark - class CLI nv - class AGENT nv - class INF,NET,FS nvLight - - style Host fill:none,stroke:#76b900,stroke-width:2px,color:#1a1a1a - style Sandbox fill:#f5faed,stroke:#76b900,stroke-width:2px,color:#1a1a1a -``` - -## Design Principles - -NemoClaw architecture follows the following principles. - -Thin plugin, versioned blueprint -: The plugin stays small and stable. Orchestration logic lives in the blueprint and evolves on its own release cadence. - -Respect CLI boundaries -: The `nemoclaw` CLI is the primary interface for sandbox management. - -Supply chain safety -: Blueprint artifacts are immutable, versioned, and digest-verified before execution. - -OpenShell-backed lifecycle -: NemoClaw orchestrates OpenShell resources under the hood, but `nemoclaw onboard` - is the supported operator entry point for creating or recreating NemoClaw-managed sandboxes. - -Reproducible setup -: Running setup again recreates the sandbox from the same blueprint and policy definitions. - -## Plugin and Blueprint - -NemoClaw is split into two parts: - -- The *plugin* is a TypeScript package that registers an inference provider and the `/nemoclaw` slash command inside the sandbox. - It handles user interaction and delegates orchestration work to the blueprint. 
-- The *blueprint* is a versioned Python artifact that contains all the logic for creating sandboxes, applying policies, and configuring inference. - The plugin resolves, verifies, and executes the blueprint as a subprocess. - -This separation keeps the plugin small and stable while allowing the blueprint to evolve on its own release cadence. - -## Sandbox Creation - -When you run `nemoclaw onboard`, NemoClaw creates an OpenShell sandbox that runs OpenClaw in an isolated container. -The blueprint orchestrates this process through the OpenShell CLI: - -1. The plugin downloads the blueprint artifact, checks version compatibility, and verifies the digest. -2. The blueprint determines which OpenShell resources to create or update, such as the gateway, inference providers, sandbox, and network policy. -3. The blueprint calls OpenShell CLI commands to create the sandbox and configure each resource. - -After the sandbox starts, the agent runs inside it with all network, filesystem, and inference controls in place. - -## Inference Routing - -Inference requests from the agent never leave the sandbox directly. -OpenShell intercepts every inference call and routes it to the configured provider. -During onboarding, NemoClaw validates the selected provider and model, configures the OpenShell route, and bakes the matching model reference into the sandbox image. -The sandbox then talks to `inference.local`, while the host owns the actual provider credential and upstream endpoint. - -## Protection Layers - -The sandbox starts with a default policy that controls network egress, filesystem access, process privileges, and inference routing. - -| Layer | What it protects | When it applies | -|---|---|---| -| Network | Blocks unauthorized outbound connections. | Hot-reloadable at runtime. | -| Filesystem | Prevents reads and writes outside `/sandbox` and `/tmp`. | Locked at sandbox creation. | -| Process | Blocks privilege escalation and dangerous syscalls. 
| Locked at sandbox creation. |
-| Inference | Reroutes model API calls to controlled backends. | Hot-reloadable at runtime. |
-
-When the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval. Approved endpoints persist for the current session but are not saved to the baseline policy file.
-
-For details on the baseline rules, refer to Network Policies (see the `nemoclaw-reference` skill). For container-level hardening, refer to Sandbox Hardening (see the `nemoclaw-deploy-remote` skill).
-
-## Next Steps
-
-- Read Ecosystem (see the `nemoclaw-overview` skill) for stack-level relationships and NemoClaw versus OpenShell-only paths.
-- Follow the Quickstart (see the `nemoclaw-get-started` skill) to launch your first sandbox.
-- Refer to the Architecture (see the `nemoclaw-reference` skill) for the full technical structure, including file layouts and the blueprint lifecycle.
-- Refer to Inference Options (see the `nemoclaw-configure-inference` skill) for detailed provider configuration.
diff --git a/.agents/skills/nemoclaw-overview/references/overview.md b/.agents/skills/nemoclaw-overview/references/overview.md
deleted file mode 100644
index 4b6cdefe5..000000000
--- a/.agents/skills/nemoclaw-overview/references/overview.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Overview
-
-NVIDIA NemoClaw is an open source reference stack that simplifies running [OpenClaw](https://openclaw.ai) always-on assistants.
-NemoClaw provides onboarding, lifecycle management, and operation of OpenClaw within OpenShell containers.
-It incorporates policy-based privacy and security guardrails, giving you control over your agents' behavior and data handling.
-This enables self-evolving claws to run more safely in the cloud, on premises, and on RTX PCs and DGX Spark.
- -NemoClaw pairs open source and hosted models (for example [NVIDIA Nemotron](https://build.nvidia.com)) with a hardened sandbox, routed inference, and declarative egress policy so deployment stays safer and more repeatable. -The sandbox runtime comes from [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell); NemoClaw adds the blueprint, `nemoclaw` CLI, onboarding, and related tooling as the reference way to run OpenClaw there. - -| Capability | Description | -|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sandbox OpenClaw | Creates an OpenShell sandbox pre-configured for OpenClaw, with filesystem and network policies applied from the first boot. | -| Route inference | Configures OpenShell inference routing so agent traffic goes to the provider and model you chose during onboarding (NVIDIA Endpoints, OpenAI, Anthropic, Gemini, compatible endpoints, local Ollama, and others). The agent uses `inference.local` inside the sandbox; credentials stay on the host. | -| Manage the lifecycle | Handles blueprint versioning, digest verification, and sandbox setup. | - -## Key Features - -NemoClaw provides the following product capabilities. - -| Feature | Description | -|---------|-------------| -| Guided onboarding | Validates credentials, selects providers, and creates a working sandbox in one command. | -| Hardened blueprint | A security-first Dockerfile with capability drops, least-privilege network rules, and declarative policy. | -| State management | Safe migration of agent state across machines with credential stripping and integrity verification. | -| Channel messaging | OpenShell-managed processes connect Telegram, Discord, Slack, and similar platforms to the sandboxed agent. NemoClaw configures channels during onboarding; OpenShell supplies the native constructs, credential flow, and runtime supervision. 
| -| Routed inference | Provider-routed model calls through the OpenShell gateway, transparent to the agent. Supports NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and local Ollama. | -| Layered protection | Network, filesystem, process, and inference controls that can be hot-reloaded or locked at creation. | - -## Challenge - -Autonomous AI agents like OpenClaw can make arbitrary network requests, access the host filesystem, and call any inference endpoint. Without guardrails, this creates security, cost, and compliance risks that grow as agents run unattended. - -## Benefits - -NemoClaw provides the following benefits. - -| Benefit | Description | -|----------------------------|------------------------------------------------------------------------------------------------------------------------| -| Sandboxed execution | Every agent runs inside an OpenShell sandbox with Landlock, seccomp, and network namespace isolation. No access is granted by default. | -| Routed inference | Model traffic is routed through the OpenShell gateway to your selected provider, transparent to the agent. You can switch providers or models. Refer to Inference Options (see the `nemoclaw-configure-inference` skill). | -| Declarative network policy | Egress rules are defined in YAML. Unknown hosts are blocked and surfaced to the operator for approval. | -| Single CLI | The `nemoclaw` command orchestrates the full stack: gateway, sandbox, inference provider, and network policy. | -| Blueprint lifecycle | Versioned blueprints handle sandbox creation, digest verification, and reproducible setup. | - -## Use Cases - -You can use NemoClaw for various use cases including the following. - -| Use Case | Description | -|---------------------------|----------------------------------------------------------------------------------------------| -| Always-on assistant | Run an OpenClaw assistant with controlled network access and operator-approved egress. 
| -| Sandboxed testing | Test agent behavior in a locked-down environment before granting broader permissions. | -| Remote GPU deployment | Deploy a sandboxed agent to a remote GPU instance for persistent operation. | - -## Next Steps - -- Ecosystem (see the `nemoclaw-overview` skill) to understand how OpenClaw, OpenShell, and NemoClaw relate in the wider stack, and when to use NemoClaw versus OpenShell. -- How It Works (see the `nemoclaw-overview` skill) to understand how NemoClaw works internally: plugin, blueprint, sandbox lifecycle. -- Quickstart (see the `nemoclaw-get-started` skill) to install NemoClaw and run your first agent. -- Switch Inference Providers (see the `nemoclaw-configure-inference` skill) to configure the inference provider. -- Approve or Deny Network Requests (see the `nemoclaw-manage-policy` skill) to manage egress approvals. -- Deploy to a Remote GPU Instance (see the `nemoclaw-deploy-remote` skill) for persistent operation. -- Monitor Sandbox Activity (see the `nemoclaw-monitor-sandbox` skill) to observe agent behavior. diff --git a/.agents/skills/nemoclaw-overview/references/release-notes.md b/.agents/skills/nemoclaw-overview/references/release-notes.md deleted file mode 100644 index ea482162c..000000000 --- a/.agents/skills/nemoclaw-overview/references/release-notes.md +++ /dev/null @@ -1,10 +0,0 @@ -# Release Notes - -NVIDIA NemoClaw is available in early preview starting March 16, 2026. Use the following GitHub resources to track changes. - -| Resource | Description | -|---|---| -| [Releases](https://github.com/NVIDIA/NemoClaw/releases) | Versioned release notes and downloadable assets. | -| [Release comparison](https://github.com/NVIDIA/NemoClaw/compare) | Diff between any two tags or branches. | -| [Merged pull requests](https://github.com/NVIDIA/NemoClaw/pulls?q=is%3Apr+is%3Amerged) | Individual changes with review discussion. | -| [Commit history](https://github.com/NVIDIA/NemoClaw/commits/main) | Full commit log on `main`. 
| diff --git a/.agents/skills/nemoclaw-reference/SKILL.md b/.agents/skills/nemoclaw-reference/SKILL.md deleted file mode 100644 index 07a828cd4..000000000 --- a/.agents/skills/nemoclaw-reference/SKILL.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -name: "nemoclaw-reference" -description: "Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. Lists all slash commands and standalone NemoClaw CLI commands. Use when looking up a command, checking command syntax, or browsing the CLI reference. Documents baseline network policy, filesystem rules, and operator approval flow. Use when reviewing default network policies, understanding egress controls, or looking up the approval flow. Diagnoses and resolves common NemoClaw installation, onboarding, and runtime issues. Use when troubleshooting errors, debugging sandbox problems, or resolving setup failures." ---- - -# NemoClaw Reference - -Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. 
- -## Reference - -- [NemoClaw Architecture: Plugin, Blueprint, and Sandbox Structure](references/architecture.md) -- [NemoClaw CLI Commands Reference](references/commands.md) -- [NemoClaw Network Policies: Baseline Rules and Operator Approval](references/network-policies.md) -- [NemoClaw Troubleshooting Guide](references/troubleshooting.md) diff --git a/.agents/skills/nemoclaw-reference/references/architecture.md b/.agents/skills/nemoclaw-reference/references/architecture.md deleted file mode 100644 index ee34f899e..000000000 --- a/.agents/skills/nemoclaw-reference/references/architecture.md +++ /dev/null @@ -1,173 +0,0 @@ -# Architecture - -NemoClaw has two main components: a TypeScript plugin that integrates with the OpenClaw CLI, and a Python blueprint that orchestrates OpenShell resources. - -## System Overview - -NVIDIA OpenShell is a general-purpose agent runtime. It provides sandbox containers, a credential-storing gateway, inference proxying, and policy enforcement, but has no opinions about what runs inside. NemoClaw is an opinionated reference stack built on OpenShell that handles what goes in the sandbox and makes the setup accessible. - -```mermaid -graph LR - classDef nemoclaw fill:#76b900,stroke:#5a8f00,color:#fff,stroke-width:2px,font-weight:bold - classDef openshell fill:#1a1a1a,stroke:#1a1a1a,color:#fff,stroke-width:2px,font-weight:bold - classDef sandbox fill:#444,stroke:#76b900,color:#fff,stroke-width:2px,font-weight:bold - classDef agent fill:#f5f5f5,stroke:#e0e0e0,color:#1a1a1a,stroke-width:1px - classDef external fill:#f5f5f5,stroke:#e0e0e0,color:#1a1a1a,stroke-width:1px - classDef user fill:#fff,stroke:#76b900,color:#1a1a1a,stroke-width:2px,font-weight:bold - - USER(["πŸ‘€ User"]):::user - - subgraph EXTERNAL["External Services"] - INFERENCE["Inference Provider
NVIDIA Endpoints · OpenAI
Anthropic · Ollama · vLLM
"]:::external - MSGAPI["Messaging Platforms
Telegram · Discord · Slack"]:::external
PyPI · npm · GitHub · APIs"]:::external
Guided setup · provider selection
credential validation · deploy
"]:::nemoclaw - BP["Blueprint
Hardened Dockerfile
Network policies · Presets
Security configuration
"]:::nemoclaw - MIGRATE["State Management
Migration snapshots
Credential stripping
Integrity verification
"]:::nemoclaw - end - - subgraph OPENSHELL["OpenShell"] - direction TB - GW["Gateway
Credential store
Inference proxy
Policy engine
Device auth
"]:::openshell - OSCLI["openshell CLI
provider · sandbox
gateway · policy
"]:::openshell - CHMSG["Channel messaging
OpenShell-managed
Telegram · Discord · Slack
"]:::openshell - - subgraph SANDBOX["Sandbox Container πŸ”’"] - direction TB - AGENT["Agent
OpenClaw or any
compatible agent
"]:::agent - PLUG["NemoClaw Plugin
Extends agent with
managed configuration
"]:::sandbox - end - end - end - - USER -->|"nemoclaw onboard
nemoclaw connect"| NCLI - USER -->|"Chat messages"| MSGAPI - - NCLI -->|"Orchestrates"| OSCLI - BP -->|"Defines sandbox
shape + policies"| SANDBOX - MIGRATE -->|"Safe state
transfer"| SANDBOX - - AGENT -->|"Inference requests
no credentials"| GW - GW -->|"Proxied with
credential injected"| INFERENCE - - MSGAPI -->|"Platform APIs"| CHMSG - CHMSG -->|"Deliver to agent"| AGENT - - AGENT -.->|"Policy-gated"| INTERNET - GW -.->|"Enforced by
gateway"| INTERNET -``` - -## NemoClaw Plugin - -The plugin is a thin TypeScript package that registers an inference provider and the `/nemoclaw` slash command. -It runs in-process with the OpenClaw gateway inside the sandbox. - -```text -nemoclaw/ -β”œβ”€β”€ src/ -β”‚ β”œβ”€β”€ index.ts Plugin entry: registers all commands -β”‚ β”œβ”€β”€ cli.ts Commander.js subcommand wiring -β”‚ β”œβ”€β”€ commands/ -β”‚ β”‚ β”œβ”€β”€ launch.ts Fresh install into OpenShell -β”‚ β”‚ β”œβ”€β”€ connect.ts Interactive shell into sandbox -β”‚ β”‚ β”œβ”€β”€ status.ts Blueprint run state + sandbox health -β”‚ β”‚ β”œβ”€β”€ logs.ts Stream blueprint and sandbox logs -β”‚ β”‚ └── slash.ts /nemoclaw chat command handler -β”‚ └── blueprint/ -β”‚ β”œβ”€β”€ resolve.ts Version resolution, cache management -β”‚ β”œβ”€β”€ fetch.ts Download blueprint from OCI registry -β”‚ β”œβ”€β”€ verify.ts Digest verification, compatibility checks -β”‚ β”œβ”€β”€ exec.ts Subprocess execution of blueprint runner -β”‚ └── state.ts Persistent state (run IDs) -β”œβ”€β”€ openclaw.plugin.json Plugin manifest -└── package.json Commands declared under openclaw.extensions -``` - -## NemoClaw Blueprint - -The blueprint is a versioned Python artifact with its own release stream. -The plugin resolves, verifies, and executes the blueprint as a subprocess. -The blueprint drives all interactions with the OpenShell CLI. 
- -```text -nemoclaw-blueprint/ -β”œβ”€β”€ blueprint.yaml Manifest: version, profiles, compatibility -β”œβ”€β”€ policies/ -β”‚ └── openclaw-sandbox.yaml Default network + filesystem policy -``` - -The blueprint runtime (TypeScript) lives in the plugin source tree: - -```text -nemoclaw/src/blueprint/ -β”œβ”€β”€ runner.ts CLI runner: plan / apply / status / rollback -β”œβ”€β”€ ssrf.ts SSRF endpoint validation (IP + DNS checks) -β”œβ”€β”€ snapshot.ts Migration snapshot / restore lifecycle -β”œβ”€β”€ state.ts Persistent run state management -``` - -### Blueprint Lifecycle - -```mermaid -flowchart LR - A[resolve] --> B[verify digest] - B --> C[plan] - C --> D[apply] - D --> E[status] -``` - -1. Resolve. The plugin locates the blueprint artifact and checks the version against `min_openshell_version` and `min_openclaw_version` constraints in `blueprint.yaml`. -2. Verify. The plugin checks the artifact digest against the expected value. -3. Plan. The runner determines what OpenShell resources to create or update, such as the gateway, providers, sandbox, inference route, and policy. -4. Apply. The runner executes the plan by calling `openshell` CLI commands. -5. Status. The runner reports current state. - -## Sandbox Environment - -The sandbox runs the -[`ghcr.io/nvidia/openshell-community/sandboxes/openclaw`](https://github.com/NVIDIA/OpenShell-Community) -container image. Inside the sandbox: - -- OpenClaw runs with the NemoClaw plugin pre-installed. -- Inference calls are routed through OpenShell to the configured provider. -- Network egress is restricted by the baseline policy in `openclaw-sandbox.yaml`. -- Filesystem access is confined to `/sandbox` and `/tmp` for read-write access, with system paths read-only. - -## Inference Routing - -Inference requests from the agent never leave the sandbox directly. 
-OpenShell intercepts them and routes to the configured provider: - -```text -Agent (sandbox) ──▢ OpenShell gateway ──▢ NVIDIA Endpoint (build.nvidia.com) -``` - -Refer to Inference Options (see the `nemoclaw-configure-inference` skill) for provider configuration details. - -## Host-Side State and Config - -NemoClaw keeps its operator-facing state on the host rather than inside the sandbox. - -| Path | Purpose | -|---|---| -| `~/.nemoclaw/credentials.json` | Provider credentials saved during onboarding. Stored as plaintext JSON protected by local filesystem permissions; see Credential Storage (see the `nemoclaw-configure-security` skill). | -| `~/.nemoclaw/sandboxes.json` | Registered sandbox metadata, including the default sandbox selection. | -| `~/.openclaw/openclaw.json` | Host OpenClaw configuration that NemoClaw snapshots or restores during migration flows. | - -The following environment variables configure optional services and local access. - -| Variable | Purpose | -|---|---| -| `TELEGRAM_BOT_TOKEN` | Telegram bot token you provide before `nemoclaw onboard`. OpenShell stores it in a provider; the sandbox receives placeholders, not the raw secret. | -| `TELEGRAM_ALLOWED_IDS` | Comma-separated Telegram user or chat IDs for allowlists when onboarding applies channel restrictions. | -| `CHAT_UI_URL` | URL for the optional chat UI endpoint. | -| `NEMOCLAW_DISABLE_DEVICE_AUTH` | Build-time-only toggle that disables gateway device pairing when set to `1` before the sandbox image is created. | - -For normal setup and reconfiguration, prefer `nemoclaw onboard` over editing these files by hand. -Do not treat `NEMOCLAW_DISABLE_DEVICE_AUTH` as a runtime setting for an already-created sandbox. 
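Since the credential file above is plaintext JSON guarded only by file permissions, a quick host-side check (ordinary shell, not a `nemoclaw` command) might look like:

```shell
# Ensure the plaintext credential store is readable only by the current user.
creds="$HOME/.nemoclaw/credentials.json"
mkdir -p "$(dirname "$creds")"
[ -f "$creds" ] || printf '{}' > "$creds"   # stand-in if onboarding has not run yet
chmod 600 "$creds"
stat -c '%a' "$creds"   # prints 600 after the chmod above (GNU stat)
```

On macOS, substitute `stat -f '%Lp'` for the GNU `stat -c '%a'` form.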
diff --git a/.agents/skills/nemoclaw-reference/references/commands.md b/.agents/skills/nemoclaw-reference/references/commands.md deleted file mode 100644 index 77b87d919..000000000 --- a/.agents/skills/nemoclaw-reference/references/commands.md +++ /dev/null @@ -1,267 +0,0 @@ -# Commands - -The `nemoclaw` CLI is the primary interface for managing NemoClaw sandboxes. It is installed when you run `npm install -g nemoclaw`. - -## `/nemoclaw` Slash Command - -The `/nemoclaw` slash command is available inside the OpenClaw chat interface for quick actions: - -| Subcommand | Description | -|---|---| -| `/nemoclaw` | Show slash-command help and host CLI pointers | -| `/nemoclaw status` | Show sandbox and inference state | -| `/nemoclaw onboard` | Show onboarding status and reconfiguration guidance | -| `/nemoclaw eject` | Show rollback instructions for returning to the host installation | - -## Standalone Host Commands - -The `nemoclaw` binary handles host-side operations that run outside the OpenClaw plugin context. - -### `nemoclaw help`, `nemoclaw --help`, `nemoclaw -h` - -Show the top-level usage summary and command groups. -Running `nemoclaw` with no arguments shows the same help output. - -```console -$ nemoclaw help -``` - -### `nemoclaw --version`, `nemoclaw -v` - -Print the installed NemoClaw CLI version. - -```console -$ nemoclaw --version -``` - -### `nemoclaw onboard` - -Run the interactive setup wizard (recommended for new installs). -The wizard creates an OpenShell gateway, registers inference providers, builds the sandbox image, and creates the sandbox. -Use this command for new installs and for recreating a sandbox after changes to policy or configuration. - -```console -$ nemoclaw onboard [--non-interactive] [--resume] [--from ] -``` - -> **Warning:** For NemoClaw-managed environments, use `nemoclaw onboard` when you need to create or recreate the OpenShell gateway or sandbox. 
-> Avoid `openshell self-update`, `npm update -g openshell`, `openshell gateway start --recreate`, or `openshell sandbox create` directly unless you intend to manage OpenShell separately and then rerun `nemoclaw onboard`. - -The wizard prompts for a provider first, then collects the provider credential if needed. -Supported non-experimental choices include NVIDIA Endpoints, OpenAI, Anthropic, Google Gemini, and compatible OpenAI or Anthropic endpoints. -Credentials are stored in `~/.nemoclaw/credentials.json`. For file permissions, plaintext storage behavior, and hardening guidance, see Credential Storage (see the `nemoclaw-configure-security` skill). -The legacy `nemoclaw setup` command is deprecated; use `nemoclaw onboard` instead. - -If you enable Brave Search during onboarding, NemoClaw currently stores the Brave API key in the sandbox's OpenClaw configuration. -That means the OpenClaw agent can read the key. -NemoClaw explores an OpenShell-hosted credential path first, but the current OpenClaw Brave runtime does not consume that path end to end yet. -Treat Brave Search as an explicit opt-in and use a dedicated low-privilege Brave key. - -For non-interactive onboarding, you must explicitly accept the third-party software notice: - -```console -$ nemoclaw onboard --non-interactive --yes-i-accept-third-party-software -``` - -or: - -```console -$ NEMOCLAW_ACCEPT_THIRD_PARTY_SOFTWARE=1 nemoclaw onboard --non-interactive -``` - -To enable Brave Search in non-interactive mode, set: - -```console -$ BRAVE_API_KEY=... \ - nemoclaw onboard --non-interactive -``` - -`BRAVE_API_KEY` enables Brave Search in non-interactive mode and also enables `web_fetch`. - -The wizard prompts for a sandbox name. -Names must follow RFC 1123 subdomain rules: lowercase alphanumeric characters and hyphens only, and must start and end with an alphanumeric character. -Uppercase letters are automatically lowercased. - -Before creating the gateway, the wizard runs preflight checks. 
-It verifies that Docker is reachable, warns on unsupported runtimes such as Podman, and prints host remediation guidance when prerequisites are missing. - -#### `--from ` - -Build the sandbox image from a custom Dockerfile instead of the stock NemoClaw image. -The entire parent directory of the specified file is used as the Docker build context, so any files your Dockerfile references (scripts, config, etc.) must live alongside it. - -```console -$ nemoclaw onboard --from path/to/Dockerfile -``` - -The file can have any name; if it is not already named `Dockerfile`, onboard copies it to `Dockerfile` inside the staged build context automatically. -All NemoClaw build arguments (`NEMOCLAW_MODEL`, `NEMOCLAW_PROVIDER_KEY`, `NEMOCLAW_INFERENCE_BASE_URL`, etc.) are injected as `ARG` overrides at build time, so declare them in your Dockerfile if you need to reference them. - -In non-interactive mode, the path can also be supplied via the `NEMOCLAW_FROM_DOCKERFILE` environment variable: - -```console -$ NEMOCLAW_NON_INTERACTIVE=1 NEMOCLAW_FROM_DOCKERFILE=path/to/Dockerfile nemoclaw onboard -``` - -If a `--resume` is attempted with a different `--from` path than the original session, onboarding exits with a conflict error rather than silently building from the wrong image. - -### `nemoclaw list` - -List all registered sandboxes with their model, provider, and policy presets. - -```console -$ nemoclaw list -``` - -### `nemoclaw deploy` - -> **Warning:** The `nemoclaw deploy` command is deprecated. -> Prefer provisioning the remote host separately, then running the standard NemoClaw installer and `nemoclaw onboard` on that host. - -Deploy NemoClaw to a remote GPU instance through [Brev](https://brev.nvidia.com). -This command remains as a compatibility wrapper for the older Brev-specific bootstrap flow. - -```console -$ nemoclaw deploy -``` - -### `nemoclaw connect` - -Connect to a sandbox by name. 
- -```console -$ nemoclaw my-assistant connect -``` - -### `nemoclaw status` - -Show sandbox status, health, and inference configuration. - -```console -$ nemoclaw my-assistant status -``` - -### `nemoclaw logs` - -View sandbox logs. -Use `--follow` to stream output in real time. - -```console -$ nemoclaw my-assistant logs [--follow] -``` - -### `nemoclaw destroy` - -Stop the NIM container and delete the sandbox. -This removes the sandbox from the registry. - -> **Warning:** Destroying a sandbox permanently deletes all files inside it, including -> workspace files (see the `nemoclaw-workspace` skill) (SOUL.md, USER.md, IDENTITY.md, AGENTS.md, MEMORY.md, and daily memory notes). -> Back up your workspace first by following the instructions at Back Up and Restore (see the `nemoclaw-workspace` skill). - -```console -$ nemoclaw my-assistant destroy -``` - -### `nemoclaw policy-add` - -Add a policy preset to a sandbox. -Presets extend the baseline network policy with additional endpoints. - -```console -$ nemoclaw my-assistant policy-add -``` - -### `nemoclaw policy-list` - -List available policy presets and show which ones are applied to the sandbox. - -```console -$ nemoclaw my-assistant policy-list -``` - -### `openshell term` - -Open the OpenShell TUI to monitor sandbox activity and approve network egress requests. -Run this on the host where the sandbox is running. - -```console -$ openshell term -``` - -For a remote Brev instance, SSH to the instance and run `openshell term` there, or use a port-forward to the gateway. - -### `nemoclaw start` - -Start optional host auxiliary services. This is the cloudflared tunnel when `cloudflared` is installed (for a public URL to the dashboard). Channel messaging (Telegram, Discord, Slack) is not started here; it is configured during `nemoclaw onboard` and runs through OpenShell-managed constructs. 
- -```console -$ nemoclaw start -``` - -### `nemoclaw stop` - -Stop host auxiliary services started by `nemoclaw start` (for example cloudflared). - -```console -$ nemoclaw stop -``` - -### `nemoclaw status` - -Show the sandbox list and the status of host auxiliary services (for example cloudflared). - -```console -$ nemoclaw status -``` - -### `nemoclaw setup-spark` - -> **Warning:** The `nemoclaw setup-spark` command is deprecated. -> Use the standard installer and run `nemoclaw onboard` instead, because current OpenShell releases handle the older DGX Spark cgroup behavior. - -This command remains as a compatibility alias to `nemoclaw onboard`. - -```console -$ nemoclaw setup-spark -``` - -### `nemoclaw debug` - -Collect diagnostics for bug reports. -Gathers system info, Docker state, gateway logs, and sandbox status into a summary or tarball. -Use `--sandbox ` to target a specific sandbox, `--quick` for a smaller snapshot, or `--output ` to save a tarball that you can attach to an issue. - -```console -$ nemoclaw debug [--quick] [--sandbox NAME] [--output PATH] -``` - -| Flag | Description | -|------|-------------| -| `--quick` | Collect minimal diagnostics only | -| `--sandbox NAME` | Target a specific sandbox (default: auto-detect) | -| `--output PATH` | Write diagnostics tarball to the given path | - -### `nemoclaw uninstall` - -Run `uninstall.sh` to remove NemoClaw sandboxes, gateway resources, related images and containers, and local state. -The CLI uses the local `uninstall.sh` first and falls back to the hosted script if the local file is unavailable. - -| Flag | Effect | -|---|---| -| `--yes` | Skip the confirmation prompt | -| `--keep-openshell` | Leave the `openshell` binary installed | -| `--delete-models` | Also remove NemoClaw-pulled Ollama models | - -```console -$ nemoclaw uninstall [--yes] [--keep-openshell] [--delete-models] -``` - -### Legacy `nemoclaw setup` - -Deprecated. Use `nemoclaw onboard` instead. 
-Running `nemoclaw setup` now delegates directly to `nemoclaw onboard`. - -```console -$ nemoclaw setup -``` diff --git a/.agents/skills/nemoclaw-reference/references/network-policies.md b/.agents/skills/nemoclaw-reference/references/network-policies.md deleted file mode 100644 index 7e895cf43..000000000 --- a/.agents/skills/nemoclaw-reference/references/network-policies.md +++ /dev/null @@ -1,122 +0,0 @@ -# Network Policies - -NemoClaw runs with a deny-by-default network policy. -The sandbox can only reach endpoints that are explicitly allowed. -Any request to an unlisted destination is intercepted by OpenShell, and the operator is prompted to approve or deny it in real time through the TUI. - -## Baseline Policy - -The baseline policy is defined in `nemoclaw-blueprint/policies/openclaw-sandbox.yaml`. - -### Filesystem - -| Path | Access | -|---|---| -| `/sandbox`, `/tmp`, `/dev/null` | Read-write | -| `/usr`, `/lib`, `/proc`, `/dev/urandom`, `/app`, `/etc`, `/var/log` | Read-only | - -The sandbox process runs as a dedicated `sandbox` user and group. -Landlock LSM enforcement applies on a best-effort basis. 
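
Read as pseudocode, the filesystem rules above amount to a first-match path classification. A minimal sketch of that logic (the `classify_path` helper is illustrative only, not a real NemoClaw or OpenShell interface; actual enforcement happens in the sandbox runtime and, where available, Landlock):

```bash
#!/usr/bin/env bash
# Illustrative sketch of the baseline filesystem policy table above.
# classify_path is a hypothetical helper, not part of NemoClaw.
classify_path() {
  case "$1" in
    /sandbox*|/tmp*|/dev/null) echo "read-write" ;;
    /usr*|/lib*|/proc*|/dev/urandom|/app*|/etc*|/var/log*) echo "read-only" ;;
    *) echo "denied" ;;
  esac
}

classify_path /sandbox/notes.md   # read-write
classify_path /etc/hosts          # read-only
classify_path /root/secret        # denied
```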
- -### Network Policies - -The following endpoint groups are allowed by default: - -:::{list-table} -:header-rows: 1 -:widths: 20 30 20 30 - -* - Policy - - Endpoints - - Binaries - - Rules - -* - `claude_code` - - `api.anthropic.com:443`, `statsig.anthropic.com:443`, `sentry.io:443` - - `/usr/local/bin/claude` - - All methods - -* - `nvidia` - - `integrate.api.nvidia.com:443`, `inference-api.nvidia.com:443` - - `/usr/local/bin/claude`, `/usr/local/bin/openclaw` - - All methods - -* - `github` - - `github.com:443` - - `/usr/bin/gh`, `/usr/bin/git` - - All methods, all paths - -* - `github_rest_api` - - `api.github.com:443` - - `/usr/bin/gh` - - GET, POST, PATCH, PUT, DELETE - -* - `clawhub` - - `clawhub.ai:443` - - `/usr/local/bin/openclaw`, `/usr/local/bin/node` - - GET, POST - -* - `openclaw_api` - - `openclaw.ai:443` - - `/usr/local/bin/openclaw`, `/usr/local/bin/node` - - GET, POST - -* - `openclaw_docs` - - `docs.openclaw.ai:443` - - `/usr/local/bin/openclaw` - - GET only - -* - `npm_registry` - - `registry.npmjs.org:443` - - `/usr/local/bin/openclaw`, `/usr/local/bin/npm`, `/usr/local/bin/node` - - All methods, all paths - -* - `telegram` - - `api.telegram.org:443` - - Any binary - - GET, POST on `/bot*/**` - -::: - -All endpoints use TLS termination and are enforced at port 443. - -### Inference - -The baseline policy allows only the `local` inference route. External inference -providers are reached through the OpenShell gateway, not by direct sandbox egress. - -## Operator Approval Flow - -When the agent attempts to reach an endpoint not listed in the policy, OpenShell intercepts the request and presents it in the TUI for operator review: - -1. The agent makes a network request to an unlisted host. -2. OpenShell blocks the connection and logs the attempt. -3. The TUI command `openshell term` displays the blocked request with host, port, and requesting binary. -4. The operator approves or denies the request. -5. 
If approved, the endpoint is added to the running policy for the session.
-
-To try this, run the walkthrough:
-
-```console
-$ ./scripts/walkthrough.sh
-```
-
-This opens a split tmux session with the TUI on the left and the agent on the right.
-
-## Modifying the Policy
-
-### Static Changes
-
-Edit `nemoclaw-blueprint/policies/openclaw-sandbox.yaml` and re-run the onboard wizard:
-
-```console
-$ nemoclaw onboard
-```
-
-### Dynamic Changes
-
-Apply policy updates to a running sandbox without restarting:
-
-```console
-$ openshell policy set
-```
diff --git a/.agents/skills/nemoclaw-reference/references/troubleshooting.md b/.agents/skills/nemoclaw-reference/references/troubleshooting.md
deleted file mode 100644
index 072a0e1ab..000000000
--- a/.agents/skills/nemoclaw-reference/references/troubleshooting.md
+++ /dev/null
@@ -1,308 +0,0 @@
-# Troubleshooting
-
-This page covers common issues you may encounter when installing, onboarding, or running NemoClaw, along with their resolution steps.
-
-> **Get Help:** If your issue is not listed here, join the [NemoClaw Discord channel](https://discord.gg/XFpfPv9Uvx) to ask questions and get help from the community. You can also [file an issue on GitHub](https://github.com/NVIDIA/NemoClaw/issues/new).
-
-## Installation
-
-### `nemoclaw` not found after install
-
-If you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH.
-The `nemoclaw` binary is installed but the shell session does not know where to find it.
-
-Run `source ~/.bashrc` (or `source ~/.zshrc` for zsh), or open a new terminal window.
-
-### Installer fails on unsupported platform
-
-The installer checks for a supported OS and architecture before proceeding.
-On Linux, NemoClaw requires Ubuntu 22.04 LTS or later.
-If you see an unsupported platform error, verify that you are running on a supported Linux distribution.
-
-### Node.js version is too old
-
-NemoClaw requires Node.js 22.16 or later.
-If the installer exits with a Node.js version error, check your current version: - -```console -$ node --version -``` - -If the version is below 22.16, install a supported release. -If you use nvm, run: - -```console -$ nvm install 22 -$ nvm use 22 -``` - -Then re-run the installer. - -### Image push fails with out-of-memory errors - -The sandbox image is approximately 2.4 GB compressed. During image push, the Docker daemon, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this combined usage can trigger the OOM killer. - -If you cannot add memory, configure at least 8 GB of swap to work around the issue at the cost of slower performance. - -### Docker is not running - -The installer and onboard wizard require Docker to be running. -If you see a Docker connection error, start the Docker daemon: - -```console -$ sudo systemctl start docker -``` - -On macOS with Docker Desktop, open the Docker Desktop application and wait for it to finish starting before retrying. - -### macOS first-run failures - -The two most common first-run failures on macOS are missing developer tools and Docker connection errors. - -To avoid these issues, install the prerequisites in the following order before running the NemoClaw installer: - -1. Install Xcode Command Line Tools (`xcode-select --install`). These are needed by the installer and Node.js toolchain. -2. Install and start a supported container runtime (Docker Desktop or Colima). Without a running runtime, the installer cannot connect to Docker. - -### npm install fails with permission errors - -If `npm install` fails with an `EACCES` permission error, do not run npm with `sudo`. 
-Instead, configure npm to use a directory you own:
-
-```console
-$ mkdir -p ~/.npm-global
-$ npm config set prefix ~/.npm-global
-$ export PATH=~/.npm-global/bin:$PATH
-```
-
-Add the `export` line to your `~/.bashrc` or `~/.zshrc` to make it permanent, then re-run the installer.
-
-### Port already in use
-
-The NemoClaw gateway uses port `18789` by default.
-If another process is already bound to this port, onboarding fails.
-Identify the conflicting process, verify it is safe to stop, and terminate it:
-
-```console
-$ sudo lsof -i :18789
-$ kill <pid>
-```
-
-If the process does not exit, use `kill -9 <pid>` to force-terminate it.
-Then retry onboarding.
-
-## Onboarding
-
-### Cgroup v2 errors during onboard
-
-Older NemoClaw releases relied on a Docker cgroup workaround on Ubuntu 24.04, DGX Spark, and WSL2.
-Current OpenShell releases handle that behavior themselves, so NemoClaw no longer requires a Spark-specific setup step.
-
-If onboarding reports that Docker is missing or unreachable, fix Docker first and retry onboarding:
-
-```console
-$ nemoclaw onboard
-```
-
-If you are using Podman, NemoClaw warns and continues, but OpenShell officially documents Docker-based runtimes only.
-If onboarding or sandbox lifecycle operations fail, switch to Docker Desktop, Colima, or Docker Engine and rerun onboarding.
-
-### Invalid sandbox name
-
-Sandbox names must follow RFC 1123 subdomain rules: lowercase alphanumeric characters and hyphens only, and must start and end with an alphanumeric character.
-Uppercase letters are automatically lowercased.
-
-If the name does not match these rules, the wizard exits with an error.
-Choose a name such as `my-assistant` or `dev1`.
-
-### Sandbox creation fails on DGX
-
-On DGX machines, sandbox creation can fail if the gateway's DNS has not finished propagating or if a stale port forward from a previous onboard run is still active.
-
-Run `nemoclaw onboard` to retry.
-The wizard cleans up stale port forwards and waits for gateway readiness automatically. - -### Colima socket not detected (macOS) - -Newer Colima versions use the XDG base directory (`~/.config/colima/default/docker.sock`) instead of the legacy path (`~/.colima/default/docker.sock`). -NemoClaw checks both paths. -If neither is found, verify that Colima is running: - -```console -$ colima status -``` - -### Sandbox creation killed by OOM (exit 137) - -On systems with 8 GB RAM or less and no swap configured, the sandbox image push can exhaust available memory and get killed by the Linux OOM killer (exit code 137). - -NemoClaw automatically detects low memory during onboarding and prompts to create a 4 GB swap file. -If this automatic step fails or you are using a custom setup flow, create swap manually before running `nemoclaw onboard`: - -```console -$ sudo dd if=/dev/zero of=/swapfile bs=1M count=4096 status=none -$ sudo chmod 600 /swapfile -$ sudo mkswap /swapfile -$ sudo swapon /swapfile -$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab -$ nemoclaw onboard -``` - -## Runtime - -### Reconnect after a host reboot - -After a host reboot, the container runtime, OpenShell gateway, and sandbox may not be running. -Follow these steps to reconnect. - -1. Start the container runtime. - - - **Linux:** start Docker if it is not already running (`sudo systemctl start docker`) - - **macOS:** open Docker Desktop or start Colima (`colima start`) - -1. Check sandbox state. - - ```console - $ openshell sandbox list - ``` - - If the sandbox shows `Ready`, skip to step 4. - -1. Restart the gateway (if needed). - - If the sandbox is not listed or the command fails, restart the OpenShell gateway: - - ```console - $ openshell gateway start --name nemoclaw - ``` - - Wait a few seconds, then re-check with `openshell sandbox list`. - -1. Reconnect. - - ```console - $ nemoclaw connect - ``` - -1. Start host auxiliary services (if needed). 
- - If you use the cloudflared tunnel started by `nemoclaw start`, start it again: - - ```console - $ nemoclaw start - ``` - - Telegram, Discord, and Slack are handled by OpenShell-managed channel messaging configured at onboarding, not by a separate bridge process from `nemoclaw start`. - -> **If the sandbox does not recover:** If the sandbox remains missing after restarting the gateway, run `nemoclaw onboard` to recreate it. -> The wizard prompts for confirmation before destroying an existing sandbox. If you confirm, it **destroys and recreates** the sandbox. Workspace files (SOUL.md, USER.md, IDENTITY.md, AGENTS.md, MEMORY.md, and daily memory notes) are lost. -> Back up your workspace first by following the instructions at Back Up and Restore (see the `nemoclaw-workspace` skill). - -### Sandbox shows as stopped - -The sandbox may have been stopped or deleted. -Run `nemoclaw onboard` to recreate the sandbox from the same blueprint and policy definitions. - -### Status shows "not running" inside the sandbox - -This is expected behavior. -When checking status inside an active sandbox, host-side sandbox state and inference configuration are not inspectable. -The status command detects the sandbox context and reports "active (inside sandbox)" instead. - -Run `openshell sandbox list` on the host to check the underlying sandbox state. - -### Inference requests time out - -Verify that the inference provider endpoint is reachable from the host. -Check the active provider and endpoint: - -```console -$ nemoclaw status -``` - -If the endpoint is correct but requests still fail, check for network policy rules that may block the connection. -Then verify the credential and base URL for the provider you selected during onboarding. - -### `NEMOCLAW_DISABLE_DEVICE_AUTH=1` does not change an existing sandbox - -This is expected behavior. -`NEMOCLAW_DISABLE_DEVICE_AUTH` is a build-time setting used when NemoClaw creates the sandbox image. 
-Changing or exporting it later does not rewrite the baked `openclaw.json` inside an existing sandbox. - -If you need a different device-auth setting, rerun onboarding so NemoClaw rebuilds the sandbox image with the desired configuration. -For the security trade-offs, refer to Security Best Practices (see the `nemoclaw-configure-security` skill). - -### Agent cannot reach an external host - -OpenShell blocks outbound connections to hosts not listed in the network policy. -Open the TUI to see blocked requests and approve them: - -```console -$ openshell term -``` - -To permanently allow an endpoint, add it to the network policy. -Refer to Customize the Network Policy (see the `nemoclaw-manage-policy` skill) for details. - -### Blueprint run failed - -View the error output for the failed blueprint run: - -```console -$ nemoclaw logs -``` - -Use `--follow` to stream logs in real time while debugging. - -## Podman - -### `open /dev/kmsg: operation not permitted` - -This error appears when the Podman machine is running in rootless mode. -K3s kubelet requires `/dev/kmsg` access for its OOM watcher, which is not available in rootless containers. - -Switch the Podman machine to rootful mode and restart: - -```console -$ podman machine stop -$ podman machine set --rootful -$ podman machine start -``` - -Then destroy and recreate the gateway: - -```console -$ openshell gateway destroy --name nemoclaw -$ nemoclaw onboard -``` - -### Image push timeout with Podman - -When creating a sandbox, the 1.5 GB sandbox image push into K3s may time out through Podman's API socket. -This is a known limitation of the bollard Docker client's default timeout. 
-
-Manually push the image using the Docker CLI, which has no such timeout:
-
-```console
-$ docker images --format '{{.Repository}}:{{.Tag}}' | grep sandbox-from
-$ docker save <image> | \
-    docker exec -i openshell-cluster-nemoclaw \
-    ctr -a /run/k3s/containerd/containerd.sock -n k8s.io images import -
-```
-
-After the import completes, create the sandbox manually:
-
-```console
-$ openshell sandbox create --name my-assistant --from <image>
-```
-
-### Podman machine resources
-
-The default Podman machine has 2 GB RAM, which is insufficient for the sandbox image push and K3s cluster overhead.
-Allocate at least 8 GB RAM and 4 CPUs:
-
-```console
-$ podman machine stop
-$ podman machine set --cpus 6 --memory 8192
-$ podman machine start
-```
diff --git a/.agents/skills/nemoclaw-workspace/SKILL.md b/.agents/skills/nemoclaw-workspace/SKILL.md
deleted file mode 100644
index 4f41e901b..000000000
--- a/.agents/skills/nemoclaw-workspace/SKILL.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-name: "nemoclaw-workspace"
-description: "Backs up and restores OpenClaw workspace files before destructive operations. Use when backing up a sandbox, restoring workspace state, or preparing for a destructive operation. Explains what workspace files are, where they live, and how they persist across sandbox restarts. Use when asking about soul.md, identity.md, memory.md, agents.md, or sandbox file persistence."
----
-
-# NemoClaw Workspace
-
-Backs up and restores OpenClaw workspace files before destructive operations. Use when backing up a sandbox, restoring workspace state, or preparing for a destructive operation.
-
-## Context
-
-OpenClaw stores agent identity, behavior, and memory in a set of Markdown files inside the sandbox.
-These files live at `/sandbox/.openclaw/workspace/` and are read by the agent at the start of every session.
-
-## File Reference
-
-Each file controls a distinct aspect of the agent's behavior and memory.
- -| File | Purpose | Upstream Docs | -|---|---|---| -| `SOUL.md` | Core personality, tone, and behavioral rules. | [SOUL template](https://docs.openclaw.ai/reference/templates/SOUL) | -| `USER.md` | Preferences, context, and facts the agent learns about you. | [USER template](https://docs.openclaw.ai/reference/templates/USER) | -| `IDENTITY.md` | Agent name, creature type, emoji, and self-presentation. | [IDENTITY template](https://docs.openclaw.ai/reference/templates/IDENTITY) | -| `AGENTS.md` | Multi-agent coordination, memory conventions, and safety guidelines. | [AGENTS template](https://docs.openclaw.ai/reference/templates/AGENTS) | -| `MEMORY.md` | Curated long-term memory distilled from daily notes. | β€” | -| `memory/` | Directory of daily note files (`YYYY-MM-DD.md`) for session continuity. | β€” | - -## Where They Live - -All workspace files reside inside the sandbox filesystem: - -```text -/sandbox/.openclaw/workspace/ -β”œβ”€β”€ AGENTS.md -β”œβ”€β”€ IDENTITY.md -β”œβ”€β”€ MEMORY.md -β”œβ”€β”€ SOUL.md -β”œβ”€β”€ USER.md -└── memory/ - β”œβ”€β”€ 2026-03-18.md - └── 2026-03-19.md -``` - -> **Note:** The workspace directory is hidden (`.openclaw`). -> The files are not at `/sandbox/SOUL.md`. Use the full path when downloading or uploading. - -## Persistence Behavior - -Understanding when these files persist and when they are lost is critical. - -| Event | Workspace files | -|---|---| -| Sandbox restart | **Preserved:** the sandbox PVC retains its data. | -| `nemoclaw destroy` | **Lost:** the sandbox and its PVC are deleted. | - -> **Warning:** Always back up your workspace files before running `nemoclaw destroy`. -> See Back Up and Restore (see the `nemoclaw-workspace` skill) for instructions. - -## Editing Workspace Files - -The agent reads these files at the start of every session. -You can edit them in two ways: - -1. **Let the agent do it:** Ask your agent to update its persona, memory, or user context during a session. -2. 
**Edit manually:** Use `openshell sandbox connect` to open a terminal inside the sandbox and edit files directly, or use `openshell sandbox upload` to push edited files from your host. - -## Prerequisites - -- A running NemoClaw sandbox (for backup) or a freshly created sandbox (for restore). -- The OpenShell CLI on your `PATH`. -- The sandbox name (shown by `nemoclaw list`). - -Workspace files define your agent's personality, memory, and user context. -They persist across sandbox restarts but are **permanently deleted** when you run `nemoclaw destroy`. - -This guide covers manual backup with CLI commands and an automated script. - -## Step 1: When to Back Up - -- Before running `nemoclaw destroy`. -- Before major NemoClaw version upgrades. -- Periodically, if you have invested time customizing your agent. - -## Step 2: Manual Backup - -Use `openshell sandbox download` to copy files from the sandbox to your host. - -```console -$ SANDBOX=my-assistant -$ BACKUP_DIR=~/.nemoclaw/backups/$(date +%Y%m%d-%H%M%S) -$ mkdir -p "$BACKUP_DIR" - -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/SOUL.md "$BACKUP_DIR/" -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/USER.md "$BACKUP_DIR/" -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/IDENTITY.md "$BACKUP_DIR/" -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/AGENTS.md "$BACKUP_DIR/" -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/MEMORY.md "$BACKUP_DIR/" -$ openshell sandbox download "$SANDBOX" /sandbox/.openclaw/workspace/memory/ "$BACKUP_DIR/memory/" -``` - -## Step 3: Manual Restore - -Use `openshell sandbox upload` to push files back into a sandbox. 
- -```console -$ SANDBOX=my-assistant -$ BACKUP_DIR=~/.nemoclaw/backups/20260320-120000 # pick a timestamp - -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/SOUL.md" /sandbox/.openclaw/workspace/ -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/USER.md" /sandbox/.openclaw/workspace/ -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/IDENTITY.md" /sandbox/.openclaw/workspace/ -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/AGENTS.md" /sandbox/.openclaw/workspace/ -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/MEMORY.md" /sandbox/.openclaw/workspace/ -$ openshell sandbox upload "$SANDBOX" "$BACKUP_DIR/memory/" /sandbox/.openclaw/workspace/memory/ -``` - -## Step 4: Using the Backup Script - -The repository includes a convenience script at `scripts/backup-workspace.sh`. - -### Backup - -```console -$ ./scripts/backup-workspace.sh backup my-assistant -Backing up workspace from sandbox 'my-assistant'... -Backup saved to /home/user/.nemoclaw/backups/20260320-120000/ (6 items) -``` - -### Restore - -Restore from the most recent backup: - -```console -$ ./scripts/backup-workspace.sh restore my-assistant -``` - -Restore from a specific timestamp: - -```console -$ ./scripts/backup-workspace.sh restore my-assistant 20260320-120000 -``` - -## Step 5: Verifying a Backup - -List backed-up files to confirm completeness: - -```console -$ ls ~/.nemoclaw/backups/20260320-120000/ -AGENTS.md -IDENTITY.md -MEMORY.md -SOUL.md -USER.md -memory/ -``` - -## Step 6: Inspecting Files Inside the Sandbox - -Connect to the sandbox to list or view workspace files directly: - -```console -$ openshell sandbox connect my-assistant -$ ls -la /sandbox/.openclaw/workspace/ -``` - -## Related Skills - -- `nemoclaw-reference` β€” Commands reference -- `nemoclaw-monitor-sandbox` β€” Monitor Sandbox Activity diff --git a/.agents/skills/nemoclaw-workspace/references/workspace-files.md b/.agents/skills/nemoclaw-workspace/references/workspace-files.md deleted file mode 100644 index 
4cdafba54..000000000 --- a/.agents/skills/nemoclaw-workspace/references/workspace-files.md +++ /dev/null @@ -1,61 +0,0 @@ -# Workspace Files - -OpenClaw stores agent identity, behavior, and memory in a set of Markdown files inside the sandbox. -These files live at `/sandbox/.openclaw/workspace/` and are read by the agent at the start of every session. - -## File Reference - -Each file controls a distinct aspect of the agent's behavior and memory. - -| File | Purpose | Upstream Docs | -|---|---|---| -| `SOUL.md` | Core personality, tone, and behavioral rules. | [SOUL template](https://docs.openclaw.ai/reference/templates/SOUL) | -| `USER.md` | Preferences, context, and facts the agent learns about you. | [USER template](https://docs.openclaw.ai/reference/templates/USER) | -| `IDENTITY.md` | Agent name, creature type, emoji, and self-presentation. | [IDENTITY template](https://docs.openclaw.ai/reference/templates/IDENTITY) | -| `AGENTS.md` | Multi-agent coordination, memory conventions, and safety guidelines. | [AGENTS template](https://docs.openclaw.ai/reference/templates/AGENTS) | -| `MEMORY.md` | Curated long-term memory distilled from daily notes. | β€” | -| `memory/` | Directory of daily note files (`YYYY-MM-DD.md`) for session continuity. | β€” | - -## Where They Live - -All workspace files reside inside the sandbox filesystem: - -```text -/sandbox/.openclaw/workspace/ -β”œβ”€β”€ AGENTS.md -β”œβ”€β”€ IDENTITY.md -β”œβ”€β”€ MEMORY.md -β”œβ”€β”€ SOUL.md -β”œβ”€β”€ USER.md -└── memory/ - β”œβ”€β”€ 2026-03-18.md - └── 2026-03-19.md -``` - -> **Note:** The workspace directory is hidden (`.openclaw`). -> The files are not at `/sandbox/SOUL.md`. Use the full path when downloading or uploading. - -## Persistence Behavior - -Understanding when these files persist and when they are lost is critical. - -| Event | Workspace files | -|---|---| -| Sandbox restart | **Preserved:** the sandbox PVC retains its data. 
| -| `nemoclaw destroy` | **Lost:** the sandbox and its PVC are deleted. | - -> **Warning:** Always back up your workspace files before running `nemoclaw destroy`. -> See Back Up and Restore (see the `nemoclaw-workspace` skill) for instructions. - -## Editing Workspace Files - -The agent reads these files at the start of every session. -You can edit them in two ways: - -1. **Let the agent do it:** Ask your agent to update its persona, memory, or user context during a session. -2. **Edit manually:** Use `openshell sandbox connect` to open a terminal inside the sandbox and edit files directly, or use `openshell sandbox upload` to push edited files from your host. - -## Next Steps - -- Back Up and Restore workspace files (see the `nemoclaw-workspace` skill) -- Commands reference (see the `nemoclaw-reference` skill)
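
As a closing note on the backup layout used throughout these docs: the timestamped backup directories (`YYYYMMDD-HHMMSS`) sort lexicographically, which is how "restore from the most recent backup" can be resolved without parsing dates. A sketch with standard tools (`latest_backup` is an illustrative helper, not part of `scripts/backup-workspace.sh`):

```bash
#!/usr/bin/env bash
# Pick the newest timestamped backup directory. Because YYYYMMDD-HHMMSS
# timestamps sort lexicographically, a plain sort finds the latest one.
# latest_backup is a hypothetical helper for illustration only.
set -euo pipefail

latest_backup() {
  ls -1 "$1" | sort | tail -n 1
}

# Demo against a throwaway layout:
root=$(mktemp -d)
mkdir -p "$root/20260319-090000" "$root/20260320-120000"
latest_backup "$root"   # 20260320-120000
```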