TLDR:
- Most secure: sbx. Micro-VM isolation with a separate kernel, no Docker Desktop required, one command.
- Easiest: zerobox. Strips env vars by default, per-host secret injection: npm install -g zerobox && zerobox -- claude.
- Also worth knowing: nono, for irrevocable kernel restrictions and sandboxing host tools.
nono (kernel-level, irrevocable)
```shell
nono run --profile claude-code --allow-cwd --proxy-allow llmapi -- claude
```

nono uses Landlock (Linux) and Seatbelt (macOS). Once applied, restrictions are irrevocable: there is no API to loosen them, not even for nono itself. No Docker, no images, no daemon.
Built-in credential proxy, atomic filesystem rollback, cryptographic audit trails. Built-in profiles for Claude Code and other agents.
Trade-off: Alpha (v0.7.0). Some edge cases with device files. Inherits your env vars unless you block them manually.
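Until you configure nono to block variables itself, one portable mitigation is to strip sensitive variables with `env -u` before launching. A minimal sketch; the variable name and the nono command in the comment are illustrative:

```shell
# A pretend secret sitting in your interactive shell:
export STRIPE_SECRET_KEY=sk_test_123

# env -u removes the variable from the child's environment. In real use the
# command after `env -u STRIPE_SECRET_KEY` would be something like:
#   nono run --profile claude-code --allow-cwd -- claude
env -u STRIPE_SECRET_KEY sh -c 'echo "key=${STRIPE_SECRET_KEY:-unset}"'
# prints: key=unset
```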
zerobox (clean environment, per-host secrets)
```shell
zerobox --allow-write=. --allow-net=api.openai.com \
  --secret OPENAI_API_KEY=sk-123 --secret-host OPENAI_API_KEY=api.openai.com \
  -- node agent.js
```

zerobox is built on OpenAI Codex’s production sandbox runtime: Seatbelt on macOS, bubblewrap + seccomp on Linux. Single binary, ~10ms overhead.
Clean environment by default. Only PATH, HOME, USER, SHELL, TERM, LANG are inherited. Everything else is stripped. Even if your shell has STRIPE_SECRET_KEY, the sandbox never sees it. You do not need to think about it.
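You can approximate this allowlist behavior yourself with plain `env -i`, which starts a process from an empty environment and re-adds only the named variables — a rough sketch of what zerobox does by default:

```shell
# A secret present in the parent shell:
export STRIPE_SECRET_KEY=sk_live_123

# env -i starts from an empty environment; only the explicitly named
# variables survive, roughly mirroring zerobox's PATH/HOME/USER/... allowlist.
env -i PATH="$PATH" HOME="$HOME" USER="$USER" \
  sh -c 'echo "stripe=${STRIPE_SECRET_KEY:-unset}"'
# prints: stripe=unset
```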
Per-host secret injection. The sandboxed process sees a placeholder. The proxy substitutes the real key only for the allowed host. If a prompt injection tries to send the key to evil.com, the proxy does not inject it.
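The proxy's decision can be sketched in a few lines of shell. Everything here is hypothetical (placeholder format, function name, key value); the real logic lives inside zerobox's proxy:

```shell
# Hypothetical sketch of per-host secret substitution.
PLACEHOLDER="ZEROBOX_PLACEHOLDER_OPENAI_API_KEY"  # what the sandboxed process sees
REAL_KEY="sk-123"                                 # held only by the proxy
ALLOWED_HOST="api.openai.com"

inject() {  # $1 = destination host, $2 = outgoing header line
  if [ "$1" = "$ALLOWED_HOST" ]; then
    # Allowed host: swap the placeholder for the real key.
    printf '%s\n' "$2" | sed "s/$PLACEHOLDER/$REAL_KEY/"
  else
    # Any other host: the placeholder passes through unreplaced.
    printf '%s\n' "$2"
  fi
}

inject api.openai.com "Authorization: Bearer $PLACEHOLDER"
# prints: Authorization: Bearer sk-123
inject evil.com "Authorization: Bearer $PLACEHOLDER"
# prints: Authorization: Bearer ZEROBOX_PLACEHOLDER_OPENAI_API_KEY
```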
Also has a TypeScript SDK for programmatic sandboxing of individual tool calls.
Trade-off: Very new (v0.1.7, 7 stars). No built-in profiles. No atomic rollback. No audit trail.
sbx: Docker sandboxes (most secure)
```shell
sbx run claude
```

sbx is Docker’s standalone sandbox CLI (the evolution of docker sandbox). Micro-VM isolation with its own kernel and Docker daemon. Docker Desktop is not required. Network policies built in (locked down, balanced, or custom). Supports Claude Code, Codex, Copilot, Gemini, Kiro, and others.
What the agent CAN do: read/edit source code, create files, run linters/formatters, install packages, run tests, access git history, spin up containers inside the sandbox, reach dev API via network.
What the agent CANNOT do: read .env files (they do not exist), access host/container env vars, connect to database/Redis directly, read Infisical session token, access 1Password vault, mount Docker socket, escape the micro-VM.
YOLO mode by default: agents work without asking permission because the sandbox IS the permission system.
Trade-off: First run pulls the agent image (slower). Each sandbox persists until removed. macOS (Apple Silicon) or Windows required for micro-VMs.
Others
Dev Containers: .devcontainer/ in your project, VS Code manages it. Full control over image and tools. Shares the host kernel (weaker isolation than sbx). Do not mount your home directory.
Codespaces / Gitpod / DevPod: Cloud-based. Needs internet. Costs money. Does not work with Expo/iOS Simulator.
Full VM: OrbStack, UTM, Lima. Maximum isolation. Heavy. Overkill for most work.
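For the Dev Containers option above, the whole configuration is one JSON file in .devcontainer/. A minimal sketch; the name, image, and command are illustrative, and by default only the project folder is mounted, not your home directory:

```json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "npm install"
}
```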
Which one to pick
If you only want two choices:
Most secure: sbx. Micro-VM isolation (separate kernel). A kernel exploit in the guest cannot compromise the host. sbx run claude, one command, no Docker Desktop required.
Easiest: zerobox. npm install -g zerobox && zerobox -- claude. No Docker, no VM, no daemon. Strips env vars by default so secrets never leak in. Per-host secret injection.
If you want more nuance:
nono for irrevocable kernel restrictions, built-in agent profiles, and audit trails. Also best for sandboxing host tools like Helix and lazygit (Securing host tools).
sbx for the strongest isolation. Use for projects handling real money or private keys.
Any of the above plus Docker Compose is a good combination: sandbox the AI agent, and run the dev stack with its secrets injected via Infisical.
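One way to wire the compose side, assuming the Infisical CLI: keep the compose file secret-free and let `infisical run` supply the environment at launch. Service names and variables here are illustrative:

```yaml
# docker-compose.yml -- no secrets committed; start the stack with:
#   infisical run -- docker compose up
services:
  api:
    build: .
    environment:
      DATABASE_URL: ${DATABASE_URL}   # interpolated from `infisical run`, never from a .env file
  db:
    image: postgres:16
```

The sandboxed agent never sees this environment; only the compose-managed services do.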
The remaining risk: cloud inference
Even with perfect isolation, cloud AI models still see your source code. For maximum privacy, run local models via llama-server. See Vitalik Buterin’s self-sovereign LLM setup.