NVIDIA OpenShell: Safe, Private Runtime for Autonomous AI Agents

OpenShell is NVIDIA's open-source sandboxed runtime for AI agents with declarative YAML policies for filesystem, network, process, and inference security.

Autonomous AI agents are powerful, but they carry significant risk. An agent with shell access could accidentally delete files, make unwanted network requests, or leak sensitive data. General-purpose isolation tools such as Docker and gVisor were not designed for the granular, agent-specific security policies that AI applications need. NVIDIA OpenShell addresses this gap with a purpose-built sandboxed runtime for AI agents.

OpenShell, published at github.com/NVIDIA/OpenShell, is NVIDIA’s open-source answer to agent security. It provides an isolated execution environment where agents operate under declarative YAML policies that precisely control filesystem access, network communication, process execution, and inference calls. The sandbox runs as a separate process with minimal privileges, enforcing policies at the kernel level through Linux security modules.

What makes OpenShell distinct from general-purpose sandboxes is its AI-aware design. It understands agent-specific operations like model inference calls, tool invocations, and context window boundaries. Policies can be written to allow an agent to read a project directory but never write to it, or to call a specific API endpoint but block all other network traffic. This granularity is essential for production deployments where agents handle sensitive data.

What is NVIDIA OpenShell?

OpenShell is an open-source sandboxed runtime environment for AI agents. It provides security isolation through declarative YAML policies that control filesystem, network, process, and inference operations. Built by NVIDIA, it is designed to be agent-agnostic – compatible with Claude Code, LangChain agents, AutoGPT, and custom agent implementations.

How does OpenShell’s sandbox work?

OpenShell uses Linux kernel security features to enforce agent isolation.

| Security Domain | Controls | Enforcement Method |
|---|---|---|
| Filesystem | Read/write/execute per path | Linux seccomp + Landlock |
| Network | Allow/block per hostname, port, protocol | eBPF + nftables |
| Process | Restrict fork/exec, signal capabilities | Linux seccomp-bpf |
| Inference | Allow/block per model endpoint | Application-level intercept |
| Environment | Mask environment variables, secrets | Process-level isolation |
| Time | Limit execution wall-clock time | Process cgroup quotas |

Filesystem and process rules are compiled into seccomp filters that run in the kernel, making enforcement both fast and secure; network rules are enforced separately through eBPF programs and nftables.
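As a rough illustration of that compilation step, the sketch below lowers a per-domain policy into a syscall allowlist that is checked with seccomp-style default-deny semantics. The domain-to-syscall mapping and the `compile_allowlist` helper are illustrative assumptions, not OpenShell's actual compiler:

```python
# Simplified sketch: lowering a declarative policy to a syscall allowlist.
# The domain-to-syscall mapping is illustrative, not OpenShell's real compiler.

# Baseline syscalls almost every sandboxed process needs.
BASELINE = {"read", "write", "exit", "brk", "mmap"}

# Hypothetical mapping from policy domains to the syscalls they unlock.
DOMAIN_SYSCALLS = {
    "filesystem.read": {"openat", "stat", "getdents64"},
    "filesystem.write": {"openat", "unlinkat", "renameat"},
    "process.fork": {"clone", "execve", "wait4"},
    "network": {"socket", "connect", "sendto", "recvfrom"},
}

def compile_allowlist(policy: dict) -> set:
    """Union the baseline with the syscalls of each enabled policy domain."""
    allowed = set(BASELINE)
    for domain, enabled in policy.items():
        if enabled:
            allowed |= DOMAIN_SYSCALLS[domain]
    return allowed

def is_allowed(allowlist: set, syscall: str) -> bool:
    """A seccomp-style default-deny check: anything not listed is refused."""
    return syscall in allowlist

# A read-only, no-network policy, like the code-review example shown later.
allowlist = compile_allowlist({
    "filesystem.read": True,
    "filesystem.write": False,
    "process.fork": True,
    "network": False,
})
print(is_allowed(allowlist, "openat"))   # True: read rules enable openat
print(is_allowed(allowlist, "connect"))  # False: network domain is disabled
```

A real seccomp filter performs the same membership test in kernel space over syscall numbers rather than names, which is what makes the default-deny check cheap.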

What does an OpenShell policy look like?

Policies are defined in YAML with a clear, declarative syntax:

name: "code-review-agent"
version: "1.0"

filesystem:
  read:
    paths: ["/home/user/projects", "/usr/share/doc"]
  write: []
  execute: []

network:
  allow:
    - hostname: "api.github.com"
      port: 443
      protocol: "tcp"
  deny:
    - hostname: "*"
      port: "*"
      protocol: "*"

process:
  max_forks: 10
  allowed_executables: ["/usr/bin/git", "/usr/bin/python3"]

inference:
  allowed_models:
    - "nvidia/nemotron-4-340b-instruct"
  max_tokens_per_call: 4096
  max_calls_per_session: 100

This policy restricts a code review agent to reading project files, calling only the GitHub API and an NVIDIA inference endpoint, and executing only git and Python.
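To make the allow/deny semantics concrete, here is a small, self-contained evaluator for the network section above. The first-match-wins ordering (allow rules checked before the catch-all deny) and shell-style wildcard matching are assumptions about how such rules are typically evaluated, not OpenShell's documented behavior:

```python
from fnmatch import fnmatch

# The network section of the code-review policy, mirrored as Python data.
ALLOW = [{"hostname": "api.github.com", "port": 443, "protocol": "tcp"}]
DENY = [{"hostname": "*", "port": "*", "protocol": "*"}]

def _matches(rule, hostname, port, protocol):
    """A rule matches when every field is equal or a wildcard."""
    return (fnmatch(hostname, rule["hostname"])
            and rule["port"] in ("*", port)
            and rule["protocol"] in ("*", protocol))

def connection_allowed(hostname, port, protocol="tcp"):
    """Assumed ordering: allow rules are checked before the catch-all deny."""
    if any(_matches(r, hostname, port, protocol) for r in ALLOW):
        return True
    if any(_matches(r, hostname, port, protocol) for r in DENY):
        return False
    return False  # default-deny when no rule matches at all

print(connection_allowed("api.github.com", 443))  # True: the one allowed endpoint
print(connection_allowed("example.com", 443))     # False: caught by wildcard deny
```

Note that even `api.github.com` on any port other than 443 falls through the allow list and is denied, which is the behavior the policy above intends.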

Which AI agents are supported?

OpenShell is designed to work with any AI agent that can be launched as a subprocess.

| Agent | Integration Method | Status |
|---|---|---|
| Claude Code | CLI launch within sandbox | Supported |
| LangChain agents | Python SDK integration | Supported |
| AutoGPT | CLI launch within sandbox | Supported |
| Custom Python agents | OpenShell Python API | Native support |
| Any agent binary | oshell run | Universal |

The oshell CLI tool launches any command within a sandbox context:

# Launch Claude Code in a restricted sandbox
oshell run --policy code-review.yaml -- claude code

# Launch a custom agent
oshell run --policy data-analysis.yaml -- python agent.py
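The Environment and Time domains from the table earlier (masking variables, capping wall-clock time) can be approximated at the process level with nothing but the Python standard library. This is a generic sketch of the underlying idea, not OpenShell's implementation:

```python
import os
import subprocess
import sys

def run_masked(cmd, allowed_env=("PATH", "HOME"), timeout_s=30):
    """Launch an agent subprocess with an allowlisted environment and a
    wall-clock limit; everything else (API keys, tokens) is invisible to it."""
    env = {k: v for k, v in os.environ.items() if k in allowed_env}
    return subprocess.run(cmd, env=env, timeout=timeout_s,
                          capture_output=True, text=True)

# The child cannot see SECRET_TOKEN even if the parent process has it set.
result = run_masked(
    [sys.executable, "-c", "import os; print(os.environ.get('SECRET_TOKEN'))"])
print(result.stdout.strip())  # "None"
```

A real sandbox enforces this from outside the child's trust boundary (the child cannot simply re-read `/proc` or re-export variables), but the allowlist-then-launch pattern is the same.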

How do I install OpenShell?

Installation requires a Linux system with kernel support for the required security modules:

# Download the latest release
curl -LO https://github.com/NVIDIA/OpenShell/releases/latest/download/oshell
chmod +x oshell
sudo mv oshell /usr/local/bin/

# Verify installation
oshell --version

Docker images are also available for non-Linux development environments, though full security enforcement requires native Linux kernel features.
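Since full enforcement depends on kernel features, a quick preflight check of the running kernel version can save debugging time. The minimum versions below are general upstream-Linux baselines (seccomp-bpf landed in 3.5, Landlock in 5.13; 4.4 is a commonly cited floor for usable eBPF), not OpenShell-specific requirements:

```python
import platform
import re

# Upstream kernel versions in which each feature became available (assumed
# baselines, not OpenShell's documented minimums).
FEATURE_MIN_KERNEL = {
    "seccomp-bpf": (3, 5),
    "eBPF": (4, 4),
    "Landlock": (5, 13),
}

def kernel_version(release: str) -> tuple:
    """Parse a release string like '5.15.0-91-generic' into (5, 15)."""
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        raise ValueError(f"unparseable kernel release: {release!r}")
    return (int(m.group(1)), int(m.group(2)))

def missing_features(release: str) -> list:
    """List the features the given kernel predates."""
    ver = kernel_version(release)
    return [f for f, need in FEATURE_MIN_KERNEL.items() if ver < need]

print(missing_features("5.15.0-91-generic"))  # []
print(missing_features("5.4.0-150-generic"))  # ['Landlock']

# Check the machine this script is actually running on:
print(missing_features(platform.release()) or "all features available")
```

A version check is only a heuristic (distributions backport features, and Landlock must also be enabled in the LSM list), but it catches the common case of a kernel that is simply too old.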

What is OpenShell’s license?

OpenShell is released under the Apache License 2.0, NVIDIA's standard open-source license. This permits free use, modification, and distribution in commercial and non-commercial projects, and the license includes an express patent grant from contributors, including NVIDIA.

Frequently Asked Questions

What is NVIDIA OpenShell?

OpenShell is NVIDIA’s open-source sandboxed runtime for AI agents that enforces security policies through Linux kernel features. It controls filesystem, network, process, and inference operations via declarative YAML policies.

How does the OpenShell sandbox work?

It uses Linux seccomp, Landlock, eBPF, and cgroups to enforce fine-grained policies at the kernel level. Each policy is compiled into a seccomp filter for fast, secure enforcement.

What does an OpenShell policy look like?

Policies are defined in YAML with sections for filesystem (read/write/execute paths), network (allow/deny hostnames and ports), process (fork limits and allowed executables), and inference (allowed models and token limits).

Which AI agents are supported?

Claude Code, LangChain agents, AutoGPT, and any custom agent that can run as a subprocess. The oshell run command wraps any binary or script in a sandbox context.

How do I install OpenShell?

Download the oshell binary from GitHub Releases, make it executable, and place it in your PATH. Requires a Linux system with kernel support for seccomp, Landlock, and eBPF.
