Autonomous AI agents are powerful, but they come with significant risk. An agent with shell access could accidentally delete files, make unwanted network requests, or leak sensitive data. Traditional containerization (Docker, gVisor) was not designed for the granular, agent-specific security policies that AI applications need. NVIDIA OpenShell addresses this gap with a purpose-built sandboxed runtime for AI agents.
OpenShell, published at github.com/NVIDIA/OpenShell, is NVIDIA’s open-source answer to agent security. It provides an isolated execution environment where agents operate under declarative YAML policies that precisely control filesystem access, network communication, process execution, and inference calls. The sandbox runs as a separate process with minimal privileges, enforcing policies at the kernel level through Linux security modules.
What makes OpenShell distinct from general-purpose sandboxes is its AI-aware design. It understands agent-specific operations like model inference calls, tool invocations, and context window boundaries. Policies can be written to allow an agent to read a project directory but never write to it, or to call a specific API endpoint but block all other network traffic. This granularity is essential for production deployments where agents handle sensitive data.
What is NVIDIA OpenShell?
OpenShell is an open-source sandboxed runtime environment for AI agents. It provides security isolation through declarative YAML policies that control filesystem, network, process, and inference operations. Built by NVIDIA, it is designed to be agent-agnostic – compatible with Claude Code, LangChain agents, AutoGPT, and custom agent implementations.
How does OpenShell’s sandbox work?
OpenShell uses Linux kernel security features to enforce agent isolation.
| Security Domain | Controls | Enforcement Method |
|---|---|---|
| Filesystem | Read/write/execute per path | Linux seccomp + Landlock |
| Network | Allow/block per hostname, port, protocol | eBPF + nftables |
| Process | Restrict fork/exec, signal capabilities | Linux seccomp-bpf |
| Inference | Allow/block per model endpoint | Application-level intercept |
| Environment | Mask environment variables, secrets | Process-level isolation |
| Time | Limit execution wall-clock time | Process cgroup quotas |
Each policy is compiled into kernel-level enforcement rules — seccomp filters, Landlock rulesets, and eBPF programs — making enforcement both fast and secure.
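The compile-then-enforce model can be illustrated with a simplified, user-space sketch. This is not the actual kernel mechanism (real enforcement happens inside the kernel via seccomp and friends); it only shows the decision flow: a declarative allowlist is "compiled" into a predicate that every operation is checked against.

```python
# Simplified user-space model of policy compilation. Real OpenShell enforcement
# is done in-kernel (seccomp/Landlock/eBPF); this only illustrates the idea of
# turning a declarative allowlist into a fast filter predicate.

def compile_filter(allowed_syscalls):
    """Compile an allowlist into a membership-check predicate (the 'filter')."""
    allowed = frozenset(allowed_syscalls)

    def decide(syscall_name):
        # Default-deny: anything not explicitly allowed is rejected.
        return "ALLOW" if syscall_name in allowed else "DENY"

    return decide

# Policy fragment: the agent may do file I/O but never spawn processes.
policy_filter = compile_filter(["openat", "read", "write", "close"])

print(policy_filter("read"))    # ALLOW
print(policy_filter("execve"))  # DENY
```

The default-deny stance mirrors the policy model above: operations are blocked unless a rule explicitly permits them.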
What does an OpenShell policy look like?
Policies are defined in YAML with a clear, declarative syntax:
name: "code-review-agent"
version: "1.0"
filesystem:
  read:
    paths: ["/home/user/projects", "/usr/share/doc"]
  write: []
  execute: []
network:
  allow:
    - hostname: "api.github.com"
      port: 443
      protocol: "tcp"
  deny:
    - hostname: "*"
      port: "*"
      protocol: "*"
process:
  max_forks: 10
  allowed_executables: ["/usr/bin/git", "/usr/bin/python3"]
inference:
  allowed_models:
    - "nvidia/nemotron-4-340b-instruct"
  max_tokens_per_call: 4096
  max_calls_per_session: 100
This policy restricts a code review agent to reading project files, contacting only api.github.com over HTTPS, running inference only against the listed Nemotron model, and executing only git and Python.
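To make the network rules concrete, here is a short sketch of how the allow/deny section above might be evaluated. The exact semantics (any matching allow entry wins, `"*"` as a wildcard, deny otherwise) are assumptions for illustration, not OpenShell's documented specification.

```python
# Sketch of evaluating the network section of the example policy.
# Rule semantics (allow-list checked first, "*" wildcard, deny fallback)
# are illustrative assumptions, not OpenShell's documented behavior.

NETWORK_POLICY = {
    "allow": [{"hostname": "api.github.com", "port": 443, "protocol": "tcp"}],
    "deny":  [{"hostname": "*", "port": "*", "protocol": "*"}],
}

def matches(rule, hostname, port, protocol):
    """A rule matches when every field is a wildcard or an exact match."""
    return all(
        rule[key] == "*" or rule[key] == value
        for key, value in
        (("hostname", hostname), ("port", port), ("protocol", protocol))
    )

def is_allowed(hostname, port, protocol, policy=NETWORK_POLICY):
    if any(matches(r, hostname, port, protocol) for r in policy["allow"]):
        return True
    return not any(matches(r, hostname, port, protocol) for r in policy["deny"])

print(is_allowed("api.github.com", 443, "tcp"))  # True
print(is_allowed("example.com", 80, "tcp"))      # False
```

With the wildcard deny rule in place, the policy is effectively default-deny: only the explicitly listed endpoint is reachable.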
Which AI agents are supported?
OpenShell is designed to work with any AI agent that can be launched as a subprocess.
| Agent | Integration Method | Status |
|---|---|---|
| Claude Code | CLI launch within sandbox | Supported |
| LangChain agents | Python SDK integration | Supported |
| AutoGPT | CLI launch within sandbox | Supported |
| Custom Python agents | OpenShell Python API | Native support |
| Any agent binary | oshell run | Universal |
The oshell CLI tool launches any command within a sandbox context:
# Launch Claude Code in a restricted sandbox
oshell run --policy code-review.yaml -- claude code
# Launch a custom agent
oshell run --policy data-analysis.yaml -- python agent.py
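For Python-based agents, the same invocation can be wrapped in a small helper. This is a hypothetical convenience wrapper, not part of any official SDK; it assumes only the `oshell run --policy <file> -- <command>` CLI shape shown above.

```python
# Hypothetical wrapper around the oshell CLI (not an official SDK API).
# It assumes only the invocation shape shown above:
#   oshell run --policy <file> -- <command...>
import subprocess

def build_oshell_cmd(policy_path, agent_argv):
    """Construct the argv list for launching an agent inside a sandbox."""
    return ["oshell", "run", "--policy", policy_path, "--", *agent_argv]

def run_sandboxed(policy_path, agent_argv):
    """Launch the agent under the given policy and return its exit code."""
    return subprocess.run(build_oshell_cmd(policy_path, agent_argv)).returncode

print(build_oshell_cmd("data-analysis.yaml", ["python", "agent.py"]))
# ['oshell', 'run', '--policy', 'data-analysis.yaml', '--', 'python', 'agent.py']
```

Building the argv as a list (rather than a shell string) avoids quoting pitfalls when agent arguments contain spaces or special characters.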
How do I install OpenShell?
Installation requires a Linux system whose kernel supports the required security features (seccomp, Landlock, and eBPF):
# Download the latest release
curl -LO https://github.com/NVIDIA/OpenShell/releases/latest/download/oshell
chmod +x oshell
sudo mv oshell /usr/local/bin/
# Verify installation
oshell --version
Docker images are also available for non-Linux development environments, though full security enforcement requires native Linux kernel features.
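Before installing, it can help to confirm the kernel is recent enough; Landlock, for example, was merged in Linux 5.13. The snippet below is a generic pre-flight check, not an OpenShell tool — the version threshold comes from the kernel's own documentation.

```python
# Quick pre-flight check: is this kernel new enough for Landlock?
# Landlock was merged in Linux 5.13; this is a generic check, not an
# OpenShell utility.
import platform

LANDLOCK_MIN = (5, 13)

def kernel_version(release=None):
    """Parse a 'uname -r' style string like '6.5.0-21-generic' into a tuple."""
    release = release or platform.release()
    parts = release.split("-")[0].split(".")
    return tuple(int(p) for p in parts[:2])

def supports_landlock(release=None):
    return kernel_version(release) >= LANDLOCK_MIN

print(supports_landlock("5.10.0-28-amd64"))   # False
print(supports_landlock("6.5.0-21-generic"))  # True
```

Note that a sufficiently new kernel is necessary but not sufficient: distributions can compile Landlock or eBPF support out, so a failed `oshell` launch on a new kernel may still indicate a kernel configuration issue.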
What is OpenShell’s license?
OpenShell is released under the Apache License 2.0, NVIDIA’s standard open-source license. This allows free use, modification, and distribution in commercial and non-commercial projects, and the license includes an express patent grant from contributors.
Frequently Asked Questions
What is NVIDIA OpenShell?
OpenShell is NVIDIA’s open-source sandboxed runtime for AI agents that enforces security policies through Linux kernel features. It controls filesystem, network, process, and inference operations via declarative YAML policies.
How does the OpenShell sandbox work?
It uses Linux seccomp, Landlock, eBPF, and cgroups to enforce fine-grained policies at the kernel level. Each policy is compiled into kernel-level filters for fast, secure enforcement.
What does an OpenShell policy look like?
Policies are defined in YAML with sections for filesystem (read/write/execute paths), network (allow/deny hostnames and ports), process (fork limits and allowed executables), and inference (allowed models and token limits).
Which AI agents are supported?
Claude Code, LangChain agents, AutoGPT, and any custom agent that can run as a subprocess. The oshell run command wraps any binary or script in a sandbox context.
How do I install OpenShell?
Download the oshell binary from GitHub Releases, make it executable, and place it in your PATH. Requires a Linux system with kernel support for seccomp, Landlock, and eBPF.
Further Reading
- NVIDIA OpenShell GitHub Repository
- NVIDIA AI Agent Security Guide
- Linux seccomp Documentation
- Landlock Linux Security Module
- eBPF for Security: A Practical Guide
graph TD
A[AI Agent] --> B[OpenShell Sandbox]
B --> C{Policy Enforcer}
C --> D[seccomp Filter]
C --> E[Landlock FS]
C --> F[eBPF Network]
C --> G[cgroup Limits]
D --> H[Kernel]
E --> H
F --> H
G --> H
H --> I[System Calls]
H --> J[File Operations]
H --> K[Network Packets]
H --> L[Process Creation]

flowchart LR
subgraph Policy Declaration
A[YAML Policy] --> B[Filesystem Rules]
A --> C[Network Rules]
A --> D[Process Rules]
A --> E[Inference Rules]
end
subgraph Kernel Enforcement
B --> F[Landlock]
C --> G[eBPF]
D --> H[seccomp]
E --> I[App Intercept]
end
subgraph Outcomes
F --> J[Allowed]
G --> J
H --> J
I --> J
F --> K[Denied]
G --> K
H --> K
I --> K
end