
Claude Code Sandbox: The Complete Guide to Sandboxing AI Agents in Production

How to sandbox Claude Code, Codex, and other AI coding agents for production use. Compare local Docker, Daytona, E2B, and Qovery approaches - with architecture diagrams and real-world examples.

Romaric Philogene
CEO & Co-founder
MAY 13, 2026 · 12 MIN

Key Points:

  • AI agents running on developer laptops are a security liability. Claude Code, Codex, and Cursor have access to SSH keys, API tokens, and production credentials. One runaway task and your blast radius is everything the developer can reach.
  • Sandbox-only platforms solve half the problem. Tools like Daytona and E2B provide isolated execution environments, but they have no path to production. Code stays in the sandbox forever.
  • The enterprise answer is sandbox-to-production governance. You need isolated environments with scoped secrets, network isolation, and audit trails - AND a deployment pipeline that takes agent-written code to staging and production.


Why You Need to Sandbox AI Coding Agents

If your engineering team uses Claude Code, Codex, Cursor, or OpenCode, your AI agents are running with the same permissions as your developers. That means:

  • Full access to SSH keys and AWS credentials on the developer's machine
  • Unaudited command execution - no log of what the agent did, accessed, or modified
  • No network isolation - the agent can reach any API, database, or service the developer can
  • No cost controls - agents can spin up cloud resources, create databases, or run expensive operations indefinitely

This is not hypothetical. As AI coding agents become more autonomous - running for hours on a single task, working multiple tickets in parallel - the blast radius of an uncontrolled agent grows with every credential, repository, and service it can reach.
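To make the exposure concrete, here is a quick shell check (a sketch - the paths are common defaults, not an exhaustive list) of what a non-sandboxed agent inherits from the developer's session:

```shell
# Sketch: credentials a non-sandboxed agent inherits from the developer's
# session. Paths below are common defaults; adjust for your setup.
count=0
for path in ~/.ssh/id_rsa ~/.ssh/id_ed25519 ~/.aws/credentials ~/.kube/config ~/.netrc; do
  if [ -e "$path" ]; then
    echo "agent-readable credential: $path"
    count=$((count + 1))
  fi
done
echo "credential files reachable: $count"

# Exported environment variables leak too - count the suspicious ones:
env | grep -icE 'token|secret|key' || true
```

Anything this script finds, an agent running under the developer's account can read and exfiltrate.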

The 5 Approaches to Sandboxing AI Agents

1. Local Docker Containers (DIY)

The simplest approach: run your AI agent inside a Docker container on the developer's machine.

How it works:

BASH
# Start a throwaway container with the current repo mounted at /workspace
docker run -it --rm \
  -v "$(pwd)":/workspace \
  -w /workspace \
  -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
  node:20 bash
# Then install and run Claude Code inside the container

Pros:

  • Free, no vendor dependency
  • Full control over the container image
  • Works offline

Cons:

  • Still runs on the developer's machine (their credentials are one volume mount away)
  • No network isolation by default - the container is not on the host network, but it can still reach any endpoint the host can unless you restrict egress
  • No audit trail
  • No scalability - one agent per developer machine
  • Docker Desktop licensing costs for enterprise teams

Best for: Individual developers experimenting with AI agents.
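If you go the DIY route, you can shrink the blast radius further with standard Docker flags. A sketch, wrapped in a function (note the trade-off: --network none blocks all egress, including the Anthropic API, so in practice teams use a bridge network plus an egress proxy; --read-only may also need extra --tmpfs mounts for package caches):

```shell
# Sketch: a tighter DIY sandbox. --network none blocks ALL egress,
# including the Anthropic API - swap in a bridge network with an egress
# proxy if the agent needs to call out.
run_agent_sandbox() {
  docker run -it --rm \
    --network none \
    --memory 4g --cpus 2 \
    --read-only --tmpfs /tmp \
    -v "$(pwd)":/workspace -w /workspace \
    node:20 bash
}
# run_agent_sandbox   # then start the agent inside the container
```

Even fully locked down, this is still one agent on one laptop - it addresses the isolation cons above, but not auditability or scale.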

2. Daytona Sandboxes

Daytona is an open-source sandbox infrastructure platform that provides isolated compute environments for AI agents.

How it works:

PYTHON
from daytona import Daytona
daytona = Daytona()
sandbox = daytona.create()
response = sandbox.process.exec("claude --task 'Fix the login bug'")

Pros:

  • Sub-90ms sandbox creation
  • Dedicated kernel per sandbox (real isolation)
  • Snapshot/restore for reproducible agent states
  • SDKs for Python, TypeScript, Go, Java

Cons:

  • No deployment capability - code stays in the sandbox
  • Control plane runs on Daytona's infrastructure (not true BYOC)
  • Basic RBAC (org-level only)
  • No managed databases, no production infrastructure
  • Individual sandboxes only - no multi-service environments

Best for: Teams building AI coding agent products (code interpreters, eval platforms).

3. E2B Sandboxes

E2B offers cloud sandboxes optimized for AI agent code execution, with a focus on the code interpreter use case.

How it works:

PYTHON
from e2b_code_interpreter import Sandbox
sandbox = Sandbox()
execution = sandbox.run_code("print('Hello from sandboxed agent')")

Pros:

  • Purpose-built for code execution
  • Good SDK ergonomics
  • Instant sandbox creation

Cons:

  • Similar limitations to Daytona: sandbox-only, no deployment
  • Managed infrastructure only (no BYOC)
  • Focused on code interpreter use cases, less on full development environments
  • No enterprise governance features documented

Best for: AI products that need code execution as a feature (chatbots, data analysis tools).

4. Claude Code's Built-in Sandboxing

Claude Code itself supports Docker-based sandboxing and the --sandbox flag for macOS sandbox mode.

How it works:

BASH
# macOS sandbox (limited file/network access)
claude --sandbox

# Docker-based sandbox
claude --docker

Pros:

  • Zero setup - built into Claude Code itself
  • Restricts file system access and network calls

Cons:

  • macOS sandbox is limited (no Linux support)
  • Docker mode still runs on the developer's machine
  • No centralized audit trail
  • No way for platform teams to enforce sandboxing across the organization
  • No multi-agent orchestration

Best for: Individual developers who want basic protection while using Claude Code locally.

5. Qovery: Sandbox to Production (Full Lifecycle)

Qovery provides sandboxed runtime environments for AI agents that run on your own Kubernetes infrastructure, with a governed path from sandbox to production.

How it works:

BASH
# Platform team creates a blueprint environment
qovery environment create --name agent-sandbox --blueprint ai-agent

# Agent gets an isolated environment with scoped secrets
# Network isolation via allowlists
# Auto-shutdown on idle
# Every action logged and attributed

# When code is ready, deploy through the same platform
qovery deploy --environment production --service backend

Pros:

  • Full lifecycle: sandbox -> staging -> production on the same platform
  • True BYOC: everything runs on your AWS/GCP/Azure account
  • Fine-grained RBAC (per-project, per-environment, per-role)
  • Network isolation (HTTP allowlists, DNS filtering)
  • Scoped secrets (agents only see what they need)
  • Auto-shutdown and cost controls
  • Full audit trail (every agent action logged)
  • Works with any AI agent (Claude Code, Codex, Cursor, OpenCode, custom)

Cons:

  • Not as fast as Daytona for pure sandbox creation (minutes vs. sub-90ms)
  • Requires Kubernetes infrastructure (more setup than a managed sandbox)
  • Overkill for simple code execution / interpreter use cases

Best for: Enterprise teams running AI agents that need to ship code to production, not just execute scripts.

Your agents need infrastructure. Not just a prompt.
Qovery provisions sandboxed, audited runtime environments for every AI agent - from development to production.
Try Qovery free

Comparison Table

| Feature | Docker (DIY) | Daytona | E2B | Claude --sandbox | Qovery |
|---|---|---|---|---|---|
| Isolation level | Container | Dedicated kernel | Container | OS sandbox | Kubernetes pod |
| Network isolation | Manual config | Per-sandbox limits | Managed | Basic | HTTP allowlists + DNS filtering |
| Audit trail | None | Basic logs | Basic logs | None | Full - every action attributed |
| Secrets management | Env vars | Env vars | Env vars | None | Scoped per environment + role |
| Deployment to production | No | No | No | No | Yes - full CI/CD |
| BYOC | Yes (local) | Partial | No | Yes (local) | Full - your cloud account |
| RBAC | None | Org-level | None | None | Project + environment level |
| Cost controls | None | Auto-stop | Pay-per-use | None | Per-agent budgets + auto-shutdown |
| Multi-service environments | Manual | No | No | No | Yes - full topology |
| Managed databases | No | No | No | No | PostgreSQL, MySQL, Redis, MongoDB |

Which Approach Should You Choose?

Use Docker (DIY) if you're an individual developer experimenting with AI agents and want basic isolation with zero cost.

Use Daytona or E2B if you're building an AI product that needs code execution as a feature - code interpreters, eval platforms, data analysis tools. The sandbox is the product.

Use Claude's built-in sandbox if you want quick local protection while coding with Claude and don't need centralized governance.

Use Qovery if your AI agents need to:

  • Write code that eventually ships to production
  • Access real databases, APIs, and services - but with scoped permissions
  • Run on infrastructure you own (BYOC)
  • Comply with SOC 2, HIPAA, or GDPR requirements
  • Scale from 1 agent to 100 agents with governance

The fundamental question is: does your agent's code need to go to production? If yes, you need more than a sandbox. You need the full lifecycle - and that's the gap Qovery fills.

Getting Started with Qovery Agent Sandboxes

  1. Install the Qovery CLI: curl -fsSL https://get.qovery.com | bash
  2. Connect your cloud account: Qovery provisions infrastructure on your AWS, GCP, or Azure
  3. Create a blueprint environment: Define what agents get access to (databases, APIs, secrets)
  4. Install the MCP Server: AI agents can now provision and deploy through Qovery
  5. Set policies: Define RBAC rules, cost caps, and network allowlists per agent
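The five steps can be sketched as one session. Only the install command comes from the steps above; the remaining qovery subcommands and flags (auth, mcp install, policy set) are illustrative placeholders, not documented CLI syntax - check the Qovery docs for the real invocations.

```shell
# Sketch of the onboarding flow. dry_run prints each command instead of
# executing it. Subcommands after the install line are HYPOTHETICAL
# placeholders for illustration only.
dry_run() { echo "+ $*"; }

dry_run curl -fsSL https://get.qovery.com \| bash                             # 1. install the CLI
dry_run qovery auth                                                           # 2. connect your cloud account
dry_run qovery environment create --name agent-sandbox --blueprint ai-agent   # 3. blueprint environment
dry_run qovery mcp install                                                    # 4. hypothetical MCP setup
dry_run qovery policy set --max-cost 50 --network-allowlist api.internal      # 5. hypothetical policy flags
```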

Read the full agent quickstart guide or book a demo to see it in action.

About the author
Romaric Philogene

Romaric founded Qovery to make Kubernetes accessible to every engineering team. He writes about platform strategy, developer experience, and the future of cloud infrastructure.
