This guide explains how to use secure container runtimes with OpenSandbox to provide hardware-level isolation for executing untrusted AI-generated code.
- Overview
- Server Configuration
- Docker Mode
- Kubernetes Mode
- User Guide
- Administrator Guide
- Troubleshooting and Best Practices
Secure container runtimes provide stronger isolation than the standard runc runtime used by Docker and containerd, adding security layers through different mechanisms:
| Runtime | Isolation Mechanism | Startup Overhead | Memory Overhead | Best For |
|---|---|---|---|---|
| runc (default) | Process-level cgroups | ~0ms | Minimal | Trusted workloads, local development |
| gVisor | User-space kernel (syscall interception) | ~10-50ms | ~50MB | General workloads with low overhead |
| Kata (QEMU) | Full VM with QEMU hypervisor | ~500ms | ~20-50MB | Maximum compatibility and isolation |
| Kata (Firecracker) | MicroVM with Firecracker hypervisor | ~125ms | ~5MB | High density, minimal footprint |
| Kata (CLH) | Cloud Hypervisor | ~200ms | ~10-20MB | Balanced performance and isolation |
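The memory column matters most for capacity planning. As a rough sketch (using approximate midpoints of the ranges quoted above, not measured values for any particular deployment), the isolation cost of N concurrent sandboxes can be estimated like this:

```python
# Rough capacity math from the overhead table above. Per-sandbox figures
# are approximate midpoints of the quoted ranges, not measurements.

MEMORY_OVERHEAD_MB = {
    "runc": 0,        # process-level isolation, negligible overhead
    "gvisor": 50,     # user-space kernel (Sentry)
    "kata-qemu": 35,  # midpoint of the ~20-50MB range
    "kata-fc": 5,     # Firecracker microVM
    "kata-clh": 15,   # midpoint of the ~10-20MB range
}

def isolation_overhead_mb(runtime: str, sandboxes: int) -> int:
    """Total memory spent on isolation alone for N concurrent sandboxes."""
    return MEMORY_OVERHEAD_MB[runtime] * sandboxes

if __name__ == "__main__":
    for rt in ("gvisor", "kata-qemu", "kata-fc"):
        print(f"{rt}: {isolation_overhead_mb(rt, 100)} MB for 100 sandboxes")
```

For example, 100 Firecracker sandboxes cost roughly 500 MB of isolation overhead, while 100 gVisor sandboxes cost roughly 5 GB — which is why Firecracker is the high-density choice in the table.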
OpenSandbox is designed to execute untrusted code generated by AI models (Claude, GPT-4, Gemini, etc.). Secure runtimes provide:
- Container Escape Protection: Prevents malicious code from breaking out of the container
- Kernel-Level Isolation: Each sandbox gets its own kernel context
- Multi-Tenant Safety: Different users' sandboxes are strongly isolated
- Compliance: Meets security requirements for regulated industries
OpenSandbox supports the following secure runtime types through server-level configuration:
"gvisor"- Google gVisor with runsc"kata"- Kata Containers with QEMU hypervisor (default)"firecracker"- Kata Containers with Firecracker hypervisor""(empty) - Standard runc (default, no secure runtime)
Server-Level Configuration: The secure runtime is configured once at the server level by administrators. All sandboxes on that server transparently use the configured runtime. SDK users and API callers require no code changes.
Secure runtimes are configured through the ~/.sandbox.toml configuration file. The server validates the configured runtime at startup and will refuse to start if the runtime is unavailable.
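The fail-fast behavior described above can be sketched as a small check against the runtime list that `docker info` reports. The function and error wording here are illustrative, not OpenSandbox's actual implementation:

```python
# Sketch of the fail-fast startup check (illustrative, not the actual
# OpenSandbox code). `available` mirrors the Runtimes map reported by
# `docker info` (runtime name -> details such as the binary path).

def validate_docker_runtime(available: dict, configured: str) -> None:
    """Refuse to proceed if the configured OCI runtime is not registered."""
    if configured and configured not in available:
        raise RuntimeError(
            f"Configured Docker runtime '{configured}' is not available. "
            f"Available runtimes: {', '.join(sorted(available))}. "
            "Please install and configure it in /etc/docker/daemon.json."
        )

if __name__ == "__main__":
    runtimes = {"runc": {"path": "runc"}}
    try:
        validate_docker_runtime(runtimes, "runsc")
    except RuntimeError as e:
        print(e)  # matches the error message shown later in this guide
```

An empty `configured` string passes the check, matching the semantics of `type = ""` (plain runc, no secure runtime).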
Edit ~/.sandbox.toml:
[runtime]
type = "docker"  # or "kubernetes"
execd_image = "opensandbox/execd:latest"

# Secure container runtime configuration
# When enabled, ALL sandboxes on this server use the specified runtime
[secure_runtime]
# Runtime type: "", "gvisor", "kata", "firecracker"
type = ""

# Docker mode: OCI runtime name (e.g., "runsc" for gVisor, "kata-runtime" for Kata)
# Required when runtime.type = "docker" and type is not empty
docker_runtime = "runsc"

# Kubernetes mode: RuntimeClass name (e.g., "gvisor", "kata-qemu", "kata-fc")
# Required when runtime.type = "kubernetes" and type is not empty
k8s_runtime_class = "gvisor"

Example: gVisor in Docker mode:

[runtime]
type = "docker"
execd_image = "opensandbox/execd:latest"

[secure_runtime]
type = "gvisor"
docker_runtime = "runsc"
k8s_runtime_class = "gvisor"

Example: Kata (QEMU) in Kubernetes mode:

[runtime]
type = "kubernetes"
execd_image = "opensandbox/execd:latest"

[secure_runtime]
type = "kata"
docker_runtime = "kata-runtime"
k8s_runtime_class = "kata-qemu"

Example: Kata (Firecracker) in Kubernetes mode:

[runtime]
type = "kubernetes"
execd_image = "opensandbox/execd:latest"

[secure_runtime]
type = "firecracker"
docker_runtime = ""  # Not supported in Docker mode
k8s_runtime_class = "kata-fc"

When the server starts, it automatically validates that the configured secure runtime is available:
$ opensandbox-server
INFO Validating secure runtime for Docker backend
INFO Docker OCI runtime 'runsc' is available: {...}
INFO Application startup complete.

If the runtime is not available, the server refuses to start with a clear error message:
ERROR Configured Docker runtime 'runsc' is not available.
Available runtimes: runc.
Please install and configure it in /etc/docker/daemon.json.
Docker mode is fully supported for secure container runtimes.
- Docker daemon installed and running
- Secure runtime installed on the host
For Docker mode, you only need to install the runsc OCI runtime:
# Ubuntu/Debian
curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | \
sudo tee /etc/apt/sources.list.d/gvisor.list
sudo apt-get update && sudo apt-get install -y runsc
# Verify installation
runsc --version

Note: For Docker mode, only `runsc` is required. The `containerd-shim-runsc-v1` shim is only needed for Kubernetes/containerd.

Reference: See the gVisor Installation Guide for other distributions and installation methods.
Use the runsc install command to automatically configure Docker daemon:
sudo runsc install

Or manually edit /etc/docker/daemon.json:
{
"runtimes": {
"runsc": {
"path": "/usr/bin/runsc",
"runtimeArgs": [
"--platform=systrap",
"--network=host"
]
}
}
}

Restart Docker:

sudo systemctl restart docker

Reference: See the gVisor Docker Quick Start for more details.
Edit ~/.sandbox.toml:
[runtime]
type = "docker"
execd_image = "opensandbox/execd:latest"
[secure_runtime]
type = "gvisor"
docker_runtime = "runsc"opensandbox-serverCreate a test sandbox:
curl -X POST http://localhost:8080/v1/sandboxes \
-H "Content-Type: application/json" \
-d '{
"image": {"uri": "python:3.11"},
"timeout": 3600,
"resourceLimits": {"cpu": "500m", "memory": "512Mi"},
"entrypoint": ["python", "-u", "-c", "import time\nwhile True: print('hello from gVisor!'); time.sleep(1)"],
"metadata": {
"name": "gvisor-docker-sandbox"
}
}'

Verify the runtime:
docker ps --format "{{.ID}}\t{{.Image}}\t{{.Names}}"
docker inspect <container_id> | grep -A2 Runtime
# Expected output:
# "Runtime": "runsc",Kata Containers requires hardware virtualization support. Verify your system meets the following requirements:
Hardware Virtualization Support:
# Check if CPU supports hardware virtualization (VT-x for Intel, AMD-V for AMD)
lscpu | grep Virtualization
# Expected output: Virtualization: VT-x (Intel) or AMD-V (AMD)
# Alternatively, check the CPU flags directly
grep -E --color=auto 'vmx|svm' /proc/cpuinfo
# Expected: vmx (Intel) or svm (AMD) flags present

KVM Module:
# Check if KVM module is loaded
lsmod | grep kvm
# Expected: kvm_intel (Intel) or kvm_amd (AMD)
# If not loaded, load KVM module
sudo modprobe kvm_intel # For Intel
# or
sudo modprobe kvm_amd    # For AMD

Kernel Requirements:
- Linux kernel 5.10 or later recommended
- KVM enabled in kernel config
Docker Requirements:
- Docker 20.10 or later
- /etc/docker/daemon.json configured for the Kata runtime
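The manual checks above can be automated. This sketch looks for the vmx/svm flags in /proc/cpuinfo and for the /dev/kvm device node that appears once the KVM module is loaded; the helper names are illustrative:

```python
import os

# Sketch of the prerequisite checks above: hardware virtualization flags
# in /proc/cpuinfo and a loaded KVM module exposing /dev/kvm.
# Helper names are illustrative, not part of any OpenSandbox tooling.

def has_virtualization(cpuinfo_text: str) -> bool:
    """True if the CPU advertises VT-x (vmx) or AMD-V (svm)."""
    flags = cpuinfo_text.split()
    return "vmx" in flags or "svm" in flags

def kvm_device_present(path: str = "/dev/kvm") -> bool:
    """True if the KVM module is loaded and its device node exists."""
    return os.path.exists(path)

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print("virtualization:", has_virtualization(f.read()))
    print("kvm:", kvm_device_present())
```

If either check fails (common inside cloud VMs without nested virtualization), Kata will not work on that host; gVisor remains an option since it needs no hypervisor.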
Download and install Kata Containers static binaries from GitHub releases:
# Find the latest release at https://github.com/kata-containers/kata-containers/releases
KATA_VERSION="3.27.0"
wget https://github.com/kata-containers/kata-containers/releases/download/${KATA_VERSION}/kata-static-${KATA_VERSION}-amd64.tar.zst
# Extract to root directory - Kata will be installed in /opt/kata
zstd -d kata-static-${KATA_VERSION}-amd64.tar.zst
sudo tar -xvf kata-static-${KATA_VERSION}-amd64.tar -C /
# Create symbolic links for PATH access
sudo ln -sf /opt/kata/bin/kata-runtime /usr/local/bin/kata-runtime
sudo ln -sf /opt/kata/bin/containerd-shim-kata-v2 /usr/local/bin/containerd-shim-kata-v2
# Verify installation
kata-runtime --version

Edit /etc/docker/daemon.json to register Kata as a runtime:
{
"default-runtime": "runc",
"runtimes": {
"kata": {
"runtimeType": "io.containerd.kata.v2"
}
}
}

Restart Docker to apply the changes:
sudo systemctl restart docker
# Verify Kata is available in Docker
docker info | grep -A5 Runtimes
# Expected output should include "io.containerd.runc.v2 kata"

Edit ~/.sandbox.toml:
[runtime]
type = "docker"
execd_image = "opensandbox/execd:latest"
[secure_runtime]
type = "kata"
docker_runtime = "kata"Test with OpenSandbox API
Create a sandbox and verify it's running in a VM by checking the kernel:
# Create a test sandbox
curl --location 'http://127.0.0.1:8080/v1/sandboxes' \
--header 'Content-Type: application/json' \
--data '{
"image": {"uri": "ubuntu:latest"},
"timeout": 3600,
"resourceLimits": {"cpu": "500m", "memory": "512Mi"},
"entrypoint": ["/bin/bash", "-c", "while true; do uname -a; sleep 1; done"],
"metadata": {
"name": "kata-sandbox"
}
}'

Check the container's kernel to verify VM isolation:
# Get the container ID
docker ps | grep kata-sandbox
# Check the kernel inside the container (should be different from host)
docker exec <container_id> uname -a
# Expected output: Linux <hostname> 5.10.x ... x86_64 ... (Kata guest kernel)
# Compare with host kernel
uname -a
# Host kernel is typically a different version and has a different hostname

Key Indicators of a Kata VM:
- Container runs in a separate kernel with a different hostname
- Kernel version is typically 5.10.x (Kata's guest kernel)
- Host process list shows qemu-system-x86_64 or a similar hypervisor process
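The kernel comparison above can be expressed as a small check: if the container reports a different kernel release or hostname than the host, it is running inside a guest VM rather than sharing the host kernel. The helper is a hypothetical sketch over `uname -a` output:

```python
# Sketch of the verification above. `uname -a` fields are:
# sysname, hostname, kernel release, ... - a runc container shares the
# host kernel, so release and hostname match; a Kata VM does not.
# The function name is illustrative, not part of OpenSandbox.

def looks_vm_isolated(host_uname: str, container_uname: str) -> bool:
    host, guest = host_uname.split(), container_uname.split()
    same_release = host[2] == guest[2]
    same_hostname = host[1] == guest[1]
    return not (same_release and same_hostname)

if __name__ == "__main__":
    host = "Linux build-host 6.5.0-41-generic #41 SMP x86_64 GNU/Linux"
    guest = "Linux a1b2c3 5.10.25-container #1 SMP x86_64 GNU/Linux"
    print(looks_vm_isolated(host, guest))  # True: release and hostname differ
```

In practice you would feed it `uname -a` from the host and `docker exec <container_id> uname -a` from the sandbox.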
Kubernetes mode supports secure runtimes through RuntimeClass resources.
- Kubernetes cluster with containerd runtime
- Secure runtime installed on all nodes
- RuntimeClass CRDs created
For Kubernetes with containerd, you need to install two components:
- runsc - the gVisor OCI runtime
- containerd-shim-runsc-v1 - the containerd shim for gVisor
# On each node - Ubuntu/Debian
curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | \
sudo tee /etc/apt/sources.list.d/gvisor.list
sudo apt-get update
# Install both gVisor components
sudo apt-get install -y runsc containerd-shim-runsc-v1
# Verify installation
runsc --version
containerd-shim-runsc-v1 --version

Reference: See the gVisor Installation Guide for complete installation instructions and other distributions.
Edit /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options]
TypeUrl = "io.containerd.runsc.v1.options"
ConfigPath = "/etc/containerd/runsc.toml"sudo tee /etc/containerd/runsc.toml > /dev/null <<'EOF'
[runsc]
platform = "ptrace"
EOFRestart containerd:
sudo systemctl restart containerd# gvisor-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: gvisor
handler: runsc
scheduling:
nodeSelector:
kubernetes.io/arch: amd64

Apply it:

kubectl apply -f gvisor-runtimeclass.yaml

Edit ~/.sandbox.toml:
[runtime]
type = "kubernetes"
execd_image = "opensandbox/execd:latest"
[secure_runtime]
type = "gvisor"
k8s_runtime_class = "gvisor"# Test the RuntimeClass
kubectl run test-gvisor --restart=Never --image=hello-world --runtime-class=gvisor
kubectl logs test-gvisor
kubectl delete pod test-gvisorFollow the official Kata Containers installation guide.
Quick installation using Helm:
# Install kata-deploy which will set up Kata Containers via DaemonSet
helm install kata-deploy "oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy" --version "3.27.0" --namespace kube-system --create-namespace
# Wait for kata-deploy pods to be ready
kubectl wait --for=condition=ready pod -l name=kata-deploy -n kube-system --timeout=300s

Note: The kata-deploy DaemonSet automatically configures containerd on all nodes. Manual containerd configuration is not required when using kata-deploy.
Check that Kata Containers is installed and RuntimeClasses are created:
# Check RuntimeClasses
kubectl get runtimeclass
# Expected output:
# NAME HANDLER AGE
# kata kata-qemu 10m
# kata-qemu kata-qemu 10m
# kata-clh kata-clh 10m
# kata-fc kata-fc 10m
# Test Kata with a simple pod
kubectl run test-kata --restart=Never --image=hello-world \
  --overrides='{"apiVersion": "v1", "spec": {"runtimeClassName": "kata-qemu"}}'
kubectl logs test-kata
kubectl delete pod test-kata

When using Pool CRDs for pre-warmed sandboxes, create separate pools for each runtime type:
# gvisor-pool.yaml
apiVersion: sandbox.opensandbox.io/v1alpha1
kind: Pool
metadata:
name: gvisor-pool
labels:
runtime: gvisor
spec:
template:
spec:
runtimeClassName: gvisor
containers:
- name: sandbox-container
image: opensandbox/code-interpreter:v1.0.1
capacitySpec:
bufferMax: 10
bufferMin: 2
poolMax: 20
poolMin: 5

This section is for AI application developers using OpenSandbox.
Important: The secure runtime is configured at the server level. Your code does not need to change.
Simply create a sandbox using the OpenSandbox Lifecycle API - the server automatically applies the configured secure runtime:
Create a test sandbox:
curl -X POST http://localhost:8080/v1/sandboxes \
-H "Content-Type: application/json" \
-d '{
"image": {"uri": "python:3.11"},
"timeout": 3600,
"resourceLimits": {"cpu": "500m", "memory": "512Mi"},
"entrypoint": ["python", "-u", "-c", "import time\nwhile True: print(\"hello from secure sandbox!\"); time.sleep(1)"],
"metadata": {
"name": "my-secure-sandbox"
}
}'

Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"status": "running"
}

The sandbox automatically uses the secure runtime configured on the server (gVisor, Kata, or runc).
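The curl example can also be written as a minimal Python client. The endpoint and payload shape follow the Lifecycle API calls shown in this guide; the helper functions themselves are a sketch, and note that no `runtime` field appears anywhere in the payload:

```python
import json
import urllib.request

# Minimal client sketch for the sandbox-creation call shown above.
# Function names are illustrative; the payload shape follows this guide's
# curl examples. There is deliberately no runtime field - the server
# applies the configured secure runtime on its own.

def sandbox_request(image: str, entrypoint: list, name: str,
                    cpu: str = "500m", memory: str = "512Mi",
                    timeout: int = 3600) -> dict:
    return {
        "image": {"uri": image},
        "timeout": timeout,
        "resourceLimits": {"cpu": cpu, "memory": memory},
        "entrypoint": entrypoint,
        "metadata": {"name": name},
    }

def create_sandbox(base_url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{base_url}/v1/sandboxes",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = sandbox_request(
        "python:3.11",
        ["python", "-u", "-c", "print('hello from secure sandbox!')"],
        name="my-secure-sandbox",
    )
    print(json.dumps(payload, indent=2))
    # create_sandbox("http://localhost:8080", payload) against a running server
```

Because the runtime is server-side configuration, this exact code works unchanged whether the server is configured for runc, gVisor, or Kata.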
How it works:

1. The administrator configures the secure runtime in ~/.sandbox.toml
2. The server validates the runtime at startup
3. The server automatically injects the runtime into each sandbox:
   - Docker mode: adds runtime to the container's HostConfig
   - Kubernetes mode: adds runtimeClassName to the Pod spec
4. Users create sandboxes via the API - no runtime parameter needed
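The injection step above can be sketched as a small function. The field names (`Runtime` in Docker's HostConfig, `runtimeClassName` in the Pod spec) come from the Docker and Kubernetes APIs; the surrounding function is illustrative, not OpenSandbox's actual code:

```python
# Sketch of runtime injection (illustrative, not the actual server code).
# `secure` mirrors the [secure_runtime] section of ~/.sandbox.toml.

def inject_runtime(backend: str, spec: dict, secure: dict) -> dict:
    if not secure.get("type"):
        return spec  # empty type means plain runc - nothing to inject
    if backend == "docker":
        # Docker API: HostConfig.Runtime selects the OCI runtime
        spec.setdefault("HostConfig", {})["Runtime"] = secure["docker_runtime"]
    elif backend == "kubernetes":
        # Kubernetes API: Pod spec.runtimeClassName selects the RuntimeClass
        spec["runtimeClassName"] = secure["k8s_runtime_class"]
    return spec

if __name__ == "__main__":
    secure = {"type": "gvisor", "docker_runtime": "runsc",
              "k8s_runtime_class": "gvisor"}
    print(inject_runtime("docker", {}, secure))
    print(inject_runtime("kubernetes", {}, secure))
```

Because injection happens here, the same API payload produces a gVisor, Kata, or runc sandbox depending solely on server configuration.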
After creating a sandbox, verify the runtime being used:
Docker mode:
docker ps --format "{{.ID}}\t{{.Image}}\t{{.Names}}"
docker inspect <container_id> | grep -A2 Runtime
# Expected output for gVisor:
# "Runtime": "runsc",Kubernetes mode:
kubectl get pod <pod-name> -o jsonpath='{.spec.runtimeClassName}'
# Expected output for gVisor:
# gvisor

This section is for platform operators and SREs managing secure runtime infrastructure.
Secure runtimes must be installed and configured on your infrastructure before configuring OpenSandbox. OpenSandbox does not install runtimes automatically.
| Runtime | Docker | Kubernetes |
|---|---|---|
| gVisor | Install runsc → Configure daemon.json | Install runsc → Configure containerd → Create RuntimeClass |
| Kata (QEMU) | Install kata-runtime → Configure daemon.json | Install Kata → Configure containerd → Create RuntimeClass |
| Kata (Firecracker) | Not supported | Install Kata → Configure containerd → Create RuntimeClass |
The server validates secure runtime configuration at startup:
- Docker mode: Checks if the runtime exists in Docker daemon's runtime list
- Kubernetes mode: Checks if the RuntimeClass exists in the cluster
If validation fails, the server refuses to start with a clear error message.
- Default to gVisor: Provides good security with acceptable performance for most workloads
- Use Kata for Untrusted Code: Maximum isolation for completely unknown code
- Regular Updates: Keep runtimes updated for security patches
- Test Compatibility: Validate your workloads with the chosen runtime before production
- Monitor Resources: Secure runtimes have higher memory overhead
| Use Case | Recommended Runtime | Reasoning |
|---|---|---|
| Development/Testing | runc (default) | Fastest startup, lowest overhead |
| Production AI Code Execution | gVisor | Good balance of security and performance |
| High-Security Requirements | Kata (QEMU) | Maximum isolation, full compatibility |
| High-Density Multi-Tenant | Kata (Firecracker) | Minimal memory overhead per sandbox |
| Untrusted Network Code | gVisor or Kata | Syscall filtering prevents network attacks |
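The recommendation table can be encoded as a simple lookup for automation (for example, a provisioning script that picks a server profile). The use-case keys below are ad hoc labels for this sketch, not OpenSandbox configuration values:

```python
# The recommendation table above as a lookup. Keys are ad hoc labels
# for this sketch, not OpenSandbox configuration values.

RECOMMENDED_RUNTIME = {
    "development": "runc",           # fastest startup, lowest overhead
    "production-ai-code": "gvisor",  # balance of security and performance
    "high-security": "kata-qemu",    # maximum isolation, full compatibility
    "high-density": "kata-fc",       # minimal memory overhead per sandbox
    "untrusted-network": "gvisor",   # syscall filtering blocks network attacks
}

def recommend(use_case: str) -> str:
    # gVisor is this guide's suggested default for unrecognized workloads
    return RECOMMENDED_RUNTIME.get(use_case, "gvisor")

if __name__ == "__main__":
    print(recommend("high-density"))  # kata-fc
```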
Error: Configured Docker runtime 'runsc' is not available.
Solution: Ensure the runtime is configured in /etc/docker/daemon.json and Docker has been restarted:
sudo systemctl restart docker
docker info | grep -A5 Runtimes

Error: RuntimeClass 'gvisor' does not exist.
Solution: Create the RuntimeClass CRD:
kubectl get runtimeclass
kubectl apply -f gvisor-runtimeclass.yaml

Error: Container exits with code 1, no logs
Cause: gVisor doesn't implement all syscalls. Some applications may not be compatible.
Solution: Check the gVisor compatibility guide. Try using Kata (QEMU) which has better compatibility.
Cause: RuntimeClass handler not configured on the node.
Solution: Verify containerd configuration:
# On the node
sudo containerd config dump
sudo systemctl restart containerd

| Feature | runc | gVisor | Kata (QEMU) | Kata (CLH) | Kata (FC) |
|---|---|---|---|---|---|
| Syscall Compatibility | Full | Partial | Full | Full | Limited |
| GPU Support | Yes | No | Yes | Yes | No |
| IPv6 | Yes | Yes | Yes | Yes | Yes |
| Privileged Mode | Yes | No | Yes | Yes | No |
| Docker Volume | Yes | Yes | Yes | Yes | Yes |
| Systemd | Yes | No | Yes | Yes | No |
- Documentation: OpenSandbox GitHub
- Issues: Report bugs via GitHub Issues
- Design Document: See OSEP-0004 for complete design details