Experimental — Hotcell is under active development and should not be used in production.

VMM Backends

Four backends behind a common trait. Same code, same protocol, same result format. Pick the right trade-off per workload.

libkrun (default; macOS + Linux)
Embedded VMM via FFI. No separate process. Transparent socket networking (TSI). The fastest path from code to VM.
~337ms end-to-end

Firecracker (Linux only)
AWS Lambda's VMM. Minimal device model, battle-tested jailer. ext4 block device rootfs. The production hardening choice.
~344ms end-to-end

Cloud Hypervisor (Linux only)
Modern rust-vmm VMM. virtio-fs rootfs via virtiofsd. Snapshot/restore, warm migration, GPU passthrough via VFIO. Fastest boot.
~307ms end-to-end

QEMU (Linux only)
Full device emulation. Broadest hardware support. GPU passthrough with OVMF firmware boot. The escape hatch for complex workloads. Most features.
~555ms end-to-end

Feature Matrix

|                    | libkrun                   | Firecracker               | Cloud Hypervisor               | QEMU                      |
|--------------------|---------------------------|---------------------------|--------------------------------|---------------------------|
| Platform           | macOS + Linux             | Linux only                | Linux only                     | Linux only                |
| Boot time (median) | ~337ms                    | ~344ms                    | ~307ms                         | ~555ms                    |
| Rootfs             | virtio-fs (built-in)      | ext4 block device         | virtio-fs (virtiofsd)          | virtio-fs (virtiofsd)     |
| Networking         | TSI (proxied sockets)     | TAP + NAT + egress filter | TAP + NAT + egress filter      | TAP + NAT + egress filter |
| Shared directories | Yes                       | No                        | Yes                            | Yes                       |
| GPU passthrough    | No                        | No                        | Yes (VFIO)                     | Yes (VFIO + OVMF)         |
| Snapshot/restore   | No                        | Yes                       | Yes                            | Snapshot only             |
| Warm migration     | No                        | Yes                       | Yes                            | No (no restore)           |
| Persistent VMs     | Yes                       | Yes                       | Yes                            | Yes                       |
| Port forwarding    | TSI port mapping          | iptables DNAT             | iptables DNAT                  | iptables DNAT             |
| Host sandboxing    | hotcell-jailer (14 steps) | Firecracker jailer        | CH seccomp + hotcell-ch-worker | KVM isolation only        |
| Guest kernel       | Built into libkrunfw      | Separate vmlinux          | Separate vmlinux               | Separate vmlinux          |
| Best for           | Dev, macOS, low latency   | Production, multi-tenant  | GPU compute, migration         | Complex hardware, CUDA    |

Boot times measured on bare metal (AMD Ryzen 9 7950X3D, KVM). Full lifecycle: OCI rootfs assembly, VM boot, command execution, result collection, teardown.

libkrun (default)

An embedded VMM that runs inside the hotcell worker process via FFI. No separate binary to manage, no REST API, no socket coordination. The VM starts when you call krun_start_enter() and the worker process becomes the guest.

Uses Hypervisor.framework on macOS and KVM on Linux. The only backend that runs on macOS, making it the default for development.

+Cross-platform (macOS + Linux)
+No external binary dependencies
+TSI networking (transparent socket proxying)
+14-step hotcell-jailer sandbox on Linux
−No snapshot/restore or migration
−No GPU passthrough

Firecracker (Linux only)

The VMM that powers AWS Lambda and Fargate. A separate binary configured via REST API over a Unix socket. Minimal device model with a tiny attack surface. Uses ext4 block device images built from OCI rootfs layers.

Full snapshot/restore support for warm migration between hosts. Pause a running VM, serialize its memory and CPU state, transfer to another host, restore.

+Battle-tested in AWS Lambda (millions of VMs)
+Snapshot/restore and warm migration
+Firecracker's own jailer for additional sandboxing
+TAP networking with egress filtering + rate limits
−No shared directories (ext4 rootfs only)
−No GPU passthrough

Cloud Hypervisor (Linux only)

A modern, Rust-based VMM built on the rust-vmm crate ecosystem. Uses virtio-fs via external virtiofsd processes for rootfs and shared directory access. Configured via REST API over a Unix socket.

The most feature-rich backend: snapshot/restore, warm migration, live pre-copy migration plumbing, GPU passthrough via VFIO with firmware boot mode (CLOUDHV.fd).

+GPU passthrough (VFIO) with CUDA compute verified
+Snapshot/restore and warm migration
+Shared directories via virtio-fs
+Live pre-copy migration plumbing
−Requires virtiofsd (one external daemon per mount)
−Linux only; larger guest kernel needed for virtio-net

QEMU (Linux only)

The most mature VMM in the ecosystem. Full device emulation, broadest hardware support. Configured via command-line arguments, managed via QMP (JSON over Unix socket).

QEMU is hotcell's escape hatch for workloads that need OVMF firmware boot (required for NVIDIA GPU ROM initialization), complex PCI topologies, or specific device emulation. Uses virtio-fs via virtiofsd for rootfs access.

+GPU passthrough with OVMF firmware (NVIDIA CUDA)
+Broadest hardware and device emulation support
+Most mature migration support (pre-copy, post-copy, RDMA)
+Shared directories via virtio-fs
−Slowest boot (~555ms); larger attack surface
−Restore path not yet implemented in hotcell

Choosing a Backend

Developing on macOS?

Use libkrun (the default). It's the only backend that supports macOS via Hypervisor.framework.

Multi-tenant production?

Use Firecracker. Battle-tested at AWS scale, minimal attack surface, snapshot/restore for migration.

GPU compute (CUDA)?

Use Cloud Hypervisor or QEMU. Both support VFIO passthrough. QEMU adds OVMF firmware for NVIDIA driver initialization.

Need to share host directories?

Use libkrun, Cloud Hypervisor, or QEMU. All three support virtio-fs shared mounts. Firecracker uses ext4 block devices only.
