# hotcell-server
An HTTP server that exposes the Hotcell library as a JSON-RPC 2.0 service.
## Overview
Hotcell includes an HTTP server (hotcell-server) that exposes the library as a JSON-RPC 2.0 service.
## Start the server
```sh
# Build everything
cargo build
./scripts/sign.sh   # macOS only

# Start the server
hotcell-server \
  --auth-token my-secret-token \
  --worker-bin ./target/debug/hotcell-libkrun-worker \
  --listen 127.0.0.1:8080
```
Or use an environment variable for the token:
```sh
HOTCELL_AUTH_TOKEN=my-secret-token hotcell-server \
  --worker-bin ./target/debug/hotcell-libkrun-worker
```
## Server options
- `--auth-token <TOKEN>` — bearer token required on API requests (can also be set via the `HOTCELL_AUTH_TOKEN` env var)
- `--worker-bin <PATH>` — path to the worker binary
- `--listen <ADDR>` — address to listen on (e.g. `127.0.0.1:8080`)
- `--registry <FILE>` — executor registry JSON file

The server also takes options for the default network mode (`disabled`, `inet`, or `full`) and the default VMM backend (`libkrun` or `firecracker`).

## API Methods
All requests are `POST /api/v1/rpc` with `Authorization: Bearer <token>`.
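Any HTTP client can issue these calls; a minimal Python sketch of the envelope and headers (the `build_rpc_request` helper below is illustrative, not part of Hotcell):

```python
import json

def build_rpc_request(method, params=None, req_id=1, token="my-secret-token"):
    """Build headers and body for a Hotcell JSON-RPC 2.0 call.

    Illustrative helper -- not part of the Hotcell client API.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    envelope = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        envelope["params"] = params
    return headers, json.dumps(envelope)
```

The returned headers and body can then be sent with any HTTP library (e.g. `urllib.request` or `requests`).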
### health
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"health","id":1}'
```

Response:

```json
{"jsonrpc":"2.0","result":{"status":"ok"},"id":1}
```

### sandbox.run
Run a command in a VM:
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.run",
    "id": 2,
    "params": {
      "image": "docker.io/library/alpine:latest",
      "command": ["/bin/sh", "-c", "echo hello && cat /etc/alpine-release"],
      "timeout_secs": 15
    }
  }'
```

Response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "exit_code": 0,
    "console_output": "hello\r\n3.23.3\r\n",
    "result": null
  },
  "id": 2
}
```

### sandbox.run with networking
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.run",
    "id": 3,
    "params": {
      "image": "docker.io/library/alpine:latest",
      "command": ["/bin/sh", "-c", "wget -q -O - http://example.com | head -1"],
      "timeout_secs": 15,
      "network": "inet"
    }
  }'
```

Response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "exit_code": 0,
    "console_output": "<!doctype html>...",
    "result": null
  },
  "id": 3
}
```

### sandbox.run with structured input/output
Pass JSON input to the VM via `HOTCELL_INPUT` and receive structured results:
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.run",
    "id": 4,
    "params": {
      "image": "docker.io/library/alpine:latest",
      "command": ["/bin/sh", "-c", "echo processing... && echo {\"computed\": 42} > /hotcell/result.json"],
      "input": {"key": "value"},
      "timeout_secs": 15
    }
  }'
```

Response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "exit_code": 0,
    "console_output": "processing...\r\n",
    "result": {"computed": 42}
  },
  "id": 4
}
```

`console_output` captures stdout/stderr (logs); `result` captures the parsed JSON from `/hotcell_result/result.json` (structured output). The two do not interfere with each other.
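The guest side of this contract can be sketched in Python. The helper below is illustrative only: it reads `HOTCELL_INPUT` and writes a result file at a caller-supplied path (the examples in this document mention both `/hotcell/result.json` and `/hotcell_result/result.json`, so the path is left as a parameter here):

```python
import json
import os

def run_guest_step(result_path):
    """Sketch of a guest-side command: read structured input from the
    HOTCELL_INPUT environment variable, compute something, and write the
    result JSON that sandbox.run returns in its `result` field.

    Illustrative only; the result path is taken as a parameter because
    the exact guest path is runtime-specific.
    """
    payload = json.loads(os.environ.get("HOTCELL_INPUT", "{}"))
    result = {"computed": 42, "echoed": payload}
    with open(result_path, "w") as f:
        json.dump(result, f)
    return result
```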
### sandbox.run with backend selection
Select a VMM backend per request. Pass `"backend": "firecracker"` to use Firecracker (Linux only), or omit the field to use the server default:
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.run",
    "id": 7,
    "params": {
      "image": "docker.io/library/alpine:latest",
      "command": ["/bin/echo", "hello from firecracker"],
      "backend": "firecracker"
    }
  }'
```

### sandbox.run_function
Run a named function executor. With an executor registry file, start the server with `--registry executors.json`, then:
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.run_function",
    "id": 5,
    "params": {
      "executor": "python-data",
      "input": {"name": "world"}
    }
  }'
```
The server resolves `python-data` to its image, boots the runtime, passes the input via `HOTCELL_INPUT`, and returns the handler's result.
### sandbox.pull
Pre-cache an image so subsequent `sandbox.run` calls skip the download:
```sh
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "sandbox.pull",
    "id": 6,
    "params": {"image": "docker.io/library/python:3.12-slim"}
  }'
```

Response:

```json
{"jsonrpc":"2.0","result":{"pulled":true,"image":"docker.io/library/python:3.12-slim"},"id":6}
```

### Parameters
Parameters for the `sandbox.run` method:
- `image` — OCI image reference to run
- `command` — argv to execute in the VM
- `timeout_secs` — execution timeout in seconds
- `network` — network mode: `disabled`, `inet`, or `full`
- `input` — JSON value passed to the VM via the `HOTCELL_INPUT` env var
- `backend` — VMM backend: `libkrun` or `firecracker`

## Streaming
The batch JSON-RPC endpoints return output only after the VM exits. For long-running commands, the streaming endpoints deliver console output in real time via SSE or WebSocket.
### Endpoints
- `sandbox.run` output via SSE (`POST /api/v1/stream/run`)
- `sandbox.run_function` output via SSE
- `sandbox.run` output via WebSocket (`/api/v1/ws/run`)
- `sandbox.run_function` output via WebSocket

### SSE (Server-Sent Events)
SSE endpoints accept the same JSON body as their batch equivalents (without the JSON-RPC envelope). Auth is via the `Authorization: Bearer` header.
```sh
curl -N -X POST http://127.0.0.1:8080/api/v1/stream/run \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/alpine:latest",
    "command": ["/bin/sh", "-c", "for i in 1 2 3; do echo line $i; sleep 1; done"],
    "timeout_secs": 15
  }'
```

The server responds with a stream of SSE events. Three event types are emitted:
```
event: output
data: {"data":"line 1\n"}

event: output
data: {"data":"line 2\n"}

event: output
data: {"data":"line 3\n"}

event: done
data: {"exit_code":0,"console_output":"line 1\nline 2\nline 3\n","result":null}
```

The third event type, `error`, reports the failure kind (`timeout`, `worker`, or `internal`).
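A client can consume this stream with a few lines of parsing. The sketch below handles only the single-line `event:`/`data:` framing shown in the example; a production client should use a real SSE library:

```python
import json

def parse_sse(stream_text):
    """Parse an SSE response body into (event, data) pairs.

    Minimal illustrative sketch: assumes each event uses one `event:`
    line followed by one `data:` line containing JSON, as in the
    Hotcell examples. Not a full SSE implementation.
    """
    events = []
    current_event = None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            current_event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = json.loads(line[len("data:"):].strip())
            events.append((current_event, data))
    return events
```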
### WebSocket

WebSocket endpoints authenticate via a `?token=` query parameter (browsers cannot set custom headers on WebSocket connections). Send the run params as the first text message; output arrives as JSON text frames with the same event types.
```js
// Connect to the WebSocket endpoint
const ws = new WebSocket(
  "ws://127.0.0.1:8080/api/v1/ws/run?token=my-secret-token"
);

// Send run params as the first message
ws.onopen = () => ws.send(JSON.stringify({
  image: "docker.io/library/alpine:latest",
  command: ["/bin/sh", "-c", "echo hello"],
  timeout_secs: 15
}));

// Receive streamed events
ws.onmessage = (e) => {
  const msg = JSON.parse(e.data);
  // msg.type is "output", "done", or "error"
  console.log(msg);
};
```

## Executor Registry
Maps names to OCI image configurations for reusable function executors.
### JSON config
Define an `executors.json` file that maps executor names to their OCI images and default configuration. Start the server with `--registry executors.json` to load it.
```json
{
  "python-data": {
    "image": "docker.io/library/python:3.12-slim",
    "runtime": "python",
    "default_timeout": "60s",
    "default_memory_mib": 512
  },
  "node-api": {
    "image": "docker.io/library/node:22-slim",
    "runtime": "node",
    "default_timeout": "30s",
    "default_memory_mib": 256
  }
}
```
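Since the registry file is plain JSON, external tooling can validate it without the Rust API. The following Python parser is an illustrative sketch of the format above, not Hotcell's actual loader, and it handles only `<N>s` timeout strings:

```python
import json
import re

def load_registry(text):
    """Parse an executors.json document into a name -> entry dict.

    Illustrative sketch of the registry file format; Hotcell's real
    loader is the Rust ExecutorRegistry. Only durations of the form
    "<seconds>s" are recognized here.
    """
    registry = {}
    for name, raw in json.loads(text).items():
        m = re.fullmatch(r"(\d+)s", raw["default_timeout"])
        if not m:
            raise ValueError(f"unsupported duration: {raw['default_timeout']}")
        registry[name] = {
            "image": raw["image"],
            "runtime": raw.get("runtime"),
            "default_timeout_secs": int(m.group(1)),
            "default_memory_mib": raw.get("default_memory_mib"),
        }
    return registry
```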
"python", "node")Programmatic usage
```rust
use std::path::Path;
use std::time::Duration;
use hotcell::ExecutorRegistry;

// From a JSON file
let registry = ExecutorRegistry::from_file(Path::new("executors.json"))?;

// From a JSON string
let registry = ExecutorRegistry::from_json(r#"{ ... }"#)?;

// Resolve a name
let entry = registry.resolve("python-data").unwrap();
assert_eq!(entry.image, "docker.io/library/python:3.12-slim");
assert_eq!(entry.default_memory_mib, 512);
assert_eq!(entry.default_timeout, Duration::from_secs(60));
```

Entries can also be registered in code:

```rust
use std::time::Duration;
use hotcell::registry::{ExecutorRegistry, ExecutorEntry};

let mut registry = ExecutorRegistry::new();
registry.register("my-func".into(), ExecutorEntry {
    image: "my-registry.io/my-func:latest".into(),
    runtime: Some("python".into()),
    default_timeout: Duration::from_secs(120),
    default_memory_mib: 1024,
    default_vcpus: 2,
});
```

## Error Codes
Error response format:
```json
{
  "jsonrpc": "2.0",
  "error": {"code": -32000, "message": "worker process failed: ..."},
  "id": 2
}
```
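Per JSON-RPC 2.0, a response carries either `result` or `error`, so client code can branch on which key is present. A small illustrative Python helper (`unwrap_rpc` is not part of Hotcell, and error codes beyond the `-32000` example above are not enumerated here):

```python
def unwrap_rpc(response):
    """Return `result` from a parsed JSON-RPC 2.0 response dict,
    raising RuntimeError if the server returned an `error` object.

    Illustrative helper, not part of the Hotcell client API.
    """
    if "error" in response:
        err = response["error"]
        raise RuntimeError(f"RPC error {err['code']}: {err['message']}")
    return response["result"]
```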