API
Overview
RuntimeDog provides a simple JSON API for programmatic access to runtime data. The API returns static JSON and does not require authentication.
Endpoint
GET /api/runtimes.json
Returns a JSON object containing all runtimes with their metadata and scores.
Response Format
{
  "version": "1.0",
  "generated": "2026-01-18T00:00:00.000Z",
  "count": 10,
  "runtimes": [
    {
      "id": "wasmtime",
      "name": "Wasmtime",
      "tagline": "Fast, secure WebAssembly runtime",
      "type": "wasm",
      "execution": "aot",
      "interface": "cli",
      "languages": ["Rust", "C", "C++", ...],
      "isolation": "process",
      "maturity": "production",
      "performance": {
        "cold_start_ms": 1,
        "memory_mb": 5,
        "startup_overhead_ms": 0.5
      },
      "score": 88,
      "license": "Apache-2.0",
      "website": "https://wasmtime.dev",
      "github": "https://github.com/...",
      "docs": "https://docs.wasmtime.dev"
    },
    ...
  ]
}
Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier (URL-safe) |
| name | string | Display name |
| type | string | One of: language, wasm, container, microvm, edge, serverless |
| execution | string | One of: interpreted, jit, aot, hybrid |
| score | number | RuntimeScore (0-100) |
| performance | object | cold_start_ms, memory_mb, startup_overhead_ms |
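For typed consumers, the response maps onto a shape like the one below. This is a minimal TypeScript sketch derived from the fields documented above and the sample response; it is not an official type definition published by RuntimeDog.
// Hypothetical types inferred from the documented fields and sample response
type RuntimeType = 'language' | 'wasm' | 'container' | 'microvm' | 'edge' | 'serverless';
type ExecutionModel = 'interpreted' | 'jit' | 'aot' | 'hybrid';

interface RuntimePerformance {
  cold_start_ms: number;        // cold start time in milliseconds
  memory_mb: number;            // memory footprint in megabytes
  startup_overhead_ms: number;  // startup overhead in milliseconds
}

interface Runtime {
  id: string;                   // unique, URL-safe identifier
  name: string;                 // display name
  type: RuntimeType;
  execution: ExecutionModel;
  score: number;                // RuntimeScore, 0-100
  performance: RuntimePerformance;
  // Other fields seen in the sample response:
  tagline: string;
  languages: string[];
  isolation: string;
  maturity: string;
  license: string;
  website: string;
  github: string;
  docs: string;
}

interface RuntimesResponse {
  version: string;
  generated: string;            // ISO 8601 timestamp
  count: number;
  runtimes: Runtime[];
}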
Usage Example
// Fetch all runtimes
const res = await fetch('https://runtimedog.com/api/runtimes.json');
const data = await res.json();
// Filter by type
const wasmRuntimes = data.runtimes.filter(r => r.type === 'wasm');
// Sort by score
const topRated = data.runtimes.sort((a, b) => b.score - a.score);
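The nested performance object can be queried the same way. A small sketch, reusing the data object fetched above and the field names from the sample response:
// Rank runtimes by cold start time (ascending), without mutating the original array
const fastestColdStart = [...data.runtimes]
  .sort((a, b) => a.performance.cold_start_ms - b.performance.cold_start_ms)
  .slice(0, 5);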
🏠 Local LLM API
Dedicated endpoints for local LLM tools and stack recommendations.
GET /api/local-llm.json
Returns all local LLM tools (launchers, engines, formats, backends).
{
  "count": 25,
  "runtimes": [
    {
      "id": "ollama",
      "name": "Ollama",
      "role": "launcher",
      "localFitScore": 95,
      "backends": ["cuda", "metal", "rocm", "cpu"],
      "formats": ["gguf"],
      "install": { "mac": "brew install ollama", ... },
      ...
    },
    ...
  ]
}
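Consuming this endpoint works the same way as the main API. A minimal sketch, assuming the path above is served from the same base URL as /api/runtimes.json, that keeps CUDA-capable launchers and ranks them by localFitScore:
// Fetch local LLM tools, keep launchers with a CUDA backend, rank by local fit
const llmRes = await fetch('https://runtimedog.com/api/local-llm.json');
const llm = await llmRes.json();

const cudaLaunchers = llm.runtimes
  .filter(t => t.role === 'launcher' && t.backends.includes('cuda'))
  .sort((a, b) => b.localFitScore - a.localFitScore);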
GET /api/local-stacks.json
Returns pre-configured stacks by hardware target (NVIDIA/Mac/CPU/AMD).
{
  "count": 4,
  "targets": ["nvidia", "mac", "cpu", "amd"],
  "stacks": [
    {
      "target": "nvidia",
      "description": "NVIDIA GPU users...",
      "bands": [
        {
          "vram_gb": 8,
          "label": "8GB VRAM (RTX 3060/3070)",
          "recipes": [
            {
              "name": "Beginner",
              "launcher": "ollama",
              "engine": "llama.cpp",
              "formats": ["gguf"],
              "quant_hint": "Q4_K_M",
              "install_steps": ["curl ...", "ollama pull ..."],
              "notes": "7B models run comfortably"
            },
            ...
          ]
        },
        ...
      ]
    },
    ...
  ]
}
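A small sketch of picking recipes from this endpoint, again assuming the same base URL and using the field names shown in the sample (target, bands, vram_gb, recipes):
// Look up recommended recipes for an NVIDIA GPU with the 8 GB VRAM band
const stacksRes = await fetch('https://runtimedog.com/api/local-stacks.json');
const stacksData = await stacksRes.json();

const nvidia = stacksData.stacks.find(s => s.target === 'nvidia');
const band = nvidia?.bands.find(b => b.vram_gb === 8);
for (const recipe of band?.recipes ?? []) {
  console.log(`${recipe.name}: ${recipe.launcher} + ${recipe.engine} (${recipe.quant_hint})`);
}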
Score Philosophy
RuntimeDog scores different item classes using different criteria. This ensures fair comparison within each category.
Runtime Score (0-100)
For runtimes (Node.js, Wasmtime, Docker, Firecracker, etc.), the score reflects:
- Performance — Cold start time, memory footprint, startup overhead
- Isolation — Security boundary strength (process/sandbox/VM)
- Maturity — Production readiness, ecosystem, documentation
AI Tool Score (0-100)
For AI tools (Ollama, LM Studio, vLLM, etc.), the score reflects:
- Install Ease — One-command install, minimal dependencies
- Backend Support — GPU/CPU coverage (CUDA, Metal, ROCm, CPU)
- Operability — API compatibility, model management, updates
Formats & Backends
Model formats (GGUF, ONNX, SafeTensors) and GPU backends (CUDA, Metal, Vulkan) are not scored. These are infrastructure components—choosing one depends on your hardware and toolchain, not a quality ranking.
Why separate scoring? A container runtime like Docker and an inference engine like vLLM serve fundamentally different purposes. Comparing them on the same scale would be misleading. Each class has its own criteria optimized for what matters in that domain.
Notes
- No authentication required
- No rate limits (please be reasonable)
- Data is updated periodically
- CORS enabled for browser access