MLC LLM
Machine Learning Compilation for LLMs
Grade: F (Score: 38/100)
Type
- Execution: AOT (ahead-of-time compilation)
- Interface: SDK
About
MLC LLM is a universal deployment engine for running LLMs with native hardware acceleration on a wide range of devices. It uses Apache TVM to compile models ahead of time, targeting CUDA, Metal, Vulkan, OpenCL, and WebGPU backends, and it runs on phones, tablets, and in browsers.
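Since the entry lists MLC LLM as an SDK driven by AOT-compiled models, a short usage sketch may help. The snippet below assumes the mlc_llm Python package and its OpenAI-style MLCEngine API; the model identifier is illustrative and assumes a prebuilt quantized model published under the mlc-ai Hugging Face organization.

```python
# Minimal sketch: chat with a prebuilt, AOT-compiled model via the Python SDK.
# Assumes `pip install mlc-llm` plus a backend-matching TVM runtime, and that
# the model ID below resolves to a prebuilt quantized MLC model (illustrative).
from mlc_llm import MLCEngine

MODEL = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # illustrative model ID

engine = MLCEngine(MODEL)  # loads the compiled model library and weights

# OpenAI-style streaming chat completion.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What does MLC LLM compile models with?"}],
    model=MODEL,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)
print()

engine.terminate()  # release accelerator resources
```

The backend (CUDA, Metal, Vulkan, OpenCL, or WebGPU) is chosen when the model is compiled and the runtime is installed, so a script like this should run unchanged across devices once a compiled model library is available.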
Performance
- Cold Start: 2000 ms
- Base Memory: 500 MB
- Startup Overhead: 500 ms
Last Verified
- Date: Jan 18, 2026
- Method: manual test
Languages
Python, C++
Details
- Isolation: process
- Maturity: stable
- License: Apache-2.0