
Model Explorer

What AI models can run on your hardware? From Raspberry Pi to cloud GPU, find the right model for your project.

[Interactive device selector: set your available RAM (512 MB to 192 GB) to see how many of the listed models can run on your hardware.]
About Quantization

RAM estimates assume Q4 quantization (4-bit). Full-precision models use 2-4x more memory. Most local tools (Ollama, LM Studio) load quantized models by default. Actual speed depends on your CPU/GPU; the figures here are minimum RAM requirements, not performance guarantees.
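To see where these estimates come from, here is a minimal back-of-the-envelope sketch. The helper function and its ~20% overhead factor are illustrative assumptions, not part of any tool mentioned here; the core idea is simply parameter count times bits per weight.

```python
def estimate_ram_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough RAM estimate in GB: weight storage plus ~20% for
    KV cache and activations (the overhead factor is an assumption)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B-parameter model at Q4 (4-bit) vs full precision (FP16, 16-bit):
print(estimate_ram_gb(7, 4))   # ~4.2 GB, fits in 8 GB of RAM
print(estimate_ram_gb(7, 16))  # ~16.8 GB, 4x more, needs a 24 GB machine
```

This is why Q4 quantization matters for local inference: the same 7B model drops from roughly 17 GB to roughly 4 GB, moving it from workstation-only into laptop territory.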
