What AI can you run on your hardware? From a Raspberry Pi to a cloud GPU, find the right model for your project.
RAM estimates assume Q4 (4-bit) quantization. Higher-precision variants use 2-4x more memory: Q8 roughly doubles the footprint and FP16 roughly quadruples it. Most local tools (Ollama, LM Studio) serve quantized models by default. Actual speed depends on your CPU/GPU; the figures here are minimum RAM requirements.
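As a rough sketch of where these estimates come from: weight memory is parameter count times bits per weight, plus some runtime overhead. The helper below is a rule of thumb, not any tool's official formula; the 20% overhead factor for KV cache and runtime buffers is an assumption.

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Rule-of-thumb RAM estimate for running an LLM locally.

    weights: params * bits / 8 bytes (1B params at 8 bits = 1 GB)
    overhead: assumed ~20% extra for KV cache and runtime buffers
    """
    weight_gb = params_billions * bits_per_weight / 8
    return round(weight_gb * overhead, 1)

# A 7B model at Q4 holds ~3.5 GB of weights, so it fits in 8 GB of RAM:
print(estimate_ram_gb(7))                       # ~4.2 GB
# The same model at FP16 needs 4x the weight memory:
print(estimate_ram_gb(7, bits_per_weight=16))   # ~16.8 GB
```

This matches the note above: moving from Q4 to FP16 multiplies the memory requirement by four, which is why quantized models are the default for local use.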