andyyyy64 2 hours ago
You're right that it doesn't run anything — it's a pre-download / pre-purchase decision tool, so it estimates rather than measures by design (you can even simulate a GPU you don't own with --gpu). That's a genuine limitation versus running the model: a measured t/s on your exact backend and quant will always beat my estimate. The estimate is bandwidth-bound, computed per quant and per backend, and deliberately conservative on VRAM (weights + GQA-aware KV cache + activations), so it errs toward "won't fit" rather than crashing you mid-run. Where I can get real measurements I fold them in — calibration data and PRs for specific hardware are very welcome; that's the path to numbers you can trust rather than numbers that are merely plausible. On-device measurers like RapidMLX are complementary — a different point in the workflow.
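To make the shape of the estimate concrete, here's a minimal sketch of a bandwidth-bound calculation like the one described: weights + GQA-aware KV cache + an activation margin for VRAM, and bandwidth divided by weight bytes for decode t/s. All function names, parameter values, and the flat activation margin are my own illustrative assumptions, not the tool's actual formulas.

```python
# Hedged sketch of a bandwidth-bound fit/speed estimate.
# Numbers and the helper name are illustrative, not the tool's real code.

def estimate(params_b, bits_per_weight, bw_gbs,
             n_layers, n_kv_heads, head_dim, ctx_len,
             kv_bits=16, activation_gb=1.0):
    """Return (vram_gb, tokens_per_s) for a dense decoder-only model."""
    # Weight footprint: params (billions) at the chosen quant width
    weight_gb = params_b * bits_per_weight / 8
    # GQA-aware KV cache: K and V per layer, sized by KV heads, not query heads
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * ctx_len * (kv_bits / 8) / 1e9
    # Flat activation margin keeps the estimate conservative ("won't fit" over OOM)
    vram_gb = weight_gb + kv_gb + activation_gb
    # Decode is memory-bound: each token streams all weights through memory once,
    # so tokens/s is roughly bandwidth divided by weight bytes
    tok_s = bw_gbs / weight_gb
    return vram_gb, tok_s

# e.g. an 8B model at ~4.5 bits/weight on a ~1000 GB/s GPU,
# 32 layers, 8 KV heads of dim 128, 8k context:
vram, tps = estimate(8, 4.5, 1000, 32, 8, 128, 8192)
```

A real measurement will still beat this — attention compute, dequant overhead, and backend efficiency all shave the bandwidth-bound ceiling — which is exactly why it's framed as an upper-bound estimate rather than a benchmark.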