NitpickLawyer 6 hours ago
Yeah, but that's personal use at best; not much agentic work is happening on that hardware. Macs are great for small models at small-to-medium context lengths, but at >64k tokens (very common in agentic usage) they struggle and slow down a lot. The ~100k hardware is suited to multi-user, small-team usage; that's what you'd use for actual work in reasonable timeframes. For personal use, sure, Macs could work.
osti 3 hours ago
True, but I think for local models we're mostly considering personal usage.