zozbot234 5 hours ago

If the bottleneck is storage bandwidth, that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware.

Aurornis 3 hours ago | parent | next [-]

> If the bottleneck is storage bandwidth that's not "slow"

It is objectively slow: around 100X slower than what most people consider usable.

The quality is also degraded severely to get that speed.
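For intuition on where a figure like 100X could come from: token generation is roughly bandwidth-bound, since each generated token touches every weight once. A back-of-envelope sketch, using purely illustrative numbers (model size, SSD bandwidth, and the "usable" threshold are all assumptions, not figures from this thread):

```python
# Back-of-envelope: decoding is bandwidth-bound, so tokens/s ~ bandwidth / model size.
# All figures below are illustrative assumptions, not measurements.

model_size_gb = 35.0      # e.g. a 70B-parameter model quantized to ~4 bits/weight
ssd_bandwidth_gbps = 3.5  # a typical NVMe SSD sequential read rate
interactive_tps = 10.0    # a rough rate many people would call "usable"

# Each generated token streams every weight from storage once:
ssd_tps = ssd_bandwidth_gbps / model_size_gb   # = 0.1 tokens/s

slowdown = interactive_tps / ssd_tps           # = 100x
print(f"{ssd_tps:.2f} tok/s from SSD, ~{slowdown:.0f}x slower than interactive")
```

Under those assumed numbers, streaming weights from a consumer SSD lands around 0.1 tokens/s, about two orders of magnitude below interactive rates.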

> but the point of this is that you can run cheap inference in bulk on very low-end hardware.

You always could, if you didn't care about speed or efficiency.

zozbot234 3 hours ago | parent [-]

You're simply pointing out that most people who use AI today expect interactive speeds. You're right that the point here is not raw power efficiency (having to read from storage will impact energy per operation, and datacenter-scale AI hardware beats edge hardware by that metric anyway), but the ability to repurpose cheaper, lesser-scale hardware is also compelling.

Terretta 4 hours ago | parent | prev [-]

> very low-end hardware

iPhone 17 Pro outperforms AMD’s Ryzen 9 9950X per https://www.igorslab.de/en/iphone-17-pro-a19-pro-chip-uebert...

pinkgolem 4 hours ago | parent [-]

In single-threaded workloads, still impressive