zozbot234 5 hours ago

If the bottleneck is storage bandwidth, that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware.
Aurornis 3 hours ago | parent

> If the bottleneck is storage bandwidth that's not "slow"

It is objectively slow: around 100X slower than what most people consider usable. The quality is also degraded severely to get that speed.

> but the point of this is that you can run cheap inference in bulk on very low-end hardware.

You always could, if you didn't care about speed or efficiency.
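The rough magnitude of that gap can be sketched with back-of-envelope arithmetic. A minimal sketch, assuming weights are streamed from storage once per generated token; every number below (model size, SSD and GPU bandwidths) is an illustrative assumption, not a figure from the thread:

```python
# Back-of-envelope: token rate when model weights stream from storage.
# All constants are illustrative assumptions, not figures from the thread.

model_bytes = 3.5e9     # e.g. a ~7B-parameter model at 4-bit quantization
ssd_bandwidth = 3.5e9   # bytes/s, roughly a consumer NVMe SSD sequential read
gpu_bandwidth = 350e9   # bytes/s, roughly a midrange GPU's memory bandwidth

# If each token requires reading every weight once, throughput is bounded
# above by bandwidth / model size (compute and latency only make it worse).
ssd_tokens_per_s = ssd_bandwidth / model_bytes
gpu_tokens_per_s = gpu_bandwidth / model_bytes

print(f"SSD-bound: {ssd_tokens_per_s:.1f} tok/s")   # ~1 tok/s
print(f"GPU-bound: {gpu_tokens_per_s:.0f} tok/s")   # ~100 tok/s
print(f"slowdown:  {gpu_tokens_per_s / ssd_tokens_per_s:.0f}x")
```

Under these assumed numbers the storage-bound setup lands around 1 token/s versus ~100 tok/s when weights sit in GPU memory, i.e. the two-orders-of-magnitude gap being debated.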
| ||||||||
Terretta 4 hours ago | parent

> very low-end hardware

iPhone 17 Pro outperforms AMD's Ryzen 9 9950X per https://www.igorslab.de/en/iphone-17-pro-a19-pro-chip-uebert...
| ||||||||