haiku2077 | 19 hours ago
Specifically, I upgraded my Mac and ported my software, which previously ran on Windows/Linux, to macOS and Metal. Literally >100x faster in benchmarks, and overall user workflows became fast enough that I had to "spend" the performance elsewhere, or else the responses came back so fast they were kind of creepy. Have a bunch of _very_ happy users running the software 24/7 on Mac Minis now.
oblio | 4 hours ago
The thing is, these kinds of optimizations happen all the time. Some of them can be as simple as using a hashmap instead of some home-baked data structure. So what you're describing is not necessarily an LLM-specific improvement (though in your case it is; we can't generalize from it to every migration of a feature to an LLM).

And nothing I've seen about recent GPUs or TPUs, from ANY maker (Nvidia, AMD, Google, Amazon, etc.), says anything about general speedups of 100x. Heck, even if you go across multiple generations of these still very new hardware categories, for example Amazon's Inferentia/Trainium, even the vendors' own claims (which are quite bold) would probably put the most recent generations at best at 10x the first ones. And as we all know, all vendors exaggerate the performance of their products.
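To illustrate the hashmap point with a toy sketch (hypothetical code and numbers, not from either project, just to show the shape of the win):

    import time

    items = list(range(20_000))           # "home-baked" index: a plain list
    lookups = list(range(0, 20_000, 20))  # 1,000 membership checks

    # Linear scan: O(n) per lookup.
    start = time.perf_counter()
    hits = sum(1 for x in lookups if x in items)
    linear = time.perf_counter() - start

    # Hash-based set: O(1) average per lookup.
    item_set = set(items)
    start = time.perf_counter()
    hits = sum(1 for x in lookups if x in item_set)
    hashed = time.perf_counter() - start

    print(f"linear: {linear:.3f}s, hashed: {hashed:.6f}s, "
          f"speedup ~{linear / hashed:.0f}x")

On typical hardware even this trivial swap lands well past 100x, which is the point: order-of-magnitude wins usually come from the software, not from any single hardware generation.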