armchairhacker 2 hours ago
And there’s an incentive to publish evidence of this to discourage it. Do you have any?
woadwarrior01 an hour ago
There's this[1]. Model providers have a strong incentive to switch part of their inference fleet to quantized models during peak load. From a systems perspective, it's just another lever: better to serve slightly nerfed models than suffer complete downtime.
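A minimal sketch of what that lever might look like, assuming a hypothetical router sitting in front of two variants of the same weights (the model names, threshold, and routing function are all made up for illustration):

  # Hypothetical "quantize under load" lever: route requests to a lower-precision
  # variant of the same model when the serving fleet is saturated. Names and the
  # threshold are illustrative, not any provider's real configuration.

  FULL_PRECISION_MODEL = "big-model-bf16"
  QUANTIZED_MODEL = "big-model-int8"
  LOAD_THRESHOLD = 0.85  # fraction of fleet capacity in use

  def pick_model(current_load: float) -> str:
      """Serve the quantized variant at peak load instead of shedding traffic."""
      if current_load >= LOAD_THRESHOLD:
          return QUANTIZED_MODEL  # slightly nerfed output, but no downtime
      return FULL_PRECISION_MODEL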
TeMPOraL 2 hours ago
Models aren't just the big bags of floats you imagine them to be. Those bags are there, but there's a whole layer of runtimes, caches, timers, load balancers, classifiers/sanitizers, etc. around them, all of which have tunable parameters that affect the user-perceptible output.
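To make that concrete, here is an entirely hypothetical serving config: two deployments of the exact same weights can behave noticeably differently just from the knobs wrapped around them.

  # Illustrative only: every field name and value here is hypothetical, but each
  # corresponds to a real class of knob that sits around the weights themselves.
  from dataclasses import dataclass

  @dataclass
  class ServingConfig:
      temperature: float          # sampler setting, changes output variance
      max_output_tokens: int      # truncation limit enforced by the runtime
      kv_cache_quantization: str  # e.g. "none" vs "int8", matters for long contexts
      safety_filter: str          # pre/post classifiers can block or rewrite output
      timeout_s: float            # the load balancer may cut generation short

  launch_day = ServingConfig(1.0, 8192, "none", "lenient-v1", 120.0)
  peak_load = ServingConfig(0.7, 2048, "int8", "strict-v2", 30.0)
  # Same bag of floats underneath, different user-perceptible behavior.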
coldtea 22 minutes ago
Anybody with more than five years in the tech industry has seen this done time and again, in all kinds of domains. What evidence do you have that AI is different? That's the extraordinary claim in this case...