zozbot234 an hour ago
> No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale

You can always run these models cheaper locally if you're willing to compromise on total throughput and speed of inference. For most end-user or small-scale business needs, you don't really need a lot of either.
9dev an hour ago
It would be awful if running models locally became the primary way of using LLMs. On dedicated servers that share GPUs across requests, energy usage and environmental impact are way lower overall than if everyone and their mother suddenly needed a beefy GPU. It’s the equivalent of everyone commuting alone in their own car instead of a train picking up hundreds at once.
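
To put rough numbers on the train analogy, here's a quick back-of-envelope sketch. Every figure in it is an assumption picked for illustration, not a measurement:

    # Back-of-envelope sketch of the batching argument. All numbers are
    # assumed, illustrative figures, not measurements.

    SERVER_GPU_WATTS = 700    # assumed draw of a datacenter GPU under full load
    SERVER_BATCH_SIZE = 32    # assumed concurrent requests batched onto that GPU
    LOCAL_GPU_WATTS = 300     # assumed draw of a consumer GPU serving one user

    server_watts_per_request = SERVER_GPU_WATTS / SERVER_BATCH_SIZE
    local_watts_per_request = LOCAL_GPU_WATTS  # one user monopolizes the GPU

    print(f"shared server: ~{server_watts_per_request:.0f} W per request")
    print(f"local GPU:     ~{local_watts_per_request:.0f} W per request")
    # With these assumptions the shared GPU comes out roughly an order of
    # magnitude more efficient per request -- the train-vs-car point above.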