breatheoften a day ago

I'm more and more convinced of the importance of this.

There is a very interesting thing happening right now: the "LLM over-promisers" are incentivized to over-promise for all the normal reasons -- but ALSO to create the perception that the "next/soon" breakthrough will only be applicable when run on huge cloud infra, such that running locally will never be all that useful. I tend to think that will prove wildly wrong, and that we will very soon arrive at a world where state-of-the-art LLM workloads run massively more efficiently than they currently do -- to the point of not even being the bottleneck of the workflows that use them. These workloads will also be viable to run locally on common, current-year consumer-level hardware.

"llm is about to be general intelligence and sufficient llm can never run locally" is a highly highly temporary state that should soon be falsifiable imo. I don't think the llm part of the "ai computation" will be the perf bottleneck for long.

lwhi a day ago | parent [-]

Is there any utility in thinking about LLM provision in terms of the electricity grid?

I've often thought that local power generation (via solar or wind) could be -- or could have been -- a viable alternative to national grid supply.

tablets a day ago | parent [-]

I think you're onto something re: electricity - https://www.latitudemedia.com/news/in-africa-the-first-signs...