▲ lpcvoid 3 hours ago
> Open source efforts need to give up on local AI and embrace cloud compute.

Oh god no, please, not more slop. You're already consuming over 1 percent of human energy output; could you, like, chill a bit?
▲ nhecker 2 hours ago | parent | next [-]
In a similar vein: seek efficiency. That is, /if/ I'm going to consume LLM tokens, I figure a local LLM with tens of billions of parameters running on commodity hardware at home will still consume far more energy per token than a frontier model running on commercial hardware, which is strongly incentivized to be as efficient as possible. Do the math; it isn't even close. (Maybe it would be closer in your local winter, when your compute heat could offset your heating requirements, but that gets harder to quantify.) Maybe it's different if you have serious, modern local hardware, but at least in my situation that is not the case.
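The "do the math" claim can be sketched as a back-of-envelope calculation. All the wattage and throughput figures below are illustrative assumptions, not measurements; the point is only that batched datacenter serving amortizes power draw over many more tokens per second:

```python
# Back-of-envelope energy-per-token comparison.
# All numbers below are assumptions for illustration, not benchmarks.

def joules_per_token(watts: float, tokens_per_second: float) -> float:
    """Energy cost of one generated token at a given sustained power draw."""
    return watts / tokens_per_second

# Assumed: a ~30B-parameter model on one consumer GPU at home.
local = joules_per_token(watts=350, tokens_per_second=15)

# Assumed: a datacenter accelerator serving many requests in large
# batches, so aggregate throughput per device is far higher.
cloud = joules_per_token(watts=700, tokens_per_second=1000)

print(f"local: {local:.1f} J/token, cloud: {cloud:.1f} J/token")
# Under these assumed numbers, the local setup burns tens of times
# more energy per token, even though the datacenter chip draws more power.
```

The comparison is dominated by throughput, not power draw, which is why the gap survives even generous assumptions about the home GPU.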
| ||||||||||||||||||||||||||||||||
▲ echelon 14 minutes ago | parent | prev [-]
Y'all aren't getting it.

- Our career is reaching the end of the line.
- 99.9999% of users will be using the cloud.
- If we don't have strong open source models, we're going to be locked into hyperscaler APIs for life.

Why are you building for hobby uses? Build for the freedom to make and scale businesses. To remain competitive. To have options in the future, independent of hyperscalers. We're going to be locked out of the game soon; everyone should be panicking about losing the ability to participate. Play with your RTXes all you like. They might as well be Raspberry Pis. They're toys.