mustyoshi 5 days ago

I don't see how we'll ever get to widespread local LLM.

The power efficiency alone is a strong enough pressure to use centralized model providers.

My 3090 running 24b or 32b models is fun, but I know I'm paying way more per token in electricity, on top of lower quality tokens.

It's fun to run them locally, but for anything actually useful it's cheaper to just pay API prices currently.
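The per-token electricity claim can be sanity-checked with a back-of-envelope sketch. Every number below (GPU draw, throughput, electricity and API prices) is an illustrative assumption, not a measurement:

```python
# Rough electricity cost per million tokens on a local GPU vs. an API price.
# All figures are assumptions for illustration only.

GPU_WATTS = 350            # assumed average draw of a 3090 under inference load
TOKENS_PER_SEC = 30        # assumed throughput for a ~30B model on one 3090
PRICE_PER_KWH = 0.30       # assumed residential electricity price, USD
API_PRICE_PER_MTOK = 0.80  # assumed API output price, USD per million tokens

seconds_per_mtok = 1_000_000 / TOKENS_PER_SEC
kwh_per_mtok = GPU_WATTS / 1000 * seconds_per_mtok / 3600
local_cost_per_mtok = kwh_per_mtok * PRICE_PER_KWH

print(f"local electricity: ${local_cost_per_mtok:.2f} / Mtok")
print(f"API price:         ${API_PRICE_PER_MTOK:.2f} / Mtok")
```

Under these assumptions local electricity alone already exceeds the API price, and that ignores hardware amortization entirely; different throughput or power-price assumptions shift the result.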

leptons 5 days ago | parent | next [-]

AI is not cheap to run no matter where it is running. The price we get charged today for AI is a loss-leader. The actual cost is much higher, so much higher that the average paying user today would balk at what it actually costs to run. These AI companies are trying to get people hooked on their product, to get it integrated into every business and workflow that they can, then start raising prices.

singpolyma3 5 days ago | parent | prev [-]

Until you put up your solar and then power is almost free...

vidarh 5 days ago | parent [-]

The amortised cost including the panels and labour is nowhere near "almost free".

boredatoms 5 days ago | parent [-]

It is over a couple of years

vidarh 5 days ago | parent [-]

Even if you live somewhere where it pays back in a couple of years, that is not remotely "almost free", and in lots of places the payback period is more in the range of 10-15 years even with subsidies.
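The payback dispute comes down to a simple ratio of install cost to yearly savings. A minimal sketch, where system cost, annual generation, and electricity price are all assumed placeholder figures:

```python
# Rough solar payback-period sketch. All inputs are illustrative assumptions.

SYSTEM_COST = 15_000   # assumed installed cost incl. panels and labour, USD
ANNUAL_KWH = 6_000     # assumed yearly generation for a ~6 kW rooftop array
PRICE_PER_KWH = 0.30   # assumed retail electricity price being offset, USD

annual_savings = ANNUAL_KWH * PRICE_PER_KWH
payback_years = SYSTEM_COST / annual_savings
print(f"payback: {payback_years:.1f} years")
```

With cheaper grid power or lower insolation the denominator shrinks and the payback period stretches toward the 10-15 year range; generous subsidies or high retail prices pull it down toward a few years.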