brokencode 4 days ago

Perhaps said software could even form an end to end encrypted tunnel from your phone to your local LLM server anywhere over the internet via a simple server intermediary.

The amount of data transferred is tiny and the latency costs are typically going to be dominated by the LLM inference anyway. Not much advantage to doing LAN only except that you don’t need a server.
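The intermediary described above can be very dumb: if the phone and the home server share a key out of band (say, scanned as a QR code), the relay only ever shuttles opaque bytes. A minimal sketch of that idea, where the XOR "cipher" is a toy stand-in for a real AEAD cipher or TLS/Noise/WireGuard, and all names and ports are made up for illustration:

```python
# Sketch of the "simple server intermediary": the relay blindly pipes bytes
# between two peers that share a key out of band. The XOR cipher is a toy
# placeholder -- a real setup would use TLS, Noise, or WireGuard -- but it
# shows the relay never needs access to plaintext.
import socket
import threading
import time

def run_relay(port: int) -> None:
    """Accept two peers and blindly pipe bytes between them."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(2)
    a, _ = srv.accept()
    b, _ = srv.accept()
    def pipe(src: socket.socket, dst: socket.socket) -> None:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    threading.Thread(target=pipe, args=(b, a), daemon=True).start()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR placeholder for a real cipher; symmetric, so it also decrypts."""
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(data))

if __name__ == "__main__":
    PORT, KEY = 9555, b"pre-shared-secret"   # both values are arbitrary
    threading.Thread(target=run_relay, args=(PORT,), daemon=True).start()
    time.sleep(0.2)
    phone = socket.create_connection(("127.0.0.1", PORT))
    server = socket.create_connection(("127.0.0.1", PORT))
    phone.sendall(toy_encrypt(b"prompt: hello", KEY))
    received = server.recv(4096)
    assert received != b"prompt: hello"       # the relay saw only ciphertext
    print(toy_encrypt(received, KEY).decode())
```

The point is that the relay holds no keys, so a cheap VPS (or even a third party) can play that role without seeing prompts or responses.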

Though the number of people who care enough to buy a $3k–$10k server and set this up, rather than just using ChatGPT, is probably very small.

JumpCrisscross 4 days ago | parent [-]

> amount of people who care enough to buy a $3k - $10k server and set this up compared to just using ChatGPT is probably very small

That $10k figure is the maxed-out end, and it includes Apple’s margins. I suspect you could do it for $5k.

I’d also note that for heavy users of ChatGPT, the gap between a home setup’s energy costs and the price of ChatGPT tokens may make this financially compelling.

brokencode 4 days ago | parent [-]

True, it may pay off for pro users. At $200 a month for ChatGPT Pro, it may take only a few years to recoup the initial cost. Not sure about the energy costs, though.
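Rough numbers for that break-even, where the power draw and electricity price are my assumptions rather than figures from the thread:

```python
# Back-of-envelope payback vs. ChatGPT Pro. AVG_DRAW_W and USD_PER_KWH are
# assumed values for illustration, not figures anyone in the thread gave.
HARDWARE_USD = 10_000      # high end of the range mentioned above
PRO_USD_PER_MONTH = 200    # ChatGPT Pro subscription
AVG_DRAW_W = 300           # assumed average draw of a home inference box
USD_PER_KWH = 0.15         # assumed residential electricity rate

energy_usd = AVG_DRAW_W / 1000 * 24 * 30 * USD_PER_KWH   # ~$32/month
net_saving = PRO_USD_PER_MONTH - energy_usd               # ~$168/month
months = HARDWARE_USD / net_saving                        # ~60 months
print(f"energy ${energy_usd:.0f}/mo, break-even in ~{months:.0f} months")
```

So at the $10k end it’s roughly five years even after energy costs; at $5k, closer to two and a half. Heavier electricity rates or a hungrier box push those numbers out.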

And of course you’d be getting a worse model, since no open-source model is currently as good as the best proprietary ones.

Though that gap should narrow as the open models improve and the proprietary ones seemingly plateau.