Aurornis 5 hours ago

> Americans may end up in a situation where they have some $1000-2000 device at home with an open Chinese model running on it, if they care about privacy or owning their data.

I think HN vastly overestimates the market for something like this. Yes, there are some people who would spend $2,000 to avoid having prompts go to any cloud service.

However, most people don’t care. Paying $20 per month for a ChatGPT subscription is a bargain and they automatically get access to new versions as they come.

I think the at-home self hosting hobby is interesting, but it’s never going to be a mainstream thing.

reilly3000 3 hours ago

There is going to be a big market for private AI appliances, in my estimation at least.

Case in point: I give Gmail OAuth access to nobody. I nearly got burned once and I really don’t want my entire domain nuked. But I want to be able to have an LLM do things only LLMs can do with my email.

“Find all emails with ‘autopay’ in the subject from my utility company for the past 12 months, then compare it to the prior year’s data.” GPT-OSS-20b tried its best but got the math obviously wrong. Qwen happily made the tool calls and spat out an accurate report, and even offered to make a CSV for me.
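Once the tool calls return the bills, the arithmetic the model has to get right is simple year-over-year aggregation. A minimal sketch of that step, with hypothetical amounts standing in for the real tool-call results:

```python
from datetime import date

# Hypothetical results of the Gmail tool calls: one (date, amount)
# per "autopay" bill from the utility company.
bills = [
    (date(2024, 3, 14), 82.10), (date(2024, 9, 2), 120.55),
    (date(2025, 3, 15), 95.40), (date(2025, 9, 3), 131.20),
]

def total_for_year(bills, year):
    """Sum the payments that fall in a given calendar year."""
    return sum(amount for d, amount in bills if d.year == year)

this_year = total_for_year(bills, 2025)
prior_year = total_for_year(bills, 2024)
print(f"2025: ${this_year:.2f}, 2024: ${prior_year:.2f}, "
      f"change: ${this_year - prior_year:+.2f}")
```

It's exactly this kind of bucketing and summing that the smaller model fumbled while the larger one got right.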

Surely if you can’t trust npm packages or MS not to hand out god tokens to anyone who asks nicely, you shouldn’t trust a random MCP server with your credentials or your model. So I had Kilocode build my own. For that use case, local models just don’t quite cut it. I loaded $10 into OpenRouter, told it what I wanted, and selected GPT-5 because it’s half off this week. 45 minutes, $0.78, and a few manual interventions later, I had a working Gmail MCP server that is my very own. It gave me great instructions on how to configure an OAuth app in GCP, and I had it running queries from my local models within minutes.
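The heart of a server like that is small: each tool is a name, a JSON-schema-style argument spec, and a handler; the model picks a tool, the server dispatches. A dependency-free sketch of that dispatch loop (a real MCP server would use the official SDK for the protocol framing, and `search_messages` here is a stand-in for the actual Gmail API call made with the OAuth token):

```python
import json

# Tool registry: name -> (argument schema, handler function).
TOOLS = {}

def tool(name, schema):
    """Register a handler under a tool name with an argument schema."""
    def register(fn):
        TOOLS[name] = (schema, fn)
        return fn
    return register

@tool("search_messages", {
    "type": "object",
    "properties": {
        "query": {"type": "string"},       # Gmail search syntax, e.g. 'subject:autopay'
        "newer_than": {"type": "string"},  # e.g. '12m'
    },
    "required": ["query"],
})
def search_messages(query, newer_than="12m"):
    # Stand-in for the real Gmail API call; returns canned data here.
    return [{"subject": "Autopay receipt", "matched": query, "window": newer_than}]

def dispatch(call_json):
    """Take a model's tool call as JSON, run the handler, return JSON."""
    call = json.loads(call_json)
    _, handler = TOOLS[call["name"]]
    return json.dumps(handler(**call.get("arguments", {})))

print(dispatch('{"name": "search_messages", "arguments": {"query": "subject:autopay"}}'))
```

The point of owning this layer yourself is that the credentials and the handler code never leave your machine; only the tool results go to whichever model you point at it.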

There is a consumer play on the horizon for a ~$2,499–$5,000 box that runs your personal staff of agents. We need about one more generation of models, and another generation of low-to-mid-range inference hardware, before it’s commercially feasible to turn a profit; the box would need to easily pay for itself in the lives of its adopters. Then the mass market could open up. A more obvious path runs through SMBs that care about control and data sovereignty.

If you’re curious, my power bill is up YoY, but there was a rate hike; definitely not my 4090 ;).

CJefferson an hour ago

The reason people will pay $2,000 for a private at home AI is porn.

malnourish 3 hours ago

The sales case for having LLMs at the edge is to run inference everywhere on everything. Video games won't go to the cloud for every AI call, but they will use on-device models that will run on the next iteration of hardware.

rafterydj 4 hours ago

I agree the market is niche atm, but I can't help but disagree with your outlook long term. Self hosted models don't have the problems ChatGPT subscribers are facing with models seemingly performing worse over time, they don't need to worry about usage quotas, they don't need to worry about getting locked out of their services, etc.

All of these things have a dark side too, but it’s probably unnecessary for me to elaborate on that.