pksebben 3 days ago

That 'average' is doing a lot of work to obfuscate the landscape. Open source continues to grow (indicating a robust ecosystem of people who use their computers for local work), and more importantly, the 'average' looks the way it does not necessarily because of a reduction in local use, but because of an explosion of users who did not previously exist (mobile-first users, SaaS customers, etc.).

The thing we do need to be careful about is regulatory capture. We could very well end up with nothing but monolithic, centralized systems simply because it's made illegal to distribute, use, and share open models. US lawmakers hinted quite strongly that they wanted to do this with DeepSeek.

There may even be a case to be made that at some point in the future, small local models will outperform the monoliths - if distributed training becomes cheap enough, or if we find an alternative to backprop that lets models learn as they infer (something like a more developed forward-forward, or a successor to it), we may see models that do better simply because they aren't a large centralized organism behind a walled garden. I'll grant that this is a fairly Pollyanna take and represents the best possible outcome, but it's not outlandishly fantastic - and there is good reason to believe that any system built on a robust decentralized architecture would be more resilient to problems like platform enshittification and overdeveloped censorship.
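
For the curious, here's roughly what "learning without end-to-end backprop" can look like. This is only a toy sketch of the forward-forward idea - each layer pushes a local "goodness" (sum of squared activations) above a threshold for real data and below it for corrupted data - with made-up shapes, learning rate, and threshold, not anyone's production recipe:

    import numpy as np

    rng = np.random.default_rng(0)

    class FFLayer:
        """One layer trained with a purely local objective
        (no backprop through the rest of the network)."""
        def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
            self.W = rng.normal(0.0, 0.1, (n_in, n_out))
            self.b = np.zeros(n_out)
            self.lr, self.threshold = lr, threshold

        def forward(self, x):
            return np.maximum(0.0, x @ self.W + self.b)   # ReLU activations

        def goodness(self, h):
            return (h ** 2).sum(axis=1)                   # per-example goodness

        def train_step(self, x_pos, x_neg):
            # Push goodness above the threshold on positive data and
            # below it on negative data, using only this layer's state.
            for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
                h = self.forward(x)
                g = self.goodness(h)
                p = 1.0 / (1.0 + np.exp(-sign * (g - self.threshold)))
                d_g = -sign * (1.0 - p)                   # d(loss)/d(goodness)
                d_h = (d_g[:, None] * 2.0 * h) * (h > 0)  # local chain rule only
                self.W -= self.lr * x.T @ d_h / len(x)
                self.b -= self.lr * d_h.mean(axis=0)

    layer = FFLayer(784, 256)
    x_pos = rng.random((32, 784))   # stand-in for real ("positive") samples
    x_neg = rng.random((32, 784))   # stand-in for corrupted ("negative") samples
    layer.train_step(x_pos, x_neg)

The relevant property is that each layer updates from its own activations alone, which is what makes "learn as you infer, on-device" setups look plausible at all.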

At the end of the day, it's not important what the 'average' user is doing, so long as there are enough non-average users pushing the ball forward on the important stuff.

TheOtherHobbes 3 days ago

We already have monolithic centralised systems.

Most open source development happens on GitHub.

You'd think non-average developers would have noticed their code is now hosted by Microsoft, not the FSF. But perhaps not.

The AI end game is likely some kind of post-Cambrian, post-capitalist soup of evolving distributed compute.

But at the moment there's no conceivable way for local and/or distributed systems to have better performance and more intelligence.

Local computing has latency, bandwidth, and speed/memory limits, and general distributed computing isn't even a thing.

idiotsecant 3 days ago

I can't imagine a universe where a small mind with limited computing resources has an advantage over a datacenter mind, no matter the architecture.

bee_rider 3 days ago

The small mind could have an advantage if it is closer to users, or more trustworthy to them.

It only has to be good enough to do what we want. In the extreme, maybe inference becomes cheap enough that we ask “why do I have to wake up the laptop’s antenna?”

galaxyLogic 2 days ago

I would like to have a personal AI agent that basically has a copy of my knowledge, a reflection of me, so it could help me multiply my mind.

heavyset_go 3 days ago

I don't want to send sensitive information to a data center; I don't want it to leave my machine/network/what have you. Local models can help in that department (rough sketch below).

You could say the same about all self-hosted software: teams with billions of dollars to produce and host SaaS will always have an advantage over smaller, local operations.
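
As a rough sketch of the "keep it on your machine" workflow: this assumes llama-cpp-python and a locally downloaded GGUF model (the path, context size, and file name are placeholders, not a recommendation), and the point is simply that the sensitive text never leaves the box.

    from llama_cpp import Llama

    # Local weights on local disk; nothing in this script talks to a network.
    llm = Llama(model_path="models/local-model.gguf", n_ctx=4096)

    with open("confidential_notes.txt") as f:   # hypothetical sensitive file
        notes = f.read()

    out = llm(
        "Summarize the following notes:\n\n" + notes,
        max_tokens=256,
    )
    print(out["choices"][0]["text"])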

pksebben 2 days ago

The advantage it might have won't be in the form of "more power"; it would be in the form of "not burdened by sponsored content, sponsored training, or censorship of any kind, and focused on the use cases most relevant to the individual end user."

We're already very, very close to "smart enough for most stuff". We just need that to also be "tuned for our specific wants and needs".

hakfoo 2 days ago

Abundant resources could enable bad designs. In particular, I could see a lot of commercial drive for huge models that can solve a bazillion different use cases but aren't efficient for any of them.

There might also be local/global bias strategies. A tiny local model trained on your specific code/document base may be better aligned with your specific needs than a galaxy-scale model. If it only knows about one "User" class, the one in your codebase, it might be less prone to borrowing irrelevant ideas from fifty other systems.
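
A hedged sketch of what "a tiny local model trained on your specific codebase" might look like, assuming a small Hugging Face causal LM and a plain PyTorch loop - the model name, paths, and hyperparameters are placeholders, and a real run would want batching, shuffling, and evaluation:

    from pathlib import Path
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "some/small-causal-lm"          # placeholder for a small model
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    model.train()
    for path in Path("my_project/src").rglob("*.py"):   # your code, nothing else
        text = path.read_text()
        if not text.strip():
            continue                                     # skip empty files
        enc = tok(text, return_tensors="pt",
                  truncation=True, max_length=1024)
        loss = model(**enc, labels=enc["input_ids"]).loss   # causal-LM loss
        loss.backward()
        opt.step()
        opt.zero_grad()

    model.save_pretrained("my-project-tuned")    # stays on your own disk
    tok.save_pretrained("my-project-tuned")

The only "User" class this model keeps seeing is the one in that source tree, which is the whole point.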

gizajob 3 days ago

The only difference is latency.

bigfatkitten 3 days ago

Universes like ours where the datacentre mind is completely untrustworthy.