hosh 19 hours ago

While the Internet and LLMs are both huge turning points (the metaphor that comes to mind is a phase-change threshold: solid to gas, gas to solid), there is a crucial difference between the internet and LLMs.

The early internet connected personal computers together. It built on technology that was democratizing.

LLMs appear to be democratizing, but they are not. The enshittification is proceeding much more rapidly; no one wants to be left behind in the land grab. Many of us remember the rise of the world wide web, and perhaps even personal computing that made the internet mainstream.

I am excited to hear about the effort to train the Swiss models, though it is a step behind. I remember people talking about how fine-tuning would accelerate advances out in the open, and that large companies such as Google can't keep up with that. Perhaps.

I've been diving into history. The Industrial Revolution was a time of rapid progress, when engines accelerated access to cheaper fuels, which in turn enabled more powerful engines. We were able to afford abundance for a middle class, but we had enshittification then too.

While there is a _propensity_ for enshittification, I for one don't see it as inevitable, nor do I think an AI future is inevitable.

Karrot_Kream 18 hours ago | parent | next [-]

For the internet to be democratizing, it needed PCs first. Before that, computing was where LLMs are now: the mainframe era. You either had access to an institution with a mainframe, or you were lucky enough to get a thin client to a mainframe (the early time-sharing systems). Even after PCs were invented, mainframes were inarguably better than PCs for decades. Mainframes and thin clients were even some of the earliest computer networks.

I am optimistic that local models will catch up and hit the same Pareto-optimal point. At some point your OS will ship with a local model, your system will have access to some Intelligence APIs, and that's that. Linux and the BSDs will probably ship with an open-weights model. I may be wrong, but this is my hope.

If you're interested in a taste of that future try the Gemma3 class of models. While I haven't tried agentic coding with them yet, I find them more than good enough for day-to-day use.
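
If you want a concrete starting point, a few lines of Python are enough. A minimal sketch, assuming the Hugging Face transformers library and a downloaded open-weights checkpoint (the "google/gemma-3-1b-it" model ID here is my assumption; substitute whatever you have locally):

    # Minimal local-inference sketch with Hugging Face transformers.
    # Assumptions: transformers + torch installed, and the (gated) Gemma 3
    # weights already accepted/downloaded; any open-weights model works.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="google/gemma-3-1b-it",  # assumed model ID, small enough for laptops
        device_map="auto",             # GPU / Apple Silicon if present, else CPU
    )

    result = generator(
        "Explain what a Pareto-optimal point is, in two sentences.",
        max_new_tokens=96,
    )
    print(result[0]["generated_text"])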

hosh 5 hours ago | parent [-]

I have been keenly watching for locally-run AIs. This includes the price point for running 70b models, such as the one recently announced by Switzerland. I've also been looking at what it would take to run these on much smaller compute, such as microcontrollers.

Fine-tuning, however, can be run locally -- what are you thinking about in terms of training?
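
To make "locally" concrete: what I have in mind is adapter-style fine-tuning. A minimal LoRA sketch with the peft library, assuming a small open-weights base model (the model ID and hyperparameters are illustrative assumptions on my part, not a recipe):

    # Rough sketch of local fine-tuning via LoRA adapters (peft library).
    # Assumptions: peft + transformers installed; model ID and hyperparameters
    # are illustrative only. LoRA trains a tiny fraction of the weights,
    # which is what makes fine-tuning on local hardware feasible at all.
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")

    config = LoraConfig(
        r=8,                                  # low-rank adapter dimension
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections only
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of weights

    # From here, `model` trains like any transformers model; only the small
    # adapter matrices receive gradients.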

"At some point your OS will ship with a local model, your system will have access to some Intelligence APIs, and that's that."

There's a secondary effect that I had not even discussed in detail here. I don't know how to explain it concisely because it requires reframing a lot of things just to be able to see it, let alone to understand it as a problem.

Let me see how concise I can be:

1. There are non-financial forms of capital, such as social capital, knowledge capital, political capital, natural capital, etc.

2. The propensity is to convert non-financial capital into financial capital, at the expense of the other forms of capital. I think this is the core dynamic driving enshittification (beyond how Cory Doctorow described it when he coined the term).

3. While LLMs and AIs can be designed to enhance the human experience, right now the propensity is to deploy them in ways that do not develop social and knowledge capital for the next generation.

TeMPOraL 19 hours ago | parent | prev [-]

> Many of us remember the rise of the world wide web, and perhaps even personal computing that made the internet mainstream.

I do. The web was the largest and most widespread enshittification process to date, and it started with the first sale made online, with the first ad shown on a web page - this quickly escalated into a full-blown land grab in the late '90s, and then came dotcom, smartphones, social media, SaaS, IoT, and here we are today.

The "propensity for enshittification" is just called business, or entrepreneurship. It is orthogonal to AI.

I think comparing the rise of LLMs to the web taking off is quite accurate, both on the good and the bad sides.

hosh 18 hours ago | parent [-]

I have seen people conduct business that doesn't enshittify. Though rare, it shows enshittification is not a universal trait of conducting business.

The process of creating the AIs requires mobilizing vast amounts of energy, capital, and time. It is a product of capital with the expectation of locking down future markets. It is not orthogonal to enshittification.

The small web was still a thing through the '90s and early '00s. Web servers were never as concentrated as the hardware capable of running AIs today, let alone training them.

TeMPOraL 17 hours ago | parent [-]

> I have seen people conduct business that doesn't enshittify. Though rare, it shows enshittification is not a universal trait of conducting business.

Exception that proves some markets are still inefficient enough to allow people of good conscience to thrive. Doesn't change the overall trajectory.

> The process of creating the AIs requires mobilizing vast amounts of energy, capital, and time. It is a product of capital with the expectation of locking down future markets.

So are computers themselves. However free and open the web once was, or could've been, hardware was always capital-heavy, and it only got heavier with time. Cheap, ubiquitous computers and TSMC are two sides of the same coin.

> It is not orthogonal to enshittification.

That's, again, because business begets enshittification; it's one of those failure modes that are hard to avoid.

> The small web was still a thing through the '90s and early '00s. Web servers were never as concentrated as the hardware capable of running AIs today, let alone training them.

You can "run AI" on your own computer if you like. I hear Apple Silicon is good for LLMs this time of year. A consumer-grade GPU is more than enough to satisfy your amateur and professional image generation needs too; grab ComfyUI from GitHub and a Stable Diffusion checkpoint from HuggingFace, and you're in business; hell, you're actually close to bleeding edge and have a shot at contributing to SOTA if you're so inclined.

Of course, your local quantized Llama is not going to be as good as ChatGPT o3 - but that's just economies of scale at play. Much like with the web - most of it is concentrated, but some still find reasons to run servers themselves.

hosh 5 hours ago | parent [-]

"So are computers themselves. However free and open the web once was, or could've been, hardware was always capital-heavy, and it only got heavier with time. Cheap, ubiquitous computers and TSMC are two sides of the same coin."

Ok, I can see that is true.

"Exception that proves some markets are still inefficient enough to allow people of good conscience to thrive. Doesn't change the overall trajectory."

That depends on what you are measuring to determine market efficiency. Social, political, knowledge, and natural capital are excluded from consideration, so of course we optimize towards financial efficiency at the expense of everything else.

Which comes back to: business does not have to beget enshittification, and it isn't because of market inefficiencies.

I think we're going to have to agree to disagree on some of these points.