crystal_revenge a day ago

I wish people would be more vocal in calling out that LLMs have unquestionably failed to deliver on the 2022-2023 promises of exponential improvement at the foundation model level. Yes, they have improved, and there is more tooling around them, but clearly the difference between LLMs in 2025 and 2023 is not as large as the difference between 2023 and 2021. If there were truly exponential progress, there would be no possibility of debating this. Which makes comments like this:

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

Seem almost absurd without further, concrete justification.

LLMs are still quite useful; I'm glad they exist and honestly am still surprised more people don't use them in software. Last year I was very optimistic that LLMs would entirely change how we write software by becoming a fundamental part of our programming toolkit (in a similar way that ML fundamentally changed the options available to programmers for solving problems). Instead we've just come up with more expensive ways to extend the chat metaphor (the current generation of "agents" is disappointingly far from the original intent of agents in AI/CS).

The thing I am increasingly confused about is why so many people continue to need LLMs to be more than they obviously are. I get why crypto boosters exist: if I have 100 BTC, I have a very clear interest in getting others to believe it's valuable. But with "AI", I don't quite get, for the non-VC/founder, why it matters that people start foaming at the mouth over AI rather than just using it for the things it's good at.

Though I have a growing sense that this need is related to another trend I've personally started to witness: AI psychosis is very real. I personally know an increasing number of people who are spiraling into an LLM-induced hallucinated world. The most shocking was someone talking about how losing human relationships is inevitable because most people can't keep up with those enhanced by AI acceleration. On the softer end, I know more and more people who quietly confess how much they let AI act as a perpetual therapist, guiding their every decision (which is more than most people would let a human therapist guide their decisions).

spopejoy 4 hours ago | parent | next [-]

My conspiracy theory du jour is that AGI doomerism is a product of supremacist thinking. AGI futures are purely speculative, right? So why are they always doom and gloom?

Why can't an AGI be inherently classless, unconcerned with profit or scarcity, and inherently "arc-ing toward justice"?

Because that isn't good news for nerds who think they rightly sit at the top of a meritocracy. An evil AGI is one that confirms tech is the ultimate unconquerable power that only the tech elite can even hope to master.

redlock 20 hours ago | parent | prev [-]

“But clearly the difference between LLMs in 2025 and 2023 is not as large as between 2023 and 2021.”

This is a ridiculous statement. A simple example of the huge difference is context size.

GPT-4 was, what, 8K? Now we’re in the millions with good retention. And this is just context size, let alone reasoning, multimodality, etc.

Anamon 18 hours ago | parent | next [-]

I don't think that refutes the point. I'd readily agree with the parent that in terms of actual usefulness and efficiency gains, we're on a trajectory of diminishing returns.

emp17344 16 hours ago | parent | prev [-]

Gemini’s 2M context window is kind of a gimmick and not usable in practice.