danaris 2 hours ago

> Don't worry about where AI is today, worry about where it will be in 5-10 years.

And where will it be in 5-10 years?

Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".

Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.

If we want the next 5-10 years to look anything like the past 5-10, we're going to need a new breakthrough. And those don't come on command.

CuriouslyC an hour ago | parent | next [-]

Right about where it is today with better integrations?

One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.

The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.

catlifeonmars 12 minutes ago | parent | next [-]

You buried the lede with “exponential capex scaling”. How is this technology not like oil extraction?

The bulk of that capex is chips, and those chips are straight up depreciating assets.

lupire 44 minutes ago | parent | prev | next [-]

It's a trope at this point: people say this, and then someone points out that while the comment was being drafted, another model or product was released that took a substantial step up in problem-solving power.

enraged_camel 29 minutes ago | parent | prev [-]

I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (in both complexity and scope) that I can assign to an LLM with high confidence are frankly absurd, and I could not even have dreamed of them eight months ago.