IshKebab a day ago

This is hilariously over-optimistic on the timescales. Like on this timeline we'll have a Mars colony in 10 years, immortality drugs in 15 and Half Life 3 in 20.

danpalmer a day ago | parent | next [-]

These timelines always assume that things progress as quickly as they can be conceived of, likely because these timelines come from "Ideas Guys" whose involvement typically ends at that point.

Orbital mechanics begs to differ about a Mars colony in 10 years. Drug discovery has many steps that take time; even just the trials will take 5 years, let alone actually finding the drugs.

movpasd 14 hours ago | parent | next [-]

It reminds me of this rather classic post: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...

Science is not ideas: new conceptual schemes must be invented, confounding variables must be controlled, dead-ends explored. This process takes years.

Engineering is not science: kinks must be worked out, confounding variables incorporated. This process also takes years.

Technology is not engineering: the purely technical implementation must spread and become widespread, beating social inertia and its competition; network effects must be established. Investors and consumers must be convinced for the long term. It must survive social and political repercussions. This process takes yet more years.

wkat4242 a day ago | parent | prev [-]

Didn't COVID significantly reduce trial times? I thought that was such a success that they continued in the same vein.

danpalmer 21 hours ago | parent | next [-]

The other reply has better info on covid specifically, but also consider that this refers to "immortality drugs". How long do we have to test those to conclude that they do in fact provide "immortality"?

Now sure, they don't actually mean immortality, and we don't need to test forever to conclude they extend life, but we probably do have to test for years to get good data on whether a generic life extension drug is effective, because you're testing against illness, old age, etc, things that take literally decades to kill.

That's not to mention that any drug like that will be met with intense skepticism and likely need to overcome far more scrutiny than normal (rather than the potentially less scrutiny that covid drugs might have managed).

agos 16 hours ago | parent | prev | next [-]

Trial times were very brief for Covid vaccines because 1) there was no shortage of volunteers, capital, or political alignment at any level, and 2) the virus was everywhere, so it was really, really easy to verify whether the vaccine was working. Compare this with a vaccine for a very rare but deadly disease: it's really hard to know whether it's working, because you can't just expose your test subjects to the deadly disease!

pama a day ago | parent | prev [-]

No it didn’t. At least not for new small molecule drugs. It did reduce times a bit for the first vaccines because there were many volunteers available, and it did allow some antibody drug candidates to be used before full testing was complete. The only approved small molecule drug for covid is paxlovid, with both components of its formulation tested on humans for the first time many years before covid. All the rest of the small molecule drugs are still in early parts of the pipeline or have been abandoned.

mchusma a day ago | parent | prev | next [-]

I like that the "slowdown" scenario has us, by 2030, with a robot economy, a cure for aging, and brain uploading, and working on a Dyson Sphere.

Aurornis a day ago | parent [-]

The story is very clearly modeled to follow the exponential curve they show.

Like they drew the curve out into the shape they wanted, put some milestones on it, and then set to work imagining what would happen if it continued, with a heavy dose of X-risk doomerism to keep it spicy.

It conveniently ignores all of the physical constraints around things like manufacturing GPUs and scaling training networks.

joshjob42 19 hours ago | parent [-]

https://ai-2027.com/research/compute-forecast

In section 4 they discuss their projections specifically for model size, the state of inference chips in 2027, etc. It's largely in line with expectations in terms of capacity, and they only project using 10k of their latest-gen wafer-scale inference chips by late 2027, roughly 1M H100 equivalents. That doesn't seem at all impossible. Earlier on they also discuss expected growth in chip efficiency and in spending, which is only ~10x over the next 2.5 years, not at all unreasonable in absolute terms given the many tens of billions of dollars flooding in.

So on the "can we train the AI" front, they mostly are just projecting 2.5 years of the growth in scale we've been seeing.

The reason they predict a fairly hard takeoff is that they expect distillation, some algorithmic improvements, and iterated cycles of creating synthetic data, training, and then making more synthetic data to enable significant improvements in the efficiency of the underlying models (still largely in line with developments over the last 2 years). In particular, they expect a 10T-parameter model in early 2027 to be basically human-equivalent, and they expect it to "think" at about the rate humans do, 10 words/second. That would require ~300 teraflops to think at that rate, or ~0.1 H100e. That means one of their inference chips could potentially run ~1000 copies (or fewer copies faster, etc.), and thus they have the capacity for millions of human-equivalent researchers (or 100k 40x-speed researchers) in early 2027.

They further expect distillation of such models (and more expensive models overseeing much smaller but still good ones) to squeeze the necessary size, and with it the effective compute required, down to just 2T parameters and ~60 teraflops each, or 5000 human-equivalents per inference chip, making for up to 50M human-equivalents by late 2027.
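As a sanity check, the arithmetic in those two paragraphs can be reproduced directly. The FLOPs-per-parameter-per-token rule, the tokens-per-word ratio, and the per-H100e throughput below are my assumptions, chosen so the stated figures come out; this is a sketch of the estimate, not the forecast's actual methodology:

```python
# Back-of-envelope reproduction of the inference math above.
# Assumed conversion factors (mine, not from the forecast):
FLOPS_PER_PARAM_TOKEN = 2   # one multiply-accumulate per weight per token
TOKENS_PER_WORD = 1.33      # typical tokenizer ratio for English
H100E_FLOPS = 3e15          # ~3,000 TFLOP/s usable per "H100 equivalent"
CHIP_H100E = 100            # wafer-scale chip, so 10k chips ~= 1M H100e

def copy_flops(params, words_per_sec=10):
    """FLOP/s one model copy needs to 'think' at ~human speed."""
    return params * FLOPS_PER_PARAM_TOKEN * words_per_sec * TOKENS_PER_WORD

big = copy_flops(10e12)   # early-2027 10T-parameter model
small = copy_flops(2e12)  # distilled 2T-parameter model
chip = CHIP_H100E * H100E_FLOPS

print(f"10T model: {big / 1e12:.0f} TFLOP/s per copy, ~{chip / big:.0f} copies/chip")
print(f"2T model:  {small / 1e12:.0f} TFLOP/s per copy, ~{chip / small:.0f} copies/chip")
print(f"10k chips: ~{10_000 * chip / small / 1e6:.0f}M human-equivalents")
```

With these assumptions the 10T model needs ~266 TFLOP/s (the comment's "~300"), one chip runs ~1,100 copies of it or ~5,600 distilled copies, and 10k chips give ~56M human-equivalents, in line with the thread's ~50M.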

This is probably the biggest open question and the place where the most criticism seems to me to be warranted. Their hardware timelines are pretty reasonable, but one could easily expect to need 10-100x, or perhaps even 1000x, more compute than they describe to achieve Nobel-winner AGI or superintelligence.

tsurba 13 hours ago | parent [-]

I don’t believe so. I think all of the important components that each need to scale for significant advances in the LLM paradigm are at or near the end of the steep part of the sigmoid:

1) useful training data available on the internet 2) the number of humans creating more training data ”manually” 3) parameter scaling 4) ”easy” algorithmic inventions 5) available+buildable compute

”Just” needing a few more algorithmic inventions to keep the graphs exponential is a cop-out. It is already obvious that just scaling parameters and compute is not enough.

I personally predict that scaling LLMs to solve all physical tasks (e.g. cleaning robots) or intellectual pursuits (they suck at multiplication) will not work out.

We will get better specialized tools by collecting data from specific, high economic value, constrained tasks, and automating them, but scaling a (multimodal) LLM to solve everything in a single model will not be economically viable. We will get more natural interfaces for many tasks.

This is how I think right now as an ML researcher; it will be interesting to see how wrong I was in 2 years.

EDIT: addition about latest algorithmic advances:

- DeepSeek-style GRPO requires a ladder of scored problems, progressively more difficult and appropriately chosen, to get useful gradients. For open-ended problems (which most interesting ones are) we have no such ladders, and it doesn’t work. What it is good for, in particular, is learning to generate code for leetcode problems with a good number of well-made unit tests.

- Test-time inference just adds an insane amount of extra compute after training to brute-force double-check the sanity of answers

Neither will keep the graphs exponential.
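To make the GRPO point above concrete, here's a toy sketch of the kind of verifiable reward signal it depends on: candidate code scored by the fraction of unit tests it passes. All names here are hypothetical; this is an illustration, not DeepSeek's actual setup.

```python
def unit_test_reward(candidate_fn, test_cases):
    """Fraction of test cases the candidate passes; this verifiable
    score is what stands in for a 'ladder' of graded problems."""
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate earns nothing for this case
    return passed / len(test_cases)

# A deliberately buggy candidate for "add two numbers":
buggy_add = lambda a, b: a + b if a < 10 else a - b
tests = [((1, 2), 3), ((20, 5), 25), ((0, 0), 0)]
print(unit_test_reward(buggy_add, tests))  # 2 of 3 tests pass -> ~0.67
```

The gradient signal only exists where such checkable tests exist, which is exactly why open-ended problems are hard to train this way.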

ctoth a day ago | parent | prev | next [-]

Can you share your detailed projection of what you expect the future to look like so I can compare?

IshKebab a day ago | parent | next [-]

Sure

5 years: AI coding assistants are a lot better than they are now, but still can't actually replace junior engineers (at least ones that aren't shit). AI fraud is rampant, with faked audio commonplace. Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.

Tesla's robotaxi won't be available, but Waymo will be in most major US cities.

10 years: AI assistants are now useful enough that you can use them in the ways that Apple and Google really wanted you to use Siri/Google Assistant 5 years ago. "What have I got scheduled for today?" will give useful results, and you'll be able to have a natural conversation and take actions that you trust ("cancel my 10am meeting; tell them I'm sick").

AI coding assistants are now very good and everyone will use them. Junior devs will still exist. Vibe coding will actually work.

Most AI Startups will have gone bust, leaving only a few players.

Art-based AI will be very popular and artists will use it all the time. It will be part of their normal workflow.

Waymo will become available in Europe.

Some receptionists and PAs have been replaced by AI.

15 years: AI researchers finally discover how to do online learning.

Humanoid robots are robust and smart enough to survive in the real world and start to be deployed in controlled environments (e.g. factories) doing simple tasks.

Driverless cars are "normal" but not owned by individuals and driverful cars are still way more common.

Small, light computers become fast enough that autonomous slaughterbots become reality (i.e. drones that can do their own navigation, face recognition, etc.)

20 years: Valve confirms no Half Life 3.

FeepingCreature 16 hours ago | parent | next [-]

It kind of sounds like you're saying "exactly everything we have today, we will have mildly more of."

Quarrelsome a day ago | parent | prev | next [-]

You should add a bit where AI is pushed really hard in places where the subjects have low political power, like management of entry-level workers, care homes, or education, and super bad stuff happens.

Also, we need a big legal event to happen where (for example) autonomous driving is part of a really big accident in which lots of people die, or someone brings a successful court case that an AI mortgage underwriter is discriminating based on race or caste. It won't matter whether AI is actually, genuinely responsible; what will matter is the push-back and the news cycle.

Maybe more events where people start successfully gaming deployed AI at scale in order to get mortgages they shouldn't or get A-grades when they shouldn't.

WXLCKNO 5 hours ago | parent | prev | next [-]

So in the past 5 years we went from not having ChatGPT at all (it was only released in 2022, with non-"chat" models before that), but in the next 5, now that the entire tech world is consumed with making better AI models, we'll just get slightly better AI coding assistants?

Reminds me of that comment about the first iPod being lame and having less space than a Nomad. One of the worst takes I've seen on here recently.

9dev a day ago | parent | prev | next [-]

It’s soothing to read a realistic scenario amongst all of the ludicrous hype on here.

FairlyInvolved a day ago | parent | prev | next [-]

We are going to scale up GPT4 by a factor of ~10,000 and that will result in getting an accurate summary of your daily schedule?

tsunagatta 19 hours ago | parent | next [-]

If we’re lucky.

stale2002 a day ago | parent | prev [-]

Unfortunately, with the way scaling laws are working out, each order-of-magnitude increase in compute only makes models a little better.

Meaning that nobody will even bother to 10,000x GPT-4.
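For intuition, a Chinchilla-style power law makes those diminishing returns concrete; the constants below are invented for illustration, not fitted values from any published scaling law:

```python
# Toy power-law loss curve: loss falls as compute^-alpha toward an
# irreducible floor, so each extra order of magnitude of compute
# buys a smaller absolute improvement. Constants are made up.
def loss(compute_flops, floor=1.7, a=50.0, alpha=0.05):
    return floor + a * compute_flops ** -alpha

prev = None
for c in [1e21, 1e22, 1e23, 1e24, 1e25]:
    l = loss(c)
    delta = "" if prev is None else f" (improvement {prev - l:.3f})"
    print(f"{c:.0e} FLOPs -> loss {l:.3f}{delta}")
    prev = l
```

Each successive decade of compute improves the loss by less than the previous one, which is the economic argument against a 10,000x scale-up.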

archagon a day ago | parent | prev | next [-]

> Small, light computers become fast enough that autonomous slaughterbots become reality

This is the real scary bit. I'm not convinced that AI will ever be good enough to think independently and create novel things without some serious human supervision, but none of that matters when applied to machines that are destructive by design and already have expectations of collateral damage. Slaughterbots are going to be the new WMDs — and corporations are salivating at the prospect of being first movers. https://www.youtube.com/watch?v=UiiqiaUBAL8

Trumpion a day ago | parent | next [-]

Why do you believe that?

The lowest estimates of how much compute our brain represents have already been matched by the latest chip from Nvidia (Blackwell).

The newest GPU clusters from Google, Microsoft, Facebook, xAI, and co have added so much compute it's absurd.

pixl97 a day ago | parent | next [-]

>I'm not convinced that AI will ever be good enough to think independently

and

>Why do you believe that?

What takes less effort, time to deploy, and cost? I mean, there is at least some probability that we kill ourselves off with dangerous semi-thinking war machines, leading to theater-scale wars to the point that society falls apart and we no longer have the expensive infrastructure to make AI as envisioned in the future.

With that said, I'm in the camp that we can create AGI: since nature was able to with a random walk, we'll be able to reproduce it with intelligent design.

baq 16 hours ago | parent | prev [-]

If you bake the model onto the chip itself, which is what should happen for local LLMs once a good-enough one is eventually trained, you'll be looking at orders-of-magnitude reduction in power consumption at constant inference speed.

dontlikeyoueith a day ago | parent | prev [-]

Zero Dawn future confirmed.

petesergeant 18 hours ago | parent | prev [-]

> Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.

I think this is much closer than you think, because a good percentage of call centers are basically just humans with no power cosplaying as people who can help.

My fiber connection went to shit recently. I messaged the company, and got a human who told me they were going to reset the connection from their side, if I rebooted my router. 30m later with no progress, I got a human who told me that they'd reset my ports, which I was skeptical about, but put down to a language issue, and again reset my router. 30m later, the human gave me an even more outlandish technical explanation of what they'd do, at which point I stumbled across the magical term "complaint" ... an engineer phoned me 15m later, said there was something genuinely wrong with the physical connection, and they had a human show up a few hours later and fix it.

No part of the first-layer support experience there would have been degraded if replaced by AI, but the company would have saved some cash.

Gud a day ago | parent | prev | next [-]

Slightly slower web frameworks by 2026. By 2030, a lot slower.

arduanika 3 hours ago | parent | prev [-]

With each passing year, AI doom grifters will learn more and more web design gimmicks.

Trumpion a day ago | parent | prev | next [-]

We currently don't see any ceiling. If this continues at this speed, we will have cheaper, faster, and better models every quarter.

There was never anything progressing this fast.

It would be very ignorant not to keep a very close eye on it.

There is still a chance that it will happen a lot slower, and that the progression will be slow enough that we adjust in time.

But besides AI, we are also now getting robots. The impact for a lot of people will be very real.

zvitiate a day ago | parent | prev | next [-]

No, sooner lol. We'll have aging cures and brain uploading by late 2028. Dyson Swarms will be "emerging tech".

turnsout a day ago | parent | prev | next [-]

IMO they haven't even predicted mid-2025.

  > Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days.
Yeah, we are so not there yet.

Tossrock a day ago | parent [-]

That is literally the pitch line for Devin. I recently spoke to the CTO of a small healthtech startup and he was very pro-Devin for small fixes and PRs, and thought he was getting his money's worth. Claude Code is a little clunkier but gives better results, and it wouldn't take much effort to hook it up to a Slack interface.

turnsout a day ago | parent [-]

Yeah, I get that there are startups trying to do it. But I work with Cursor quite a bit… there is no way I would trust an LLM code agent to take high-level direction and issue a PR on anything but the most trivial bug fix.

baq 16 hours ago | parent [-]

Last year they couldn’t even do a simple fix (they could add a null coalescing operator or an early return which didn’t make sense, that’s about it). Now I’m getting hundreds of LOC of functionality with multiple kLOC of tests out of the agent mode. No way it gets in without a few iterations, but it’s sooo much better than last April.

sva_ a day ago | parent | prev [-]

You forgot fusion energy

klabb3 a day ago | parent [-]

Quantum AI powered by cold fusion and blockchain when?