Aurornis a day ago

> ACT post where Scott Alexander provides some additional info: https://www.astralcodexten.com/p/introducing-ai-2027

The pattern where Scott Alexander puts forth a huge claim and then immediately hedges it backward has become a tiresome theme. It's the linguistic equivalent of putting claims into a superposition: the author both owns the claim and distances himself from it at the same time, leaving the writing just ambiguous enough that nobody reading it 5 years from now could pin down any claim as false, because it was hedged in both directions. Schrödinger's prediction.

> Do we really think things will move this fast? Sort of no

> So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

The talk of "not our precise median" and "not something we feel safe ruling out" is an elaborate way of hedging that this isn't their actual prediction, but hey, anything can happen, so here's a wild story! When the claims don't come true, they can just point back to those hedges and say it wasn't really their median prediction (which, conveniently, is never stated).

My prediction: The vague claims about AI becoming more powerful and useful will come true because, well, they're vague. Technology isn't about to reverse course and get worse.

The actually bold claims, like humanity colonizing space in the late 2020s with the help of AI, are where you start to realize how fanciful the predictions are. It's as if they plotted a couple of points of recent AI progress on a curve, assumed the exponential trajectory would continue forever, and extrapolated from that regression until AI was helping us colonize space in under 5 years.
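To see why that kind of extrapolation goes off the rails, here's a toy sketch (my own illustration, not the AI 2027 authors' model, and the "capability" numbers are made up): fit an exponential to just two early data points and watch what the fit predicts a decade out.

```python
import math

# Toy illustration: fit y = a * exp(b * t) to two hypothetical data points,
# then extrapolate. The numbers are invented for illustration only.
t0, y0 = 0, 1.0   # hypothetical capability score in year 0
t1, y1 = 1, 2.0   # score doubles one year later

b = math.log(y1 / y0) / (t1 - t0)  # growth rate implied by the two points
a = y0

def extrapolate(t):
    """Project the fitted exponential to year t."""
    return a * math.exp(b * t)

for t in (2, 5, 10):
    print(t, round(extrapolate(t), 1))
# -> 2 4.0
# -> 5 32.0
# -> 10 1024.0
```

With two points implying a yearly doubling, the fit predicts a 1024x gain ten years out. The curve's shape, not the data, is doing all the work, which is exactly the failure mode of extrapolating a short run of progress indefinitely.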

> Manifold currently predicts 30%:

Read the fine print: it only requires 30% of judges to vote YES for the market to resolve YES.

This is one of those bets where it's more about gaming the resolution criteria than being right.