jb1991 2 days ago

I would bet all of my life's assets that AGI will not be seen in the lifetime of anyone reading this message right now.

That includes anyone reading this message long after the lives of those who read it on its post date have ended.

Which of course raises the interesting question of how I can make good on this bet.

ashivkum a day ago | parent | next [-]

Genuinely curious to hear your reasoning for why this is the case. I'm always somewhere between bemused and annoyed opening the daily HN thread about AGI and seeing everyone's totally unfounded confidence in their predictions.

My position is that I have no idea what is going to happen.

makotech221 a day ago | parent | next [-]

It's incredibly stupid to believe general intelligence is just a series of computations that can be done by a computer. The stemlords on the West Coast need to take philosophy classes.

KylerAce a day ago | parent | next [-]

I don't think it's stupid to believe that the brain is somehow beyond Turing computable, considering how easy it is to create a system exactly as capable as a Turing machine. I also don't think that anything in philosophy can provide empirical evidence that the brain is categorically special as opposed to emergently special. The sum total of the epistemology I've studied boiled down to people saying "I think human consciousness / the brain works like this" with varying degrees of complexity.

tokioyoyo a day ago | parent | prev [-]

The problem with this argument is that it assumes there is a general consensus on "what intelligence is".

BoorishBears a day ago | parent | prev [-]

What about the fact that frontier labs are spending more compute on viral AI video slop and soon-to-be-obsolete workplace use cases than on research?

Even if you don't understand the technicals, surely you understand that if any party were on the verge of AGI, they wouldn't behave as these companies behave?

echoangle a day ago | parent | next [-]

What does that tell you about AI in 100 years though? We could have another AI winter and then a breakthrough and maybe the same cycle a few times more and could still somehow get AGI at the end. I’m not saying it’s likely but you can’t predict the far future from current companies.

BoorishBears a day ago | parent [-]

You're making the mistake of assuming the failure of the current companies would be separate from the failure of AI as a technology.

If we continue the regime where OpenAI gets paid to buy GPUs and they fail, we'll have a funding winter regardless of AI's progress.

I think there is a strong bull case for consumer AI but it looks nothing like AGI, and we're increasingly pricing in AGI-like advancements.

Rudybega a day ago | parent | prev | next [-]

> What about the fact that frontier labs are spending more compute on viral AI video slop and soon-to-be-obsolete workplace use cases than on research?

That's a bold claim, please cite your sources.

It's hard to find super precise sources on this for 2025, but Epoch AI has a pretty good summary for 2024 (with core estimates drawn from The Information and the NYT):

https://epoch.ai/data-insights/openai-compute-spend

The most relevant quote: "These reports indicate that OpenAI spent $3 billion on training compute, $1.8 billion on inference compute, and $1 billion on research compute amortized over “multiple years”. For the purpose of this visualization, we estimate that the amortization schedule for research compute was two years, for $2 billion in research compute expenses incurred in 2024."

Unless you think that this rough breakdown has completely changed, I find it implausible that Sora and workplace use cases constitute ~42% of total training and inference spend (and I think you could probably argue a fair bit of that training spend is still "research" of a sort, which makes your statement even more implausible).
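
For anyone who wants to check the arithmetic, here's a quick back-of-envelope sketch in Python of where that ~42% comes from (the dollar figures are the Epoch AI estimates quoted above; the two-year amortization is their assumption):

    # Epoch AI's 2024 estimates, in billions of USD
    training = 3.0    # training compute spend
    inference = 1.8   # inference compute spend
    research = 2.0    # research compute, assuming their two-year amortization

    # For "more compute on video slop and workplace use cases than research"
    # to hold, those uses would have to exceed the research spend, i.e. take
    # up at least this share of the training + inference budget:
    share = research / (training + inference)
    print(f"{share:.1%}")  # 41.7%, i.e. the "~42%" above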

BoorishBears a day ago | parent [-]

Sorry, I'm giving too much credit to the reader here, I guess.

"AI slop and workplace usecases" is a synecdoche for "anything that is not completing then deploying AGI".

The cost of Sora 2 is not just the compute to do inference on videos; it's, for example, the ablations that weigh human preference against general world-model performance for that architecture. It's the cost of rigorous safety and alignment post-training. It's the legal noise and risk that using IP in that manner causes.

And in that vein, the anti-signal is stuff like the product work of verifying users in order to reduce content moderation.

These consumer use cases could be viewed as furthering the mission if they were more deeply targeted at collecting tons of human feedback, but these applications overwhelmingly are not architected primarily to serve that purpose. There's no training on API usage, there are barely any prompts for DPO except when they want to test a release for human preference, etc.

None of this noise and static has a place if you're serious about hitting AGI, or even believe you can on any reasonable timeline. If you're positing that you can turn grains of sand into thinking, intelligent beings, ChatGPT erotica is not on the table.

dwaltrip a day ago | parent | prev [-]

They don’t.

BoorishBears a day ago | parent [-]

Is that why Sam is on Twitter saying that people paying them $20 a month are their top compute priority, as they double compute in response to people complaining about their not-AGI, which is a constant compute suck between deployment and stuff like post-training specifically for making the not-AGI compatible with outside brand sensibilities?

giardini 6 hours ago | parent | prev | next [-]

I will tell my wife (who does our investing) about your bet: I've always felt a bit too invested in AI promises.

jb1991 says >"Which of course raises the interesting question of how I can make good on this bet."<

Have children...

tim333 a day ago | parent | prev | next [-]

I'd bet the other way, because I think Moore's-law-like advances in compute will make things much easier for researchers.

Like, I was watching Hinton explain LLMs to Jon Stewart, and they were saying they came up with the algorithm in 1986, but then it didn't really work for decades, until now, because the hardware wasn't up to it (https://youtu.be/jrK3PsD3APk?t=1899).

If things were 1000x faster you could semi-randomly try all sorts of arrangements of neural nets to see which think better.
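
Something like this toy sketch, say; evaluate() here is just a made-up stand-in for actually training a candidate net and scoring how well it thinks:

    import random

    # Randomly sample network shapes and keep whichever scores best.
    # evaluate() is a dummy; in practice it would train the candidate
    # net and return a validation score.
    def evaluate(depth, width):
        return -abs(depth - 12) - abs(width - 512) / 100

    best_score, best_arch = float("-inf"), None
    for _ in range(1000):  # 1000x the compute means 1000x the candidates
        arch = (random.randint(1, 64), random.choice([64, 128, 256, 512, 1024]))
        score = evaluate(*arch)
        if score > best_score:
            best_score, best_arch = score, arch

    print("best (depth, width):", best_arch)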

jb1991 a day ago | parent [-]

You're making the common assumption that "the algorithm" is everything we need to get to AGI and that it's just a question of scaling.

tim333 a day ago | parent [-]

I guess so. Is there reason to think an appropriate algorithm and scale can't do that?

jb1991 18 hours ago | parent [-]

Yes, perhaps an "appropriate algorithm" could, but it is my opinion that we have not found that algorithm. LLMs are cool but I think they are very primitive compared to human intelligence and we aren't even close to getting AGI via that route.

tim333 15 hours ago | parent [-]

I agree with you that we are not there yet, algorithm wise.

zurfer a day ago | parent | prev | next [-]

Well, you wouldn't bet all your assets, because it would be an illiquid market that could only resolve in your favor in 80 years at the earliest.

If you're really serious about it, put the money into a prediction market. Polymarket has multiple AGI bets.

yodsanklai a day ago | parent [-]

I see only one, with a 4% chance in 2025 (obviously...). And AGI is defined as "OpenAI announces they reached AGI".

https://polymarket.com/event/openai-announces-it-has-achieve...

encroach a day ago | parent [-]

Here's one at 46% for 2030. It's had $350k in volume across the 4 markets.

https://kalshi.com/markets/kxoaiagi/openai-achieves-agi/oaia...

echoangle a day ago | parent [-]

That’s also about OpenAI claiming they have AGI. That doesn’t resolve based on actual AGI.

tim333 a day ago | parent [-]

I wonder if there is a test for AGI that is definite enough to bet on. My personal test idea: it counts as AGI when you can send for a robot to come fix your plumbing rather than needing a human.

plaidfuji 2 days ago | parent | prev | next [-]

Should probably just short nvidia

Thrymr a day ago | parent | next [-]

"just short nvidia" is not simple. Even if you believe it is overvalued, and you are correct, a short is a specific bet that the market will realize that fact in a precise amount of time. There are very significant risks in short selling, and famously, the market can stay irrational longer than you can remain solvent.

simonsarris a day ago | parent | prev | next [-]

There is a wide space where LLMs and their offshoots make enormous productivity gains while looking nothing like actual artificial intelligence (which has been rebranded as AGI), and Nvidia turns out to have a justified valuation, etc.

lm28469 a day ago | parent [-]

It's been three years now; where is it? Everyone on HN is now a 10x developer, so where are all the new startups making $$$? Employees are 10x more productive, so where are the 10x revenues? Or even 2x?

Why is growth over the last 3 years completely flat once you remove the proverbial AI-pickaxe sellers?

What if all the slop generated by LLMs counterbalances any kind of productivity boost? 10x more bad code, 10x more spam emails, 10x more bots.

Etheryte a day ago | parent | prev | next [-]

You can generally buy options only a few years out. A few years is decidedly shorter than the lifetime of everyone reading this thread.

lbhdc 2 days ago | parent | prev | next [-]

“Markets can remain irrational longer than you can remain solvent.”

guluarte a day ago | parent | prev [-]

That's probably a good idea; either the AI bubble pops or competitors catch up.

asah 2 days ago | parent | prev | next [-]

Depends on the definition. I might take that bet, because under some definitions we're already here.

Example: "better than the average human across many thinking tasks" is already done.

rootusrootus 2 days ago | parent | next [-]

I think that the definition needs to include something about performance on out-of-training tasks. Otherwise we're just talking about machine learning, not anything like AGI.

balder1991 a day ago | parent [-]

Yes, as stated in this video: https://youtu.be/COOAssGkF6I

Yizahi a day ago | parent | prev | next [-]

A calculator can do arithmetic better than a human. Does this mean we have had so-called AI for half a century now?

KeplerBoy a day ago | parent | next [-]

That's how the term was sometimes used before. Think of video game AIs: those weren't (and still aren't) especially clever, but they were called AIs and nobody batted an eye at that.

Yizahi a day ago | parent [-]

When I write "AI" I mean what LLM apologists mean by "AGI". So, to rephrase: I was talking about so-called AGI existing 50 years ago in a calculator. I don't like this recent term inflation.

xboxnolifes a day ago | parent | prev | next [-]

A calculator does 1 thinking task.

Yizahi a day ago | parent [-]

First of all, it's zero thinking tasks; calculators can't think. But let's call it that for the sake of argument. An LLM can do fewer than a dozen thinking tasks, and I'm being generous here: generating text, generating still images, generating digital music, generating video, and generating computer code. That's about it. Is that a complete and exhaustive list of all that constitutes a human? Or at least a human mind? If some piece of silicon can do 5-6 tasks, is it a human equivalent now? (AI, aka AGI, presumes human-mind parity.)

CamperBob2 a day ago | parent | prev [-]

Let's get an English major to take a calculator to the International Math Olympiad, and see how that goes.

Yizahi a day ago | parent [-]

So a sign of AGI, or intelligence on par with a human, is the ability to solve small generic math problems? And it still requires being paired with a human-level-intelligence handler to even start solving those math problems? Is that about right?

CamperBob2 a day ago | parent [-]

Not even close to right. First of all, the "small generic math problems" given at IMO are designed to challenge the strongest students in the world, and second, the recent results have been based on zero-shot prompts. The human operator did nothing but type in the questions and hit Enter.

If you do not understand the core concepts very well, by any rational definition of "understand," then you will not succeed at competitions like IMO. A calculator alone won't help you with math at this level, any more than a scalpel by itself would help you succeed at brain surgery.

rhetocj23 a day ago | parent [-]

It may be difficult for you to believe or digest, but this means nothing for actual innovation. I'm yet to see the effects of LLMs send a shockwave through the real economy.

I've actually hung around Olympiad-level folks, and unfortunately their reach of intellect was limited in specific ways that didn't mean anything with regard to the real economy.

CamperBob2 a day ago | parent [-]

You seem to be arguing with someone who isn't here. My point is that if you think a calculator is going to help you do math you don't understand, you are going to have a really tough time once you get to 10th grade.

sambapa a day ago | parent | prev [-]

Good ol' Turing Test, but the real one, not the pop-sci one.

yodsanklai a day ago | parent | prev | next [-]

> how I can make good on this bet.

I agree with you, and I think that's where Polymarket or similar could be used to see if these people would put their money where their mouth is (my guess is that most won't).

But first we would need a precise definition of AGI. They may be able to come up with a definition that makes the bet winnable for them.

rokkamokka 2 days ago | parent | prev | next [-]

Will you take a wager of my one dollar versus your life assets? :)

FL33TW00D a day ago | parent | prev | next [-]

How certain are you of this really? I'd take this bet with you.

You're saying that we won't achieve AGI for ~80 years, i.e. until roughly 2100: a span equivalent to the time since the end of WW2.

To quote Shane Legg from 2009:

"It looks like we’re heading towards 10^20 FLOPS before 2030, even if things slow down a bit from 2020 onwards. That’s just plain nuts. Let me try to explain just how nuts: 10^20 is about the number of neurons in all human brains combined. It is also about the estimated number of grains of sand on all the beaches in the world. That’s a truly insane number of calculations in 1 second."

Are humans really so incompetent that we can't replicate what nature produced through evolutionary optimization with more compute than in EVERY human brain?
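
As a rough sanity check on Legg's numbers (using common published estimates for neuron count and population, not his figures; order of magnitude only):

    # Common published estimates, not Legg's own figures
    neurons_per_brain = 8.6e10   # ~86 billion neurons per brain
    population = 8e9             # ~8 billion people

    total_neurons = neurons_per_brain * population
    flops = 1e20                 # Legg's projected compute by 2030

    print(f"neurons in all human brains: ~{total_neurons:.1e}")           # ~6.9e+20
    print(f"FLOPS per neuron at 10^20 FLOPS: ~{flops/total_neurons:.2f}")  # ~0.15

So "about the number of neurons in all human brains combined" holds to within an order of magnitude, at least on these estimates.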

yodsanklai a day ago | parent [-]

How does a neuron compare to a flop?

colecut 2 days ago | parent | prev | next [-]

If you are right, you don't have to.

jaza a day ago | parent | prev | next [-]

Agreed. But I'd also be willing to bet big that the cycle of "new AI breakthrough is made, AI bubble ensues and hypesters claim AGI is just around the corner for several years, bubble bursts, all quiet on the AI front for a decade or two" continues beyond the lifetime of anyone reading this message right now.

vonneumannstan a day ago | parent | prev | next [-]

>I would bet all of my assets of my life that AGI will not be seen in the lifetime of anyone reading this message right now. That includes anyone reading this message long after the lives of those reading it on its post date have ended.

By almost any definition available during the '90s, GPT-5 Thinking/Pro would pretty much qualify. The idea that we are somehow not going to make any progress for the next century seems absurd. Do you have any actual justification for why you believe this? Every lab is saying they see a clear path to improving capabilities, and there's been nothing shown by any research I'm aware of to justify doubting that.

jb1991 a day ago | parent | next [-]

The fact is that no matter how "advanced" AI seems to get, it always falls short and does not satisfy what we think of as true AI. It's always a case of "it's going to get better", and it's been said like this for decades now. People have been predicting AGI for a lot longer than the time I predict we will not attain it.

LLMs are cool and fun and impressive (and can be dangerous), but they are not any form of AGI -- they satisfy the "artificial", and that's about it.

GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.

vonneumannstan a day ago | parent [-]

>GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.

Definitions in the '90s basically required passing the Turing test, which was probably passed by GPT-3.5. Current definitions are too broad, but something like "better than the average human at most tasks" seems to be basically passed by, say, GPT-5; definitions like "better than all humans at all tasks" or "better than all humans at all economically useful tasks" are closer to superintelligence.

jb1991 a day ago | parent [-]

The Turing Test was never about AGI.

nearbuy a day ago | parent [-]

That's pretty much exactly what Alan Turing made the Turing test for. From the Wikipedia entry:

> The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human.

> The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'"

> This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think".

jb1991 a day ago | parent [-]

Cherry-picking is not a meaningful contribution to this discussion. You are ignoring the entire section on that page called “Weaknesses”.

nearbuy 15 hours ago | parent [-]

Cherry-picking? You made a completely factually wrong statement. There was no cherry-picking. You said the Turing test was never about AGI. You didn't say it has weaknesses. Even if it were the worst test ever made, it was still about AGI.

Ignoring the entire article including the "Strengths" section and only looking at "Weaknesses" is the only cherry-picking happening.

And if you read the Weaknesses section, you'll see very little of it is relevant to whether the Turing test demonstrates AGI. Only 1 of the 9 subsections is related to this. The other weaknesses listed include that intelligent entities may still fail the Turing test, that if the entity tested remains silent there is no way to evaluate it, and that making AI that imitates humans well may lower wages for humans.

port3000 a day ago | parent | prev [-]

They have to say that, or there'll be a loud sucking sound as hundreds of billions in capital are withdrawn overnight.

vonneumannstan a day ago | parent [-]

OK, that's great, but do you have evidence suggesting scaling is actually plateauing, or that the capabilities of GPT-6 and Claude 4.5 Opus won't be better than current models?

jb1991 a day ago | parent [-]

You are suggesting, in your reference to scaling, that this is a game of quantity. It is not.

lvl155 a day ago | parent | prev | next [-]

We are pretty close. There are some insane cutting-edge developments being done in private.

jb1991 a day ago | parent [-]

I doubt your use of "insane".

tymscar a day ago | parent | prev | next [-]

Escrow

louiereederson 2 days ago | parent | prev | next [-]

short oracle

OtherShrezzing a day ago | parent [-]

Is anyone _not_ short Oracle? The downside risk for them is that they’ll lose a deal worth 10x their annual revenues.

Their potential upside is that OpenAI (a company with lifetime revenues of ~$10bn) has committed to a $300bn lease, if Oracle manages to build a fleet of datacenters faster than any company in history.

If you’re not short, you definitely shouldn’t be long. They’re the only one of the big tech companies I could reasonably see going to $0 if the bubble pops.

benregenspan a day ago | parent [-]

With the executive branch now picking "national champion" companies (as in Intel deal), I feel like there's a big new short risk to consider. Would the current administration allow Oracle to go to zero?

guluarte a day ago | parent | prev | next [-]

My bet is we will just slowly automate things more and more until one day someone points out that we've reached "AGI".

akomtu a day ago | parent | prev | next [-]

It's about the same as betting all your life savings on nuclear war not breaking out in our lifetime. If AGI gets created, we are toast and those assets won't be worth anything.

vonneumannstan a day ago | parent | prev | next [-]

You can make this bet functional if you really believe it, which of course you really don't. If you actually do, then I can introduce you to some people happy to take your money in perpetuity.

nextworddev a day ago | parent | prev [-]

you can do that by shorting Oracle here

stock_toaster a day ago | parent [-]

“Markets can remain irrational longer than you can remain solvent.” - John Maynard Keynes

nextworddev a day ago | parent [-]

Yeah, of course. Just framing the OP's bravado.