PurpleRamen 4 hours ago

They redefined AGI to be an economic thing so they can keep making up their stories. All that talk is really just business; there's no real science in the room.

weatherlite 2 hours ago | parent | next [-]

It's not a great definition, but it's not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy, it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long-term memory, physical-world manipulation, etc. A system that can do all of that well enough to do the jobs of doctors, programmers, and plumbers is generally intelligent in my view.

chromacity 2 hours ago | parent | next [-]

> It's not a great definition, but it's not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy

That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".

chaos_emergent 2 hours ago | parent | prev [-]

Yeah, I think this is more coherent than people realize. Economically relevant knowledge work is work that humans find cognitively demanding; otherwise it wouldn't be valued in the first place.

It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.

3form 2 hours ago | parent | next [-]

It's maybe somewhat nice conceptually, and certainly a useful addition, but the $100 billion profit figure mentioned elsewhere is not the right metric.

And then I think coming up with the right metric is just as subjective in this field as the technological question is.

aleph_minus_one 2 hours ago | parent | prev | next [-]

> Economically relevant knowledge work is work that humans find cognitively demanding; otherwise it wouldn't be valued in the first place.

Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).

Another point: a lot of work is valued in the first place because it centers on being submissive/docile with regard to bullshit (see the phenomenon of bullshit jobs). You really know better, but you have to keep your mouth shut.

Barbing 2 hours ago | parent | prev [-]

Was there a better way than setting an arbitrary $100b threshold?

e.g. average cost to complete a set of representative tasks
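For what it's worth, a metric like that could be sketched minimally as follows (all task names and dollar figures here are invented placeholders, not anything OpenAI or Microsoft have proposed):

```python
# Hypothetical sketch of the proposed metric: score a system by the
# average cost to complete a fixed basket of representative tasks.
def average_task_cost(costs):
    """Mean cost (in dollars) across a basket of representative tasks."""
    return sum(costs.values()) / len(costs)

# Made-up basket: cost for the system to complete each task once.
basket = {
    "summarize_contract": 12.50,
    "triage_support_ticket": 0.80,
    "write_unit_tests": 4.20,
}

# Track this number over time; falling cost on a fixed basket would be
# a progress signal, unlike a one-shot profit threshold.
print(round(average_task_cost(basket), 2))
```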

3form 2 hours ago | parent [-]

Yeah, I'm sure there could be a better metric, if the metric's purpose were to track progress toward the AGI target rather than to do business based on it (and so to hammer the metric into the shape of a "realistic goal").

JumpCrisscross 4 hours ago | parent | prev | next [-]

> They redefined AGI to be an economic thing

Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.

a2128 3 hours ago | parent | next [-]

Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

JumpCrisscross 2 hours ago | parent | next [-]

> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit

Wow. Maybe they spelled it out as aggregate gross income :P.

Robdel12 2 hours ago | parent | prev | next [-]

Yeah, seems like this was stage setting for them to exit. They were already trying to break the deal then. So I feel like this is lawyers finding a way to bend whatever they can to get out of the deal.

gowld 2 hours ago | parent | prev | next [-]

Companies that have created "AGI":

Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...

9rx 2 hours ago | parent | next [-]

Those were all achieved by "GI".

AndrewKemendo 2 hours ago | parent | prev [-]

For some definition of "Artificial", this holds perfectly.

A self-running massive corporation with no people that generates billions in profit, no matter what you call it, would completely upend all previous structural assumptions under capitalism.

bena 2 hours ago | parent | prev [-]

So no human on Earth is intelligent by that metric.

aleph_minus_one 2 hours ago | parent [-]

> So no human on Earth is intelligent by that metric.

That's a relevant aspect of the AGI concept.

wrs 3 hours ago | parent | prev | next [-]

It’s a system that generates $100 billion in profit. [0]

[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

pigeons 2 hours ago | parent [-]

Are inflation adjustments included?

rvz 2 hours ago | parent | prev | next [-]

Here's the sauce you requested: [0]

"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."

Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.

[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...

binary0010 3 hours ago | parent | prev [-]

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

From: https://openai.com/charter/

Fomite 2 hours ago | parent | next [-]

All humanity will benefit, but some humanity will benefit more than others.

red-iron-pine 2 hours ago | parent [-]

I am highly skeptical "all" of humanity will benefit, and many will suffer extreme negatives.

If you think drone targeting in Ukraine is scary now, wait until AGI is on it...

Ditto for exploiting vulns via mythos

ahoka 2 hours ago | parent | prev | next [-]

AGI is when the capitalists are not forced to share their profits with the intelligentsia.

freejazz 2 hours ago | parent | prev | next [-]

Marketing

binary0010 2 hours ago | parent [-]

I'm so confused why I was downvoted for answering the question that was asked.

benterix 2 hours ago | parent [-]

Because 1) your answer had nothing to do with the question, and 2) you quoted a slogan that life has already proven false.

binary0010 2 hours ago | parent [-]

[flagged]

JumpCrisscross 2 hours ago | parent [-]

> They redefined AGI to be an economic thing

> Huh. Source?

I don't think your original comment deserved to be downvoted. (Calling someone illiterate, on the other hand, did.)

But the "it" I was asking about was "AGI" as "an economical thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.

For what it's worth, I could have been clearer in my ask.

binary0010 2 hours ago | parent [-]

Yeah, I deserved to be downvoted for that last message, no doubt about that lol.

But originally I was just trying to be helpful by quoting their charter on what they now consider "AGI".

rvz 2 hours ago | parent | prev [-]

Translation: IPO.

atleastoptimal 2 hours ago | parent | prev | next [-]

It makes sense though. Humans are valued by the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.

I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.

techpression 2 hours ago | parent [-]

It doesn’t though. AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single aspect of human civilization; instead we are talking about replacing white-collar jobs.

fragmede 12 minutes ago | parent [-]

Not to worry, humanoid, generally useful robots are only a few years away.

senordevnyc 2 hours ago | parent | prev [-]

Please reveal the “scientific” definition of AGI.

Avicebron 2 hours ago | parent [-]

When we are having serious conversations about AI rights, and shutting off a model + harness is as impactful as a death sentence. (I'm extremely skeptical that, given the scale of compute/investment needed to produce the models we have, good as they are, our current LLM architecture gets us there, if there is even somewhere we want to go.)