delis-thumbs-7e 4 hours ago

It’s insane how they talk about AGI, as if it were some scientifically qualifiable thing that is certain to happen any time now. When I become the Olympic javelin champion, I will buy vegan ice cream for everyone with an HN account.

jmward01 2 hours ago | parent | next [-]

I think we keep changing the goalposts on AGI. If you gave me CC in the 80's I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person for most conversations). Now every time it gets better we push that definition further, widening every crack we find into a chasm and declaring that it isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.

Maybe we need to start thinking less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here when we can no longer tell that humans aren't LLMs.

sho_hn an hour ago | parent | next [-]

> I think we keep changing the goalposts on AGI

Isn't that exactly what you would expect to happen as we learn more about the nature and inner workings of intelligence and refine our expectations?

There's no reason to rest our case with the Turing test.

I hear the "shifting goalposts" riposte a lot, but then it would be very unexciting to freeze our ambitions.

At least in an academic sense, what LLMs aren't is just as interesting as what they are.

breezybottom an hour ago | parent | next [-]

I think the advancement in AI over the last four years has greatly exceeded the advancement in understanding the workings of human intelligence. What paradigm shift has there been recently in that field?

smcg an hour ago | parent [-]

What have we learned that isn't in my textbook from the 90s?

42 minutes ago | parent | next [-]
[deleted]
echelon an hour ago | parent | prev [-]

> What have we learned that isn't in my textbook from the 90s?

Does it matter?

We can do countless things people in the 90's would think was black magic.

If I showed the kid version of myself what I can do with Opus or Nano Banana or Seedance, let alone broadband and smartphones, I think I'd feel we were living in the Star Trek future. The fact that we can have "conversations" with AI is wild. That we can make movies and websites and games. It's incredible.

And there does not seem to be a limit yet.

charcircuit an hour ago | parent | prev [-]

I would agree with you if we were talking about trying to replicate some form of general intelligence, but we are talking about creating artificial intelligence.

_russross an hour ago | parent | prev | next [-]

Turing himself argued that trying to measure if a computer is intelligent is a fool's errand because it is so difficult to pin down definitions. He proposed what we call the "Turing test" as a knowable, measurable alternative. The first paragraph of his paper reads:

> I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Many people who want to argue about AGI and its relation to the Turing test would do well to read Turing's own arguments.

redox99 31 minutes ago | parent [-]

The Turing test ended up being kind of a flop. We basically passed it and nobody cared. That's because the Turing test is about whether a machine can fool a human, not about its intelligence per se.

anthonyrstevens 15 minutes ago | parent [-]

No, it's because certain people moved the goalposts. Nothing an LLM does or will do will make them believe that it's "intelligent", because they have a mental model of "intelligence" that is more religious than empirical.

sn0wr8ven an hour ago | parent | prev | next [-]

I don't think the goalpost has been shifted for AGI, or for the definition of AGI used by these corporations. It's just that they broke it down into stages so they could claim AGI was achieved. It was always a model or system that surpasses human capabilities at most tasks, i.e. one able to replace a human worker. The big companies broke it down into AGI stage 1, stage 2, etc. to be able to say they achieved AGI.

The Turing Test/Imitation Game is not a good benchmark for AGI. It is a linguistics test only. Many chatbots, even before LLMs, could pass the Turing Test to a certain degree.

Regardless, the goalpost hasn't shifted. Replacing human workforce is the ultimate end goal. That's why there's investors. The investors are not pouring billions to pass the Turing Test.

turtlesdown11 44 minutes ago | parent [-]

AGI moved from a technical goal to a marketing term

zug_zug an hour ago | parent | prev | next [-]

I don't think so... I think most of the sci-fi I grew up reading presented AGI that could reason better than humans could, like make a plan and carry it out.

Like, do people not know what the word "general" means? It means not limited to any subset of capabilities -- so it can teach itself to do anything that can be learned, like start a business. AI today can't really learn from its experiences at all.

Zambyte an hour ago | parent | prev | next [-]

Related: https://en.wikipedia.org/wiki/AI_effect

The truth is, we have had AGI for years now. We even have artificial superintelligence -- we have software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the people on that list are vanishingly few.

AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.

baq an hour ago | parent | next [-]

AGI in the common man's world model is ASI in the AI researcher's definitions, i.e. something obviously smarter at anything and everything you could ask it for regardless of how good of an expert you are in any domain.

also, I'm pretty sure some people will move goalposts further even then.

fragmede 25 minutes ago | parent | prev [-]

Hasn't met your sci-fi expectations, maybe. I pull a computer out of my pocket and talk with it. Sure, it gets tripped up here and there, but take a step back: holy shit, that's freaking amazing! I don't have a flying car or transparent aluminum, and society has its share of issues right now, but my car drives itself. Coming from the 90's, I think I'm living in the sci-fi future! (The only question is which one.)

pron an hour ago | parent | prev | next [-]

The Turing test pits a human against a machine, each trying to convince a human questioner that the other is the machine. If the machine knows how humans generally behave, for a proper test, the human contestant should know how the machine behaves. I think that this YouTube channel clearly shows that none of today's models pass the Turing test: https://www.youtube.com/@FatherPhi

lesuorac an hour ago | parent | prev | next [-]

> Maybe we need to start thinking less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here when we can no longer tell that humans aren't LLMs.

If you've never read the original paper [1], I recommend that you do so. We're long past the point where the test is whether some human can determine if X was done by man or machine.

[1]: https://courses.cs.umbc.edu/471/papers/turing.pdf

applfanboysbgon 28 minutes ago | parent | prev | next [-]

People thought Eliza was alive too in the 60s. AGI is not determined by how ignorant, uninformed humans view a technology they don't understand. That is the single dumbest criterion you could come up with for defining it.

Regarding shifting goalposts: you are suggesting the goalposts are being moved further away, but it's the exact opposite. The goalposts are being moved closer and closer. Someone from the 50s would have expected artificial intelligence to be something recognisable as essentially equivalent to human intelligence, just in a machine. Artificial intelligence in old sci-fi looked nothing like Claude Code. The definition has since been watered down again and again and again so that anything and everything a computer does is artificial intelligence. We might as well call a calculator AGI at this point.

zendist 32 minutes ago | parent | prev | next [-]

The goalposts keep moving because LLM hypeists keep saying LLMs are "close" to AGI (or even that they're there already). Any reasonably intelligent individual who knows anything about LLMs obviously rejects those claims, but the rest of the world doesn't.

An AGI would not have problems reading an analog clock. Or rather, it would not have a problem realizing it had a problem reading it, and would try to learn how to do it.

An AGI is not whatever (sophisticated) statistical model is hot this week.

Just my take.

redox99 27 minutes ago | parent [-]

Vision is still much weaker than text for LLMs. So you could argue we already have AGI for text but not vision inputs, or you could argue AGI requires being human level at text vision and sound.

ex-aws-dude 23 minutes ago | parent | prev | next [-]

Maybe moving the goalposts is how we find the definition?

arkadiytehgraet an hour ago | parent | prev | next [-]

Sure, in the 80s, after interacting with CC once you would call it 'alive'. After having interacted with it for 5-10 minutes you would clearly see that it is as far from AGI as something as mundane as a C compiler is.

andrepd an hour ago | parent | prev [-]

By that measure Eliza might pass the Turing test too. It just shows that the test is far from being a thought-terminating argument by itself.

PurpleRamen 4 hours ago | parent | prev | next [-]

They redefined AGI to be an economical thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.

weatherlite 2 hours ago | parent | next [-]

It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long term memory, physical world manipulation etc. A system that can do all of that well enough so it can do the jobs of doctors, programmers and plumbers is generally intelligent in my view.

chromacity 2 hours ago | parent | next [-]

> It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy

That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".

chaos_emergent 2 hours ago | parent | prev [-]

Yeah I think this is more coherent than people realize. Economically relevant knowledge work is things that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.

It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.

3form 2 hours ago | parent | next [-]

It's maybe somewhat nice conceptually, and certainly a useful added value - but the $100 billion profit mentioned elsewhere is not the right metric.

And then I think coming up with the right metric is just as subjective in this field as it is on the technological side.

aleph_minus_one 2 hours ago | parent | prev | next [-]

> Economically relevant knowledge work is things that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.

Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).

Another point: a lot of work is rather valued in the first place because the work centers around being submissive/docile with regard to bullshit (see the phenomenon of bullshit jobs). You really know better, but you have to keep your mouth shut.

Barbing 2 hours ago | parent | prev [-]

Was there a better way than setting an arbitrary $100b threshold?

e.g. average cost to complete a set of representative tasks

3form 2 hours ago | parent [-]

Yeah, I'm sure there could be a better metric, if the metric's purpose were to check on progress toward the AGI target rather than to do business based on it (and so hammer the metric into the shape of a "realistic goal").

JumpCrisscross 4 hours ago | parent | prev | next [-]

> They redefined AGI to be an economical thing

Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.

a2128 3 hours ago | parent | next [-]

Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

JumpCrisscross 2 hours ago | parent | next [-]

> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit

Wow. Maybe they spelled it out as aggregate gross income :P.

Robdel12 2 hours ago | parent | prev | next [-]

Yea, seems like this was stage-setting for them to exit. They were already trying to break the deal then. So I feel like this is lawyers finding a way to bend whatever they can to get out of the deal.

gowld 2 hours ago | parent | prev | next [-]

Companies that have created "AGI":

Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...

9rx 2 hours ago | parent | next [-]

Those were all achieved by "GI".

AndrewKemendo 2 hours ago | parent | prev [-]

For some definition of Artificial this holds perfectly

A self-running massive corporation with no people that generates billions in profit, no matter what you call it, would completely upend all previous structural assumptions under capitalism

bena 2 hours ago | parent | prev [-]

So no human on Earth is intelligent by that metric.

aleph_minus_one 2 hours ago | parent [-]

> So no human on Earth is intelligent by that metric.

That's a relevant aspect of the AGI concept.

wrs 3 hours ago | parent | prev | next [-]

It’s a system that generates $100 billion in profit. [0]

[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

pigeons 2 hours ago | parent [-]

Is that adjusted for inflation?

rvz 2 hours ago | parent | prev | next [-]

Here's the sauce you requested: [0]

"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."

Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.

[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...

binary0010 3 hours ago | parent | prev [-]

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity

From: https://openai.com/charter/

Fomite 2 hours ago | parent | next [-]

All humanity will benefit, but some humanity will benefit more than others.

red-iron-pine 2 hours ago | parent [-]

i am highly skeptical "all" of humanity will benefit, and many will have extreme negatives.

if you think drone targeting in Ukraine is scary now, wait until AGI is on it...

ditto for exploiting vulns via mythos

ahoka 2 hours ago | parent | prev | next [-]

AGI is when the capitalists are not forced to share their profits with the intelligentsia.

freejazz 2 hours ago | parent | prev | next [-]

Marketing

binary0010 2 hours ago | parent [-]

I'm so confused why I was down voted for answering the question that was asked?

benterix 2 hours ago | parent [-]

Because 1) your answer had nothing to do with the question, 2) you quoted a slogan that life verified as false.

binary0010 2 hours ago | parent [-]

[flagged]

JumpCrisscross 2 hours ago | parent [-]

> They redefined AGI to be an economical thing

> Huh. Source?

I don't think your original comment deserve to be downvoted. (Calling someone illiterate, on the other hand.)

But the "it" I was asking about was "AGI" as "an economical thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.

For what it's worth, I could have been clearer in my ask.

binary0010 2 hours ago | parent [-]

Yeah I deserve to be down voted for the last message no doubt on that lol.

But originally I was just trying to be helpful by quoting their charter on what they consider "agi" now.

rvz 2 hours ago | parent | prev [-]

Translation: IPO.

atleastoptimal 2 hours ago | parent | prev | next [-]

It makes sense though. Humans are valued by the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.

I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.

techpression 2 hours ago | parent [-]

It doesn’t, though. AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single thing about human civilization; instead we are talking about replacing white-collar jobs.

fragmede 9 minutes ago | parent [-]

Not to worry, humanoid, generally useful robots are only a few years away.

senordevnyc 2 hours ago | parent | prev [-]

Please reveal the “scientific” definition of AGI.

Avicebron 2 hours ago | parent [-]

When we are having serious conversations about AI rights, and shutting off a model + harness is as impactful as a death sentence. (I'm extremely skeptical, given the scale of compute/investment needed to produce the models we have _good as they are_, that our current LLM architecture gets us there, if there is even somewhere we want to go.)

latexr an hour ago | parent | prev | next [-]

> like it was some scientifically qualifiable thing

OpenAI and Microsoft do (did?) have a quantifiable definition of AGI, it’s just a stupid one that is hard to take seriously and get behind scientifically.

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.

dbbk an hour ago | parent [-]

I bet they were laughing their asses off when they came up with that. This is nonsensical.

robotresearcher 39 minutes ago | parent [-]

In the context of raising money and justifying investment?

lucaslazarus 4 hours ago | parent | prev | next [-]

It’s pretty much a religious eschatology at this point

trostaft 2 hours ago | parent | next [-]

> eschatology

From Wikipedia

Eschatology (/ˌɛskəˈtɒlədʒi/; from Ancient Greek ἔσχατος (éskhatos) 'last' and -logy) concerns expectations of the end of present age, human history, or the world itself.

In case anyone else is vocabulary skill checked like me

renticulous 3 hours ago | parent | prev | next [-]

Progress is generally salami slicing, just like escalation in geopolitics. Not a step function.

Russian Invasion - Salami Tactics | Yes Prime Minister

https://www.youtube.com/watch?v=yg-UqIIvang

BoredPositron 2 hours ago | parent [-]

We need to stop pretending we can do the next step without a hardware tock. It's not happening with current Nvidia products.

rtkwe 4 hours ago | parent | prev | next [-]

It feels like they have to say/believe it because it's kind of the only thing that can justify the costs being poured into it and the cost it will need to charge eventually (barring major optimizations) to actually make money on users.

2 hours ago | parent | prev | next [-]
[deleted]
kogasa240p 2 hours ago | parent | prev [-]

This. Someone take Silicon Valley's Adderall away.

CWwdcdk7h 4 hours ago | parent | prev | next [-]

It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D

ambicapter 2 hours ago | parent [-]

ATH?

khuey 2 hours ago | parent | next [-]

ATG = Advanced Technology Group, i.e. Uber's self-driving org.

murkt 2 hours ago | parent | prev [-]

Autonomous Thriving Hroup?

johnfn an hour ago | parent | prev | next [-]

It’s insane to me how yesterday someone posted an example of ChatGPT Pro one-shotting an Erdős problem after 90 minutes of thinking, and today you’re saying that AGI is a fairy tale.

measurablefunc an hour ago | parent [-]

It's not one-shot. Other people had attempted the same problem w/ the same AI & failed. You're confused about terms so you redefine them to make your version of the fairy tale real.

fsniper 36 minutes ago | parent [-]

We already know that the same problem has been examined by many credible mathematicians and couldn't be solved by any of them yet.

Why are we expecting AGI to one-shot it? Can't we have an AGI that occasionally fails to solve some math problem? Is the expectation that AGI be all-knowing?

By the way, I agree that AGI is not around the corner, and I am not arguing that any of the LLMs are "thinking machines". It's just that I agree the goalposts need to be set well.

measurablefunc 12 minutes ago | parent [-]

People want to believe in magic so they will find excuses to do so. Computers have been proving theorems for a long time now but Isabelle/HOL didn't have the marketing budget of OpenAI so people didn't care. Now that Sam Altman is doing the marketing people all of a sudden care about proving theorems.

DrBenCarson 2 hours ago | parent | prev | next [-]

We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke, if you feel your LLM is sentient, talk to a doctor)

fragmede 3 minutes ago | parent | next [-]

Talk to a doctor? In this economy? I've got ChatGPT to talk to. Wait hang on.

ianm218 2 hours ago | parent | prev | next [-]

What do you mean we were "supposed to have AGI last summer"?

People obviously have really strong opinions on AI and the hype around investments into these companies but it feels like this is giving people a pass on really low quality discourse.

This source [1] from this time last year says even the lab leaders' most bullish estimate was 2027.

[1]. https://80000hours.org/2025/03/when-do-experts-expect-agi-to...

zozbot234 2 hours ago | parent | prev [-]

ARM actually built AGI last month. Spoiler: it's a datacenter CPU.

computerphage an hour ago | parent | prev | next [-]

Show me a graph of your javelin skill doubling every six months and I'll start asking myself if you'll be the next champion

hamdingers an hour ago | parent [-]

I could easily make that graph a reality and sustain that pace for a couple years, considering I'm starting from 0 javelin skill.

a_shoeboy 15 minutes ago | parent | next [-]

It is a simple mathematical fact that if you get married one year and have twins the next, your household will contain over a million people within 20 years.
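For anyone who wants to check the joke's arithmetic, here's the deliberately naive extrapolation in a few lines of Python (assuming the household simply keeps doubling every year, as it did in years one and two):

```python
# Two data points: year 1, marriage (1 -> 2 people); year 2, twins (2 -> 4).
# Naively extrapolate "doubles every year" out to 20 years.
household = 1
for year in range(20):
    household *= 2
print(household)  # 2**20 = 1048576, i.e. "over a million people"
```

Which is exactly why "doubling every six months" graphs don't tell you who the next champion is.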

fragmede 7 minutes ago | parent [-]

https://xkcd.com/605/

edu an hour ago | parent | prev [-]

You could also nerf your performance at random times and then get good at it again, and extend the illusion for longer.

debarshri an hour ago | parent | prev | next [-]

I saw a founder make decisions based on what OpenAI and Claude were recommending all the time. I think all leaders, founders, etc. will converge on the same decisions, ideas, features, etc. I think the form factor of AGI is probably not what we expect it to be. AGI is probably here; we just don't know it or acknowledge it.

no_wizard 2 hours ago | parent | prev | next [-]

This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.

eitally 2 hours ago | parent [-]

I do not envy the stress the partnerships, strat ops and infra teams must be perpetually dealing with at OpenAI & Anthropic.

giwook an hour ago | parent | prev | next [-]

HN signup page about to get the hug of death

ozgrakkurt an hour ago | parent | prev | next [-]

but, is the world ready for your win? I'm very afraid your win might shake the world too much! THINK ABOUT IT!

I think this might be similar to how we changed to cars when we were using horses

hununu 2 hours ago | parent | prev | next [-]

Thank you, I just created an account and looking forward to my ice cream.

hx8 4 hours ago | parent | prev | next [-]

Do the investments make sense if AGI is not less than 10 years away?

JumpCrisscross 4 hours ago | parent | next [-]

> Do the investments make sense if AGI is not less than 10 years away?

They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean that in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.

antupis 2 hours ago | parent | next [-]

Thing is that distillation is so easy that it would also need large scale regulatory capture to keep smaller competitors out.

iewj 4 hours ago | parent | prev [-]

What kind of ludicrous statement is this? Any monopoly with viable economics for profit with no threat of competition yields monopoly profits…

JumpCrisscross 4 hours ago | parent | next [-]

> Any monopoly with viable economics for profit with no threat of competition yields monopoly profits

"With viable economics" is the point.

My "ludicrous statement" is a back-of-the-envelope test for whether an industry is nonsense. For comparison, consolidating all of the Pets.com competitors in the late 1990s would not have yielded a profitable company.

eieiw 4 hours ago | parent [-]

Very convenient to leave out Amazon in your back of the envelope test, whose internal metrics were showing a path toward quasi-monopoly profits.

Do you argue in good faith?

There’s a difference between being too early vs being nonsense.

JumpCrisscross 4 hours ago | parent | next [-]

> Very convenient to leave out Amazon in your back of the envelope test, whose internal metrics were showing a path toward quasi-monopoly profits

Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point–the original model didn't work.

> There’s a difference between being too early vs being nonsense

When answering the question "do the investments make sense," not really. You're losing your money either way.

The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)

SkyEyedGreyWyrm 4 hours ago | parent | prev [-]

Malcolm Harris' Palo Alto attributed the failures of many dotcom startups, and Amazon's later success in the field, in part to labor economics. Dotcom-era delivery was done by highly trained, highly compensated, unionized in-company workers, whereas Amazon prevents unions, contracts (or contracted; I'm not up to date on this) delivery out to other companies, and has exploitative working conditions with high turnover. The economics are very different and are a big contributor to their success.

Maxatar 4 hours ago | parent | prev [-]

>"...viable economics for profit..."

OP did not include this requirement in their post because doing so would make the claim trivially true.

rapind 4 hours ago | parent | prev | next [-]

Best way to achieve AGI: Redefine AGI.

2ndorderthought 3 hours ago | parent [-]

They already did that, and to "AI" as well. That's how we got into this mess.

jrflo 4 hours ago | parent | prev [-]

The investments don't make sense.

2 hours ago | parent | prev | next [-]
[deleted]
HumblyTossed 4 hours ago | parent | prev | next [-]

The continued fleecing of investors.

renticulous 3 hours ago | parent [-]

Investors are typically people with surplus money to invest. Progress cannot be made without trial and error. So fleecing of investors for the greater good of humanity is something I shall allow.

ambicapter 2 hours ago | parent [-]

A "surplus of money"? So people saving for retirement have a "surplus of money"? Basically if any money is standing still, it's a legitimate tactic to just...take it, in your mind.

Other people just call it "theft".

HWR_14 2 hours ago | parent [-]

No one with a small 401k is able to invest in OpenAI/Anthropic/etc. The people investing in those companies can afford to lose their investments.

bigfishrunning 2 hours ago | parent | next [-]

"small" 401ks are usually made up of mutual funds. Those funds are run by investment banks (think Fidelity or JP Morgan) and they *absolutely* invest in companies like OpenAI and Anthropic. Your average middle class worker has investment money tied up in these crooks, but probably indirectly. When they piss away that money, it's not just rich jerks that are holding the bag.

HWR_14 an hour ago | parent | next [-]

401ks are run by investment banks and investment banks invest in OpenAI/Anthropic, but those aren't the same parts of the company in any meaningful way. The 401ks are in public companies or bonds.

2 hours ago | parent | prev [-]
[deleted]
sumeno 2 hours ago | parent | prev [-]

Which is why they are desperate to IPO

RobRivera 4 hours ago | parent | prev | next [-]

Make mine p p p p p p vicodin

stavros 4 hours ago | parent | prev | next [-]

At this point, AGI is either here, or perpetually two years away, depending on your definition.

greybeard69 4 hours ago | parent | next [-]

Full Self-Driving 2.0

xienze 4 hours ago | parent | prev [-]

It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so to give a talk on AI. They very confidently stated that AGI had already been "solved": we know exactly how to do it, the only problem is the hardware. But they estimated that would come in about ten years...

letmevoteplease an hour ago | parent | next [-]

Let me just repeat that: "Microsoft" came to your school in 2002 and "confidently stated" that AI had been solved. Really interesting story.

xienze an hour ago | parent [-]

Yes, they did. We had guest speakers from Microsoft talking about AI. AI has been a decades-long grift, it's not something that just appeared out of thin air a few years ago.

What part do you find hard to believe? That tech companies would send people to speak at a university's computer science functions?

Let me give you another one you'll think I'm making up: virtual reality was a thing back in the mid- to late-90s and people were confidently hyping it up back then.

jakeydus 4 hours ago | parent | prev [-]

I knew flappy bird was a bigger deal than it got credit for. Didn’t realize it was agi until just now.

mekael an hour ago | parent | prev | next [-]

I’m most likely going to be downvoted, but Tofutti Cuties are absolutely delicious vegan ice cream bars. And i’d consume one in celebration of your accomplishment.

theplatman 4 hours ago | parent | prev | next [-]

When I realized that sama isn't that much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility.

sourraspberry 4 hours ago | parent | next [-]

You can read the leaked emails from the Musk lawsuit.

At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.

I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.

Grand delusion, perhaps.

meroes 35 minutes ago | parent | next [-]

There are 3 main facets behind AGI pushers:

1) True believers 2) Hype 3) A way to wash blatant copyright infringement

True believers are scary and can be taken advantage of. I played DOTA from 2005 on, and beating pros is not enough for AGI belief. I get that the learning is more indirect than a deterministic decision tree, but the scaling limitations and the gaps in what types of knowledge can be ingested make AGI a pipe dream for my lifetime.

skippyboxedhero 2 hours ago | parent | prev | next [-]

Yes, all of the people involved live in a delusion bubble. Their economic and social existence depends, at this point, on making increasingly bombastic and eschatological claims about AGI. By the standards of normal human psychological function, these people are completely insane.

Definitely interesting to watch from the perspective of human psychology but there is no real content there and there never was.

The stuff around Mythos is almost identical to O1. Leaks to the media that AGI had probably been achieved. Anonymous sources from inside the company saying this is very important and talking about the LLM as if it was human. This has happened multiple times before.

AndrewKemendo an hour ago | parent [-]

There are those of us who have been into the AGI eschatology since the 90s, after following Kurzweil's work.

So just understand there are a lot of us "insane" people out there, and we're making really insane progress toward the original 1955 AI goals.

We’re going to continue to work on this no matter what.

freejazz 2 hours ago | parent | prev [-]

> Ilya Sutskever genuinely believed it

Seems more like an incredibly embarrassing belief on his part than something I should be crediting.

ianm218 2 hours ago | parent [-]

If someone working on early computer networks had thought they could scale up worldwide and that soon people would be launching trillion-dollar companies on the internet, you would have called that delusional, right?

He doesn't need to be right, but it's not crazy at all to look at superhuman performance in DOTA and think that could lead to superhuman performance at general human tasks in the long run.

iewj 4 hours ago | parent | prev [-]

He’s a glorified portfolio manager (questionable how good he actually is given the results vs Anthropic and how quickly they closed the valuation gap with far less money invested) + expert hype man to raise money for risky projects.

lokar 4 hours ago | parent [-]

From the reporting I’ve read his main attributes are being a sociopath with an amazing ability to manipulate people 1:1

AndrewKemendo 3 hours ago | parent | prev | next [-]

> some scientifically qualifiable thing that is certain to happen any time now

Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.

Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?

Is there really any question at this point about whether humans will be able to fully automate any desired action in the future?

otabdeveloper4 4 hours ago | parent | prev | next [-]

> AGI

We already have several billion useless NGI's walking around just trying to keep themselves alive.

Are we sure adding more GI's is gonna help?

ModernMech 4 hours ago | parent | prev | next [-]

AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...

...just please stop burning our warehouses and blocking our datacenters.

cyanydeez 2 hours ago | parent | prev | next [-]

It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-jesus will come save us all.

red-iron-pine an hour ago | parent [-]

[dead]

throwaway613746 2 hours ago | parent | prev | next [-]

[dead]

Kenji 3 hours ago | parent | prev | next [-]

[dead]

nikeyshon 4 hours ago | parent | prev | next [-]

Where do I sign up?

cubefox 2 hours ago | parent | prev | next [-]

A few years ago most people here would have said the same thing about an AI doing most of their programming. Now people here are saying it about AGI. It's a ridiculous inability to extrapolate.

someguyiguess 4 hours ago | parent | prev | next [-]

Any sufficiently complex LLM is indistinguishable from AGI

JumpCrisscross 4 hours ago | parent | next [-]

> Any sufficiently complex LLM is indistinguishable from AGI

Isn't this tautology? We've de facto defined AGI as a "sufficiently complex LLM."

Schlagbohrer 4 hours ago | parent | next [-]

Yes! Same logic as the financials, in which the companies pass back and forth the same $200 Billion promissory note.

ohyoutravel 2 hours ago | parent | prev [-]

No, it's just an example of something that's indistinguishable from AGI. Of all the things that are or are indistinguishable from AGI, a sufficiently complex LLM is one. A sufficiently complex decision tree is probably another. The emergent properties of applying an excess of memory to BonziBuddy might be a third.

izzydata 4 hours ago | parent | prev | next [-]

If we take that statement as fact, then I don't believe we are even close to an LLM being sufficiently complex.

However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI and without starting from scratch down an alternate path it may never happen.

LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?

At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.

hackinthebochs 2 hours ago | parent [-]

>I'll believe it when I see it.

And how will you know AGI when you see it?

esafak 4 hours ago | parent | prev [-]

Some might be missing the reference: https://en.wikipedia.org/wiki/Clarke's_three_laws

karmasimida 3 hours ago | parent | prev | next [-]

> some scientifically qualifiable thing that is certain to happen any time now.

If you had presented GPT 5.5 to me 2 years ago, I would have called it AGI.

romaniv 2 hours ago | parent | next [-]

Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (1972 What Computers Can't Do). All this demonstrates is that we need to be careful with various claims about computer intelligence.

AntiUSAbah 2 hours ago | parent [-]

Sure, but it was probably stuck at doing that one thing.

Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts, and so on.

Let's talk about what this 'hype' was if we see a clear ceiling appear and we get 'stuck' on progress; until then, I'll reserve my judgment for judgment day.

wongarsu 3 hours ago | parent | prev | next [-]

It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.

Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.

sigbottle 2 hours ago | parent | next [-]

I'm pretty sure most people take issue with AGI because we've been raised in a culture that holds AGI to be a super-entity that is a complete superset of humans and could never, ever be wrong about anything.

In some sense, this isn't really different from where society was headed anyway: the trend was already that more and more sections of the population were being deemed irrational, and you're just stupid/evil for disagreeing with the state.

But that reality was still probably at least a century out without AI. With AI, you have people constructing that narrative right now. It makes me wonder if these people even respect humanity at all.

Yes, you can prod slippery slope and go from "superintelligent beings exist" to effectively totalitarianism, but you'll find so many bad commitments there.

NoMoreNicksLeft an hour ago | parent | prev | next [-]

No one who read science fiction in 1955 would call any of the various models we know to be "artificial intelligence". They would be impressed with it, even excited at first that it was that... until they'd had a chance to evaluate it.

Science fiction from that era even had the concept of what models are... they'd call it an "oracle". I can think of at least 3 short stories (though remembering the authors just isn't happening for me at the moment). The concept was of a device that could provide correct answers to any question. But these devices had no agency, were dependent on framing the question correctly, and limited in other ways besides (I think in one story, the device might chew on a question for years before providing an answer... mirroring that time around 9am PST when Claude has to keep retrying to send your prompt).

We've always known what we meant by artificial intelligence, at least until a few years ago when we started pretending that we didn't. Perhaps the label was poorly chosen (all those decades ago) and could have a better label now (AGI isn't that better label, it's dumber still), but it's what we're stuck with. And we all know what we mean by it. We all almost certainly do not want that artificial intelligence because most of us are certain that it will spell the doom of our species.

Der_Einzige 2 hours ago | parent | prev [-]

Just don't move the goal posts. AGI was already here the day ChatGPT came out:

https://www.noemamag.com/artificial-general-intelligence-is-...

staticman2 2 hours ago | parent | prev | next [-]

If you didn't call GPT 3.5 AGI I do not believe you when you claim you would have called 5.5 AGI.

BloondAndDoom 2 hours ago | parent | prev | next [-]

I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point and what will justify their investments.

3form 2 hours ago | parent | prev | next [-]

... until you actually, like, use it and find out all the limitations it has.

vntok 2 hours ago | parent [-]

How is this relevant? Human General Intelligence has a lot of limitations as well and we have managed to do lots.

ifdefdebug 2 hours ago | parent [-]

This is like saying that talking about my financial limitations is irrelevant because Jeff Bezos also has financial limitations...

BoredPositron 3 hours ago | parent | prev | next [-]

GPT-4 was 3 years ago... it's an iterative enhancement.

freejazz 2 hours ago | parent | prev | next [-]

And I've been told my job (litigation attorney) is about to be replaced for over 3 years now; it has yet to come close.

BloondAndDoom 2 hours ago | parent [-]

People always overestimate the impact of technology because they don't understand the human aspect of many businesses. Will it eventually be replaced, or will the shape of this kind of work be completely different in the future? That's an easy yes. When is that future? That's a big unknown; in my experience this kind of stuff takes at least a decade (and possibly more in this case) to make a big impact like replacing all of X.

2 hours ago | parent | next [-]
[deleted]
freejazz 2 hours ago | parent | prev [-]

These models need orders-of-magnitude changes before they can be more helpful than just "find me an example of [an extremely basic principle]", which most of the time they don't do right anyway.

nromiun 2 hours ago | parent | prev [-]

If you present ELIZA to people some will think it is AGI today.

There is a reason so many scams happen with technology. It is too easy to fool people.

AntiUSAbah 2 hours ago | parent | prev [-]

We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we've barely started.

If this progress, focus, and these resources don't lead to AGI, despite us already seeing a system that was unimaginable 6 years ago, we will never see AGI.

And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.

mort96 2 hours ago | parent | next [-]

If I'm reading you right, your opinion is essentially: "If building bigger and bigger statistical next word predictors won't lead to artificial general intelligence, we will never see artificial general intelligence"

I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?

AntiUSAbah 2 hours ago | parent [-]

It's not a statistical next-word predictor.

'Predicting the next word' is the learning mechanism of the LLM, which leads to a latent space that can encode higher-level concepts.

Basically, an LLM 'understands' about as much as it needs to in order to respond in a reasonable way.

An LLM doesn't predict German text or the Chinese language. It predicts the concept and then has a language layer outputting tokens.

And it's not just LLMs that are progressing fast: voice synthesis and voice understanding jumped significantly, as did motion detection, skeleton movement, virtual world generation (see Nvidia's way of generating virtual worlds for their car training), protein folding, etc.

turtlesdown11 38 minutes ago | parent | next [-]

> Its not a statistical next word predictor.

it absolutely is a next word predictor

mort96 2 hours ago | parent | prev | next [-]

I'm sorry, but the input to a model is a sequence of tokens and the output is a probability distribution over the most likely next token. It's a very, very, very fancy next-token predictor, but that is fundamentally what it is. I'm making the argument that this paradigm might not give rise to a general intelligence no matter how much you scale it.
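For readers unfamiliar with what that loop looks like mechanically, here is a toy sketch of the "tokens in, probability distribution out, repeat" interface. A hand-written bigram table stands in for the neural network; the token names and scores are entirely made up for illustration:

```python
import math

# Toy stand-in for the model: logits (unnormalized scores) for which token
# may follow the previous one. A real LLM computes these with a transformer
# over a vocabulary of ~100k tokens, conditioned on the whole context.
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.0, "<end>": 0.1},
    "cat": {"sat": 2.5, "<end>": 0.5},
    "dog": {"sat": 1.5, "<end>": 1.0},
    "sat": {"<end>": 3.0},
}

def next_token_distribution(context):
    """Token sequence in, probability distribution out (softmax over logits)."""
    logits = BIGRAM_LOGITS[context[-1]]  # toy model: only the last token matters
    z = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / z for tok, v in logits.items()}

def generate(prompt, max_steps=10):
    """Autoregressive generation: repeatedly pick a next token and append it."""
    tokens = list(prompt)
    for _ in range(max_steps):
        dist = next_token_distribution(tokens)
        best = max(dist, key=dist.get)  # greedy decoding; real systems often sample
        if best == "<end>":
            break
        tokens.append(best)
    return tokens

print(generate(["the"]))  # prints ['the', 'cat', 'sat']
```

However fancy the scoring function gets, the outer loop stays exactly this shape, which is the point being argued over in this thread.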

CamperBob2 2 hours ago | parent [-]

> It's a very very very fancy next token predictor

Yes, and unless you are prepared to rebut the argument with evidence of the supernatural, that's all there is, period. That's all we are.

So tired of the thought-terminating "stochastic parrot" argument.

godshatter 41 minutes ago | parent | next [-]

Do LLMs even learn? The companies that build them build new models based partly on the conversations the older models have had with people, but do they incorporate knowledge into their neural nets as they go along?

Can an LLM decide, without prompting or api calls, to text someone or go read about something or do anything at all except for waiting for the next prompt?

Do LLMs have any conceptual understanding of anything they output? Do they even have a mechanism for conceptual understanding?

LLMs are incredibly useful and I'm having a lot of fun working with them, but they are a long way from some kind of general intelligence, at least as far as I understand it.

CamperBob2 38 minutes ago | parent [-]

Yes, to all of your questions. You need to use a recent LLM in an agentic harness. Tell it to take notes, and it will.

After a bit of further refinement, we'll start to call that process "learning." Eventually the question of who owns the notes, who gets to update them, and how, will become a huge, huge deal.

mort96 an hour ago | parent | prev [-]

I'm not sure why you think you know that the human brain works by predicting the next token.

It's not supernatural, I believe that an artificial intelligence is possible because I believe human intelligence is just a clever arrangement of matter performing computation, but I would never be presumptuous enough to claim to know exactly how that mechanism works.

My opinion is that human intelligence might be what's essentially a fancy next token predictor, or it might work in some completely different way; I don't know. Your claim is that human intelligence is a next token predictor. It seems like the burden of proof is on you.

dpark an hour ago | parent [-]

> Your claim is that human intelligence is a next token predictor.

Literally it is, at least in many of its forms.

You accepted CamperBob2’s text as input and then you generated text as output. Unless you are positing that this behavior cannot prove your own general intelligence, it seems plain that “next token generator” is sufficient for AGI. (Whether the current LLM architecture is sufficient is a slightly different question.)

mort96 an hour ago | parent [-]

Before I start typing, I think abstractly about the topic and decide on what I shall write in response. Due to the linear nature of time, typing necessarily happens one word at a time, but I am never producing a probability distribution of words (at least not in a way that my conscious self can determine), I consider an entire idea and then decide what tokens to enter into the computer in order to communicate the idea to you.

And while I am typing, and while I am thinking before I type, I experience an array of non-textual sensory input, and my whole experience of self is to a significant extent non-lingual. Sometimes, I experience an inner monologue, sometimes I think thoughts which aren't expressed in language such as the structure of the data flow in a computer program, sometimes I don't think and just experience feelings like a kiss or the sun on my skin or the euphoria of a piece of music which hits just right. These experiences shape who I am and how I think.

When I solve difficult programming problems or other difficult problems, I build abstract structures in my mind which represents the relevant information and consider things like how data flows, which parts impact which other parts, what the constraints are, etc. without language coming in to play at all. This process seems completely detached from words. In contrast, for a language model, there is no thinking outside of producing words.

It seems self-evident to me that at least parts of the human experience fundamentally can not be reduced to next token prediction. Further, it seems plausible to me that some of these aspects may be necessary for what we consider general intelligence.

Therefore, my position is: it is plausible that next token prediction won't give rise to general intelligence, and I do not find your argument convincing.

dpark 17 minutes ago | parent | next [-]

> I am never producing a probability distribution of words (at least not in a way that my conscious self can determine)

Inability to introspect your own word selections does not mean it’s meaningfully different from what an LLM does. There is plenty of evidence that humans do a lot of things that are not driven by conscious choice and we rationalize it after the fact.

> I consider an entire idea and then decide what tokens to enter into the computer in order to communicate the idea to you.

And how is that different? You are not so subtly implying that an LLM can’t consider an idea but you haven’t established this as fact. i.e. You are starting with the assumption that an LLM cannot possibly think and therefore cannot be intelligent, but this is just begging the question.

> sometimes I don't think and just experience feelings like a kiss or the sun on my skin or the euphoria of a piece of music which hits just right. These experiences shape who I am and how I think.

You cannot spin experience as intelligence. LLMs have the experience of reading the entire internet, something you cannot conceive of. Certainly your experiences shape who you are. This is a different axis from intelligence, though.

> This process seems completely detached from words. In contrast, for a language model, there is no thinking outside of producing words.

Both sides of this claim seem dubious. The second half in particular seems to be founded on nothing. Again, you are asserting with no support that there is no thinking going on.

> It seems self-evident to me that at least parts of the human experience fundamentally can not be reduced to next token prediction. Further, it seems plausible to me that some of these aspects may be necessary for what we consider general intelligence.

I don’t think anyone sane is claiming an LLM can have a human experience. But it is not clear that a human experience is necessary for intelligence.

mort96 6 minutes ago | parent [-]

> Inability to introspect your own word selections does not mean it’s meaningfully different from what an LLM does. There is plenty of evidence that humans do a lot of things that are not driven by conscious choice and we rationalize it after the fact.

This is correct and also completely irrelevant. I am describing what I experience, and describing how my experience seems very different to next token prediction. I therefore conclude that it's plausible that there is more involved than something which can be reduced to next token prediction.

> And how is that different? You are not so subtly implying that an LLM can’t consider an idea but you haven’t established this as fact. i.e. You are starting with the assumption that an LLM cannot possibly think and therefore cannot be intelligent, but this is just begging the question.

Language models can't think outside of producing tokens. There is nothing going on within an LLM when it's not producing tokens. The only thing it does is taking in tokens as input and producing a token probability distribution as output. It seems plausible that this is not enough for general intelligence.

> You cannot spin experience as intelligence.

Correct, but I can point out that the only generally intelligent beings we know of have these sorts of experiences. Given that we know next to nothing about how a human's general intelligence works, it seems plausible that experience might play a part.

> LLMs have the experience of reading the entire internet, something you cannot conceive of.

I don't know that LLMs have an experience. But correct, I cannot conceive of what it feels like to have read and remembered the entire Internet. I am also a general intelligence and an LLM is not, so there's that.

> Certainly your experiences shape who you are. This is a different axis from intelligence, though.

I don't know enough about what makes up general intelligence to make this claim. I don't think you do either.

> Both sides of this claim seem dubious. The second half in particular seems to be founded on nothing. Again, you are asserting with no support that there is no thinking going on.

I'm telling you how these technologies work. When a language model isn't performing inference, it is not doing anything. A language model is a function which takes a token stream as input and produces a token probability distribution as output. By definition, there is no thinking outside of producing words. The function isn't running.

> I don’t think anyone sane is claiming an LLM can have a human experience. But it is not clear that a human experience is necessary for intelligence.

I 100% agree. It is not clear whether a human experience is necessary for intelligence. It is plausible that something approximating a human-like experience is necessary for intelligence. It is also plausible that something approximating human-like experience is completely unnecessary and you can make an AGI without such experiences.

It's plausible that next token prediction is sufficient for AGI. It's also plausible that it isn't.

AntiUSAbah 42 minutes ago | parent | prev | next [-]

But an LLM shows similar effects.

COCONUT, PCCoT, PLaT, and co. are directly linked to 'thinking in latent space'. Yann LeCun is working on this too; we have JEPA now.

Also, how do you describe or explain how an LLM generates the next token when it should add a feature to an existing codebase? In my opinion it has structures which allow it to create a temporary model of that code.

For sure, an LLM lacks the emotional component, but consider what we humans do, which indicates to me that we are a lot closer to LLMs than we want to be: if you have a weird body feeling (stress, hot flashes, anger, etc.), your 'text area/LLM/speech area' also tries to make sense of it. It's not always very good at doing so. That emotional body feeling is not that aligned with it, and it takes time to either understand or ignore those types of inputs to the text-area/LLM/speech part of our brain.

I'm open to looking back in 5 years and saying 'man, that was a wild ride, but no AGI', but at the current quality of LLMs and all the other architectures, types of models, money, etc. being thrown at AGI, for now I don't see a ceiling at all. I only see crazy, unseen progress.

mort96 38 minutes ago | parent [-]

I don't understand what part of what I said you disagree with.

CamperBob2 37 minutes ago | parent | prev [-]

> Before I start typing, I think abstractly about the topic

Before you start typing, an fMRI machine can tell you which finger you'll lift first, before you know it yourself.

We are not special. Consciousness is literally a continuous hallucination that we make up to explain what we do and what we think, after the fact. A machine can be trained to behave identically, but it's not clear if that's the best way forward or not.

Edit due to rate limiting: to answer your question, the substrate your mind uses to drive this process can be considered an array of tokens that, themselves, can be considered 'words.'

It's hard to link sources -- what am I supposed to do, send you to Chomsky and other authorities who have predicted none of what's happening and who clearly understand even less?

mort96 25 minutes ago | parent | next [-]

> (Edit: to answer your question, the substrate your mind uses to drive this process can be considered an array of tokens that, themselves, can be considered 'words.')

This seems like a factual claim. Can you link a source?

(Also why respond in the form of an edit?)

mort96 33 minutes ago | parent | prev [-]

What's your argument? An fMRI can tell which finger I will lift first before that information makes its way to my consciousness, ergo next word prediction is sufficient for general intelligence? Do you hear yourself?

somewhereoutth an hour ago | parent | prev [-]

LLM proponents believe that these higher level encodings in latent space do in fact match the real world concepts described by our language(s).

However, a much simpler explanation for what we see with LLMs is that instead the higher level encodings in latent space match only the patterns of our language(s), and no deeper encoding/understanding is present.

It's Plato's Cave - the shadows on the wall are all an LLM ever sees, and somehow it is expected to derive the real reality behind them.

AntiUSAbah an hour ago | parent [-]

Could be, yes, for sure, but I think it would be very naive, given the current state of progress, to downplay what is happening.

At least the Mythos model, with its 10 trillion parameters, might indicate that the scaling law is valid. It's a little unfortunate that we still don't know much more about that model.

linhns an hour ago | parent | prev | next [-]

> And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics

Their progress is almost nought. Humanoids are stupid creations that are not good at anything in the real world. I'll give it to the machine dogs, at least they can reach corners we cannot.

benterix 2 hours ago | parent | prev | next [-]

Not sure if you're being sincere or sarcastic but some of us have lived through several AI winters now. And the fact that such a phenomenon exists is because of this terrible amount of hype the topic gets whenever any progress is made.

AntiUSAbah 2 hours ago | parent [-]

Which ones? At least in the last 4 years, there was no AI winter.

bigfishrunning 2 hours ago | parent | next [-]

The late 70s, again in the late 80s. See wikipedia.

https://en.wikipedia.org/wiki/AI_winter

AntiUSAbah 38 minutes ago | parent [-]

Yeah, and if you look at the blocking factors at that time (data, compute), those types of limits are currently nonexistent.

There is a difference to be acknowledged: in the 70s/80s, the whole world didn't suddenly start shifting to AI, right?

So why do so many smart and/or rich people push this? Hype? Yeah, sure, but hype was there for crypto too.

I bet it's an underlying understanding combined with the right time and the right components: massive capital for playing this game long enough to see through the required initial investment, the internet for fast data sharing, massive compute for the amount of data and processing you need, real-life business-relevant results (it already disrupts jobs), etc.

sumeno 2 hours ago | parent | prev [-]

History started well before 4 years ago

turtlesdown11 39 minutes ago | parent | prev | next [-]

> Progress is huge and fast

is it? we've already scaled up data input and LLMs in general; the only thing making them advance at all right now is adding processing power

bmitc 2 hours ago | parent | prev [-]

Same thing happened with self-driving cars. Oh and cryptocurrencies.

AntiUSAbah 2 hours ago | parent [-]

Self-driving never had the amount of compute, research adoption, and money that the current overall AI push has. It's not comparable.

Crypto was flawed from the beginning, and lots of people didn't understand it properly. Not even the fact that a blockchain can't secure a transaction from something outside of the blockchain.

bigfishrunning 2 hours ago | parent | next [-]

The LLMs are flawed, and lots of people don't understand them properly.

AntiUSAbah 37 minutes ago | parent [-]

People are researching how to make LLMs more stable, and from a statistical point of view we are already down to 10% (progress is being made here).

LLMs don't have to be perfect; they just need to be as good as humans and cheaper or easier to manage.

turtlesdown11 35 minutes ago | parent | prev | next [-]

> Self-driving had never the amount of compute, research adoption and money than what the current overall AI has. Its not comparable.

$100+ billion in R&D and it's not comparable... hmm

freejazz 41 minutes ago | parent | prev [-]

> Self-driving had never the amount of compute, research adoption and money than what the current overall AI has.

And yet they don't do a really good job with pretty much anything, save for software development, on which people still seem pretty split as to whether it's a helpful thing. That's before we even factor in the cost.

AntiUSAbah 32 minutes ago | parent [-]

I find them very helpful. I use Gemini regularly for multiple things.

I also believe that whatever code researchers and other non-software-engineers wrote before coding agents was similarly shitty, but took them a lot longer to write.

Like, do you know how many researchers need to do some data analysis and hack around in code because they never learned programming? So, so many. If they know how to verify their data (which they already needed to know before), an LLM helps them already.

There is also plenty of other code where perfection doesn't matter. Non-SaaS software exists.

For security experts, we just saw what's happening. The curl inventor mentioned online that the newest AI reports of security issues are real, and the number of security gaps found is real and a lot of work.

Image generation is very good, and you can see it today already everywhere: from cheap restaurants using it, to invitations, WhatsApp messages, social media, advertising.

I have a work colleague who has been in the field for 6 years and has a degree; he is so underqualified that if you gave me his salary as tokens today, I wouldn't think for a second before replacing him.