_heimdall 3 days ago

The bigger elephant in the room is that LLMs will never be AGI, even by the purely economic definition many LLM companies use.

michaelbrave a day ago | parent | next

I always kinda figured that AGI would need to be modeled somewhat like a brain, with LLMs perhaps filling the role of the language faculty. Meaning AGI won't be LLM based, but parts of it could be.

Insanity 3 days ago | parent | prev | next

I've been saying this for years now. LLMs are _not_ the right methodology to get to AGI. My friends who were drinking the kool-aid are only recently coming around to "hey, this might not get us AGI".

But sometimes it feels like I'm the lone voice in a bubble where people are convinced AGI is just around the corner.

I'm wondering if it's because people are susceptible to the marketing, or if it's just wishful thinking, since some seem genuinely invested in the idea of AGI.

_heimdall 3 days ago | parent

Yeah I've had those conversations since GPT-3 first came out. I usually look like the one way off base, but I never did hear a clear explanation of how the LLM architecture could lead to AGI.

In my experience it was a combination of the hype and an overconfidence in the person's understanding of how LLMs work and what AGI actually means. To be fair, AGI definitions are all over the place, and LLMs were rarely described in detail beyond "it's AI that read the whole internet and sounds like a human."

panarky 3 days ago | parent | prev

There are two big innovations required to achieve inexpensive AGI.

LLMs will accelerate discovery and development of Innovation 1, for insanely expensive AGI.

Innovation 1 will accelerate discovery and development of Innovation 2, which will make AGI too cheap to meter.

_heimdall 3 days ago | parent | next

> LLMs will accelerate discovery and development of Innovation 1, for insanely expensive AGI.

Can you expand on this? As far as I'm aware, LLMs have yet to invent anything novel.

At best they may have inferred one response among many that, when tested by humans, proved out. I'm not aware of a specific example of even that, though it's at least plausible. By contrast, claims that LLMs will "cure cancer" seem plainly false (I'm not trying to put those words in your mouth, just using an example to make my point).

diamond559 3 days ago | parent | prev

Wishful thinking