adastra22 4 days ago

I’ve posted this before, but here goes: we achieved AGI in either 2017 or 2022 (take your pick) with the transformer architecture and the achievement of scaled-up NLP in ChatGPT.

What is AGI? Artificial. General. Intelligence. Applying domain-independent intelligence to solve problems expressed in fully general natural language.

It’s more than a pedantic point though. What people expect from AGI is the transformative capabilities that emerge from removing the human from the ideation-creation loop. How do you do that? By systematizing the knowledge work process and providing deterministic structure to agentic processes.

Which is exactly what these developments are doing.

colechristensen 4 days ago | parent | next [-]

>What is AGI? Artificial. General. Intelligence.

Here's the thing, I get it, and it's easy to argue for this and difficult to argue against it. BUT

It's not intelligent. It just is not. It's tremendously useful and I'd forgive someone for thinking the intelligence is real, but it's not.

Perhaps it's just a poor choice of words. What a LOT of people really mean would go along the lines more like Synthetic Intelligence.

That is, however difficult it might be to define, REAL intelligence that was made, not born.

Transformer and Diffusion models aren't intelligent, they're just very well trained statistical models. We actually (metaphorically) have a million monkeys at a million typewriters for a million years creating Shakespeare.

My efforts manipulating LLMs into doing what I want is pretty darn convincing that I'm cajoling a statistical model and not interacting with an intelligence.

A lot of people won't be convinced that there's a difference, it's hard to do when I'm saying it might not be possible to have a definition of "intelligence" that is satisfactory and testable.

adastra22 4 days ago | parent | next [-]

“Intelligence” has a technical meaning, as it must if we want any clarity in discussions about it. It basically boils down to being able to exploit structure in a problem or problem domain to efficiently solve problems. The “G” in AGI just means that it is unconstrained by problem domain, but the “intelligence” remains the same: problem solving.

Can ChatGPT solve problems? It is trivial to see that it can. Ask it to sort a list of numbers, or debug a piece of segfaulting code. You and I both know that it can do that, without being explicitly trained or modified to handle that problem, other than the prompt/context (which is itself natural language that can express any problem, hence generality).

What you are sneaking into this discussion is the notion of human-equivalence. Is GPT smarter than you? Or smarter than some average human?

I don’t think the answer to this is as clear-cut. I’ve been using LLMs in my work daily for a year now, and I have seen incredible moments of brilliance as well as boneheaded failures. There are academic papers being released in which AIs are credited with key insights. So they are definitely not limited to remixing their training set.

The problem with the “AI are just statistical predictors, not real intelligence” argument is what happens when you turn it around and analyze your own neurons. You will find that, to the best of our models, you are also just a statistical prediction machine. Different architecture, but not fundamentally different in kind from an LLM. And indeed, a lot of psychological mistakes and biases start making sense when you analyze them by treating the human as something like an LLM.

But again, you need to define “real intelligence,” because no, it is not at all obvious what that phrase means when you use it. The technical definitions of intelligence that have been used in the past have been met by LLMs and other AI architectures.

baq 4 days ago | parent [-]

> You will find that to the best of our models, you are also just a statistical prediction machine.

I think there’s a set of people whose axioms include ‘I’m not a computer and I’m not statistical’ - if that’s your ground truth, you can’t be convinced without shattering your world view.

kalkin 4 days ago | parent | prev [-]

If you can't define intelligence in a way that distinguishes AIs from people (and doesn't just bake that conclusion baldly into the definition), consider whether your insistence that only one is REAL is a conclusion from reasoning or something else.

colechristensen 3 days ago | parent [-]

About a third of Zen and the Art of Motorcycle Maintenance is about exactly this disagreement, except there it is over the ability to come to a definition of a specific usage of the word "quality".

Let's put it this way: language written or spoken, art, music, whatever... a primary purpose of these things is to act as a sort of serialization protocol for communicating thought states between minds. When I say I struggle to come to a definition, I mean I think these tools are inadequate to do it.

I have two assertions:

1) A definition in English isn't possible

2) Concepts can exist even when a particular language cannot express them

aaronblohowiak 4 days ago | parent | prev | next [-]

We have achieved AGI no more than we have achieved human flight.

kelchm 4 days ago | parent | next [-]

Are you really making the argument that human flight hasn’t been effectively achieved at this point?

I actually kind of love this comparison — it demonstrates the point that just like “human flight”, “true AGI” isn’t a single point in time, it’s a many-decade (multi-century?) process of refinement and evolution.

Scholars a millennium from now will be debating when each of these was actually “truly” achieved.

mbreese 4 days ago | parent [-]

I’ve never heard it described this way: AGI as similar to human flight. I think it’s subtle and clever - my two favorite properties.

To me, we have both achieved and not achieved human flight. Can humans themselves fly? No. Can people fly in planes across continents? Yes.

But, does it really matter if it counts as “human flight” if we can get from point A to point B faster? You’re right - this is an argument that will last ages.

It’s a great turn of phrase to describe AGI.

aaronblohowiak 4 days ago | parent [-]

Thank you! I’m bored of “moving goalposts” arguments as I think “looks different than we expected” is the _ordinary_ way revolutions happen.

adastra22 4 days ago | parent | prev [-]

Yes, I agree! Thank you for that apt comparison.

bluefirebrand 4 days ago | parent | prev [-]

> we achieved AGI in either 2017 or 2022

Even if this is true, which I disagree with, it simply creates a new bar: AGCI. Artificial Generally Correct Intelligence

Because right now it is more like Randomly Correct Intelligence.

micromacrofoot 4 days ago | parent | next [-]

to be fair, we accept imperfection as a natural trait of life; to err is human

doug_durham 4 days ago | parent | prev | next [-]

Kind of like humans.

freeone3000 4 days ago | parent [-]

The reason we built systems on computers is so they would not be fallible like humans are.

derac 4 days ago | parent [-]

No it isn't, it's because they are useful tools for doing a lot of calculations quickly.

bluefirebrand 4 days ago | parent [-]

accurate calculations, quickly

If they did calculations as sloppily as AI currently produces information, they would not be as useful

adastra22 4 days ago | parent [-]

A stochastically correct oracle just requires a little more care in its use, that’s all.
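That "little more care" can be made concrete: pair the unreliable generator with a cheap deterministic verifier, and resample until an answer passes the check. A minimal Python sketch of the pattern, using an invented `flaky_sort` stand-in for the stochastic oracle (the function names and the 70% success rate are illustrative assumptions, not anything from the thread):

```python
import random

def sample_until_verified(oracle, verify, attempts=10):
    """Query a stochastic oracle repeatedly, keeping only answers
    that pass a deterministic check. Returns None if every attempt fails."""
    for _ in range(attempts):
        answer = oracle()
        if verify(answer):
            return answer
    return None

# Toy stand-in for a stochastically correct oracle:
# it sorts correctly only ~70% of the time.
def flaky_sort(xs):
    out = sorted(xs)
    if random.random() < 0.3:
        random.shuffle(out)  # simulate an occasional wrong answer
    return out

random.seed(0)  # make the sketch deterministic
data = [3, 1, 2]
result = sample_until_verified(
    oracle=lambda: flaky_sort(data),
    verify=lambda xs: xs == sorted(data),  # cheap, deterministic check
)
```

The point of the pattern is the asymmetry: generating a correct answer may be unreliable, but checking one is cheap and exact, so a handful of retries turns a stochastic oracle into a practically dependable one.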
