altruios 2 days ago

Okay. So to be clear, you believe that replicating/templating a brain is the ONLY way to make an intelligent machine?

What makes you think that? That there are no other patterns of intelligence?

gslepak 2 days ago | parent [-]

I can see how that would be implied by my comments, so you're right to question that.

The principles found in the brain are what qualify something as "AGI", not the brain itself, so it's possible there are other architectures that would qualify.

A few observations on LLMs that give the game away:

- They require releases. You get a single binary blob and that blob is forever stuck at its so-called "intelligence" level. It never learns anything new.

- They're stuck approaching the limit of human intelligence, because the technique cannot exceed it. I realize OpenAI has made claims to the contrary, saying things like "oh, our model found a proof that was never proven before", but this doesn't count: it's a side effect of training on the Internet. That proof probably already existed (in pieces) somewhere online; it just wasn't widely publicized.

So, you'll know it's AGI when you no longer see companies releasing new models. AGI won't require new models, because the architecture will be what matters: whatever models you have will be constantly updating themselves in real time, just like the human brain (and every other brain).

And, you'll start to see the AIs actually outsmarting the smartest humans on the planet in every subject.

altruios 2 days ago | parent [-]

> - They require releases. You get a single binary blob and that blob is forever stuck at its so-called "intelligence" level. It never learns anything new.

True. But learning isn't the same thing as intelligence. My father, who has dementia and is unable to learn anything new due to memory issues, is still 'intelligent'.

> - They're stuck approaching the limit of human intelligence.

Is general intelligence > human intelligence then? Is there some static 'human level' that I should be measuring myself against?

There is considerable overlap between the smartest bear and the dumbest human. The same is true of LLMs and humans now.

What you seem to be describing isn't AG(eneral)I, but artificial greater intelligence.

gslepak 2 days ago | parent [-]

> What you seem to be describing isn't AG(eneral)I, but artificial greater intelligence.

If you ignore my earlier answer to you, then perhaps that conclusion would make sense. But if you take the full context of what I said, then no: it's clear that I am not referring to "artificial greater intelligence".

Just in the previous comment I said that rats would qualify, because the architecture is what matters.

Your dementia example is clever, but it describes the biological architecture breaking down. Please forgive the crude analogy, but it's like asking whether a house that has partially burned down is still a house. I suppose part of it still is.

gslepak 2 days ago | parent | next [-]

FWIW there are other definitions of intelligence that are wholly immaterial.

Spirits are considered intelligent even though they have no body because they are composed of pure non-physical consciousness. Plants are intelligent even though they also have no brain.

That fundamental sort of living conscious intelligence isn't what I see discussed much in these contexts though.

What you will notice, though, is that unlike frozen LLMs, this type of intelligence also has the capacity to change, interact with, and learn from its environment.

If we go with this definition instead, then on a large enough timescale everything can be considered intelligent, even rocks.

altruios 2 days ago | parent [-]

>If we go with this definition instead

...Let's not go with the nonsense definitions then.

I agree, systems don't need a brain to be intelligent, and, relatedly, I don't think systems need to be conscious to be 'intelligent'.

You are excluding this system (LLM + harness) that learns (separately) and can modify its surrounding environment via a shell interface (including setting up a nightly training loop to reweight itself based on its daily actions and interactions) from being intelligent. Do I have that right? Or are you thinking in terms of 'only' the LLM?
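To make the parenthetical concrete: a minimal sketch of the "nightly training loop" idea, where the agent acts with frozen weights during the day, logs its interactions, and a scheduled nightly pass re-weights the model on that log. This is a toy illustration only; the names (`ToyModel`, `Agent`) and the scalar linear "model" standing in for LLM weights are assumptions, not any real agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class ToyModel:
    # Stand-in for LLM weights: a single scalar mapping input -> score.
    w: float = 0.0

    def predict(self, x: float) -> float:
        return self.w * x

@dataclass
class Agent:
    model: ToyModel = field(default_factory=ToyModel)
    daily_log: list = field(default_factory=list)

    def interact(self, x: float, feedback: float) -> float:
        """Daytime: act with frozen weights, log (input, feedback)."""
        y = self.model.predict(x)
        self.daily_log.append((x, feedback))
        return y

    def nightly_update(self, lr: float = 0.1, epochs: int = 50) -> None:
        """Nighttime: SGD passes over the day's log, then clear it."""
        for _ in range(epochs):
            for x, target in self.daily_log:
                err = self.model.predict(x) - target
                self.model.w -= lr * err * x  # gradient of 0.5 * err**2
        self.daily_log.clear()

agent = Agent()
for _ in range(3):
    agent.interact(1.0, 2.0)  # environment keeps signalling target 2.0
agent.nightly_update()
# After the nightly pass, the weights have moved toward the day's feedback,
# so tomorrow's behavior differs from today's -- unlike a frozen model.
```

The point of the sketch is the separation of phases: inference is frozen within a day, but the system as a whole is not frozen across days.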

gslepak 2 days ago | parent [-]

I do call openclaw-style agents "living agents", although they might be closer to a kind of zombie. Living agents like openclaw et al. do have a self-modifying property of sorts thanks to their memory, so that system might be more AGI-ish, but it still has a fundamental cap on its potential, which remains frozen at the LLM.

> (including setting up a nightly training loop to reweight itself based on its daily actions and interactions) from being intelligent

I'd have a harder time arguing that sort of system isn't AGI.

altruios 2 days ago | parent | prev [-]

My point is that learning may be required to create intelligence, but not to 'run' it. And LLMs do 'learn' during training, no? That it happens at a different time doesn't truly matter.

gslepak 2 days ago | parent [-]

What is doing the intelligencing, though? Is it the LLM or the person training it?

To me, that seems awfully close to arguing that a puppet is intelligent because a human is pulling the strings and making it dance.

We can agree to disagree on this.