gslepak 2 days ago

> What you seem to be describing isn't AG(eneral)I, but artificial greater intelligence.

If you ignore what I said in my earlier reply to you, then perhaps it would make sense to draw this conclusion. But taking the full context of what I said, no: it's clear that I am not referring to "artificial greater intelligence".

Just in the previous comment I said that rats would qualify, because the architecture is what matters.

Your dementia example is clever, but that's an example of the biological architecture breaking down. Forgive the crude analogy, but it's like asking whether a house is still a house after it has partially burned down. I suppose part of it is still a house.

gslepak 2 days ago | parent | next [-]

FWIW there are other definitions of intelligence that are wholly immaterial.

Spirits are considered intelligent even though they have no body, because they are composed of pure non-physical consciousness. Plants are considered intelligent even though they have no brain.

That fundamental sort of living conscious intelligence isn't what I see discussed much in these contexts though.

What you will notice about it, though, is that unlike frozen LLMs, this type of intelligence has the capacity to change, to interact with, and to learn from its environment.

If we go with this definition instead, then on a large enough timescale everything can be considered intelligent, even rocks.

altruios 2 days ago | parent [-]

>If we go with this definition instead

...Let's not go with the nonsense definitions then.

I agree: systems don't need a brain to be intelligent, and, on a related point, I don't think systems need to be conscious to be 'intelligent'.

You are excluding this system (LLM + harness), which learns (separately) and can modify its surrounding environment via a shell interface (including setting up a nightly training loop to reweight itself based on its daily actions and interactions), from being intelligent. Do I have that right? Or are you thinking in terms of 'only' the LLM?
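
A minimal sketch of the kind of loop I mean (the FrozenLLM class, its generate/retrain methods, and the task list here are all hypothetical placeholders, not any real agent's API):

    # Hypothetical sketch of an "LLM + harness" agent: it acts through a
    # shell during the day, logs what it did, and re-trains itself on that
    # log each night. generate() and retrain() are stand-ins.

    import datetime
    import subprocess


    class FrozenLLM:
        """Stand-in for a local checkpoint; weights only change via retrain()."""

        def __init__(self, checkpoint="base.ckpt"):
            self.checkpoint = checkpoint

        def generate(self, prompt):
            # Placeholder: a real harness would sample a shell command from the model.
            return "echo handling task: " + prompt

        def retrain(self, examples):
            # Placeholder: a real harness would run a fine-tuning job on the
            # day's logged interactions and write out a new checkpoint.
            self.checkpoint = "ckpt-%s.ckpt" % datetime.date.today()


    def run_day(model, tasks, log):
        """Daytime: the agent modifies/queries its environment via the shell."""
        for task in tasks:
            command = model.generate(task)
            result = subprocess.run(command, shell=True, capture_output=True, text=True)
            log.append((task, command, result.stdout.strip()))


    def nightly_loop(model, log):
        """Night: fold the day's actions and outcomes back into the weights."""
        if log:
            model.retrain(log)
            log.clear()


    if __name__ == "__main__":
        model, log = FrozenLLM(), []
        run_day(model, ["list the files in the current directory"], log)
        nightly_loop(model, log)
        print("active checkpoint:", model.checkpoint)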

gslepak 2 days ago | parent [-]

I do call openclaw-style agents "living agents", although they might be closer to a kind of zombie. Living agents like openclaw et al. do have a self-modifying property of sorts thanks to their memory, so that system might be more AGI-ish, but it still has a fundamental cap on its potential, which remains frozen at the LLM.

> (including setting up a nightly training loop to reweight itself based on its daily actions and interactions) from being intelligent

I'd have a harder time arguing that sort of system isn't AGI.

altruios 2 days ago | parent | prev [-]

My point is that learning may be required to create intelligence, but not to 'run' intelligence. And LLMs do 'learn' in their training, no? That it happens at a different time doesn't truly matter.

gslepak 2 days ago | parent [-]

What is doing the intelligencing though? Is it the LLM or the person training it?

To me, that seems awfully close to arguing that a puppet is intelligent because a human is pulling the strings and making it dance.

We can agree to disagree on this.