joshellington a day ago

To throw two pennies into the ocean of this comment section: I’d argue we still lack a schematic-level understanding of what “intelligence” even is or how it works, not to mention how it interfaces with “consciousness” and how the two likely relate to each other. That kinda invalidates a lot of predictions/discussions of “AGI”, or even “AI” in general. How can one identify Artificial Intelligence/AGI without a modicum of understanding of what the hell intelligence even is?

qudat a day ago | parent | next [-]

The reason it’s so hard to define intelligence or consciousness is that we are hopelessly biased, working from a data point of one. We also apply an unjustified amount of mysticism to it.

https://bower.sh/who-will-understand-consciousness

__MatrixMan__ a day ago | parent | prev | next [-]

I don't think we can ever know that we are generally intelligent. We can be unsure, or we can meet something else which possesses a type of intelligence that we don't, and then we'll know that our intelligence is specific and not general.

So to make predictions about general intelligence is just crazy.

And yeah yeah I know that OpenAI defines it as the ability to do all economically relevant tasks, but that's an awful definition. Whoever came up with that one has had their imagination damaged by greed.

judahmeek a day ago | parent [-]

All intelligence is specific, as evidenced by the fact that there is no universal definition of what "common sense" actually consists of.

__MatrixMan__ 14 hours ago | parent | next [-]

Common is not the same as general. A general key would open every lock. Common keys... well they're quite familiar.

judahmeek 12 hours ago | parent [-]

My point was that all intelligence is based on an individual's experiences, therefore an individual's intelligence is specific to those experiences.

Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.

__MatrixMan__ 4 hours ago | parent [-]

So if you encounter an unknown intelligence, like, I dunno, some kind of extra-dimensional pen pal with a wildly different biology and environment from our own... would you be open to either of these possibilities?

- despite our differences, we have the same kind of intelligence

- our intelligences intersect, but each has capacities that the other doesn't

It seems like for either to be true, there would have to be some common ground into which we could both generalize independently of our circumstances. Mathematics is often thought to be such a place; there's plenty of sci-fi about beaming prime numbers into space as an attempt to leverage exactly that common ground. Are you saying there is no such place? That SETI is hopeless?
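
As a toy illustration (the pulse encoding here is invented, just to show that primality is something any counterpart could recompute on their own, independent of circumstance):

    # Toy "prime beacon": encode the first few primes as pulse counts.
    # The encoding is made up for illustration; the point is only that
    # the sequence can be derived independently by anyone.
    def primes(n):
        """Return the first n primes by trial division."""
        found = []
        candidate = 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    signal = " ".join("." * p for p in primes(8))
    print(signal)  # ".. ... ..... ......." and so on, one group per prime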

walkabout 17 hours ago | parent | prev [-]

A universal definition of “chair” is pretty hard to pin down, too…

judahmeek 12 hours ago | parent [-]

What are your sources for that claim?

chadcmulligan 18 hours ago | parent | prev | next [-]

I did the math some years ago on how much computing is required to simulate a human brain. A brain has around 90 billion neurons, each with an average of 7,000 connections to other neurons. Let's assume that's all we need to capture. So what does it take to simulate one neuron: a whole CPU? Or can we fit more than one per CPU, say 100? Then we're down to roughly a billion CPUs, with hundreds of trillions of messages flying between them every, what, millisecond?

Simulating that is a long way away, so the only possibility is that brains have some sort of redundancy we can optimise away. Computers are faster than brains, though, so maybe it's possible; how much faster? Say a neuron does its work in a millisecond and we can simulate that work in a microsecond, i.e. a thousand times faster: that's still a lot. Can we get to a million times faster? Even then it's still a lot. Not to mention the power required for all this.

Even if we can fit a thousand neurons in a CPU, that's still 90 million CPUs. Say only 10% are active: 9 million CPUs. A thousand times faster: 9,000 CPUs. Nearly there, but still a while away.
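
Spelled out as back-of-the-envelope arithmetic (a sketch only; the neuron count, neurons per CPU, activity fraction and speedup factor are the rough assumptions above, not measurements):

    # Back-of-the-envelope sizing for a naive brain simulation.
    NEURONS = 90e9                  # ~90 billion neurons (assumed)
    CONNECTIONS_PER_NEURON = 7_000  # average connections per neuron (assumed)

    connections = NEURONS * CONNECTIONS_PER_NEURON
    print(f"total connections: {connections:.1e}")           # ~6.3e14

    # Scenario 1: 100 simulated neurons per CPU, running in real time.
    print(f"CPUs at 100 neurons each: {NEURONS / 100:.1e}")  # ~9e8

    # Scenario 2: 1,000 neurons per CPU, only 10% of neurons active,
    # each CPU simulating 1,000x faster than real time.
    cpus = NEURONS / 1_000 * 0.10 / 1_000
    print(f"CPUs needed: {cpus:,.0f}")                       # ~9,000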

cmrdporcupine 16 hours ago | parent [-]

We don't even have an accurate, convincing model of how the functions of the brain really work, so it's crazy to even think about simulating it like that. I have no doubt the cost would be tremendous if we could do it, but I don't think we even know what we would be simulating.

The LLM stuff seems quite distinctly not to be an emulation of the human brain in any sense, even if it displays human-like characteristics at times.

vannucci a day ago | parent | prev | next [-]

This so much this. We don’t even have a good model for how invertebrate minds work or a good theory of mind. We can keep imitating understanding but it’s far from any actual intelligence.

tim333 a day ago | parent [-]

I'm not sure we or evolution needed a theory of mind. Evolution stuck neurons together in various ways and fiddled with them until they worked, without a master plan, and the LLM guys seem to be doing something rather like that.

zargon a day ago | parent | next [-]

LLM guys took a very specific layout of neurons and said “if we copy paste this enough times, we’ll get intelligence.”
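
Concretely, that layout is (roughly) the transformer block, and "copy-paste it enough times" is not far off how the models are assembled. A minimal sketch in PyTorch (the dimensions and depth are made up; real models add embeddings, masking, and an output head):

    import torch.nn as nn

    class Block(nn.Module):
        """One transformer block: self-attention followed by an MLP."""
        def __init__(self, dim=512, heads=8):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(
                nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, x):
            h = self.norm1(x)
            a, _ = self.attn(h, h, h)   # token-to-token attention
            x = x + a                   # residual connection
            return x + self.mlp(self.norm2(x))

    # "Copy-paste this enough times": the network is just N identical blocks.
    model = nn.Sequential(*[Block() for _ in range(24)])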

whiplash451 19 hours ago | parent | prev [-]

Mmm, no, because unlike biological entities, large models learn by imitation, not by experience.

visarga a day ago | parent | prev | next [-]

> we still lack schematic-level understanding of what “intelligence” even is or how it works. Not to mention how it interfaces with “consciousness”, and their likely relation to each other

I think you can get pretty far starting from behavior and constraints. The brain needs to act in such a way as to pay for its costs; and not just day-to-day costs, but also the ability to receive and pass on that initial inheritance.

From cost of execution we can derive an imperative for efficiency. Learning is how we avoid making the same mistakes and adapt. Abstractions are how we efficiently carry around past experience to be applied in new situations. Imagination and planning are how we avoid the high cost of catastrophic mistakes.

Consciousness itself falls out of the serial-action bottleneck. We can't walk left and right at the same time, or drink coffee before brewing it. Behavior has a natural sequential structure, and this forces the distributed activity in the brain to be centralized onto a serial output sequence.

My mental model is that of a structure-flow recursion: flow carves structure, and structure channels flow. Experiences train brains, and brain-generated actions generate new experiences. Cutting this loop and analyzing its parts in isolation doesn't make sense, like trying to analyze a hurricane's matter and motion separately.

keiferski a day ago | parent | prev | next [-]

That would require philosophical work, something that the technicians building this stuff refuse to acknowledge as having value.

Ultimately this comes down to the philosophy of language and the history of specific concepts like intelligence or consciousness, neither of which exists in the world as a specific quality; they are more like linguistic shorthands for a bundle of various abilities and qualities.

Hence the entire idea of generalized intelligence is a bit nonsensical, other than as yet another bundle of various abilities and qualities. What those are, specifically, never seems to be clarified before the term AGI gets used.

whiplash451 19 hours ago | parent | prev | next [-]

Without going too deep down the rabbit hole, one could argue that, to a first order, intelligence is the ability to learn from experience toward a goal. In that sense, LLMs are not intelligent. They are just a (great) tool in the service of human intelligence. And so we're just extremely far from machine intelligence.

Culonavirus a day ago | parent | prev [-]

> I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["<insert general intelligence buzzword>"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the <insert llm> involved in this case is not that.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it