ants_everywhere 3 days ago

> there is missing philosophy

I doubt it. Human intelligence evolved from organisms much less intelligent than LLMs and no philosophy was needed. Just trial and error and competition.

solid_fuel 3 days ago | parent | next [-]

We are trying to get there without a few hundred million years of trial and error. To do that we need to reduce the search space, and to do that we do actually need more guiding philosophy and a better understanding of intelligence.

tim333 2 days ago | parent | next [-]

If you look at AI systems that have worked, like chess and Go programs and LLMs, they came from understanding the problems and from engineering approaches, not really from philosophy.

fzzzy 2 days ago | parent | prev [-]

Reduce the search space, or increase the search speed.

balamatom 2 days ago | parent [-]

Instead what they usually do is lower the fidelity and think they've done what you said. Which results in them getting eaten. Once eaten, they can't learn from mistakes no mo. Their problem.

Because if we don't mix up "intelligence" (the phenomenon of increasingly complex self-organization in living systems) with "intelligence" (our experience of being able to mentally model complex phenomena in order to interact with them), then it becomes easy to see how the search speed you talk of is already growing exponentially.

In fact, that's all it does. Culture goes faster than genetic selection. Printing goes faster than writing. Democracy is faster than theocracy. Radio is faster than post. A computer is faster than a brain. LLMs are faster than trained monkeys and complain less. All across the planet, systems bootstrap themselves into more advanced systems as soon as I look at 'em, and I presume even when I don't.

OTOH, all the metaphysics stuff about "sentience" and "sapience" that people who can't tell one from the other love to talk past each other about: all of that only comes into view if one asks what happens to the search space when the search speed is increasing at an ever-increasing rate.

Such as: whether the search space is finite, whether it's mutable, in what order to search it, whether it's ethical to operate from quantized representations of it. Funky, sketchy, scary stuff, the lot of it. One's underlying assumptions about this process determine much of one's outlook on life, as well as on complex socially organized activities. One usually receives those assumptions through acculturation and may be unaware of what exactly they say.

crystal_revenge 3 days ago | parent | prev | next [-]

The magical thinking around LLMs is getting bizarre now.

LLMs are not “intelligent” in any meaningful biological sense.

Watch a spider modify its web to adapt to changing conditions and you’ll realize just how far we have to go.

LLMs sometimes echo our own reasoning back at us in a way that sounds intelligent and is often useful, but don't mistake this for "intelligence".

bubblyworld 3 days ago | parent | next [-]

Watch a coding agent adapt my software to changing requirements and you'll realise just how far spiders have to go.

Just kidding. Personally I don't think intelligence is a meaningful concept without context (or an environment in biology). Not much point comparing behaviours born in completely different contexts.

tim333 2 days ago | parent | prev | next [-]

They pass human intelligence tests like exams and IQ tests.

If I ask ChatGPT how to get rid of spiders, I'm probably going to get further than the spiders would scheming to get rid of ChatGPT.

habinero 2 days ago | parent [-]

And Clever Hans could pass a math exam.

"Some tests can be cheesed by a statistical model" is much less sexy and clickable than "my computer is sentient", but it's what's actually going on lol

tim333 a day ago | parent [-]

Some fibs and goalpost-shifting there. Hans couldn't actually pass one, and sentience wasn't mentioned.

danenania 3 days ago | parent | prev [-]

The idea that biological intelligence is impossible to replicate by other means would seem to imply that there’s something magical about biology.

crystal_revenge 3 days ago | parent | next [-]

I'm in no way implying that it's impossible to replicate, just that LLMs have almost nothing to do with replicating intelligence. They aren't doing any of the things even simple life forms do.

danenania 2 days ago | parent [-]

They lack many abilities of simple life forms, but they can also do things like complex abstract reasoning, which only humans and LLMs can do.

habinero 2 days ago | parent [-]

They don't reason. They can generate an illusion of it through a statistical model.

You don't gotta work hard to break the illusion, either.

People really really really want to believe this thing and I do not understand why. I wish I did lol

danenania 2 days ago | parent [-]

Ok, it’s an “illusion” of reasoning. Doesn’t change that it got a gold medal in the IMO or helped me fix a race condition the other day.

charcircuit 3 days ago | parent | prev | next [-]

There very well could be something magical about it.

danenania 3 days ago | parent [-]

It’s fine to think that—many clearly do.

But it would be more honest and productive imo if people would just say outright when they don’t think AGI is possible (or that AI can never be “real intelligence”) for religious reasons, rather than pretending there’s a rational basis.

gls2ro 3 days ago | parent | next [-]

AGI is not possible because we don't yet have a clear and commonly agreed definition of intelligence, and more importantly we don't have a definition of consciousness, nor can we clearly define the link between the two (if there is one).

Until we have that, AGI is just a magic word.

When we have those two clear definitions, it will mean we've understood them, and then we can work toward AGI.

Mikhail_Edoshin 3 days ago | parent | prev | next [-]

When you try to solve a problem, the goal, or the reason to reject the current solution, is often vague and hard to put into words. Irrational. For example, for many years Euclid's fifth postulate was a source of mathematical discontent because of a vague feeling that it was far too complex compared to the other four. Such irrationality is a necessary step in human thought.
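(For concreteness, here is the usual modern paraphrase of the first postulate next to the fifth; the symbols are my gloss, not Euclid's wording:)

    % P1: any two distinct points determine a straight line.
    \forall A \neq B \;\exists\, \ell : A \in \ell \land B \in \ell
    % P5: if a transversal crossing lines l1, l2 makes interior angles
    % a, b on one side with a + b < pi, then l1, l2 meet on that side.
    \alpha + \beta < \pi \;\Rightarrow\; \ell_1, \ell_2 \text{ meet on that side}

Four of the postulates fit in a one-line statement each; only the fifth needs a conditional about angle sums, which is exactly the asymmetry that bothered people.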

danenania 2 days ago | parent [-]

Yes, that’s fair. I’m not saying there’s no value to irrational hunches (or emotions, or spirituality). Just that you should be transparent when that’s the basis for your beliefs.

habinero 2 days ago | parent | prev | next [-]

That's not a good way to think about it.

Plenty of things are theoretically possible but practically unachievable, and likely always will be.

Like, sure, a Dyson sphere would solve our energy needs. We can't build one now and we almost certainly never will lol

"AGI" is theoretically feasible, sure. Our brains are just matter. But they're also an insanely complex and complicated system that came out of a billion years of evolution.

A little rinky-dink statistical model doesn't even scratch the surface of it, and I don't understand why people think it does.

danenania 2 days ago | parent [-]

> But they're also an insanely complex and complicated system that came out of a billion years of evolution.

As are birds, yet we can still build airplanes.

player1234 a day ago | parent [-]

We know the laws of aerodynamics. What are the known laws of intelligence and consciousness that you're replicating through other means with LLMs?

Weak-ass gotcha. Hang your head in shame, call your mom and tell her what a fraud you are.

danenania a day ago | parent [-]

Sorry you got triggered. I know it can be an emotional topic for some people. I'll try to explain in a simple way.

We clearly are replicating at least some significant aspects of human intelligence via LLMs, despite biological complexity. So we obviously don't need a 100% complete understanding of the corresponding biology to build things which achieve similar goals.

In other words, we can (conceivably) figure out how intelligence works and how to produce it independently of figuring out exactly how the human brain produces intelligence, just like we learned the laws of aerodynamics well enough to build airplanes independently of understanding everything about the biology of birds.

Whether we will actually get all the way to AGI is a separate engineering question. I'm only pointing out how flawed these lines of argument are.

hitarpetar 2 days ago | parent | prev [-]

Rationalism has become the new religion. Roko's basilisk is a ghost story, and the quest for AGI is today's quest for the philosopher's stone. And people believe this shit because they can articulate a "rational basis".


fuckaj 3 days ago | parent | prev [-]

The physical universe has much higher throughput and lower latency than any computer of ours emulating a digital world.

tomrod 3 days ago | parent [-]

Wouldn't it be nice if LLMs emulated the real world!

They predict the next likely text token. That we can do so much with that is an absolute testament to the brilliance of researchers, engineers, and product builders.
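
(If "predict the next token" sounds abstract, here's a toy sketch in Python: a bigram count table standing in for the transformer. Everything below is invented for illustration, not how any real LLM is implemented.)

    # Toy sketch of "predict the next likely token": a bigram count
    # table stands in for the transformer.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count which token follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(token):
        # Greedily pick the most frequent continuation.
        return following[token].most_common(1)[0][0]

    # Generation is just this prediction fed back into itself, the
    # same loop an LLM runs with a vastly better predictor.
    tok, out = "the", ["the"]
    for _ in range(5):
        tok = predict_next(tok)
        out.append(tok)
    print(" ".join(out))  # -> "the cat sat on the cat"

The loop is the same whether the predictor is a count table or a trillion-parameter network: nothing in it models the world, it only models which token tends to come next.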

We are not yet creating a god in any sense.

fuckaj 3 days ago | parent [-]

I mean that the computing power available to evolution and biological processes for training is orders of magnitude higher than what's available for training an LLM.

tomrod 2 days ago | parent [-]

Is it? C. elegans seems to do just fine with very limited compute, despite our inability to model it in OpenWorm.