visarga 4 hours ago

I. J. Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent" rather than by learning from an environment-action-outcome loop.

It's the difference between "compute is all you need" and "compute plus explorative feedback is all you need." As if science and engineering came from genius brains rather than from careful experiments.

ACCount37 3 hours ago | parent | next [-]

At sufficient levels of intelligence, it can increasingly substitute for those other things.

Intelligence can be the difference between having to build 20 prototypes and building one that works on the first try, or between running a series of 50 experiments and nailing it down in 5.

The upper limit of human intelligence doesn't go high enough for something like "a man designed an entire 5th-gen fighter jet in his mind and then built it right the first try" to be possible. The limits of AI might go higher than that.

kilpikaarna 3 hours ago | parent [-]

Exceedingly elaborate, internally consistent mental constructs, untested against the real world, sound like a good definition of schizophrenia. They may or may not correlate with high intelligence.

ACCount37 an hour ago | parent [-]

We only call it "schizophrenia" when those constructs are utterly useless.

They don't have to be. When they aren't, sometimes we call it "mathematics".

You only have to "test against the real world" if you don't already know the outcome in advance. And you often don't. But you could have: with the right knowledge and methods, you could have tested the entire thing internally and learned the real-world outcome in advance, to an acceptable degree of precision.

We already have the knowledge to build CFD models. The same knowledge could be used to construct a CFD model in your own mind, if only, you know, your mind were capable of supporting such a thing. And it isn't! Skill issue?

observationist 2 hours ago | parent | prev | next [-]

There's an implicit assumption there: that anything a computer as intelligent as a human does will be exactly what a human would do, only faster or more intelligently. If the process is part of the intelligent way of doing things, like the scientific method and careful experimentation, then that's what the ultraintelligent machine will do.

There's no implication that it's going to do it all magically in its head from first principles; it's become very clear in AI that embodiment and interaction with the real world are necessary. A world model with sufficient compute might be able to simulate engineering processes at high enough resolution to do all sorts of first-principles simulated physical development and problem solving "in its head", but for the most part, real ultraintelligent development will happen with real-world iterations, robots, and research labs doing physical things. They'll just be far more efficient and fast than us meatsacks.

circlefavshape 4 hours ago | parent | prev | next [-]

> As if science and engineering come from genius brains rather than from careful experiments

100% this. How long were humans around before the Industrial Revolution? Quite a while.

snikeris 3 hours ago | parent [-]

Science and engineering didn't begin with the Industrial Revolution. See: https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza

tjoff 3 hours ago | parent | prev | next [-]

Have you gotten any indication that machines won't have sensors?!

Eldt 4 hours ago | parent | prev [-]

Maybe ultraintelligence is just having an improved environment-action-outcome loop. Maybe that's all intelligence really is.

goodmythical 3 hours ago | parent | next [-]

I've noticed this core philosophical difference among certain geographically associated groups of people.

There is a group of people who think AI is going to ruin the world because they think they themselves (or their superiors) would ruin the world.

There is a group of people who think AI is going to save the world because they think they themselves (or their superiors) would save the world.

Kind of funny to me that the former group is typically democratic (those who are supposed to decide their own futures are afraid of the future they've chosen) while the latter is often "less free" and unafraid of the future that's been chosen for them.

mitthrowaway2 2 hours ago | parent | next [-]

There is also a group of people who think AI is going to ruin the world because they don't think the AI will end up doing what its creators (or their superiors) would want it to do.

tines 3 hours ago | parent | prev [-]

You’re just describing authoritarian vs non-authoritarian mindsets.

inigyou 3 hours ago | parent | prev [-]

In that case, it can't be improved with bigger computers.