slopinthebag 9 hours ago

That's not what I mean. Rather, that humans cannot create a type of intelligence that supersedes what human intelligence is roughly capable of, because doing so would basically require us to be smarter.

Not to say we can't create machines that far surpass our abilities on a single axis or small set of axes.

mitthrowaway2 8 hours ago | parent | next [-]

Think hard about this. Does that seem to you like it's likely to be a physical law?

First of all, it's not necessary for one person to build that super-intelligence all by themselves, or to understand it fully. It can be developed by a team, each of whom understands only a small part of the whole.

Secondly, it doesn't necessarily even require anybody to understand it. The way AI models are built today is by pressing "go" on a giant optimizer. We understand the inputs (data) and the optimizer machine (very expensive linear algebra) and the connective structure of the solution (transformer) but nobody fully understands the loss-minimizing solution that emerges from this process. We study these solutions empirically and are surprised by how they succeed and fail.
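To make the "pressing go on a giant optimizer" point concrete, here's a toy sketch (illustrative only, not a real training setup): a few lines of gradient descent where no human writes the solution, and the final parameter values emerge from the data and the update rule.

```python
# Toy illustration: nobody hand-writes the solution; it emerges from
# repeatedly nudging parameters downhill on a loss function.

def train(data, steps=5000, lr=0.01):
    w, b = 0.0, 0.0  # parameters start out knowing nothing
    for _ in range(steps):
        # mean-squared-error gradients for the model y ≈ w*x + b
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw  # the optimizer, not a person, picks these values
        b -= lr * gb
    return w, b

# The data implicitly defines the answer (here it was generated
# from y = 3x + 1); the loop recovers it without being told.
data = [(x, 3 * x + 1) for x in range(10)]
w, b = train(data)
```

Modern training is this loop scaled up by many orders of magnitude, with billions of parameters instead of two, which is why the resulting solution can be effective without anyone fully understanding it.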

We may find we can keep improving the optimization machine, and tweaking the architecture, and eventually hit something with the capacity to grow beyond our own intelligence, and it's not a requirement that anyone understands how the resulting model works.

We also have many instances in nature and history of processes that follow this pattern, where one might expect to find a similar "law". Mammals can give birth to children that grow bigger than their parents. We can make metals purer than the crucible we melted them in. We can make machines more precise than the machines that made their parts. Evolution itself created human intelligence from the repeated application of very simple rules.

slopinthebag 6 hours ago | parent [-]

> Think hard about this. Does that seem to you like it's likely to be a physical law?

Yes, it seems likely to me.

It seems like the ultimate in hubris to assume we are capable of creating something we are not capable of ourselves.

selylindi 4 hours ago | parent [-]

On the contrary, nearly every machine we've created is capable of things that we are not capable of ourselves. Cars travel more than twice as fast as the swiftest human. Airplanes fly. Calculators do math in an instant that would take a human months. Lightbulbs emit light. Cranes lift many tons. And so on and so forth.

So to create something that exceeds our capabilities is not a matter of hubris (as if physical laws cared about hubris anyway), it's an unambiguously ordinary occurrence.

slopinthebag 3 hours ago | parent [-]

> Not to say we can't create machines that far surpass our abilities on a single or small set of axis.

small_model 9 hours ago | parent | prev | next [-]

Given that SOTA models are PhD level in just about every subject, this is clearly, provably wrong.

zozbot234 8 hours ago | parent | next [-]

I'll believe that claim when a SOTA model can autonomously create content that matches the quality and length of any average PhD dissertation. As of right now, we're nowhere near that and don't know how we could possibly get there.

SOTA models are superhuman in a narrow sense, in that they have solid background knowledge of pretty much any subject they've been trained on. That's great. But no, it doesn't turn your AI datacenter into "a country of geniuses".

slopinthebag 6 hours ago | parent | prev [-]

Are humans just PhD students in a vat? Can a SOTA model walk? Humans in general find that task, along with a trillion other tasks that SOTA models cannot do, to be absolutely trivial.

ordersofmag 8 hours ago | parent | prev [-]

Seems like if evolution managed to create intelligence from slime, I wouldn't bet on there being some fundamental limit that prevents us from making something smarter than us.