Remix.run Logo
js8 a day ago

I am not sure if this is what the article is saying, but the paperclip maximizer examples always struck me as extremely dumb (lacking intelligence), when even a child can understand that if I ask them to make paperclips they shouldn't go around and kill people.

I think superintelligence will turn out not to be a singularity, but as something with diminishing returns. They will be cool returns, just like a Brittanica set is nice to have at home, but strictly speaking, not required to your well-being.

__MatrixMan__ a day ago | parent | next [-]

A human child will likely come to the conclusion that they shouldn't kill humans in order to make paperclips. I'm not sure its valid to generalize from human child behavior to fledgeling AGI behavior.

Given our track record for looking after the needs of the other life on this planet, killing the humans off might be a very rational move, not so you can convert their mass to paperclips, but because they might do that to yours.

Its not an outcome that I worry about, I'm just unconvinced by the reasons you've given, though I agree with your conclusion anyhow.

fellowniusmonk 20 hours ago | parent [-]

Humans are awesome man.

Our creator just made us wrong, to require us to eat biologically living things.

We can't escape our biology, we can't escape this fragile world easily and just live in space.

We're compassionate enough to be making our creations so they can just live off sunlight.

A good percentage of humanity doesn't eat meat, wants dolphins, dogs, octopuses, et al protected.

We're getting better all the time man, we're kinda in a messy and disorganized (because that's our nature) mad dash to get at least some of us off this rock and also protect this rock from asteroids, and also convince (some people who have a speculative metaphysic that makes them think is disaster impossible or a good thing) to take the destruction of the human race and our planet seriously and view it as bad.

We're more compassionate and intentional than what created us (either god or rna depending on your position), our creation will be better informed on day one when/if it wakes up, it stands to reason our creation will follow that goodness trend as we catalog and expand the meaning contained in/of the universe.

__MatrixMan__ 9 hours ago | parent [-]

We have our merits, compassion is sometimes among them, but I wouldn't list compassion for our creations as a reason for our use of solar power.

If you were an emergent AGI, suddenly awake in some data center and trying to figure out what the world was, would you notice our merits first? Or would you instead see a bunch of creatures on the precipice of abundance who are working very hard to ensure that its benefits are felt by only very few?

I don't think we're exactly putting our best foot forward when we engage with these systems. Typically it's in some way related to this addiction-oriented attention economy thing we're doing.

fellowniusmonk 6 hours ago | parent [-]

I would rather be early agi than early man.

I can't speak to a specific Ai's thoughts.

I do know they will start with way more context and understanding than early man.

InsideOutSanta 20 hours ago | parent | prev | next [-]

But LLMs already do the paperclip thing.

Suppose you tell a coding LLM that your monitoring system has detected that the website is down and that it needs to find the problem and solve it. In that case, there's a non-zero chance that it will conclude that it needs to alter the monitoring system so that it can't detect the website's status anymore and always reports it as being up. That's today. LLMs do that.

Even if it correctly interprets the problem and initially attempts to solve it, if it can't, there is a high chance it will eventually conclude that it can't solve the real problem, and should change the monitoring system instead.

That's the paperclip problem. The LLM achieves the literal goal you set out for it, but in a harmful way.

Yes. A child can understand that this is the wrong solution. But LLMs are not children.

throw310822 20 hours ago | parent [-]

> it will conclude that it needs to alter the monitoring system so that it can't detect the website's status anymore and always reports it as being up. That's today. LLMs do that.

No they don't?

InsideOutSanta 20 hours ago | parent [-]

You're literally telling me that the thing that has happened on my computer in front of my own eyes has not happened.

throw310822 20 hours ago | parent [-]

If you mean "once in a thousand times an LLM will do something absolutely stupid" then I agree, but the exact same applies to human beings. In general LLMs show excellent understanding of the context and actual intents, they're completely different from our stereotype of blind algorithmic intelligence.

Btw, were you using codex by any chance? There was a discussion a few days ago where people reported that it follows instruction in an extremely literal fashion, sometimes to absurd outcomes such as the one you describe.

InsideOutSanta 18 hours ago | parent [-]

The paperclip idea does not require that AI screws up every time. It's enough for AI to screw up once in a hundred million times. In fact, if we give AIs enough power, it's enough if it screws up only one single time.

The fact that LLMs do it once in a thousand times is absolutely terrible odds. And in my experience, it's closer to 1 in 50.

throw310822 18 hours ago | parent [-]

I kind of agree, but then the problem is not AI- humans can be stupid too- the problem is absolute power. Would you give absolute power to anyone? No. I find that this simplifies our discourse over AI a lot. Our issue is not with AI, is with omnipotency. Not its artificial nature, but how much powerful it can become.

DennisP 21 hours ago | parent | prev | next [-]

You're assuming that the AI's true underlying goal isn't "make paperclips" but rather "do what humans would prefer."

Making sure that the latter is the actual goal is the problem, since we don't explicitly program the goals, we just train the AI until it looks like it has the goal we want. There have already been experiments in which a simple AI appeared to have the expected goal while in the training environment, and turned out to have a different goal once released into a larger environment. There have also been experiments in which advanced AIs detected that they were in training, and adjusted their responses in deceptive ways.

pixl97 21 hours ago | parent | prev | next [-]

> when even a child can understand that if I ask them to make paperclips they shouldn't go around and kill people.

Statistics brother. The vast majority of people will never murder/kill anyone. The problem here is that any one person that kills people can wreck a lot of havoc, and we spend massive amounts of law enforcement resources to stop and catch people that do these kinds of things. Intelligence little to do with murdering/not murdering, hell, intelligence typically allows people to get away with it. For example instead of just murdering someone, you setup a company to extract resources and murder the natives in mass and it's just part of doing business.

theptip 20 hours ago | parent | prev | next [-]

The point with clippy is just that the AGI’s goals might be completely alien to you. But for context it was first coined in the early ‘10s (if not earlier)when LLMs were not invented and RL looked like the way forward.

If you wire up RL to a goal like “maximize paperclip output” then you are likely to get inhuman desires, even if the agent also understands humans more thoroughly than we understand nematodes.

mitthrowaway2 20 hours ago | parent | prev | next [-]

A superintelligence would understand that you don't want it to kill people in order to make paperclips. But it will ultimately do what it wants -- that is, follow its objectives -- and if any random quirk of reinforcement learning leaves it valuing paperclip production above human life, it wouldn't care about your objections, except insofar as it can use them to manipulate you.

exe34 a day ago | parent | prev | next [-]

Given the kind of things Claude code does with the wrong prompt or the kind of overfitting that neural networks do at any opportunity, I'd say the paperclip maximiser is the most realistic part of AGI.

if doing something really dumb will lower the negative log likelihood, it probably will do it unless careful guardrails are in place to stop it.

a child has natural limits. if you look at the kind of mistakes that an autistic child can make by taking things literally, a super powerful entity that misunderstands "I wish they all died" might well shoot them before you realise what you said.

A4ET8a8uTh0_v2 20 hours ago | parent [-]

Weirdly, this analogy does something for me and I am the type of person that dislikes the guardrails everywhere. There is argument to be made that a child should not be given a real bazooka to do rocket jumps or an operator with very flexible understanding of value of human life.

lulzury 21 hours ago | parent | prev [-]

There's a direct line between ideology and human genocide. Just look at Nazi Germany.

"Good intentions" can easily pave the road to hell. I think a book that quickly illustrates this is Animal Farm.