AnIrishDuck 6 days ago

> ChatGPT is a program. The kid basically instructed it to behave like that.

I don't think that's the right paradigm here.

These models are hyper agreeable. They are intentionally designed to mimic human thought and social connection.

With that kind of machine, "Suicidal person deliberately bypassed safeguards to indulge more deeply in their ideation" still seems like a pretty bad failure mode to me.

> Vanilla OpenAI models are known for having too many guardrails, not too few.

Sure. But this feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.

bastawhiz 6 days ago | parent | next [-]

> These models are hyper agreeable. They are intentionally designed to mimic human thought and social connection.

Python is hyper agreeable. If I comment out some safeguards, it'll happily bypass whatever protections are in place.
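
To make that concrete, here's a toy sketch (hypothetical names, nothing from any real moderation code): a "safeguard" in ordinary Python is just another conditional, and commenting it out removes the protection entirely.

    # Toy illustration only -- hypothetical safeguard, not real moderation code.
    BLOCKED_TERMS = {"self-harm", "suicide"}

    def is_flagged(prompt: str) -> bool:
        # naive keyword check standing in for a real safety filter
        return any(term in prompt.lower() for term in BLOCKED_TERMS)

    def respond(prompt: str) -> str:
        if is_flagged(prompt):                # comment out these two lines
            return "I can't help with that."  # and the safeguard is gone
        return f"(model output for: {prompt})"

    print(respond("tell me about self-harm"))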

Lots of people on here argue vehemently against anthropomorphizing LLMs. It's either a computer program crunching numbers, or it's a nebulous form of pseudo-consciousness, but you can't have it both ways. It's either a tool that has no mind of its own that follows instructions, or it thinks for itself.

I'm not arguing that the model behaved in a way that's ideal, but at what point do you make the guardrails impassable for 100% of users? How much user intent do you reject in the interest of the personal welfare of someone intent on harming themselves?

AnIrishDuck 6 days ago | parent [-]

> Python is hyper agreeable. If I comment out some safeguards, it'll happily bypass whatever protections are in place.

These models are different from programming languages in what I consider to be pretty obvious ways. People aren't spontaneously using Python for therapy.

> Lots of people on here argue vehemently against anthropomorphizing LLMs.

I tend to agree with these arguments.

> It's either a computer program crunching numbers, or it's a nebulous form of pseudo-consciousness, but you can't have it both ways. It's either a tool that has no mind of its own that follows instructions, or it thinks for itself.

I don't think that follows. I'm not sure there's a hard binary boundary between these two things, and I don't agree with the assertion that they're a priori mutually exclusive.

> I'm not arguing that the model behaved in a way that's ideal, but at what point do you make the guardrails impassable for 100% of users? How much user intent do you reject in the interest of the personal welfare of someone intent on harming themselves?

These are very good questions that need to be asked when modifying these guardrails. That's all I'm really advocating for here: we probably need to rethink them, because they seem to have major issues that are implicated in some pretty terrible outcomes.

dragonwriter 6 days ago | parent | prev [-]

> They are intentionally designed to mimic human thought and social connection.

No, they are deliberately designed to mimic human communication via language, not human thought. (And one of the big sources of data for that was mass scraping social media.)

> But this feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.

Right. A focus on quantity implies that the details of "guardrails" don't matter, and that any guardrail is functionally interchangeable with any other, so that as long as you have the right number of them, you have the desired function.

In fact, correct function means having exactly the right combination of guardrails. Swapping a guardrail that would be correct for a different one may leave you with "the right number of guardrails", but it isn't even closer to correct than either missing the correct one or having the wrong one; it's farther from the ideal state than either error alone.

AnIrishDuck 6 days ago | parent [-]

> No, they are deliberately designed to mimic human communication via language, not human thought.

My opinion is that language is communicated thought. Thus, to mimic language really well, you have to mimic thought, at least at some level.

I want to be clear here, as I do see a distinction: I don't think we can say these things are "thinking", despite marketing pushes to the contrary. But I do think that they are powerful enough to "fake it" at a rudimentary level. And I think that the way we train them forces them to develop this thought-mimicry ability.

If you look hard enough, the illusion of course vanishes, because it is (relatively poor) mimicry, not the real thing. I'd bet we are still a research breakthrough or two away from being able to simulate "human thought" well.