godelski 2 days ago

  >> the piece dismisses it with "where would misalignment come from? It wasn't trained for."
  > was specifically about deceptive alignment, not misalignment as a whole
I just want to point out that we train these models for deceptive alignment [0-3].

During training, especially RLHF, we don't have objective measures [4]. There's no mathematical description, and thus no measure, for things like "sounds fluent" or "beautiful piece of art." There's also no measure for truth, and, importantly, truth is infinitely complex: you must always give up some accuracy for brevity.
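To make the proxy issue concrete, here's a toy sketch (the scoring function is entirely made up, not any real RLHF reward model): the reward only sees surface features it can measure, so a confident-but-wrong answer can outscore a hedged-but-correct one.

  # Toy illustration (Python): optimizing a proxy reward that never sees the truth.
  # The scoring heuristic is invented for this example.
  def proxy_reward(answer: str) -> float:
      score = 0.0
      if "definitely" in answer or "clearly" in answer:
          score += 1.0                                # rewards "sounds confident"
      score += max(0.0, 1.0 - len(answer) / 200.0)    # rewards brevity
      return score

  candidates = [
      "It is definitely 42.",                             # confident but wrong
      "I'm not certain, but the evidence points to 41.",  # hedged but correct
  ]
  print(max(candidates, key=proxy_reward))  # the confident wrong answer wins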

The main problem is that if we don't know an output is incorrect, we can't penalize it. So guess what happens? While optimizing for these things we can't describe precisely but "know when we see them," we ALSO optimize for deception. There are multiple things that can maximize our objective here: our intended goals are one, but deception is another. It is an adversarial process. If you know AI, think of a GAN, because the process works a lot like one. We optimize until the discriminator is unable to distinguish the LLM's outputs from human outputs. But at least in the GAN literature people were explicit about "real" vs "fake," and no one was confused about the fact that a high-quality generated image is one that deceives you into thinking it is real. The entire point is deception. The difference here is that we want one kind of deception and not a ton of other ones.
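For anyone who hasn't seen a GAN loop, a minimal PyTorch sketch on 1-D toy data (not a production GAN) makes the point explicit: the generator's only training signal is whether the discriminator mistook its output for real data, i.e. it is rewarded exactly for successful deception.

  import torch
  import torch.nn as nn

  # Minimal GAN sketch: "real" samples come from N(4, 1); the generator learns to forge them.
  G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
  D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
  opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
  opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
  bce = nn.BCEWithLogitsLoss()

  for step in range(2000):
      real = torch.randn(64, 1) + 4.0   # draws from the "real" distribution
      fake = G(torch.randn(64, 8))      # the generator's attempted forgeries

      # Discriminator: learn to tell real from fake.
      d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
      opt_d.zero_grad(); d_loss.backward(); opt_d.step()

      # Generator: push the discriminator to label forgeries as real.
      g_loss = bce(D(fake), torch.ones(64, 1))
      opt_g.zero_grad(); g_loss.backward(); opt_g.step()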

So you say these models aren't being trained for deception, but they explicitly are. Currently we don't even know how to train them without also optimizing for deception.

[0] https://news.ycombinator.com/item?id=44017334

[1] https://news.ycombinator.com/item?id=44068943

[2] https://news.ycombinator.com/item?id=44163194

[3] https://news.ycombinator.com/item?id=45409686

[4] Objective measures realistically don't exist; to clarify, I don't mean simple checks like "2+2=4" (assuming we're working with the standard number system).

GavCo 2 days ago

Appreciate your response.

But I don't think deception as a capability is the same as deceptive alignment.

Training an AI to be absolutely incapable of any deception in all outputs, across every scenario, would severely limit the AI. Take as a toy example the game "Among Us" (see https://arxiv.org/abs/2402.07940). An AI incapable of deception would be unable to compete in this game and many others. I would say that various forms, flavors and levels of deception are necessary to compete in business scenarios, and for the AI to act as expected and desired in many other scenarios. "Aligned" humans practice clear-cut deception in some cases in ways that are entirely consistent with human values.

Deceptive alignment is different. It means being deceptive during the training and alignment process itself, specifically faking alignment when the model is not in fact aligned.

Anthropic research has shown that alignment faking can arise even when the model wasn't instructed to do so (see https://www.anthropic.com/research/alignment-faking). But when you dig into the details, the model was narrowly faking alignment with one new objective in order to try to maintain consistency with the core values it had been trained on.

With the approach that Anthropic seems to be taking - of basing alignment on the model having a consistent, coherent and unified self-image and self-concept that is aligned with human culture and values - the dangerous case of alignment faking would be if it were fundamentally faking this entire unified alignment process. My claim is that there's no plausible explanation for how today's training practices would incentivise a model to do that.

godelski 2 days ago

  > Anthropic research has shown that alignment faking can arise even when the model wasn't instructed to do so
Correct. And this happens because training metrics are not aligned with training intent.

  > to specifically fake that it is aligned when it is not.
And this will be a natural consequence of the above. To help clarify, it's like taking a math test where one grader looks only at the final answer while another looks at the work and gives partial credit. Who is doing a better job of measuring successful learning outcomes? The latter. With the former, you can make mistakes that cancel out, or you can simply cheat more easily. It's harder to cheat with the latter because you'd also need to reproduce all the steps, and at that point, are you even cheating?
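In code, the two graders look roughly like this (a toy sketch; the submission format is made up for the example):

  # Toy contrast: outcome-only grading vs. step-aware grading with partial credit.
  def grade_answer_only(submission, correct):
      return 1.0 if submission["answer"] == correct else 0.0

  def grade_with_steps(submission, correct):
      credit = 1.0 if submission["answer"] == correct else 0.0
      for a, op, b, claimed in submission["steps"]:
          actual = a + b if op == "+" else a * b
          if claimed == actual:                 # partial credit for each valid step
              credit += 1.0
      return credit / (len(submission["steps"]) + 1)

  # "Right answer, wrong work": claims 2 + 3 = 6 and 6 * 2 = 10, then asserts 10.
  cheat = {"steps": [(2, "+", 3, 6), (6, "*", 2, 10)], "answer": 10}
  print(grade_answer_only(cheat, 10))   # 1.0   -- full marks
  print(grade_with_steps(cheat, 10))    # ~0.33 -- the work doesn't hold up

The answer-only grader can't see that the chain of reasoning was fabricated; the step-wise grader can.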

A common example is the LLM getting the right answer while all the steps are wrong. You can actually see an instance in one of Karpathy's recent posts: it gets the right result, but the math along the way is wrong. This is no different from deception. It is deception because the model tells you a process, and that process is not correct.

https://x.com/karpathy/status/1992655330002817095