ec109685 | 7 days ago
Read 5.2. They don’t add a new loss over the probe signal. Instead they take a fixed persona vector v (found beforehand) and add +αv to the residual stream on each forward pass while fine-tuning. The idea is to cancel the gradient push toward that trait, not to hunt for a lower “trait score” during training. Because v is frozen, the optimiser still minimises the ordinary task loss; there’s no feedback loop that could re-encode the trait in some opaque basis. Empirically, Fig. 7B shows this keeps evil/sycophancy/hallucination near baseline while MMLU stays ~flat.

Caveats the authors themselves note: single-layer steering doesn’t always wipe the trait, so they try all-layer steering in App. J.3, which works better without hurting accuracy. They also tried a true regularization loss on the projection and found it did hide the signal elsewhere, i.e. the failure mode you’re worried about. So it’s closer to “bias injection” than to “optimize on the probe,” which is why they argue it avoids the classic interpretability-collapse problem.
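To make the distinction concrete, here’s a minimal sketch of what “add a frozen vector to the residual stream during fine-tuning” could look like, assuming a Hugging Face-style decoder whose blocks return tuples of hidden states. The names persona_vector, alpha, and layer_idx (and the alpha=4.0 / layer_idx=16 values) are illustrative, not the paper’s actual code.

    import torch

    def add_preventative_steering(model, persona_vector, alpha, layer_idx):
        # Freeze the persona direction: no gradient ever flows into v, so the
        # optimiser only ever sees the ordinary task loss.
        v = persona_vector.detach()

        def hook(module, inputs, output):
            # Decoder blocks typically return (hidden_states, ...); shift the
            # hidden states by +alpha*v on every forward pass.
            if isinstance(output, tuple):
                return (output[0] + alpha * v,) + output[1:]
            return output + alpha * v

        # Attachment point varies by architecture; this matches LLaMA-style models.
        layer = model.model.layers[layer_idx]
        return layer.register_forward_hook(hook)

    # handle = add_preventative_steering(model, v, alpha=4.0, layer_idx=16)
    # ... run the normal fine-tuning loop, optimising only the task loss ...
    # handle.remove()  # detach the hook once fine-tuning is done

By contrast, a probe-based regulariser would add something like lambda * (hidden @ v)**2 to the loss, which hands the optimiser an incentive to relocate the trait somewhere the probe can’t see it, i.e. the failure the authors report when they tried exactly that.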
Vetch | 6 days ago | parent
But why isn't this merely papering over a more fundamental issue with how these models are "aligned"? LLMs are not inherently sycophantic: kimi k2 and o3 are not, and Sydney, mentioned in the blog post, was most decidedly not. In my experience the sycophancy problem has persisted longest in the Anthropic models, so it may be most deeply rooted for them. It's only recently, perhaps with the introduction of user A/B preference tests such as those run by lmarena and the providers themselves, that this has become a major issue for most other LLMs.

Thinking that simple interventions like adding an anti-evil vector to the residual stream will improve behavior sounds naively dangerous. It would not surprise me if unexpected and unwanted downstream effects resulted from this, which a future paper will address too; not unlike what happened with tuning for user preference.