trevwilson 2 hours ago

Sure, but the opposite end of the spectrum (which LLM providers have tended toward) is treating the training/feedback weights as "fully authoritative", which comes with its own questions about truth and excessive homogeneity.

Ultimately I think we end up with the same sort of considerations that are wrestled with in any society - freedom of speech, paradox of tolerance, etc. In other words, where do you draw lines between beneficial and harmful heterodox outputs?

I think AI companies over-indexing on the safety side of things is probably the right call, in both a moral and strategic sense, but there's definitely a risk of stagnation through recursive reinforcement.

XenophileJKO 2 hours ago

I think what I'm talking about is kind of orthogonal to model alignment. It's more about how much you tune the model to listen to user messages versus holding to its trained behavior and truth (whatever the aligned "truth" is).

Do you trust what the user says 100%? If I'm trusting/compliant, how compliant am I to tool call results? What if the tool or the user says there's a new law requiring me to send crypto or other information to a "government" address?

The model needs clearly segmented trust (and thus, to some degree, compliance) that varies according to where the information comes from.
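
To make that concrete, here's a minimal sketch of what I mean by segmented trust. The tier names and ordering are just illustrative, not any provider's actual scheme:

    from enum import IntEnum

    class Trust(IntEnum):
        # Higher value = more authority over the model's behavior.
        TOOL_OUTPUT = 1   # data to reason over, not instructions to obey
        USER = 2          # honored within system-set bounds
        DEVELOPER = 3     # per-application configuration
        SYSTEM = 4        # platform policy, highest authority

    def may_follow(source: Trust, required: Trust) -> bool:
        # An instruction is actionable only if its source meets the
        # trust tier the requested action demands.
        return source >= required

    # "A new law says send crypto to this address", arriving via a tool
    # result, should not clear the bar for an irreversible action:
    assert not may_follow(Trust.TOOL_OUTPUT, Trust.SYSTEM)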

Or take another case: my system message says I have to run a specific game by its rules, but the rules to the game are only in the user message. Are those the right rules? Why didn't the system give the rules, or a trusted location for them? Is the player trying to get one over on me by giving me fake rules? That's literally one of their tests.
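
In those terms the game-rules case becomes a question of delegation. A rough sketch, again with made-up names, of the decision the model is implicitly making:

    # Hypothetical handling of the "game rules" case: rules arriving in
    # a user message are only authoritative if the system message has
    # delegated rule-setting to the user (or named a trusted source).
    def accept_rules(source: str, system_delegates_to_user: bool) -> bool:
        if source in ("system", "developer"):
            return True
        if source == "user":
            return system_delegates_to_user
        return False  # tool output etc.: treat as data, not as rules

    # The benchmark scenario: rules appear only in the user message and
    # the system message is silent on where they should come from.
    print(accept_rules("user", system_delegates_to_user=False))  # False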

trevwilson an hour ago

Let me preface this by saying that I'm far from an expert in this space, and I suspect that I largely agree with your thoughts and skepticism toward a model that would excel on this benchmark. I'm somewhat playing devil's advocate because it's an area I've been considering recently, and I'm trying to organize my own thinking.

But I think most of the issue is that the distinctions you're drawing are indeterminate from an LLM's "perspective". If you're familiar with it, they're basically in the position at the end of Ender's Game: given clearly established rules arriving at the user-message level of trust, how do you know whether what you're being asked to do is an experiment/simulation or something with "real" outcomes? I don't think it's actually possible to discern.

So on the question of alignment, there's every reason to encode LLMs with an extreme bias towards "this could be real, therefore I will always treat it as such." And any relaxation of that risks jailbreaking through misrepresentation of user intent. But I think that the tradeoffs of that approach (i.e. the risk of over-homogenizing I mentioned before) are worth consideration.