reconnecting 6 hours ago

I’m not an active LLM user, but I was in a situation where I asked Claude several times not to implement a feature, and it kept doing it anyway.

antdke 6 hours ago | parent | next [-]

Yeah, anyone who’s used LLMs for a while would know that this conversation is a lost cause and the only option is to start fresh.

But a common failure mode for those who are new to LLMs, or who use them very infrequently, is that they will try to salvage the conversation and continue it.

What they don’t understand is that this exchange has permanently rotted the context and will rear its head in ugly ways the longer the conversation goes.

hedora 5 hours ago | parent [-]

I’ve found this happens with repos over time. Something convinces it that implementing the same bug over and over is a natural next step.

I’ve found that keeping one session open and giving progressively less polite feedback when it makes that mistake sometimes bumps it out of the local maximum.

Clearing the session doesn’t work because the poison fruit lives in the git checkout, not the session context.

siva7 6 hours ago | parent | prev | next [-]

People should read a bit more about the transformer architecture to better understand why telling a model what not to do is a bad idea.

computomatic 5 hours ago | parent | next [-]

I find myself wondering about this though. Because, yes, what you say is true. Transformer architecture isn’t likely to handle negations particularly well. And we saw this plain as day in early versions of ChatGPT, for example. But then all the big players pretty much “fixed” negations and I have no idea how. So is it still accurate to say that understanding the transformer architecture is particularly informative about modern capabilities?

tovej 5 hours ago | parent [-]

They did not "fix" the negation problem. It's still there. Along with other drift/misinterpretation issues.

II2II 4 hours ago | parent | prev | next [-]

I'm not sure that advice is effective either.

I use an LLM as a learning tool. I'm not interested in it implementing things for me, so I always sidestep its seemingly frantic desire to write code by ignoring the offer and prompting it along other lines. It will still enthusiastically burst into code.

LLMs do not have emotions, but they seem to be excessively insecure and overly eager to impress.

arboles 5 hours ago | parent | prev [-]

Please elaborate.

hugmynutus 5 hours ago | parent | next [-]

This is because LLMs don't actually understand language; they're just a "which word fragment comes next" machine.

    Instruction: don't think about ${term}
Now `${term}` is in the LLM's context window. The attention mechanism will amplify the logits related to `${term}` based on how often `${term}` appeared in the chat; this is just how text gets transformed into numbers for the LLM to process. The relational structure transformers learn will similarly amplify tokens related to `${term}`, since that is what training is about: you said `fruit`, so `apple`, `orange`, `pear`, etc. all become more likely to get spat out.

The negation of a term ("do not under any circumstances do X") generally does not work unless the model has received extensive training and fine-tuning to ensure a specific "do not generate X" will influence every single downstream weight (multiple times), which labs often do for writing style and specific (illegal) terms. So for drafting emails or chatting, it works fine.

But once you get into advanced technical concepts and profession-specific jargon, it doesn't work at all.
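The mechanism described above can be caricatured in a few lines. This is a toy sketch, not a real transformer: the two-dimensional "embeddings" and the word list are made up for illustration. It only shows the core claim, that a term appearing in the context raises the scores of related tokens by similarity, while the surrounding "don't" contributes nothing:

```python
# Toy sketch (NOT a real LLM): mentioning a term raises the scores of
# related tokens, regardless of any "don't" preceding it.
import math

# Hypothetical 2-D embeddings: fruit-related words point one way,
# unrelated words another. Values are invented for illustration.
embeddings = {
    "fruit":  [1.0, 0.0],
    "apple":  [0.9, 0.1],
    "pear":   [0.8, 0.2],
    "tensor": [0.0, 1.0],
    "mutex":  [0.1, 0.9],
}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def next_token_scores(context_words, vocab):
    """Score each vocab word by dot-product similarity with the context.

    Words like "don't" have no embedding here, mirroring the claim that
    the negation contributes little compared to the term itself.
    """
    scores = []
    for w in vocab:
        s = 0.0
        for c in context_words:
            if c in embeddings:
                s += sum(a * b for a, b in zip(embeddings[c], embeddings[w]))
        scores.append(s)
    return softmax(scores)

vocab = ["apple", "pear", "tensor", "mutex"]
p_neutral = next_token_scores(["the"], vocab)                # no signal: uniform
p_negated = next_token_scores(["don't", "mention", "fruit"], vocab)

# "apple" becomes MORE likely once "fruit" is in the context,
# despite the "don't" that preceded it.
assert p_negated[0] > p_neutral[0]
```

Real attention is of course learned and far richer than a dot product over fixed vectors, but the direction of the effect is the same: the negated term still pulls probability mass toward its neighbors.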

arcanemachiner 5 hours ago | parent | prev [-]

Pink elephant problem: Don't think about a pink elephant.

OK. Now, what are you thinking about? Pink elephants.

Same problem applies to LLMs.

oytis 6 hours ago | parent | prev | next [-]

Sounds like the pink elephant problem.

reconnecting 5 hours ago | parent [-]

Elephant in the room problem: this thing is unreliable, but most engineers seem to ignore that fact by burying its mistakes in larger PRs.

xantronix 5 hours ago | parent | prev | next [-]

"You're holding it wrong" is not going anywhere anytime soon, is it?
