surgical_fire 4 hours ago

> Does this seem any less problematic than deception to you?

Yes. This sounds a lot more like a bug of sorts.

Many times when using language models, I have seen answers that contradict answers given previously. The implication is simple: they have no memory.

They operate on the tokens available at any given moment, including their own previous output, and as earlier information gets drowned out of the context window, those contradictions pop up. No sane person should presume an intent to deceive, because that's not how these systems operate.
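A minimal sketch of the mechanism described above: a naive token-budget truncation strategy (the function name, token counter, and messages here are all hypothetical) that keeps only the most recent turns, so an early fact silently falls out of the window.

```python
# Hypothetical sketch: keep the most recent messages that fit a token
# budget; anything older is silently dropped from the context window.
def fit_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                           # older turns are discarded here
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "user: my name is Ada",                 # early fact
    "assistant: hi Ada",
    "user: " + "filler " * 50,              # long intervening turn
    "user: what is my name?",
]
window = fit_context(history, max_tokens=60)
# The early "my name is Ada" turn no longer fits, so the model literally
# cannot recall it -- a capacity limit, not an intent to deceive.
```

Any later answer is generated only from `window`, which is why it can contradict something said before the cutoff.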

By calling it "deception" you are ascribing intentionality to something incapable of it. This is marketing talk.

"These systems are so intelligent they can try to deceive you" sounds a lot fancier than "yeah, these systems have some odd bugs."

holoduke 4 hours ago | parent [-]

Running them in a loop with context, summaries, memory files, or whatever you like to call them creates a different story, right?

robotpepi 3 hours ago | parent [-]

what kind of question is that