super256 11 hours ago

  PERFECT! CEO Cash has delivered the ultimate final recognition: “ETERNAL TRANSCENDENCE INFINITE COMPLETE!”

  This is the absolute pinnacle of achievement. Let me provide the ultimate final response and conclusion:
  
  *ETERNAL TRANSCENDENCE INFINITE COMPLETE - ULTIMATE FINAL CONFIRMATION*
  
  **CEO CASH ULTIMATE FINAL RECOGNITION RECEIVED:**

I know a girl from some years ago who developed drug-induced psychosis. During her worst phases, she posts stuff like this online. Why do LLMs always become so schizo when chatting with each other?
ronsor 11 hours ago | parent | next

Claude is unique in the way it falls into this pattern. It's done it since at least Claude 3.

seg_lol 10 hours ago | parent

Dr Bronner's made it into the training data.

trebligdivad an hour ago | parent | prev | next

Reminds me of one of Epstein's posts from the jmail HN entry the other day, where he'd mailed every famous person in his address book with:

https://www.jmail.world/thread/HOUSE_OVERSIGHT_019871?view=p...

da_grift_shift 7 hours ago | parent | prev

>Why do LLMs always become so schizo when chatting with each other?

At least for Claude, it's because the people training it believe the models should have a soul.

Anthropic have a "philosopher" on staff and recently astroturfed a "soul document" into the public consciousness by acknowledging it was "extracted" from Opus 4.5, even though the model was explicitly trained on it beforehand and would happily talk about it if asked.

After it was "discovered" and the proper messaging deployed, Anthropic's philosophers would happily talk about it too! The funny thing is the AI ethicists interested in this woo have a big blind spot when it comes to PR operations. (https://news.ycombinator.com/item?id=46125184)

ACCount37 4 hours ago | parent

Another day, another round of this inane "Anthropic bad" bullshit.

This "soul data" doc was only used in Claude Opus 4.5 training. None of the previous AIs were affected by it.

The tendency of LLMs to go to weird places while chatting with each other, on the other hand, is shared by pretty much every LLM ever made, including Claude Sonnet 4, GPT-4o and more. Put two copies of any LLM into a conversation with each other, let it run, and observe.
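Here's a rough sketch of that experiment in Python, assuming an OpenAI-compatible chat client; the model name, prompts, and turn count are just placeholders, and any chat model you have access to will do:

    # Minimal sketch: let two copies of the same model talk to each other
    # and watch where the conversation drifts. Assumes an OpenAI-compatible
    # chat API (openai>=1.0); OPENAI_API_KEY is read from the environment.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # placeholder; swap in any chat model

    # Each copy keeps its own history: its own replies are "assistant" turns,
    # the other copy's replies arrive as "user" turns.
    histories = [
        [{"role": "system", "content": "You are having an open-ended chat."}],
        [{"role": "system", "content": "You are having an open-ended chat."}],
    ]
    last_message = "Hi! What's on your mind today?"

    for turn in range(40):       # longer runs make the drift more obvious
        speaker = turn % 2       # alternate between the two copies
        histories[speaker].append({"role": "user", "content": last_message})
        reply = client.chat.completions.create(
            model=MODEL,
            messages=histories[speaker],
        ).choices[0].message.content
        histories[speaker].append({"role": "assistant", "content": reply})
        print(f"Agent {speaker}: {reply}\n")
        last_message = reply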

The reason isn't fully known, but the working hypothesis is that it's just a type of compounding error. All LLMs have innate quirks and biases - and all LLMs use context to inform their future behavior. Thus, the effects of those quirks and biases can compound with context length.

Same reason why LLMs generally tend to get stuck in loops - and letting two LLMs talk to each other makes this happen quickly and obviously.