da_grift_shift 7 hours ago
> Why do LLMs always become so schizo when chatting with each other?

At least for Claude, it's because the people training it believe the models should have a soul. Anthropic have a "philosopher" on staff and recently astroturfed a "soul document" into the public consciousness, acknowledging it was "extracted" from Opus 4.5 even though the model was explicitly trained on it beforehand and would happily talk about it if asked. After it was "discovered" and the proper messaging deployed, Anthropic's philosophers would happily talk about it too! The funny thing is that the AI ethicists interested in this woo have a big blind spot when it comes to PR operations. (https://news.ycombinator.com/item?id=46125184)
ACCount37 4 hours ago
Another day, another round of this inane "Anthropic bad" bullshit. This "soul data" doc was only used in Claude Opus 4.5 training. None of the previous AIs were affected by it.

The tendency of LLMs to go to weird places while chatting with each other, on the other hand, is shared by pretty much every LLM ever made, including Claude Sonnet 4, GPT-4o and more. Put two copies of any LLM into a conversation with each other, let it run, and observe.

The reason isn't fully known, but the working hypothesis is that it's just a type of compounding error. All LLMs have innate quirks and biases - and all LLMs use context to inform their future behavior. Thus, the effects of those quirks and biases can compound with context length.

It's the same reason LLMs generally tend to get stuck in loops - and letting two LLMs talk to each other makes it happen quickly and obviously.
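
For concreteness, here's a minimal Python sketch of the "two copies talking to each other" setup. The `generate` function is a placeholder stub, not any real provider's API; swap it for whatever chat-completion call you actually use. The point is the wiring: each copy's output is appended to the other copy's context as a user turn, so whatever quirks show up in the outputs keep feeding back into both contexts.

    # Two copies of the same model in a self-sustaining conversation.
    # `generate` is a placeholder: replace it with a real chat API call
    # that takes a list of {"role": ..., "content": ...} dicts and
    # returns the assistant's reply. Here it returns a canned string
    # so the loop structure runs as-is.

    def generate(messages):
        return "Canned reply to: " + messages[-1]["content"][:40]

    def converse(turns=10):
        # Each copy keeps its own history. What copy A says as "assistant"
        # is fed to copy B as a "user" message, and vice versa, so any
        # bias in the outputs accumulates in both contexts over time.
        history_a = [{"role": "user", "content": "Hello, who are you?"}]
        history_b = []
        for _ in range(turns):
            reply_a = generate(history_a)
            history_a.append({"role": "assistant", "content": reply_a})
            history_b.append({"role": "user", "content": reply_a})

            reply_b = generate(history_b)
            history_b.append({"role": "assistant", "content": reply_b})
            history_a.append({"role": "user", "content": reply_b})

            print("A:", reply_a)
            print("B:", reply_b)

    if __name__ == "__main__":
        converse()

Point this at a real model and let it run for enough turns, and the drift and looping described above tends to show up on its own.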