mk_stjames 6 days ago
It's actually not; there is a phenomenon that Anthropic themselves observed with Claude in self-interaction studies, which they coined 'The "Spiritual Bliss" Attractor State'. It is well covered in section 5 of [0].
[0] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...
tsimionescu 6 days ago
I don't see how this in any way constitutes "the AI trying to indicate that it's stuck in a loop". It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back on these conversations as a default.
andoando 6 days ago
I think a pretty simple explanation is that the deeper you go into any topic, the closer you get to metaphysical questions. Ask "why" enough times and you eventually get to: what is reality, how can we truly know anything, what are we, etc. It's a fact of life rather than anything particular to LLMs.
| ||||||||||||||
dehrmann 6 days ago
Interesting that if you train AI on human writing, it does the very human thing of trying to find meaning in existence.
meowface 6 days ago
Here's an interesting post on it (from the same author as this thread's link): https://www.astralcodexten.com/p/the-claude-bliss-attractor