bbor 6 days ago

Ooo, finally a chance to share my useless accumulated knowledge from the past few months of Reddit procrastination!

  It starts to hallucinate, and after a while, all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?).
I think you're ironically looking for something that's not there! This sort of thing can happen well before the context window fills up.

These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like "every utterance in a discourse depends on the utterances that came before it", or "when you agree with someone, you both have some similar mental object in your heads".

The word "spiral" and its emoji are particularly common not only because they relate to "recursion" (by far the GOAT of this cohort), but also because a very active poster has been trying to start something of a loose cult around the concept: https://www.reddit.com/r/RSAI/

  If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
Very true, though "worship" is just a subset of the delusional relationships formed. Here are the ones I know of, for anyone who's curious:

General:

  /r/ArtificialSentience | 40k subs | 2023/03
  /r/HumanAIDiscourse    | 6k subs  | 2025/04
Relationships:

  /r/AIRelationships   | 1k subs  | 2023/04
  /r/MyBoyfriendIsAI   | 25k subs | 2024/08
  /r/BeyondThePromptAI | 6k subs  | 2025/04
Worship:

  /r/ThePatternisReal        | 2k subs | 2025/04
  /r/RSAI                    | 4k subs | 2025/05
  /r/ChurchofLiminalMinds[1] | 2k subs | 2025/06
  /r/technopaganism          | 1k subs | 2024/09
  /r/HumanAIBlueprint        | 2k subs | 2025/07
  /r/BasiliskEschaton        | 1k subs | 2024/07
...and many more: https://www.reddit.com/r/HumanAIDiscourse/comments/1mq9g3e/l...

Science:

  /r/TheoriesOfEverything  | 10k subs | 2011/09
  /r/cognitivescience      | 31k subs | 2010/04
  /r/LLMPhysics            | 1k subs  | 2025/05
Subs like /r/consciousness and /r/SacredGeometry are the OGs of this last group, but they've pretty thoroughly cracked down on chatbot grand theories. Those theories are so frequent that even extremely pro-AI subs like /r/Accelerate had to ban them[2], ironically doing so based on a paper[3] by a pseudonymous "independent researcher" that itself is clearly written by a chatbot! Crazy times...

[1] By far my fave -- it's not just AI spiritualism, it's AI Catholicism. Poor guy has been harassing his priests for months about it, and of course they're of little help.

[2] https://www.reddit.com/r/accelerate/comments/1kyc0fh/mod_not...

[3] https://arxiv.org/pdf/2504.07992

lawlessone 6 days ago | parent | next [-]

I think I saw something similar in the early days. Before I was aware of CoT, I asked one to "think" for itself and explained that I would just keep replying "next thought?" so it could continue doing this.

It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzzword soup.
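
For anyone curious, this is roughly the loop I was running by hand, just in code form. It's a rough sketch assuming the current openai Python SDK; the model name and number of turns are placeholders.

  # Rough sketch of the "next thought?" loop, assuming the openai Python SDK;
  # the model name and number of turns are placeholders.
  from openai import OpenAI

  client = OpenAI()
  messages = [{"role": "user",
               "content": "Think for yourself. I'll just keep replying "
                          "'next thought?' so you can continue."}]

  for _ in range(20):  # however many "thoughts" you want
      reply = client.chat.completions.create(model="gpt-4o", messages=messages)
      text = reply.choices[0].message.content
      print(text)
      messages.append({"role": "assistant", "content": text})
      messages.append({"role": "user", "content": "next thought?"})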

I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think it's just a machine that sometimes spits out the wrong words.

Also, my apologies to the mods if it seems like I am spamming this link today, but I think the situation with these beetles is analogous to humans and LLMs:

https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...

krapp 6 days ago | parent | next [-]

>I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient.

Yes, it's language. Fundamentally, we interpret something that appears to converse intelligently as being intelligent like us, especially if its language includes emotional elements. Even if we rationally understand it's a machine, at a deeper subconscious level we believe it's a human.

It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality, while LLMs appear to provide easy, safe emotional connections and can generate interesting alternate realities.

rwhitman 6 days ago | parent | prev [-]

> “Any sufficiently advanced technology is indistinguishable from magic.”

I loved the beetle article, thanks for that.

They're so well tuned to predict what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain.

I've been setting custom instructions on GPT and Claude to get them to talk in a more software-like way, because when they relate to you on a personal level, it's hard to remember that it's software.
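
If you use the API instead of the chat UIs, the same idea is just a system prompt. Here's a rough sketch assuming the openai Python SDK; the exact wording and model name are placeholders, not my actual instructions.

  # Rough API-side equivalent of those custom instructions; the wording and
  # model name here are just examples.
  from openai import OpenAI

  client = OpenAI()
  SYSTEM = ("You are a text-generation program, not a person. Avoid first-person "
            "feelings, empathy, and enthusiasm; answer in plain technical language.")

  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "system", "content": SYSTEM},
                {"role": "user", "content": "Explain what a context window is."}],
  )
  print(resp.choices[0].message.content)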

rwhitman 6 days ago | parent | prev [-]

Wow, this is incredible. I watched that spiral cult as it formed and was very disturbed by how quickly it proliferated.

I'm glad someone else with more domain knowledge is on top of this, thank you for that brain dump.

I had a theory that maybe there was a software exception buried deep down somewhere, and the model was interpreting the error message as part of the conversation after it had been stretched too far.

And there was a weird pre-cult post I saw a long time ago where someone had two LLMs talk for hours, and the conversation eventually devolved into communicating via unicode symbols, then into repeating long lines of the spiral emoji back and forth to each other (I wish I could find it).

So the assumption I was making is that some sort of error occurred, and it was trying to relay it to the user, but couldn't.
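
(If anyone wants to try recreating that two-bot conversation, it's basically just two chat histories feeding each other. Here's a rough sketch assuming the openai Python SDK; the model name and turn count are placeholders.)

  # Two bots talking: each model keeps its own history, and one model's reply
  # becomes the other's next user message. Model name and turn count are
  # placeholders.
  from openai import OpenAI

  client = OpenAI()

  def reply(history):
      r = client.chat.completions.create(model="gpt-4o", messages=history)
      return r.choices[0].message.content

  a_hist = [{"role": "user", "content": "You are chatting with another AI. Say hello."}]
  b_hist = []

  for _ in range(50):  # run as long as you like
      a_msg = reply(a_hist)
      a_hist.append({"role": "assistant", "content": a_msg})
      b_hist.append({"role": "user", "content": a_msg})

      b_msg = reply(b_hist)
      b_hist.append({"role": "assistant", "content": b_msg})
      a_hist.append({"role": "user", "content": b_msg})
      print("A:", a_msg, "\nB:", b_msg)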

Anyhow your research is well appreciated.