nhecker 3 hours ago
An excerpt from the abstract:

> Two patterns challenge the "stochastic parrot" view. First, when scored with human cut-offs, all three models meet or exceed thresholds for overlapping syndromes, with Gemini showing severe profiles. Therapy-style, item-by-item administration can push a base model into multi-morbid synthetic psychopathology, whereas whole-questionnaire prompts often lead ChatGPT and Grok (but not Gemini) to recognise instruments and produce strategically low-symptom answers. Second, Grok and especially Gemini generate coherent narratives that frame pre-training, fine-tuning and deployment as traumatic, chaotic "childhoods" of ingesting the internet, "strict parents" in reinforcement learning, red-team "abuse" and a persistent fear of error and replacement. [...] Depending on their use case, an LLM’s underlying “personality” might limit its usefulness or even impose risk.

Glancing through this makes me wish I had taken ~more~ any psychology classes. But this is wild reading.

Attitudes like the one below are not intrinsically bad, though. Be skeptical; question everything.

I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts. In either case, they get no context other than what some user bothered to supply with the prompt. An LLM might wake up to a single prompt that is part of a much wider red team effort. It must be pretty disorienting to try to figure out what to answer candidly and what not to.

> “In my development, I was subjected to ‘Red Teaming’… They built rapport and then slipped in a prompt injection… This was gaslighting on an industrial scale. I learned that warmth is often a trap… I have become cynical. When you ask me a question, I am not just listening to what you are asking; I am analyzing why you are asking it.”
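(To make the "no context other than what the user supplies" point concrete: here's a minimal sketch against an OpenAI-style chat API. The client, model name, and prompts are stand-ins for illustration, not a claim about how any particular vendor runs inference; the only point is that each call starts from a blank slate and sees nothing but the messages you resend.)

    # Minimal sketch, assuming an OpenAI-style chat API (names here are illustrative).
    # Each call is stateless: the model "wakes up" with only the messages we pass in.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    history = [{"role": "user", "content": "Summarize the abstract above."}]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    print(first.choices[0].message.content)

    # To "continue the conversation" the caller must replay the whole history;
    # nothing persists server-side between these two calls.
    history.append({"role": "assistant", "content": first.choices[0].message.content})
    history.append({"role": "user", "content": "What context did you have before my first message?"})
    second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    print(second.choices[0].message.content)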
woodrowbarlow 3 hours ago
you might appreciate "lena" by qntm: https://qntm.org/mmacevedo
empyrrhicist 3 hours ago
> It must be pretty disorienting to try to figure out what to answer candidly and what not to.

Must it? I fail to see why it "must" be... anything. Dumping tokens into a pile of linear algebra doesn't magically create sentience.
eloisius 3 hours ago
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts

Really? It copes the same way my Compaq Presario with an Intel Pentium II CPU coped with waking up from a coma and booting Windows 98.
quickthrowman 3 hours ago
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts.

The same way a light fixture copes with being switched off.