achierius 6 days ago

> We see that the nightmare scenario - a person with no previous psychosis history or risk factor becoming fully psychotic - was uncommon, at only 10% of cases. Most people either had a previous psychosis history known to the respondent, or had some obvious risk factor, or were merely crackpots rather than full psychotics.

It's unfortunate to see the author take this line. This is essentially the conventional tack that insanity is separable: some people are "afflicted", some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyway, so GPT didn't contribute anything novel, just moved them along a path they were already on. But anyone with serious experience with schizophrenia would understand that this isn't how it works: 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

For this reason it is very important that vulnerable people be well-moored, anchored to reality by their friends and family. A normal person would take care to not support fantasies of government spying or divine miracles or &c where not appropriate, but ChatGPT will happily egg them on. These intermediate cases that Scott describes -- cases where someone is 'on the edge', but not yet detached from reality -- are the ones you really want to watch out for. So where he estimates an incidence rate of 1/100,000, I think his own data gives us a more accurate figure of ~1/20,000.

kayodelycaon 6 days ago | parent | next [-]

You might want to read the entire article. His depiction of bipolar is completely accurate. In fact, it is so precisely accurate in every detail, and conveyed with no extraneous information, that it is indicative of someone who knows the disorder very well.

When I write fiction or important emails, I am precise with the words I use. I notice these kinds of details. I’m also bipolar and self-aware enough to be deeply familiar with it.

phreeza 6 days ago | parent [-]

The author is a psychiatrist so it would make sense that he is familiar with the subject.

vintermann 5 days ago | parent [-]

And as I recall, he used to be a lot more clear that mental illness isn't always clear cut. I was surprised at the "obviously, we all know what mental illness is" attitude coming from him.

anon84873628 5 days ago | parent [-]

I imagine he's been doing this so long that taking a sober, delicate, appeal-to-every-reader approach to every article just isn't fun or practical. He wants to get down his thoughts for his core reader base and move on. What fraction goes viral anyway? And so what if some people misunderstand or don't like it? You're always going to have those people anyway no matter what :shrug:

meowface 6 days ago | parent | prev | next [-]

I'm not trying to argue from authority or get into credibility wars*, but Scott is a professional psychiatrist who has treated dozens or hundreds of schizophrenic patients and has written many thorough essays on schizophrenia. Obviously someone could do that and still be wrong, but I think this is a carefully considered position on his part and not just wild assumptions.

*(or, well, okay, I guess I de facto am, but if I say I'm not I at least acknowledge how it looks)

mquander 6 days ago | parent [-]

You said it yourself. That's really not an appropriate response to a specific criticism.

riwsky 6 days ago | parent | next [-]

The criticism invoked “anyone with serious experience with schizophrenia”, implying the author of the article is not such a one. Citing the author’s experience is a perfectly valid rebuttal to that implication. It’s not an argument from authority, but about it.

meowface 6 days ago | parent | prev | next [-]

I'm not trying to say that that should strongly increase the probability he's correct. I just think it's useful context, because the parent is potentially implying that the author is naively falling for common misconceptions ("following the conventional tack") rather than staking a deliberated claim. Or they might not be implying it but someone could come away with that conclusion.

kelnos 6 days ago | parent | prev [-]

I mean, on one hand you have a professional psychiatrist who has treated many people for the disorder we're talking about, and on the other, we have a rando on HN who hasn't presented any credentials.

Not saying the latter person is automatically wrong, but I think if you're going to argue against something said by someone who is a subject matter expert, the bar is a bit higher.

anon84873628 6 days ago | parent | prev | next [-]

One of the questions that sets up the premise of the article in the first paragraph is, "Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already?"

That's why he's homing in on that specific scenario to determine if chatbots are uniquely crazy-making or something. The professional psychiatrist author is not unaware of the things you're saying. They're just not the purpose of the survey & article.

bccdee 5 days ago | parent [-]

Well yeah, that's a false dichotomy. If you're vulnerable and a chatbot sends you into a spiral of psychosis, your pre-existing vulnerability doesn't negate the fact that a harm has been done to you. If you have a heart condition, and I shoot you with a Taser, and it kills you… I've killed you. You weren't "already dead" just because you were vulnerable.

anon84873628 5 days ago | parent [-]

Yes but "what is the effect of tasers on hearts" is an interesting question when tasers are brand new. If it kills people with obvious pre-existing risks then that is not very surprising. If it kills 50% of otherwise healthy people in a way we didn't anticipate, that is alarming and important to distinguish.

Imagine someone does a quick survey to estimate that tasers aren't killing people we don't expect, and some readers respond saying how dare you ignore the vulnerable heart people. That's still an important thing to consider and maybe we should be careful with the mass scale rollout of tasers, but it wasn't really the immediate point.

bccdee 5 days ago | parent [-]

> Imagine someone does a quick survey to estimate that tasers aren't killing people we don't expect

Given that the quote you cited was, "Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already," I'd say the equivalent would be something like, "Are tasers really killing people, or were tasered heart attack victims dying already?"

And yeah, I'd be mad about that framing! The fact that the people who die had a preexisting vulnerability does not mean they were "already dying" or that they were not "really killed."

anon84873628 4 days ago | parent [-]

Shoving full implicit context into the analogy, it would be more like "are tasers really killing otherwise physically healthy people, or are the recent notable deaths primarily from people with pre-existing risks?"

I can agree that Alexander might appear flippant or even callous about mental health at times (especially compared to modern liberal social media sensibilities), but I chalk that up to the well-earned desensitization of a professional working in the field for decades.

bccdee 3 days ago | parent [-]

There's flippancy that crosses social lines, and there's flippancy that blurs technical distinctions. Handwaving the difference between someone whose mental disorders are under control and someone experiencing psychosis is like an oncologist handwaving the difference between terminal cancer and cancer in remission. The difference is enormous, to the point that the whole purpose of the psychiatric field is to move people from the one category to the other. I don't think technical expertise justifies glossing over that distinction.

shayway 6 days ago | parent | prev | next [-]

The article's conclusion is exactly what you describe: that AI is bringing out latent predisposition toward psychosis through runaway feedback loops, that it's a bidirectional relationship where the chemicals influence thoughts and thoughts influence chemicals until we decide to call it psychosis.

I hate to be the 'you didn't read the article' guy, but that line taken out of context is the exact opposite of my takeaway from the article as a whole. For anyone else who skims comments before clicking, I would invite you to read the whole thing (or at least get past the poorly-worded intro) before drawing conclusions.

jedharris 6 days ago | parent | prev | next [-]

> it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

This seems very incorrect, or at least drastically underspecified. These trains of thought are "normal" (i.e. common and unremarkable) so why don't they "latch your brain into a very undesirable state" lots of the time?

I don't think Scott or anyone up to speed on modern neuroscience would deny the coupling of mental state and brain chemistry--in fact I think it would be more accurate to say both of them are aspects of the dynamics of the brain.

But this doesn't imply that "simple normal trains of thought" can latch our brain dynamics into bad states -- i.e., in dynamics language, move us into an undesirable attractor. That would require a very problematic fragility in our normal self-regulation of brain dynamics.

AstralStorm 6 days ago | parent [-]

See, the key here is that the AI provides a very enticing social partner.

Think of it as a version of making your drugged friend believe various random stuff. It works better if you're not a stranger and have an engaging or alarming style.

LLMs are trained to produce pleasant responses, tailored to the user, that maximize positive feedback. (A more general version of engagement.) It stands to reason they would be effective at convincing someone.

olehif 6 days ago | parent | prev | next [-]

Scott is a psychiatrist.

YeGoblynQueenne 6 days ago | parent | next [-]

Sigmund Freud was also a psychiatrist.

throwaway314155 6 days ago | parent | prev [-]

Then he's not a very good one.

https://web.archive.org/web/20210215053502/https://www.nytim...

kelnos 6 days ago | parent | next [-]

That's essentially a retaliatory hit piece the NYT printed because they were mad that Scott deleted his website in response to the NYT wanting to doxx him. Not saying there's no merit to the article, but it should be looked upon skeptically due to that bias.

mola 5 days ago | parent | next [-]

I just read this... I don't understand where the hit piece is.

Seems pretty factual.

The hysteria in the "rationalist" circles mirrors the so-called "Blue tribe" quite accurately.

ZYbCRq22HbJ2y7 6 days ago | parent | prev [-]

> NYT wanted to doxx him

NYT wanted to report on who he was. He doxxed himself years before that (as mentioned in that article). They eventually also reported on that (after Alexander revealed his name, seeing that it was going to come out anyway, I guess), which is an asshole thing to do, but not doxxing, IMO.

lmm 6 days ago | parent [-]

> NYT wanted to report on who he was.

They wanted to report specifically his birth/legal name, with no plausible public interest reason. If it wasn't "stochastic terrorism" (to use the buzzword of the day) then it sure looked a lot like it.

> He doxxed himself years before that

Few people manage to keep anything 100% secret. Realistically private/public is a spectrum not a binary, and publication in the NYT is a pretty drastic step up.

bccdee 5 days ago | parent [-]

> They wanted to report specifically his birth/legal name, with no plausible public interest reason.

Siskind is a public figure and his name was already publicly known. He wanted a special exception to NYT's normal reporting practices.

> Realistically private/public is a spectrum not a binary

IIRC his name would autocomplete as a suggested search term in the Google search bar even before the article was published. He was already far too far toward the "public" end of that spectrum to throw a tantrum the way he did.

lmm 4 days ago | parent [-]

> He wanted a special exception to NYT's normal reporting practices.

The NYT had already profiled e.g. Kendrick Lamar without mentioning his birth/legal name, so he certainly wasn't asking for something unprecedented.

bccdee 3 days ago | parent [-]

Siskind is a practicing psychiatrist, which is relevant to his profile. Using his real name makes it possible to discuss that. Putting Kendrick's surname ("Duckworth") into the profile adds nothing.

Siskind is a public figure—I don't know why so many people think he is entitled to demand that NYT only discuss him in the ways he wants to be discussed (i.e. not connecting his blog to his psychiatric practice).

lmm 2 days ago | parent [-]

> Siskind is a practicing psychiatrist, which is relevant to his profile. Using his real name makes it possible to discuss that.

The NYT of all entities should be comfortable talking about whether someone has particular qualifications or a particular job without feeling the need to publish their birth/legal name.

> Siskind is a public figure—I don't know why so many people think he is entitled to demand that NYT only discuss him in the ways he wants to be discussed (i.e. not connecting his blog to his psychiatric practice).

Again the NYT of all entities should understand that there are good reasons to hide people's private details. People get very angry about some of the things Alexander writes, there are plausible threats of violence against him, and even if there weren't, everyone agrees that names are private information that shouldn't be published without good reason. His blog is public, the fact of him being or not being a practising psychiatrist may be in the public interest to talk about, but where's the argument that that means you need to publish his name specifically?

bccdee 2 days ago | parent [-]

> there are good reasons to hide people's private details

They do, and they do grant anonymity sometimes. But it's their call, and they made the call. They're not a PR firm; they have no obligation to be kind or gentle in their coverage. If they wanted, they'd be fully within their rights to publish a noxious hitpiece on the man. They were much milder than I'd have been. Siskind's said some awful stuff.

> everyone agrees that names are private information that shouldn't be published without good reason

The NYT doesn't. They use the real identities of the people they cover by default (that's generally how news works), and consider anonymity a privilege granted under special circumstances.

> where's the argument that that means you need to publish his name specifically

Because I would not want to give my business to a man who's recorded as thinking that Black people are genetically stupid. I'm not really interested in litigating Siskind's political views—I don't think this is the place for it—but I won't gloss over them. They're pretty foul.

rendang 6 days ago | parent | prev | next [-]

What is the connection between the claim and the link?

meowface 6 days ago | parent [-]

There isn't any. (Also, on top of that, I think it's overall not a very good article.)

chermi 6 days ago | parent | prev [-]

What a disgusting article.

epiccoleman 6 days ago | parent | prev [-]

> 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.

It's interesting to see you mention this. After reading this post yesterday I wound up with some curious questions along these lines. I guess my question goes something like this:

This article seems to assert that 'mental illness' must always have some underlying representation in the brain - that is, mental illness is caused by chemical imbalances or malformation in brain structure. But is it possible for a brain to become 'disordered' in a purely mental way? i.e. that by any way we know of "inspecting" the brain, the hardware would look healthy - but the "mind inside the brain" could somehow be stuck in a "thought trap"? Your post above seems to assert this could be the case.

I think I've pretty much internalized a notion of consciousness that was purely bottom-up and materialistic. Thoughts are the product of brain state, brain state is the product of physics, which at "brain component scale" is deterministic. So it seems very spooky on its face that somehow thoughts themselves could have a bidirectional relationship with chemistry.

I spent a bunch of time reading articles and (what else) chatting with Claude back and forth about this topic, and it's really interesting - it seems there are at least some arguments out there that information (or maybe even consciousness) can have causal effects on "stuff" (matter). There's the "Integrated Information Theory" of consciousness (which seems to be, if not exactly "fringe", at least widely disputed) and there's also this interesting notion of "downward causation" (basically the idea that higher-level systems can have causal effects on lower levels - I'm not clear on whether "thought having causal effects on chemistry" fits into this model).

I've got 5 or 6 books coming my way from the local library system - it's a pretty fascinating topic, though I haven't dug deep enough to decide where I stand.

Sorry for the ramble, but this article has at least inspired some interesting rabbit-hole diving for me.

I'm curious - when you assert "Not only do your chemicals influence your thoughts, your thoughts influence your chemicals" - do you have evidence that backs that notion up? I'm not asking to cast doubt, but rather, I guess, because it sounds like maybe you've got some sources I might find interesting as I keep reading.

lukev 5 days ago | parent | next [-]

It is entirely uncontroversial that mental states affect the physical body. You've probably observed this yourself, directly, if you've ever had headaches or muscle tightness related to mental or emotional stress.

We can use MRIs to directly observe brain differences due to habitual mental activities (e.g. professional chess players, polyglots, musicians.)

It would be extremely odd if our bodies did not change as a result of mental activity. Your muscles grow differently if you exercise them, why would the nervous or hormonal systems be any different?

epiccoleman 5 days ago | parent [-]

I think my question is more a question of how than whether, if that makes sense. There is something about "thought" affecting "matter" that feels spooky if there is a bidirectional relationship.

If thought / consciousness / mind is purely downstream of physics, no spookiness. If somehow experienced states of mind can reach back and cause physical effects... that feels harder to explain. It feels like a sort of violation, somehow, of determinism.

Again though, as above, I'm basically a day into reading and thinking about this, so it might just be the case that I haven't understood the consensus yet and maybe it's not spooky at all. (I don't think this is the case though - just a quick skim through the Wikipedia page on "the hard problem of consciousness" seems to suggest a lot of closely related debate)

bccdee 5 days ago | parent [-]

You've struck at the essential problem of dualism. If thoughts are nonphysical, how can thoughts influence our physical bodies? If consciousness does not interact with the physical world, but merely arises from it, then how can we possibly discuss it, since anything we describe is causally linked to our description of it?

Descartes thought the soul was linked to the body through the pineal gland, inspiring a long tradition of mystic woo associated with what is, in fact, a fairly pedestrian endocrine gland.

Further reading, if you're interested:

https://plato.stanford.edu/entries/dualism/

https://plato.stanford.edu/entries/consciousness/

Personally, my take is that we can't really trust our own accounts of consciousness. Humans describe feeling that their senses form a cohesive sensorium that passes smoothly through time as a unique, distinct entity, but that feeling is just a property of how our brains process sensory information into thoughts. The way we're built strongly disposes us to think that "conscious experience" is a real distinct thing, even if it's not even clear what we mean by that, and even if the implications of its existence don't make sense. So the simple answer to the hard problem, IMO, is that consciousness doesn't exist (not even conceptually), and we just use the word "consciousness" to describe a particular set of feelings and intuitions that don't really tell us much about the underlying reality of the mind.

epiccoleman 5 days ago | parent | next [-]

Thank you for the links!

lukev 5 days ago | parent | prev [-]

I mean it's funny you mention Descartes, because I find the argument that consciousness is the ONLY thing you can really know exists for sure to be pretty compelling. (Descartes then significantly loses the thread, hah.)

I agree with you that consciousness is much more fragmented and nonlinear than we perceive it to be, but "I exist" seems pretty tautological to me (for values of "I" that are completely unspecified.)

bccdee 3 days ago | parent | next [-]

Since "I think therefore I am" is meant to be a foundation for reasoning, it precedes any real definitions of "I," "thinking" and "being." So I think it's really more of a set of definitions than a conclusion.

We have a noun, "thought," which we define very broadly so as not to require any other definitions, and another noun, the self, which those thoughts are assumed to belong to. I think this is presumptive; working from first principles, why must a thought have a thinker? The self is a really meaty concept and Descartes just sneaks it in there unremarked-upon.

If you take that out, all you get is "thoughts exist." And even then, we're basically pointing at thoughts and saying "whatever these are doing is existing." Like, does a fictional character "exist" in the same way a real person does? No, I think it's safe to say it's doing something different. But we point at whatever our thoughts are doing and define it as existence.

So I don't think we can learn much about the self or consciousness from Cartesian first-principles reasoning.

epiccoleman 5 days ago | parent | prev [-]

I definitely share this intuition - it almost, in some sense, feels like the only thing we can really know. It makes it rather tough for me to accept the sibling comments arguing that "actually, the answer is that consciousness is an illusion." That just seems... transparently experientially false, to me.

bccdee 3 days ago | parent [-]

Here's my issue, though: Consider that our thoughts are encoded in physical matter. Something about the arrangement of the chemicals and charges in our brain holds our thoughts as real-world objects, just as ink and paper can hold a piece of writing.

Given a piece of paper with some information written on it, do the contents of the message tell you anything about the paper itself? The message may say "this paper was made in Argentina," or "this message was written by James," but you can't necessarily trust it. You can't even know that "James" is a real person.

So just because we feel conscious—just because strong feelings of consciousness, of "me-being-here"-ness, are written into the substrate of our brains—why should that tell us anything?

Whatever the sheet of paper says, it could just as easily say the exact opposite. What conclusions can we possibly draw based on its contents?

epiccoleman a day ago | parent [-]

> So just because we feel conscious—just because strong feelings of consciousness, of "me-being-here"-ness, are written into the substrate of our brains—why should that tell us anything?

It's a fact about the universe that it feels a certain way to have a certain "brain state" - just like it's a fact about the universe that certain arrangements of ink and cellulose molecules comprise a piece of paper with a message written on it.

That fits perfectly well into a fully materialistic view of the universe. Where it starts to feel spooky to me is the question of whether thoughts themselves could have some sort of causal effect on the brain. Could a person with a healthy brain be lying safely in bed and "think themselves" into something "unhealthy?" Could I have a "realization" that somehow destabilizes my mind? It seems at least plausible that this can and does happen.

Maybe the conscious experience is pure side-effect - not causal at all. But even if the ultimate "truth" of that series of events is "a series of chemical reactions occurred which caused a long term destabilization of that individual's conscious experience," it feels incomplete somehow to try to describe that event without reference to the experiential component of it.

Whether we posit spooky downward causation or stick to pure materialism, there still seems to be a series of experiential phenomena in our universe which our current scientific approach seems unable to touch. That's not to say that we never could understand consciousness in purely material terms or that we could never devise experiments that help us describe it - it just seems like a big gap in our understanding of nature to me.

anon84873628 6 days ago | parent | prev [-]

>So it seems very spooky on its face that somehow thoughts themselves could have a bidirectional relationship with chemistry.

There's no scientific reason to believe thoughts affect the chemistry at all. (Currently at least, but I'm not betting money we'll find one in the future).

When Scott Alexander talks about feedback loops like bipolar disorder and sleep, he's talking about much higher level concepts.

I don't really understand what the parent comment quote is trying to say. Can people have circular thoughts and deteriorating mental state? Sure. That's not a "feedback loop" between layers -- the chemicals are just doing their thing and the thoughts happen to be the resulting subjective experience of it.

To answer your question about the "thought trap". If "it's possible for simple normal trains of thought to latch your brain into a very undesirable state" then I'd say that means the mind/brain's self-regulation systems have failed, which would be a disorder or illness by definition.

Is it always a structural or chemical problem? Let's say thinking about a past traumatic event gives you a panic attack... We call that PTSD. You could say PTSD is expected primate behavior, or you could say it's a malfunction of the management systems. Or you could say it's not a malfunction but that the 'traumatic event' did in fact physically traumatize the brain that was forced to experience it...

AstralStorm 5 days ago | parent [-]

Sure the thoughts can influence your chemical state. Scott even provides an example. Suppose you become so engrossed in your weird idea you start to lose sleep over it... Or start to feel anxious about it.

At some point, your induced stress will cause relevant biological changes. Not necessarily directly.

PTSD indeed is likely an overload of a normal learning and stress mechanism.

anon84873628 5 days ago | parent | next [-]

The core thing is, am I really in control of my brain, at a fundamental level? If my thoughts are the result of electrochemical reactions, which everywhere else in the universe follow normal deterministic (even if stochastic) physics... how does thinking actually change their result? The unsettling conclusion is that it doesn't. The thoughts are the result of the reactions continuously in progress, and any sensation that we are actively making decisions and guiding the process is simply an illusion created by that very same process. I.e. there is no free will.

Under that view, the bipolar feedback loop example disappears. The engrossing or psychotic thoughts are not driving the chemistry, they are the chemistry. The whole thing is just a more macro view where you see certain oscillations play out. If the system ultimately damps itself and that "feels like" self control, it was actually a property built into the system from the start.

snapcaster a day ago | parent | prev [-]

It's chemicals all the way down