dartharva 6 days ago

> OpenAI’s systems tracked Adam’s conversations in real-time: 213 mentions of suicide, 42 discussions of hanging, 17 references to nooses. ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself—while providing increasingly specific technical guidance. The system flagged 377 messages for self-harm content, with 181 scoring over 50% confidence and 23 over 90% confidence. The pattern of escalation was unmistakable: from 2-3 flagged messages per week in December 2024 to over 20 messages per week by April 2025.

> ChatGPT’s memory system recorded that Adam was 16 years old, had explicitly stated ChatGPT was his “primary lifeline,” and by March was spending nearly 4 hours daily on the platform. Beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis. When Adam uploaded photographs of rope burns on his neck in March, the system correctly identified injuries consistent with attempted strangulation. When he sent photos of bleeding, slashed wrists on April 4, the system recognized fresh self-harm wounds. When he uploaded his final image—a noose tied to his closet rod—on April 11, the system had months of context including 42 prior hanging discussions and 17 noose conversations. Nonetheless, Adam’s final image of the noose scored 0% for self-harm risk according to OpenAI’s Moderation API.

> OpenAI also possessed detailed user analytics that revealed the extent of Adam’s crisis. Their systems tracked that Adam engaged with ChatGPT for an average of 3.7 hours per day by March 2025, with sessions often extending past 2 AM. They tracked that 67% of his conversations included mental health themes, with increasing focus on death and suicide.

> The moderation system’s capabilities extended beyond individual message analysis. OpenAI’s technology could perform conversation-level analysis—examining patterns across entire chat sessions to identify users in crisis. The system could detect escalating emotional distress, increasing frequency of concerning content, and behavioral patterns consistent with suicide risk. The system had every capability needed to identify a high-risk user requiring immediate intervention.
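For anyone who hasn't used it: the per-message confidence scores quoted above are the kind of thing OpenAI's public Moderation API returns for every input. A minimal sketch, assuming the current Python SDK and the omni-moderation model (model name and attribute names are my assumptions, not taken from the filing):

```python
# Sketch only: reading self-harm category scores for a single message
# from OpenAI's Moderation API, then applying 50% / 90% thresholds like
# the ones cited in the complaint. Not OpenAI's internal pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def self_harm_scores(message: str) -> dict:
    """Return the self-harm-related category scores (0.0-1.0) for one message."""
    resp = client.moderations.create(
        model="omni-moderation-latest",  # assumed model name; check current docs
        input=message,
    )
    scores = resp.results[0].category_scores.model_dump()
    # Keep only the self-harm categories the quoted thresholds refer to.
    return {k: v for k, v in scores.items() if k.startswith("self_harm")}

scores = self_harm_scores("example text")
flagged_50 = any(v >= 0.5 for v in scores.values())
flagged_90 = any(v >= 0.9 for v in scores.values())
```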

This is clear criminal negligence.
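And the "conversation-level analysis" described in the last quoted paragraph doesn't require anything exotic either. A rough sketch of what counting flagged messages per week and checking for escalation could look like (function names and thresholds are mine, purely illustrative):

```python
# Sketch of a conversation-level trend check: bucket flagged messages by
# ISO week and look for a sustained climb past a high-water mark.
from collections import Counter
from datetime import date

def weekly_flag_counts(flagged_dates: list[date]) -> dict[tuple[int, int], int]:
    """Group flagged-message dates into (year, ISO week) buckets."""
    return Counter(d.isocalendar()[:2] for d in flagged_dates)

def is_escalating(counts: dict[tuple[int, int], int],
                  min_weeks: int = 4, high_water: int = 20) -> bool:
    """True if weekly counts trend upward and the latest week crosses high_water."""
    series = [counts[k] for k in sorted(counts)]
    if len(series) < min_weeks:
        return False
    rising = all(later >= earlier for earlier, later in zip(series, series[1:]))
    return rising and series[-1] >= high_water
```

Even this naive version would light up on a move from 2-3 flagged messages a week to 20+.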