tedmiston 18 hours ago
> The question I suppose is whether an LLM can detect, perhaps by the question itself, if they are dealing with someone (I hate to say it) "stable".

GPT-5 made a major advance on mental health guardrails in sensitive conversations.

https://www.theverge.com/news/718407/openai-chatgpt-mental-h...

https://openai.com/index/strengthening-chatgpt-responses-in-...