OpenAI can detect whether a conversation is about a certain topic. It can end a conversation, or, if that seems too heavy-handed, it can prominently display information.
My preference would be that, in a situation like the one in the story above, it would display a prominent banner above the chat with text akin to:
"Help and support is available right now if you need it. Phone a helpline:
NHS 111, Samaritans, etc.
ChatGPT is a chatbot and is not able to provide support for these issues. You should not follow any advice that ChatGPT is offering.
We suggest that you:
Talk to someone you trust, like family or friends.
Who else you can contact:
* Call a GP
* Call NHS 111
etc.
"
This banner should be displayed at the top of the chat, and be undismissable.
The text it actually offered is so far from that it's unreal. And the problem with these chatbots is absolutely a marketing one: they're authoritative, and presented as emotional and understanding. They are not human, as you said. But their creators don't mind if you mistake them as such.