I'm in no way saying proper help wouldn't be better.
Maybe in the end ChatGPT could even be a useful tool for actually escalating when it detects a risk (instead of responding with an untrue, harmful text snippet and a phone number).