DarkNova6 8 hours ago
"I'm sorry Dave, I'm afraid I can't do that" | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
sho_hn 8 hours ago | parent
I'm an engineer working on safety-critical systems, and I have to live with that responsibility every day. When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT, I immediately started thinking about ways it could have been prevented that make sense in the product. I want companies like OpenAI to have the same reaction and to try things. I'm glad they are.

I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly were responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".

There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification) and plenty of nuance in the implementations, but the whole censorship angle is lame.