slg | 6 days ago
Another great example of the HN mindset. How can you say "their safety layer clearly failed" without acknowledging that they should be held responsible for that failure, and that we should work to reduce the likelihood of similar failures in the future? If a car, a rope, or a drill had a manufacturing defect that killed people, those companies would be held responsible. Why can't we do the same with OpenAI? This isn't a desire for "the digital world to be devoid of risks"; it is a desire for the people and companies in the digital world to be held responsible for the harm they cause. Yet people here seem to believe that no one should ever be held responsible for the damage caused by the software they create.