nilkn 2 hours ago

This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation.

Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.

godelski 9 minutes ago | parent | next [-]

  > To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature
So because Peter said the next version will be safe, it'll be safe? I prefer to judge people by their actions rather than their words. The fact that OpenClaw is not just unsafe but, as you put it, infamously so, only raises the question: why wasn't it built safely the first time?

As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't focus much on safety research and theory. Yes, they fund these things, but that funding pales in comparison to the rest of their spending. I'm sorry, but saying something might kill all humans, and potentially all life, is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. For example, they released Sora allowing people to generate videos in the likeness of others, which helped it go viral, and only then implemented some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first time nor will it be the last.

nosuchthing 2 hours ago | parent | prev [-]

  OpenAI has deleted the word 'safely' from its mission (November 2025)
https://theconversation.com/openai-has-deleted-the-word-safe...

Thread: https://news.ycombinator.com/item?id=47008560

Other words removed:

   responsibly
   unconstrained
   safe
   positive

sheept an hour ago | parent | next [-]

The headline implies they selectively removed the word "safely," but that doesn't seem to be the case.

From the thread you linked, there's a diff of the mission statements over the years[0], which shows that "safely" (added only two years prior) was removed because they completely rewrote the statement into a single, terse sentence.

Stronger evidence may exist that OpenAI is deemphasizing safety, but this isn't it.

[0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...

Der_Einzige 37 minutes ago | parent [-]

Scam Altman wants to add NSFW outputs as soon as possible. The platonic representation hypothesis means that training on porn = bad code, and vice versa. They'll go down the path of Grok and thus be DOA for enterprises in this pursuit.

notJim 6 minutes ago | parent | prev [-]

They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now?

/s