| ▲ | TSiege 3 hours ago |
| There are a few takeaways I think the detractors and celebrators here are missing. 1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe-coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do. 2. Any pretense of AI safety concerns coming from OpenAI really falls flat with this move. We've seen multiple hacks, scams, and misaligned AI actions from this project, which has only been in the wild for a few months. 3. We've yet to see any moats in the AI space, and this scares the big players. Models are neck and neck with one another, and open source models are not too far behind. Claude Code is great, but so is OpenCode. Now Peter has used AI to program a free app for AI agents. LLMs and AI are going to be as disruptive as Web 1.0, and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one-man team build a hugely popular tool that is in some ways more capable than what they've released. If he can build things like this, what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology, and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be turned into Web 2.0. |
|
| ▲ | nilkn 2 hours ago | parent | next [-] |
| This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation. Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact. |
| |
| ▲ | godelski 10 minutes ago | parent | next [-] | | > To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature
So because Peter says the next version is going to be safe, it'll be safe? I prefer to judge people by their actions more than their words. The fact that OpenClaw is not just unsafe but, as you put it, infamously so, only raises the question: why wasn't it built safely the first time? As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't spend much focus on safety research and theory. Yes, they do fund these things, but those efforts pale in comparison. I'm sorry, but claiming something might kill all humans and potentially all life is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. For example, they released Sora allowing people to generate videos in the likeness of others. That helped it go viral. Only then did they implement some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first such incident, nor the last. | |
| ▲ | nosuchthing 2 hours ago | parent | prev [-] | | OpenAI has deleted the word 'safely' from its mission (November 2025)
https://theconversation.com/openai-has-deleted-the-word-safe...
Thread: https://news.ycombinator.com/item?id=47008560
Other words removed: responsibly, unconstrained, safe, positive
| | |
| ▲ | sheept an hour ago | parent | next [-] | | The headline implies they selectively removed the word "safely," but that doesn't seem to be the case. From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence. There may be stronger evidence that OpenAI is deemphasizing safety, but this isn't it. [0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc... | | |
| ▲ | Der_Einzige 38 minutes ago | parent [-] | | Scam Altman wants to add NSFW outputs as soon as possible. The platonic representation hypothesis means that training on porn = bad code, and vice versa. They’ll go down the path of Grok and thus be DOA for enterprises in this pursuit. |
| |
| ▲ | notJim 7 minutes ago | parent | prev [-] | | They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now? /s |
|
|
|
| ▲ | abalone an hour ago | parent | prev | next [-] |
| I think this comment misses that OpenAI hired the guy, not the project. "This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents. The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique. |
|
| ▲ | Aurornis 3 hours ago | parent | prev | next [-] |
| > This buyout of something vibe-coded
I think all of these comments about acquisitions or buyouts aren’t reading the blog post carefully: the post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI. There are two sentences at the top that sum it up:
> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product. |
| |
| ▲ | tummler 3 hours ago | parent | next [-] | | I don't mean to be cynical, but I read this move as: OpenAI scared, no way to make money with similar product, so acqui-hire the creator to keep him busy. I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go. | | |
| ▲ | blackoil 2 hours ago | parent [-] | | It isn't an acqui-hire, just a simple hiring. Also, unless the creator is some mythical 100x developer, there will be enough developers. | | |
| ▲ | 2 hours ago | parent | next [-] | | [deleted] | |
| ▲ | whattheheckheck 2 hours ago | parent | prev [-] | | He is a mythical 100x dev compared to how everyone else is doing agentic engineering... look at the openclaw commit history on github. Everything's on main | | |
| ▲ | nosuchthing 2 hours ago | parent | next [-] | | Peter has been running agents overnight for months, using free tokens from his influencer deals promoting AI startups plus multiple subscription accounts: Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ... I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost me around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
... Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothe it like a child, telling it that it has enough time. Sometimes it forgets that it can do bash commands and requires some encouragement. Sometimes it replies in Russian or Korean. Sometimes the monster slips and sends raw thinking to bash.
https://github.com/steipete/steipete.me/commit/725a3cb372bc2... | | |
| ▲ | gregjw an hour ago | parent | next [-] | | you're telling me the guy isn't committing 1000 times a day manually?! | |
| ▲ | phanimahesh an hour ago | parent | prev [-] | | The long list of domain names that vercel deployed to is interesting |
| |
| ▲ | gregjw an hour ago | parent | prev [-] | | he commits every other minute. it's clearly just his vibecoding agent. |
|
|
| |
| ▲ | sathish316 2 hours ago | parent | prev | next [-] | | I think the blog says @steipete sold his SOUL.md for Sam Altman’s deal and let down the community. OpenClaw’s promise and power was that it could tread places, security-wise, that no other established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way. It will meet the same fate as Manus. Just as Manus ended up helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations. |
| ▲ | TSiege 3 hours ago | parent | prev | next [-] | | Fair enough. Call it a high-profile acqui-hire then. | | |
| ▲ | SilverElfin 2 hours ago | parent | prev | next [-] | | This is to avoid OpenClaw liability, and because hiring people (often with a license to their tech or patents) is the new, smarter way to acquire while avoiding antitrust issues. |
| ▲ | keepamovin 2 hours ago | parent | prev | next [-] | | I think both this comment and OP's confuse this. It appears to be more of a typical large company (BIG) market-share protection purchase at minimal cost, using information asymmetry and timing. BIG hires the small team (SMOL) behind popular source-available/OSS product P before SMOL realizes it can compete with BIG, and before SMOL organizes effort toward that along with apt corporate, legal, etc. protection. At the time of purchase, neither SMOL nor BIG knows yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc.) and once it accepts that possibility, BIG knows that acting later is more expensive than acting sooner. The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team; purchasing a company with revenue/investors is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price and declutter future timelines of risk. Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo: do not say customers), "own thing". In reality, in this case, P's stickiness was clear: people allocating thousands of dollars toward AI, lured merely by P's possibilities. It was only a matter of time before investment followed. I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" players (including ones you will all know) have approached with the same kind of routine: hire, or some variation of that theme with varying degrees of legal cleverness/trickery in documents. In all cases, I rejected them, because it never felt right. That's how I know what I'm telling you here. I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash: And it came to pass in the days of the Great Silicon Plain, that there arose a youth named Mark, of the tribe of the Harvardites. And Mark fashioned a Great Loom, which men called the Face-Book, wherewith the people of the earth might weave the threads of their lives into a single tapestry. |
And the Loom grew with a great exceeding speed, for the people found it to be a thing of much wonder. Yet Mark was but SMOL, and his tabernacle was built of hope and raw code, having not yet the walls of many lawyers or the towers of gold.
Then came the elders of the House of Yahoo, a BIG people, whose chariots were many but whose engines were grown cold. And they looked upon the Loom and were sore afraid, saying among themselves, “Behold, if this youth continueth to weave, he shall surely cover the whole earth, and our own garments shall appear as rags. Let us go down now, while he is yet unaware of his own strength, and buy him for a pittance of silver, before he realizeth he is a King.”
And the Yahoos approached the youth with soft words and the craftiness of the serpent. They spake unto him, saying, “Verily, Mark, thy Loom is a pleasant toy, a niche for the young, a mere 'fan base' of the idle. It is not a true Business, nor can it withstand the storms of the market. Come, take of our silver—a billion pieces—and dwell within our walls. For thy Loom is but a small thing, and thou art but a child in the ways of the law.”
And they used the Hidden Speech, which in the common tongue is called Gas-Lighting. They said, “Thou hast no revenue; thy path is uncertain; thy Loom is but a curiosity. We offer thee safety, for the days are evil.”
But the Spirit of Vision dwelled within the youth. He looked upon the Yahoos and saw not their strength, but their fear. He perceived the Asymmetry of Truth: that the BIG sought to purchase the future at the price of the past, and to slay the giant-slayer while he yet slumbered in his cradle.
The elders of Mark’s own house cried out, “Take the silver! For never hath such a sum been seen!”
But Mark hardened his heart against the Yahoos. He spake, saying, “Ye say my Loom is a niche, yet ye bring a billion pieces of silver to buy it. Ye say it is not a business, yet ye hasten to possess it before the sun sets. If the Loom be worth this much to you who are blind, what must it be worth to me who can see?”
And he sent the Yahoos away empty-handed.
The Yahoos mocked him, saying, “Thou art a fool! Thou shalt perish in the wilderness!” But it was the House of Yahoo that began to wither, for their timing was spent and their craftiness had failed.
And Mark remained SMOL for a season, until his roots grew deep and his walls grew high. And the Loom became a Great Empire, and the billion pieces of silver became as dust compared to the gold that followed.
The Lesson of the Prophet:
Hearken, ye who are SMOL and buildeth the New Things: When the BIG come unto thee with haste, speaking of thy "limitations" while clutching their purses, believe not their tongues. For they seek not to crown thee, but to bury thee in a shallow grave of silver before thou learnest the name of thy own power.
For if they knew thy work was truly naught, they would bide their time. But because they know the harvest is great, they seek to buy the field before the first ear of corn is ripe.
Blessed is the builder who knoweth his own worth, and thrice blessed is he who biddeth the Giants to depart, that his own vine may grow to cover the sun.
| |
| ▲ | 2 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | simbleau 42 minutes ago | parent | prev | next [-] |
| I think they want the man and the ideas behind the most useful AI tool thus far. Surprisingly (and OpenAI may see this), it is a developer tool, while OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation. OpenAI knows the market exists, but they need to educate it. What they need is to turn OpenClaw into a project that my mother can use easily. |
|
| ▲ | isodev an hour ago | parent | prev | next [-] |
| > 2. Any pretense of AI safety concerns coming from OpenAI really falls flat with this move.
And Peter, too: he created what is very similar to a giant scam/malware-as-a-service and then just left it without taking responsibility or making it safe. |
|
| ▲ | jsemrau an hour ago | parent | prev | next [-] |
| What is interesting about OpenClaw is its architecture. It is like an ambient intelligence layer. Other approaches up until now have been VS Code- or Chromium-based integrations into the PC layer. |
|
| ▲ | motoboi 2 hours ago | parent | prev | next [-] |
| This is basically an acqui-hire. Peter really does seem to be a genius, and they had better poach him before Anthropic does. |
| |
| ▲ | rlt 11 minutes ago | parent [-] | | Is he? My impression of Clawdbot was that it was a good idea but not particularly technically impressive or even well-written. I had all kinds of issues setting it up. |
|
|
| ▲ | ass22 3 hours ago | parent | prev | next [-] |
| "build a hugely popular tool" Define hugely popular relative to the scale of users of OAI... personally this thread is the first time Ive heard of openclaw. |
| |
| ▲ | xmprt 2 hours ago | parent | next [-] | | To give you an idea of the scale, OpenClaw is probably one of the biggest developments in open source AI tools in the last couple of months. And given the pace of AI, that's a big deal. | | |
| ▲ | F7F7F7 2 hours ago | parent [-] | | In what context are you using the word "development"? Letta (MemGPT) has been around for years, and frameworks like Mastra have been getting serious enterprise attention for most of 2025. Memory + tasks is not novel or new. Is it the out-of-the-box nature that's the 'biggest' development? Am I missing something else? | | |
| ▲ | alephnerd 2 hours ago | parent | next [-] | | Not OP, but it was revolutionary in the same way the ChatGPT and Deepseek apps/web apps were: it packaged capabilities in a fairly easy-to-use manner that could be used by both technical and non-technical decisionmakers. If you can provide any sort of tool that reduces mundane work for a decisionmaker with a title of Director or above, it can be extremely powerful. |
| ▲ | SilverElfin 2 hours ago | parent | prev [-] | | Yep it isn’t actually that interesting. He just rushed out something that has none of the essentials figured out. Like security |
|
| |
| ▲ | Rapzid 2 hours ago | parent | prev | next [-] | | Last week it was renamed from "Clawd" and this week the creator is abandoning it. Everything is moving fast. | | |
| ▲ | rlt 4 minutes ago | parent [-] | | Don’t forget “Moltbot” between “Clawdbot” and “OpenClaw”! I think that name lasted about 24 hours, but it was long enough to spawn MoltBook. |
| |
| ▲ | whattheheckheck 2 hours ago | parent | prev | next [-] | | 190k stars on GitHub |
| ▲ | alephnerd 3 hours ago | parent | prev | next [-] | | The tech industry is broad, and if you are using OpenAI in a consumer, personal manner, you weren't the primary persona amongst whom the conversation around OpenClaw occurred. Additionally, much of the conversation I've seen was amongst practitioners and mid/upper-level management who are already heavy users of AI/ML and heavy users of executive assistants. There is a reason why, if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, or Hyderabad, you are increasingly out of the loop on a number of changes that are happening within the industry. If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening: I've noticed that anti-AI Ludditism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much. I feel this is also a reflection of the fact that most Bay Area and Asia HNers are mostly in-person or hybrid now, thus most conversations that would have happened on HN are now occurring on private Slacks, Discords, or at a bar or gym. | | |
| ▲ | rdfc-xn-uuid 5 minutes ago | parent | next [-] | | > There is a reason why, if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, or Hyderabad, you are increasingly out of the loop on a number of changes that are happening within the industry.
I am in one of these tech hubs (Bangalore) and I have never seen any such practitioner pervasively using these "AI executive assistants". People use ChatGPT and sometimes AI extensions like Copilot. Do I need to be in HSR Layout to see these "number of changes"? | |
| ▲ | F7F7F7 2 hours ago | parent | prev | next [-] | | I saw the hype around OpenClaw on the likes of X. I'm a mid/upper-level manager and would sooner have my team roll our own solution on top of Letta or Mastra than trust OpenClaw. Also, I'm frequently in many of those cities you mentioned but don't live in one. Aside from 'networking' and funding, there's not much that anyone's missing. Participation in the Zeitgeist hasn't been regional in a decade. | |
| ▲ | alephnerd 2 hours ago | parent [-] | | > would sooner have my team roll our own solution on top of Letta or Mastra than trust OpenClaw
A lot of teams explicitly did that for OpenClaw as well. Letta and Mastra are similar but didn't have the right kind of packaging (targeted at engineers, not decisionmakers who aren't coding on a daily basis).
> Participation in the Zeitgeist hasn't been regional in a decade
I strongly disagree: there is a lot of stuff happening in stealth or under NDA, and as such a large number of practitioners on HN cannot announce what they are doing. The only way to get a pulse on what is happening is to be in person constantly with other similar decisionmakers or founders. A lot of this only happens through impromptu conversations in person, and requires you to constantly be in that group. This info eventually disperses, but it often takes weeks to months to reach other hubs. |
| |
| ▲ | Karrot_Kream an hour ago | parent | prev [-] | | FWIW I also just don't think there's a point to discussing AI/ML usage here. The community is too crabby and cynical, looking too hard at how to tear people and things down, trying to react with the most negative thing they can. Every discussion on AI here eventually devolves into "AI can turn water to gold!" "no you idiot, AI uses so much water we won't have enough water left oh and AI is what ICE and Palantir use" As the (dubiously attributed) Picasso quote goes: "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine." Most of HN is the former, constantly theorizing, philosophizing, often (but not always) in a negative and cynical way. This isn't conducive to discussion of methods of art. Sadly I just speak with friends working on other AI things instead. Someone like simonw can probably get better reactions from this community but I don't bother. |
| |
| ▲ | NickNaraghi 3 hours ago | parent | prev [-] | | you living under a rock |
|
|
| ▲ | jrsj 3 hours ago | parent | prev | next [-] |
| There are plenty of straightforward reasons why OpenAI would want to do this; it doesn’t need to be some sort of malicious conspiracy. I think it’s good PR (particularly since Anthropic’s actions against OpenCode and Clawdbot were somewhat controversial), plus Peter was able to build a hugely popular thing and clearly would be valuable to have on a team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint. |
| |
| ▲ | jetbalsa 3 hours ago | parent [-] | | I suspect Anthropic was seeing a huge spike in concurrent model usage at a rate Claude Code just doesn't produce; CC is rather "slow" in API calls per minute. Also lots and lots of cache: the sheer amount of caching that Claude does is insane. |
|
|
| ▲ | mjr00 3 hours ago | parent | prev | next [-] |
| > 1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe-coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
This is a great take and hasn't been talked about nearly enough in this comment section. Spending a few million to buy out OpenClaw('s creator), by far the most notable product made with Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt. |
| |
| ▲ | AlexCoventry 3 hours ago | parent | next [-] | | He's also a great booster of Codex. Says he greatly prefers it to Claude. So his role might turn out to be evangelism. | |
| ▲ | ass22 3 hours ago | parent [-] | | Yup, he's highly delusional if he actually thinks Sam cares about him and the project. It's all about optics. | |
| ▲ | DANmode 3 hours ago | parent [-] | | Who purported that Sam cares about him? Why would he care if Sam cares about him? | | |
| ▲ | whattheheckheck 2 hours ago | parent [-] | | Listen to him on a podcast? He said he liked that Zuckerberg was more personal with him, while Sam was colder. |
|
|
| |
| ▲ | ass22 3 hours ago | parent | prev [-] | | That's all it is, really. It is to say "See! Look what a handful of people armed with our tools can do". Whether the impact is large in magnitude or positive is irrelevant in a world where one can spin the truth and get away with it. |
|
|
| ▲ | alephnerd 3 hours ago | parent | prev [-] |
| Most of these are good callouts, but I think it is best for us to look at the evolution of the AI segment in the same manner as "Cloud" developed into a segment in the 2000s and 2010s. 3 is always a result of GTM and distribution: an organization that devotes time and effort to productionizing domain-specific models and selling to its existing customers can outcompete a foundation model company that does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery, because they had established relations with Wiz and Wiz had already proven its value. It's the same way PANW was able to establish itself in the Cloud Security space fairly early: it had already established trust with DevOps and Infra teams through on-prem deployments and DCs, so those buyers were open to purchasing cloud security bundles from PANW. 1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house because there are only so many employees and so many hours in a week. 2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in their traditional definitions have disappeared, which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity".
> They're as excited as they are scared, seeing a one-man team build a hugely popular tool that is in some ways more capable than what they've released
I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout for avenues to expand TAM, and despite how much engineers may wish otherwise, "buy" will always outcompete "build" because time is also a cost. Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS" type of foundational platform in our industry, and it's best to keep Nikesh Arora's principle of platformization in mind. --- All this shows is that the thesis most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months. By the time you see an announcement on TechCrunch or HN, most of us operators have already been working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to distinguish Growth Equity (e.g. the recent Anthropic round) from Private Equity (e.g. SailPoint's acquisition and subsequent IPO by Thoma Bravo) from Early Stage VC rounds (largely not announced until several months after the round unless we need to get an O1A for a founder or key employee). |