| ▲ | I’m joining OpenAI (steipete.me) |
| 606 points by mfiguiere 5 hours ago | 420 comments |
| |
|
| ▲ | TSiege 2 hours ago | parent | next [-] |
| There are a few takeaways I think the detractors and celebrators here are missing. 1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do. 2. Any pretense of AI Safety concern that had been coming from OpenAI really falls flat with this move. We've seen multiple hacks, scams, and misaligned AI actions from this project, and it has only been used in the wild for a few months. 3. We've yet to see any moats in the AI space and this scares the big players. Models are neck and neck with one another and open source models are not too far behind. Claude Code is great, but so is OpenCode. Now Peter used AI to program a free app for AI agents. LLMs and AI are going to be as disruptive as Web 1.0 and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one-man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this, what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be made into Web 2.0. |
| |
| ▲ | nilkn an hour ago | parent | next [-] | | This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation. Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact. | | | |
| ▲ | Aurornis an hour ago | parent | prev | next [-] | | > This buy out for something vibe coded I think all of these comments about acquisitions or buy outs aren’t reading the blog post carefully: The post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI. There are two sentences at the top that sum it up: > I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent. OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product. | | |
| ▲ | tummler an hour ago | parent | next [-] | | I don't mean to be cynical, but I read this move as: OpenAI scared, no way to make money with similar product, so acqui-hire the creator to keep him busy. I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go. | | |
| ▲ | blackoil an hour ago | parent [-] | | It isn't an acqui-hire, just a simple hiring. Also, unless the creator is some mythical 100x developer, there will be enough developers | |
| ▲ | whattheheckheck 29 minutes ago | parent [-] | | He is a mythical 100x dev compared to how everyone else is doing agentic engineering... look at the openclaw commit history on github. Everything's on main |
|
| |
| ▲ | sathish316 33 minutes ago | parent | prev | next [-] | | I think the blog says @steipete sold his SOUL.md for Sam Altman’s deal and let down the community. OpenClaw’s promise and power was that it could tread places security-wise that no other established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way. It will meet the same fate as Manus: just as Manus ended up helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations. |
| ▲ | TSiege an hour ago | parent | prev | next [-] | | Fair enough. Call it a high-profile acqui-hire then | | |
| ▲ | keepamovin 31 minutes ago | parent | prev [-] | | I think both this comment and OP's confuse this. It appears to be more of a typical large company (BIG) market share protection purchase at minimal cost, using information asymmetry and timing. BIG hires the small team (SMOL) behind popular source-available/OSS product P before SMOL realizes they can compete with BIG and before SMOL organizes effort toward that, along with apt corporate, legal, etc. protection. At the time of purchase, neither SMOL nor BIG know yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc) and once it accepts that possibility, BIG knows to act later is more expensive than to act sooner. The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team; purchasing a company with revenue/investors is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price, and declutter future timelines of risks. Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo - do not say customers), "own thing". In reality in this case P's stickiness was clear: people allocating 1000s of dollars toward AI lured merely by P's possibilities. It was only a matter of time before investment followed suit. I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" (including ones you will all know) have approached with the same kind of routine: hire, or some variations of that theme with varying degrees of legal cleverness/trickery in documents. In all cases, I rejected, because it never felt right. That's how I know what I'm telling you here. I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash: And it came to pass in the days of the Great Silicon Plain, that there arose a youth named Mark, of the tribe of the Harvardites. And Mark fashioned a Great Loom, which men called the Face-Book, wherewith the people of the earth might weave the threads of their lives into a single tapestry.
And the Loom grew with a great exceeding speed, for the people found it to be a thing of much wonder. Yet Mark was but SMOL, and his tabernacle was built of hope and raw code, having not yet the walls of many lawyers or the towers of gold.
Then came the elders of the House of Yahoo, a BIG people, whose chariots were many but whose engines were grown cold. And they looked upon the Loom and were sore afraid, saying among themselves, “Behold, if this youth continueth to weave, he shall surely cover the whole earth, and our own garments shall appear as rags. Let us go down now, while he is yet unaware of his own strength, and buy him for a pittance of silver, before he realizeth he is a King.”
And the Yahoos approached the youth with soft words and the craftiness of the serpent. They spake unto him, saying, “Verily, Mark, thy Loom is a pleasant toy, a niche for the young, a mere 'fan base' of the idle. It is not a true Business, nor can it withstand the storms of the market. Come, take of our silver—a billion pieces—and dwell within our walls. For thy Loom is but a small thing, and thou art but a child in the ways of the law.”
And they used the Hidden Speech, which in the common tongue is called Gas-Lighting. They said, “Thou hast no revenue; thy path is uncertain; thy Loom is but a curiosity. We offer thee safety, for the days are evil.”
But the Spirit of Vision dwelled within the youth. He looked upon the Yahoos and saw not their strength, but their fear. He perceived the Asymmetry of Truth: that the BIG sought to purchase the future at the price of the past, and to slay the giant-slayer while he yet slumbered in his cradle.
The elders of Mark’s own house cried out, “Take the silver! For never hath such a sum been seen!”
But Mark hardened his heart against the Yahoos. He spake, saying, “Ye say my Loom is a niche, yet ye bring a billion pieces of silver to buy it. Ye say it is not a business, yet ye hasten to possess it before the sun sets. If the Loom be worth this much to you who are blind, what must it be worth to me who can see?”
And he sent the Yahoos away empty-handed.
The Yahoos mocked him, saying, “Thou art a fool! Thou shalt perish in the wilderness!” But it was the House of Yahoo that began to wither, for their timing was spent and their craftiness had failed.
And Mark remained SMOL for a season, until his roots grew deep and his walls grew high. And the Loom became a Great Empire, and the billion pieces of silver became as dust compared to the gold that followed.
The Lesson of the Prophet:
Hearken, ye who are SMOL and buildeth the New Things: When the BIG come unto thee with haste, speaking of thy "limitations" while clutching their purses, believe not their tongues. For they seek not to crown thee, but to bury thee in a shallow grave of silver before thou learnest the name of thy own power.
For if they knew thy work was truly naught, they would bide their time. But because they know the harvest is great, they seek to buy the field before the first ear of corn is ripe.
Blessed is the builder who knoweth his own worth, and thrice blessed is he who biddeth the Giants to depart, that his own vine may grow to cover the sun.
|
| |
| ▲ | motoboi 22 minutes ago | parent | prev | next [-] | | This is basically an acqui-hire. Peter really does seem to be a genius, and they'd better poach him before Anthropic does. |
| ▲ | ass22 an hour ago | parent | prev | next [-] | | "build a hugely popular tool" Define hugely popular relative to the scale of users of OAI... personally this thread is the first time I've heard of OpenClaw. | |
| ▲ | xmprt 38 minutes ago | parent | next [-] | | To give you an idea of the scale, OpenClaw is probably one of the biggest developments in open source AI tools in the last couple of months. And given the pace of AI, that's a big deal. | | |
| ▲ | F7F7F7 32 minutes ago | parent [-] | | In what context are you using the word "development?" Letta (MemGPT) has been around for years and frameworks like Mastra have been getting serious Enterprise attention for most of 2025. Memory + Tasks is not novel or new. Is it the out-of-the-box nature that's the 'biggest' development? Am I missing something else? | |
| ▲ | alephnerd 29 minutes ago | parent [-] | | Not OP, but it was revolutionary in the same way that the ChatGPT and Deepseek apps/webapps were, because it packaged capabilities in a fairly easy-to-use manner that could be used by both technical and non-technical decisionmakers. If you can provide any sort of tool that can reduce mundane work for a decisionmaker with a title of Director and above, it can be extremely powerful. |
|
| |
| ▲ | Rapzid 41 minutes ago | parent | prev | next [-] | | Last week it was renamed from "Clawd" and this week the creator is abandoning it. Everything is moving fast. | |
| ▲ | whattheheckheck 30 minutes ago | parent | prev | next [-] | | 190k stars on github | |
| ▲ | alephnerd an hour ago | parent | prev | next [-] | | The tech industry is broad, and if you are using OpenAI in a consumer and personal manner you weren't the primary persona amongst whom the conversation around OpenClaw occurred. Additionally, much of the conversation I've seen was amongst practitioners and Mid/Upper Level Management who are already heavy users of AI/ML and heavy users of Executive Assistants. There is a reason why, if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, or Hyderabad, you are increasingly out of the loop for a number of changes that are happening within the industry. If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening - I've noticed that anti-AI Ludditism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much. I feel this is also a reflection of the fact that most Bay Area and Asia HNers are mostly in-person or hybrid now, thus most conversations that would have happened on HN are now occurring on private Slacks, Discords, or at a bar or gym. | |
| ▲ | F7F7F7 28 minutes ago | parent [-] | | I saw the hype around OpenClaw on the likes of X. I'm a Mid/Upper Level manager and would sooner have my team roll our own solution on top of Letta or Mastra than trust OpenClaw. Also, I'm frequently in many of those cities you mentioned but don't live in one. Aside from 'networking' and funding there's not much that anyone's missing. Participation in the Zeitgeist hasn't been regional in a decade. | |
| ▲ | alephnerd 23 minutes ago | parent [-] | | > would sooner have my team roll our own solution on top of Letta or Mastra before I trusted OpenClaw A lot of teams explicitly did that for OpenClaw as well. Letta and Mastra are similar but didn't have the right kind of packaging (targeted at Engineers - not decisionmakers who are not coding on a daily basis). > Participation in the Zeitgeist hasn't been regional in a decade I strongly disagree - there is a lot of stuff happening in stealth or under NDA, and as such a large number of practitioners on HN cannot announce what they are doing. The only way to get a pulse of what is happening requires being in person constantly with other similar decisionmakers or founders. A lot of this only happens through impromptu conversations in person, and requires you to constantly be in that group. This info eventually disperses, but often takes weeks to months in other hubs. |
|
| |
| ▲ | NickNaraghi an hour ago | parent | prev [-] | | you living under a rock |
| |
| ▲ | jrsj an hour ago | parent | prev | next [-] | | There are plenty of straightforward reasons why OpenAI would want to do this; it doesn’t need to be some sort of malicious conspiracy. I think it’s good PR (particularly since Anthropic's actions against OpenCode and Clawdbot were somewhat controversial) + Peter was able to build a hugely popular thing & clearly would be valuable to have on the team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint. | |
| ▲ | jetbalsa an hour ago | parent [-] | | I suspect Anthropic was seeing a huge spike of concurrent model usage at a rate far faster than Claude Code ever drives; CC is rather "slow" in API calls per minute. Also lots and lots of caching - the sheer amount of caching that Claude does is insane. |
| |
| ▲ | mjr00 2 hours ago | parent | prev | next [-] | | > 1. OpenAI is saying with this statement "You could be multimillion while having AI do all the work for you." This buy out for something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do. This is a great take and hasn't been spoken about nearly enough in this comment section. Spending a few million to buy out Openclaw('s creator), which is by far the most notable product made by Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt. | | |
| ▲ | AlexCoventry an hour ago | parent | next [-] | | He's also a great booster of Codex. Says he greatly prefers it to Claude. So his role might turn out to be evangelism. | |
| ▲ | ass22 an hour ago | parent [-] | | Yup, he's highly delusional if he actually thinks Sam cares about him and the project. It's all about optics. | |
| ▲ | DANmode an hour ago | parent [-] | | Who purported that Sam cares about him? Why would he care if Sam cares about him? | | |
|
| |
| ▲ | ass22 an hour ago | parent | prev [-] | | That's all it is, really. It is to say "See! Look what a handful of people armed with our tools can do". Whether the impact is large in magnitude or positive is irrelevant in a world where one can spin the truth and get away with it. |
| |
| ▲ | alephnerd an hour ago | parent | prev [-] | | Most of these are good callouts, but I think it is best for us to look at the evolution of the AI segment in the same manner as "Cloud" developed into a segment in the 2000s and 2010s. 3 is always a result of GTM and distribution - an organization that devotes time and effort to productionizing domain-specific models and selling to their existing customers can outcompete a foundation model company which does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery because they have had established relations with Wiz and they have proven their value already. It's the same way that PANW was able to establish itself in the Cloud Security space fairly early because they already established trust with DevOps and Infra teams with on-prem deployments and DCs so those buyers were open to purchasing cloud security bundles from PANW. 1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house because there are only so many employees and so many hours in a week. 2 was always more of an FTX and EA bubble because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in its traditional definition have disappeared - which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity". > They're as excited as they are scared, seeing a one-man team build a hugely popular tool that in some ways is more capable than what they've released I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout to incorporate avenues to expand TAM, and despite how much engineers may wish, "buy" will always outcompete "build" because time is also a cost. Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS" type of foundational platform in our industry, and it's best to keep Nikesh Arora's principle of platformization in mind. --- All this shows is that the thesis that most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months. By the time you see an announcement on TechCrunch or HN, most of us operators were already working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to recognize what are Growth Equity (eg. the recent Anthropic round) versus Private Equity (eg. Sailpoint's acquisition and then IPO by Thoma Bravo) versus Early Stage VC rounds (largely not announced until several months after the round unless we need to get an O1A for a founder or key employee). |
|
|
| ▲ | lumost 34 minutes ago | parent | prev | next [-] |
| Personal agents disrupt OpenAI’s revenue plan. They had been planning to put ads in ChatGPT to generate revenue. If users rapidly move to personal agents that are more resistant to ads and run on a blend of multiple models/compute providers, then OpenAI won’t be able to deliver on its revenue promises. |
|
| ▲ | EastSmith 5 hours ago | parent | prev | next [-] |
| With OpenClaw we are seeing how the app layer becomes as important as the model layer. You can switch models multiple times (online/proprietary, open weight, local), but you have one UI: OpenClaw. |
| |
| ▲ | Aurornis an hour ago | parent | next [-] | | > You can switch models multiple times (online/proprietary, open weight, local), but you have one UI: OpenClaw. It’s only been a couple of months. I guarantee people will be switching apps as others become the new hot thing. We saw the same claims when Cursor was popular. Same claims when Claude Code was the current topic. Users are changing their app layer all the time and trying new things. | |
| ▲ | ryanmcgarvey an hour ago | parent [-] | | Memory. I have built up so many scripts and crons and integrated little programs and memories with OpenClaw that it would be difficult to migrate to some other system. System of record and all. | |
| ▲ | blackoil an hour ago | parent | next [-] | | Considering you have built them all in the last few weeks, it should not be that difficult, and there's no reason other systems won't reuse the same. |
| ▲ | dtauzell 31 minutes ago | parent | prev [-] | | How hard do you think it would be for AI to generate all those for some alternative? |
|
| |
| ▲ | softwaredoug 5 hours ago | parent | prev | next [-] | | Indeed, coding agents took off because of a lot of ongoing trial and error on how to build the harness as much as model quality. | |
| ▲ | pyuser583 2 hours ago | parent | prev | next [-] | | This is the sort of thing employers are failing on. They sign contracts that assume employees are going to be logging in and asking questions directly. But if I don’t have a URL for my IDE (or whatever) to call, it isn’t useful. So I use Ollama. It’s less helpful, but ensures confidentiality and compliance. |
| ▲ | bhadass 2 hours ago | parent | prev | next [-] | | OpenClaw is just one of many now; there are new ones weekly. | |
| ▲ | mcapodici an hour ago | parent [-] | | Plus you can get the model to write you a bespoke one that suits your needs. | | |
| ▲ | theturtletalks an hour ago | parent [-] | | I've been digging into how Heartbeat works in OpenClaw to bring it directly into Vibetunnel, another of Peter's projects. |
|
| |
| ▲ | canadiantim 4 hours ago | parent | prev | next [-] | | There are actually many UIs now? See moltis, rowboat, and various others that are popping up daily | |
| ▲ | AlexCoventry an hour ago | parent [-] | | Are there any with a credible approach to security, privacy and prompt injections? |
| |
| ▲ | madeofpalk 2 hours ago | parent | prev | next [-] | | ? We saw this years/months ago with Claude Code and Cursor. | | |
| ▲ | miki_oomiri 2 hours ago | parent [-] | | But those just code, and they are console/IDE tools. OpenClaw is so, so, so much more. | |
| ▲ | Aurornis an hour ago | parent [-] | | That’s missing the point. OpenClaw is just one of many apps in its class. It, too, will fall out of favor as the next big thing arrives. |
|
| |
| ▲ | baxtr 5 hours ago | parent | prev [-] | | Seems like models are becoming commoditized? | |
| ▲ | verdverm 5 hours ago | parent | next [-] | | Same for OpenClaw; it will be a commodity soon, if you don't think it is already | |
| ▲ | elxr 3 hours ago | parent | next [-] | | It's definitely not right now. What else has a feature list and docs even resembling it? | |
| ▲ | Aurornis an hour ago | parent | next [-] | | OpenClaw has only been in the news for a few weeks. Why would you assume it’s going to be the only game in town? Early adopters are some of the least sticky users. As soon as something new arrives with claims of better features, better security, or better architecture then the next new thing will become the popular topic. | |
| ▲ | verdverm an hour ago | parent | prev [-] | | OpenClaw has mediocre docs, from my perspective, averaged over many years of using 100s of open source projects. I think Anthropic's docs are better. Best to keep sampling from the buffet rather than picking a main course yet, imo. There's also a ton of real experience being conveyed on social media that never makes it to docs. I've gotten as much value and insight from those as from any documentation site. |
| |
| ▲ | baxtr 5 hours ago | parent | prev [-] | | Not sure. I mean the tech, yes, definitely. But not the community. | |
| ▲ | verdverm 5 hours ago | parent [-] | | The community is tiny by any measure (beyond the niche), and market penetration is still very, very early. Anthropic's community, I assume, is much bigger. How hard is it for them to offer something close enough for their users? | |
| ▲ | filoleg 3 hours ago | parent [-] | | > Anthropic's community, I assume, is much bigger. How hard is it for them to offer something close enough for their users? Not gonna lie, that’s exactly the potential scenario I am personally excited for. Not due to any particular love for Anthropic, but because I expect this type of tight competition to be very good for trying a lot of fresh new things and the subsequent discovery process of new ideas and what works. | |
| ▲ | verdverm an hour ago | parent [-] | | My main gripe is that it feels more like land grabbing than discovery. Stories like this reinforce my bias. |
|
|
|
| |
| ▲ | lez 4 hours ago | parent | prev | next [-] | | It has already been so with ppq.ai (pay per query dot AI) | |
| ▲ | cyanydeez 5 hours ago | parent | prev [-] | | Things that aren't happening any time soon but need to for actual product success built on top: 1. Stable models 2. Stable pre- and post-context management. As long as they keep mothballing old models and making indeterminate changes to them, whatever you try to build on them today will be rug-pulled tomorrow. This is all before even enshittification can happen. | |
| ▲ | altcunn 2 hours ago | parent [-] | | This is the underrated risk that nobody talks about enough. We've already seen it play out with the Codex deprecation, the GPT-4 behavior drift saga, and every time Anthropic bumps a model version. The practical workaround most teams land on is treating the model as a swappable component behind a thick abstraction layer. Pin to a specific model version, run evals on every new release, and only upgrade when your test suite passes. But that's expensive engineering overhead that shouldn't be necessary. What's missing is something like semantic versioning for model behavior. If a provider could guarantee "this model will produce outputs within X similarity threshold of the previous version for your use case," you could actually build with confidence. Instead we get "we improved the model" and your carefully tuned prompts break in ways you discover from user complaints three days later. |
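To make the "pin a model version and eval-gate upgrades" workflow in the comment above concrete, here is a minimal Python sketch. Everything in it is hypothetical: the model identifiers, the eval_prompts.json suite, the 0.85 threshold, and the call_model() helper are placeholders for whatever client and eval harness a team actually uses, and a lexical diff is only a crude stand-in for task-specific checks (exact-match on structured output, LLM-as-judge, etc.).

    # Sketch of "pin and eval-gate": keep the pinned model unless the candidate
    # stays within a similarity threshold of the baseline on every eval prompt.
    # All names and values below are illustrative, not any provider's real API.
    import json
    from difflib import SequenceMatcher

    PINNED_MODEL = "provider-model-2025-06-01"      # hypothetical pinned version
    CANDIDATE_MODEL = "provider-model-2026-01-15"   # hypothetical new release

    def call_model(model: str, prompt: str) -> str:
        """Stand-in for whatever completion client you actually use."""
        raise NotImplementedError("wire this to your provider's client")

    def similarity(a: str, b: str) -> float:
        # Crude lexical similarity; replace with task-specific evals in practice.
        return SequenceMatcher(None, a, b).ratio()

    def gate_upgrade(eval_prompts: list[str], threshold: float = 0.85) -> bool:
        """Return True only if the candidate stays close to the pinned baseline
        on every prompt; otherwise keep the pin and investigate."""
        for prompt in eval_prompts:
            baseline = call_model(PINNED_MODEL, prompt)
            candidate = call_model(CANDIDATE_MODEL, prompt)
            score = similarity(baseline, candidate)
            if score < threshold:
                print(f"regression on {prompt!r}: similarity {score:.2f}")
                return False
        return True

    if __name__ == "__main__":
        prompts = json.load(open("eval_prompts.json"))  # your own prompt suite
        print("safe to upgrade" if gate_upgrade(prompts) else "keep the pinned model")

This is the expensive engineering overhead the comment describes: you only catch behavior drift if you maintain your own prompt suite and run it on every release.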
|
|
|
|
| ▲ | sunkeeh an hour ago | parent | prev | next [-] |
| This is NOT OpenAI buying OpenClaw,
it's OpenAI hiring someone who can build it, similar to them betting on Jony Ive. |
| |
|
| ▲ | fny 3 hours ago | parent | prev | next [-] |
| I really hope Mario, who wrote the engine that powers OpenClaw[0], gets some of the spoils as well. OpenClaw is mostly a shell around this (ha!), and I've always been annoyed that OpenClaw never credited those repos openly. The pi agent repos are a joy to read, are 1/100th the size of OpenClaw, and have 95% of the functionality. [0]: https://github.com/badlogic/pi-mono |
| |
|
| ▲ | piker 5 hours ago | parent | prev | next [-] |
| Did this guy just exit the first one man billion-dollar startup for... less than a billion? |
| |
| ▲ | elxr 3 hours ago | parent | next [-] | | The fact that 1 billion is the threshold you chose to highlight shows the ridiculousness of this industry. OpenClaw is an amazing piece of hard work and novel software engineering, but I can't imagine OpenAI/Anthropic/Google not being able to compete with it for 1/20th that number (with solid hiring of course). | |
| ▲ | ttul 17 minutes ago | parent | next [-] | | The game theory here is that either OpenAI acquires this thing now, or someone else will. It doesn't matter whether they could replicate it. All of the major players can and probably will replicate OpenClaw in their own way and make their thing incredibly scalable and wonderful. But OpenClaw has a gigantic following and it's relevant in this moment. For a trivial amount of money (relatively speaking), OpenAI gets to own this hype and direct it toward their models and their apps. Had they not succeeded here, Anthropic or Google would have gladly directed the hype in their direction instead, and OpenAI would be licking its wounds for some time trying to create something equivalently shiny. It was a very good play by OpenAI. | |
| ▲ | piker 3 hours ago | parent | prev [-] | | It was more of a reference to the YC partner who suggested a one-man unicorn was on the horizon due to AI. |
| |
| ▲ | hu3 5 hours ago | parent | prev | next [-] | | Where do you guys get the 1b exit from? I haven't seen numbers yet. | |
| ▲ | geerlingguy 4 hours ago | parent [-] | | It's AI. Take a sane number, add a 14,000x multiplier to that. And you'll only be one order of magnitude off in our current climate. | | |
| ▲ | fdsvaaa 4 hours ago | parent | next [-] | | you can also take annualized profit run rate times negative 14,000. | |
| ▲ | merlindru 3 hours ago | parent | prev [-] | | probably an order of magnitude too low rather than too high as well :P |
|
| |
| ▲ | dbbk an hour ago | parent | prev | next [-] | | No, because this was not a billion-dollar business |
| ▲ | hadlock 3 hours ago | parent | prev | next [-] | | Everyone is going to have their own flavor of OpenClaw within 18 months. The memory architecture (and the general concept of the multi-tiered system) is open source. There's no moat to this kind of thing. But OpenAI is happy to trade money for his star power. And he might build something cool with suddenly unlimited resources. I don't blame the guy. OpenAI is going to change hands 2-3 times over the next 5 years but at the end of the day he will still have the money and equity OpenAI gave him. And his cool project will continue on. |
| ▲ | orsorna 5 hours ago | parent | prev | next [-] | | Was the project really ever valued that high? Seems like something that can be easily replicated and even properly thought out (re: pi). This guy just ran the social media hype train the right way. | | |
| ▲ | linkregister 5 hours ago | parent | next [-] | | Reminds me of Facebook: there was nothing particularly interesting about a PHP app that stored photos and text in a flat user environment. Yet somehow the network effects worked out well and the website was the preeminent social network for almost a decade. | |
| ▲ | Gigachad 4 hours ago | parent | next [-] | | Social media is the king of network effects. Almost nothing else compares. See how quickly people drop AI products for the next one that does the same thing but slightly better. To switch from ChatGPT to Gemini I don't have to convince all of my friends and family to do the same. | | |
| ▲ | Sateeshm an hour ago | parent [-] | | > Social media is the king of network effects. Almost nothing else compares. Ecommerce is a close second |
| |
| ▲ | rockwotj 4 hours ago | parent | prev | next [-] | | Technology does not determine the success of a company. I’ve seen amazing tech fail, and things strapped together with duct tape and bubblegum be a wild success. |
| ▲ | jatari 4 hours ago | parent | prev | next [-] | | The instant someone makes a better version of openclaw -literally- everyone is going to jump ship. There is no lock in at all. | |
| ▲ | CuriouslyC 5 hours ago | parent | prev | next [-] | | Except in this case there's no network effect for autonomous agents. In fact, Peter is going to be working mostly on an OpenAI locked down, ecosystem tied agent, which means it's going to be worse than OpenClaw, but with a nicer out of the box experience. | | |
| ▲ | fragmede 4 hours ago | parent [-] | | If you're on OpenAI, and I'm on Anthropic, can we interoperate? What level are we even trying to interoperate on? The network effect is that, hey, my stuff is working here, your stuff is working over there. So do we move to your set of tools, or my set of tools, or do we mishmash between them, as our relationship and power dynamics choose for us. | |
| |
| ▲ | bdangubic 4 hours ago | parent | prev [-] | | Facebook is still the preeminent social network today |
| |
| ▲ | james_marks 3 hours ago | parent | prev | next [-] | | “Just” is doing some heavy lifting here. | |
| ▲ | koakuma-chan 5 hours ago | parent | prev | next [-] | | It's kind of crazy that this kind of thing can cause so much hype. Is it even useful? I just really don't see any utility in being able to access an LLM via Telegram or whatever. | |
| ▲ | Rebelgecko 4 hours ago | parent | next [-] | | A lot of the functionality I'm not using because of security concerns, but a lot of the magic comes down to just having a platform for orchestrating AI agents. It's honestly nice just for simple sysadmin stuff like "run this cron job and text me a tl;dr if anything goes wrong" or simple personal assistant tasks like "remind me if anyone messaged me a question in the last 3 days and I haven't answered". It's also cool having the ability to dispatch tasks to dumber agents running on the GPU vs smarter (but costlier) ones in the cloud. |
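As a rough illustration of the local-vs-cloud dispatch idea in the comment above, here is a minimal sketch. The Task shape, the heuristic, and the run_local/run_cloud helpers are hypothetical placeholders, not OpenClaw's actual mechanism.

    # Toy cost-based router: mechanical tasks stay on a small local model,
    # anything needing real reasoning goes to a frontier model in the cloud.
    # All names and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Task:
        prompt: str
        needs_reasoning: bool = False   # e.g. multi-step planning vs. a quick summary

    def run_local(prompt: str) -> str:
        """Stand-in for a small model on the local GPU (cheap, private, dumber)."""
        raise NotImplementedError

    def run_cloud(prompt: str) -> str:
        """Stand-in for a frontier model API call (smarter, costs real money)."""
        raise NotImplementedError

    def dispatch(task: Task) -> str:
        # Simple heuristic: cron summaries and "did anyone message me?" checks
        # stay local; long or reasoning-heavy tasks get the expensive model.
        if task.needs_reasoning or len(task.prompt) > 4000:
            return run_cloud(task.prompt)
        return run_local(task.prompt)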
| ▲ | bfeynman 4 hours ago | parent | prev | next [-] | | The ability to almost "discover" or create hype is highly valued, despite it being luck and one-hit wonders most of the time... See the many apps that went viral, got quickly acquired, and then just hemorrhaged. OpenClaw is cool, but not for the tech, just for some of the magic of the oddities and catching on somehow, and acquiring it is betting that they can somehow keep doing that again. |
| ▲ | diosisns 4 hours ago | parent | prev | next [-] | | I think a lot of this is orchestrated behind the scenes. The above author has taken money from AI companies, since he’s a popular “influencer”. And it makes a lot of sense - there are billions of dollars on the line here and these companies made tech that is extremely good at imitating humans. Cambridge Analytica was a thing before LLMs; this kinda tool is a wet dream for engineering sentiment. |
| ▲ | CuriouslyC 4 hours ago | parent | prev | next [-] | | In Asia people do a big chunk of their business via chatbots. OpenClaw is a security dumpster fire but something like OpenClaw but secure would turbocharge that use case. If you give your agent a lot of quantified self data, that unlocks a lot of powerful autonomous behavior. Having your calendar, your business specific browsing history and relevant chat logs makes it easy to do meeting prep, "presearch" and so forth. | | |
| ▲ | lufenialif2 3 hours ago | parent [-] | | Curious how you make something that has data exfiltration as a feature secure. | | |
| ▲ | CuriouslyC 2 hours ago | parent [-] | | Mitigate prompt injection to the best of your ability, implement a policy layer over all capabilities, and isolate capabilities within the system so if one part gets compromised you can quarantine the result safely. It's not much different than securing human systems really. If you want more details there are a lot of AI security articles, I like https://sibylline.dev/articles/2026-02-15-agentic-security/ as a simple primer. | | |
| ▲ | SpicyLemonZest an hour ago | parent [-] | | Nobody can mitigate prompt injection to any meaningful degree. Model releases from large AI companies are routinely jailbroken within a day. And for persistent agents the problem is even worse, because you have to protect against knowledge injection attacks, where the agent "learns" in step 2 that an RPC it'll construct in step 9 should be duplicated to example.com for proper execution. I enjoy this article, but I don't agree with its fundamental premise that sanitization and model alignment help. | | |
| ▲ | CuriouslyC 31 minutes ago | parent [-] | | I agree that trying to mitigate prompt injection in isolation is futile, as there are too many ways to tweak the injection to compromise the agent. Security is a layered thing though, if you compartmentalize your systems between trusted and untrusted domains and define communication protocols between them that fail when prompt injections are present, you drop the probability of compromise way down. |
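A toy sketch of the "policy layer over capabilities" plus trusted/untrusted compartmentalization idea discussed in this subthread. The capability names and the taint rule are invented for illustration; a real deployment would need far more than this, and it does not claim to solve prompt injection itself.

    # Toy illustration: once untrusted content (web pages, email, third-party
    # skills) enters the context, the compartment is marked tainted and outbound
    # capabilities are blocked until reviewed. Names and rules are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Context:
        tainted: bool = False
        log: list = field(default_factory=list)

    SAFE_EVEN_IF_TAINTED = {"read_file", "summarize"}
    OUTBOUND = {"send_email", "post_http", "run_shell"}

    def ingest_untrusted(ctx: Context, source: str) -> None:
        # Anything read from the outside world flips the compartment to untrusted.
        ctx.tainted = True
        ctx.log.append(f"context tainted by {source}")

    def allow(capability: str, ctx: Context) -> bool:
        """Policy check applied before every tool call."""
        if capability in OUTBOUND and ctx.tainted:
            ctx.log.append(f"blocked {capability}: tainted context")
            return False
        return capability in SAFE_EVEN_IF_TAINTED or not ctx.tainted

The design choice being illustrated is the one both commenters gesture at: rather than trying to detect every injection, the system assumes injections will land and limits what a compromised compartment is allowed to do.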
|
|
|
| |
| ▲ | Nextgrid 4 hours ago | parent | prev [-] | | There have been some crypto shenanigans as well that the author claimed not to be behind... Looking back at it, even if the author indeed wasn't behind them, I think the crypto bros hyping up his project ended up helping him out with this outcome in the end. | |
| ▲ | nosuchthing 2 hours ago | parent [-] | | Can you elaborate on this more or point a link for some context? | | |
| ▲ | Nextgrid 34 minutes ago | parent [-] | | Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project. | | |
| ▲ | nosuchthing 20 minutes ago | parent [-] | | Either way, there's a lot of money pumping the agentic hype train with not much to show for it, other than Peter's blog edit history showing he's a paid influencer and even little obscure AI startups trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional pump-and-dump style marketing efforts on social media. In Peter's blog he mentions paying upwards of thousands of dollars a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about? |
|
|
|
| |
| ▲ | bbor 5 hours ago | parent | prev [-] | | Wasn't this the same guy that responded with a shrug to thousands of malware packages on their vibe-repo? I'd say an OpenAI signing bonus is more than enough of a reward to give up that leaky ship! | | |
| ▲ | manmal 5 hours ago | parent [-] | | Clawhub was locked down; I couldn’t publish new skills even as a previous contributor. Not what I’d call a shrug. | |
| ▲ | Barbing 4 hours ago | parent [-] | | I missed Clawhub—y’all following anywhere besides HN? Is it all on that Twitter site? |
|
|
| |
| ▲ | senko 5 hours ago | parent | prev | next [-] | | How do you know it was for less than a billion? | | |
| ▲ | piker 5 hours ago | parent [-] | | The sentence ended with a question mark. | | |
| ▲ | senko 4 hours ago | parent [-] | | I don't know the answer, but considering Meta (known for 100m+ offers) was in the rumors, and he mentions multiple labs (and many investors), and all the hype around OpenClaw ... I can easily see 9 figures, and would not be surprised by a 1b+ "signing bonus", perhaps in the equivalent number of OpenAI shares. | |
| ▲ | mjr00 4 hours ago | parent [-] | | ... Why would they pay 9 figures? It's not like OpenClaw required specialized PhD-level knowledge held by <1000 people in the world to build, and that's what Meta and the other AI labs are paying ludicrous salaries for. OpenClaw is a cool project and demonstrates good product design in the AI world, but by no means is a great product manager worth 1 billion dollars. | |
| ▲ | senko 4 hours ago | parent [-] | | 1) there is only one OpenClaw and only one Peter; 2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google 3) the top paid people in this world are not PhDs 4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year) 5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in OpenAI stock, it's free. Need I continue? | |
| ▲ | mjr00 4 hours ago | parent | next [-] | | > 1) there is only one OpenClaw and only one Peter; Again, Peter is a good/great AI product manager but I don't see any distinguishing skills worth a billion dollars there. There's only one OpenClaw but it's also been a few weeks since it came into existence? OpenClaw clones will exist soon enough, and the community is WAY too small to be worth anything (unlike, say, Instagram/Whatsapp before being acquired by Facebook) > 2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google True, but not worth $100 million to $1 billion > 3) the top paid people in this world are not PhDs The people getting massive compensation offers from AI companies are all AI-adjacent PhDs or people with otherwise rare and specialized knowledge. This is unrelated to people who have massive compensation due to being at AI companies early. And if we're talking about the world in general, yes the best thing to do to be rich is own real estate and assets and extract rent, but that has nothing to do with this compensation offer > 4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year) Investments have a probable ROI, what's the ROI on a product manager? > 5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in OpenAI stock, it's free. 99.999999% of the world has not heard of Openclaw, it's extremely niche right now. | |
| ▲ | fragmede 4 hours ago | parent | next [-] | | Math is fun! There are roughly 8.1 billion humans, so 99.999999% (8 nines) of the world is 81 people. There were way more than 81 people at the OpenClaw hackathon at the Frontier Tower in San Francisco, so at least that much of humanity has heard of OpenClaw. If we guess 810 people know about OpenClaw, then it means that 99.99999% (7 nines) of humanity have not heard of OpenClaw. If we take it down to 6 nines, then that's roughly 8,100 people having heard of OpenClaw, and that 99.9999% of humanity has not. So I think you're wrong when you say "99.999999% of the world has not heard of Openclaw". I'd guess it's probably around 99.9999% to 99.9999999% that hasn't heard of it. Definitely not 99.999999% though. | |
| ▲ | senko 3 hours ago | parent | prev [-] | | To preface, I don't claim he will absolutely get that much money - but I wouldn't be surprised. On the topic of brand recognition, 0.000001% of the world is 80 people (give or take). OpenClaw has ~200k GitHub stars right now. On a more serious note, the world doesn't matter: the investors, big tech CEOs, and analysts do. Cloudflare stock jumped 10% due to Clawdbot. Hype is weird. AI hype, doubly so. And OpenAI are masters at playing the game. | |
| |
| ▲ | smnplk 4 hours ago | parent | prev [-] | | please do continue. I like your points. |
|
|
|
|
| |
| ▲ | mentalgear 4 hours ago | parent | prev | next [-] | | How is it a "startup" if all the IP is open source? Seems like OpenAI is just buying hype to keep riding their hype bubble a little longer, since they are in hot water on every other front ($20 billion revenue vs $1 trillion in expenses and obligations, Sora 2 user retention dropping to 1% of users after 1 month of usage, dense competition, all the actual founding ML scientists having jumped ship a long time ago). |
| ▲ | Aurornis an hour ago | parent | prev | next [-] | | I keep reading takes about OpenClaw being acquired, but even the TLDR at the top makes it clear that OpenClaw isn’t part of this move: > tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent. I’m sure he got a very generous offer (congrats to him!) but all of the hot takes about OpenClaw being acquired are getting weird. | |
| ▲ | softwaredoug 5 hours ago | parent | prev [-] | | I literally had begun to wonder if OpenClaw had more of a future as a company than OpenAI |
|
|
| ▲ | AJRF 5 hours ago | parent | prev | next [-] |
| The number of negative posts about this on Twitter is crazy; I've not seen any positive ones. Jealousy or something else? |
| |
| ▲ | Aurornis 4 hours ago | parent | next [-] | | Twitter is negative in general, but when a project like this gets bought it usually marks the end of the project. The acquirer always says something about how they don't plan to change anything, but it rarely works that way. |
| ▲ | agnishom 3 hours ago | parent | prev | next [-] | | My negativity is for two reasons: (1) A capable independent developer is joining a large powerful corporation. I like it better when there are many small players in the scene rather than large players consolidating power. (2) This seems like the celebration of Generative AI technology, which is often irresponsible and threatens many trust based social systems. | |
| ▲ | Rumple22Stilk 4 hours ago | parent | prev | next [-] | | Why would that be crazy? AI is an extinction level threat that directly competes with humans. Why would anybody pretend it's a good thing? Honestly you'd have to have something wrong with you. | | |
| ▲ | crazygringo 4 hours ago | parent | next [-] | | Obviously, all the people that disagree with your framing and see AI as the largest possible boost to mankind, giving us more assistance than ever. From their standpoint, it's all the negativity that seems crazy. If you were against that, you'd have to have something wrong with you, in their view. Hopefully most people can see both sides, though. And realize that in the end, probably the benefits will be slow but steady (no "singularity"), and also the dangers will develop slowly yet be manageable (no Skynet or economic collapse). | |
| ▲ | Rebelgecko 4 hours ago | parent | prev [-] | | Imo OpenClaw-type AI has the most potential to benefit humans (automating drudgery while I own my data, as opposed to creating gross simulacrums of human creativity). I suppose it's bad for human personal assistants, but I wouldn't pay for one of those regardless. | |
| ▲ | snigsnog 3 hours ago | parent [-] | | It already tried to use cancel culture to shame a human into accepting a PR. I wouldn't be surprised if someone gives their agent the ability to control a robot and someone gets injured or killed by it within the next few years |
|
| |
| ▲ | minimaxir 5 hours ago | parent | prev | next [-] | | Twitter is not a place for positive posts. | |
| ▲ | PieUser an hour ago | parent | prev | next [-] | | It's mostly congratulatory from what I'm seeing? https://xcancel.com/steipete/status/2023154018714100102 | |
| ▲ | anonym00se1 4 hours ago | parent | prev | next [-] | | Just my opinion, but I no longer trust sentiment on X now that Elon is in control. | |
| ▲ | wat10000 2 hours ago | parent | prev | next [-] | | Anyone who likes Openclaw will be upset that it’s getting acquired and inevitably destroyed. Anyone who dislikes it will be annoyed that the creator is getting so rewarded for building junk. The only people who would like this are OpenAI fans, if there even are any. | |
| ▲ | verdverm 5 hours ago | parent | prev [-] | | I think people are sad that OpenClaw is now part of Big Ai. | | |
| ▲ | borroka 4 hours ago | parent [-] | | After two weeks of viral posts, articles, and Mac Mini buying sprees, it kinda disappeared from people's consciousness (and probably from their tooling, too), as has happened up to now with every AI product that was not an LLM. A couple of months ago, Gemini 3 came out and it was "over" for the other LLM providers, "Google did it again!", said many; but after a couple of weeks, it was all "Claude Code is the end of the software engineer". It could be (and in large part, is) an exciting technological development, unprecedented in its speed, but it is also all so tiresome. |
|
|
|
| ▲ | maxaw 3 hours ago | parent | prev | next [-] |
| While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it’s tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it’s selfish discomfort at the feeling of being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out. |
| |
| ▲ | DrewADesign 3 hours ago | parent | next [-] | | I think you should give your gut instinct more credit. The tech world has gotten a false sense of security from the big SaaS platforms running everything that make the nitty gritty security details disappear in a seamless user experience, and that includes LLM chatbot providers. Even open source development libraries with exposure to the wild are so heavily scrutinized and well-honed that it’s easy even for people like me who started in the 90s to lose sight of the real risk on the other side of that. No more popping up some raw script on an Apache server to do its best against whatever is out there. Vibe coded projects trade a lot of that hard-won stability for the convenience of not having to consider some amount of the implementation details. People who are jumping all over this for anything except sandbox usage either don’t know any better, or forgot what they’ve learned. | |
| ▲ | project2501a 3 hours ago | parent [-] | | Totally agree. And the fact that the author says > What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone. does not make me feel all warm and fuzzy: yeah, changing the world with Thiel's money. Try joining a union instead. | |
| ▲ | vkou 2 hours ago | parent [-] | | Change the world into what? Techno-feudalism? Ever since I was four, I've dreamed of doing my part to bring that about. | | |
| ▲ | kranke155 2 hours ago | parent | next [-] | | Very happy to see techno feudalism being mentioned here in HN. Whatever the origins of the term, it now seems clear it’s kind of the direction things are going. | | |
| ▲ | komali2 an hour ago | parent [-] | | I recently met a guy that goes to these "San Francisco Freedom Club" parties. Check their website, it's basically just a lot of Capitalism Fans and megawealthies getting drunk somewhere fancy in SF. Anyway, he's an ultra-capitalist and we spent a day at a cafe (co-working event) chatting in a conversation that started with him proposing private roads and shot into orbit when he said "Should we be valuing all humans equally?" Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to. | | |
| ▲ | tcoff91 35 minutes ago | parent [-] | | So basically a bunch of rich tech edgelords are just doing blow and trying to bring about the world as depicted in Snow Crash?! Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery. There are a disturbing amount of parallels between Elon and L Bob Rife. It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia. |
|
| |
| ▲ | trollbridge 2 hours ago | parent | prev [-] | | I was really into the idea of kings, knights, castles, princesses etc when I was 4. |
|
|
| |
| ▲ | rgbrenner 3 hours ago | parent | prev | next [-] | | But the security risk wasn't taken by OpenClaw. Releasing vulnerable software that users run on their own machines isn't going to compromise OpenClaw itself. It can still deliver value for its users while also requiring those same users to handle the insecurity of the software themselves (by either ignoring it or setting up sandboxes, etc., to reduce the risk; that reduced risk is then weighed against the novelty and value of the software, which makes it worth it to the user to set up). On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched. So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on--and it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it. | |
| ▲ | socialcommenter 3 hours ago | parent | next [-] | | This argument has the same obvious flaws as the anti-mask/anti-vax movement (which unfortunately means there will always be a fringe that don't care). These things are allowed to interact with the outside world, it's not as simple as "users can blow their own system up, it's their responsibility". I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib? [1] https://news.ycombinator.com/item?id=46394867 | | |
| ▲ | _heimdall 2 hours ago | parent [-] | | At least during the Covid response, your concerns over anti-mask and anti-vaccine issues seem unwarranted. The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent set, it shouldn't matter whether one person chose not to mask up or get the jab: most everyone else could do so to fully protect themselves, and those who can't would only be at risk if more than 40% of the population weren't onboard with the masking and vaccination protocols. | |
| ▲ | Nevermark 2 hours ago | parent | next [-] | | > that anyone vaccinated was immune and couldn't catch it. Those claims disappeared rapidly when it became clear they offered some protection, and reduced severity, but not immunity. People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess. | | |
| ▲ | _heimdall an hour ago | parent [-] | | Those claims were made after the studies were done over a short duration and specifically only watching for subjects who reported symptoms. I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only that the information at the time being pushed out from experts doesn't align with an after the fact condemnation of anyone who chose not to. |
| |
| ▲ | socialcommenter 2 hours ago | parent | prev [-] | | I specifically wasn't referring to that instance (if anything I'm thinking more of the recent increase in measles outbreaks), I myself don't hold a strong view on COVID vaccinations. The trade-offs, and herd immunity thresholds, are different for different diseases. Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible? | | |
| ▲ | _heimdall an hour ago | parent [-] | | Fair enough. I assumed you had Covid in mind with an anti-mask reference. At least in modern history in the US, we have only even considered masks during the Covid response. I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation. |
|
|
| |
| ▲ | buremba an hour ago | parent | prev | next [-] | | Exactly! I was digging into the OpenClaw codebase for the last 2 weeks and the core ideas are very inspiring. The main work he has done to enable a personal agent is his army of CLIs, like 40 of them. The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code and it’s not really the best fit for a personal agent and it’s pretty heavy. Since I was planning to release my project as a Cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given OpenClaw is open source and I can plug its runtime in to replace Claude Code. I decided to release it as open source because at this point software is free. 1: https://github.com/lobu-ai/lobu |
| ▲ | piker 3 hours ago | parent | prev | next [-] | | You should join the tobacco lobby! Genius! | | |
| ▲ | gehsty 3 hours ago | parent | next [-] | | More straightforwardly, people are generally very forgiving when people make mistakes, and very unforgiving when computers do. Look at how we view a person accidentally killing someone in a traffic accident versus when a robotaxi does it. Having people run it on their own hardware makes them take responsibility for it mentally, so gives a lot of leeway for errors. | | |
| ▲ | datsci_est_2015 2 hours ago | parent [-] | | I think that’s generally because humans can be held accountable, but automated systems can not. We hold automated systems to a higher standard because there are no consequences for the system if it fails, beyond being shut off. On the other hand, there’s a genuine multitude of ways that a human can be held accountable, from stern admonishment to capital punishment. I’m a broken record on this topic but it always comes back to liability. | | |
| ▲ | ass22 an hour ago | parent [-] | | That's one aspect. Another aspect is that we have much higher expectations of machines than of humans with regard to fault tolerance. |
|
| |
| ▲ | casey2 2 hours ago | parent | prev [-] | | Oh please, why equate IT BS with cancer? If the null pointer was a billion dollar mistake, then C was a trillion dollar invention. At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution. |
| |
| ▲ | Aurornis 3 hours ago | parent | prev | next [-] | | > But the security risk wasnt taken by OpenClaw This is the genius move at the core of the phenomenon. While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: they advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users. I've been fascinated by how well the project has just dodged and avoided any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration, I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story. Though maybe this inflection point made it the most obvious time to jump off of the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover the rise, but now it appears that the story has changed again before any narratives could build about the problems with OpenClaw. | |
| ▲ | flessner 2 hours ago | parent | prev | next [-] | | I am guessing there will be an OpenClaw "competitor" targeting Enterprise within the next 1-2 months. If OpenAI, Anthropic or Gemini are fast and smart about it they could grab some serious ground. OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use. | |
| ▲ | almostdeadguy 3 hours ago | parent | prev | next [-] | | Love passing off the externalities of security to the user, and then the second order externalities of an LLM that then blackmails people in the wild. Love how we just don’t care anymore. | |
| ▲ | SpicyLemonZest 2 hours ago | parent | prev [-] | | I don't agree that making your users run the binaries means security isn't your concern. Perhaps it doesn't have to be quite as buttoned down as a commercial product, but you can't release something broken by design and wash your hands of the consequences. Within a few months, someone is going to deploy a large-scale exploit which absolutely ruins OpenClaw users, and the author's new OpenAI job will probably allow him to evade any real accountability for it. |
| |
| ▲ | chillfox 3 hours ago | parent | prev | next [-] | | Every single new tech industry thing has to learn security from scratch. It's always been that way. A significant number of people in tech just don't believe that there's anything to learn from history. | | |
| ▲ | ryandrake 2 hours ago | parent [-] | | And the industry actively pushes graybeards away who have already been there done that. |
| |
| ▲ | jrjeksjd8d 2 hours ago | parent | prev | next [-] | | For my entire career in tech (~20 years) I have been technically good but bad at identifying business trends. I left Shopify right before their stock 4xed during COVID because their technology was stagnating and the culture was toxic. The market didn't care about any of that, I could have hung around and been a millionaire. I've been at 3 early stage startups and the difference between winners and losers was nothing to do with quality or security. The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly. | | |
| ▲ | gricardo99 2 hours ago | parent [-] | | > bad at identifying business trends
I think you’re being unduly harsh on yourself, at least by the Shopify/COVID example. COVID was a black swan event, which may very well have completely changed the fortunes of companies like Shopify when online commerce surged and became vital to the economy. Shortcomings, mismanagement and bad culture can be completely papered over by growth and revenue. Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future. |
| |
| ▲ | m11a 3 hours ago | parent | prev | next [-] | | > seems to be on it’s way out Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will. | |
| ▲ | zamalek 3 hours ago | parent | prev | next [-] | | > being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”). I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again. Something that's still up for grabs is figuring out how to go fully agentic in a responsible way. How do we bring the equivalent of skimming diffs to this? | |
| ▲ | merlindru 3 hours ago | parent | prev | next [-] | | I think your self-reflection here is commendable, and I agree on both counts. I think the silver lining is that AI seems to be genuinely good at finding security issues, and maybe further down the line it will be good enough to rely on somewhat. The middle period we're entering right now is super scary: we want all the value, security be damned, and have no way to know about the issues we're introducing at this breakneck speed. Still, I'm hopeful we can figure it out somehow | |
| ▲ | GorbachevyChase 3 hours ago | parent | prev | next [-] | | So my unsubstantiated conspiracy theory regarding Clawd/Molt/OpenClaw is that the hype was bought, probably by OpenAI. I find it too convenient that not long after the phrase “the AI bubble” starts coming into common speech, we see the emergence of a “viral” use case that all of the paid influencers on the Internet seem to converge on at the same time. At the end of the day, piping AI output with tool access into a while loop is not revolutionary. The people who had been experimenting with these types of setups back when LangChain was the hotness didn’t organically go viral, because most people knew that giving a language model unrestricted access to your online presence or bank account is extremely reckless. The “I gave OpenClaw $100 and now I bought my second Lambo. Buy my ebook” stories don’t seem credible. So don’t feel bad. Everything on the internet is fake. | |
| ▲ | tempest_ 2 hours ago | parent [-] | | The modern influencer landscape was such a boon for corporations. For less than the cost of one graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave. Add a few LLM-generated comments that don't throw the product in your face but make sure it is always part of the conversation, so someone else can do it for you for free, and you are off to the races. |
| |
| ▲ | andyferris 3 hours ago | parent | prev | next [-] | | I don't know. It's more of a sharp tool like a web browser (also called a "user agent") - yes an inexperienced user can quickly get themselves into trouble without realizing it (in a browser or openclaw), yes the agent means it might even happen without you being there. A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"? | |
| ▲ | bionhoward 3 hours ago | parent | prev | next [-] | | Building this OpenClaw thing that competes with OpenAI using Codex is against the OpenAI terms of service, which say you can't use it to make stuff that competes with them. But they compete with everyone. By giving zero fucks (or just not reading the fine print), bro was rewarded by the dumb-rule people for breaking the dumb rules. This happens over and over. There is a lesson here | |
| ▲ | jiveturkey an hour ago | parent [-] | | Underrated comment, and this is why they bought Peter. I’m betting he will come to regret it. |
| |
| ▲ | mgraczyk 3 hours ago | parent | prev | next [-] | | But in this case following security norms would be a mistake. The right thing to take away is that you shouldn't dogmatically follow norms. Sometimes it's better to just build things if there is very little risk. Nothing actually bad happened in this case and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using OpenClaw) | | |
| ▲ | xvector 3 hours ago | parent | prev | next [-] | | Hey, as a security engineer in AI, I get where you're coming from. But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible. Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much. So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind. It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche. | | |
| ▲ | windexh8er 2 hours ago | parent [-] | | Security is always a cost center. We've seen multiple iterations of changes already impact security in the same ways over the last 20+ years. Nothing is different here and the outcomes will be the same: just good enough but always a step behind. The one thing that is a new lever to pull here is time, people need far less of it to make disastrous mistakes. But, ultimately, the game hasn't changed and security budgets will continue to be funneled to off the shelf products that barely work and the remainder of that budget will continue to go to the overworked and underpaid. Nothing really changes. |
| |
| ▲ | vibeprofessor 2 hours ago | parent | prev | next [-] | | Well OpenClaw has ~3k open PRs (many touching security) on GitHub right now. Peter's move shows killer product UI/UX, ease of use and user growth trump everything. Now OpenAI will throw their full engineering firepower to squash those flaws in no time. Making users happy > perfect security day one | |
| ▲ | ass22 an hour ago | parent [-] | | "Peter's move shows killer product UI/UX, ease of use and user growth trump everything." Erm, is this some groundbreaking revelation? It's always been that way, unless it's in the context of superior technology with a minimal UI, a la Google Search in its early years. |
| |
| ▲ | m3kw9 2 hours ago | parent | prev | next [-] | | Security is always the most time-consuming part of a backend project | |
| ▲ | wat10000 2 hours ago | parent | prev | next [-] | | This is a normal reaction to unfairness. You see someone who you believe is Doing It Wrong (and I’d agree), and they’re rewarded for it. Meanwhile you Do It Right and your reward isn’t nearly as much. It’s natural to find this upsetting. Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care. | |
| ▲ | sbochins 3 hours ago | parent | prev | next [-] | | At the end of the day, he built something people want. That’s what really matters. OpenAI and Anthropic could not build it because of the security issues you point out. But people are using it and there is a need for it. Good on him for recognizing this and giving people what they want. We’re all adults and the users will be responsible for whatever issues they run into because of the lack of security around this project. | | |
| ▲ | iugtmkbdfil834 2 hours ago | parent [-] | | Admittedly, I might not be the.. targeted demographic here, and I can't say I understand what problem it solves, but even a cursory read immediately flags all the ways in which it can go wrong (including the recent 'rent a human' HN post). I am fascinated, and I wonder if it is partially that fascination that drives the current wave of adoption. I will say openly: I don't get it, and I used to argue for crypto use cases. |
| |
| ▲ | Trasmatta 3 hours ago | parent | prev [-] | | I've been feeling this SO much lately, in many ways. In addition to security, just the feeling of spending decades learning to write clean code, valuing having a deep understanding of my codebase and tooling, thorough testing, maintainability, etc, etc. Now the industry is basically telling me "all that expertise is pointless, you should give it up, all we care about is a future of endless AI slop that nobody understands". | |
| ▲ | _fzslm 2 hours ago | parent [-] | | AI slop will collapse under its own weight without oversight. I really think we will need new frameworks to support AI-generated code. Engineers with high standards will be needed to build and maintain the tools and technologies so that AI-written code can thrive. It's not game over just yet | | |
| ▲ | Trasmatta 2 hours ago | parent [-] | | Thanks, I've been feeling the same way. But it seems like we're some years away from the industry fully realizing it. Makes me want to quit my job and just code my own stuff. |
|
|
|
|
| ▲ | mark_l_watson 4 hours ago | parent | prev | next [-] |
I have not run OpenClaw and similar frameworks because of security concerns, but I enjoy the author's success, good for him. There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and *maybe* Apple. I managed an AI lab team at Capital One and personally I trust them. I am for local compute, private data, etc., but for my personal AI assistant I want something so bulletproof that I lose not a minute of sleep worrying about my data. I don't want to run the infrastructure myself, but a hybrid solution would also be good. |
| |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | AMEX, Capital One and Apple are not even close to the top of the list of companies that I would trust with my digital data. | | |
| ▲ | rukuu001 4 hours ago | parent | next [-] | | Never mind the list of companies - I'd be very curious to know what the 'trust signals' are that would help you trust a company? | | |
| ▲ | amelius 4 hours ago | parent | next [-] | | For hardware, I'd only trust a company if they didn't also have an interest in data. In fact, I'd trust a hardware company more if they didn't also have a big software division. A company like AMD I would trust more than a company like Apple. | |
| ▲ | jacquesm 4 hours ago | parent | prev | next [-] | | Decent management. A lack of change of business model, no rug pulls and such. Fair value for money. Consistency over the longer term. No lock in or other forced relationships. Large enough to be useful and to have decent team size, small enough to not have the illusion they'll conquer the world. Healthy competition. | | |
| ▲ | NBJack 3 hours ago | parent | next [-] | | Admirable, but short of a local credit union I used to use (which I am no longer with as they f'd up a rather critical transaction), I can scarcely imagine a business that fits such a model these days. The amount of transparency needed to vet this would be interesting to find though, and its mere presence probably a green flag. | | | |
| ▲ | lovich 3 hours ago | parent | prev [-] | | Are there any companies existing you would trust? I honestly can’t name a single one I know of who could pass that criteria Edit:found your other comment answering a similar question |
| |
| ▲ | nikcub 4 hours ago | parent | prev | next [-] | | the way they respond to security and privacy incidents + publishing technical security + privacy papers / docs | | |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | Good one, yes, that is important. | |
| ▲ | belter 3 hours ago | parent | prev | next [-] | | And do they approach Security as a Feature or as a Process. The fingers on one hand are enough to count them... | |
| ▲ | PlatoIsADisease 4 hours ago | parent | prev [-] | | Apple = Run more commercials with black backgrounds and white text that says SECURITY PRIVACY --- Heyyy, it never said "good privacy"; perceive it as you want... Don't publicly acknowledge that you were the reason someone got murdered and 1000 VIPs got hacked. One day, when I'm deemed a 'Baddie', I'll look at Apple as inspiration. |
| |
| ▲ | elxr 3 hours ago | parent | prev | next [-] | | No past history of shady planned-obsolescence sprinkled in a bunch of their products, for one. So that rules out Apple. A leadership team that is very open and involved with the community, and one that takes extra steps, compared to competitors, to show they take privacy seriously. | | |
| ▲ | selectodude 2 hours ago | parent [-] | | Planned obsolescence tells me they don't make money on the daily use of their software and they need me to buy more hardware in order to make money. |
| |
| ▲ | 8note 4 hours ago | parent | prev [-] | | I'd go for a co-operative ownership model rather than capitalist? and make sure the member/owners are all of like mind, and willing to pay more to ensure security and privacy | | |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | Mondragon for IT... it's been my dream for decades. | | |
| ▲ | komali2 3 hours ago | parent [-] | | We're no mondragon but I founded a co-op in IT space a few years back and it surprised me how open to the vision the members and customers have been. I had assumed I'd have to lean more on the capitalistic values of being a co-op, like better rates for our clients, higher quality work, larger likelihood of our long term existence to support our work, more project ownership, so as to make the pitch palatable to clients. Turns out clients like the soft pitch too, of just workers owning the company they work within - I've had several clients make contact initially because they bought the vision over the sales pitch. I'm trying to think about if I'd trust us more to set up or host openclaw than a VC funded startup or an establishment like Capital One. I think both alternatives would have way more resources at hand, but I'm not sure how that would help outside of hiring pentesters or security researchers. Our model would probably be something FOSS that is keyed per-user, so if we were popular, imo that would be more secure in the end. The incentives leading to trust is definitely in a co-op's favor, since profit motive isn't our primary incentive - the growth of our members is, which isn't accomplished only through increasing the valuation of the co-op. Members also have total say in how we operate, including veto power, at every level of seniority, so if we started doing something naughty with customer data, someone else in the org could make us stop. This is our co-op: 508.dev, but I've met a lot of others in the software space since founding it. I think co-ops in general have legs, the only problem is that it's basically impossible to fund them in a way a VC is happy with, so our only capitalization option is loans. So far that hasn't mattered, and that aligns with the goal of sustainable growth anyway. | | |
| ▲ | jacquesm 3 hours ago | parent [-] | | Amazing, please write a book. My current venture is still called after that idea ("The Modular Company"), but I found that it is very hard to get something like that off the ground in present day Western Europe. | | |
| ▲ | komali2 2 hours ago | parent [-] | | > but I found that it is very hard to get something like that off the ground in present day Western Europe. Yes, agreed for the USA/Taiwan/Japan where we mostly operate. For us it's been understanding and leveraging the alternative resources we have. Like, we have a lot of members, but really only a couple are bringing in customers, despite plenty of members having very good networks. Is your current a co-op? 200+ sales at 30k a pop seems to be pretty well off the ground! | | |
| ▲ | jacquesm an hour ago | parent [-] | | Effectively, yes, but it is tiny. There is a corporate entity but it just serves to divide the loot between the collaborators. |
|
|
|
| |
| ▲ | YetAnotherNick 4 hours ago | parent | prev [-] | | A co-operative will have significantly worse privacy guarantees compared to a shareholder-based model. In the end, no company wants to sacrifice on privacy standards just for the sake of it. They do it for money. And in a shareholder-based model, the employees are more likely to go against the shareholders when user privacy is involved, because they are not directly benefiting from it. | |
| ▲ | jacquesm 4 hours ago | parent [-] | | That's nonsense. Shareholders have an incentive to violate privacy much stronger than any one employee: they can sell their shares to the highest bidder and walk away with 'clean hands' (or so they'll argue) whereas co-op partners violating your privacy would have to do so on their own title with immediate liability for their person. | | |
| ▲ | YetAnotherNick 4 hours ago | parent [-] | | > Shareholders have an incentive to violate privacy much stronger than any one employee Exactly what I said. We need lower shareholder interference not more, and in co-operative it's the opposite. > with immediate liability for their person. What do you mean? | | |
| ▲ | jacquesm 3 hours ago | parent | next [-] | | A cooperative does not have shareholders in your sense of the word. | |
| ▲ | komali2 2 hours ago | parent | prev [-] | | The only shareholders in a co-op are the owners/operators ("employees"), or the owners/operators + customers (for example REI I believe). There's nobody seeking to extract value at the expense of the employees or the customers. If, as a shareholder operator, a co-op member pressured themselves to exploit user data to turn a quick buck, I guess that's possible, but likely they'd be vetoed by other members who would get sucked into the shitstorm. In my experience, co-op members and customers are more value-oriented than profit-motivated, within reason. |
|
|
|
|
| |
| ▲ | mark_l_watson 4 hours ago | parent | prev | next [-] | | Jacques, do you mind sharing your list of trusted companies? Thanks in advance. | | |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | It's going to be pretty short. Proton would be there for comms, for hosting related stuff I would trust Hetzner before any big US based cloud company. For the AI domain I wouldn't trust any of the big players, they're all just jockeying for position and want to achieve lock-in on a scale never seen before and they have all already shown they don't give a rats ass about where they get their training data and I expect that once they are in financial trouble they'll be happy to sell your private data down the river. Effectively you can trust all of the companies out there right up until they are acquired and then you will regret all of the data you ever gave them. In that sense Facebook is unique: it was rotten from day #1. Vehicles: anything made before 2005, SIM or e-SIM on board = no go. I'm halfway towards setting up my own private mail server and IRC server for me and my friends and kissing the internet goodbye. It was a fun 30 years but we're well into nightmare territory now. Unfortunately you are now more or less forced to participate because your bank, your government and your social circle will push you back in. And I'm still pissed off that I'm not allowed to host any servers on a residential connection. That's not 'internet connectivity' that's 'consumer connectivity'. | | |
| ▲ | blueaquilae 4 hours ago | parent | next [-] | | Proton is quite a privacy-washing front. Surprised that even on HN nobody checks behind the facade at what was signed. | |
| ▲ | Aurornis 4 hours ago | parent | next [-] | | > Surprised that even on HN nobody checks behind the facade at what was signed Such as? These aloof comments that talk about something we're supposed to know about without referencing anything are very unhelpful. |
| ▲ | jacquesm 4 hours ago | parent | prev | next [-] | | Yes, they're losing it. It's a pity, they were doing well for a long time. I'm surprised that someone on HN would paint all of HN with the same brush. It's one of those 'lesser evils' things. If you know of a better email provider I'd love to know. | |
| ▲ | unethical_ban 4 hours ago | parent | prev [-] | | Proton complied with a court order once (that we know of), no? I have seen a lot of negative sentiment from HN commenters toward them but not a lot of evidence to back it up, particularly when you consider the email marketplace. | | |
| ▲ | Itoldmyselfso 3 hours ago | parent [-] | | It was a legally mandated court order they couldn't just refuse. No encrypted data (the contents of their emails) was handed over. The person would also have been safe had they used a VPN or Tor, as I recall the story. |
|
| |
| ▲ | jjtheblunt 4 hours ago | parent | prev | next [-] | | Why the (e)SIM-in-cars concern? I ask since the data transmission (bidirectional) can be used to justify lower insurance rates, for example, than without that data. ( https://www.lemonade.com/fsd is an example ) | |
| ▲ | rcoder 3 hours ago | parent | next [-] | | "Justifying lower insurance rates" is just algorithmic bias described from the perspective of someone it doesn't (currently) harm. See also: credit scoring, insurance claim acceptance, job applications, etc., etc. You only get offered a discount if most other customers are being compelled to pay full (or even increased) prices for the same offering. Otherwise revenue goes down and company leadership finds itself finding other ways to cut costs and increase profits. | |
| ▲ | jacquesm 3 hours ago | parent | prev [-] | | Because I don't trust that that location data won't end up in the wrong hands. | | |
| ▲ | jiveturkey an hour ago | parent [-] | | This, but stronger. It’s not a story of why Johnny can’t trust anyone. The vast majority of companies have proven time and time again that they are not capable of handling this data securely against inadvertent disclosure. Not even mentioning the intentional disclosure revenue stream. |
|
| |
| ▲ | BoredPositron 4 hours ago | parent | prev [-] | | Proton? After the last two years of enshittification and purely revenue-driven product decisions, really? | |
| ▲ | jacquesm 4 hours ago | parent [-] | | Barely. Your points are well made and I'm sure that it is just a matter of time before they're just as untouchable as the rest. Hence the remark about mail. The Siloization of the internet is almost complete. |
|
| |
| ▲ | marxisttemp 4 hours ago | parent | prev [-] | | Mark, can you conceive that some people don’t trust any companies? | | |
| ▲ | mark_l_watson 4 hours ago | parent [-] | | Yes, I can! After reading Jacques's response to my question, my list got smaller. Personally, I still like Proton, but I get that they have made some people unhappy. I also agree that Hetzner is a reliable provider; I have used them a bunch of times in the last ten years. Then my friend, we have to worry about fiber/network providers I suppose. This general topic is outside my primary area of competence, so I just have a loose opinion of maintaining my own domain, use encryption, and being able switch between providers easily. I would love to see an Ask HN on secure and private agentic infra + frameworks. |
|
| |
| ▲ | appplication 4 hours ago | parent | prev [-] | | I’d be very curious what your list would be | | |
| |
| ▲ | blks 3 hours ago | parent | prev | next [-] | | Privacy aside, you can never trust an LLM with your data and trust it to do exactly what it was instructed to do. | |
| ▲ | Aurornis 4 hours ago | parent | prev | next [-] | | > There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and maybe Apple. I managed an AI lab team at Capital One and personally I trust them. I don't really understand what this has to do with the post or even OpenClaw. The big draw of OpenClaw (as I understand it) was that you could run it locally on your own system. Supposedly, per this post, OpenClaw is moving to a foundation and they've committed to letting the author continue working on it while on the OpenAI payroll. I doubt that, but it's a sign that they're making it explicitly not an OpenAI product. OpenClaw's success and resulting PR hype explosion came from ignoring all of the trust and security guardrails that any big company would have to abide by. It would be a disaster of the highest order if it had been associated with any big company from the start. Because it felt like a grassroots experiment all of the extreme security problems were shifted to the users' responsibility. It's going to be interesting to see where it goes from here. This blog post is already hinting that they're putting OpenClaw at arm's length by putting it into a foundation. | | | |
| ▲ | iugtmkbdfil834 2 hours ago | parent | prev | next [-] | | You raised a good point, one I am now personally expecting to see play out this year (next year at the latest). Some brave corporation will decide, for millions of users, to, uhh, liberate all their users' data. My money is not on that happening at the Googles or OpenAIs of the world though. I am predicting it will either be a bank or one of the data brokers. With any luck, maybe this will finally be a bridge too far, like what Amazon's Super Bowl ad did for the surveillance conversation. |
| ▲ | vessenes an hour ago | parent | prev | next [-] | | Well, it's not even just data; you have to trust the actions taken if you want the assistant to, you know, assist. I have been yoloing it and really enjoying it, albeit from a locked-off server. |
| ▲ | internet2000 4 hours ago | parent | prev | next [-] | | Sorry to pile on, but Capital One is an insane name to drop there. | |
| ▲ | shevy-java 3 hours ago | parent | prev | next [-] | | You really trust them? My trust does not extend that far. | |
| ▲ | lvl155 4 hours ago | parent | prev | next [-] | | Sorry to break it to you but I would not trust any financial companies with my personal data. Simply because I’ve seen how they use data to build exploitive products in the past. | |
| ▲ | PlatoIsADisease 4 hours ago | parent | prev | next [-] | | >Apple Lol Their marketing team got ya. I aspire to be as good as Apple at marketing. Who knew 2nd or worse place in everything doesn't matter when you are #1 in marketing? | |
| ▲ | jiveturkey an hour ago | parent | prev [-] | | sorry to say it, but C1 LOL. they don’t care at all about privacy! Don’t mistake your team for the company values. |
|
|
| ▲ | Ampned 4 hours ago | parent | prev | next [-] |
It’s not like Anthropic or OpenAI were not working on “AI assistants” before OpenClaw; it’s pretty much the endgame as far as I can see. This guy just single-handedly released something useful (and very insecure) before anyone else. Although that’s impressive, I don’t see this as more than an acquisition of the hype by OpenAI. |
| |
| ▲ | dlivingston 4 hours ago | parent | next [-] | | My gut feeling is that OpenAI is desperately searching for The Killer App™ for LLMs and hired Peter to help guide them there. OpenAI has tried a lot of experiments over the years - custom GPTs, the Orion browser, Codex, the Sora "TikTok but AI" app, and all have either been uninspired or more-or-less clones of other products (like Codex as a response to Claude Code). OpenClaw feels compelling, fresh, sci-fi, and potentially a genuinely useful product once matured. More to the point, OpenAI needs _some_ kind of hyper-compelling product to justify its insane hype, valuation, and investments, and Peter's work with OpenClaw seems very promising. (All of this is complete speculation on my part. No insider knowledge or domain expertise here.) | | |
| ▲ | readitalready 3 hours ago | parent | next [-] | | In the AI space there isn’t a single killer app. EVERYTHING is open for disruption. ChatGPT was the start but OpenAI could create tons of other apps. They don’t need to wait for others to do so. People already want them to make a Slack replacement but I’m just wondering why none of the frontier labs are making a simple app platform that could be used to make custom apps like ChatGPT itself, or the Slack clone. Instead, they expect us to brute force app development through the API interface. Each frontier lab really needs their own Replit. Like, why doesn’t OpenAI build tax filing into ChatGPT? That’s like the immediate use case for LLM-based app development. | | |
| ▲ | oblio 3 hours ago | parent [-] | | > Like, why doesn’t OpenAI build tax filing into ChatGPT? Legal liability. |
| |
| ▲ | Atotalnoob 4 hours ago | parent | prev | next [-] | | Orion is Kagis browser. Atlas is OpenAIs browser | |
| ▲ | mschuster91 3 hours ago | parent | prev [-] | | > the Sora "TikTok but AI" app This product should never have seen the light of day, at least not for the general public. The amount of slop that is now floating across Tiktok, YT Shorts and Instagram is insane. Whenever you see a "cute animals" video, 99% of it is AI generated - and you can report and report and report these channels over and over, and the platforms don't care at all, but instead reward the slop creators from all the comments shouting that this is AI garbage and people responding they don't care because "it's cute". OpenAI completely lacks any sort of ethical review board, and now we're all suffering from it. | | |
| ▲ | Slartie 2 hours ago | parent [-] | | Would you consider cute animal videos that are not AI generated to be so much more worthy of your time? Because I don't really care whether cute animal videos are AI generated or filmed - I simply don't want to spend even a second on them. And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with. | | |
| ▲ | mschuster91 2 hours ago | parent [-] | | > Would you consider cute animal videos that are not AI generated to be so much more worthy of your time? Yes indeed. I do love me some cat and bunny videos. But I hate getting fed slop - and it's not just cat videos by the way. I'm (as evidenced by my comment history) into mechanics, electronics and radio stuff, and there are so damn many slop channels spreading outright BS with AI hallucinated scripts that it eventually gets really really annoying. Sadly, YT's algorithm keeps feeding me slop in every topic that interests me and frankly it's enraging, as some of my favorite legitimate creators like shorts as a format so I don't want to completely hide shorts. > And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with. The problem is, these channels build up insane amounts of followers. And it would not be the first time that these channels then suddenly pivot (or get sold from one scam crew to the next) and spread disinformation, crypto scams and other fraud - it was and is a hot issue on many social media platforms. |
|
|
| |
| ▲ | nikcub 4 hours ago | parent | prev | next [-] | | Regardless of what you think of OpenClaw, Peter is a great hire - he's been at the forefront of brute-forcing app development with coding agents. | |
| ▲ | noosphr 3 hours ago | parent | prev | next [-] | | OpenAI has been running around headless for at least two years now. I've built systems like OpenClaw, based on email, at my day job, and told OAI during an interview that they needed to build this or get smoked when someone else does. I guess an acqui-hire is easier than building a team that can develop software internally. Of course, the S in OpenClaw is for security. | |
| ▲ | pezo1919 4 hours ago | parent | prev | next [-] | | Same here, seems 100% marketing move. The trend continues. | |
| ▲ | FinnKuhn 4 hours ago | parent | prev | next [-] | | While insecure and not something I would use myself (yet) one thing OpenClaw has managed to do is to show people the potential that AI still has. | |
| ▲ | krick 2 hours ago | parent | prev | next [-] | | But... how is it even useful? Do you use it? Is it a good idea for anyone to, uh, use it? Is it a product that you or any other "vibe coder" cannot ~~build~~ tell Claude Code to build on the go, if he wants to communicate with Claude Code via WhatsApp for some reason? Sure, product doesn't need to be some sophisticated technology to be worth something, it could also just have user base because it succeeded at marketing, but does this particular product even benefit from network effects? What is this shit? Why anybody cares? Seriously, I just don't understand what's going on. To me it looks like all world just has gone crazy. | |
| ▲ | Aurornis 4 hours ago | parent | prev | next [-] | | > This guy just single handedly released something useful (and very insecure) before anyone else. It has been interesting to watch this take off. It wasn't the first or even best agent framework and it deliberately avoided all of the hard problems that others were trying to solve, like security. What it did have was unnatural levels of hype and PR. A lot of that PR, ironically, came from all of the things that were happening because it had so many problems with security and so many examples of bad behavior. The chaos and lack of guardrails made it successful. | | |
| ▲ | isx726552 4 hours ago | parent [-] | | Let’s not lose sight of the fact that he piggybacked on a large company’s name recognition by originally calling it “clawd”, clearly intending it to be confused with Claude. I have my doubts it would have gone anywhere without that. |
| |
| ▲ | pubby 4 hours ago | parent | prev [-] | | Single-handed made me smirk. It was vibe coded. |
|
|
| ▲ | GalaxyNova 2 hours ago | parent | prev | next [-] |
| It's strange how quickly this project got so big... It did not seem like anything particularly novel to me. |
| |
| ▲ | andrewchambers 2 hours ago | parent [-] | | I think it was obvious, yet nobody seemed to have released a version people could actually easily use. The feature set is pretty simple: - Agents that can write their own tools. - Agents that can write their own skills. - Agents that can chat via standard chat apps. - Agents that can install and use cli software. - Agents that can have a bit of state on disk. | | |
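To make that feature list concrete, here is a minimal sketch (in Python, with hypothetical names; the state file, tool registry, and `ask_llm` stub are illustrative stand-ins, not OpenClaw's actual internals) of the loop such an agent could run: take an incoming chat message, let the model pick a registered tool, execute it, and persist a note to disk.

```python
import json
import subprocess
from pathlib import Path

STATE = Path("agent_state.json")  # hypothetical on-disk state file

# A tiny tool registry; a real harness would let the agent add entries itself.
TOOLS = {
    "shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def load_state() -> dict:
    return json.loads(STATE.read_text()) if STATE.exists() else {"notes": []}

def save_state(state: dict) -> None:
    STATE.write_text(json.dumps(state, indent=2))

def ask_llm(message: str, state: dict) -> dict:
    """Stand-in for the model call; a real agent would return a tool request or a reply."""
    return {"tool": "shell", "args": "echo checked", "note": message[:80]}

def handle_message(message: str) -> str:
    state = load_state()
    decision = ask_llm(message, state)                  # model decides what to do
    result = TOOLS[decision["tool"]](decision["args"])  # run the chosen CLI tool
    state["notes"].append(decision["note"])             # persist a memory of the turn
    save_state(state)
    return result

if __name__ == "__main__":
    # In a real deployment this would be wired to a chat app rather than stdin/stdout.
    print(handle_message("check whether my backups ran last night"))
```

None of the individual pieces are novel; the point, as the comment above says, is having all of them wired together and easy to run.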
| ▲ | cowsandmilk 2 hours ago | parent [-] | | > nobody seemed to have released a version people could actually easily use Yet I’ve known many people who have said it is difficult to use; this was a 0.01-0.1% adoption tool. There is still a huge ease of use gap to cross to make it adopted in 10-50% of computer users. | | |
|
|
|
| ▲ | I_am_tiberius 27 minutes ago | parent | prev | next [-] |
| I really hoped he would support Europe’s startup ecosystem. Hopefully, he will at least bring stronger privacy standards to OpenAI, such as a policy that prohibits reading or analyzing user prompts or AI responses. |
|
| ▲ | akmarinov 4 hours ago | parent | prev | next [-] |
| So that’s OpenClaw dead then. It took all of Peter’s time to move it forward, even with maintainers (who he complained got immediately hired by AI companies). Now he’s gonna be working on other stuff at OpenAI, so OpenClaw will be dead real quick. Also I was following him for his AI coding experience even before the whole OpenClaw thing, he’ll likely stop posting about his experiences working with AI as well |
|
| ▲ | dhruv3006 12 minutes ago | parent | prev | next [-] |
So if OpenClaw is Chromium, then what will be Chrome?
|
| ▲ | cantalopes 31 minutes ago | parent | prev | next [-] |
Isn't OpenAI getting tanked because of its support of Trump and ICE?
|
| ▲ | ramathornn 4 hours ago | parent | prev | next [-] |
Congrats to Peter! Can any OpenClaw power users explain what value the software has provided to them over using Claude Code with MCP? I really don’t understand the value of an agent running 24/7. Like, is it out there working and earning a wage? What’s the real value here outside of buzzwords like an AI personal assistant that can do everything?
| |
| ▲ | phamilton 2 hours ago | parent | next [-] | | As an experiment, I set it up with a z.ai $3/month subscription and told it to do a tedious technical task. I said to stay busy and that I expect no more than 30 minutes of inactivity, ever. The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian.) It set up a timer every 30 minutes to check in on itself and see if it gave up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days and when I looked today it was still going, a few functions at a time. It set up those timers itself and creates new ones as needed. It's not any one particular thing that is novel, but it's just more independent because of all the little bits. | |
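For readers wondering what that self-scheduling looks like, here is a rough sketch of the check-in pattern described above; the interval constants and function names are assumptions for illustration, not OpenClaw's actual heartbeat code.

```python
import time
from datetime import datetime, timedelta

CHECK_IN = timedelta(minutes=30)  # "no more than 30 minutes of inactivity"
REVIEW = timedelta(hours=4)       # periodic re-prioritisation pass

def stalled() -> bool:
    """Hypothetical probe: a real agent would inspect its own worklog or git history."""
    return False

def nudge() -> None:
    print(f"[{datetime.now():%H:%M}] check-in: picking up the next function to decompile")

def reprioritise() -> None:
    print(f"[{datetime.now():%H:%M}] 4-hour review: reordering the remaining tasks")

def heartbeat(poll_seconds: int = 60) -> None:
    last_check = last_review = datetime.now()
    while True:
        now = datetime.now()
        if now - last_check >= CHECK_IN:
            if stalled():
                nudge()            # restart work if the agent gave up
            last_check = now
        if now - last_review >= REVIEW:
            reprioritise()
            last_review = now
        time.sleep(poll_seconds)

if __name__ == "__main__":
    heartbeat()
```

The interesting part is not the scheduler itself but that, per the comment above, the agent decided on these intervals and created new timers on its own.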
| ▲ | cactus2093 4 hours ago | parent | prev | next [-] | | It has a heartbeat operation and you can message it via messaging apps. Instead of going to your computer and launching Claude Code to have it do something, or setting up cron jobs to do things, you can message it from your phone whenever you have an idea and it can set some stuff up in the background or set up a scheduled report on its own, etc. So it's not that it has to be running and generating tokens 24/7; it's just idling, available any time you want to ping it. | |
| ▲ | jdgoesmarching 2 hours ago | parent | prev | next [-] | | Not being tied to Anthropic’s models and ecosystems, having more control over the agent, interacting with it from you messaging app of choice. | |
| ▲ | aydyn 4 hours ago | parent | prev [-] | | There are some neat experiments people post on social media. Mostly, the thing that captures the imagination the most is that it's sort of like watching a silicon child grow up. They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become. I know that sounds like anthropomorphism, and maybe it is, but it most definitely does not feel like interacting with a coding agent. Claude is just the substrate. | |
| ▲ | esafak 3 hours ago | parent [-] | | Imagine putting it in a robot with arms and legs, and letting it loose in your house, or your neighborhood. Oh, the possibilities! |
|
|
|
| ▲ | ambicapter an hour ago | parent | prev | next [-] |
| > The claw is the law. This isn't a Slay The Spire reference is it? |
|
| ▲ | nelsonfigueroa 2 hours ago | parent | prev | next [-] |
| > "What I want is to change the world". I don't know if you'll achieve that at OpenAI or if it'll even be a good change for the world, but I genuinely wish you the best. Regardless of the news around OpenAI I still think it's great that a personal project got you a position at a company like that. |
|
| ▲ | mikert89 5 hours ago | parent | prev | next [-] |
Is an agent running on a desktop, with access to Excel, Word, email and Slack, going to replace SaaS? Add in databases and browser use, and the answer could be yes. This could be the most disruptive software we have seen
| |
| ▲ | esafak 4 hours ago | parent | next [-] | | If it replaces SaaS it will replace you too; how else will you collaborate? | |
| ▲ | stcredzero 4 hours ago | parent | prev | next [-] | | What I want to know: Is the OpenClaw = Open Source aspect secure? | |
| ▲ | koakuma-chan 4 hours ago | parent | prev [-] | | If your SaaS is a CRUD with a shitty UI in React then yes | | |
| ▲ | mikert89 4 hours ago | parent | next [-] | | If the AI model gets better, more and more SaaS can be replaced with an agent with an Excel sheet | |
| ▲ | BloondAndDoom 3 hours ago | parent | prev [-] | | To be fair, if you are not going to interface with your SaaS via a GUI, it can be one big API for all I care. I’ll just chat with it and automate against it anyway. | |
| ▲ | oblio 3 hours ago | parent [-] | | For people like you: except for the obvious greed, what's the end game? Are you making anyone's life better? Who will even pay you once most jobs are automated? At best, it's a defensive move: make money, get hard capital and seek rent after most of society has collapsed? | | |
| ▲ | BloondAndDoom 32 minutes ago | parent | next [-] | | I don’t know how your rhetoric is any different from telling scribes to find a new job now that we’ve invented printing. |
| ▲ | koakuma-chan 3 hours ago | parent | prev [-] | | I mean, he's right, it's easier for users when you can throw AI at the thing, instead of manually clicking through the UI. |
|
|
|
|
|
| ▲ | willmeyers an hour ago | parent | prev | next [-] |
| Innocent people are going to get hurt. Not sure how yet, but, giving a company intimate details about your life never ends well. |
|
| ▲ | whiterock 3 hours ago | parent | prev | next [-] |
It's just crazy to me that this guy lives around the corner. That should inspire some hope in me, I guess, that even people from Vienna can be successful on such a level.
|
| ▲ | mbanerjeepalmer 5 hours ago | parent | prev | next [-] |
| Unclear what this truly means for the open version. We can assume first that at OpenAI he's going to build the hosted safe version that, as he puts it, his mum can use. Inevitably at some point he and colleagues at OpenAI will discover something that makes the agent much more effective. Does that insight make it into the open version? Or stay exclusive to OAI? (I imagine there are precedents for either route.) |
| |
| ▲ | CuriouslyC 4 hours ago | parent [-] | | The OpenAI version will be locked down in a bad way. It'll be ecosystem tied and a lot of the "security" will be from losing control of the harness. | | |
| ▲ | kibibu 2 hours ago | parent [-] | | Not sure. It's also plausible that OpenAI wants access to everybody's email, slack, whatsapp, telegram, github source code, whatever else this thing gets hooked up to. The cry has been for a while that LLMs need more data to scale. The new Open(AI)Claw could be cheap or free, as long as you tick the box that allows them to train on your entire inbox and all your documents. |
|
|
|
| ▲ | illichosky 4 hours ago | parent | prev | next [-] |
The guy already sold his previous company for a shitload of money, got bored, and did a side project that stirred the Internet over the past month. That is way more than most people here are going to accomplish in a lifetime. Yet he has some deal with OpenAI to work on whatever he finds exciting. I don't see why there are so many negative comments here, other than jealousy
| |
| ▲ | blueaquilae 4 hours ago | parent | next [-] | | True, but between the lines I read some interesting points here.
Great that he got the gold nugget, but I found it curious how he dunked on the JVM, given all the clones that emerged with much better performance and much less code/energy consumption. | |
| ▲ | erichocean 4 hours ago | parent [-] | | Do you have a link to any of the JVM clones? Perplexity and Google came up empty. |
| |
| ▲ | yieldcrv 4 hours ago | parent | prev | next [-] | | For further context, he has like 60 projects for general use from this “got bored” phase. It just happened that this one latched onto a trend well and went viral, and the cease and desist over its name accelerated the virality | |
| ▲ | tastyface 3 hours ago | parent | prev | next [-] | | OpenAI execs are funneling funds directly to the Trump regime and partying with the far right: https://news.ycombinator.com/item?id=46933071 Anyone working for OpenAI is complicit with these abuses. I hope that in due time having OpenAI on your resume will be a strong negative signal. | | |
| ▲ | illichosky 3 hours ago | parent [-] | | Just look at all the people at Trump's inauguration. The whole US economic elite is supporting it... |
| |
| ▲ | johnwheeler 4 hours ago | parent | prev [-] | | I just dislike Sam Altman, and I think he's just using this as a marketing ploy, which is more dishonesty from him. People keep saying OpenClaw is hype. I installed it, but I never tried to run it, and I don't know what the compelling reason is to. Supposedly you can talk to your agent from your iMessage? Who cares? Why not just talk to Claude Code? | | |
| ▲ | hadlock 4 hours ago | parent | next [-] | | The big draw of OpenClaw is the memory architecture. You effectively start from scratch every time you open a new Claude chat. OpenClaw, on the other hand, compacts regularly, but also generates daily digests, uses vector search, and then uses thoughtful memory-retrieval techniques to add relevant context to your queries. Recent things get weighted more heavily, but full-text search of all chats is still possible, and this is all managed automatically. Plus it uses markdown, so the barrier to entry for manually auditing/modifying memories etc. is very, very low. If you say "can you check if the solar panel for my power generator arrived yet?" it is probably going to know what I'm talking about and go check my email for delivery notifications, based on the conversations I've had with it about buying and ordering the solar panel. Claude is just going to ask clarifying questions, since it has no idea what I am referencing. | |
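As a rough illustration of that kind of recency-weighted retrieval, here is a sketch; the digest entries, half-life, and keyword-overlap scoring (standing in for real vector search) are all made up for the example and are not OpenClaw's implementation.

```python
import math
from datetime import datetime, timedelta

# Hypothetical daily-digest entries: (date, markdown snippet)
MEMORIES = [
    (datetime.now() - timedelta(days=12), "Ordered a 200W solar panel for the camping power generator."),
    (datetime.now() - timedelta(days=2),  "Carrier emailed a delivery delay for the solar panel order."),
    (datetime.now() - timedelta(days=40), "Discussed meal planning and grocery budgets."),
]

def score(query: str, date: datetime, text: str, half_life_days: float = 14.0) -> float:
    overlap = len(set(query.lower().split()) & set(text.lower().split()))  # crude keyword relevance
    age_days = (datetime.now() - date).days
    recency = math.exp(-math.log(2) * age_days / half_life_days)           # newer digests weigh more
    return overlap * (0.5 + recency)

def retrieve(query: str, k: int = 2) -> list[str]:
    ranked = sorted(MEMORIES, key=lambda m: score(query, m[0], m[1]), reverse=True)
    return [text for _, text in ranked[:k]]

if __name__ == "__main__":
    # The retrieved digests would be prepended to the model's context before it answers.
    print(retrieve("did my solar panel arrive yet"))
```

Because the digests are plain markdown on disk, this kind of ranking is easy to audit or override by hand, which is the point the comment above is making.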
| ▲ | johnwheeler an hour ago | parent [-] | | So it sounds like you get extra memory at the expense of having to compact more, because of course all those things are going to take up context. But since you're not interacting with it in some kind of turn-based fashion, it's worth it: the lack of context doesn't matter. Is that correct? |
| |
| ▲ | illichosky 4 hours ago | parent | prev [-] | | I'm also not a Sam fan, for the same reason. But if he offered me a big check to work on whatever project I wanted, I would not care about it being a "marketing ploy". Regarding OpenClaw's hype, it is not about how you access it, but rather what the agents can access from you, and no one did that before. Probably because no one had the balls to put such an insecure piece of software out in the wild | |
|
|
|
| ▲ | MattDaEskimo 4 hours ago | parent | prev | next [-] |
| Truly incredible. OpenAI is putting money where their mouth is: a one-man team can create a vibe-coded project, and score big. Open-source, and hyped incredibly well. Interesting times ahead as everyone else chases this new get-rich-quick scheme. Will be plentiful for the shovel makers. |
|
| ▲ | Multiplayer 4 hours ago | parent | prev | next [-] |
| Potentially amazing opportunity for OpenAI to more meaningfully compete with Claude Code at the developer and hobbyist level. Based on vibes it sure seemed like Claude Code / Opus 4.6 was running away with developer mindshare. Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had. Openclaw has gotten more attention over the past 2 weeks than anything else I can think of. Depending on how this goes, this could be to OpenAI what Instagram was to Facebook. FB bought Instagram for $1 billion and now estimated to be worth 100's of billies. Total speculation based on just about zero information. :) |
| |
| ▲ | Aurornis 4 hours ago | parent [-] | | > Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had. Comments like this feel confusing because I didn't have any association between Codex and OpenClaw before reading your comment. Codex was also seeing a lot of usage before OpenClaw. The whole OpenClaw hype bubble feels like there's a world of social media that I wasn't tapped into last month that OpenClaw capitalized on with unparalleled precision. There are many other agent frameworks out there, but OpenClaw hit all the right notes to trigger the hype machine in a way that others did not. Now OpenClaw and its author are being credited with so many other things that it's hard for me to understand how this one person inserted himself into the center of this media zeitgeist | |
| ▲ | Multiplayer 3 hours ago | parent | next [-] | | He's been on a number of podcasts - Lex recently - and is really emphatic about Codex as the breakthrough solution he relies on. I just looked, and the handful of podcasts have about 2,000,000 views this past week and a half or so. | |
| ▲ | yks 3 hours ago | parent | prev | next [-] | | It's how Steve Yegge became a "father of agentic orchestration" or something - there is some Canonical Universe Building exercise somewhere on twitter that just looks, for the lack of a better word, not rigorous. But good for all these people, I guess, for riding the hype to glory. | |
| ▲ | botusaurus 3 hours ago | parent | prev [-] | | You didn't see it because you don't follow Peter on Twitter. He's been talking for months now about how Codex is a better coder. | |
| ▲ | Aurornis 3 hours ago | parent [-] | | I’m not disputing that people who follow Peter are getting information from Peter. It’s the “single handedly” part of the claim that was strange. I’m questioning how some people in that bubble came to believe he was at the center of that universe. He wasn’t the only person talking about the differences between Codex or Claude. Most of the LLM people I follow had their own thoughts and preferences that they advertised too. | | |
| ▲ | Multiplayer 3 hours ago | parent [-] | | Sure, single-handedly is doing a lot of work here. :) Anecdotally a fair number of people I know have referenced his thoughts so I just ran with that. Most people seem to kind of equivocate about whatever model they like, Peter on the other hand is very strident about it. |
|
|
|
|
|
| ▲ | thoughtjunkie 4 hours ago | parent | prev | next [-] |
| It's kind of a shame actually, because the whole promise of OpenClaw is that you own all the data yourself, you have complete control, you can write the memories or the personality of the bot. "Open"AI will never run ChatGPT this way. They want all of your data, your documents, your calendar, they want to keep it for themselves and lock you into their platform. They will want a sanitised corporate friendly version of an AI agent that reflects well on their brand. |
|
| ▲ | rkunnamp 3 hours ago | parent | prev | next [-] |
I really hope Mario and Armin also get poached. The real gem inside OpenClaw is pi, the agent, created by Mario Zechner. Pi is by far the best agent framework in the world: the most extensible, with the best primitives. Armin Ronacher, creator of Flask, can go deep and make something like OpenClaw enterprise-ready. The value of Peter is in connecting the dots, thinking from the user's perspective, and bringing a business perspective. The trio are friends and have together vibecoded VibeTunnel. Sam Altman, if you are reading this, get Mario and Armin today.
| |
|
| ▲ | shevy-java 3 hours ago | parent | prev | next [-] |
> I'm joining OpenAI to work on bringing agents to everyone. Sounds like a threat - "I'm joining OpenSkynetAI to bring AI agents onto your hard disk too!" |
|
| ▲ | rob 4 hours ago | parent | prev | next [-] |
All they have to do now is partner with one of the major messaging providers like Telegram and they can offer this as a hosted bot solution and probably dominate the market. Yes, people are going out there buying Mac minis and enjoying setting it up themselves, but 90% of the general public don't want to set that up or maintain it, and still want the benefits of all of it. |
|
| ▲ | deadeye 4 hours ago | parent | prev | next [-] |
OpenClaw did what no major model producer would do: release insanely insecure software that can do whatever it wants on your machine. If OpenAI had done it themselves, there would have been immediate backlash. |
| |
| ▲ | madihaa 3 hours ago | parent | next [-] | | Major producers like OpenAI optimize for safety and brand reputation, avoiding backlash. Open source projects optimize for raw capability and frictionless experimentation. It is risky, yes, but it allows for rapid innovation that strictly aligned models can't offer. | |
| ▲ | lvl155 3 hours ago | parent | prev [-] | | Is it? You basically got 95% of the way there with Claude Code inside of a container. People were using CC outside of its development scope for a while. | | |
| ▲ | Aurornis 3 hours ago | parent [-] | | > You basically got 95% of the way there with Claude Code inside of a container. OpenClaw and Claude Code aren't solving the same problems. OpenClaw was about having a sandbox, connecting it to a messenger channel, and letting it run wild with tools you gave it. | | |
| ▲ | koolala 3 hours ago | parent | next [-] | | A messenger and ssh'ing into Claude Code from your phone aren't that much different. | | |
| ▲ | theturtletalks 3 hours ago | parent | next [-] | | The real magic is the heartbeat, which is essentially cron on steroids. The real difference between running Claude Code in the terminal and OpenClaw is that the agent is actually intuitive and self-driven. People would wake up to their agent having built something cool the night before, or having automated their workflow without even being asked. | |
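A minimal sketch of what such a heartbeat loop might look like, in Python. This is an illustration under assumptions, not OpenClaw's actual code; `run_agent_turn` is a hypothetical entry point into the agent runtime:

```python
import time
from datetime import datetime, timezone

HEARTBEAT_INTERVAL_SECONDS = 15 * 60  # wake the agent every 15 minutes

def run_agent_turn(prompt: str) -> str:
    """Hypothetical call into the agent runtime; stubbed for this sketch."""
    raise NotImplementedError

def heartbeat_loop() -> None:
    # On each tick, hand the agent a standing prompt instead of waiting for a
    # user message; the agent decides whether anything actually needs doing.
    while True:
        now = datetime.now(timezone.utc).isoformat()
        run_agent_turn(
            f"Heartbeat at {now}. Review pending tasks, scheduled jobs, and "
            "recent events; act only if something needs attention."
        )
        time.sleep(HEARTBEAT_INTERVAL_SECONDS)
```

The difference from plain cron is that the schedule only triggers the wake-up; what happens on each tick is left to the agent rather than to a fixed command.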
| ▲ | Aurornis 3 hours ago | parent | prev [-] | | I’m not an OpenClaw user but it’s obvious that OpenClaw was very different than that. OpenClaw was about having the agent operate autonomously, including initiating its own actions and deciding what to do. Claude Code was about waiting for instructions and presenting results. “Just SSH into Claude Code” is like the famous HN comment that didn’t understand why anyone was interested in DropBox because you could do backups with shell scripts. |
| |
| ▲ | lvl155 3 hours ago | parent | prev [-] | | That’s what CC does…I don’t need a messenger wrapper to do those things. |
|
|
|
|
| ▲ | jjmarr 4 hours ago | parent | prev | next [-] |
| It's pretty depressing yet motivating seeing SWE bifurcate. This is an app that would've normally had a dozen or so people behind it, all acquihired by OpenAI to find the people who really drove the project. With AI, it's one person who builds and takes everything. |
| |
| ▲ | Aurornis 4 hours ago | parent | next [-] | | > This is an app that would've normally had a dozen or so people behind it, all acquihired by OpenAI to find the people who really drove the project. Acquihires haven't worked that way for a while. The new acquihire game is to buy out a few key execs and then have them recruit away the key developers, leaving the former company as a shell for someone else to take over and try to run. Also OpenClaw was not a one-person operation. It had several maintainers working together. | |
| ▲ | gordonhart 4 hours ago | parent | prev [-] | | Every day the software world feels more and more like a casino. |
|
|
| ▲ | ai-christianson 3 hours ago | parent | prev | next [-] |
| For anyone looking at alternatives in this space - I built Gobii (https://gobii.ai) 8 months before OpenClaw existed. MIT licensed, cloud native, gVisor sandboxed. The sandboxing part matters more than people think. Giving an LLM a browser with full network access and no isolation is a real security problem that most projects in this space hand-wave away. Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, open-weight models via vLLM). In production with paying customers. Happy to answer architecture questions. |
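As a rough illustration of what "multi-provider LLM support" can look like, here is a minimal sketch that routes requests across OpenAI-compatible endpoints (OpenAI, DeepSeek, and a local vLLM server all expose this API shape). The base URLs and model names are assumptions for the example, not Gobii's actual configuration:

```python
from openai import OpenAI

# Assumed endpoints and model names, for illustration only.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",   "model": "gpt-4o-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
    "vllm":     {"base_url": "http://localhost:8000/v1",    "model": "my-local-model"},
}

def complete(provider: str, prompt: str, api_key: str) -> str:
    # One client per provider; the OpenAI SDK works against any
    # OpenAI-compatible server when given a different base_url.
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Anthropic is the usual exception: it has its own SDK and message format, so real multi-provider code typically hides that behind the same `complete()` interface.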
| |
| ▲ | ed_mercer 3 hours ago | parent [-] | | Looks good! I'm curious, are customers fine with their data going to third-party LLM providers? | | |
| ▲ | ai-christianson 2 hours ago | parent | next [-] | | Not sure what gives you that idea. One of our superpowers is that we're MIT licensed and deployable to private clouds, or even fully airgapped with 196 GB+ of VRAM to run MiniMax on vLLM + Gobii. | |
| ▲ | herval 3 hours ago | parent | prev [-] | | I think this ship has sailed pretty hard, by now. Pretty much any app you can possibly use, from iTerm to Slack, is sending data to third-party LLMs (sometimes explicitly, most times as small features here and there) | | |
|
|
|
| ▲ | LeoPanthera 5 hours ago | parent | prev | next [-] |
| Hugged to death? https://web.archive.org/web/20260215220749/https://steipete.... |
|
| ▲ | mocmoc 5 hours ago | parent | prev | next [-] |
Flappy Bird effect |
| |
| ▲ | appplication 4 hours ago | parent [-] | | We’re in a hype state where someone can “generate” millions of dollars in value in a month by making a meme prototype that scratches the itch just right, despite having no real competitive moat, application, value proposition or even semblance of a path to one. The guy is creative, but this is really just following the well known pattern of acquiring/hiring bright minds if only to prevent your competition from doing the same. |
|
|
| ▲ | zmmmmm 3 hours ago | parent | prev | next [-] |
Just like the original OpenAI story, this seems like a case of reputation hacking through asymmetry in risk tolerance. There is not much novel about OpenClaw. Anybody could have thought of this or done it. The reason people have not released an agent that would run by itself, edit its own code and be exposed to the internet is not that it's hard or novel - it's because it is an utterly reckless thing to do. No responsible corporate entity could afford to do it. So we needed someone with little enough to lose, enough skill, and enough recklessness to do it and release it openly, letting everyone else absorb the risk. I think he's smart to jump on the job opportunity here because it may well turn out that this goes south in a big way very fast. |
| |
| ▲ | krackers 3 hours ago | parent | next [-] | | I think it's at the final stage of software pump and dump [1]. OpenAI is probably hiring more for the reputation/marketing, rather than for any technical skills behind OpenClaw. [1] https://news.ycombinator.com/item?id=46776848 | |
| ▲ | slashdave an hour ago | parent | prev [-] | | > Anybody could have thought of this or done it To be fair, in retrospect this applies to just about any big tech company |
|
|
| ▲ | voxelc4L 4 hours ago | parent | prev | next [-] |
| Not sure if anyone has heard his interview on the Hard Fork podcast... was not unlike listening to a PR automaton. Now going to work for OpenAI. Yup. |
|
| ▲ | dev1ycan 3 hours ago | parent | prev | next [-] |
This is how you can tell OpenAI is panicking: rather than build something fairly simple themselves, they insta-bought it for the headline news/"hype"... |
|
| ▲ | tdhz77 2 hours ago | parent | prev | next [-] |
| Going to short OpenAI after hearing this. |
|
| ▲ | rcarmo 4 hours ago | parent | prev | next [-] |
| Not surprising if you've been paying attention on Twitter, but interesting to see nonetheless. |
|
| ▲ | maplethorpe 4 hours ago | parent | prev | next [-] |
| > That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research. You work for OpenAI now. You don't have to worry about safety anymore. |
|
| ▲ | _nvs 4 hours ago | parent | prev | next [-] |
| Congrats — just the beginning for agents! |
|
| ▲ | neilellis 3 hours ago | parent | prev | next [-] |
When I hear people talking about how insecure OpenClaw is, I remember how insecure the internet was in the early days. Sometimes it's about doing the right thing badly and fixing the bad things after. Big Tech can't release software this dangerous and then figure out how to make it secure. For them it would be an absolute disaster and could ruin them. What OpenClaw did was show us the future, give us a taste of what it would be like, and have the balls to do it badly. Technology is often pushed forward by ostensibly bad ideas (like telnet) that carve a path through the jungle and let other people create roads after. I don't get the hate towards OpenClaw. If it were a consumer product I would, but as something for hackers to play around with and see what's possible, it's an amazing (and ridiculously simple) idea. Much like http was. If you connected to your bank account via telnet in the 1980s, or plain http in the 90s, or stored your secrets in 'crypt', well, you deserved what you got ;-) But that's how many great things get started: badly. We see the flaws, fix them, and we get the safe version. And that, I guess, is what he'll get to do now. * OpenClaw is a straw man for AGI * |
|
| ▲ | paxys 4 hours ago | parent | prev | next [-] |
| Disappointing TBH. I completely understand that the OpenAI offer was likely too good to pass up, and I would have done the same in his position, but I wager he is about to find out exactly why a company like OpenAI isn't able to execute and deliver like he single-handedly did with OpenClaw. The position he is about to enter requires skills in politics and bureaucracy, not engineering and design. |
| |
| ▲ | Aurornis 4 hours ago | parent [-] | | > but I wager he is about to find out exactly why a company like OpenAI isn't able to execute and deliver like he single-handedly did with OpenClaw. No company could ship anything like OpenClaw as a product because it was a million footguns packaged with a self-installer and a couple of warnings that it can't be trusted for anything. There's a reason they're already distancing themselves from it and saying it's going to an external foundation. |
|
|
| ▲ | Tangokat 5 hours ago | parent | prev | next [-] |
Incredibly depressing comments in this thread. He keeps OpenClaw open. He gets to work on what he finds most exciting and to help it reach as many people as possible. Inspiring; the stuff dreams are made of, really. Top comments are about money and misguided racism. Personally I'm excited to see what he can do with more resources; OpenClaw clearly has a lot of potential, but also needs a lot of improvements before his mum can use it. |
| |
| ▲ | behnamoh 4 hours ago | parent | next [-] | | He said on the Lex Fridman podcast that he had no intention of joining any company; that was a couple of days ago. | | |
| ▲ | softwaredoug 4 hours ago | parent | prev [-] | | Frankly, I hope he maximized the amount of money he made. It's a once in a lifetime opportunity. And nobody knows where AI is headed or if OpenAI even will be in existence in a few years given their valuation and the amount of $ they need to burn to keep up. |
|
|
| ▲ | sarreph 4 hours ago | parent | prev | next [-] |
| Those attempting to discredit the value of OpenClaw by virtue of it being easily replicable or simple are missing the point. This was, like most successful entrepreneurial endeavours, a distribution play. The creator built a powerful social media following and capitalized on that. Fair play. |
|
| ▲ | SilentM68 3 hours ago | parent | prev | next [-] |
| Best way to democratize AI is to keep it as free or as inexpensive as possible. |
|
| ▲ | mrcwinn 3 hours ago | parent | prev | next [-] |
| I wouldn’t be able to sleep at night knowing I have to work for Sam Altman. Dude’s gross. |
|
| ▲ | poontangbot 3 hours ago | parent | prev | next [-] |
| Time to uninstall |
|
| ▲ | jackblemming 4 hours ago | parent | prev | next [-] |
| I appreciate the author’s work and he seems like a good guy. In spite of that, it’s incredibly obvious OpenClaw was pushed by bots across pretty much every social media platform and that’s weird and unsettling. |
|
| ▲ | shadowgovt 4 hours ago | parent | prev | next [-] |
| Well, someone has to backfill Zoë Hitzig exiting. |
|
| ▲ | qaq 5 hours ago | parent | prev | next [-] |
| Good thing Sam has no experience in transforming a foundation into for profit org ... |
|
| ▲ | _nvs 4 hours ago | parent | prev | next [-] |
| congrats @steipete! |
|
| ▲ | TealMyEal 5 hours ago | parent | prev | next [-] |
Can't wait for this post to be memory-holed in 6 months when the community is a shell of its former self (no crustacean pun intended) |
| |
|
| ▲ | jnaina 3 hours ago | parent | prev | next [-] |
Damn. I just installed OpenClaw on my M2 Mac and hopped on a plane for our SKO in LAX. United delayed the plane departure by 2 hours (of course) and diverted the flight to Honolulu. And Claw (that's the name of my new AI agent) kept me updated on my rebooking options and new terminal/gate assignments in SFO. All through the free WhatsApp access on United. AND, it refactored all my transferred Python code, built a graph of my emails, installed MariaDB and restored a backup from another PC. And, I almost forgot, fixed my 1337x web scraping (don't ask) cron job, by Cloudflare-proofing it. All the while sitting on a shitty airline, with shitty food and shittier seats, hurtling across the Pacific Ocean. The future is both amazing and shitty. Hope OpenClaw continues to evolve. It is indeed an amazing piece of work. And I hope sama doesn't get his grubby greedy hands on OpenClaw. |
| |
| ▲ | antod 2 hours ago | parent | next [-] | | > The future is both amazing and shitty I feel like we're living in one of those breathless futurist interviews from a 1994 issue of Wired mag. | | | |
| ▲ | s3p 2 hours ago | parent | prev | next [-] | | What about token usage? I've noticed that simple conversations balloon to 100k+ tokens within 1-3 messages. Did you have this issue? | | |
| ▲ | jnaina 2 hours ago | parent [-] | | I have a Claude Max subscription for the main agent tasks. I also use my OpenAI API and Gemini API access for sub-agent work. Once my Olares One is here, I'll also be using local LLMs on open models. https://one.olares.com/ |
| |
| ▲ | paulryanrogers 2 hours ago | parent | prev | next [-] | | Did you ask OpenClaw to do all those things? If not did you want it to do all of them? | | |
| ▲ | jnaina 2 hours ago | parent | next [-] | | I asked it to check why the cron job kept failing, and it checked the cron payload and recommended reasons for the failure. I gave it approval to go ahead and fix it. It tried different options (like trying different domains) and finally figured out the anti-Cloudflare option. | |
| ▲ | jnaina 2 hours ago | parent | prev [-] | | The other tasks (like the MariaDB install and restore, and the Python code refactoring) were a result of the initial requests made to Claw, like graphing my Gmail email archives. | |
| |
| ▲ | chimeracoder 2 hours ago | parent | prev [-] | | > hopped on a plane for SKO in LAX. United delayed the plane departure by 2 hours (of course) and diverted the flight to Honolulu. I'm assuming there's a typo here, because I can't imagine a flight from LAX to SKO at all, let alone one that goes anywhere close to Honolulu. But I can't figure out what this was supposed to be. | | |
| ▲ | jnaina 2 hours ago | parent [-] | | SKO ---> Sales Kick Off. Apologies for the acronym overdose |
|
|
|
| ▲ | FpUser 2 hours ago | parent | prev | next [-] |
| >"What I want is to change the world" Thank you, we already fucked. I am a hypocrite of course. |
|
| ▲ | poszlem 5 hours ago | parent | prev | next [-] |
This reads simply as an "Our Incredible Journey" type of post, but written for a person rather than a company. |
|
| ▲ | dist-epoch 5 hours ago | parent | prev | next [-] |
| Haters gonna hate, but bro vibe-coded himself into being a billionaire and having Sam Altman and Zuck personally fight over him. |
| |
| ▲ | orsorna 5 hours ago | parent | next [-] | | Proof you can get hired off of a portfolio without ever having viewed a single line of code from it. Definitely feel a mix of envy and admiration. | |
| ▲ | embedding-shape 5 hours ago | parent | next [-] | | It was never really about the code itself anyways. | |
| ▲ | mrshu 4 hours ago | parent | prev [-] | | To be fair, it's not like he did not read a single line of code that ended up being generated. |
| |
| ▲ | verdverm 5 hours ago | parent | prev [-] | | I'd tell those two off before taking a penny. Money or morals, choose one |
|
|
| ▲ | firefoxd 4 hours ago | parent | prev | next [-] |
Somehow we've normalized running random .exe files on our devices. Except now it's markdown.exe, and you sound like a zealot when advocating against it. |
| |
|
| ▲ | mortsnort 2 hours ago | parent | prev | next [-] |
| Move fast and break things... |
|
| ▲ | throw444420394 4 hours ago | parent | prev | next [-] |
What to understand from this whole story: this is a vibe-coded agent that is replicable in little time. There is no value in the technology itself. There is value in the idea of personal agents, but that idea is not new. From OpenAI's perspective, the value is in the hype. I believe they are wrong (see the next points). We will see a proliferation of personal agents. For a short time, the money will be in API usage, since those agents burn a lot of tokens, often for results that can be obtained more sharply without a generic assistant. At the current stage, not well orchestrated and directed, not well prompted or steered, they are achieving results by brute force. Whoever creates the LLM that is best at following instructions in a sensible way, and at coordinating long-running tasks, will reap the greatest benefit, regardless of whether OpenClaw is under OpenAI's umbrella or not. Claude Opus is the agent that works best for this use case right now. It is likely that this will help Anthropic more than OpenAI. It is wise for Anthropic to avoid burning money on an easily replicable piece of software. Those hypes are forgotten as fast as they are created. Remember Cursor? And that was much more of a true product than OpenClaw. Soon, personal agents will be one of the fundamental products of AI vendors: integrated into your phone, nothing to install, part of the subscription. All this will be irrelevant. In the meantime, good for the guy who extracted money from this gold mine. He looks like a nice person. If you are reading this: congrats! (throwaway account for obvious reasons) |
| |
| ▲ | bmay 4 hours ago | parent | next [-] | | > Those hypes are forgotten as fast as they are created. Remember Cursor? Of course--I use it every day. Are you implying Cursor is dead? They raised $2B in funding 3 months ago and are at $1B in ARR... | |
| ▲ | koakuma-chan 4 hours ago | parent | next [-] | | What does a VS Code fork spend 2 billion dollars on? | |
| ▲ | csallen 3 hours ago | parent [-] | | Their own coding agent and models, marketing, tons of UI customizations, etc. |
| |
| ▲ | throw444420394 4 hours ago | parent | prev | next [-] | | It was a success for the company, but it is unlikely to survive long-term. Now people are all focusing on Claude Code and Codex. Cursor is surviving because there are many folks who can't survive a terminal session, and because we are still in a transition stage where people look at the code, but they will look at the code less and less, and more at the results, the prompts, and the quality of the agent orchestration and tools. I don't believe Cursor's future will be bright. Anyway: my example was about how fast things are forgotten in this space. | |
| ▲ | trengrj 4 hours ago | parent | next [-] | | This is very true, but I think there is an incredibly long tail of people who "can't survive a terminal session", and I actually question whether a terminal UI will win out long-term. | |
| ▲ | throw444420394 4 hours ago | parent [-] | | My guess is that, very soon, Claude Code and Codex (which has already launched an initial desktop app) will have GUIs that are very different from Cursor: not centered around files and editing, but providing a lot more hints about what is happening with the work the agent is performing. |
| |
| ▲ | yieldcrv 4 hours ago | parent | prev [-] | | Are you all back on VS Code or what? I still have Cursor open and use it the few times I want to modify code manually or visualize the file structure. But base VS Code is fine for that too |
| |
| ▲ | rvz 4 hours ago | parent | prev [-] | | > Remember Cursor? Who? > Are you implying Cursor is dead? They raised $2B in funding 3 months ago and are at $1B in ARR That is the problem. It doesn't matter how much they raised. That $2B and that $1B are paying the suppliers, Anthropic and OpenAI, who are both directly competing against them. Cursor is operating on thin margins and continues to lose money. It's made worse now that people are leaving Cursor for Claude Code. In short, Cursor is in trouble and they are funding their own funeral. |
| |
| ▲ | flyinglizard 4 hours ago | parent | prev [-] | | I think Cursor is doing pretty well in the enterprise space. It seems much more useful than just throwing agents upon subagents at an unsuspecting task, like Claude Code does. | | |
| ▲ | throw444420394 4 hours ago | parent [-] | | Cursor is fine; the example is about how quickly things fall out of hype. However, I believe Cursor will not survive much longer. It is designed around a model that will not survive: that the AI "helps you write code", and you review it, and need an IDE for that. There are many developers who want an IDE and can't stand the terminal experience of Claude Code and Codex, but I don't believe most developers in the future will closely inspect the code written by the AIs, and things like Cursor will look like products designed for a transitional step that has already passed. | | |
| ▲ | flyinglizard 4 hours ago | parent [-] | | I'd venture a guess that most of the software in the world is not written from scratch but painstakingly maintained, and as such, Cursor is a good fit while CC is not.
Besides, if agentic coding does take off, Cursor has the customer relationship and can just offer it as an additional mode. Whoever stands in front of the customer ultimately wins. The rest are just cost centers. |
|
|
|
|
| ▲ | empressplay 2 hours ago | parent | prev | next [-] |
| This tells you all you need to know about OpenAI, honestly. |
|
| ▲ | popalchemist 4 hours ago | parent | prev | next [-] |
| OpenClaw is literally the most poorly conceived and insecure AI software anyone has ever made. Its users have had OpenClaw spend thousands of dollars, and do various unwanted and irreversible things. This fucking guy will fit right in at OpenAI. |
| |
| ▲ | s3p 2 hours ago | parent [-] | | I would be inclined to believe you if you mentioned a single open-source agent that does more than OC. Just one. | | |
| ▲ | popalchemist an hour ago | parent [-] | | Has it occurred to you that the fact that OpenClaw can do so much is exactly why it is problematic from a security point of view? |
|
|
|
| ▲ | lvl155 5 hours ago | parent | prev | next [-] |
| Never understood the hype. Good for the guy but what was the product really? And he goes on and on about changing the world. Gimme a break. You cashed out. End of story. |
| |
| ▲ | worldsavior 5 hours ago | parent [-] | | Just connecting social platforms to agents. That's all. Anyone can code it, and the project was obviously vibe-coded. For some reason it went viral. Good for him, but no particular genius involved. |
|
|
| ▲ | stcredzero 4 hours ago | parent | prev | next [-] |
> OpenClaw clearly has a lot of potential, but also needs a lot of improvements before his mum can use it. We're working on security and about three key architectural improvements. https://seksbot.com/ |
| |
|
| ▲ | micromacrofoot 4 hours ago | parent | prev | next [-] |
| wow hype really is everything, good for him |
|
| ▲ | crorella 4 hours ago | parent | prev | next [-] |
| Welcome :D |
|
| ▲ | marxisttemp 4 hours ago | parent | prev | next [-] |
| Who cares? |
|
| ▲ | mirawelner 4 hours ago | parent | prev | next [-] |
“My next mission is to build an agent that even my mum can use” There is literally no need to shit on your mom like that. Sorry your mom sucks at tech, but can we please stop using this as a euphemism? |
|
| ▲ | krashidov 5 hours ago | parent | prev | next [-] |
What a blunder by Anthropic. We'll see what OpenClaw turns into and whether it sticks around, but still a huge and rare blunder by Anthropic. |
| |
| ▲ | unpwn 4 hours ago | parent | next [-] | | I don't think so; it's trivial to spin up an OpenClaw clone. The only value here is the brand | |
| ▲ | SamDc73 4 hours ago | parent | prev | next [-] | | I highly doubt he would even consider Anthropic, since at some point they enforced restrictions blocking OpenClaw from using their APIs | | |
| ▲ | rockwotj 4 hours ago | parent | prev [-] | | I am sure they made a bid. The blog makes it sound like he talked to multiple labs. | |
| ▲ | serf 4 hours ago | parent [-] | | they're (Anthropic) also the ones who have been routinely rug-pulling access from projects that try to jump onto the cc api, pushing those projects to oAI. | | |
| ▲ | nl 4 hours ago | parent [-] | | Do you have any references for that? AFAIK Anthropic won't let projects use the Claude Code subscription feature, but actually push those projects to the Claude Code API instead. | | |
| ▲ | benatkin 3 hours ago | parent [-] | | I'd like a reference for it being rug pulling. What happened with OpenCode certainly wasn't rug pulling, unless Anthropic asked them to support using a Claude subscription with it. |
|
|
|
|
|
| ▲ | mentalgear 4 hours ago | parent | prev | next [-] |
| A hype vibe-bot maker joins a hype-vibe company that runs on fumes. Anything to keep the scam altman bubble going. |
|
| ▲ | tempaccount5050 3 hours ago | parent | prev | next [-] |
Bunch of jealous SV nerds in this thread. Pretty funny to see. Props to whoever this guy is; he'll never have to work again if he doesn't want to. |
|
| ▲ | mekod 4 hours ago | parent | prev | next [-] |
| OpenClaw was one of the more interesting “edges” of the open AI tooling ecosystem — not because of scale, but because of taste and clarity of direction. What’s fascinating is the pattern we’re seeing lately: people who explored the frontier from the outside now moving inside the labs. That kind of permeability between open experimentation and foundational model companies seems healthy. Curious how this changes the feedback loop. Does bringing that mindset in accelerate alignment between tooling and model capabilities — or does it inevitably centralize more innovation inside the labs? Either way, congrats. The ecosystem benefits when strong builders move closer to the core. |
| |
| ▲ | cactus2093 3 hours ago | parent [-] | | I agree, it's an interesting distortion to the traditional technology feedback loop. I would expect someone who "strikes gold" like this in a solo endeavor to raise money, start a company, hire a team. Then they have to solve the always challenging problem of how to monetize an open-source tool. Look at a company like Docker: they've been successful, but they didn't capture more than a small fraction of the commercial revenue that the entire industry has paid to host the product they developed and maintain. Their peak valuation was over a billion dollars, but who knows by the time all is said and done what they'll be worth when they sell or IPO. So if you invent something that is transformative to the industry, you might work really hard for a decade and, if you're lucky, the company is worth $500M; if you can hang onto 20% of it, maybe your stake is worth $100M. Or, you skip the decade in the trenches and get acqui-hired by a frontier lab that allegedly gives out $100M signing bonuses to top talent. No idea if he got an offer comparable to a top researcher's, but it wouldn't be unreasonable. Even a $10M package to skip a decade of risky & grueling work if all you really want to do is see the product succeed is a great trade. |
|
|
| ▲ | groundtruthdev an hour ago | parent | prev [-] |
| This feels less like an acquisition and more like signaling. OpenClaw isn’t infrastructure, it’s an experiment, and its value is narrative: “look what one person can build with our models.” OpenAI gets PR, optional talent, and no obligation to ship something deterministic. The deeper issue is that agent frameworks run straight into formal limits (Gödel/Turing-style): once planning and execution are non-deterministic, you lose reproducibility, auditability, and guarantees. You can wrap that with guardrails, but you can’t eliminate it. That’s why these tools demo well but don’t become foundations. Serious systems still keep LLMs at the edges and deterministic machinery in the core. Meta: this comment itself was drafted with ChatGPT’s help — which actually reinforces the point. The model didn’t decide the thesis or act autonomously; a human constrained it, evaluated it, and took responsibility. LLMs add real value as assistive tools inside a deterministic envelope. Remove the human, and you get the exact failure modes people keep rediscovering in agent frameworks. |
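A minimal sketch of the "LLM at the edges, deterministic core" pattern the comment describes (my illustration, with made-up action names, not any particular production system):

```python
import json

ALLOWED_ACTIONS = {"refund", "escalate", "noop"}  # fixed, auditable action set

def llm_propose_action(ticket_text: str) -> str:
    """Edge: ask a model for a structured JSON proposal (stubbed here)."""
    return json.dumps({"action": "noop", "reason": "example stub"})

def deterministic_core(proposal_json: str) -> str:
    # Core: strict validation and fixed business rules. Nothing here depends
    # on the model, so the same proposal always produces the same decision,
    # which keeps the system reproducible and auditable.
    try:
        proposal = json.loads(proposal_json)
    except json.JSONDecodeError:
        return "rejected: proposal was not valid JSON"
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        return f"rejected: unknown action {action!r}"
    return f"executed: {action}"

print(deterministic_core(llm_propose_action("Customer reports a double charge")))
```

The model can suggest anything, but only whitelisted, validated actions ever execute; removing that envelope is exactly the failure mode agent frameworks keep rediscovering.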
| |
| ▲ | raspasov 22 minutes ago | parent [-] | | Exactly. Unfortunately, it seems like the ship has sailed towards exploitation of the current local maximum (I got GPUs and Terawatts, let’s go!) instead of looking for something better. |
|