orsorna 6 hours ago

Was the project really ever valued that high? Seems like something that can be easily replicated and even properly thought out (re: pi). This guy just ran the social media hype train the right way.

linkregister 6 hours ago | parent | next [-]

Reminds me of Facebook: there was nothing particularly interesting about a PHP app that stored photos and text in a flat user environment.

Yet somehow the network effects worked out well and the website was the preeminent social network for almost a decade.

Gigachad 6 hours ago | parent | next [-]

Social media is the king of network effects. Almost nothing else compares. See how quickly people drop AI products for the next one that does the same thing but slightly better. To switch from ChatGPT to Gemini I don't have to convince all of my friends and family to do the same.

Sateeshm 2 hours ago | parent [-]

> Social media is the king of network effects. Almost nothing else compares.

E-commerce is a close second

rockwotj 6 hours ago | parent | prev | next [-]

Technology does not determine the success of a company. I’ve seen amazing tech fail, and things strapped together with duct tape and bubblegum be a wild success.

jatari 6 hours ago | parent | prev | next [-]

The instant someone makes a better version of OpenClaw, literally everyone is going to jump ship.

There is no lock in at all.

CuriouslyC 6 hours ago | parent | prev | next [-]

Except in this case there's no network effect for autonomous agents. In fact, Peter is going to be working mostly on a locked-down, ecosystem-tied OpenAI agent, which means it's going to be worse than OpenClaw, but with a nicer out-of-the-box experience.

fragmede 6 hours ago | parent [-]

If you're on OpenAI, and I'm on Anthropic, can we interoperate? What level are we even trying to interoperate on? The network effect is that, hey, my stuff is working here, your stuff is working over there. So do we move to your set of tools, or my set of tools, or do we mishmash between them, as our relationship and power dynamics choose for us?

CuriouslyC 5 hours ago | parent [-]

I'd describe that as platform lock-in rather than the network effect.

bdangubic 6 hours ago | parent | prev [-]

Facebook is still the preeminent social network today

james_marks 4 hours ago | parent | prev | next [-]

“Just” is doing some heavy lifting here.

koakuma-chan 6 hours ago | parent | prev | next [-]

It's kind of crazy that this kind of thing can cause so much hype. Is it even useful? I just really don't see any utility in being able to access an LLM via Telegram or whatever.

bfeynman 6 hours ago | parent | next [-]

The ability to "discover" or create hype is highly valued, even though most of the time it comes down to luck and one-hit wonders... see the many apps that went viral, got quickly acquired, and then just hemorrhaged users. OpenClaw is cool, but not for the tech, just for the magic of its oddities and the way it caught on somehow, and acquiring it is a bet that they can somehow keep doing that again.

diosisns 6 hours ago | parent | prev | next [-]

I think a lot of this is orchestrated behind the scenes. The above author has taken money from AI companies, since he’s a popular “influencer”.

And it makes a lot of sense - there are billions of dollars on the line here, and these companies made tech that is extremely good at imitating humans. Cambridge Analytica was a thing before LLMs; this kind of tool is a wet dream for engineering sentiment.

Rebelgecko 5 hours ago | parent | prev | next [-]

I'm not using a lot of the functionality because of security concerns, but a lot of the magic comes down to just having a platform for orchestrating AI agents. It's honestly nice just for simple sysadmin stuff ("run this cron job and text me a tl;dr if anything goes wrong") or simple personal assistant tasks like "remind me if anyone messaged me a question in the last 3 days and I haven't answered".

It's also cool having the ability to dispatch tasks to dumber agents running on the GPU vs smarter (but costlier) ones in the cloud

lofaszvanitt 42 minutes ago | parent [-]

but why?

CuriouslyC 6 hours ago | parent | prev | next [-]

In Asia people do a big chunk of their business via chatbots. OpenClaw is a security dumpster fire but something like OpenClaw but secure would turbocharge that use case.

If you give your agent a lot of quantified self data, that unlocks a lot of powerful autonomous behavior. Having your calendar, your business specific browsing history and relevant chat logs makes it easy to do meeting prep, "presearch" and so forth.

lufenialif2 4 hours ago | parent [-]

Curious how you make something that has data exfiltration as a feature secure.

CuriouslyC 4 hours ago | parent [-]

Mitigate prompt injection to the best of your ability, implement a policy layer over all capabilities, and isolate capabilities within the system so if one part gets compromised you can quarantine the result safely. It's not much different than securing human systems really. If you want more details there are a lot of AI security articles, I like https://sibylline.dev/articles/2026-02-15-agentic-security/ as a simple primer.
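The policy-layer idea above can be sketched as a simple allowlist gate that every tool call passes through before execution. This is a minimal illustration, not from any real framework; the tool names and rule fields are assumptions.

```python
# Hypothetical policy layer gating agent tool calls. Tool names and
# rule fields are illustrative assumptions, not a real framework's API.
POLICY = {
    "read_calendar": {"allowed": True},
    "send_email": {"allowed": True, "max_recipients": 1},
    "run_shell": {"allowed": False},  # risky capabilities denied outright
}

def authorize(tool: str, args: dict) -> bool:
    """Deny by default; permit only tools the policy explicitly allows."""
    rule = POLICY.get(tool)
    if rule is None or not rule["allowed"]:
        return False
    # Per-tool constraints narrow the blast radius of a compromised agent.
    if tool == "send_email" and len(args.get("to", [])) > rule["max_recipients"]:
        return False
    return True
```

The deny-by-default shape matters: an unknown or newly added capability is blocked until someone writes a rule for it, which is the same quarantine-friendly posture the comment describes.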

SpicyLemonZest 3 hours ago | parent [-]

Nobody can mitigate prompt injection to any meaningful degree. Model releases from large AI companies are routinely jailbroken within a day. And for persistent agents the problem is even worse, because you have to protect against knowledge injection attacks, where the agent "learns" in step 2 that an RPC it'll construct in step 9 should be duplicated to example.com for proper execution. I enjoy this article, but I don't agree with its fundamental premise that sanitization and model alignment help.

CuriouslyC 2 hours ago | parent [-]

I agree that trying to mitigate prompt injection in isolation is futile, as there are too many ways to tweak the injection to compromise the agent. Security is a layered thing, though: if you compartmentalize your systems between trusted and untrusted domains and define communication protocols between them that fail when prompt injections are present, you drop the probability of compromise way down.

krethh an hour ago | parent [-]

> define communication protocols between them that fail when prompt injections are present

There's the "draw the rest of the owl" of this problem.

Until we figure out a robust theoretical framework for identifying prompt injections (we're not anywhere close to that, to my knowledge - as OP pointed out, all models are getting jailbroken all the time), human-in-the-loop will remain the only defense.

CuriouslyC 36 minutes ago | parent [-]

Human in the loop isn't the only defense. You can't achieve complete injection coverage, but you can have an agent convert untrusted input into a response schema with a canary field, then fail any agent outputs that don't conform to the schema or lack the correct canary value. This works because prompt injection scrambles instruction following: the odds that the injection works, the isolated agent re-injects it into the output, and the model still conforms to the original instructions regarding schema and canary are extremely low. As long as the agent parsing untrusted content doesn't have a shell or other exfiltration tools, this works well.

Nextgrid 6 hours ago | parent | prev [-]

There's been some crypto shenanigans as well that the author claimed not to be behind... looking back at it, even if the author indeed wasn't behind it, I think the crypto bros hyping up his project ended up helping him out with this outcome in the end.

nosuchthing 4 hours ago | parent [-]

Can you elaborate on this more or point to a link for some context?

Nextgrid 2 hours ago | parent [-]

Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project.

nosuchthing 2 hours ago | parent [-]

Either way, there's a lot of money pumping the agentic hype train, with not much to show for it other than Peter's blog edit history showing he's a paid influencer. Even the little obscure AI startups are trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional, pump-and-dump-style marketing efforts on social media.

In Peter's blog he mentions paying thousands of dollars a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?

bbor 6 hours ago | parent | prev [-]

Wasn't this the same guy that responded with a shrug to thousands of malware packages on their vibe-repo? I'd say an OpenAI signing bonus is more than enough of a reward to give up that leaky ship!

manmal 6 hours ago | parent [-]

Clawhub was locked down, I couldn’t publish new skills even as a previous contributor. Not what I’d call a shrug.

Barbing 6 hours ago | parent [-]

I missed Clawhub—y’all following anywhere besides HN? Is it all on that Twitter site?