EGreg 12 hours ago

No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation are scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.

OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rug-pull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" schemes, are symbols of everything wrong with Web3.

Giving everyone GPU compute and open-source models to run on it is like giving everyone their own Wuhan gain-of-function lab and hoping it'll be fine. The probability that NO ONE develops something bad with AI goes to 0 as more people have it. Here's the problem: with distributed, unstoppable compute, even ONE virus or bacterium escaping is catastrophic (as we've seen with the coronavirus, smallpox, the black plague, etc.). And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
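
To spell out that math (the per-actor probability p below is purely illustrative):

    # If each of n actors independently misuses the tech with probability p,
    # the chance that NO ONE does is (1 - p)^n, which decays to 0 as n grows.
    def p_no_bad_actor(p: float, n: int) -> float:
        return (1 - p) ** n

    print(p_no_bad_actor(0.001, 1_000))      # ~0.37
    print(p_no_bad_actor(0.001, 1_000_000))  # effectively 0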

As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly: open-source models running on dark compute will begin to power swarms of bots acting as unstoppable advanced persistent threats (as I've been warning for years).

Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were highly isolated and locked down. This is like giving everyone their own lab to make designer viruses and hoping we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of persistent global outbreaks. We are headed towards a near-certain disaster if we don't stop this.

If I had my way, AI would only run in locked-down environments and we'd just use the inert artifacts it produces. That is good enough for just about all the innovations we need, including medical breakthroughs and much more. We know where the compute is; we can see it from space. Lawmakers still have a brief window to keep it that way before the genie is out of the bottle for good.

A decade ago, I really thought AI would be responsibly developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models thoroughly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. Just recently, MCP was the new hotness.

I wasn't going to get too involved with building AI platforms, but I'm diving in, and a month from now I will release an alternative to OpenClaw that actually shows how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI, so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images (see the sketch below). I've already filed two provisional patents covering both of these, and I'm going to implement them myself (not as an NPE).

But even if it does everything as well as OpenClaw, or better, and 100% safely, some people will still want to run local models on general-purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries came together to ban chemical weapons and CFCs (via the Montreal Protocol) and let the hole in the ozone layer heal. It is still possible...
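
Roughly, here's the kind of provenance record I mean; the field names and hashing scheme below are just illustrative, not the actual design:

    import hashlib, json

    def h(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Illustrative provenance record: pin the model by hash, fix the seed so
    # inference is deterministic, and commit to the full prompt history, so
    # anyone can re-run the chain and verify the output really came from it.
    record = {
        "model_sha256": h(b"<pinned model weights>"),
        "prompt_history_sha256": h(json.dumps(["prompt 1", "prompt 2"]).encode()),
        "seed": 42,
        "output_sha256": h(b"<deterministic model output>"),
    }

    # Commitment over the whole record; in the real system this would be
    # signed by a key that only exists inside the attested TEE.
    attestation = h(json.dumps(record, sort_keys=True).encode())
    print(attestation)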

This is how I feel:

https://www.instagram.com/reels/DIUCiGOTZ8J/

PS: For the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.