theshrike79 a day ago

I'm a skeptic myself and a long-time developer. But I do have to admit there's a nugget of truth in the Claws.

I installed picoclaw on a whim (or nano? can't remember).

In maybe 15 minutes I had it make a "get weather for this specific area using the met.no API" skill and "check the train tables at these two stops for this specific line" skill.
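Roughly the kind of check the weather skill boils down to. This is a sketch of my own, not what the agent actually wrote: the field names follow met.no's documented Locationforecast 2.0 "compact" format, but the "bad weather" thresholds are arbitrary.

```python
# Sketch: decide whether the next hour's weather is "bad" from a
# met.no Locationforecast 2.0 compact payload. To fetch the real thing
# you'd GET https://api.met.no/weatherapi/locationforecast/2.0/compact
# with lat/lon query params and an identifying User-Agent header,
# which met.no requires.

def is_bad_weather(forecast: dict,
                   max_precip_mm: float = 0.5,
                   max_wind_ms: float = 10.0) -> bool:
    """True if the next hour looks unpleasant for a commute."""
    now = forecast["properties"]["timeseries"][0]["data"]
    wind = now["instant"]["details"].get("wind_speed", 0.0)
    rain = (now.get("next_1_hours", {})
               .get("details", {})
               .get("precipitation_amount", 0.0))
    return rain > max_precip_mm or wind > max_wind_ms
```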

Then I could just say "I go to the office every Monday on a train that leaves at 8, notify me if the weather is bad or there are delays in the train schedules".

And it just worked.

The "make a skill" step was optional; it could've figured out both on its own. But I've been doing this for a while, and it's a lot more (token-)efficient to have it specifically know how to do the things I want it to do.

---

Now let's take this loop and think about the system and what it could do.

Even if I wasn't a programmer and just went with "tell me when the train line I use for my commute is late" the system itself could see that "hmm, this looks like a thing I'll be doing often" and create a script/skill/plugin to do that via an official API (or WebMCP in the future).
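That "notice repetition, promote it to a skill" loop could look something like this. Everything here is made up by me for illustration: the threshold, the normalisation, and the skill registry are not any real product's code.

```python
# Sketch of the self-skilling loop: count how often a normalised
# request recurs, and once it crosses a threshold, "promote" it to a
# reusable skill instead of re-reasoning from scratch every time.

from collections import Counter


class SkillPromoter:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts = Counter()
        self.skills = {}   # intent -> skill marker (a real system: a script)

    def observe(self, request: str) -> bool:
        """Record a request; return True the moment it becomes a skill."""
        intent = " ".join(request.lower().split())   # crude normalisation
        if intent in self.skills:
            return False           # already a skill, nothing new to do
        self.counts[intent] += 1
        if self.counts[intent] >= self.threshold:
            # Here a real system would have the model write a script
            # against an official API (or WebMCP) and save it.
            self.skills[intent] = f"script_for({intent!r})"
            return True
        return False
```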

You can't do that with Zapier or n8n.

There are many cool ways a pure LLM-powered system like that can be optimised, and more importantly, can be taught to self-optimise. By default I think the systems use the "main" model to read the HEARTBEAT.md file, which is stupid expensive. That could be done with a local model small enough to run on a modern phone.

And if that small local model says "yep, there's something to do", then it can either give the full task to an LLM or, if it's smart enough, spread specific tasks to small or medium local models first.
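The two-tier idea in the last two paragraphs, sketched out. The `triage` function here is a stand-in for a small on-device model; the real heartbeat mechanism and file format are whatever the agent framework actually uses, not this.

```python
# Sketch: a cheap local "triage" pass over HEARTBEAT.md decides whether
# anything needs doing before the expensive main model is ever invoked.

from typing import Callable, Optional


def triage(heartbeat_text: str) -> bool:
    """Stand-in for a small local model: 'is there anything to do?'
    A real version would run a quantised on-device model; here we just
    treat non-blank, non-comment lines as pending work."""
    return any(line.strip() and not line.lstrip().startswith("#")
               for line in heartbeat_text.splitlines())


def heartbeat_tick(heartbeat_text: str,
                   run_big_model: Callable[[str], str]) -> Optional[str]:
    """Only wake the expensive model when triage says there is work."""
    if not triage(heartbeat_text):
        return None   # nothing to do: zero main-model tokens spent
    return run_big_model(heartbeat_text)
```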

tl;dr OpenClaw is what Siri should've been after that epic fail of an Apple Keynote.