loveparade 4 hours ago

What are people using these things for? The use cases I've seen look a bit contrived and I could ask Claude or ChatGPT to do it directly

ryanjshaw 3 hours ago | parent | next [-]

Here’s a copy of a post I made on Farcaster where I’m unconvinced it’s actually being used at all:

I've used OpenClaw for 2 full days and 3 evenings now. I simply don't believe people are using this for anything majorly productive.

I really, really want to like it. I see glimpses of the future in it. I generally try to be a positive guy. But after spending $200 on Claude Max, running with Opus 4.5 most of the time, I'm just so irritated and agitated... IT'S JUST SO BAD IN SO MANY WAYS.

1. It goes off on these huge 10min tangents that are the equivalent of climbing out of your window and flying around the world just to get out of your bed. The /abort command works maybe 1 time out of 100, so I end up having to REBOOT THE SERVER so as not to waste tokens!

2. No matter how many times I tell it not to do things with side effects without checking in with me first, it insists on doing bizarre things like trying to sign up for new accounts when it hits an inconvenient snag with the account we're using, or emailing and chatting with support agents because it can't figure out something it could easily have asked ME for help with, etc.

3. Which reminds me that its memory is awful. I have to remind it to remind itself. It doesn't understand what it's doing half the time (e.g. it forgets the password it generated for something). It forgets things regularly; this could be because I keep having to reboot the server.

4. It forgets critical things after compaction because the algorithm is awful. There I am, typing away, and suddenly it's like the Men in Black paid a visit and the last 30min didn't happen. Surely just throwing away the oldest 75% of tokens would be more effective than whatever it's doing? Because it completely loses track of what we're doing and what I asked it NOT to do, I end up with problem (1) again.

5. When it does remember things, it spreads those memories all over the place in different locations and forgets to keep them consistent. So after a reboot it gets confused about what is the truth.

bosky101 an hour ago | parent | next [-]

i've never had situations where i prompt and then have to go out for coffee or a walk or a drive. one-shotting your first prompt, perhaps.

but like with a person, when the possibility of going off in the wrong direction is so high, i've always found 1-2 line prompts and small iterations much more appealing. the only times i've had to roll back were when i ran out of credits and a new model couldn't deal with the half-baked context, errors, and refactoring.

threethirtytwo 2 hours ago | parent | prev [-]

there's an entire cohort on HN who still claim AI is utterly and completely useless despite the evidence right in front of them. Literally people making the same claim almost word for word: they don't understand the hype, they've used AI themselves, and it's shit.

Meanwhile my entire company uses AI, and the on-the-ground reality for me versus that cohort is so much at odds that each side is claiming the other is insane.

I haven't used these bots yet, but I want to see the full story, not just one guy's take and one guy's personal experience. The hype exists because there are success stories. I want to hear those as well.

ryanjshaw 38 minutes ago | parent | next [-]

I don’t know how you came to that conclusion from my comment. I’m talking about a particular product named OpenClaw, representing a new style of doing work; not AI in general.

I dropped $200 on Claude Max in my personal capacity to test OpenClaw because I use Opus 4.5 all day in Cursor on an enterprise subscription… because it works for those problems.

Philip-J-Fry 2 hours ago | parent | prev | next [-]

What do you use AI for?

Pretty much everyone in my company also uses AI. But everyone sees the same downsides.

jamespo 2 hours ago | parent | prev [-]

There are people saying AI isn't living up to its hype/valuation; I don't see many saying "utterly useless".

And there's plenty who worship at the altar of Claude.

sReinwald 3 hours ago | parent | prev | next [-]

Disclaimer: Haven't used any of these (was going to try OpenClaw but found too many issues). I think the biggest value-add is agency. Chat interfaces like Claude/ChatGPT are reactive, but agents can be proactive. They don't need to wait for you to initiate a conversation.

What I've always wanted: a morning briefing that pulls in my calendar (CalDAV), open Todoist items, weather, and relevant news. The first three are trivial API work. The news part is where it gets interesting and more difficult - RSS feeds and news APIs are firehoses. But an LLM that knows your interests could actually filter effectively. E.g., I want tech news but don't care about Android (iPhone user) or MacOS (Linux user). That kind of nuanced filtering is hard to express as traditional rules but trivial for an LLM.

rustyhancock 2 hours ago | parent | next [-]

I have a few cron jobs that basically are `opencode run` with a context file and it works very well.
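For anyone unfamiliar with the pattern, it's roughly this much glue. The schedule, context-file path, and prompt below are illustrative, not from my actual setup:

```shell
# Hypothetical crontab entry: run opencode every weekday morning with a
# context file describing the job, and let it decide what (if anything)
# needs doing. Path and prompt are made up for illustration.
30 7 * * 1-5 opencode run "Follow the instructions in ~/jobs/commute-check.md and notify me only if action is needed"
```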

At some point OpenClaw will take over on the strength of its benefits, but it doesn't yet feel close to the simplicity of just running the job every so often and letting OpenCode decide what it needs to do.

Currently it shoots me a notification if my trip to work is likely to be delayed. Could I do it manually? Well, sure.

rafram 2 hours ago | parent | prev | next [-]

But this could be done for 1/100 the cost by only delegating the news-filtering part to an LLM API. No reason not to have an LLM write you the code, too! But putting it in front of task scheduling and API fetching — turning those from simple, consistent tasks to expensive, nondeterministic ones — just makes no sense.

sReinwald an hour ago | parent [-]

Like I said, the first examples are fairly trivial, and you absolutely don't need an LLM for those. A good agent architecture lets the LLM orchestrate but the actual API calls are deterministic (through tool use / MCPs).

My point was specifically about the news filtering part, which was something I had tried in the past but never managed to solve to my satisfaction.

The agent's job in the end for a morning briefing would be:

  - grab weather, calendar, Todoist data using APIs or MCP  
  - grab news from select sources via RSS or similar, then filter relevant news based on my interests and things it has learned about me  
  - synthesize the information above

The steps that explicitly require an LLM are the last two. The value is in the personalization through memory and my feedback but also the ability for the LLM to synthesize the information - not just regurgitate it. Here's what I mean: I have a task to mow the lawn on my Todoist scheduled for today, but the weather forecast says it's going to be a bit windy and rain all day. At the end of the briefing, the assistant can proactively offer to move the Todoist task to tomorrow when it will be nicer outside because it knows the forecast. Or it might offer to move it to the day after tomorrow, because it also knows I have to attend my nephew's birthday party tomorrow.
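The flow can be sketched in a few lines. This is a toy sketch, not any product's actual architecture: the fetchers are stubbed with sample data (in a real agent they'd be deterministic API/MCP calls), plain keyword matching stands in for the LLM filtering step, and every function name is made up:

```python
# Toy sketch of the morning-briefing pipeline. Fetchers return canned data;
# filter_news and the cross-referencing in synthesize stand in for LLM calls.

def fetch_weather():
    return {"condition": "rain", "wind_kmh": 35}

def fetch_todoist():
    return [{"task": "Mow the lawn", "due": "today"}]

def filter_news(items, interests):
    # Stand-in for LLM filtering: keep only items matching stated interests.
    return [i for i in items if any(k in i["title"].lower() for k in interests)]

def synthesize(weather, tasks, news):
    lines = [f"Weather: {weather['condition']}, wind {weather['wind_kmh']} km/h"]
    for t in tasks:
        note = t["task"]
        # The "synthesis" part: cross-reference tasks against the forecast.
        if "lawn" in t["task"].lower() and weather["condition"] == "rain":
            note += " (rain forecast: consider rescheduling)"
        lines.append(note)
    lines += [n["title"] for n in news]
    return "\n".join(lines)

news = [
    {"title": "New Linux kernel scheduler patch"},
    {"title": "Android 16 beta released"},
]
briefing = synthesize(fetch_weather(), fetch_todoist(),
                      filter_news(news, interests=["linux", "kernel"]))
print(briefing)
```

The orchestration stays deterministic; only the filtering and the cross-referencing are where a model earns its keep.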
loveparade 3 hours ago | parent | prev [-]

But can't you do the same using appropriate MCP servers with any of the LLM providers? Even just a generic browser MCP is probably enough to do most of these things. And ChatGPT has Tasks that are also proactive/scheduled. Not sure if Claude has something similar.

If all you want to do is schedule a task there are much easier solutions, like a few lines of python, instead of installing something so heavy in a vm that comes with a whole bunch of security nightmares?

sReinwald 2 hours ago | parent | next [-]

> But can't you do the same just using appropriate MCP servers with any of the LLM providers?

Yeah, absolutely. And that was going to be my approach for a personal AI assistant side project. No need to reinvent the wheel writing a Todoist integration when MCPs exist.

The difference is where it runs. ChatGPT Tasks and MCP through the Claude/OpenAI web interfaces run on their infrastructure, which means no access to your local network — your Home Assistant instance, your NAS, your printer. A self-hosted agent on a mac mini or your old laptop can talk to all of that.

But I think the big value-add here might be "disposable automation". You could set up a Home Assistant automation to check the weather and notify you when rain is coming because you're drying clothes on the clothesline outside. That's 5 minutes of config for something you might need once. Telling your AI assistant "hey, I've got laundry on the line. Let me know if rain's coming and remind me to grab the clothes before it gets dark" takes 10 seconds and you never think about it again. The agent has access to weather forecasts, maybe even your smart home weather station in Home Assistant, and it can create a sub-agent, which polls those once every x minutes and pings your phone when it needs to.
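A throwaway watcher like that could be as small as the sketch below. All names here are hypothetical stand-ins: `rain_expected()` for the forecast/Home Assistant query, `notify()` for the phone push:

```python
import time

# Minimal sketch of a disposable sub-agent: poll a forecast source and ping
# the phone once rain is expected. Stubs stand in for the real integrations.

def rain_expected():
    return True  # stub: would query a weather API or Home Assistant

def notify(msg):
    print(msg)  # stub: would push to the phone

def laundry_watcher(poll_seconds=600, max_polls=6):
    for _ in range(max_polls):
        if rain_expected():
            notify("Rain is coming: grab the laundry off the line!")
            return True
        time.sleep(poll_seconds)
    return False

alerted = laundry_watcher(poll_seconds=0, max_polls=1)
```

The point is that the agent writes and discards this itself; you never open a config UI.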

loveparade an hour ago | parent [-]

But if you run e.g. Claude/Codex/opencode/etc locally you also have access to your local machine and network? What is the difference?

j16sdiz 2 hours ago | parent | prev [-]

OpenClaw allows the LLM to make its own schedule, spawn subagents, and build its own tools.

Yes, it's basically what some "appropriate MCP servers" can do, but OpenClaw sells it as a whole preconfigured package.

lxgr 2 hours ago | parent | prev | next [-]

One significant advantage over Claude/ChatGPT is that your own agent will be able to access many websites that block cloud-hosted agents via robots.txt and/or IP filters. This is unfortunately getting more common.

Another is that you have access to and control over its memory much more directly, since it's entirely based on text files on your machine. Much less vendor lock-in.

gergo_b 3 hours ago | parent | prev | next [-]

I have no idea. The single thing I can think of is that it can have a memory... but you can do that with even less code. Just get a VPS, create a folder, run CC in it, and tell it to save things into MD files. You can access it from your phone using Termux.

sReinwald 3 hours ago | parent | next [-]

You could, but Claude Code's memory system works well for specialized tasks like coding - not so much for a general-purpose assistant. It stores everything in flat markdown files, which means you're pulling in the full file regardless of relevance. That costs tokens and dilutes the context the model actually needs.

An embedding-based memory system (Letta, Mem0, or a self-built PostgreSQL + pgvector setup) lets you retrieve selectively and only grab what's relevant to the current query. Much better fit for anything beyond a narrow use case. Your assistant doesn't need to know your location and address when you're asking it to look up whether sharks are indeed older than trees, but it probably should know where you live when you ask it about the weather, or good Thai restaurants near you.
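To illustrate why selective retrieval beats always loading one flat file, here's a toy version using bag-of-words vectors and cosine similarity in place of real embeddings. Letta, Mem0, and pgvector setups use learned embeddings, but the retrieval shape is the same; the memory strings are invented examples:

```python
import math
from collections import Counter

# Toy embedding: bag-of-words term counts. Real systems use learned vectors,
# but the retrieve-top-k-by-similarity pattern is identical.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "user lives in Berlin near the park",
    "user prefers spicy thai food",
    "user generated password for the NAS last week",
]

def retrieve(query, k=1):
    # Rank stored memories by similarity to the query; only the top k
    # enter the model's context, instead of the whole memory file.
    ranked = sorted(memories, key=lambda m: cosine(embed(query), embed(m)),
                    reverse=True)
    return ranked[:k]

print(retrieve("what is the weather where the user lives"))
```

A weather question pulls the location memory and leaves the password and food preferences out of context entirely.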

3 hours ago | parent | prev [-]
[deleted]
stavros 3 hours ago | parent | prev | next [-]

I couldn't really use OpenClaw (it was too slow and buggy), but having an agent that can autonomously do things for you and have the whole context of your life would be massively helpful. It would be like having a personal assistant, and I can see the draw there.

dominicq 3 hours ago | parent | prev [-]

Yeah, I don't get it either. Deploy a VM that runs an LLM so that I can talk to it via Telegram... I could just talk to it through an app or a web interface. I'm not even trying to be snarky, like what the hell even is the use case?

xylo 2 hours ago | parent | next [-]

The difference is that OpenClaw is not an LLM but an engine that spawns agents which interact with an LLM and with the system it's installed on.

It can have full access to the system it's running on. So it can browse the internet via a browser, run CLI commands, call APIs via skills, etc.

The idea is to act like a Jarvis-style personal assistant. You tell it what to do via chat, e.g. Telegram, and it does it for you.

BoredPositron 3 hours ago | parent | prev [-]

It's not even an LLM; it's just plumbing to pipe API calls.