| ▲ | sReinwald 3 hours ago |
| Disclaimer: Haven't used any of these (was going to try OpenClaw but found too many issues).
I think the biggest value-add is agency. Chat interfaces like Claude/ChatGPT are reactive, but agents can be proactive. They don't need to wait for you to initiate a conversation.
What I've always wanted: a morning briefing that pulls in my calendar (CalDAV), open Todoist items, weather, and relevant news. The first three are trivial API work. The news part is where it gets interesting and more difficult: RSS feeds and news APIs are firehoses. But an LLM that knows your interests could actually filter effectively. E.g., I want tech news but don't care about Android (iPhone user) or macOS (Linux user). That kind of nuanced filtering is hard to express as traditional rules but trivial for an LLM. |
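A minimal sketch of that kind of LLM-as-filter, assuming a hypothetical `ask_llm` callable that wraps whatever model API you use; the interest text and function names are illustrative, not from any particular tool:

```python
# Sketch: LLM-based filtering of a news firehose. `ask_llm` is a
# hypothetical stand-in that takes a prompt string and returns model text.
from typing import Callable

INTERESTS = (
    "Interested in: tech news, Linux, self-hosting. "
    "Not interested in: Android (iPhone user), macOS (Linux user)."
)

def filter_headlines(headlines: list[str], ask_llm: Callable[[str], str]) -> list[str]:
    """Keep only the headlines the model judges relevant to the stated interests."""
    kept = []
    for headline in headlines:
        prompt = (
            f"{INTERESTS}\n"
            f"Headline: {headline}\n"
            "Answer YES if this is relevant to these interests, otherwise NO."
        )
        if ask_llm(prompt).strip().upper().startswith("YES"):
            kept.append(headline)
    return kept
```

The point is that the "rules" live in a few sentences of natural language rather than an ever-growing pile of keyword matchers.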
|
| ▲ | rustyhancock 2 hours ago | parent | next [-] |
| I have a few cron jobs that are basically `opencode run` with a context file, and it works very well. At some point OpenClaw will take over on the strength of its benefits, but it doesn't feel close yet compared to the simplicity of just running the job every so often and having OpenCode decide what it needs to do. Currently it shoots me a notification if my trip to work is likely to be delayed. Could I do it manually? Well, sure. |
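A sketch of what such a cron-invoked script might look like, assuming `opencode run` accepts the prompt as a plain argument; the file path, cron schedule, and prompt wording are all illustrative:

```python
# Sketch of a script a cron entry might invoke, e.g.:
#   30 6 * * 1-5  /usr/bin/python3 /home/me/commute_check.py
# The context file holds the standing instructions ("check my commute,
# notify me if delayed"); the agent decides what to do from there.
import subprocess
from pathlib import Path

def build_command(context_file: str) -> list[str]:
    """Compose a non-interactive `opencode run` invocation from a context file."""
    context = Path(context_file).read_text()
    return ["opencode", "run", context]

def main() -> None:
    cmd = build_command("/home/me/commute-check.md")  # illustrative path
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    main()
```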
|
| ▲ | rafram 2 hours ago | parent | prev | next [-] |
| But this could be done for 1/100 the cost by only delegating the news-filtering part to an LLM API. No reason not to have an LLM write you the code, too! But putting it in front of task scheduling and API fetching — turning those from simple, consistent tasks to expensive, nondeterministic ones — just makes no sense. |
| |
| ▲ | sReinwald an hour ago | parent [-] |
| Like I said, the first examples are fairly trivial, and you absolutely don't need an LLM for those. A good agent architecture lets the LLM orchestrate while the actual API calls stay deterministic (through tool use / MCPs). My point was specifically about the news filtering part, which was something I had tried in the past but never managed to solve to my satisfaction.
The agent's job for a morning briefing would be:
- grab weather, calendar, and Todoist data using APIs or MCP
- grab news from select sources via RSS or similar, then filter relevant news based on my interests and things it has learned about me
- synthesize the information above
The steps that explicitly require an LLM are the last two. The value is in the personalization through memory and my feedback, but also in the LLM's ability to synthesize the information, not just regurgitate it. Here's what I mean: I have a task to mow the lawn on my Todoist, scheduled for today, but the weather forecast says it's going to be a bit windy and rain all day. At the end of the briefing, the assistant can proactively offer to move the Todoist task to tomorrow, when it will be nicer outside, because it knows the forecast. Or it might offer to move it to the day after tomorrow, because it also knows I have to attend my nephew's birthday party tomorrow. |
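The pipeline described above might wire together roughly like this; every function name is a hypothetical stand-in, the fetchers are deterministic API/MCP calls, and the LLM appears only in the final filter-and-synthesize step:

```python
# Sketch of the briefing pipeline: deterministic fetchers feed a single
# LLM call. All names here are hypothetical stand-ins; `ask_llm` wraps
# whatever model API the agent uses.
from typing import Callable

def morning_briefing(
    fetch_weather: Callable[[], str],      # deterministic API/MCP call
    fetch_calendar: Callable[[], str],     # deterministic API/MCP call
    fetch_todoist: Callable[[], str],      # deterministic API/MCP call
    fetch_news: Callable[[], list[str]],   # deterministic RSS pull
    ask_llm: Callable[[str], str],         # the only nondeterministic step
) -> str:
    prompt = (
        "Write a short morning briefing. Filter the news to my interests, "
        "and cross-reference tasks against weather and calendar; e.g. offer "
        "to move an outdoor task if rain is forecast.\n"
        f"Weather: {fetch_weather()}\n"
        f"Calendar: {fetch_calendar()}\n"
        f"Tasks: {fetch_todoist()}\n"
        f"News: {fetch_news()}\n"
    )
    return ask_llm(prompt)
```

Because all the data arrives in one prompt, the model can make cross-source suggestions (rain forecast + "mow the lawn" today + birthday party tomorrow → propose the day after) that no single integration would produce on its own.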
|
|
| ▲ | loveparade 3 hours ago | parent | prev [-] |
| But can't you do the same using appropriate MCP servers with any of the LLM providers? Even just a generic browser MCP is probably enough to do most of these things. And ChatGPT has Tasks, which are also proactive/scheduled. Not sure if Claude has something similar. If all you want to do is schedule a task, there are much easier solutions, like a few lines of Python, instead of installing something so heavy in a VM that comes with a whole bunch of security nightmares. |
| |
| ▲ | sReinwald 2 hours ago | parent | next [-] |
| > But can't you do the same just using appropriate MCP servers with any of the LLM providers?
Yeah, absolutely. And that was going to be my approach for a personal AI assistant side project. No need to reinvent the wheel writing a Todoist integration when MCPs exist.
The difference is where it runs. ChatGPT Tasks and MCP through the Claude/OpenAI web interfaces run on their infrastructure, which means no access to your local network: your Home Assistant instance, your NAS, your printer. A self-hosted agent on a Mac mini or your old laptop can talk to all of that.
But I think the big value-add here might be "disposable automation". You could set up a Home Assistant automation to check the weather and notify you when rain is coming because you're drying clothes on the clothesline outside. That's 5 minutes of config for something you might need once. Telling your AI assistant "hey, I've got laundry on the line. Let me know if rain's coming and remind me to grab the clothes before it gets dark" takes 10 seconds, and you never think about it again. The agent has access to weather forecasts, maybe even your smart home weather station in Home Assistant, and it can create a sub-agent that polls those once every x minutes and pings your phone when it needs to. |
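The disposable sub-agent described above reduces to a small polling loop; `get_forecast` and `notify_phone` are hypothetical stand-ins for the agent's weather and notification tools, and the exit conditions are illustrative:

```python
# Sketch of a "disposable automation" sub-agent: poll the forecast every
# few minutes, ping the phone when rain or dusk arrives, then exit.
# `get_forecast`, `notify_phone`, and `is_dark` are hypothetical stand-ins.
import time
from typing import Callable

def watch_laundry(
    get_forecast: Callable[[], str],
    notify_phone: Callable[[str], None],
    interval_s: int = 600,
    is_dark: Callable[[], bool] = lambda: False,
) -> None:
    """Poll until rain or darkness, notify once, then stop existing."""
    while True:
        if "rain" in get_forecast().lower():
            notify_phone("Rain is coming - grab the laundry off the line!")
            return
        if is_dark():
            notify_phone("It's getting dark - time to bring the clothes in.")
            return
        time.sleep(interval_s)
```

The "disposable" part is that nothing persists: once the condition fires, the loop returns and the one-off automation is gone, no config left behind to maintain.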
| ▲ | loveparade an hour ago | parent [-] | | But if you run e.g. Claude/Codex/opencode/etc locally you also have access to your local machine and network? What is the difference? |
| |
| ▲ | j16sdiz 2 hours ago | parent | prev [-] | | OpenClaw allows the LLM to make its own schedule, spawn subagents, and make its own tools. Yes, it's basically something "appropriate MCP servers" can do, but OpenClaw sells it as a whole preconfigured package. |
|