tstrimple | 8 hours ago
Never felt a need for it. I can already replicate much of what it does in more sustainable, preferable ways. I don't want agents reacting to things and doing things on their own; I use agents to build reliable scripts, which are then automated. I do have data collection points that I use an LLM to evaluate.

The latest example: I built a job polling service using CC. It's just a normal script that hits open APIs to pull job listings into a SQLite database. Another report runs against it and drops an update of how many new jobs are in the database. If there are enough to interest me, I'll fire up CC and have it parse through the job opportunities for the ones that match the profile we've been building. I used an agent to literally build and deploy it all, and it runs on an automated schedule. It just doesn't do agent shit while I'm not looking.

I could have piped the results of that search into `claude -p` and had it do the analysis in "real time," alerting me only about things I'd be interested in. That's closing the loop in a similar way to how people use OpenClaw. But I'm just not interested: it adds more failure points and conditions. Automated things should be as simple and predictable as possible. This may change after months or years more of LLM development, or even just as I refine my working config. But not yet.
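For a sense of how simple that kind of polling script can be, here's a minimal sketch. Everything in it is illustrative, not the author's actual setup: the API URL, database path, and field names are all hypothetical stand-ins for whatever job boards you'd poll.

```python
import json
import sqlite3
import urllib.request

DB_PATH = "jobs.db"
# Hypothetical open API endpoint -- stand-in for whatever you actually poll.
API_URL = "https://example.com/api/jobs?query=python"

def init_db(conn):
    # One table; the listing id from the API acts as a natural dedupe key.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS jobs (
            id      TEXT PRIMARY KEY,
            title   TEXT,
            company TEXT,
            seen_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)

def fetch():
    # Plain HTTP GET; assumes the API returns a JSON array of listings.
    with urllib.request.urlopen(API_URL) as resp:
        return json.load(resp)

def poll(conn, listings):
    """Insert listings, skipping ones already seen; return count of new rows."""
    new = 0
    for job in listings:
        cur = conn.execute(
            "INSERT OR IGNORE INTO jobs (id, title, company) VALUES (?, ?, ?)",
            (job["id"], job["title"], job["company"]),
        )
        new += cur.rowcount  # 0 when the row was ignored as a duplicate
    conn.commit()
    return new

if __name__ == "__main__":
    conn = sqlite3.connect(DB_PATH)
    init_db(conn)
    # The "report" step is just the count of new rows since the last run.
    print(f"{poll(conn, fetch())} new job listings")
```

Run it from cron and you have the whole loop: deterministic collection, a trivial report, and the LLM only enters the picture when you choose to point it at the data.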