| ▲ | Agents Aren't Coworkers, Embed Them in Your Software(feldera.com) | ||||||||||||||||||||||||||||||||||
| 47 points by gz09 15 hours ago | 22 comments | |||||||||||||||||||||||||||||||||||
| ▲ | solid_fuel 12 hours ago | parent | next [-] | ||||||||||||||||||||||||||||||||||
"Agents" can't think and LLMs aren't sentient. They aren't suited to be your coworker, but they also aren't suited for generation computational tasks. The chat interface is all that there is and their behavior in chat is not deterministic or bounded enough to be useful in most applications. They mimic tokens in reply to the tokens you give them, and that is all. You know what's a bad idea from an engineering (that thinky thing we used to do as part of building software) perspective? Building a dependency on an expensive remote API into your system. This isn't just me bloviating, I've been down this road before. In my case I had a project using LLMs to automatically edit videos provided by Hollywood content owners. It seemed like a decent application, but LLMs are structurally unsuited for dealing with user data like this. The way that the prompt is evaluated means there is no separation between system and user input, so once you start dealing with a wide variety of topics you pretty quickly run into walls. One example - ChatGPT refusing to summarize and pick a top segment from a news program because it contained references to a murder-suicide, and both murder and suicide are included in the many prohibited topics that are filtered in ChatGPT replies. This was through their API, not the regular user interface, so it is in theory as unrestricted as access gets. But because the LLM cannot be trusted to behave properly around the topic, they have to filter anything which touches it. Structurally, I don't see a way this can be overcome - LLMs by design mix the entire prompt together, it's not like a parameterized SQL query where you can isolate the user and system data. That means that a long or bold enough user input is often enough to outweigh the system prompt, and that causes the LLM to veer into unpredictable territory. | |||||||||||||||||||||||||||||||||||
| ▲ | iot_devs 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
> Give an agent the right interfaces and it becomes less conversational and more ambient. It no longer needs to constantly ask, explain, summarize, and negotiate. It can stay in the background, react to changes, and make steady progress with less supervision and less noise. That is closer to Weiser’s vision: calm technology, but for machines. I tend to agree quite a bit. I created a ambient background agent for my projects that does just that. It is there, in the background, constantly analysing my code and opening PRs to make it better. The hard part is finding a definition of "better" and for now it is whatever makes the longer and type checker happy. But overall it is a pleasure to use. | |||||||||||||||||||||||||||||||||||
| |||||||||||||||||||||||||||||||||||
| ▲ | ori_b 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
I'd pay more for deterministic, explainable, and fast software without agents. The value of computers is that they do tasks repeatably, reliably, and at blinding speed. This stuff is negative value. | |||||||||||||||||||||||||||||||||||
| |||||||||||||||||||||||||||||||||||
| ▲ | apsurd 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
Ambient agents premise lands and is thought provoking. But the more you read the article the more the point is lost. The prescriptions given aren't ambient?
(seems you're talking to the AI above (and you'll need to refine just like a conversation), it's just not synchronously in chat)The gripe seems to be specifically with being able to chat with the AI. Yes, ideally the AI just knows to do stuff. But the chat interface is also the reason every Bob and Sarah has chatGPT in their pocket. It's also just growing pains. | |||||||||||||||||||||||||||||||||||
| |||||||||||||||||||||||||||||||||||
| ▲ | skybrian 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
I like using them for coding, but I'm wary of making software that depends on an unreliable, expensive remote API. I'd rather have the agent write code and have no runtime dependency. It might be nice to have something simple and cheap for basic text classification, but I'm not sure what to use. (My websites are written in Deno.) | |||||||||||||||||||||||||||||||||||
| ▲ | leobuskin 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
> Agentic management software is all the hype today: What started with Moltbot and OpenClaw now has a lot of competition: ZeroClaw, Hermes, AutoGPT etc. Moltbot is OpenClaw, AutoGPT was born significantly before. I just couldn’t read after the first paragraph, I’ve lost the trust entirely, whatever/whoever wrote it. | |||||||||||||||||||||||||||||||||||
| |||||||||||||||||||||||||||||||||||
| ▲ | politelemon 11 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
> Humans are not a good target for calm technology. Exactly the opposite is true. I couldn't even understand the point or relation being made here as the article continues to emit further disconnected revelations and factual errors. I would suggest a human calmly read through the post and sense check it. | |||||||||||||||||||||||||||||||||||
| ▲ | orliesaurus 13 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
not yet coworkers* | |||||||||||||||||||||||||||||||||||
| |||||||||||||||||||||||||||||||||||
| ▲ | tommy29tmar 8 hours ago | parent | prev | next [-] | ||||||||||||||||||||||||||||||||||
[dead] | |||||||||||||||||||||||||||||||||||
| ▲ | WhoffAgents 13 hours ago | parent | prev [-] | ||||||||||||||||||||||||||||||||||
[dead] | |||||||||||||||||||||||||||||||||||