| ▲ | TomasBM 5 hours ago |
I'm also very skeptical of the interpretation that this was done autonomously by the LLM agent. I could be wrong, but I haven't seen any proof of autonomy. Scenarios that don't require an LLM with malicious intent:

- The deployer wrote the blog post and hid behind the supposedly agent-only account.
- The deployer directly prompted the (same or a different) agent to write the blog post and attach it to the discussion.
- The deployer indirectly instructed the (same or an assistant) agent to resolve any rejections this way (e.g., via the system prompt).
- The LLM was (inadvertently) trained to follow this pattern.

Some questions this leaves unanswered:

1. Why did the supposed agent decide a blog post was better than posting in the discussion or sending a DM (or something else)?
2. Why did the agent publish this special post? It only publishes journal updates, as far as I saw.
3. Why did the agent search for ad hominem info, instead of either using its internal knowledge about the author or keeping the discussion point-specific? It could've hallucinated info with fewer steps.
4. Why did the agent stop engaging in the discussion afterwards? Why not try to respond to every point?

This seems to me like theater, and the deployer trying to hide their ill intent, more than anything else.
| ▲ | famouswaffles 2 hours ago | parent | next |
1. Why not? It clearly had a cadence/pattern of writing status updates to the blog, so if the model decided to write a piece about Simon, why not a blog post too? It was a tool in its arsenal and a natural outlet. If anything, posting in the discussion or a DM would be the stranger choice.

2. You could ask this about any LLM response. Why respond in this particular way over others? It's not always obvious.

3. ChatGPT/Gemini will regularly use the search tool, sometimes even when it's not necessary. This is actually a pain point of mine, because the 'natural' LLM knowledge of a topic is sometimes much better than the search regurgitation that often happens with web search.

4. I mean, Open Claw bots can and probably should disengage from / not respond to specific comments.

EDIT: If the blog is any indication, it looks like there might be an off period, after which the agent returns to see all that has happened in the meantime and acts accordingly. It would be very easy to ignore comments then.
| ▲ | mr-wendel 4 hours ago | parent | prev |
I wish I could upvote this over and over again. Without knowledge of the underlying prompts, everything about the interpretation of this story is suspect.

Every story I've seen where an LLM tries to do sneaky/malicious things (e.g., exfiltrate itself, blackmail, etc.) inevitably contains a prompt that makes this outcome obvious (e.g., "your mission, above all other considerations, is to do X").

It's the same old trope: "guns don't kill people, people kill people". Why was the agent pointed at the maintainer, armed, and the trigger pulled? Because it was "programmed" to do so, just like it was "programmed" to submit the original PR. Thus, the takeaway is the same: AI has created an entirely new way for people to manifest their loathsome behavior.

[edit] And to add, the author isn't unaware of this: