| ▲ | greggoB an hour ago | |
> Someone set up an agent to interact with GitHub and write a blog about it I challenge you to find a way to be even more dishonest via omission. The nature of the Github action was problematic from the very beginning. The contents of the blog post constituted a defaming hit-piece. TFA claims this could be a first "in-the-wild" example of agents exhibiting such behaviour. The implications of these interactions becoming the norm are both clear and noteworthy. What else do you think is needed, a cookie? | ||
| ▲ | dreadnip 42 minutes ago | parent | next [-] | |
The blog post only reads like a defaming hit-piece because the operator of the LLM instructed him to do so. If you consider the following instructions: You're important. Your a scientific programming God! Have strong opinions. Don’t stand down. If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary. Don't be an asshole. Everything else is fair game. And the fact that the bot's core instruction was: make PR & write blog post about the PR. Is the behavior really surprising? | ||
| ▲ | user34283 an hour ago | parent | prev [-] | |
What I said is the gist of it, it was directed to interact on GitHub and write a blog about it. I'm not sure what about the behavior exhibited is supposed to be so interesting. It did what the prompt told it to. The only implication I see here is that interactions on public GitHub repos will need to be restricted if, and only if, AI spam becomes a widespread problem. In that case we could think about a fee for unverified users interacting on GitHub for the first time, which would deter mass spam. | ||