dureuill 3 hours ago

The project states a boundary clearly: code by LLMs not backed by a human is not accepted.

The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.

staticassertion 3 hours ago | parent [-]

The author obviously disagreed; did you read their post? They wrote the message, explaining their reasoning in detail, in the hope that it would convey the point to others, including other agents.

Acting like this is somehow immoral because it "legitimizes" things is really absurd, I think.

PKop 2 hours ago | parent [-]

> in the hopes that it would convey this message to others, including other agents.

When has engaging with trolls ever worked? When has "talking to an LLM" or human bot ever made it stop talking to you lol?

staticassertion 2 hours ago | parent [-]

I think this classification of "trolls" is sort of a truism. If you assume off the bat that someone is acting in bad faith, then yes, it's true that engaging won't work.

That said, if we instead ask "when has engaging in good faith with someone ever worked?", I would hope that you have some personal experiences that substantiate that. I know I do; I've had plenty of conversations where I've changed someone's mind, and I've changed my own mind on many topics.

> When has "talking to an LLM" or human bot ever made it stop talking to you lol?

I suspect that if you instruct an LLM not to engage, it will, statistically, stop engaging.