arcfour 3 hours ago

That seems a bit fatalistic, "we have lost so much because curl discontinued bug bounties". That's unfortunate, but it's very minor in the grand scheme of things.

Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.

Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug in curl being found with AI and responsibly disclosed!).

Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.

twelvedogs 2 hours ago | parent | next [-]

Blaming only the people is also incorrect. It's easy to see that once the cost of submission fell low enough relative to the possible reward, bounties would become unviable.

AI made the cost of entry very low by pushing that cost onto the people offering the bounty.

There will always be a percentage of people desperate or unscrupulous enough to do that basic math. You can blame them, but it's like blaming water for being wet.

timmytokyo 2 hours ago | parent | prev | next [-]

"Guns don't kill people, people kill people."

AlexandrB 2 hours ago | parent | prev [-]

> Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.

I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, and advertising. There are plenty of legitimate uses, but they lag behind because most non-bad actors actually care about what the LLM output says, so there are still humans in the loop slowing things down. Spammers have no such requirement and can unleash mountains of slop on us thanks to AI.

The other problem with AI and LLMs is that the leading-edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corps, and the best you can do is sort of rent access to them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife whose manufacturer could make it go dull or sharp on command, depending on what you were trying to cut.