methodical 8 hours ago

To be fair, distinguishing between benevolent and malevolent pen-testing or other cybersecurity use is practically impossible, since the only difference is the user's intent. I am entirely unsurprised (and would fully expect) that as models improve, the degree to which widely available models are barred from cybersecurity purposes will only increase.

That's not to say I see this as the right approach: in theory the two forces would balance each other out, since both white hats and black hats would have access to the same technology. But I can understand the hesitancy from Anthropic and others.

trinix912 2 hours ago | parent | next [-]

But this technology is already out there; the cat's out of the bag. There's no going back to a world where people can't ask an AI to write malware for them.

I'd argue that black hats will find a way to get uncensored models and use them to write malware either way, and that further restricting generally available LLMs for cybersec use would end up hurting white hats and programmers pentesting their own code far more. That would once again help the black hats, since they'd have an advantage at finding unpatched exploits.

ACCount37 8 hours ago | parent | prev [-]

Yes, and the previous approach Anthropic took was "allow anything that looks remotely benign". The only thing that would get a refusal was an outright "write an exploit for me". That's why I favored Anthropic's models.

It remains to be seen whether Anthropic's models are still usable now.

I know just how much of a clusterfuck their "CBRN filter" is, so I'm fearing the worst.