| ▲ | ACCount37 8 hours ago |
| > We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. Fucking hell. Opus was my go-to for reverse engineering and cybersecurity uses, because, unlike OpenAI's ChatGPT, Anthropic's Opus didn't care about being asked to RE things or poke at vulns. It would, however, shit a brick and block requests every time something remotely medical/biological showed up. If their new "cybersecurity filter" is anywhere near as bad? Opus is dead for cybersec. |
|
| ▲ | methodical 8 hours ago | parent | next [-] |
| To be fair, delineating between benevolent and malevolent pen-testing and cybersecurity purposes is practically impossible, since the only difference is the user's intentions. I am entirely unsurprised (and would expect) that as models improve, the degree to which widely available models are restricted from cybersecurity use will only increase. That's not to say I see this as the right approach; in theory the two forces would balance each other out, as both white hats and black hats would have access to the same technology. But I can understand the hesitancy from Anthropic and others. |
| |
| ▲ | trinix912 2 hours ago | parent | next [-] | | But this technology is now out there; the cat's out of the bag, and there's no going back to a world where people can't ask an AI to write malware for them. I'd argue that black hats will find a way to get uncensored models and use them to write malware either way, and that further restricting generally available LLMs for cybersecurity use would end up hurting white hats and programmers pentesting their own code far more (which would once again help the black hats, by giving them an advantage in finding unpatched exploits). | |
| ▲ | ACCount37 8 hours ago | parent | prev [-] | | Yes, and the previous approach Anthropic took was "allow anything that looks remotely benign". The only thing that would get a refusal would be a downright "write an exploit for me". Which is why I favored Anthropic's models. It remains to be seen whether Anthropic's models are still usable now. I know just how much of a clusterfuck their "CBRN filter" is, so I'm dreading the worst. |
|
|
| ▲ | Havoc 8 hours ago | parent | prev | next [-] |
| Claude Code had safeguards like that hardcoded into the software. You could see them if you intercepted the prompts with a proxy. |
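For anyone who wants to check this themselves, here is a minimal sketch of that kind of interception. It assumes mitmproxy is installed, and that Claude Code (a Node.js app) honors the standard `HTTPS_PROXY` and `NODE_EXTRA_CA_CERTS` environment variables, which it generally does for corporate-proxy setups; the exact paths and behavior may differ by version.

```shell
# Run mitmproxy's web UI; it proxies on port 8080 by default
# and generates its CA certificate under ~/.mitmproxy on first run
mitmweb --listen-port 8080 &

# Route Claude Code's HTTPS traffic through the proxy,
# and tell Node to trust mitmproxy's CA so TLS isn't rejected
export HTTPS_PROXY=http://127.0.0.1:8080
export NODE_EXTRA_CA_CERTS="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"

# API request bodies, including any injected system-prompt
# safeguards, should now be visible in the mitmweb UI
claude
```

This is a sketch of the general technique, not a verified recipe for the current client.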
|
| ▲ | brynnbee 7 hours ago | parent | prev | next [-] |
| I'm currently testing 4.7 with some reverse engineering stuff / Ghidra scripting and it hasn't refused anything so far. But I'm also doing it on a 20-year-old video game, so maybe it doesn't think that's problematic. |
| |
| ▲ | ACCount37 6 hours ago | parent [-] | | I really hope it's that way for my use cases too, which also involve Ghidra and decompiler outputs, but I'm not optimistic. |
|
|
| ▲ | johnmlussier 7 hours ago | parent | prev | next [-] |
| Incredible - in one fell swoop, killing my entire use case for Claude. I have about 15 submissions that I now need to work on with Codex instead, because this "smarter" model refuses to read program guidelines and take them seriously. |
|
| ▲ | senko 7 hours ago | parent | prev | next [-] |
| From the article: > Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program. |
| |
| ▲ | atonse 7 hours ago | parent | next [-] | | This seems reasonable to me. The legit security firms won't have a problem doing this, just as with other vendors (like Apple, which can give you special iOS builds for security analysis). If anyone has a better idea of how to _pragmatically_ do this, I'm all ears. | | |
| ▲ | adrian_b 5 hours ago | parent [-] | | If the vendors of programs do not want bugs to be found in their programs, they should search for them themselves and ensure that there are no such bugs. The "legit security firms" have no right to be considered more "legit" than any other human for the purpose of finding bugs or vulnerabilities in programs. If I buy and use a program, I certainly do not want it to have any bug or vulnerability, so it is my right to search for them. If the program is not commercial but free, then it is also my right to search for bugs and vulnerabilities in it. I might find it acceptable not to search for bugs or vulnerabilities in a program only if the authors of that program assumed full liability, in perpetuity, for any kind of damage ever caused by their program, in any circumstances, which is the opposite of what almost every software company currently does by disclaiming all liability. There exists absolutely no scenario in which Anthropic has any right to decide who deserves to search for bugs and vulnerabilities and who does not. If someone uses tools or services provided by Anthropic to perform some illegal action, then such an action is punishable by the existing laws and that does not concern Anthropic any more than a vendor of screwdrivers should be concerned if someone used one as a tool during some illegal activity. I am really astonished by how willing younger people are to put up with behaviors of modern companies that would have been considered absolutely unacceptable by anyone a few decades ago. | | |
| ▲ | atonse 3 hours ago | parent | next [-] | | Not sure where the younger-people thing came from, but I'm 45 and have been working in this industry since 1999. Even when I was in my 20s, I don't remember assuming I had a "right" to do something with a company's product before they'd sold it to me. In fact, I'd say the sense of entitlement, and the use of words like "rights" when talking about a company's policies and terms of use (in which you are perfectly free not to participate; rights have nothing to do with anything here, since you can simply not use these tools), feels more like the stereotypical "young" person's argument that sees everything through moralistic, "rights"-based principles. If you don't want to sign these documents, don't. This is true of pretty much every private transaction, from employment to anything else. It is your choice. If you don't want to give your ID to get a bank account, don't; keep the cash in your mattress, or in bitcoin instead. Regarding "legit": there are absolutely "legit" actors and not-so-"legit" actors, and we can apply common sense here. I'm sure we can both come up with edge cases (this is an internet argument, after all), but common cases are a good place to start. | |
| ▲ | adrian_b 2 hours ago | parent [-] | | You cannot search for bugs or vulnerabilities in "a company's product before they've sold it to you", because you cannot access it. Obviously, I was not talking about using pirated copies, which I had classified as illegal activities in my comment, so what you said has nothing to do with what I said. "A company's policies and terms of use" have become more and more frequently abusive, and this is possible only because too many people nowadays are willing to accept such terms even when they are themselves hurt by them, which ensures that no alternative to the abusive companies can appear. I am among those who continue to refuse mean and stupid terms imposed by various companies, which is why I do not have an Anthropic subscription. > "if you don't want to give your ID to get a bank account, don't" I do not see the relevance of your example to our discussion, because there are good reasons for a bank to know the identity of a customer. On the other hand, there are abusive banks whose behavior must not be accepted. For instance, a couple of decades ago I closed all my accounts at one of the banks I was using, because they had changed their online banking system and after the "upgrade" it worked only with Internet Explorer. I do not accept that a bank may impose conditions on its customers about what kinds of products of any nature they must buy or use, e.g. that they must buy MS Windows in order to access the services of the bank. More recently, I closed my accounts at another bank, because they discontinued their Web-based online banking and replaced it with a smartphone application. That would have been perfectly OK, except that they refused to provide the app for download so that I could install it myself; they provided it only in the online Google store, which I cannot access because I do not have a Google account. A bank has no right to condition its services on entering into a contractual relationship with a third party like Google. Moreover, this is especially revolting when that third party is from a country that is neither the bank's nor the customer's, as with Google. These are examples of bad bank behavior; demanding an ID is not. |
| |
| ▲ | senko 4 hours ago | parent | prev [-] | | > If someone uses tools or services provided by Anthropic to perform some illegal action, then such an action is punishable by the existing laws and that does not concern Anthropic any more than a vendor of screwdrivers should be concerned if someone used one as a tool during some illegal activity. In civilised parts of the world, if you want to buy a gun, poison, or a larger amount of chemicals that can be used for nefarious purposes, you need to provide your identity and the reason why you need it. Heck, if you want to move a larger amount of money between your bank accounts, the bank will ask you why. Why are those acceptable, yet the above isn't? > I am really astonished by how much younger people are willing to put up with Unsure where you got the "younger people" from. | |
| ▲ | adrian_b 2 hours ago | parent [-] | | Your examples have nothing to do with Anthropic and the like. A gun has no purpose other than being used as a weapon, so it is normal for the use of such weapons to be regulated. On the other hand, it is not acceptable to regulate like weapons the tools that are required for other activities, for instance kitchen knives or many chemicals, like acids and alkalis, which are useful for various purposes and which could be bought freely for centuries without ever causing any serious problems. LLMs are not weapons; they are tools. Any tool can be used in a bad or dangerous way, including as a weapon, but that is not a good enough reason to justify restrictions on its use, because such restrictions have far more bad consequences than good ones. > Unsure where you got the "younger people" from. As I have said, none of the people I know from my generation have ever found acceptable the kinds of terms and conditions that are imposed nowadays by most big companies for using their products, or their attempts to transition customers from owning products to renting them. The people who are now in their forties are a generation after me, and most of them are already much more compliant with these corporate demands, which affects me and the other people who still refuse to comply, because the companies can afford not to offer alternatives when they have enough docile customers. |
|
|
| |
| ▲ | ACCount37 7 hours ago | parent | prev [-] | | Yeah no. They can fuck right off with KYC humiliation rituals. |
|
|
| ▲ | zb3 8 hours ago | parent | prev [-] |
| It appears we're learning the hard way that we can't rely on the capabilities of models that aren't open-weight. These can be taken from us at any time, so expect it to get much worse. |
| |
| ▲ | hootz 7 hours ago | parent [-] | | Can't wait for a random Chinese company to train a model on Mythos by breaking Anthropic's ToS, just to release it for free and with open weights. |
|