ACCount37 7 days ago

Too little too late. OpenAI's shit was nearly worthless for cybersec for what, a year already?

ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.

And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.

What I'm afraid of most is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.

jruz 7 days ago | parent | next

That's the whole point of this variant of the model: it won't have those guardrails.

ACCount37 7 days ago | parent

Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.

lebovic 6 days ago | parent | next

It seems reasonable for a company to require KYC for a dual-use product - especially a novel one built for security research.

Privacy concerns aside, the KYC process for OpenAI was self-serve and took about a minute.

jiggawatts 6 days ago | parent | prev

Remember the argument that the bad guys using AI to hack systems won't be a problem because all the "good guys" will have access too and can secure their software?

Pepperidge Farm remembers.

alephnerd 6 days ago | parent | prev

> OpenAI's shit was nearly worthless for cybersec for what, a year already

Plenty of AI-for-cybersecurity companies use a mixture of models depending on iteration and testing, including OpenAI's.