mlsu 2 hours ago

Vulnerabilities were found, probably a few by bad actors, when GPT-4 was released. Every vulnerability found now is probably found with AI assistance at the very least. Should they have never released GPT-4? Should we have believed claims that GPT-4 was too dangerous for mere mortals to access? I believe OpenAI was making similar claims when that model was released: that GPT-4 was a step function and going to change white-collar work forever.

The point is that this whole "the model is too powerful" schtick is a bunch of smoke and mirrors. It serves the valuation.

simianwords 2 hours ago | parent [-]

It's far simpler to believe that they are releasing it step by step: release to trusted third parties first, get the easy vulnerabilities fixed, work on the alignment, and then release to the public.

Do you not believe that the vulnerabilities found by these agents are serious enough to warrant a staggered release?