spongebobstoes 6 hours ago

anthropic has nothing but a contract to enforce appropriate usage of its models. there are no safety rails; they disabled their standard safety systems

openai can deploy safety systems of their own making

from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident

this is also preferable if you think the government is untrustworthy. an untrustworthy government may not honor the contract, but it will have a hard time subverting safety systems that openai builds or trains into the model

nawgz 6 hours ago | parent

Source?