dunkmaster 19 hours ago
API shows either auto or low available. Is there another secret value with even lower restrictions?
refulgentis 18 hours ago | parent
Not that I know of. I just took any indication that the parent post meant absolute zero moderation as them being a bit loose with their words and excitable with how they understand things. There were some signs:

1. It's unlikely they completed an API integration quickly enough to have an opinion on military/defense image generation moderation yesterday, so they're almost certainly speaking about ChatGPT. (This is additionally confirmed by image generation requiring tier 5 anyway, which they would have been aware of if they had integrated.)

2. The military/defense use cases for image generation are not provided (and the steelman'd version in other comments is nonsensical, i.e. we can quickly validate you can still generate kanban boards or wireframes of ships).

3. The poster passively disclaims being in military/defense themself (grep "in that space").

4. It is hard to envision cases of #2 that do not require universal moderation for OpenAI's sake. I.e. let's say their thought process is along the lines of: defense/military ~= what I think of as CIA ~= black ops ~= image manipulation on social media, thus, the time I said "please edit this photo of the ayatollah to have him eating pig and say I hate allah" means it's overmoderated for defense use cases.

5. It's unlikely OpenAI wants to be anywhere near PR resulting from #4. Assuming there is a super-secret defense tier that allows this, it's at the very least unlikely that the poster's defense contractor friends were blabbing about the exclusive completely unmoderated access they had, to the poster, within hours of release. They're pretty serious about that secrecy stuff!

6. It is unlikely the lack of ability to generate images using GPT Image 1 would drive the military to Chinese models (there aren't Chinese LLMs that do this! Even if there were, there are plenty of good ol' American diffusion models!)