echelon | 19 hours ago |
I work in the space. There are a lot of use cases that get censored by OpenAI, Kling, Runway, and various other providers, for a wide variety of reasons:

- OpenAI is notorious for blocking copyrighted characters. They do prompt keyword scanning, but they also run a VLM on the results, so you can't "trick" the model.

- Lots of providers block public figures and celebrities.

- Various providers block LGBT imagery, even for safe-for-work prompts. Kling is notorious for this.

- I was on a sales call today with someone who runs a father's advocacy group. I don't know what system he was using, but he said he found it impossible to generate an adult male with a child, in a totally safe-for-work context.

- Some systems block "PG-13" images of characters in bathing suits or scantily clad.

None of this is porn, mind you.
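The two-stage moderation described above (a keyword scan on the prompt, then a VLM check on the generated result) can be sketched roughly like this. This is a hypothetical illustration, not any provider's actual pipeline: the blocklist, the `vlm_flags_output` stub, and all function names are made up, and a real system would call an actual vision-language model on the rendered image.

```python
# Hypothetical two-stage moderation sketch. The blocklist and the VLM
# stand-in are illustrative only.

BLOCKED_KEYWORDS = {"mickey mouse", "superman"}  # stand-in copyrighted names

def prompt_passes(prompt: str) -> bool:
    """Stage 1: naive keyword scan on the user's prompt."""
    p = prompt.lower()
    return not any(k in p for k in BLOCKED_KEYWORDS)

def vlm_flags_output(image_description: str) -> bool:
    """Stage 2: stand-in for a VLM run on the generated image.
    Here we just keyword-match a text description of the output."""
    d = image_description.lower()
    return any(k in d for k in BLOCKED_KEYWORDS)

def moderate(prompt: str, image_description: str) -> str:
    if not prompt_passes(prompt):
        return "blocked_at_prompt"
    if vlm_flags_output(image_description):
        # Catches prompts that evaded the keyword filter but still
        # produced a blocked subject -- the "can't trick it" stage.
        return "blocked_at_output"
    return "allowed"
```

For example, a prompt like "the famous cartoon mouse" passes the keyword scan, but if the output is recognized as the blocked character, the second stage still rejects it.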
thot_experiment | 19 hours ago | parent | next |
Sure, but that has nothing to do with the model architecture and everything to do with the cloud inference providers wanting to cover their asses.
throwaway314155 | 19 hours ago | parent | prev |
What does any of that have to do with the distinction between diffusion and autoregressive models?