KaiserPro | 5 days ago
> There's no "proper safeguarding". This isn't just possible with what we have. Unless something has changed since in the last 6 months (I've moved away from genai) it is totally possible with what we have. Its literally sentiment analysis. Go on, ask me how I know. > and then essentially pray that it's effective If only there was a massive corpus of training data, which openAI already categorise and train on already. Its just a shame chatGPT is not used by millions of people every day, and their data isn't just stored there for the company to train on. > secondary model will trigger a scary red warning that you're violating their usage policy I would be surprised if thats a secondary model. Its far easier to use stop tokens, and more efficient. Also, coordinating the realtime sharing of streams is a pain in the arse. I've never worked at openai > The big labs all have whole teams working on this. Google might, but facebook sure as shit doesn't. Go on, ask me how I know. > It's not a binary issue of "doing it properly". at no point did I say that this is binary. I said "a flaw is still a tradeoff.". The tradeoff is growth against safety. > The more censored/filtered/patronizing you'll make the model Again I did not say make the main model more "censored", I said "comb through history to assess the state of the person" which is entirely different. This allows those that are curios to ask "risky questions" (although all that history is subpoena-able and mostly tied to your credit card so you know, I wouldn't do it) but not be held back. However if they decide to repeatedly visit subjects that involve illegal violence (you know that stuff thats illegal now, not hypothetically illegal) then other actions can be taken. Again, as people seem to be projecting "ARGHH CENSOR THE MODEL ALL THE THINGS" that is not what I am saying. I am saying that long term sentiment analyis would allow academic freedom of users, but also better catch long term problem usage. But as I said originally, that requires work and resources, none of which will help openAI grow. |