2983592 7 hours ago

How do you know? If you have access you are not unbiased, otherwise you cannot know by definition.

AI companies routinely claim that something is too dangerous to release (I think GPT-2 was the first case) for marketing reasons. There are at least 10 documented high-profile cases.

They keep it secret because they now sell to the MIC with China and North Korea bullshit stories as well as to companies who are invested in the AI hype themselves.

Glemllksdf 6 hours ago | parent | next [-]

I prefer a more cautious approach than the Musk style where stuff gets fixed afterward.

And with GPT-2 the worry was mass emails that were a lot better, more detailed, and more personal, plus social media campaigns, etc.

How many bots are deployed today on X, influencing democracy around the globe?

It's fair to say it had an impact, and LLMs still do.

SpicyLemonZest 7 hours ago | parent | prev | next [-]

GPT-2 was obviously too dangerous to release at the time! It's OK-ish now, when the knowledge that AI can produce arbitrary text is widely shared. It would have been a disaster for scammers and phishers to get GPT-2 at a time when almost everyone still assumed that large volumes of detailed text proved there's a real human being on the other end of the conversation.

jayd16 6 hours ago | parent [-]

And, as we all know, humans can't be scammers. They need the robots to lie.

afthonos 7 hours ago | parent | prev [-]

> How do you know? If you have access you are not unbiased, otherwise you cannot know by definition.

The platonic ideal of how to dismiss any argument by anyone about anything.