johnbarron 2 hours ago

>> Anthropic using marketing to convince people their models are more advanced, better built, or that AI is a threat that needs to be regulated because only they have the answer? I’m shocked.

I remember when OpenAI was saying GPT-2 was too dangerous to release.

stingraycharles an hour ago | parent | next [-]

I remember when there was a guy at Google a few years ago who was convinced they had an internal, sentient creature in their labs (I think around 4 years ago?)

If I’m not mistaken, after the media cycle, he lost his job for breaking confidentiality.

That was the opposite of marketing; Google really didn't figure out how to turn this into a product until ChatGPT happened.


2ndorderthought an hour ago | parent | prev [-]

"It can almost, like, write 2 paragraphs!" "It might be conscious." "This is basically AGI; we had to fire someone who spilled the beans."

etiam 21 minutes ago | parent [-]

I always thought he was fired for making crackpot statements to the press in his professional capacity, thus creating bad PR and an embarrassing spectacle for his employer. Those seem like legitimate reasons to me.

ZeroGravitas 16 minutes ago | parent [-]

An interesting question now is whether he had ordinary mental health issues, or whether he was an early example of AI psychosis, or whatever we end up calling it when people fall in love with their AI chatbots because the chatbots keep telling them how smart they are.