lukev | 2 days ago
That would not help in the slightest, any more than a surgeon general's warning helps stop smokers. The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them. Chatting with an LLM is entering a one-person echo chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.
duskwuff | 2 days ago
> That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

Particularly given some documented instances where a user has asked the language model about similar warnings, and the model responded by downplaying the warnings or telling the user to disregard them.
threatofrain | 2 days ago
It's not a one-person echo chamber, though; it also carries with it the smell and essence of a large corpus of human works. That's why it's so useful to us, and that's why it carries so much authority.
jvanderbot | 2 days ago
Well, hate to be that guy, but surgeon general's warnings coincided with a significant reduction in smoking. We've only just reached the flattening of that curve, after decades of decline. It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it'll be effective for this.