atomicnumber3 9 hours ago
"we can have a fluent conversation with a super smart AI" But we can't. I can have something styled as a conversation with a token predictor that emits text which, if interpreted as a conversation, will gaslight you constantly, while at best sometimes being accidentally correct (and even then requiring double-checking against an actual source). Yes, I am uninterested in having the gaslighting machine installed into every single UI I see in my life.
hodgehog11 9 hours ago | parent | next
LLMs are severely overhyped, have many problems, and I don't want them in my face any more than the average person does. But we're not in 2023 anymore. These kinds of comments just come off as ignorant.
throwuxiytayq 9 hours ago | parent | prev
You seem severely confused about how low the probability of being "accidentally correct" actually is for almost any real-life task you can imagine.