▲ | Turskarama 20 hours ago
It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.
▲ | Terr_ 18 hours ago
My personal pet peeve is how a great majority of people--and too many developers--are being misled into believing that a fictional character coincidentally named "Assistant", inside a story-document half-created by an LLM, *is* the author-LLM.

If a human writes a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the author "thirsts for the blood of the innocent." The same holds when the story comes from an algorithm, and it continues to hold when the story is about a character named "AI Assistant" who is "helpful".

Getting people to fall for this illusion is great news for the companies, though, because they can get investor dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."
▲ | croes 18 hours ago
That's the real danger of AI: the false promises of the AI companies and the false expectations of management and users.

I saw this just recently on a data migration, where the users asked whether they still needed to enter metadata for documents, since they could just use AI to query the data that was previously derived from that metadata. They trust AI before it even exists, and don't even consider a transition period where they check whether the results are correct. As with security, convenience prevails.
▲ | xpe 16 hours ago
> All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

If your LLM + pre-prompt setup sounds confident in every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude. Think about what you're doing and experiment: try different pre-prompts, try different conversation styles. This is not to dismiss the tendency toward overconfidence, sycophancy, and the rest; I'm just sharing some mitigations.
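To make the pre-prompt mitigation concrete, here is a rough sketch of the kind of system prompt I mean, wired into a messages-style chat request. The prompt wording, the model name, and the `build_request` helper are all my own illustrative choices, not anything standard; the actual API call is left commented out since it needs an SDK and a key.

```python
# Sketch: a system prompt that nudges an LLM toward calibrated, hedged answers.
# Everything here is illustrative; tune the wording to your own use case.

CALIBRATION_PROMPT = (
    "When you answer, state your confidence explicitly (high/medium/low), "
    "flag any claims you are unsure about, and say 'I don't know' rather "
    "than guessing. Prefer hedged language over confident-sounding prose."
)

def build_request(user_question: str) -> dict:
    """Assemble a request payload for a messages-style chat API."""
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model name
        "max_tokens": 1024,
        "system": CALIBRATION_PROMPT,        # the pre-prompt doing the work
        "messages": [{"role": "user", "content": user_question}],
    }

# With the Anthropic Python SDK installed, the call would look roughly like:
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_request("Is P equal to NP?"))

payload = build_request("Is P equal to NP?")
print(payload["system"])
```

The point isn't this exact wording; it's that the default "confident assistant" persona is itself a prompt choice, and you can push back on it from your side.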