motorest | 6 days ago
> That page was made in December 2022, (...)

Irrelevant. Wikipedia does not create concepts. Again, if you take a few minutes to learn about the topic, you will eventually understand that the concept was coined a couple of decades ago and has a specific meaning. Either you opt to learn, or you don't. Your choice.

> Here's the first linked source:

Irrelevant. Your argument is as pointless and silly as claiming rubber duck debugging doesn't exist because no rubber duck is involved.
windward | 6 days ago | parent
Uh oh! Let me spend a few minutes to learn about the topic. Thankfully, a helpful Hacker News user has linked me to a useful resource. I will follow one of the linked sources to the paper 'ChatGPT is bullshit':

> Hicks, M.T., Humphries, J. and Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2). doi: https://doi.org/10.1007/s10676-024-09775-5

Hicks et al. note:

> calling their mistakes 'hallucinations' isn't harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.

What an enlightening input. I will now follow another source, 'Why ChatGPT and Bing Chat are so good at making things up':

> Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. [online] Ars Technica. Available at: https://arstechnica.com/information-technology/2023/04/why-a....

Edwards notes:

> In academic literature, AI researchers often call these mistakes "hallucinations." But that label has grown controversial as the topic becomes mainstream because some people feel it anthropomorphizes AI models (suggesting they have human-like features) or gives them agency (suggesting they can make their own choices) in situations where that should not be implied. The creators of commercial LLMs may also use hallucinations as an excuse to blame the AI model for faulty outputs instead of taking responsibility for the outputs themselves.

> Still, generative AI is so new that we need metaphors borrowed from existing ideas to explain these highly technical concepts to the broader public. In this vein, we feel the term "confabulation," although similarly imperfect, is a better metaphor than "hallucination." In human psychology, a "confabulation" occurs when someone's memory has a gap and the brain convincingly fills in the rest without intending to deceive others. ChatGPT does not work like the human brain, but the term "confabulation" arguably serves as a better metaphor because there's a creative gap-filling principle at work

It links to a tweet from someone called 'Yann LeCun':

> Future AI systems that are factual (do not hallucinate) [...] will have a very different architecture from the current crop of Auto-Regressive LLMs.

That was an interesting diversion, but let's go back to learning more. How about 'AI Hallucinations: A Misnomer Worth Clarifying'?

> Maleki, N., Padmanabhan, B. and Dutta, K. (2024). AI Hallucinations: A Misnomer Worth Clarifying. 2024 IEEE Conference on Artificial Intelligence (CAI). doi: https://doi.org/10.1109/cai59869.2024.00033

Maleki et al. say:

> As large language models continue to advance in Artificial Intelligence (AI), text generation systems have been shown to suffer from a problematic phenomenon often termed as "hallucination." However, with AI's increasing presence across various domains, including medicine, concerns have arisen regarding the use of the term itself. [...] Our results highlight a lack of consistency in how the term is used, but also help identify several alternative terms in the literature.

Wow, how interesting! I'm glad I opted to learn that! My fun was spoiled, though. I tried following a link to the 1995 paper, but it was SUPER BORING because it didn't say 'hallucinations' anywhere! What a waste of effort, after I had to go to those weird websites just to be able to access it!
I'm glad I got the opportunity to learn about Hallucinations (Artificial Intelligence) and how they are meaningfully different from bullshit, and how they can be avoided in the future. Thank you!