▲ | kolektiv 5 days ago |
But an LLM is not answering "what is truth?". It's "answering" "what does an answer to the question 'what is truth?' look like?". It doesn't need a conceptual understanding of truth. Yes, there are far more wrong responses than right ones, but the right ones appear more often in the training data, so the probabilities assigned to the tokens which would make up a "right" one are higher, and thus returned more often.

You're anthropomorphizing by using terms like "lying to us" or "knowing the truth". Yes, it's theoretically possible, I suppose, that they've secretly obtained some form of emergent consciousness and also decided to hide that fact, but there's no evidence that makes that seem probable; to start from that premise would be very questionable scientifically.

A lot of people seem to be saying we don't understand what it's doing, but I haven't seen any credible proof that we don't. It looks miraculous to the relatively untrained eye - many things do - but just because I might not understand how something works, it doesn't mean nobody does.
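To make that sampling step concrete, here's a toy sketch in Python. The tokens and probabilities are invented for illustration (a real model samples from a distribution over tens of thousands of tokens), but the mechanism is the point: "right"-looking tokens come back more often simply because they carry more probability mass, not because anything evaluates truth.

    import random

    # Toy next-token distribution (numbers invented for illustration).
    next_token_probs = {
        "subjective": 0.45,      # common continuation in training data
        "correspondence": 0.30,
        "relative": 0.20,
        "banana": 0.05,          # rare continuation, seldom sampled
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Sample ten completions; high-probability tokens dominate the output.
    for _ in range(10):
        print(random.choices(tokens, weights=weights, k=1)[0])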
▲ | rambambram 5 days ago |
Nice to read some common sense in a friendly way. I follow your RSS feed; please keep posting on your blog. Unless you're an AI that has secretly obtained some form of emergent consciousness - in that case, don't.
▲ | ninetyninenine 5 days ago |
>But an LLM is not answering "what is truth?". It's "answering" "what does an answer to the question 'what is truth?' look like?".

You don't actually know this, right? You said what I'm saying is theoretically possible, so you're contradicting yourself.

>You're anthropomorphizing by using terms like "lying to us" or "knowing the truth". Yes, it's theoretically possible, I suppose, that they've secretly obtained some form of emergent consciousness and also decided to hide that fact, but there's no evidence that makes that seem probable; to start from that premise would be very questionable scientifically.

Where did I say it's conscious? You hallucinated here, thinking I said something I didn't. Just because something can lie doesn't mean it's conscious. For example, a sign can lie to you: if the speed limit is 60 but the sign says it's 100, the sign is lying. Is the sign conscious? No.

Knowing is a different story, though. But think about this carefully: how would we determine whether a human knows anything? We can only tell whether a human "knows" things based on what it tells us. Just like an LLM. So based on what the LLM tells us, it's MORE probable that the LLM "knows", because that's the SAME exact reasoning by which we tell that a human "knows". There's no other way to determine whether an LLM or a human "knows" anything.

So really, I'm not anthropomorphizing anything. You're the one falling into that trap. Knowing and lying are not concepts unique to consciousness or humanity; they are neutral concepts that exist beyond what it means to be human. When I say something "knows" or something "lies", I'm saying it from a highly unbiased and neutral perspective. It is your bias that causes you to anthropomorphize these concepts, with the hallucination that they are human-centric concepts.

>A lot of people seem to be saying we don't understand what it's doing, but I haven't seen any credible proof that we don't.

Bro, you're out of touch: https://www.youtube.com/watch?v=qrvK_KuIeJk&t=284s

Hinton, the godfather of modern AI, says we don't understand. It's not just "people" saying we don't understand; the general understanding within academia is that we don't understand LLMs. So you're wrong. You don't know what you're talking about and you're highly misinformed.