leptons 5 days ago

When I write a sentence, I do it with intent, with a specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care whether the sentence it writes makes any sense or is factual, so long as it's human-readable and follows grammatical rules. It does none of this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it sometimes produces something that sounds correct doesn't mean it's doing any thinking at all. Yes, humans actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
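(For readers unfamiliar with "predicting the next word": here's a deliberately tiny sketch of the idea using a bigram frequency table. Real LLMs score candidates with a neural network over subword tokens, not a lookup table; the corpus and function names here are made up for illustration.)

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny corpus,
# then predict the most frequent follower. This is the crudest possible
# version of "pick the next word that fits" -- no intent, just statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))  # -> "cat" ("cat" followed "the" twice, others once)
```

The output is grammatical-looking continuation chosen purely by frequency, which is the property being argued about here, though modern LLMs are vastly more sophisticated than this.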

chpatrick 4 days ago | parent [-]

Actually, LLMs crunch through half a terabyte of weights before they "speak". How are you so confident that nothing in that immense amount of processing has anything to do with thinking? Modern LLMs are also trained to have an inner dialogue before they output an answer to the user.

When you type the next word, you also pick a word that fits some requirement. That doesn't mean you're not thinking.

leptons 4 days ago | parent [-]

"crunch through half a terabyte of weights" isn't thinking. Following grammatical rules to produce a readable sentence isn't thought, it's statistics, and whether that sentence is factual or foolish isn't something the LLM cares about. If LLMs didn't so constantly produce garbage, I might agree with you more.

chpatrick 4 days ago | parent [-]

They don't follow "grammatical rules", they process inputs with an incredibly large neural net. It's like saying humans aren't really thinking because their brains are made of meat.