latexr 5 hours ago
> For example, ~2 years ago, an expert in ML

See, that’s a poor argument already. Anyone could counter it with other experts in ML publicly remarking that AI would have replaced 80% of the workforce or cured multiple diseases by now, which obviously hasn’t happened. It’s about as good an argument as when people countered NFT critics by citing Clifford Stoll calling the internet a fad.

> made this remark on stage: LLMs can't do math. Today they absolutely and obviously, can.

How exactly are “LLMs can’t” and “do math” defined? As you described it, that sentence does not mean “will never be able to”, so there’s no contradiction. Furthermore, it remains true that you cannot trust LLMs on their own for basic arithmetic. They may, e.g., call an external tool to do it, but pattern matching on text isn’t sufficient.

> The definitions don't change.

Of course they do; what are you talking about? Definitions change all the time with new information. That’s called science.
NitpickLawyer 5 hours ago | parent
The definition of "can/cannot do math" didn't change; that's not up for debate. Two years ago they couldn't solve an Erdős problem (people tried; Tao tried ~1 year ago). Today they can. The definitions don't change. What is changing is the idea that, now that they can do it, it's no longer intelligence. And that's literally moving the goalposts. Read the thread here, go to the bottom part: there are zillions of comments saying exactly this.

You seem keen on not trying to understand what the quote is saying. This is not a good-faith discussion, and it's not going anywhere. We're already miles from where we started. The quote is an observation (and an old one at that) about goalposts moving. If you can't or won't see that, there's no reason to continue this thread.