vivzkestrel 2 days ago
I am amazed that not a single pro-AI person on HN has anything to say, or even speculates, about this. This is such a serious issue.
simianwords 2 days ago
This is a very poor article. What I understood is that they take one benchmark in particular that tests grade-school-level math. This benchmark apparently claims to test the ability to reason through math problems. They agree that the benchmarks show LLMs can solve such questions and that models are getting better. But their main point is that this does not prove the model is reasoning.

But so what? It may not reason the way humans do, but it is pretty damn close. The mechanics are the same: recursively extend the prompt with generated steps until one of them produces the final answer. They don't like that this gets described as the model "reasoning through" the problem, but it's just semantics at this point. For me, and for most others, getting the final answer is what matters, and it largely accomplishes this task.

I don't buy that the model couldn't reason through a problem. Have you ever asked a model for its explanation? It does genuinely explain how it got to the solution. At this point, who the hell cares what "reasoning" means if it (1) gets me the right answer and (2) reasonably explains how it did it?
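
To make the mechanics concrete, here is a minimal sketch of the loop I mean. Everything in it is illustrative: generate() is a hypothetical stand-in for a model call (stubbed here so the snippet actually runs), not any real API.

    def generate(prompt: str) -> str:
        # Stub standing in for an LLM call; a real implementation
        # would query a model here. Hypothetical, for illustration only.
        return "Answer: 42"

    def reason(question: str, max_steps: int = 10) -> str:
        # Recursively extend the prompt with the model's own output
        # until a step containing a final answer is generated.
        prompt = f"Question: {question}\nLet's think step by step.\n"
        for _ in range(max_steps):
            step = generate(prompt)        # next generated reasoning step
            prompt += step + "\n"          # the step becomes context for the next one
            if "Answer:" in step:          # the answer-generating step terminates the loop
                return step.split("Answer:", 1)[1].strip()
        return "no answer produced"

    print(reason("What is 6 * 7?"))  # prints "42" with the stub above

The loop itself is the whole trick: each generated step feeds back in as context for the next, until an answer-producing step ends it.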
| ||||||||
ulfw 2 days ago
Because the pro-AI people are busy trying to sell their whatever-they-have before the bubble bursts.