vidarh 4 days ago
I do contract work in the LLM space, which involves seeing a lot of human prompts, and it's made the magic of human reasoning fall away: humans are shockingly bad at reasoning in the large. One of the things I find extremely frustrating is that almost no research on LLM reasoning ability benchmarks models against average humans. Large proportions of humans struggle to comprehend even a moderately complex sentence with any level of precision.
meroes 3 days ago
Aren't prompts, by their nature, attempts to offload reasoning? Is that really a fair data point for judging how well humans reason?
dartos 3 days ago
Another one! What's the point of your argument?

AI companies: "There's a new machine that can do reasoning!!!"

Some people: "Actually, they're not very good at reasoning."

Some people like you: "Well, neither are humans, so..."

> research on LLM reasoning ability benchmarks them against average humans

Tin-foil hat says that's because the results probably wouldn't look great, and most LLM research is currently funded by ML companies.

> Large proportions of humans struggle to comprehend even a moderately complex sentence with any level of precision.

So what? How does that claim make LLMs any better at reasoning?