bubblyworld | 6 days ago
I think the argument here is nonsense. LLMs clearly work differently from human cognition, so pointing out a difference between how LLMs and humans approach a problem, and calling that difference the reason they can't build software, makes no sense. Plausibly there are many ways to build software that don't make sense to a human. That said, I agree with the conclusion. LLMs do seem to lack coherent models of what they work on - perhaps that's part of the reason they do so poorly on benchmarks like ARC, which are designed to elicit exactly that kind of skill?