jcgrillo 2 hours ago
That quote jumped out at me for a different reason... it's simply a falsehood. Claude Code is built with an LLM, which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. It's a ridiculous claim that their tool "reasons about your code the way a human would" because it's clearly wrong--we are not in fact running LLMs in our heads. If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.
nadis 2 hours ago | parent | next
That's a fair point, and agreed that human researchers certainly do more than just pattern match. I took it as vision-y fluff rather than a literal claim, but I do appreciate you calling it out more explicitly as being wrong.
dboreham an hour ago | parent | prev
It's all pattern matching. Your brain fools you into believing otherwise. All other humans (well, not absolutely all) join in the delusion, confirming it as fact.