jcgrillo 2 hours ago

That quote jumped out at me for a different reason... it's simply a falsehood. Claude Code is built on an LLM, which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. The claim that their tool "reasons about your code the way a human would" is ridiculous because it's clearly wrong--we are not, in fact, running LLMs in our heads.

If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.

nadis 2 hours ago | parent | next [-]

That's a fair point, and agreed that human researchers certainly do more than just pattern match. I took it as vision-y fluff rather than a literal claim, but I do appreciate you calling it out more explicitly as being wrong.

dboreham an hour ago | parent | prev [-]

It's all pattern matching. Your brain fools you into believing otherwise, and all other humans (well, not absolutely all) join in the delusion, confirming it as fact.

jcgrillo 8 minutes ago | parent [-]

I suppose I should have been more specific--pattern matching in text. We humans do a lot more than processing ASCII bytes (or whatever encoding you like) and looking for semantically nearby ones, if "only" because we have sensors that harvest far more varied data than a 1D character stream.

Security researchers may get an icky feeling when they notice something or other in a system they're analyzing, which eventually leads to something exploitable. Or they may beat their head against a problem all day at work on a Friday, go to the bar afterwards, wake up with a terrible hangover Saturday morning, go out to brunch, and then, while stepping off the bus on the way to the zoo, an epiphany strikes like a flash and the exploit unfurls before them unbidden like a red carpet. LLMs do precisely none of this.