Antibabelic 29 minutes ago
They are doing "the same thing" only in terms of function, and function only makes sense from the point of view of whatever is using it (e.g. a clerical worker who needs to add numbers quickly). Otherwise, if "the parts are all different, and the construction isn't even remotely similar", how can the thing they're doing be "the same"? More importantly, how is it possible to make useful inferences about one based on the other if that's the case?
ACCount37 8 minutes ago
The more you look into LLM internals, the more similarities you find: humanlike concepts, language-invariant circuits, abstract thinking, world models.

Mechanistic interpretability is struggling, of course. But what it has found in the last five years is still enough to dispel a lot of the "LLMs are merely X" and "LLMs can't Y" myths - if you are up to date on the relevant research.

It's not just the outputs. The process is somewhat similar too. LLMs and humans both implement abstract thinking of some kind - much like calculators and arithmometers both implement addition.
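The calculator/arithmometer comparison is the classic multiple-realizability point: the same function can be computed by structurally unrelated mechanisms. Here is a minimal sketch of that idea in Python (a hypothetical illustration, not anything from the thread): two adders with entirely different internals that agree on every input.

```python
# Hypothetical illustration: two structurally different
# implementations of the same function (integer addition).

def add_builtin(a: int, b: int) -> int:
    """Addition via the language's built-in operator."""
    return a + b

def add_ripple_carry(a: int, b: int) -> int:
    """Addition via bitwise ops, mimicking a ripple-carry adder circuit."""
    MASK = 0xFFFFFFFF          # work in 32-bit two's complement
    a &= MASK
    b &= MASK
    while b:
        carry = ((a & b) << 1) & MASK  # bits that carry into the next column
        a = (a ^ b) & MASK             # column sums without the carries
        b = carry
    # convert the 32-bit result back to a signed Python int
    return a if a < 0x80000000 else a - 0x100000000

# Same function, different mechanism: the outputs always match.
for x, y in [(2, 3), (123, 456), (-7, 5)]:
    assert add_builtin(x, y) == add_ripple_carry(x, y)
```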