int_19h a day ago
> But my point is that LLM's essentially arrive at answers by brute force through search.

If "brute force" worked for this, we wouldn't have needed LLMs; a bunch of nested for-loops can brute force anything. The reason why LLMs are clearly "magic" in ways similar to our own intelligence (which we very much don't understand either) is precisely that they can arrive at an answer without brute force, which is computationally prohibitive for most non-trivial problems anyway. Even if the LLM takes several hours spinning in a reasoning loop, those millions of tokens still represent a minuscule part of the total possible solution space.

And yes, we're obviously more efficient and smarter. The smarter part should come as no surprise given that our brains have vastly more "parameters". The efficiency part is definitely remarkable, but completely orthogonal to the question of whether the phenomenon exhibited is fundamentally the same or not.
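To put a rough number on "minuscule part of the solution space": even a modest output length gives a combinatorial space that dwarfs any realistic token budget. A quick sketch (the vocabulary size, sequence length, and token budget below are illustrative assumptions, not measurements of any particular model):

```python
import math

VOCAB_SIZE = 50_000            # assumed: typical LLM vocabulary size
SEQ_LEN = 100                  # assumed: just a 100-token answer
TOKENS_GENERATED = 10_000_000  # assumed: hours of reasoning-loop output

# Distinct 100-token sequences: VOCAB_SIZE ** SEQ_LEN, a number with
# roughly SEQ_LEN * log10(VOCAB_SIZE) + 1 decimal digits.
digits = math.floor(SEQ_LEN * math.log10(VOCAB_SIZE)) + 1
print(f"possible 100-token sequences: a {digits}-digit number")
print(f"tokens actually generated:    ~10^{int(math.log10(TOKENS_GENERATED))}")
```

So an exhaustive search is off the table by hundreds of orders of magnitude; whatever the model is doing to land on an answer, it isn't enumeration.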