BearOso 4 hours ago
Did you pay attention in computer science classes? There are problems you can't simply brute-force. You can throw all the computing power you want at them, but they won't terminate before the heat-death of the universe. An LLM can only output a convolution of its data set. That's its plateau. It can't solve problems, it can only output an existing solution. Compute power can make it faster to narrow down to that existing solution, but it can't make the LLM smarter.
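To make the brute-force point concrete, here's a rough back-of-the-envelope sketch (the figures are my own assumptions, not the commenter's): a solver that must enumerate 2**n candidates on hardware doing 1e18 checks per second, roughly an exascale machine.

    # Back-of-the-envelope: time to exhaustively search 2**n candidates.
    # Assumed figure: 1e18 checks per second (roughly exascale hardware).
    SECONDS_PER_YEAR = 3.15e7
    AGE_OF_UNIVERSE_YEARS = 1.38e10  # ~13.8 billion years

    def brute_force_years(n_bits, checks_per_second=1e18):
        """Years needed to enumerate all 2**n_bits candidates."""
        return (2 ** n_bits) / checks_per_second / SECONDS_PER_YEAR

    for n in (64, 128, 256):
        years = brute_force_years(n)
        print(f"n={n}: {years:.2e} years "
              f"({years / AGE_OF_UNIVERSE_YEARS:.2e} x age of universe)")

Already at n=128 the search needs on the order of a thousand times the age of the universe, which is the sense in which more compute alone doesn't help.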
aurareturn 3 hours ago
Maybe LLMs can solve novel problems, maybe not. We don't know for sure. The trend suggests they can. There are still plenty of problems where more tokens would let them be solved, and solved faster and better. There is absolutely no way we've already met the AI compute demand for the problems LLMs can solve today.
WarmWash 2 hours ago
Not really. You can leverage randomness (and LLMs absolutely do) to generate bespoke solutions and then use known methods to verify them. I'm not saying LLMs are great at this; they're gimped by their inability to "save" what they learn. But we know that any kind of "new idea" is a function of random and deterministic processes mixed together in varying amounts. Everything is either random, deterministic, or some shade of the two. Human brain "magic" included.
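A minimal sketch of that generate-and-verify pattern (the subset-sum instance, numbers, and try count below are made up for illustration only): propose candidates at random, then accept only those that pass a cheap deterministic check.

    import random

    # Toy subset-sum instance (illustrative values only).
    NUMBERS = [3, 34, 4, 12, 5, 2, 27, 18, 9, 41]
    TARGET = 58

    def verify(subset):
        """Deterministic check: does the candidate hit the target sum?"""
        return sum(subset) == TARGET

    def random_candidate():
        """Random proposal: include each number with probability 1/2."""
        return [x for x in NUMBERS if random.random() < 0.5]

    def search(max_tries=100_000):
        for _ in range(max_tries):
            candidate = random_candidate()
            if verify(candidate):
                return candidate
        return None

    print(search())  # e.g. [3, 34, 12, 9] or another subset summing to 58

The proposal step is random, the verifier is deterministic and fast, which is exactly the mix of random and deterministic processes described above.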
Hydraulix989 2 hours ago
LLMs are considered Turing complete.