| ▲ | qsera 7 hours ago |
> AI could independently develop a cure for cancer

The answers to all your questions are contained in randomness. If you have a random sentence generator, there is a chance that it will output the answer to this question every time it is invoked. But that does not actually make it intelligent, does it?
| ▲ | famouswaffles 7 hours ago |
You are arguing a point no one is making. LLMs are not random sentence generators; their probability distributions are anything but random. You could build an actual random sentence generator, but no one would argue about its intelligence.
| ▲ | graemefawcett 7 hours ago |
This is exactly how problem solving works, regardless of the substrate of cognition. Start with "the answers to all your questions are contained in randomness" -> the unconstrained solution space. The game is whether you can inject enough constraints to collapse the solution space into one that can be searched before your TTL expires. In software, that's generally handled by writing efficient algorithms. With LLMs, apparently the SOTA for this is just "more data centers, six months, keep pulling the handle until the right tokens fall out".

Intelligence is just knowing which constraints to apply, and in what order, such that the search space is effectively partitioned. That is the same thing the "reasoning" traces do. It is also what thermostats, bacteria, sorting algorithms and rivers do, given a long enough timescale.

You can do the same thing with effective prompting. The LLM has no grounding, no experience and no context other than what is provided to it. You either need to build that or be that for the LLM to work effectively.

Yes, the answers to all your questions are contained in there. No, it's not randomness. It's probability, and that can be navigated if you know how.
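The constraint-collapse idea above can be sketched with a toy brute-force search. This is a hypothetical illustration, not a claim about how LLMs work: the "answer", the alphabet, and the three constraints are all made up. The point is only that each constraint partitions the candidate space, so the constrained search examines far fewer candidates before its "TTL" would expire.

```python
import itertools
import string

# Hypothetical toy problem: find a hidden 4-letter "answer" somewhere
# in the unconstrained space of all lowercase 4-letter strings (26**4).
ANSWER = "cure"
ALPHABET = string.ascii_lowercase


def search(constraints):
    """Enumerate candidates in order, skipping any that violate a constraint.
    Returns (answer_or_None, number_of_candidates_actually_examined)."""
    examined = 0
    for cand in itertools.product(ALPHABET, repeat=4):
        word = "".join(cand)
        if any(not ok(word) for ok in constraints):
            continue  # constraint pruned this region of the space
        examined += 1
        if word == ANSWER:
            return word, examined
    return None, examined


# Unconstrained: wander the full solution space until the answer turns up.
_, n_unconstrained = search([])

# Constrained: each predicate stands in for grounding/context/prompting.
# Choosing which constraints to apply is the "intelligence" in the comment.
constraints = [
    lambda w: w[0] == "c",      # e.g. grounding: first letter is known
    lambda w: "u" in w,         # e.g. context: a 'u' appears somewhere
    lambda w: w.endswith("e"),  # e.g. prompt: it rhymes with "pure"
]
_, n_constrained = search(constraints)

print(n_unconstrained, n_constrained)  # constrained search examines far fewer
```

Same answer either way; the only thing the constraints change is how much of the space has to be touched before the search terminates.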