anonymous908213 3 hours ago

We engage in many exercises in deterministic logic. Humans invented entire symbolic systems to describe mathematics without any prior art in a dataset. We apply these exercises in deterministic logic to reality, and reality confirms that our logic is correct to within extremely small tolerances, allowing us to do mind-boggling things like trips to the moon, or engineering billions of transistors organized on a nanometer scale and making them mimic the appearance of human language by executing really cool math really quickly.

None of this could have been achieved from scratch by probabilistic behaviour modelled on a purely statistical analysis of past information. That is immediately evident from the fact that, as mentioned, an LLM cannot do basic arithmetic, or any other deterministic logical exercise whose answer cannot be predicted from the training distribution, while we can.

People will point to humans sometimes making mistakes, but that is because we take mental shortcuts to save energy. If you put a gun to our head and say "if you get this basic arithmetic problem wrong, you will die," we will reason long enough to get it right. People try prompting that with LLMs, and they still can't do it, funnily enough.