chongli 5 hours ago
I’m confused. How is a program that uses static allocation not Turing complete?
skybrian 5 hours ago
A Turing machine has an unlimited tape. You can’t emulate it with a fixed amount of memory. It’s mostly a theoretical issue, though, because all real computer systems have limits. It’s just that in languages that assume unlimited memory, the limits aren’t written down. It’s not “part of the language.”
e12e 4 hours ago
It's not, if it can do I/O to network/disk..?
jerf 4 hours ago
Technically, your computer is not Turing Complete because it does not have access to infinite memory. Technically, once all the input has been given to a program, that program is a finite state automaton. That "once all the input has been given to the program" is doing a bit of heavy lifting, since we have a number of programs with either unbounded input, or input modulated by the output itself (e.g., when a human plays a game, their inputs are affected by previous outputs, which is the point after all), or other such things. But you can model any program as its initial contents plus all the inputs it will ever receive, in principle if not in fact, and then your program is really just a finite state automaton. Static allocation helps make it more clear, but technically all computers are bounded by their resources anyhow, so it really doesn't change anything. No program is Turing complete.

There are several reasons why we don't think of them this way, but probably the most important is that the toolkit you get with finite state automata doesn't apply well to real programs in our real universe. The fact that, mathematically, every program can be proved to halt or not in finite time, simply by running it until it either halts or repeats a full state of the system, is not particularly relevant to beings like us who lack access to the exponential space and time resources necessary to run that algorithm for real. The tools that come from modeling our systems as Turing Complete are much more practically relevant to our lives. There's also the fact that if your program never runs out of RAM, never reaches for more memory and gets told "no," it is indistinguishable from one running on a system that has infinite RAM.

Technically, nothing in this universe is Turing Complete.
We have an informal habit of referring to things that "would be Turing Complete if extended in some reasonably obvious manner to be infinitely large" as simply being Turing Complete, even though they aren't. If you really, really push that definition, the "reasonably obvious manner" can spark disagreements, but generally all those disagreements involve things so exponentially large as to be practically irrelevant anyhow, and they end up as philosophy. For example, you can't just keep loading a modern CPU up with more and more RAM; eventually you reach the point where there simply isn't enough state in the CPU to address more of it, not even if you hook together all the registers in the entire CPU and all of its cache and everything else it has... but such an amount of RAM is so inconceivably larger than our universe that it isn't going to mean anything practical in this universe. You then get into non-"obvious" ways you might extend it from there, like indirect referencing through other arbitrarily large values in RAM, but by then it is well past the point of having any real-world meaning.