| ▲ | d3ckard 6 hours ago |
| Personally, I believe static allocation has pretty huge consequences for theoretical computer science. It’s the only kind of program that can be actually reasoned about. Also, not exactly Turing complete in the classic sense. Makes my little finitist heart get warm and fuzzy. |
|
| ▲ | mikepurvis 5 hours ago | parent | next [-] |
| I'm not an academic, but all those ByteArray linked lists have me feeling like this is less "static allocation" and more "I re-implemented a site-specific allocator and all that that implies". Also it's giving me flashbacks to LwIP, which was a nightmare to debug when it would exhaust its preallocated buffer structures. |
| |
| ▲ | kevin_thibedeau 4 hours ago | parent | next [-] | | This is still a more dependable approach under resource constraints. Fragmentation is eliminated and you can monitor the pools for worst-case usage. The only other risk here versus true static allocation is a memory leak, which can be guarded against with suitable modern language design. LwIP's buffers get passed around across interrupt handler boundaries, in and out of various queues. That's what makes it hard to reason about. The allocation strategy itself is still sound when you can't risk using a heap. | |
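(A minimal sketch of the kind of fixed pool with usage monitoring described above; the pool size, block size, and high-water mark are illustrative assumptions, not LwIP's actual scheme:)

    /* Fixed pool of identically sized blocks, backed by static storage.
       A high-water mark records worst-case usage so the pool can be sized
       from measurements instead of guesses. A real embedded version would
       also need interrupt masking or a lock around alloc/free. */
    #include <stddef.h>
    #include <stdint.h>

    #define POOL_BLOCKS 32
    #define BLOCK_SIZE  128

    static uint8_t pool_storage[POOL_BLOCKS][BLOCK_SIZE]; /* static backing memory */
    static uint8_t pool_used[POOL_BLOCKS];                /* 1 = block in use */
    static size_t  pool_in_use = 0;
    static size_t  pool_high_water = 0;                   /* worst case observed */

    void *pool_alloc(void) {
        for (size_t i = 0; i < POOL_BLOCKS; i++) {
            if (!pool_used[i]) {
                pool_used[i] = 1;
                if (++pool_in_use > pool_high_water)
                    pool_high_water = pool_in_use;
                return pool_storage[i];
            }
        }
        return NULL; /* pool exhausted: caller must handle this explicitly */
    }

    void pool_free(void *p) {
        size_t i = (size_t)((uint8_t *)p - &pool_storage[0][0]) / BLOCK_SIZE;
        pool_used[i] = 0;
        pool_in_use--;
    }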
| ▲ | d3ckard 4 hours ago | parent | prev [-] | | Personally, I see dynamic allocation more and more as a premature optimization and a historical wart. We used to have very little memory, so we developed many tricks to handle it. Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful. Interestingly, embedded programming has a reputation for stability, and AFAIK game development is also more and more about avoiding dynamic allocation. | | |
| ▲ | mikepurvis 4 hours ago | parent [-] | | Also not a game dev, but my understanding there is that there are a lot of in-memory objects whose lifetimes are tied to specific game-time entities, like a frame, an NPC, the octree/BSP nodes corresponding to where the player is, etc. Under these conditions, you do need a fair bit of dynamism, but the deallocations can generally happen in big batches rather than piecemeal, so it's a good fit for slab- or arena-type systems. |
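(A rough sketch of the batch-deallocation pattern described above: a per-frame bump arena that hands out objects during a frame and releases them all at once when the frame ends. The arena size and names are illustrative, not from any particular engine:)

    /* Per-frame bump arena: each allocation is an O(1) pointer bump, and
       everything allocated during the frame is "freed" by a single reset. */
    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_ARENA_SIZE (1u << 20)   /* 1 MiB, illustrative */

    _Alignas(16) static uint8_t frame_arena[FRAME_ARENA_SIZE];
    static size_t frame_arena_used = 0;

    void *frame_alloc(size_t size) {
        size = (size + 15u) & ~(size_t)15u;        /* keep 16-byte alignment */
        if (frame_arena_used + size > FRAME_ARENA_SIZE)
            return NULL;                           /* arena exhausted */
        void *p = &frame_arena[frame_arena_used];
        frame_arena_used += size;
        return p;
    }

    void frame_reset(void) {
        frame_arena_used = 0;   /* drop every per-frame object in one batch */
    }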
|
|
|
| ▲ | muvlon 30 minutes ago | parent | prev | next [-] |
| > It’s the only kind of program that can be actually reasoned about. No. That is one restriction that allows you to theoretically escape the halting problem, but not the only one. Total functional programming languages, for example, do it by restricting recursion to a weaker form (such as structural recursion). Also, more generally, we can reason about plenty of programs written in entirely Turing-complete languages/styles. People keep misreading the halting problem as saying that we can never successfully do termination analysis on any program. We can, on many practical programs, including ones that do dynamic allocation. Conversely, there are programs that use only a statically bounded amount of memory for which this analysis is entirely out of reach. For example, you can write one that checks the Collatz conjecture for the first 2^1000 integers using only about a page of memory. |
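(A toy, scaled-down version of the kind of program described above, assuming 64-bit integers and a small range instead of 2^1000: it uses a fixed, tiny amount of memory, yet whether the inner loop terminates for every start value is, before you run it, exactly the Collatz question restricted to this range. Static allocation does nothing to make that analysis easier:)

    /* Checks the Collatz iteration for 1..LIMIT using a fixed, small amount
       of memory. LIMIT and the overflow guard are illustrative choices. */
    #include <stdint.h>
    #include <stdio.h>

    #define LIMIT 1000000ULL

    int main(void) {
        for (uint64_t start = 1; start <= LIMIT; start++) {
            uint64_t n = start;
            while (n != 1) {
                if (n % 2 == 0) {
                    n /= 2;
                } else {
                    if (n > (UINT64_MAX - 1) / 3) {   /* avoid 64-bit overflow */
                        fprintf(stderr, "overflow at %llu\n",
                                (unsigned long long)start);
                        return 1;
                    }
                    n = 3 * n + 1;
                }
            }
        }
        puts("all trajectories reached 1");
        return 0;
    }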
|
| ▲ | kibwen 4 hours ago | parent | prev | next [-] |
| > It’s the only kind of program that can be actually reasoned about. Theoretically infinite memory isn't really the problem with reasoning about Turing-complete programs. In practice, the inability to guarantee that any program will halt still applies to any system with enough memory to do anything more than serve as an interesting toy. I mean, I think this should be self-evident: our computers already do have finite memory. Giving a program slightly less memory to work with doesn't really change anything; you're still probably giving that statically-allocated program more memory than entire machines had in the 80s, and it's not like the limitations of computers in the 80s made us any better at reasoning about programs in general. |
| |
| ▲ | d3ckard 4 hours ago | parent [-] | | Yes, but allocations generate an ever-increasing combinatorial space of possible failure modes. Static allocation requires you to explicitly handle overflows, but by centralizing them you probably don't need as many handlers. Technically, all of this can be done in a language with dynamic allocation as well. It's just that you can't force the behavior. | | |
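(A small sketch of the "centralized overflow handling" idea mentioned above: a statically sized queue whose single enqueue path is the only place capacity can be exceeded, so there is exactly one failure mode to handle. The capacity and names are illustrative:)

    /* Statically sized message queue: the only way to run out of space is
       this one enqueue path, so overflow handling lives in a single place. */
    #include <stdbool.h>
    #include <stddef.h>

    #define QUEUE_CAP 64

    typedef struct { int payload; } msg_t;

    static msg_t  queue[QUEUE_CAP];
    static size_t head = 0, tail = 0, count = 0;

    bool queue_push(msg_t m) {
        if (count == QUEUE_CAP)
            return false;              /* the one, centralized overflow case */
        queue[tail] = m;
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        return true;
    }

    bool queue_pop(msg_t *out) {
        if (count == 0)
            return false;
        *out = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        return true;
    }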
| ▲ | kibwen 3 hours ago | parent [-] | | Sure, but let's be clear: it's a tradeoff. If every program reserved as much memory at startup as needed to service 100% of its theoretically-anticipated usage, the number of programs we could run in parallel would be drastically reduced. That is to say, static allocation makes OOM conditions dramatically more likely by its very nature, because programs are greedily sitting on unused memory that could be doled out to other processes. |
|
|
|
| ▲ | dnautics 5 hours ago | parent | prev | next [-] |
| i think you mean "exactly not Turing complete" |
| |
| ▲ | d3ckard 5 hours ago | parent [-] | | Nice correction :) It’s actually quite tricky though. The allocation still happens and it’s not limited to, so you could plausibly argue both ways. | | |
| ▲ | chongli 5 hours ago | parent [-] | | I’m confused. How is a program that uses static allocation not Turing complete? | | |
| ▲ | 4 hours ago | parent | next [-] | | [deleted] | |
| ▲ | skybrian 5 hours ago | parent | prev | next [-] | | A Turing machine has an unlimited tape. You can’t emulate it with a fixed amount of memory. It’s mostly a theoretical issue, though, because all real computer systems have limits. It’s just that in languages that assume unlimited memory, the limits aren’t written down. It’s not “part of the language.” | | |
| ▲ | chongli 38 minutes ago | parent | next [-] | | What about IO? Just because I have a statically allocated program with a fixed amount of memory doesn’t mean I can’t do IO. My fixed memory can just be a cache / scratchpad and the unlimited tape can work via IO (disk, network, etc). | |
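(A sketch of the "fixed memory plus IO as the tape" idea in the comment above: a file stands in for the tape, and only one cell is held in RAM at a time. Of course the disk is still finite, which is the parent comment's point, but the limit is no longer baked into the program's memory. The one-sided tape and file name are illustrative assumptions:)

    /* Treat a file as the "unbounded" (one-sided) tape of a Turing machine,
       keeping only one cell in RAM at a time. The machine's transition table
       is omitted; tape_read/tape_write are the point. */
    #include <stdio.h>

    static FILE *tape;

    int tape_read(long pos) {
        if (fseek(tape, pos, SEEK_SET) != 0) return 0;
        int c = fgetc(tape);
        return (c == EOF) ? 0 : c;     /* unwritten cells read as blank (0) */
    }

    void tape_write(long pos, int symbol) {
        fseek(tape, pos, SEEK_SET);
        fputc(symbol, tape);
    }

    int main(void) {
        tape = fopen("tape.bin", "w+b");   /* grows as far as the disk allows */
        if (!tape) return 1;
        tape_write(0, 'a');
        printf("%c\n", tape_read(0));
        fclose(tape);
        return 0;
    }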
| ▲ | dnautics 4 hours ago | parent | prev [-] | | If we get REALLY nitpicky, Zig currently (but not in the future) allows unbounded function recursion, which "theoretically" assumes unlimited stack size, so it's potentially "still technically theoretically Turing complete". For now. |
| |
| ▲ | e12e 4 hours ago | parent | prev | next [-] | | It's not, if it can do IO to network/disk...? |
| ▲ | jerf 4 hours ago | parent | prev [-] | | Technically, your computer is not Turing complete because it does not have access to infinite memory. Technically, once all the input has been given to a program, that program is a finite state automaton. That "once all the input has been given to the program" is doing a bit of heavy lifting, since we have a number of programs where we have either unbounded input, or input modulated by the output itself (e.g., when a human plays a game their inputs are affected by previous outputs, which is the point after all), or other such things. But you can model all programs as their initial contents plus all inputs they will ever receive, in principle if not in fact, and then your program is really just a finite state automaton. Static allocation helps make it more clear, but technically all computers are bounded by their resources anyhow, so it really doesn't change anything. No program is Turing complete.

The reasons why we don't think of them this way are severalfold, but probably the most important is that the toolkit you get with finite state automata doesn't apply well to real programs in our real universe. The fact that, mathematically, all programs can be proved to halt or not in finite time by simply running them until they either halt or a full state of the system repeats is not particularly relevant to beings like us, who lack access to the requisite exponential space and time resources necessary to run that algorithm for real. The tools that come from modeling our systems as Turing complete are much more practically relevant to our lives. There's also the fact that if your program never runs out of RAM, never reaches for more memory and gets told "no", it is indistinguishable from running on a system that has infinite RAM.

Technically, nothing in this universe is Turing complete. We have an informal habit of referring to things that "would be Turing complete if extended in some reasonably obvious manner to be infinitely large" as simply being Turing complete even though they aren't. If you really, really push that definition, the "reasonably obvious manner" can spark disagreements, but generally all those disagreements involve things so exponentially large as to be practically irrelevant anyhow, and they amount to philosophy in the end. For example, you can't just load a modern CPU up with more and more RAM; eventually you would get to the point where there simply isn't enough state in the CPU to address more RAM, not even if you hook together all the registers in the entire CPU and all of its cache and everything else it has... but such an amount of RAM is so inconceivably larger than our universe that it isn't going to mean anything practical in this universe. You then get into non-"obvious" ways you might extend it from there, like indirect referencing through other arbitrarily large values in RAM, but it is already well past the point where it has any real-world meaning. |
|
|
|
|
| ▲ | IshKebab 4 hours ago | parent | prev [-] |
| > It’s the only kind of program that can be actually reasoned about. What do you mean? There are loads of formal reasoning tools that use dynamic allocation, e.g. Lean. |