Veserv 3 days ago
What a clueless post. Even ignoring their massive overstatement of the difficulty and hardware complexity of hardware mapping tables, they appear to not even understand the problems solved by mapping tables.

Okay, let us say you have a physical object store. How are the actual contents of those objects stored? Are they stored in individual, isolated memory blocks? What if I want to make a 4 GB array? Do I need to have 4 GB memory blocks? What if I only have 6 GB? That is obviously unworkable. Okay, we can solve that by compacting our physical object store onto a physical linear store and just presenting an object store as an abstraction. Sure, we have a physical linear store, but we never present that to the user.

But what if somebody deallocates an object? Obviously we should be able to reuse that underlying physical linear store. What if they allocated a 4 GB array? Obviously we need to be able to fragment that into smaller pieces for future objects. What if we deallocated 4 GB of disjoint 4 KB objects? Should we fail to allocate an 8 KB object just because the fragments are not contiguous? Oh, just keep in mind the precise structure of the underlying physical store to avoid that (what a leaky and error-prone abstraction). Oh, but what if there are multiple programs running, some potentially even buggy? How the hell am I supposed to keep track of the shared physical store, and of global fragmentation of the shared resource?

Okay, we can solve all of that with a level of indirection by giving you a physical object key instead of a physical object "reference". You present the key, and then we have a runtime structure that allows us to look up where in the physical linear store we have put that data. This allows us to move and compact the underlying storage while letting you hold a stable key. Now we have a mapping between object key and linear physical memory. But what if there are multiple programs on the same machine, some of which may be untrustworthy?
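The key-instead-of-reference indirection described above can be sketched in a few lines. This is a toy illustration, not any real allocator; the class and method names (`ObjectStore`, `alloc`, `compact`) are invented. The point is that because programs hold keys, the store can slide live objects together to defragment the linear backing without invalidating anything:

```python
# Toy sketch (all names invented): an object store that hands out stable
# keys instead of raw physical addresses, so the backing linear store can
# be compacted without invalidating anything the program holds.

class ObjectStore:
    def __init__(self, size):
        self.memory = bytearray(size)   # the physical linear store
        self.table = {}                 # key -> (offset, length) indirection
        self.next_key = 0
        self.next_free = 0              # bump allocator; compaction reclaims holes

    def alloc(self, length):
        if self.next_free + length > len(self.memory):
            self.compact()              # try to reclaim freed fragments
            if self.next_free + length > len(self.memory):
                raise MemoryError("out of physical store")
        key = self.next_key
        self.next_key += 1
        self.table[key] = (self.next_free, length)
        self.next_free += length
        return key                      # a stable key, not an address

    def free(self, key):
        del self.table[key]             # space is reclaimed at next compaction

    def compact(self):
        # Slide every live object to the front of the linear store.
        # Only the table changes; the keys programs hold stay valid.
        dest = 0
        for key in sorted(self.table, key=lambda k: self.table[k][0]):
            off, length = self.table[key]
            self.memory[dest:dest + length] = self.memory[off:off + length]
            self.table[key] = (dest, length)
            dest += length
        self.next_free = dest

    def read(self, key, rel_off):
        off, length = self.table[key]
        assert 0 <= rel_off < length    # object-relative bounds check
        return self.memory[off + rel_off]

    def write(self, key, rel_off, value):
        off, length = self.table[key]
        assert 0 <= rel_off < length
        self.memory[off + rel_off] = value
```

A program that frees one object and allocates another of the same size succeeds even when the freed fragment is not where the new allocation would naturally land, because `compact` moves live data underneath the stable keys.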
What if they just start using keys they were not given? Obviously we need some scheme to prevent anybody from using any key. Maybe we could solve that by tagging every object in the system with a list of every program allowed to use it? But the number of programs is dynamic, and if we have millions or billions of objects, each new program would require re-tagging all of those objects. We could make that list encode only the "allowed" programs, which would save space and cleanup work, but how would the hardware do that lookup efficiently, and how would it store that data efficiently?

Okay, we can solve that by having a per-program mapping from object key to linear physical memory. Oh no, that is looking suspiciously close to the per-program mapping from linear virtual memory to linear physical memory. Hopefully there are no other problems that will just land us back right where we started.

Oh no, here comes another one. How is your machine storing this mapping from object key to linear physical memory? If you remember your data structures courses, it would usually be implemented as either a hash table or a tree. A tree sounds too suspiciously close to what currently exists, so let us use a hash table. Okay, cool, how big should the hash table be? What if I want a billion objects in this program and a thousand objects in a different program? I guess we should use a growable hash table. All that happens is that once we allocate enough objects, we allocate a new, dynamically sized storage structure, then bulk rehash and insert all the old entries. That is amortized O(1), just at the cost of an unpredictable pause on potentially any memory allocation, a pause which can not only be gigantic but is proportional to the number of live allocations. That is fine if our goal is just putting in a whole hardware garbage collector, but not really applicable for high-performance computing.
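The growable-hash-table pause is easy to make concrete. The sketch below is illustrative (open addressing with linear probing; the class name and the `last_insert_moves` counter are invented for instrumentation): most inserts touch one slot, but the insert that triggers a grow rehashes every live entry, so its cost scales with the number of live objects:

```python
# Illustrative growable hash table (invented names): inserts are amortized
# O(1), but a grow rehashes every live entry, so the triggering insert's
# cost is proportional to the number of live allocations.

class GrowableTable:
    def __init__(self):
        self.capacity = 8
        self.slots = [None] * self.capacity
        self.count = 0
        self.last_insert_moves = 0      # entries touched by the last insert

    def _probe(self, slots, key):
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i][0] != key:
            i = (i + 1) % len(slots)    # linear probing
        return i

    def insert(self, key, value):
        self.last_insert_moves = 1
        if (self.count + 1) * 2 > self.capacity:       # keep load factor <= 1/2
            new_slots = [None] * (self.capacity * 2)
            for entry in self.slots:                   # the "pause": rehash all
                if entry is not None:
                    new_slots[self._probe(new_slots, entry[0])] = entry
                    self.last_insert_moves += 1
            self.slots, self.capacity = new_slots, self.capacity * 2
        self.slots[self._probe(self.slots, key)] = (key, value)
        self.count += 1

    def lookup(self, key):
        entry = self.slots[self._probe(self.slots, key)]
        return entry[1] if entry is not None else None
```

Inserting 100 entries, the grow at 64 live entries rehashes all 64 of them in one operation, which is exactly the unbounded per-operation latency the comment objects to.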
For high-performance computing we would want worst-case bounded time and memory cost (not amortized, but per-operation). Okay, I guess we have to go with a per-program tree-based mapping from object key to linear physical memory. But it is still an object store, so we won, right? How is the hardware going to walk that efficiently? For the hardware to walk it efficiently, you are going to want a highly regular structure with high fanout, both to maximize the value of the cache lines you load and to reduce the worst-case number of cache lines you need to load. So you will want a B-tree structure of some form. Oh no, that is exactly what hardware mapping tables look like.

But it is still an object store, so we won, right? But what if I deallocated 4 GB of disjoint 4 KB objects? You could move and recompact all of that memory, but why? You already have a mapping structure with a layer of indirection via object keys. Just create an interior mapping within an object between object-relative offsets and potentially disjoint linear physical memory. Then you do not need physically contiguous backing; you can use a disjoint physical linear store to provide the abstraction of a linear object store. And now we have a per-program tree-based mapping from linear object address to linear physical memory.

But what if the objects are of various sizes? In some cases the hardware will traverse the mapping from object key to linear object store, then potentially need to traverse another mapping from a large linear object address to linear physical memory. If we just compact the linear object store mappings, then we can unify the trees and provide a common linear-address-to-linear-physical-memory mapping, and the tree-based walk will be tightly bounded for all lookups. And there we have it: a per-program tree-based mapping between linear virtual memory and linear physical memory, one step at a time.
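The structure the argument converges on can be sketched as a fixed-fanout radix tree, which is the shape of real hardware page tables. The fanout of 512 (one 4 KB node of 8-byte entries), three levels, and 4 KB pages are illustrative parameters; the function names are invented. Every translation costs at most `LEVELS` node loads, which is the tightly bounded walk the comment asks for, and disjoint physical frames can back a contiguous virtual range:

```python
# Sketch of a fixed-fanout radix tree mapping linear virtual pages to
# physical frames -- the shape of real hardware page tables. Fanout 512,
# 3 levels, and 4 KB pages are illustrative numbers, not a specific ISA.

FANOUT = 512          # entries per node: one 4 KB node of 8-byte entries
LEVELS = 3
PAGE_SHIFT = 12       # 4 KB pages
INDEX_BITS = 9        # log2(FANOUT) bits of the address consumed per level

def new_node():
    return [None] * FANOUT

def map_page(root, vaddr, frame):
    """Install vaddr -> frame, allocating interior nodes on demand."""
    node = root
    for level in range(LEVELS - 1, 0, -1):
        index = (vaddr >> (PAGE_SHIFT + INDEX_BITS * level)) & (FANOUT - 1)
        if node[index] is None:
            node[index] = new_node()
        node = node[index]
    node[(vaddr >> PAGE_SHIFT) & (FANOUT - 1)] = frame

def translate(root, vaddr):
    """Walk the tree like the hardware would: at most LEVELS loads."""
    node = root
    for level in range(LEVELS - 1, 0, -1):
        node = node[(vaddr >> (PAGE_SHIFT + INDEX_BITS * level)) & (FANOUT - 1)]
        if node is None:
            return None                # unmapped: a page fault
    frame = node[(vaddr >> PAGE_SHIFT) & (FANOUT - 1)]
    if frame is None:
        return None
    # Disjoint physical frames can back a contiguous virtual range.
    return (frame << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))
```

The high fanout is what makes the worst case small: with these parameters, three node loads cover a 39-bit virtual address space, and each load pulls a densely packed, cache-friendly node.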
senko 3 days ago
> What a clueless post [...a lot of OS memory handling details...]. And there we have it.

Considering who wrote the post, I would bet the author is aware of what you've described here.
cryptonector 3 days ago
If PHK, DJB, <insert luminary> writes a post that comes across as clueless or flat out wrong, I'm going to read it and read it carefully. It does happen that <luminary> says dumb and/or incorrect things from time to time, but most likely there will be a very cool nugget of truth in there.

Regarding what PHK seems to be asking for, I think it's... linear addressing of physical memory, yes (because what else can you do?) but with pointer values that have attached capabilities, so that you can dispense with virtual-to-physical memory mapping (and a ton of MMU, TLB, and page-table hardware and software complexity and slowness) and replace it with hardware capability verification. Because such pointers are inherently abstract, the fact that the underlying physical memory address space is linear is irrelevant, and the memory model looks more like "every object is a segment" if you squint hard. Obviously we need to be able to address arrays of bytes, for example, so within an object you have linear addressing, and overall you have it too because physical memory is linear; but otherwise you have a fiction of a pile of objects, some of which you have access to and some of which you don't.
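The pointer-with-attached-capabilities idea can be roughed out as follows. This is loosely in the spirit of capability machines like CHERI, but the field and method names here are invented for illustration: every pointer carries base, length, and permissions over a flat physical memory, the hardware checks them on each access, and capabilities can be narrowed but never widened:

```python
# Rough sketch of pointers with attached capabilities (loosely in the
# spirit of capability hardware; all names here are invented): access is
# checked against the capability itself, not a virtual-memory mapping.

from dataclasses import dataclass

PHYS_MEM = bytearray(1 << 16)   # flat, linear physical memory; no page tables

@dataclass(frozen=True)
class Capability:
    base: int        # start of the object in physical memory
    length: int      # object size; addressing inside it is linear
    perms: str       # subset of "rw"

    def load(self, offset):
        # Hardware-style check on every access: bounds plus permission.
        if not (0 <= offset < self.length) or "r" not in self.perms:
            raise PermissionError("capability violation")
        return PHYS_MEM[self.base + offset]

    def store(self, offset, value):
        if not (0 <= offset < self.length) or "w" not in self.perms:
            raise PermissionError("capability violation")
        PHYS_MEM[self.base + offset] = value

    def restrict(self, perms):
        # Capabilities can be narrowed but never widened.
        assert set(perms) <= set(self.perms)
        return Capability(self.base, self.length, perms)
```

Within the object the addressing is linear, as the comment notes, but a program can only reach memory some capability it holds spans: the "pile of objects" fiction over a linear physical store.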
amelius 3 days ago
> What a clueless post. Even ignoring their massive overstatement of the difficulty and hardware complexity of hardware mapping tables, they appear to not even understand the problems solved by mapping tables.

From the article:

> And before you tell me this is impossible: The computer is in the next room, built with 74xx-TTL (transistor-transistor logic) chips in the late 1980s. It worked back then, and it still works today.