mpweiher | 2 days ago
It isn't 100% proof that the concept is flawed, but the fact that the CPU manufacturer that was, for decades, the most successful in the world couldn't make segmentation work across multiple attempts is pretty strong evidence that there are at least, er, "issues" that aren't immediately obvious. I think it is safe to assume that they applied what they learned from their earlier failures to their later failures. Again, we can never be 100% certain of counterfactuals, but the assertion that linear address spaces were only there for backwards compatibility with small machines is simply historically inaccurate.

Also, Intel weren't the only ones. The first MMU for the Motorola MC68K was the MC68451, a segmented MMU. It was later replaced by the MC68851, a paged MMU. The MC68451, and with it segmentation, was rarely used and eventually discontinued. The MC68851 was comparatively widely used, and later integrated in simplified form into subsequent CPUs like the MC68030 and its successors. So there as well, segmentation was tried first and later abandoned. Which again isn't definitive proof that segmentation is flawed, but it is far more evidence than you give credit for in your article. People and companies again and again start out with segmentation, can't make it work, and then abandon it for linear paged memory.

My interpretation is that segmentation is one of those things that sounds great in theory but doesn't work nearly as well in practice. Just thinking about it in the abstract, making an object boundary also a physical, hardware-enforced protection boundary sounds absolutely perfect to me! For example, something like the LOOM object-based virtual memory system for Smalltalk (though that was more software). But theory ≠ practice.

Another example of something that sounded great in theory was SOAR: Smalltalk on a RISC. They tried implementing a good part of the expensive bits of Smalltalk in silicon in a custom RISC design. It worked, but the benefits turned out to be minimal. What actually helped were larger caches and higher memory bandwidth, so RISC. Another example was the Rekursiv, which also had object-capability addressing and a lot of other OO features in hardware. It also didn't go anywhere.

Again: not everything that sounds good in theory also works out in practice.
phkamp | 2 days ago
All the examples you bring up are from an entirely different time in terms of hardware, a time when the major technological limitations were how many pins a chip could have and two-layer PCBs. Ideas can be good but fail because they are premature relative to the technological means we have to implement them. (Electric vehicles will probably be the future textbook example of this.)

The interesting detail in the R1000's memory model is that it combines segmentation with pages, removing the need for segments to be contiguous in physical memory. That gets rid of fragmentation, which was a huge issue for the architectures you mention.

There will obviously always be a tension between how much info you stick into whatever passes for a "pointer" and how big it becomes (i.e. "fat pointers"), but I think we can safely say that CHERI has documented that fat pointers are well worth their cost, and now we are just discussing what goes in them.
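To make the idea concrete, here is a minimal C sketch of a fat pointer over a segmented-but-paged memory model. This is not the R1000's or CHERI's actual format; the struct fields, sizes, and the `translate` helper are all illustrative assumptions. The point it shows is that the pointer carries the segment identity and length (so bounds can be checked on every access), while a per-segment page table maps the segment onto physical frames, so the segment itself never has to be physically contiguous.

    /* Hypothetical fat-pointer sketch: segment-based bounds checking
     * on top of per-segment paging. All names/sizes are assumptions. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096u

    /* The "fat" pointer: segment id, offset, and segment length. */
    typedef struct {
        uint32_t segment;   /* which segment (object) this points into */
        uint32_t offset;    /* byte offset within the segment          */
        uint32_t length;    /* segment length, for bounds checks       */
    } fat_ptr;

    /* Per-segment page table: maps segment-relative pages to physical
     * frames, so the segment need not be contiguous in physical memory. */
    typedef struct {
        uint32_t   length;      /* segment length in bytes          */
        uintptr_t *frames;      /* physical frame base for each page */
        size_t     frame_count;
    } segment_desc;

    /* Translate a fat pointer to a physical address, refusing any
     * access outside the segment -- the check hardware would enforce. */
    static bool translate(const segment_desc *seg, fat_ptr p, uintptr_t *phys)
    {
        if (p.offset >= seg->length)        /* bounds violation */
            return false;
        size_t page = p.offset / PAGE_SIZE;
        if (page >= seg->frame_count)       /* no frame mapped  */
            return false;
        *phys = seg->frames[page] + (p.offset % PAGE_SIZE);
        return true;
    }

The cost shows up exactly where the comment says it does: every pointer is now 12 bytes instead of 4 in this toy layout, which is the "how big does the pointer become" tension.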
| ||||||||