mpweiher 2 days ago

It isn't 100% proof that the concept is flawed, but the fact that the most successful CPU manufacturer in the world for decades couldn't make segmentation work across multiple attempts is pretty strong evidence that there are, at the very least, er, "issues" that aren't immediately obvious.

I think it is safe to assume that they applied what they learned from their earlier failures to their later failures.

Again, we can never be 100% certain of counterfactuals, but certainly the assertion that linear address spaces were only there for backwards compatibility with small machines is simply historically inaccurate.

Also, Intel weren't the only ones. The first MMU for the Motorola MC68K, the MC68451, was a segmented MMU. It was later replaced by the MC68851, a paged MMU. The MC68451, and segmentation with it, was rarely used and then discontinued. The MC68851 was comparatively widely used, and a simplified form was later integrated into subsequent CPUs such as the MC68030 and its successors.

So there as well, segmentation was tried first and then later abandoned. Which again, isn't definitive proof that segmentation is flawed, but way more evidence than you give credit for in your article.

People and companies again and again start out with segmentation, can't make it work, and later abandon it for linear, paged memory.

My interpretation is that segmentation is one of those things that sounds great in theory, but doesn't work nearly as well in practice. Just thinking about it in the abstract, making an object boundary also a physical hardware-enforced protection boundary sounds absolutely perfect to me! For example something like the LOOM object-based virtual memory system for Smalltalk (though that was more software).

But theory ≠ practice. Another example of something that sounded great in theory was SOAR: Smalltalk On A RISC. They implemented a good part of the expensive bits of Smalltalk in silicon in a custom RISC design. It worked, but the benefits turned out to be minimal. What actually helped were larger caches and higher memory bandwidth, in other words: plain RISC.

Another example was the Rekursiv, which also had object-capability addressing and a lot of other OO features in hardware. Also didn't go anywhere.

Again: not everything that sounds good in theory also works out in practice.

phkamp 2 days ago | parent

All the examples you bring up are from an entirely different time in terms of hardware, a time when the major technological limitations included how many pins a chip could have and two-layer PCBs.

Ideas can be good, but fail because they are premature relative to the technological means we have to implement them. (Electric vehicles will probably become the future textbook example of this.)

The interesting detail in the R1000's memory model is that it combines segmentation with pages, removing the need for segments to be contiguous in physical memory. That eliminates fragmentation, which was a huge problem for the architectures you mention.
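
To make that concrete, here is a rough sketch of the shape of such a two-level translation (my own simplification in C, not the R1000's actual format): the segment provides bounds and an identity, while the pages within it map independently to physical frames, so the segment never needs to be physically contiguous.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified segment-plus-page translation: segment -> per-segment
     * page table -> physical frame. Not the R1000's actual scheme, just
     * the general shape being described above. */
    #define PAGE_SIZE     4096u
    #define PAGES_PER_SEG 16u

    typedef struct {
        uint32_t limit;                  /* segment length in bytes, for bounds check */
        uint32_t frames[PAGES_PER_SEG];  /* per-segment page table: page -> frame */
    } segment;

    /* Translate (segment, offset) to a physical address, or ~0 on a fault. */
    static uint64_t translate(const segment *s, uint32_t offset) {
        if (offset >= s->limit)
            return ~0ull;                /* segment bounds violation */
        uint32_t page = offset / PAGE_SIZE;
        uint32_t off  = offset % PAGE_SIZE;
        return (uint64_t)s->frames[page] * PAGE_SIZE + off;
    }

    int main(void) {
        /* One segment whose pages sit in non-contiguous physical frames. */
        segment s = { .limit = 3 * PAGE_SIZE, .frames = { 7, 42, 3 } };
        printf("0x%llx\n", (unsigned long long)translate(&s, PAGE_SIZE + 4));
        printf("0x%llx\n", (unsigned long long)translate(&s, 5 * PAGE_SIZE)); /* fault */
        return 0;
    }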

But there will obviously always be a tension between how much info you stick into whatever passes for a "pointer" and how big it becomes (i.e. "fat pointers"). I think we can safely say that CHERI has documented that fat pointers are well worth their cost, and now we are just discussing what goes in them.
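
For illustration, a minimal sketch of what a fat pointer carries and why it costs extra bits (this is not the actual CHERI capability encoding, which compresses bounds and permissions into 128 bits; it just shows the general idea of a pointer carrying its own bounds):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A pointer that carries its own bounds, so every access can be
     * checked. Here it is three machine words instead of one, which is
     * exactly the size tension discussed above. */
    typedef struct {
        uintptr_t base;   /* start of the object this pointer may touch */
        size_t    length; /* size of that object in bytes */
        uintptr_t addr;   /* current address; must stay within the bounds */
    } fatptr;

    /* Check a dereference of access_size bytes against the carried bounds. */
    static int fat_in_bounds(fatptr p, size_t access_size) {
        return p.addr >= p.base && p.addr + access_size <= p.base + p.length;
    }

    int main(void) {
        char *buf = malloc(16);
        fatptr p = { (uintptr_t)buf, 16, (uintptr_t)buf + 20 }; /* out of bounds */
        printf("access ok: %d\n", fat_in_bounds(p, 1));         /* prints 0 */
        free(buf);
        return 0;
    }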

mpweiher 13 hours ago | parent

> examples from different time

Yes, because (a) you asked a question about how we got here and (b) that question actually has a non-rhetorical historical answer. This is how we got here, and that's when it pretty much happened.

> one of the major technological limitations were how many pins a chip could have and two-layer PCBs

How is that relevant to the question of segmentation vs. linear address space? In your esteemed opinion? The R1000 is also from that era, the MC68451 and MC68851 were (almost) contemporaries, and once the MMU was integrated into the CPU, the pin count was exactly the same.

> Ideas can be good, but fail because they are premature

Sure, and it is certainly possible that in the future, segmentation will make a comeback. I never wrote it couldn't. I answered your question about how we got here, which you answered incorrectly in the article.

And yes, the actual story of how we got here does indicate that hardware segmentation is problematic, though it doesn't tell us why. It also strongly hints that hardware segmentation is superficially attractive but, less obviously, subtly and deeply flawed.

CHERI is an interesting approach and seems workable. But I don't see how it's been documented to be worth the cost, as we simply haven't had widespread adoption yet. Memory pressure is currently the main performance driver ("computation is what happens in the gaps while the CPU waits for memory"), so doubling pointer sizes is definitely going to be an issue.

It certainly seems possible that CHERI will push us even more strongly away from direct pointer usage than 64 bits already did, towards base + index addressing. Maybe a return to object tables for OO systems? And for large data sets, structure-of-arrays layouts; for graphs, adjacency tables. Of course those tend to be safe already.
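
To sketch what that base + index style might look like (purely illustrative, all names hypothetical): user code holds small integer handles instead of raw pointers, and every access goes through a bounds-checked object table, so the "pointers" handed around stay narrow.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical object-table scheme: code passes around 32-bit handles,
     * and only the table itself holds real addresses. */
    #define MAX_OBJECTS 1024

    typedef uint32_t handle;   /* narrow "pointer" handed to user code */

    typedef struct {
        void  *base;           /* real address, known only to the table */
        size_t length;         /* object size, for bounds checking */
    } table_entry;

    static table_entry object_table[MAX_OBJECTS];
    static handle next_free = 0;

    static handle obj_alloc(void *base, size_t length) {
        /* No free-list or overflow handling in this sketch. */
        object_table[next_free] = (table_entry){ base, length };
        return next_free++;
    }

    /* Every access is base + index, checked against the table. */
    static void *obj_at(handle h, size_t offset) {
        table_entry e = object_table[h];
        if (offset >= e.length)
            return NULL;       /* out of bounds */
        return (char *)e.base + offset;
    }

    int main(void) {
        char storage[32];
        handle h = obj_alloc(storage, sizeof storage);
        memcpy(obj_at(h, 0), "hi", 3);
        printf("%s in-bounds: %p, out-of-bounds: %p\n",
               (char *)obj_at(h, 0), obj_at(h, 31), obj_at(h, 32));
        return 0;
    }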