skissane 3 days ago

I think it is a pity Intel went with 16 byte paragraphs instead of 256 byte paragraphs for the 8086.

With 16 byte paragraphs, a 16 bit segment and 16 bit offset can only address 1MiB (ignoring the HMA you can get on 80286+).

With 256 byte paragraphs, the 8086 would have been able to address 16MiB in real mode (again not counting the HMA, which would have been a bit smaller: 65,280 bytes instead of 65,520 bytes).
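The arithmetic behind those two figures can be sketched quickly (a Python illustration; the function names are mine, not anything from the 8086 manuals):

```python
def linear_16(seg: int, off: int) -> int:
    """What the 8086 actually does: 16-byte paragraphs (segment << 4)."""
    return (seg << 4) + off  # wraps at 1 MiB on a real 8086 (no A20 line)

def linear_256(seg: int, off: int) -> int:
    """Hypothetical: 256-byte paragraphs (segment << 8)."""
    return (seg << 8) + off

# Top of memory reachable by each scheme, at FFFF:FFFF:
print(hex(linear_16(0xFFFF, 0xFFFF)))   # 0x10ffef  -> 1 MiB plus a 65,520-byte HMA
print(hex(linear_256(0xFFFF, 0xFFFF)))  # 0x100feff -> 16 MiB plus a 65,280-byte HMA
```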

pwg 2 days ago | parent | next [-]

Intel also released both the 8086 and 8088 as 40pin DIP's.

Squeezing in four more address pins would have meant multiplexing four more of the pins on the chip. If you exclude power/ground pins, there are only 13 pins that are not already multiplexed, and several of those either can't be multiplexed (because they are inputs: CLK, INTR, NMI) or would have made bus design even more painful than it already is for these chips.

The 4-bit shift, instead of an 8-bit shift, for the segment registers likely gave as big an address bus as they could manage while still fitting the constraint of "fits into a 40-pin DIP".

https://en.wikipedia.org/wiki/File:Intel_8086_pinout.svg

spc476 3 days ago | parent | prev [-]

The 8086 was released in '78 (or thereabouts). 64K of RAM was very expensive at the time, and wasting up to 256 bytes just to align segments would have been extravagant. Also, the 8086 was meant as a stop-gap product until the Intel 432 was released (hint: it never really was, being hideously expensive and hideously slow, though bits of it showed up in the 80286 and 80386).

The 80286 changed how the segment registers worked in protected mode, giving access to 16M of address space, but couldn't change them in real mode, as doing so would have broken a ton of code. Neither Intel nor IBM ever thought the IBM PC would take over the market like it did.

gpderetta 2 days ago | parent [-]

I still do not understand this point: Intel could have used 16 bits from the offset register and 4 bits from the segment register to get non-overlapping segments, leaving the top 12 bits of the segment register unused (either masked out, mirroring the low bits, or trapping). It wouldn't have changed the number of address lines needed to address 1M of memory, but it would have made extending the address space further much simpler.
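A quick sketch of what's being proposed, next to what the 8086 actually does (Python; the names and the 16-segment tiling are my reading of the comment, not any shipped design):

```python
def linear_8086(seg: int, off: int) -> int:
    """What the 8086 actually does: overlapping segments, 4-bit shift."""
    return ((seg << 4) + off) & 0xFFFFF  # wraps at 1 MiB (no A20 line)

def linear_nonoverlap(seg: int, off: int) -> int:
    """Hypothetical: the low 4 segment bits select one of sixteen
    disjoint 64 KiB blocks that tile exactly 1 MiB."""
    return ((seg & 0xF) << 16) | off

# In the real scheme, many segment values overlap:
print(linear_8086(0x1000, 0x0) == linear_8086(0x0FFF, 0x10))  # True

# In the hypothetical scheme, segment 1 starts exactly where segment 0 ends,
# and honoring more segment bits later would extend the space without
# changing the meaning of any existing (segment, offset) pair.
print(hex(linear_nonoverlap(0x0001, 0x0000)))  # 0x10000
```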

rep_lodsb 2 days ago | parent | next [-]

As TFA explains, the purpose of segment registers wasn't just to extend the address space; it was to make code and data relocatable without needing to fix up every address referenced.

They considered 256-byte alignment too wasteful, and 64K would have been ridiculous (many business computers at the time didn't even have that much memory).

smitelli 2 days ago | parent | prev | next [-]

Scenario A: Picture that a quick, tiny function is needed that can load data from struct members and operate on them. The structs are tiny but there are a whole lot of them, and the values of interest always start at offsets e.g. 0, 4, and 8. If the structs can be stored in memory aligned on a segment boundary, a pointer can be constructed where offset 0 always points to the beginning of the struct, and the code can use the literal offsets 0, 4, 8 added to the pointer base without having to do any further arithmetic.

Scenario B: Imagine you're writing a page of video to the VGA framebuffer. Glossing over a whole lot of minutiae, you can simply jam 64,000 bytes into the address and data lines starting at A000:0000 without needing to stop and think about what you're doing w.r.t. the segment registers. Any kind of segment change every n bytes would require the loop to be interrupted some number of times over the course of the transfer to update DS or ES. This would also prevent something like `rep movs` from being able to work on a full screenful of data.

The 16-byte paragraph, together with the many segment/offset aliases that could be constructed to refer to a single linear memory address, was a design choice that tried to serve the needs of both of those groups.
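That aliasing is easy to count (Python sketch; the address is the classic CGA/EGA text buffer at linear 0xB8000, chosen just as an example):

```python
def aliases(linear: int):
    """All (seg, off) pairs with seg*16 + off == linear and off in 0..0xFFFF,
    under the real 16-byte-paragraph scheme."""
    pairs = []
    for seg in range(0x10000):
        off = linear - (seg << 4)
        if 0 <= off <= 0xFFFF:
            pairs.append((seg, off))
    return pairs

text_buf = aliases(0xB8000)
print(len(text_buf))  # 4096 distinct segment:offset spellings of one address
```

Every alias from B800:0000 down to A801:FFF0 names the same byte, which is exactly what lets one piece of code treat a structure base as offset 0 while another walks the same memory linearly.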

pwg 2 days ago | parent | prev [-]

> but it would have made extending the address space further much simpler.

Given published information [1] that the 8086 was designed in a weekend as a panicked stop-gap, to provide some form of "more advanced CPU" that would keep Intel in the market while the iapx432 project fell further and further behind schedule, it seems doubtful that the designers were also thinking about "ease of further expansion in a future revision" for what was, at the time, just a stop-gap CPU to sell while awaiting the iapx432. Published accounts say Intel never expected the 8086 to spawn the huge extended family it did, and instead expected the iapx432 to be that "grand family ancestor". The market, of course, had other ideas, and IBM's choice of the 8088 for the IBM PC was the catalyst that launched the 8086 design into the family it is today.

[1] I no longer have a reference to the publication