| ▲ | ndiddy 9 hours ago |
| Fun fact: Bob Colwell (chief architect of the Pentium Pro through the Pentium 4) recently revealed that the Pentium 4 had its own 64-bit extension to x86 that would have beaten AMD64 to market by several years, but management forced him to disable it because they were worried it would cannibalize IA-64 sales.

> Intel’s Pentium 4 had our own internal version of x86–64. But you could not use it: we were forced to “fuse it off”, meaning that even though the functionality was in there, it could not be exercised by a user. This was a marketing decision by Intel — they believed, probably rightly, that bringing out a new 64-bit feature in the x86 would be perceived as betting against their own native-64-bit Itanium, and might well severely damage Itanium’s chances. I was told, not once, but twice, that if I “didn’t stop yammering about the need to go 64-bits in x86 I’d be fired on the spot” and was directly ordered to take out that 64-bit stuff.

https://www.quora.com/How-was-AMD-able-to-beat-Intel-in-deli... |
|
| ▲ | userbinator an hour ago | parent | next [-] |
| "Recently revealed" is more like a confirmation of what I had read many years before; and furthermore, that Intel's 64-bit x86 would've been more backwards-compatible and better-fitting than AMD64, which looks extremely inelegant in contrast, with several stupid missteps like https://www.pagetable.com/?p=1216 (the comment near the bottom is very interesting.) If you look at the 286's 16-bit protected mode and then the 386's 32-bit extensions, they fit neatly into the "gaps" in the former; there are some similar gaps in the latter, which look like they had a future extension in mind. Perhaps that consideration was already there in the 80s when the 386 was being designed, but as usual, management got in the way. |
| |
| ▲ | Dylan16807 6 minutes ago | parent [-] | | > (the comment near the bottom is very interesting.) Segmentation very useful for virtualization? I don't follow that claim. |
|
|
| ▲ | kimixa 5 hours ago | parent | prev | next [-] |
That's no guarantee it would have succeeded, though - AMD64 also cleaned up a number of warts in the x86 architecture, such as adding more registers. I suspect the Intel equivalent would have done similar things, since a break that big makes them obvious, but there's no guarantee it wouldn't have been worse than AMD64. Then again, it could also have been "better" in retrospect. Also remember that at the time the Pentium 4 was very much struggling to hit its advertised performance. One could argue that a major reason the AMD64 ISA took off is that the devices that first supported it were (generally) superior even in 32-bit mode. EDIT: And I'm surprised it got as far as silicon. AMD64 was "announced" and the spec released before the Pentium 4 even shipped, over three years before the first AMD implementations could be purchased. I guess Intel thought they didn't "need" to be public about it? And the AMD64 extensions cost a rather non-trivial amount of silicon and engineering effort to implement - did the plan for Itanium change late enough in the P4 design that the 64-bit support couldn't be removed? Or perhaps this all implies it was a much less far-reaching (and so less costly) design? |
| |
| ▲ | ghaff 3 hours ago | parent | next [-] | | As someone who followed IA64/Itanium pretty closely, it's still not clear to me the degree to which Intel (or at least groups within Intel) thought IA64 was a genuinely better approach and the degree to which Intel (or at least groups within Intel) simply wanted to get out from existing cross-licensing deals with AMD and others. There were certainly also existing constraints imposed by partnerships, notably with Microsoft. | | |
▲ | ajross 3 hours ago | parent [-] | | Both are likely true. It's easy to wave it away in hindsight, but there was genuine energy and excitement about the architecture in its early days. And while the first chips were late and built on behind-the-cutting-edge processes, they were actually very performant (FPU numbers were even world-beating -- parallel VLIW dispatch really helped there). Lots of people loved Itanium and wanted to see it succeed. But surely the business folks had their own ideas too. | | |
▲ | kimixa 3 hours ago | parent [-] | | Yes - VLIW seems to lend itself to computation-heavy code, and is used to this day in many DSP architectures (and arguably GPU architectures, or at least it "influences" many of them). |
|
| |
| ▲ | chasil 3 hours ago | parent | prev [-] | | The times that I have used "gcc -S" on my code, I have never seen the additional registers used. I understand that r8-r15 require a REX prefix, which is hostile to code density. I've never done it with -O2. Maybe that would surprise me. | | |
▲ | astrange 3 hours ago | parent | next [-] | | You should be able to see it. REX prefixes cost a lot less than register spills do. If you mean literally `gcc -S` with no optimization flag, -O0 is worse than merely unoptimized: it basically keeps everything in memory to make debugging easier. -Os is the one that produces readable, sensible asm. | | | |
▲ | o11c 2 hours ago | parent | prev [-] | | Obviously it depends on how many live variables there are at any point. A lot of nasty loops have relatively few non-memory operands involved, especially without inlining (though even without inlining, the ability to better control ABI-mandated spills will help). But the compiler is guaranteed to use `r8` and `r9` for the 5th and 6th integer arguments of a function (counting an unpacked 128-bit struct as 2 arguments), or the 3rd and 4th arguments (not sure about unpacking) on Microsoft's ABI. And `r10` is used if you make a system call on Linux. |
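A minimal sketch of that guarantee, assuming GCC on a SysV AMD64 (Linux) target; the file name is illustrative and the exact instruction selection varies by compiler version:

    /* sum6.c - compile with: gcc -O2 -S sum6.c
       Under the SysV AMD64 calling convention the six integer arguments
       arrive in rdi, rsi, rdx, rcx, r8 and r9, so even this trivial
       function forces the compiler to read the REX-prefixed registers
       r8/r9, regardless of register pressure. */
    long sum6(long a, long b, long c, long d, long e, long f)
    {
        return a + b + c + d + e + f;
    }

The generated assembly should show `r8` and `r9` read directly in the addition chain, with no stack traffic involved.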
|
|
|
| ▲ | kstrauser 8 hours ago | parent | prev | next [-] |
| "If you don't cannibalize yourself, someone else will." Intel has a strong history of completely mis-reading the market. |
| |
▲ | zh3 8 hours ago | parent | next [-] | | Andy Grove, "Only the Paranoid Survive":

> Business success contains the seeds of its own destruction. Success breeds complacency. Complacency breeds failure. Only the paranoid survive. - Andy Grove, former CEO of Intel

From Wikipedia: https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...

Takeaway: Be paranoid about MBAs running your business. | |
▲ | zer00eyz 7 hours ago | parent [-] | | > Takeaway: Be paranoid about MBAs running your business.

Except Andy is talking about himself and Noyce - the engineers - getting it wrong (watch a few minutes of this to get the gist of where they were vs. Japan): https://www.youtube.com/watch?v=At3256ASxlA&t=465s

Intel has a long history of sucking, and of other people stepping in to force them to get better. Their success has been accident and intervention over and over. And this isn't just an Intel thing; it's kind of an American problem (and maybe a business/capitalism problem). See this take on steel, which sounds an awful lot like what is happening to Intel now: https://www.construction-physics.com/p/no-inventions-no-inno... | |
▲ | II2II 4 hours ago | parent | next [-] | | > Intel has a long history of sucking, and other people stepping in to force them to get better. Their success has been accident and intervention over and over. If one can take popular histories of Intel at face value, they have had enough accidental successes, avoided enough failures, and outright failed so many times that they really ought to know better. The Itanium wasn't their first attempt to create an incompatible architecture, and it sounds like it was incredibly successful compared to the iAPX 432. Intel never intended to get into microprocessors, wanting to focus on memory instead. Yet they picked up a couple of contracts (which produced the 4004 and 8008) to survive until they reached their actual goal. Not only did it help the company at the time, but it proved essential to the survival of the company when the Japanese semiconductor industry nearly obliterated American memory manufacturers. On the flip side, the 8080 was source compatible with the 8008. Source compatibility would help sell it to users of the 8008. It sounds like the story behind the 8086 is similar, albeit with a twist: not only did it lead to Intel's success when it was adopted by IBM for the PC, but it was intended as a stopgap measure while the iAPX 432 was produced. This, of course, is a much abbreviated list. It is also impossible to say where Intel would be if they had made different decisions, since they produced an abundance of other products. We simply don't hear much about them because they were dwarfed by the 80x86, or never had its public profile (for example: they produced some popular microcontrollers). | |
| ▲ | asveikau 3 hours ago | parent [-] | | Windows NT also originally targeted a non-x86 CPU from Intel, the i860. |
| |
▲ | wslh 5 hours ago | parent | prev [-] | | Andy Grove explained this very clearly in his book. By the way, the parallel works if you replace Japan with China in the video. In the late 1970s and 1980s, Japan initially reverse-engineered memory chips, and soon it became impossible to compete with them. The Japanese government also heavily subsidized its semiconductor industry during that period. My point isn't to take a side, but simply to highlight how history often repeats itself - sometimes almost literally, rather than merely rhyming. |
|
| |
▲ | nextos 5 hours ago | parent | prev [-] | | I don't think it's just misreading. It's also internal politics. How many at Nokia knew that the Maemo/MeeGo line was the future rather than Symbian? I think quite a few. But Symbian execs fought to make sure Maemo didn't get a mobile radio. In most places, internal feuds and little kingdoms prevail over optimal decisions for the organization as a whole. I imagine lots of people at Intel were deeply invested in IA-64. The same thing repeats almost everywhere. For example, from what I've heard from insiders, the ChromeOS vs. Android battles at Google were epic. |
|
|
| ▲ | wmf 9 hours ago | parent | prev | next [-] |
| It wasn't recent; Yamhill has been known since 2002. A detailed article about this topic just came out: https://computerparkitecture.substack.com/p/the-long-mode-ch... |
|
| ▲ | jcranmer 8 hours ago | parent | prev | next [-] |
The story I heard (which I can't corroborate) was that it was Microsoft that nixed Intel's alternative 64-bit x86 ISA, telling it to implement AMD's version instead. |
| |
▲ | smashed 7 hours ago | parent | next [-] | | Microsoft did port some versions of Windows to Itanium, so they did not reject it at first. With poor market demand and AMD's success with AMD64, Microsoft did not support Itanium in Vista and later desktop versions, which signaled the end of Intel's Itanium. | |
| ▲ | wmf 4 hours ago | parent | next [-] | | Microsoft supported IA-64 (Itanium) and AMD64 but they refused to also support Yamhill. They didn't want to support three different ISAs. | |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Analemma_ 7 hours ago | parent | prev [-] | | Microsoft also ships/shipped a commercial compiler with tons of users, and so they were probably in a position to realize early that the hypothetical "sufficiently smart compiler" which Itanium needed to reach its potential wasn't actually possible. |
| |
▲ | antod 3 hours ago | parent | prev [-] | | Yeah, I remember hearing that at the time too. When MS chose to support AMD64, they made it clear it was the only 64-bit x86 ISA they were going to support, even though it was an open secret that Intel was sitting on one it didn't want to announce. |
|
|
| ▲ | h4ck_th3_pl4n3t 8 hours ago | parent | prev [-] |
| I wanted to mention that the Pentium 4 (Prescott) that was marketed as the Centrino in laptops had 64-bit capabilities, but it was described as a 32-bit extended mode. I remember buying a laptop in 2005(?) which I first ran with 32-bit XP, then downloading the wrong Ubuntu 64-bit Dapper Drake image and finding the 64-bit kernel running... and being super confused about it. Also, for a long while, Intel rebranded the Pentium 4 as the Intel Atom, which then usually got an iGPU on top and somewhat higher clock rates. No idea if this is still the case (post-Haswell changes), but I was astonished to buy a CPU 10 years later and find the same kind of oldskool cores in it, just with some modifications, and actually with worse L3 cache than the Centrino variants. The Core 2 Duo and Core 2 Quad were peak coreboot hacking for me, because at the time the Intel ucode blob was still fairly simple and didn't contain all the quirks and errata fixes that more modern CPU generations have. |
| |
▲ | marmarama 6 hours ago | parent | next [-] | | Centrino was Intel's brand for their wireless networking and for laptops that used their wireless chipsets, and the CPUs in those laptops were all P6-derived (Pentium M, Core Duo). Possibly you meant Celeron? Also, the Pentium 4 uarch (NetBurst) is nothing like any of the Atoms (a big-for-the-time out-of-order core vs. a small in-order core). |
| ▲ | mjg59 6 hours ago | parent | prev | next [-] | | Pentium 4 was never marketed as Centrino - that came in with the Pentium M, which was very definitely not 64-bit capable (and didn't even officially have PAE support to begin with). Atom was its own microarchitecture aimed at low power use cases, which Pentium 4 was definitely not. | |
| ▲ | kccqzy 6 hours ago | parent | prev | next [-] | | In 2005 you could already buy Intel processors with AMD64. It just wasn't called AMD64 or Intel64; it was called EM64T. During that era running 64-bit Windows was rare but running 64-bit Linux was pretty commonplace, at least amongst my circle of friends. Some Linux distributions even had an installer that told the user they were about to install 32-bit Linux on a computer capable of running 64-bit Linux (perhaps YaST?). | | |
▲ | fy20 2 hours ago | parent [-] | | AMD was a no-brainer in the mid-2000s if you were running Linux. It was typically cheaper than Intel, had lower power consumption (= less heat, less fan noise), had 64-bit support so you could use more memory, and dual-core support was more widespread. Linux was easily able to take advantage of all of these, whereas for Windows it was trickier. |
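For what it's worth, the 64-bit capability those installers were detecting is a single CPUID bit. A minimal sketch, assuming GCC or Clang on x86: leaf 0x80000001, EDX bit 29 is the "long mode" flag, the same thing Linux exposes as `lm` in /proc/cpuinfo.

    /* lm_check.c - checks whether the CPU supports x86-64 long mode.
       CPUID leaf 0x80000001, EDX bit 29 ("LM") is set on AMD64/EM64T
       parts and clear on 32-bit-only CPUs like the Pentium M. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        /* __get_cpuid returns 0 if the requested leaf is unsupported. */
        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            puts("extended CPUID leaf not available");
            return 1;
        }
        puts((edx & (1u << 29)) ? "64-bit capable (long mode)" : "32-bit only");
        return 0;
    }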
| |
▲ | SilverElfin 6 hours ago | parent | prev | next [-] | | Speaking of marketing, that era of Intel was very weird for consumers. In the 1990s, they had iconic ads, and words like Pentium or MMX became powerful branding for Intel. In the 2000s I think it got very confused. Centrino? Ultrabook? Atom? Then for some time there was Core. But it became hard to know what to care about and what was bizarre corporate-speak. That was a failure of marketing. But maybe it was also an indication of a cultural problem at Intel. |
| ▲ | cogman10 7 hours ago | parent | prev [-] | | Are you referring to PAE? [1] [1] https://en.wikipedia.org/wiki/Physical_Address_Extension | | |
|