bombcar 9 hours ago

Youngsters today don't remember it, but x86 was fucking dead according to the press; it really wasn't until the Athlon 64 came out that everyone started to admit the Itanium was a turd. (The Athlon 64 also gave a huge bump to Linux, as it was one of the first OSes to fully support it; one of the reasons I went to Gentoo early on was to get that sweet 64-bit compilation!)

The key to the whole thing was that the Athlon 64 was a great 32-bit processor; the 64-bit stuff was gravy that many only took advantage of later.

Apple did something similar with its CPU transitions (now three of them): it only switches when old software, even emulated, runs better on the new chip than it did on the old.

AMD64 was also well thought out; it wasn't just a wider word size slapped onto 32-bit. Doubling the number of general-purpose registers (from 8 to 16) was noticeable: you took a performance hit going to 64-bit early on because all the pointers were wider, but the extra registers usually more than made up for it.
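
If you want to see that register effect yourself, here's a hedged sketch (assuming a multilib gcc on x86; the function and constants are made up for illustration): a loop with more simultaneously-live values than the eight i386 GPRs. Compile it with "gcc -O2 -S -m32" and again with "-m64" and compare the stack traffic in the two listings.

    /* Illustrative only: enough simultaneously-live values to overflow
       8 general-purpose registers but comfortably fit in 16. */
    #include <stddef.h>

    long mix(const long *a, const long *b, size_t n) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0;
        for (size_t i = 0; i < n; i++) {
            s0 += a[i];
            s1 += b[i];
            s2 += a[i] ^ b[i];
            s3 += a[i] * 3;
            s4 += b[i] * 5;
            s5 += a[i] - b[i];
        }
        return s0 + s1 + s2 + s3 + s4 + s5;
    }

In the 32-bit listing you'll typically see the accumulators shuffled to and from the stack each iteration; the 64-bit build usually keeps everything in registers.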

This is also where the NX (no-execute) bit entered the picture.
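
For the curious, here's roughly what the NX bit changed in practice (a minimal Linux/x86-64 sketch, not production JIT code): a writable page is no longer executable by default, so runtime-generated code has to flip page permissions before jumping to it.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char code[] = {0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3};

        /* Writable but NOT executable: jumping here would fault
           on an NX-capable CPU. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        memcpy(page, code, sizeof code);

        /* Drop write, add execute: the W^X dance NX forces on JITs. */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }

        int (*fn)(void) = (int (*)(void))page;
        printf("generated code returned %d\n", fn()); /* prints 42 */
        return 0;
    }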

golddust-gecko 9 hours ago | parent | next [-]

100% -- the conventional wisdom was that the x86 architecture was too riddled with legacy and complexity for its performance to improve much further, and was a dead end.

Itanium never met an exotic computer-architecture journal article that it didn't try to incorporate. Initially this was viewed as "wow, such amazing VLIW magic will obviously dominate" and subsequently as "this complexity makes it hard to write a good compiler for, and the performance benefit just doesn't justify it."

Intel had to respond to AMD with its own "x86-64" copy, though it really didn't want to.

Eventually it became obvious that the amd64/x64/x86-64 chips were going to exceed Itanium in performance, and with the massive momentum of legacy software on their side, Itanium was toast.

Animats 7 hours ago | parent [-]

Back in that era I went to an EE380 talk at Stanford where the people from HP trying to write a compiler for Itanium spoke. The project wasn't going well at all. Itanium is an explicitly parallel machine: the compiler has to figure out which operations to do in parallel, where most superscalar machines work that out during execution. Instruction ordering and packing turned out to be a hard combinatorial optimization problem. The compiler developers sounded very discouraged.
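
To give a flavor of the problem those compiler folks were stuck with, here's a toy sketch (nothing like a real IA-64 scheduler; the DAG and issue width are invented): given a dependence graph and a fixed issue width, statically pack independent ops into issue groups. Even this greedy version is fiddly, and the real problem, with latencies, functional-unit types, bundle templates, and register pressure, is far harder.

    #include <stdio.h>

    #define NOPS  6
    #define WIDTH 2   /* pretend the machine issues 2 ops per cycle */

    /* deps[i][j] = 1 means op j must complete before op i can issue */
    static const int deps[NOPS][NOPS] = {
        /* op0 */ {0,0,0,0,0,0},
        /* op1 */ {0,0,0,0,0,0},
        /* op2 */ {1,1,0,0,0,0},   /* op2 uses op0 and op1 */
        /* op3 */ {0,0,0,0,0,0},
        /* op4 */ {0,0,1,1,0,0},   /* op4 uses op2 and op3 */
        /* op5 */ {0,0,0,0,1,0},   /* op5 uses op4 */
    };

    int main(void) {
        int done[NOPS] = {0}, picked[NOPS];
        int ndone = 0, cycle = 0;
        while (ndone < NOPS) {
            printf("group %d:", cycle);
            int issued = 0;
            for (int i = 0; i < NOPS; i++) picked[i] = 0;
            /* Greedily pick ready ops until the issue group is full. */
            for (int i = 0; i < NOPS && issued < WIDTH; i++) {
                if (done[i]) continue;
                int ready = 1;
                for (int j = 0; j < NOPS; j++)
                    if (deps[i][j] && !done[j]) ready = 0;
                if (ready) { picked[i] = 1; issued++; printf(" op%d", i); }
            }
            /* Mark picked ops done only after the group closes. */
            for (int i = 0; i < NOPS; i++)
                if (picked[i]) { done[i] = 1; ndone++; }
            printf("\n");
            cycle++;
        }
        return 0;
    }

An out-of-order core does the equivalent of this at runtime, with live latency information; Itanium pushed the whole job to compile time.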

It's amazing that retirement units, the part of a superscalar CPU that puts everything back together as the parallel operations finish, not only work but don't slow things down. The Pentium Pro head designer had about 3,000 engineers working at peak, which indicates how hard this is. But it all worked, and that became the architecture of the future.

This was around the time that RISC was a big thing. Simplify the CPU, let the compiler do the heavy lifting, have lots of registers, make all instructions the same size, and do one instruction per clock. That's pure RISC. Sun's SPARC is an expression of that approach. (So is a CRAY-1, which is a large but simple supercomputer with 64 of everything.) RISC, or something like it, seemed the way to go faster. Hence Itanium. Plus, it had lots of new patented technology, so Intel could finally avoid being cloned.

Superscalars can get more than one instruction per clock, at the cost of insane CPU complexity. Superscalar RISC machines are possible, but they lose the simplicity of RISC. Making all instructions the same size increases the memory bandwidth the CPU needs. That's where RISC lost out to x86 extensions. x86 is a terse notation.

So we ended up with most of the world still running on an instruction set based on the one Harry Pyle designed when he was an undergrad at Case in 1969.

jerf 9 hours ago | parent | prev | next [-]

If I am remembering correctly, this was also a good time to be in Linux. Since the Linux world operated on source code rather than binary blobs, it was easier to convert software to run 64-bit native. Non-trivial in an age of C, but still much easier than in the commercial world. I had a largely native 64-bit system running a couple of years before it was practical in the Windows world.

wmf 9 hours ago | parent | next [-]

Linux for Alpha probably deserves some credit for getting everything 64-bit-ready years before x86-64 came out.

MangoToupe 8 hours ago | parent | prev [-]

It also helped that Linux had much better 32-bit compatibility than Windows did. Not sure why, but it probably has something to do with the legacy support Windows shed when moving to 64 bits.

jacquesm 9 hours ago | parent | prev | next [-]

Up until the Athlon 64, your best bet for a 64-bit system was a DEC Alpha running Red Hat. Amazing levels of performance for a manageable amount of money.

drob518 9 hours ago | parent | prev [-]

Itanium wasn’t a turd. It was just not compatible with x86. And that was enough to sink it.

kstrauser 8 hours ago | parent | next [-]

It absolutely was. It was possible, hypothetically, to write a chunk of code that ran very fast. There were any number of very small, high-profile bits of code that did this. However, it was impossible to make general-purpose, not-manually-tuned code run fast on it. Itanium placed demands on compiler technology that simply didn't exist, and probably still don't.

Basically, you could write some tuned assembly that would run fast on one specific Itanium CPU release by optimizing for its exact number of execution units, etc. It was not possible to run `./configure && make && make install` for anything not designed with that level of care and end up with a binary that didn't run like frozen molasses.

I had to manage one of these pigs in a build farm. On paper, it should've been one of the more powerful servers we owned. In practice, the Athlon servers were several times faster at any general purpose workloads.

hawflakes 9 hours ago | parent | prev | next [-]

Itanium was compatible with x86. In fact, it booted into x86 mode. Merced, the first implementation, had a part of the chip called the IVE (Intel Value Engine) that implemented x86 very slowly.

You would boot in x86 mode and run some code to switch to ia64 mode.

HP saw the end of the road for their solo efforts on PA-RISC, and Intel eyed the higher-end market against SPARC, MIPS, POWER, and Alpha (hehe, all those caps), so they banded together to tackle it.

But as AMD proved, you could win by scaling up instead of dropping an all-new architecture.

* worked at HP during the HP-Intel Highly Confidential project.

philipkglass 9 hours ago | parent | prev | next [-]

I used it for numerical simulations and it was very fast there. But on my workstation many common programs like "grep" were slower than on my cheap Athlon machine. (Both were running Red Hat Linux at the time.) I don't know how much of that was a compiler problem and how much was an architecture problem; the Itanium numerical simulation code was built with Intel's own compiler but all the system utilities were built with GNU compilers.

fooker 9 hours ago | parent | prev | next [-]

>Itanium wasn’t a turd

It required immense multi-year efforts from compiler teams to get passable performance with Itanium. And passable wasn't good enough.

Joel_Mckay 9 hours ago | parent | next [-]

The IA-64 architecture dropped too much fine-grained control into software, which made reliable compilers much more difficult to build.

It wasn't a bad chip, but, like Cell or modern Dojo tiles, most people couldn't run it well without understanding parallelism and core metastability.

amd64 wasn't initially perfect either, but was accessible for mere mortals. =3

bombcar 9 hours ago | parent | prev [-]

Wasn't the only compiler that produced code worth anything for Itanium the paid one from Intel? I seem to recall complaining about it on the GCC lists.

hajile 8 hours ago | parent | next [-]

NOTHING produced good code for the original Itanium, which is why they switched gears REALLY early on.

Intel first publicly mentioned Poulson all the way back in 2005, just FOUR years after the original chip launched. Poulson was basically a traditional out-of-order CPU core that even had hyperthreading[0]. They knew really early on that the designs just weren't that good. This shouldn't have been a surprise to Intel, as they'd already made a VLIW CPU in the 90s (the i860) that failed spectacularly.

[0] https://www.realworldtech.com/poulson/

speed_spread 7 hours ago | parent [-]

Even the i860 found more usage as a specialized CPU than the Itanium did. The original NeXTcube had an optional video card that used an i860 dedicated to graphics.

hawflakes 9 hours ago | parent | prev [-]

I lost track of it, but HP, as co-architect, had its own compiler team working on it. I think SGI also had efforts to target ia64. But EPIC (Explicitly Parallel Instruction Computing) didn't really catch on. Classic VLIW would need recompilation for each new chip; EPIC promised old binaries would still run.

https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...

nextos 5 hours ago | parent | next [-]

Yes, SGI sold quite a lot of high-end IA-64 machines for HPC, e.g. https://en.wikipedia.org/wiki/SGI_Altix

fooker 8 hours ago | parent | prev [-]

In the compiler world, these HP compiler folks are leading compiler teams/orgs at ~all the tech companies now, while almost none of the Intel compiler people seem to be around.

textlapse 9 hours ago | parent | prev | next [-]

I have worked next to an Itanium machine. It sounded like a helicopter and was still barely able to meet the performance requirements.

We have come a long way from that to arm64 and amd64 as the default.

Joel_Mckay 9 hours ago | parent [-]

The stripped-down ARMv8/9 AArch64 is good for a lot of use cases, but most of the vendor-specific advanced ASIC features were never enabled, for reliability reasons.

ARM is certainly better than before, but could have been much better. =3

Findecanor 7 hours ago | parent | prev | next [-]

The Itanium had some interesting ideas, executed poorly. It was a bloated design by committee.

It should have been iterated on a bit before being released to the world, but Intel felt pressure from the several 64-bit RISC processors already on the market.

bombcar 9 hours ago | parent | prev | next [-]

IIRC it didn't even do great against POWER and other bespoke OS/chip combos, though it did way better in that market than generic x86 did.

eej71 9 hours ago | parent | prev | next [-]

Itanium was mostly a turd because it pushed so many optimization issues onto the compiler.

CoastalCoder 9 hours ago | parent [-]

IIRC, wasn't part of the issue that compile-time instruction scheduling was a poor match with speculative execution and/or hardware-based branch prediction?

I.e., the compiler had no access to information that's only revealed at runtime?

duskwuff 6 hours ago | parent [-]

Yes, absolutely. Itanium was designed with the expectation that memory speed/latency would keep pace with CPUs - it didn't.
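
A concrete way to feel that runtime-only information (a hedged micro-benchmark sketch; the sizes are arbitrary and results will vary by machine): the same load instruction costs wildly different amounts depending on cache behavior, which only the hardware sees. A compile-time scheduler has to guess.

    /* Sketch: sum the same array sequentially vs. by chasing a randomized
       pointer chain. Same loads, very different latency profile. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)   /* ~4M elements */

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Build a random single-cycle permutation (Sattolo's algorithm). */
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        size_t sum = 0;
        for (size_t i = 0; i < N; i++) sum += next[i];  /* prefetch-friendly */
        clock_t t1 = clock();
        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];     /* latency-bound */
        clock_t t2 = clock();

        printf("sequential: %.3fs  chase: %.3fs  (%zu %zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum, p);
        free(next);
        return 0;
    }

The two loops execute the same number of loads, but the chase typically runs an order of magnitude slower because every load misses cache and waits on the previous one; no static schedule can hide that.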

jcranmer 8 hours ago | parent | prev | next [-]

I acquired a copy of the Itanium manuals, and flicking through them, you can barely get through a page before going "you did WHAT?" over some feature.

tptacek 5 hours ago | parent [-]

Example example example example must see examples!

jcranmer an hour ago | parent [-]

Some of the examples:

* Itanium has register windows.

* Itanium has register rotations, so that you can modulo-schedule a loop.

* Itanium has so many registers that a context switch is going to involve spilling several KB of register state to memory.

* The main registers have "Not-a-Thing" values to be able to handle things like speculative loads that would have trapped. Handling this for register spills (or context switches!) appears to be "fun."

* It's a bi-endian architecture.

* The way you pack instructions in the EPIC encoding is... fun.

* The rules of how you can execute instructions mean that you kind of have branch delay slots, but not really.

* There are four floating-point environments because why not.

* Also, Itanium is predicated.

* The hints, oh god the hints. It feels like every time someone came up with an idea for a hint that might be useful to the processor, it was thrown in there. How is a compiler supposed to be able to generate all of these hints?

* It's an architecture that's complicated enough that you need to handwrite assembly to get good performance, but the assembly has enough arcane rules that handwriting assembly is unnecessarily difficult.

cmrdporcupine 9 hours ago | parent | prev [-]

Itanium was pointless when Alpha already existed and was gaining penetration at the high end. Intel played disgusting corporate politics to kill it and then push the ugly, failed Itanium to market, only to have to panic back to x86_64 later.

I have no idea how/why Intel got a second life after that, but they did. Which is a shame. A sane market would have punished them and we all would have moved on.

dessimus 9 hours ago | parent | next [-]

> I have no idea how/why Intel got a second life after that, but they did.

For the same reason the line "No one ever got fired for buying IBM." exists. Buying AMD at large companies was seen as a gamble that deciders weren't willing to make. Even now, if you just call up your account managers at Dell, HP, or Lenovo asking for servers or PCs, they are going to quote you Intel builds unless you specifically ask. I don't think I've ever been asked by my sales reps whether I wanted an Intel or AMD CPU; just how many sockets/cores, etc.

bombcar 8 hours ago | parent [-]

The Intel chipsets were phenomenally stable; the AMD ones were always plagued by weird issues.

j_not_j 3 hours ago | parent | prev | next [-]

Alpha had a lot of implementation problems, e.g. floating point exceptions with untraceable execution paths.

Cray tried to build the T3E (IIRC) out of Alphas. DEC bragged about how good Alpha was for parallel computing, big memory, etc.

But Cray publicly denounced Alpha as unusable for parallel processing (the T3E was a bunch of Alphas in some kind of NUMA shared memory). It was so difficult to make the chips work together.

This was in the Cray Connect or some such glossy publication. Wish I'd kept a copy.

Plus, of course, the usual DEC marketing incompetence. They feared Alpha would undo their momentum in large, expensive machines: small workstation boxes significantly faster than big iron.

toast0 8 hours ago | parent | prev | next [-]

Historically, when Intel is on their game, they have great products and better-than-most support for OEMs and integrators. They're also very effective at marketing and arm twisting.

The arm twisting gets them through rough times like Itanium and the Pentium 4 + Rambus era. I still think they can recover from the 10nm fab problems, even though they're taking their sweet time.

loloquwowndueo 8 hours ago | parent | prev | next [-]

“Sane market” sounds like an oxymoron, technology markets have multiple failed attempts at doing the sane thing.

panick21_ 6 hours ago | parent | prev [-]

Gordon Moore tried to link up with Intel when he was at DEC. Alpha would have become Intel's 64-bit architecture. That of course didn't happen; Intel instead linked up with DEC's biggest competitor, HP, and adopted their much, much worse VLIW architecture.

Imagine a future where Intel and Apple had both adopted DEC's Alpha, instead of Intel pairing with HP and Apple with IBM.