kstrauser | 8 hours ago:
It absolutely was. It was possible, hypothetically, to write a chunk of code that ran very fast, and there were any number of very small, high-profile bits of code that did exactly that. However, it was impossible to make general-purpose, not-manually-tuned code run fast on it. Itanium placed demands on compiler technology that simply didn't exist, and probably still don't. Basically, you could write some tuned assembly that would run fast on one specific Itanium CPU release by optimizing for its exact number of execution units, etc. It was not possible to run `./configure && make && make install` for anything not designed with that level of care and end up with a binary that didn't run like frozen molasses.

I had to manage one of these pigs in a build farm. On paper, it should have been one of the more powerful servers we owned. In practice, the Athlon servers were several times faster at any general-purpose workload.
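
To make that concrete, here's a sketch (my own illustration, not code from back then) of the gap between generic code and code shaped for one particular pipeline. The unroll factor and accumulator count in the second version only pay off if they match the target chip's exact issue width and FP unit count; `./configure`-built software got the first version:

```c
#include <stddef.h>

/* Generic form: one multiply-add per iteration, serialized through a
 * single accumulator, leaving most issue slots idle on a wide core. */
double dot_generic(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Hand-tuned form: unrolled with independent accumulators so several
 * FP operations can be in flight per cycle. The "right" unroll factor
 * depends on the exact chip you scheduled for. */
double dot_tuned(const double *a, const double *b, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)   /* remainder loop */
        s0 += a[i] * b[i];
    return (s0 + s1) + (s2 + s3);
}
```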

hawflakes | 9 hours ago:
Itanium was compatible with x86. In fact, it booted into x86 mode. Merced, the first implementation, had a part of the chip called the IVE (Intel Value Engine) that implemented x86 very slowly. You would boot in x86 mode and run some code to switch to ia64 mode.

HP saw the end of the road for their solo efforts on PA-RISC, and Intel eyed the higher-end market against SPARC, MIPS, POWER, and Alpha (hehe, all those caps), so they banded together to tackle the high end. But as AMD proved, you could win by scaling up instead of dropping an all-new architecture.

* worked at HP during the HP-Intel Highly Confidential project.

philipkglass | 9 hours ago:
I used it for numerical simulations and it was very fast there. But on my workstation many common programs like "grep" were slower than on my cheap Athlon machine. (Both were running Red Hat Linux at the time.) I don't know how much of that was a compiler problem and how much was an architecture problem; the Itanium numerical simulation code was built with Intel's own compiler, but all the system utilities were built with GNU compilers.

fooker | 9 hours ago:
> Itanium wasn’t a turd

It required immense multi-year efforts from compiler teams to get passable performance out of Itanium. And passable wasn't good enough.

Joel_Mckay | 9 hours ago:
The IA-64 architecture pushed too much fine-grained control down into software, so reliable compiler designs were much more difficult to build. It wasn't a bad chip, but like Cell or modern Dojo tiles, most people couldn't run it well without understanding parallelism and core metastability. amd64 wasn't initially perfect either, but it was accessible to mere mortals. =3

bombcar | 9 hours ago:
Wasn't the only compiler that produced code worth anything for Itanium the paid one from Intel? I seem to recall complaining about it on the GCC lists.

hajile | 8 hours ago:
NOTHING produced good code for the original Itanium, which is why they switched gears REALLY early on. Intel first publicly mentioned Poulson all the way back in 2005, just FOUR years after the original chip launched. Poulson was basically a traditional out-of-order CPU core that even had hyperthreading [0]. They knew really early on that the designs just weren't that good.

This shouldn't have been a surprise to Intel, as they'd already made a VLIW CPU in the 90s (the i860) that failed spectacularly.

[0] https://www.realworldtech.com/poulson/

speed_spread | 7 hours ago:
Even the i860 found more usage as a specialized CPU than the Itanium did. The original NeXTcube had an optional video card that used an i860 dedicated to graphics.

hawflakes | 9 hours ago:
I lost track of it, but HP, as co-architect, had its own compiler team working on it. I think SGI also had efforts to target ia64.

But EPIC (Explicitly Parallel Instruction Computing) didn't really catch on. A classic VLIW would need recompilation for each new chip, but EPIC promised that existing binaries would still run. https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...
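
The pitch is easier to see in a toy model (hypothetical encoding, not the real IA-64 one): the compiler marks groups of independent operations with stop bits, and a core of any width issues as much of a group as it can each cycle, so the same binary speeds up on a wider successor without recompiling:

```c
#include <stdio.h>

/* One "op" in a toy EPIC stream. The stop bit ends an independence
 * group (written ";;" in real IA-64 assembly). */
typedef struct { const char *text; int stop; } Op;

/* Run the stream on an in-order core issuing up to `width` ops per
 * cycle, never crossing a group boundary within a cycle. */
static int run(const Op *ops, int n, int width) {
    int cycles = 0, i = 0;
    while (i < n) {
        int issued = 0;
        while (i < n && issued < width) {
            issued++;
            if (ops[i++].stop) break;  /* group ends: next group waits */
        }
        cycles++;
    }
    return cycles;
}

int main(void) {
    /* Group 1: three independent ops. Group 2: two ops that use
     * group 1's results. (Illustrative IA-64-flavored mnemonics.) */
    const Op stream[] = {
        {"add r1=r2,r3", 0}, {"ld8 r4=[r5]", 0}, {"add r6=r7,r8", 1},
        {"add r9=r1,r4", 0}, {"st8 [r10]=r6", 1},
    };
    int n = (int)(sizeof stream / sizeof stream[0]);
    printf("2-wide core: %d cycles\n", run(stream, n, 2)); /* 3 */
    printf("6-wide core: %d cycles\n", run(stream, n, 6)); /* 2 */
    return 0;
}
```

Same stream, no recompile; the wider core just swallows more of each group per cycle.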

textlapse | 9 hours ago:
I have worked next to an Itanium machine. It sounded like a helicopter, while barely able to meet its performance requirements. We have come a long way from that to arm64 and amd64 as the default.

Joel_Mckay | 9 hours ago:
The stripped-down ARMv8/9 for AArch64 is good for a lot of use cases, but most of the vendor-specific ASIC advanced features were never enabled, for reliability reasons. ARM is certainly better than before, but it could have been much better. =3

Findecanor | 7 hours ago:
The Itanium had some interesting ideas, executed poorly. It was a bloated design-by-committee. It should have been iterated on a bit more before being released to the world, but Intel was stressed by there already being several 64-bit RISC processors on the market.

bombcar | 9 hours ago:
IIRC it didn't even do great against POWER and other bespoke OS/chip combos, though it fared much better there than it did against generic x86.

eej71 | 9 hours ago:
Itanium was mostly a turd because it pushed so many optimization issues onto the compiler.

CoastalCoder | 9 hours ago:
IIRC, wasn't part of the issue that compile-time instruction scheduling was a poor match for speculative execution and/or hardware-based branch prediction? I.e., the compiler had no access to information that's only revealed at runtime?

duskwuff | 6 hours ago:
Yes, absolutely. Itanium was designed with the expectation that memory speed/latency would keep pace with CPUs. It didn't.
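
A small illustration of why runtime information matters (my example, not from the thread): in the first loop the compiler can hoist loads arbitrarily early, because every address is known in advance; in the second, each address depends on the previous load, whose latency (L1 hit or DRAM miss) is unknowable at compile time, so a statically scheduled in-order core just stalls where an out-of-order core can overlap other work:

```c
#include <stddef.h>

/* Addresses known up front: loads can be scheduled far ahead of their
 * uses (software pipelining works well here). */
long sum_array(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

struct node { long val; struct node *next; };

/* Pointer chasing: the next address is the result of the previous
 * load, and its latency is only discovered at runtime. No static
 * schedule can hide a miss here. */
long walk_list(const struct node *p) {
    long s = 0;
    while (p) {
        s += p->val;   /* must wait for the load of p to complete */
        p = p->next;   /* latency unknown until runtime */
    }
    return s;
}
```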

jcranmer | 8 hours ago:
I acquired a copy of the Itanium manuals, and in flicking through them, you can barely get through a page before going "you did WHAT?" over some feature.

tptacek | 5 hours ago:
Example example example example must see examples!

jcranmer | an hour ago:
Some of the examples:

* Itanium has register windows.
* Itanium has register rotation, so that you can modulo-schedule a loop.
* Itanium has so many registers that a context switch is going to involve spilling several KB of memory.
* The main registers have "Not-a-Thing" values to be able to handle things like speculative loads that would have trapped. Handling this for register spills (or context switches!) appears to be "fun."
* It's a bi-endian architecture.
* The way you pack instructions in the EPIC encoding is... fun.
* The rules of how you can execute instructions mean that you kind of have branch delay slots, but not really.
* There are four floating-point environments, because why not.
* Also, Itanium is predicated (see the sketch below).
* The hints, oh god, the hints. It feels like every time someone came up with an idea for a hint that might be useful to the processor, it was thrown in there. How is a compiler supposed to be able to generate all of these hints?
* It's an architecture that's complicated enough that you need to handwrite assembly to get good performance, but the assembly has enough arcane rules that handwriting assembly is unnecessarily difficult.
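
On the predication bullet, a rough sketch of what if-conversion buys (C standing in for the idea; the IA-64 version would use compare-to-predicate and guarded instructions rather than arithmetic):

```c
/* Branchy form: a mispredicted branch costs a pipeline flush. */
int max_branch(int a, int b) {
    if (a > b)
        return a;
    return b;
}

/* If-converted form: compute a predicate, select without branching.
 * Roughly what a cmp.gt into p1/p2 plus (p1)- and (p2)-guarded moves
 * achieve on IA-64: both "arms" sit in the instruction stream and the
 * predicate decides which one takes effect. */
int max_predicated(int a, int b) {
    int p = (a > b);               /* predicate: 0 or 1 */
    return p * a + (1 - p) * b;    /* branch-free select */
}
```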

cmrdporcupine | 9 hours ago:
Itanium was pointless when Alpha already existed and was already gaining penetration in the high-end market. Intel played disgusting corporate politics to kill it and then pushed the ugly, failed Itanium to market, only to have to panic back to x86_64 later.

I have no idea how or why Intel got a second life after that, but they did. Which is a shame. A sane market would have punished them, and we all would have moved on.

dessimus | 9 hours ago:
> I have no idea how/why Intel got a second life after that, but they did.

For the same reason the line "No one ever got fired for buying IBM" exists. Buying AMD at large companies was seen as a gamble that deciders weren't willing to make. Even now, if you just call up your account managers at Dell, HP, or Lenovo asking for servers or PCs, they are going to quote you Intel builds unless you specifically ask. I don't think I've ever been asked by my sales reps whether I wanted an Intel or AMD CPU, just how many slots/cores, etc.

bombcar | 8 hours ago:
The Intel chipsets were phenomenally stable; the AMD ones were always plagued by weird issues.

j_not_j | 3 hours ago:
Alpha had a lot of implementation problems, e.g. floating-point exceptions with untraceable execution paths.

Cray tried to build the T3E (iirc) out of Alphas. DEC bragged about how good Alpha was for parallel computing, big memory, etc. But Cray publicly denounced Alpha as unusable for parallel processing (the T3E was a bunch of Alphas in some kind of NUMA shared memory); it was so difficult to make the chips work together. This was in Cray Connect or some such glossy publication. Wish I'd kept a copy.

Plus, of course, the usual DEC marketing incompetence. They feared Alpha undoing their large, expensive-machine momentum: small workstation boxes significantly faster than big iron.

toast0 | 8 hours ago:
Historically, when Intel is on their game, they have great products and better-than-most support for OEMs and integrators. They're also very effective at marketing and arm-twisting. The arm-twisting gets them through rough times like Itanium and Pentium 4 + Rambus, etc.

I still think they can recover from the 10nm fab problems, even though they're taking their sweet time.

loloquwowndueo | 8 hours ago:
“Sane market” sounds like an oxymoron; technology markets have multiple failed attempts at doing the sane thing.

panick21_ | 6 hours ago:
Gordon Moore tried to link up with Intel when he was at DEC. Alpha would have become Intel's 64-bit architecture. This of course didn't happen, and Intel instead linked up with DEC's biggest competitor, HP, and adopted their much, much worse VLIW architecture.

Imagine a future where Intel and Apple had both adopted DEC and Alpha, instead of Intel pairing with HP and Apple with IBM.