| ▲ | volemo 7 hours ago |
| I see us not getting rid of CPU, but CPU and GPU being eventually consolidated in one system of heterogeneous computing units. |
|
| ▲ | nine_k 5 hours ago | parent | next [-] |
CPU and GPU have very different ways of scheduling instructions, requiring somewhat different interfaces and programming models. I'd hazard to say that a GPU and CPU with unified memory access (like Apple's M series, and most mobile chips) is already such a consolidated system.
|
|
| ▲ | junon 2 hours ago | parent | prev | next [-] |
| We're getting there already with e.g. Grace-Blackwell chips. |
|
| ▲ | jagged-chisel 6 hours ago | parent | prev [-] |
Agreed. Much like “RISC is gonna replace everything” - it didn’t. Because the CPU makers incorporated lessons from RISC into their designs. I can see the same happening to the CPU. It will just take on the appropriate functionality to keep all the compute in the same chip. It’s gonna take a while because Nvidia et al. like their moats.
| |
| ▲ | StilesCrisis 3 hours ago | parent | next [-] |
CISC only survived because CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode. RISC CPUs can avoid this completely, but it turns out backwards compatibility was important to the market, and the transistor cost of "instruction decode" just adds like +1 pipeline depth or something.
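The decode step described above can be sketched as a toy model. This is purely illustrative (real decoders operate on binary x86 encodings in hardware, not text mnemonics, and the micro-op names here are made up): a CISC instruction with a memory destination splits into RISC-like load / ALU-op / store micro-ops.

```python
# Toy model of CISC-to-micro-op decode. Hypothetical mnemonics and
# micro-op names; real hardware decoders work on binary encodings.
def decode(insn):
    """Split a CISC-style (op, dst, src) instruction into micro-ops.

    A memory-destination form like ('add', '[0x1000]', 'rax') becomes
    a load / op / store sequence; register-only forms pass through.
    """
    op, dst, src = insn
    if dst.startswith("["):          # memory destination
        addr = dst.strip("[]")
        return [
            ("load", "tmp", addr),   # tmp <- mem[addr]
            (op, "tmp", src),        # tmp <- tmp OP src
            ("store", addr, "tmp"),  # mem[addr] <- tmp
        ]
    return [insn]                    # already RISC-like

uops = decode(("add", "[0x1000]", "rax"))
# memory form expands to three micro-ops; register form stays as one
assert len(uops) == 3
assert decode(("add", "rbx", "rax")) == [("add", "rbx", "rax")]
```

The point of the comment above is that this expansion is cheap in hardware: it adds a pipeline stage, not a fundamental performance penalty.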
| ▲ | zephen 37 minutes ago | parent [-] |
> CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode.

In absolute terms, this is true. But in relative terms, you're talking less than 1% of the die area on a modern, heavily cached, heavily speculative, heavily predictive CPU.
| |
| ▲ | zozbot234 6 hours ago | parent | prev [-] |
> It will just take on the appropriate functionality to keep all the compute in the same chip.

So, an iGPU/APU? Those exist already. Regardless, the most GPU-like CPU architecture in common use today is probably SPARC, with its 8-way SMT. Add per-thread vector SIMD compute to something like that, and you end up with something that has broadly similar performance constraints to an iGPU.
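The GPU-like quality of wide SMT mentioned above can be shown with a back-of-the-envelope throughput model (the numbers are invented for illustration, not measurements of any real chip): if each operation costs one issue cycle plus a fixed memory stall, interleaving enough hardware threads hides the stalls entirely.

```python
# Toy latency-hiding model with made-up numbers: each op takes 1 issue
# cycle plus `stall` cycles of memory latency, and hardware threads
# interleave round-robin, GPU-style.
def throughput(threads, stall=7):
    """Ops issued per cycle, amortized across interleaved threads."""
    # One thread is latency-bound at 1/(1+stall) ops/cycle; with
    # (1+stall) or more threads, every issue slot is filled.
    return min(1.0, threads / (1 + stall))

assert throughput(1) == 0.125   # single thread: latency-bound
assert throughput(8) == 1.0     # 8-way SMT: stalls fully hidden
```

This is the same design trade an iGPU makes: per-thread latency is sacrificed for aggregate throughput, which is why the comment says the performance constraints end up broadly similar.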
|