eYrKEC2 2 days ago

Intel has always pursued agglomeration into the main CPU. They sucked up the math co-processor. They sucked up the front-side bus logic. They gradually sucked up the DDR memory controllers. They have sucked in integrated graphics.

Everything on-die, and with chiplets in-package, is the Intel way.

Default, average integrated graphics will continue to "satisfice" for an ever-greater portion of the market as integrated graphics keeps growing in power.

carlhjerpe 2 days ago | parent [-]

Intel made fun of AMD for "taping chips together", yet Intel stuck with monolithic dies for far too long.

The smaller the node, the lower the yield for a given die area, so chiplets are now a necessity (barring architectural changes like Cerebras's wafer-scale approach).
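The yield argument for chiplets can be sketched with the classic zero-defect Poisson model, Y = exp(-A * D0). The die areas and defect density below are illustrative assumptions, not real process data:

```python
import math

def poisson_yield(area_cm2: float, defect_density: float) -> float:
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-area_cm2 * defect_density)

D0 = 0.1  # assumed defect density, defects per cm^2 (made-up number)

mono = poisson_yield(6.0, D0)     # one large 600 mm^2 monolithic die
chiplet = poisson_yield(1.5, D0)  # one 150 mm^2 chiplet

print(f"monolithic 600 mm^2 yield: {mono:.1%}")    # ~54.9%
print(f"per-chiplet 150 mm^2 yield: {chiplet:.1%}")  # ~86.1%

# Good chiplets are tested and binned independently before packaging,
# so the usable-silicon fraction tracks the per-chiplet yield rather
# than collapsing like a single big die would.
```

Same total silicon, but cutting it into four independently testable pieces raises the fraction of it that is sellable.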

scrlk 2 days ago | parent | next [-]

Which is ironic, given that Intel had to glue two Pentium D dies together to compete with the monolithic Athlon 64 X2: https://en.wikipedia.org/wiki/Pentium_D#Presler

eYrKEC2 2 days ago | parent | prev [-]

Running tests and then fusing off broken cores or shared cache slices recovers a lot of yield on bigger chips. Certain parts of the silicon are not redundant, but Intel's designs build in redundancy for the cores and other large blocks that are, probabilistically, more prone to manufacturing defects.
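The yield recovered by fusing off bad cores follows directly from a binomial model. Assuming defects hit each core independently, with made-up per-core area and defect-density numbers for illustration:

```python
from math import comb, exp

CORES = 8
# Per-core zero-defect probability: assumed 0.5 cm^2 core at an
# assumed 0.1 defects/cm^2 (illustrative numbers, not process data).
core_yield = exp(-0.5 * 0.1)

def p_good(k: int) -> float:
    """Probability that exactly k of the CORES cores are defect-free."""
    return comb(CORES, k) * core_yield**k * (1 - core_yield)**(CORES - k)

full = p_good(8)                 # all cores good: sell as the 8-core SKU
salvage = p_good(7) + p_good(6)  # fuse off 1-2 bad cores: sell as 6-core

print(f"8-core bin: {full:.1%}")
print(f"salvageable as 6-core: {salvage:.1%}")
# Without fusing, only the 8-core bin would be sellable; with it,
# dies carrying one or two core defects are recovered as a lower SKU.
```

This is the mechanism behind selling the same die as different core-count SKUs.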

carlhjerpe 2 days ago | parent [-]

Yep, Cerebras takes that to the next level with their wafer-scale chips. A common technique is disabling defective cores entirely (that's how most cheaper CPUs are made).

But shrinking the die still increases yield, since you can pick and choose which known-good chiplets to package.