moffkalast 4 hours ago

If there's really such a bottleneck around ASML, why not design some extra chips for legacy processes that presumably already have well known design workflows?

I mean we're not talking AMD FX and Core 2 Duo here, it's Raptor Lake and Zen 3, it's perfectly viable and still being sold in droves right now.

irdc 4 hours ago | parent | next [-]

That’s what the likes of AMD with their chiplet design have been doing.

There’s also the issue of older process nodes not being profitable enough anymore, which explains why, at the height of the chip supply crunch, older ARM chips were in short supply while there was ample stock of the 40nm-process RP2040.

moffkalast 3 hours ago | parent [-]

This is gonna sound super dumb, but I'm not sure how they aren't profitable if there are shortages: just price things above break-even? The average person can't even tell the difference between a Core 5 and a Core 5 Ultra; you could practically sell them at the same price and I'm not sure they'd notice when actually using them. The performance jump is relatively minor and the bottlenecks are elsewhere.

MadnessASAP 3 hours ago | parent [-]

It mostly comes down to the consumer market not being significant enough by itself. A consumer may not notice a 10% increase in performance per watt or dollar. A large office building probably will, and a datacenter definitely will.
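The scale point is easy to sanity-check with back-of-envelope arithmetic. A rough sketch (the electricity rate and load figures below are illustrative assumptions, not real data):

```python
# Why a 10% perf/watt gain is invisible to a consumer but not to a datacenter.
# All figures are assumed round numbers for illustration.

KWH_PRICE = 0.10        # $/kWh, assumed industrial electricity rate
HOURS_PER_YEAR = 8760

def annual_power_cost(load_kw):
    """Yearly electricity cost for a constant load in kW."""
    return load_kw * HOURS_PER_YEAR * KWH_PRICE

desktop_kw = 0.2        # one desktop under load
datacenter_kw = 10_000  # a modest 10 MW facility

# A 10% perf/watt gain means ~10% less power for the same work.
for name, kw in [("desktop", desktop_kw), ("datacenter", datacenter_kw)]:
    saving = annual_power_cost(kw) * 0.10
    print(f"{name}: ~${saving:,.0f}/year saved by a 10% perf/watt gain")
```

With these assumed numbers the desktop owner saves pocket change per year, while the datacenter saves close to a million dollars.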

I don't think I'm being entirely hyperbolic when I say the consumer market only exists to put devices that can connect to and feed the datacenter loads into the general population's hands.

frangonf 3 hours ago | parent | prev | next [-]

Isn't this exactly what China is doing, apart from poaching ex-ASML employees? Now reaching 7nm, and just throwing more energy at the problem to catch up in FLOPS, like Jensen said?

simne 2 hours ago | parent | prev [-]

Because a very large share of the market now is datacenters. The difference from desktop is dramatic: for desktop, fairly simple chips with bad energy efficiency are perfectly acceptable, but DCs already deal with extremely high power consumption, since they typically "compress" so much consumption into one rack that they are constantly working near physical constraints.

moffkalast 2 hours ago | parent | next [-]

That's the AI hype narrative, but aren't server CPUs only like 25% of the total market? That's tiny compared to consumer volume, though revenue is likely on par given the higher cost per unit.

simne 2 hours ago | parent [-]

> aren't server CPUs only like 25% of the total market?

Yes and no. If you just calculate formally, yes, servers are a small market by volume. But they are much less financially constrained than private persons, so from the same fab you can earn much more money selling to the server market than to the consumer market.
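A toy calculation shows how a smaller unit share can still bring in more money per fab; the unit shares and average selling prices below are invented for illustration:

```python
# Assumed (not real) unit shares and average selling prices per segment.
units = {"consumer": 0.75, "server": 0.25}   # share of chips shipped
asp   = {"consumer": 300,  "server": 2400}   # assumed avg selling price, $

# Revenue contribution of each segment per unit of fab output.
for seg in units:
    rev = units[seg] * asp[seg]
    print(f"{seg}: {rev:.0f} revenue units")
```

With these assumed prices, servers at a quarter of the volume still out-earn the consumer segment.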

zozbot234 an hour ago | parent [-]

I don't think that's correct, server chips aren't really "more expensive" than consumer chips when you correctly account for performance. Older-gen server chips have comparable performance to new top-of-the-line consumer chips and sell for a similar price. Newer-gen server chips in turn are priced at a premium over the current value of the older-gen, to account for their higher performance. The lower financial constraints don't enter into it all that much.

adrian_b 19 minutes ago | parent [-]

For many years, until about a decade ago (more precisely, until the launch of the Intel Skylake Server processors), server CPUs had a performance per dollar comparable to desktop CPUs, so the expensive server CPUs were expensive because of their higher performance.

But since then the prices of server CPUs have ballooned, and now their performance per dollar is many times worse than that of desktop CPUs. Server CPUs have very good performance per watt, but the same performance per watt can be achieved with desktop CPUs by underclocking them.
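A quick illustration of the perf-per-dollar gap; the prices and performance scores below are invented round numbers, not actual list prices or benchmark results:

```python
# Assumed (not real) prices and relative performance scores.
cpus = {
    "desktop (16 cores)": {"price": 600,    "perf": 100},
    "server (96 cores)":  {"price": 10_000, "perf": 450},
}

for name, c in cpus.items():
    per_dollar = c["perf"] / c["price"]
    print(f"{name}: {per_dollar * 1000:.1f} perf per $1000")
```

With these assumed numbers the desktop chip delivers roughly 3-4x more performance per dollar, consistent with the "many times worse" claim above.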

The only advantage of server CPUs is that they aggregate in a single socket the equivalent of many desktop CPUs, including not only the aggregate number of cores, but also the aggregate number of memory channels and the aggregate number of PCIe lanes. Thus a server computer becomes equivalent to a cluster of desktop computers interconnected by network interfaces much faster than the typically available Ethernet links.

While for embarrassingly parallel tasks a server computer will cost many times more than a cluster of desktop computers with the same performance, it will have a much smaller disadvantage, or may even have a better performance/cost ratio, for tasks with a lot of interprocess/interthread communication, where the tight coupling between the many cores hosted in the same socket ensures lower latency and higher throughput for such communication.
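The aggregation point can be made concrete with assumed round figures for a current server socket and a desktop part (illustrative, not actual spec-sheet values):

```python
# Assumed round numbers, not real product specs.
server  = {"cores": 96, "mem_channels": 12, "pcie_lanes": 128}
desktop = {"cores": 16, "mem_channels": 2,  "pcie_lanes": 24}

# How many desktops would it take to match one server socket on each axis?
for key in server:
    ratio = server[key] / desktop[key]
    print(f"{key}: {ratio:.1f} desktops' worth in one socket")
```

Under these assumptions one socket replaces five or six desktops on every axis, and cross-core communication stays on-package instead of going over Ethernet.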

The owners of datacenters are willing to pay the much higher prices of modern server CPUs because consolidating multiple old servers into a single server brings savings in other components: fewer coolers, fewer power supplies, fewer racks, simpler maintenance and administration, etc.

While the retail prices of server CPUs are huge, the biggest customers, like cloud owners, can get very large discounts, so for them the difference compared with desktop CPUs is not as great as it is for SMEs and individuals. The large discounts that Intel was forced to accept during the last few years, to avoid losing too much of the market to AMD, are why Intel's server CPU division has lost many billions of dollars.

scotty79 2 hours ago | parent | prev [-]

You can't make a desktop computer 4 times larger, but there's very little preventing you from putting 4 racks where you had 1 before. If floor space is the expensive part of a datacenter, then probably some incentives are misaligned.

simne 2 hours ago | parent | next [-]

It's about the price of land and connectivity: in a large city, land prices start at a few million dollars per square kilometer, and using cable channels can cost from $50 per meter (easily $200/m).

Plus, arranging the space can take years.

Heat dissipation in the megawatt range may simply be prohibited by local regulations.

So space in large cities is a very serious problem, and for a business it is usually easier to "compress" as much computing power as possible into one rack.

IAmBroom an hour ago | parent [-]

> in a large city, land prices start at a few million dollars per square kilometer

There's little need to put large datacenters in downtown Chicago and Manhattan.

sbarre 2 hours ago | parent | prev | next [-]

Bigger chips = more distance for your electrons to cover = more power required = more heat generated = lower throughput for your data.

Surely you don't believe that the entire chip industry hasn't thought of "wait, what if we just make the chips bigger".

moffkalast 2 hours ago | parent [-]

AMD hiding Threadripper behind their back: Uh yeah what a terrible idea, we definitely didn't actually do that. Making a CPU that's twice the size, how ridiculous would that be right?!

simne 2 hours ago | parent | prev [-]

You cannot place a DC just anywhere; in large cities space is extremely constrained and land is extremely expensive.

Another big problem is connectivity: you cannot place a DC where it cannot be connected to the power grid and to a very high-capacity network.

So yes, DC floor space is severely limited.

And the third issue: for the last couple of decades, rack servers have dissipated extremely large amounts of heat. I've heard numbers up to tens of kilowatts per rack, which is just hard to dissipate with air cooling (as an example, all IBM Power servers have a liquid cooling option, but that is a totally different price range).
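The air-cooling limit can be sanity-checked with the heat-transport relation Q = mdot * cp * dT; the rack power and air temperature rise below are assumptions for illustration:

```python
# How much air does it take to carry away the heat of a dense rack?
AIR_CP = 1005.0      # J/(kg*K), specific heat of air at constant pressure
AIR_DENSITY = 1.2    # kg/m^3, air at roughly room temperature

def airflow_m3s(heat_w, delta_t_k):
    """Volumetric airflow needed to remove heat_w watts at a delta_t_k rise."""
    mass_flow = heat_w / (AIR_CP * delta_t_k)   # kg/s, from Q = m*cp*dT
    return mass_flow / AIR_DENSITY              # convert to m^3/s

rack_heat_w = 30_000   # assumed: a dense 30 kW rack
delta_t_k = 15.0       # assumed inlet-to-outlet air temperature rise

flow = airflow_m3s(rack_heat_w, delta_t_k)
print(f"~{flow:.1f} m^3/s (~{flow * 2119:.0f} CFM) of air through one rack")
```

That works out to several thousand CFM through a single rack, which is exactly why the densest racks push operators toward liquid cooling.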