bluGill 3 days ago
I don't build Linux from source, but in my tests on large machines (with my C++ work project of more than 10 million lines of code), compile speed starts decreasing somewhere between 40 and 50 cores as you add more. When I moved my source files to a ramdisk the build got even slower, so I know disk I/O isn't the issue (the machine had plenty of RAM, so I don't expect it was running low even with that many cores in use). I don't know how to prove it, but all signs point to memory bandwidth being the problem.

Of course, the above is specific to the machines I did my testing on; a different machine may behave differently from my setup. Still, my experience matches the claim: at 40 cores, memory bandwidth is the bottleneck, not CPU speed. Most people don't have 40+ core machines to play with, so they won't see these results. The machines I tested on cost more than $10,000, which most would argue is not affordable.
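For anyone who wants to reproduce this, here's a rough sketch of the kind of measurement I mean: time a clean parallel build at increasing -j values and watch where the wall-clock time stops improving. It assumes a make-based build; the commands are placeholders for whatever your project actually uses.

    // Hypothetical build-scaling harness: times `make -jN` for a range
    // of N and prints wall-clock seconds. Swap in your own build commands.
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main() {
        for (int jobs : {8, 16, 32, 40, 48, 64}) {
            std::system("make clean > /dev/null 2>&1");  // cold build each run
            auto t0 = std::chrono::steady_clock::now();
            std::string cmd = "make -j" + std::to_string(jobs) + " > /dev/null 2>&1";
            int rc = std::system(cmd.c_str());
            auto t1 = std::chrono::steady_clock::now();
            std::printf("-j%-3d rc=%d  %.1f s\n", jobs, rc,
                        std::chrono::duration<double>(t1 - t0).count());
        }
    }

If the curve flattens and then turns upward well before you run out of cores, the bottleneck is somewhere other than CPU.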
menaerus 3 days ago | parent
One of the biggest reasons people see such large compile-speed improvements on Apple M chips is the massive memory bandwidth compared to other machines, even some older servers: roughly 100 GB/s from a single core to main memory. It doesn't scale linearly as you add more and more cores to the workload (L3 contention, I'd say), but it tops out around 200 GB/s IIRC.
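A rough way to see that saturation yourself: have N threads each stream through a private buffer and watch aggregate GB/s plateau as N grows. This is just an illustrative sketch (buffer size and loop structure are arbitrary, not anything Apple documents):

    // Toy read-bandwidth probe: each thread sequentially sums a private
    // 256 MiB buffer; aggregate GB/s should stop scaling once the memory
    // system (or shared cache) saturates.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        constexpr size_t kBytesPerThread = 256ull << 20;
        constexpr size_t kWords = kBytesPerThread / sizeof(uint64_t);
        unsigned max_threads = std::max(1u, std::thread::hardware_concurrency());

        for (unsigned n = 1; n <= max_threads; n *= 2) {
            // Private buffers so we measure DRAM traffic, not data sharing.
            std::vector<std::vector<uint64_t>> bufs(n, std::vector<uint64_t>(kWords, 1));
            std::vector<uint64_t> sums(n, 0);

            auto t0 = std::chrono::steady_clock::now();
            std::vector<std::thread> threads;
            for (unsigned i = 0; i < n; ++i)
                threads.emplace_back([&, i] {
                    // The sum defeats dead-code elimination of the reads.
                    sums[i] = std::accumulate(bufs[i].begin(), bufs[i].end(),
                                              uint64_t{0});
                });
            for (auto& t : threads) t.join();
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            std::printf("%2u threads: %6.1f GB/s aggregate (checksum %llu)\n",
                        n, n * kBytesPerThread / secs / 1e9,
                        (unsigned long long)sums[0]);
        }
    }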