nottorp 3 hours ago
> Overclocking long ago was an amazing saintly act, milking a lot of extra performance that was just there waiting, without major downsides. Back when you bought a 233 MHz chip with RAM at 66 MHz, ran the bus at 100 MHz (which also increased your RAM speed if it could handle it), and everything was faster.

> But these days, chips are usually already well tuned. You can feed double or triple the power into the chip with adequate cooling, but the gain is so unremarkable. +10%, +15%, +20% is almost never going to be a make-or-break difference for your work.

20% in synthetic benchmarks maybe, or in very particular loads. Because you only overclock the CPU these days, anything hitting the RAM won't even get to 20%.
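To put rough numbers on that last point, here is a back-of-the-envelope sketch in Python (the 3.5x multiplier and the memory-bound fraction are my own illustrative assumptions, not figures from the comments above): the old FSB overclock sped up the CPU, bus and RAM together, while a modern CPU-only overclock only helps the part of the workload that isn't waiting on memory, Amdahl-style.

    # Two eras of overclocking, with illustrative numbers only.

    # Late-90s style: CPU clock = FSB * multiplier, and raising the FSB
    # drags the RAM and the rest of the platform up with it.
    fsb_stock, fsb_oc, multiplier = 66, 100, 3.5   # multiplier is an assumption
    cpu_stock = fsb_stock * multiplier             # ~233 MHz
    cpu_oc = fsb_oc * multiplier                   # ~350 MHz
    print(f"FSB overclock: +{cpu_oc / cpu_stock - 1:.0%} across the whole platform")

    # Modern style: only the cores get (say) 20% more clock, so Amdahl's law
    # caps the end-to-end speedup by the memory-bound share of the workload.
    cpu_speedup = 1.20          # +20% core clock
    mem_bound = 0.4             # assumed share of time spent waiting on RAM
    overall = 1 / (mem_bound + (1 - mem_bound) / cpu_speedup)
    print(f"CPU-only overclock: +{overall - 1:.0%} end to end")

With those made-up numbers the old trick nets roughly +50% everywhere, while the modern +20% core overclock shrinks to about +11% once the RAM-bound time is accounted for.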
mapt 3 hours ago
Initially, thermal throttling was a safety valve for a failure condition: a way to cripple performance briefly so as not to let the magic blue smoke out. Only a terrible PC would be thermal throttling out of the box; only neglectful owners who failed to clean their filters had thermal throttling happening routinely.

That's not how it works any more. Many of these CPUs, both at the high end and even a few tiers down from the top, are thermal throttling whenever they hit 100% utilization. I'm thinking of Intel's last couple of generations in particular. They're shipped with pretty good heatsinks, but not nearly good enough to run stock clocks on all cores at once. Instead, smarter grades of thermal throttling are designed for routine use to balance loads. Better heatsinks (and watercooling) help a bit, but not enough; you end up hitting a wall. Only the risky process of delidding seems to push further. We're running into limits on how well a conventional heatsink can transfer heat from a limited contact patch.

GPUs seem to have more effective heatsinks and are bottlenecked mostly by power requirements. The 600 watt monsters are already melting cables that aren't in perfect condition.
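If you want to see this on your own machine, here is a minimal Linux-only sketch in Python (assuming the standard cpufreq and thermal sysfs nodes are present; thermal_zone0 is not necessarily the CPU package on every board, so treat the paths as assumptions). Run an all-core load in another terminal and watch whether the clocks sag once the temperature plateaus.

    # Sample per-core clocks and a thermal zone once per second (Linux only).
    # scaling_cur_freq is reported in kHz, the thermal zone in millidegrees C.
    import glob, time

    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())

    freq_paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
    temp_path = "/sys/class/thermal/thermal_zone0/temp"  # assumption: zone 0 = CPU

    while True:
        mhz = [read_int(p) // 1000 for p in freq_paths]
        temp_c = read_int(temp_path) / 1000
        print(f"{temp_c:5.1f} C | min {min(mhz)} MHz / max {max(mhz)} MHz")
        time.sleep(1)

If the slowest core's clock drops well below the advertised all-core speed while the temperature sits pinned at its limit, that's the routine throttling described above.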