deliciousturkey 16 hours ago
I'd say over 70% of all computing has been non-CPU for years. If you look at your typical phone or laptop SoC, the CPU is only a small part. The GPU takes the majority of the area, with other accelerators also taking significant space. Manufacturers would not spend that money on silicon if it were not already being used.
goku12 15 hours ago
> I'd say over 70% of all computing has been non-CPU for years. If you look at your typical phone or laptop SoC, the CPU is only a small part.

Keep in mind that die area doesn't always correspond to the throughput (average rate) of the computations done on it. That area may instead be allocated for higher computational bandwidth (peak rate) and lower latency. In other words, you get the results of a large number of computations faster, even if the circuits idle for the rest of the cycles. I don't know the situation on mobile SoCs with regard to those quantities.
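The distinction above can be made concrete with some arithmetic. The numbers below are purely hypothetical (not measured from any real SoC): a fixed-function block can have a very high peak rate yet contribute little average throughput if it sits idle most of the time.

```python
# Hypothetical accelerator: sized for a high peak rate, but only
# busy for a small fraction of cycles (both numbers are assumptions).
peak_ops_per_s = 2e12   # computational bandwidth (peak rate)
busy_fraction = 0.05    # fraction of time the block is actually working

# Average throughput is what "share of all computing" would measure.
avg_ops_per_s = peak_ops_per_s * busy_fraction
print(avg_ops_per_s)  # 1e11 ops/s: just 5% of peak, despite the die area
```

So a large die area can buy low latency and high peak bandwidth without translating into a proportionally large share of total computation performed.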
swiftcoder 12 hours ago
> If you look at your typical phone or laptop SoC, the CPU is only a small part

In mobile SoCs a good chunk of this is power efficiency. On a battery-powered device, there's always a tradeoff: spend die area making something like 4K video playback more power efficient, versus spending it on general-purpose compute.

Desktop-focussed SKUs are more liable to spend a metric ton of die area on bigger caches close to the compute.
PunchyHamster 15 hours ago
Going by raw operations performed, that's probably true for computers/laptops whenever the workload uses 3D rendering for the UI. Watching a YouTube video is essentially the CPU pushing data between the internet and the GPU's video decoder, and on to the GPU-accelerated UI.