wvenable 3 days ago

By prioritizing efficiency, Apple also prioritizes integration. The PC ecosystem prefers less integration (separate RAM, GPU, OS, etc) even at the cost of efficiency.

AnthonyMouse 3 days ago | parent | next [-]

> By prioritizing efficiency, Apple also prioritizes integration. The PC ecosystem prefers less integration (separate RAM, GPU, OS, etc) even at the cost of efficiency.

People always say this but "integration" has almost nothing to do with it.

How do you lower the power consumption of your wireless radio? You have a network stack that queues non-latency sensitive transmissions to minimize radio wake-ups. But that's true for radios in general, not something that requires integration with any particular wireless chip.
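
A minimal sketch of that kind of batching, with made-up names (this isn't any real driver API), might look like:

```c
#include <stdbool.h>
#include <stddef.h>

#define BATCH_SIZE 32

struct packet { const void *data; size_t len; bool latency_sensitive; };

/* Provided elsewhere by the (hypothetical) driver: powers the radio up,
 * transmits the batch, lets the radio sleep again. */
void radio_wake_and_send(const struct packet *pkts, size_t n);

static struct packet queue[BATCH_SIZE];
static size_t queued = 0;

void submit(struct packet p)
{
    queue[queued++] = p;

    /* Latency-sensitive traffic flushes immediately; everything else waits
     * until a full batch (or a coarse timer, not shown) so the radio wakes
     * once per batch instead of once per packet. */
    if (p.latency_sensitive || queued == BATCH_SIZE) {
        radio_wake_and_send(queue, queued);
        queued = 0;
    }
}
```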

How do you lower the power consumption of your CPU? Remediate poorly written code that unnecessarily keeps the CPU in a high power state. Again not something that depends on a specific CPU.
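
The usual offender is code that polls instead of blocking, and the fix is the same on any CPU. A generic illustration (POSIX threads, nothing vendor-specific):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool ready = false;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* The classic power bug: spinning keeps the core in a high power state
 * even though there is no work to do yet. */
void wait_busy(void)
{
    while (!atomic_load(&ready))
        ;  /* core never drops into a low-power idle state */
}

/* The fix: block until woken, letting the core sleep in the meantime. */
void wait_blocking(void)
{
    pthread_mutex_lock(&lock);
    while (!atomic_load(&ready))
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

void mark_ready(void)
{
    pthread_mutex_lock(&lock);
    atomic_store(&ready, true);
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&cond);
}
```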

How much power is saved by soldering the memory or CPU instead of using a socket? A negligible amount if any; the socket itself has no significant power draw.

What Apple does well isn't integration, it's choosing (or designing) components that are each independently power efficient, so that then the entire device is. Which you can perfectly well do in a market of fungible components simply by choosing the ones with high efficiency.

In fact, a major problem in the Android and PC laptop market is that the devices are insufficiently fungible. You find a laptop you like where all the components are efficient except that it uses an Intel processor instead of the more efficient ones from AMD, but those components are all soldered to a system board that only takes Intel processors. Another model has the AMD APU but the OEM there chose poorly for the screen.

It's a mess not because the integration is poor but because the integration exists instead of allowing you to easily swap out the part you don't like for a better one.

adgjlsfhk1 3 days ago | parent [-]

> How much power is saved by soldering the memory or CPU instead of using a socket? A negligible amount if any; the socket itself has no significant power draw.

This isn't quite true. When the whole chip is idling at 1-2W, 0.1W of socket power is 10%. Some of Apple's integration almost certainly saves power (e.g. putting storage controllers for the SSD on the SoC, having tightly integrated display controllers, etc.).

AnthonyMouse 3 days ago | parent [-]

> When the whole chip is idling at 1-2W, 0.1W of socket power is 10%.

But how are you losing 10% of power to the socket at idle? Having a socket might require traces to be slightly longer but the losses to that are proportional to overall power consumption, not very large, and both CPU sockets and the new CAMM memory standard are specifically designed to avoid that anyway (primarily for latency rather than power reasons because the power difference is so trivial).

> Some of Apple's integration almost certainly save power (e.g. putting storage controllers for the SSD on the SOC, having tightly integrated display controllers, etc).

This isn't really integration and it's very nearly the opposite: the primary hardware advantage is that the SoC is fabbed on 3nm, so a storage controller placed on it is too, and you'd get the same advantage from an independent storage controller made on the same process.

Which is the problem with PCs again: the SSDs are too integrated. Instead of giving the OS raw access to the flash chips, they attach a separate controller just to do error correction and block remapping, which could better be handled by the OS on the main CPU (which is fabbed on a newer process) or, in larger devices with a storage array, by a RAID controller that performs the task for multiple drives at once.

And which would you rather have, a dual-core ARM thing integrated with your SSD, or the same silicon going to two more E-cores on the main CPU which can do the storage work when there is any but can also run general purpose code when there isn't?
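
For concreteness, a toy sketch of the block-remapping piece of that work, the kind of thing that could run on the host instead of on a controller bolted to the drive (illustrative only; real flash translation layers also do error correction, wear leveling, and garbage collection):

```c
#include <stdint.h>

#define LOGICAL_BLOCKS  1024
#define PHYSICAL_BLOCKS 1152          /* spare blocks held in reserve */
#define UNMAPPED        UINT32_MAX

static uint32_t l2p[LOGICAL_BLOCKS];  /* logical -> physical block map */
static uint32_t next_free = 0;        /* naive free-block allocator */

void ftl_init(void)
{
    for (uint32_t i = 0; i < LOGICAL_BLOCKS; i++)
        l2p[i] = UNMAPPED;
}

/* Flash can't be rewritten in place, so each write goes to a fresh physical
 * block and the map is updated; the old block is reclaimed later. */
uint32_t ftl_write(uint32_t logical)
{
    uint32_t physical = next_free++ % PHYSICAL_BLOCKS;
    l2p[logical] = physical;
    return physical;
}

uint32_t ftl_read(uint32_t logical)
{
    return l2p[logical];              /* UNMAPPED means never written */
}
```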

happycube 3 days ago | parent | prev | next [-]

There's a critical instruction for Objective-C handling (I forget exactly which one) that Apple's chips run faster than Intel's do, even under Rosetta 2's x86 emulation.

wvenable 3 days ago | parent [-]

I believe it's the `lock xadd` instruction. It's faster when combined with the x86 Total Store Ordering mode that the Rosetta emulation runs under.

saagarjha 3 days ago | parent [-]

Looking at objc_retain, apparently it's a `lock cmpxchg` these days
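
For reference, the two shapes being discussed look roughly like this in portable C. This is only a sketch of the atomic primitives; the real objc_retain is more involved (it packs the retain count into spare isa bits and checks for overflow):

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct { _Atomic uint64_t refcount; } obj_t;

/* On x86 this typically compiles to a locked add (`lock xadd` when the old
 * value is needed). */
void retain_fetch_add(obj_t *o)
{
    atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
}

/* The compare-exchange form compiles to a `lock cmpxchg` loop; it lets you
 * inspect and modify other bits of the word before committing. */
void retain_cmpxchg(obj_t *o)
{
    uint64_t old = atomic_load_explicit(&o->refcount, memory_order_relaxed);
    uint64_t new;
    do {
        new = old + 1;   /* the real code would also handle overflow here */
    } while (!atomic_compare_exchange_weak_explicit(
                 &o->refcount, &old, new,
                 memory_order_relaxed, memory_order_relaxed));
}
```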

Panzer04 3 days ago | parent | prev [-]

Eh, probably the biggest difference is in the OS. The amount of time Linux or Windows will spend using a processor while completely idle can be a bit offensive.

acdha 3 days ago | parent [-]

It's all of the above. One thing Apple excels at is actually using their hardware and software together, whereas the PC world has a long history of one of the companies like Intel, Microsoft, or the actual manufacturer trying to make things better but failing to get the others on board. You can in 2025 find people who disable power management because they were burned (hopefully not literally) by some combination of vendors slacking on QA!

One good example of this is RAM. Apple Silicon got some huge wins from lower latency and massive bandwidth, but that came at the cost of making RAM fixed and more expensive. A lot of PC users scoffed at the default RAM sizes until they actually used one and realized it was great at ~8GB less than the equivalent PC. That’s not magic or because Apple has some super elite programmers, it’s because they all work at the same company and nobody wants to go into Tim Cook’s office and say they blew the RAM budget and the new Macs need to cost $100 more. The hardware has compression support and the OS and app teams worked together to actually use it well, whereas it’s very easy to imagine Intel adding the feature but skimping on speed / driver stability, or Microsoft trying to implement it but delaying release for a couple years, or not working with third-party developers to optimize usage, etc. – nobody acting in bad faith but just what inevitably happens when everyone has different incentives.
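
The underlying idea is simple to sketch: instead of writing a cold page out to swap, compress it into a smaller slot that stays in RAM. A rough illustration, using zlib purely as a stand-in compressor (real implementations use much faster algorithms and do this inside the kernel's pager):

```c
#include <stdint.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SIZE 16384   /* Apple Silicon uses 16KiB pages */

/* Compress a cold page into a smaller heap allocation instead of swapping
 * it to disk. Returns the compressed slot, or NULL if the page didn't
 * shrink enough to be worth keeping compressed (arbitrary 2:1 policy here). */
uint8_t *compress_page(const uint8_t page[PAGE_SIZE], size_t *out_len)
{
    uLongf len = compressBound(PAGE_SIZE);
    uint8_t *slot = malloc(len);
    if (!slot)
        return NULL;

    if (compress(slot, &len, page, PAGE_SIZE) != Z_OK || len >= PAGE_SIZE / 2) {
        free(slot);        /* incompressible page: fall back to regular swap */
        return NULL;
    }
    *out_len = len;
    return slot;
}
```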

p_ing 3 days ago | parent | next [-]

Windows 10 introduced memory compression. Here's a discussion from 2015 [0]. And one on Linux by IBM from 2013 [1]. But the history goes way back [2].

I don't know why people keep saying '8GiB is great!' -- no, no it isn't. Your memory usage just spills over to swap faster. It isn't more efficient (not with those 16KiB pages).

[0] https://learn.microsoft.com/en-us/shows/Seth-Juarez/Memory-C...

[1] https://events.static.linuxfound.org/sites/events/files/slid...

[2] https://en.wikipedia.org/wiki/Virtual_memory_compression#Ori...

acdha 3 days ago | parent | next [-]

Yes, I'm aware that Windows has memory compression, so let's think about why it's less successful and why Windows systems need more memory than Macs.

The Apple version has a very high-performance hardware implementation versus Microsoft's software implementation (not a slam on Microsoft, they just have to support more hardware).

The Apple designers can assume a higher performance baseline memory subsystem because, again, they're working with hardware designers at the same company who are equally committed to making the product succeed.

The core Mac frameworks are optimized to reduce VM pressure and more Mac apps use the system frameworks, which means that you're paying the overhead tax less.

Many Mac users use Safari instead of Chrome, so they're saving multiple GB on an app which most people have open constantly, as well as on all of the apps which embed a WebKit view.

Again, this is not magic, it's aligned incentives. Microsoft doesn't control Intel, AMD, and Qualcomm's product design, they can't force Google to make Chrome better, and they can't force every PC vendor not to skimp on hardware. They can and do work with those companies, but it takes longer and in some cases the cost incentives are wrong – e.g. a PC vendor knows 99% of their buyers will blame Windows if they use slower RAM to save a few bucks or get paid to preload McAfee, which keeps the memory subsystem busy constantly, so they take the deal that adds to their bottom line now.

p_ing 3 days ago | parent [-]

Neither macOS nor Windows uses a hardware-based accelerator for memory compression. It's all done in software. Linux zram can use Intel QAT, but that's only available on a limited number of processors.

You seem to be under the mistaken impression that Microsoft cannot gear Windows to act differently based on the installed hardware (or processor). That's quite untrue.

acdha 3 days ago | parent | next [-]

It was software on Intel but they presumably added instructions with the intention of using them:

https://asahilinux.org/docs/hw/cpu/apple-instructions/

> You seem to be under the mistaken impression that Microsoft cannot gear Windows to act differently based on the installed hardware (or processor).

Definitely not - my point is simply that all of these things are harder and take longer if they have to support multiple implementations and get other companies to ship quality implementations.

p_ing 2 days ago | parent [-]

> Definitely not - my point is simply that all of these things are harder and take longer if they have to support multiple implementations and get other companies to ship quality implementations.

What's your source that it is "harder" or "takes longer"? #ifdef is a well-known preprocessor directive and easy for developers to use.
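
For example, a per-architecture split is just a build-time branch (function names made up):

```c
#include <stdio.h>

static void tune_for_arm64(void)  { puts("ARM64 code path"); }
static void tune_for_x86_64(void) { puts("x86-64 code path"); }

int main(void)
{
    /* Decided when the binary is compiled, not when it runs. */
#if defined(__aarch64__)
    tune_for_arm64();
#elif defined(__x86_64__)
    tune_for_x86_64();
#else
    puts("generic code path");
#endif
    return 0;
}
```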

acdha 2 days ago | parent [-]

> What's your source for it is "harder" or "takes longer"?

Windows devices' power management and battery life have been behind Apple's since the previous century? If you think hardware support is a simple #ifdef, ask yourself how a compile-time flag can detect firmware updates, driver versions, or flaky hardware. It's not that Apple's hardware is perfect, but that those pieces are made by the same company, so you don't get Dell telling you to call Broadcom, who are telling you to call Microsoft.

astrange 3 days ago | parent | prev [-]

> Neither macOS nor Windows use a hardware-based accelerator for memory compression.

Not true.

tguvot 3 days ago | parent | prev | next [-]

I think 20+ years ago I had servers with this: https://www.eetimes.com/serverworks-rolls-out-chip-set-based...

achandlerwhite 3 days ago | parent | prev [-]

He didn't say 8 gigs is great, but that you can get by with about 8 gigs less than the equivalent Windows machine.

astrange 3 days ago | parent | prev | next [-]

> Apple Silicon got some huge wins from lower latency and massive bandwidth, but that came at the cost of making RAM fixed and more expensive.

The memory latency actually isn't good; only the bandwidth is really good. But there is a lot of cache to hide that. (The latency of fetches between CPU clusters is actually kind of bad too, so it's important not to contend on those cache lines.)

> A lot of PC users scoffed at the default RAM sizes until they actually used one and realized it was great at ~8GB less than the equivalent PC.

Less than that. Unified memory means that the SSD controller, display, etc. subtract from that 8GB, whereas on a PC they have some of their own RAM on the side.

wvenable 3 days ago | parent | prev | next [-]

I don't buy it. Software is not magically using less RAM because it was compiled for macOS. The actual RAM use by the OS itself is relatively small for both operating systems.

Panzer04 3 days ago | parent | prev | next [-]

Is this meant to be contradicting what I said?

It's all in the OS. There's absolutely no reason RAM can't be managed similarly effectively on a non-integrated product.

Android is just Linux with full attention paid to power-saving measures. These OSes can get very long battery life, but in my experience something or other typically keeps the processor active and halves your expected battery life.

acdha 3 days ago | parent [-]

My point is that it's not just the OS for Apple, because every part of the device is made by people with the same incentives. Android is slower and has worse battery efficiency than iOS not because Google are a bunch of idiots (quite the contrary) but because they have to support a wider range of hardware and work with vendors who will use slower, less capable components to save $3 per device. Apple had a decade lead on basic things like storage security because, when they decided to do it, the hardware team committed to putting high-quality encryption into the SoC. That meant iOS could just assume the feature existed and was fast, starting on the 3GS, whereas Google had to spend years and years haranguing the actual phone manufacturers into implementing what was at the time seen as a costly optional feature.

mschuster91 3 days ago | parent | prev [-]

Apple can get away with less RAM because the flash storage is blazing fast and attached directly to the CPU, making swap much more painless than on most Windows machines that get bottom-of-the-barrel storage and controllers.

acdha 3 days ago | parent [-]

Yes, that's exactly what I'm talking about: Apple can do that because everyone involved works at the same place and owns the success or failure of the product. Google or Microsoft have to be a lot more conservative because they have limited ability to force the hardware vendors to do something and they'll probably get blamed more if it doesn't work well: people are primed to say “Windows <version + 1> sucks!” even if the real answer is that their PC was marginally-specced when they bought it 5 years ago.