| ▲ | jasonjmcghee 7 hours ago |
| Out of sheer curiosity, why do Apple devices have astronomically longer battery life when sleeping? (How is the sleep so efficient?) I was busy with work and didn't touch my personal laptop for a few weeks and it still had well over half the battery. |
|
| ▲ | iknowstuff 7 hours ago | parent | next [-] |
| They write their own software. And firmware. Other OEMs can just beg their tier 1/2 suppliers to get their shit together and put components to sleep properly by making Windows, drivers, and firmware work well together. Also things like LPDDR5X, an SSD controller built into the SoC with its cache in unified RAM (instead of running a whole-ass separate computer with its own RAM on an M.2 stick), etc. |
| |
| ▲ | cogman10 7 hours ago | parent | next [-] | | This is it. Sleep is such a finicky thing that requires every part of the system to do it right. My desktop lost the ability to sleep because, I guess, the Nvidia drivers have decided that you are wrong to want to put things to sleep. | |
| ▲ | trelane 7 hours ago | parent | prev | next [-] | | Exactly. This is precisely why I stopped buying Windows computers and started buying System76. Well that and the support. Looks like Framework has started heading this direction too, which is nice to see. | |
| ▲ | jeffbee 7 hours ago | parent | prev [-] | | Great point about the storage. That is another place where the repairability meme is really not helping. Moving the storage controller up into the host SoC is a good idea and the PC world should adopt it. Apple's storage controller is not even a PCIe peripheral internally, so it's saving power and latency cutting out that interface, even when it's active. | | |
| ▲ | Plasmoid2000ad 4 hours ago | parent [-] | | I'm having a tough time wrapping my head around how this could work for PCs today. I'm guessing Intel/AMD could integrate a single SSD controller that OEMs could use for a specially socketed SSD? I'm not familiar enough with SSD controllers - but what limits would this introduce? I'm thinking they can't be totally generic - with any NAND chips, any layout, 1-4 chips, TLC or QLC NAND, any capacity, etc. It strikes me it would be limiting - you would become restricted to a small subset of SSDs, maybe not forwards-compatible with newer NAND chips, etc. I'd think only a minority of PC laptops would make sense to have this - ones with soldered SSDs - and I don't know many of these. So Intel/AMD would need a big push to integrate any controller. Maybe Windows ARM laptops, if the controller makes a big enough difference, will do this. I'm curious now if any Snapdragon devices are doing this already. |
|
|
|
| ▲ | zekica 7 hours ago | parent | prev | next [-] |
| Mainly because Microsoft wants to have "connected standby": the CPU is running in a low power mode (not powered off like "old" S3 sleep), can be turned on periodically and can turn on other devices even when the computer is "sleeping". My Zen2 based Lenovo laptop has 6-7 hours of battery when doing basic tasks in both Windows and Linux, but sleep on Linux lasts a week while on Windows it's empty in 24 hours. |
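The s2idle-vs-S3 distinction above can be checked directly on Linux. A minimal sketch (the sysfs path is standard, but which modes appear depends entirely on your firmware):

```shell
# List the suspend variants the kernel/firmware expose.
# "s2idle" is suspend-to-idle (the connected-standby analogue); "deep" is classic S3.
# The bracketed entry is the mode currently used when you suspend.
cat /sys/power/mem_sleep 2>/dev/null || echo "mem_sleep not exposed here"

# If the firmware still offers S3, you can switch to it (as root):
#   echo deep > /sys/power/mem_sleep
```

On many recent laptops only `s2idle` is listed because the vendor dropped S3 support from the firmware entirely, which is why even Linux ends up in connected standby on those machines.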
| |
| ▲ | iknowstuff 7 hours ago | parent | next [-] | | Macs have that too, just implemented well. In addition, CPUs with connected standby don't have the normal sleep, so even on Linux they run in connected standby. Maybe it's less buggy in your case? Consider yourself lucky; lots of people encounter problems with sleep on Linux | | |
| ▲ | trelane 6 hours ago | parent [-] | | > lots of people encounter problems with sleep on linux Yeah, because they buy a Windows laptop, slap Linux on it, and expect it to work. OSX sucks even more by this metric; it won't even install! |
| |
| ▲ | cheema33 4 hours ago | parent | prev | next [-] | | > Mainly because Microsoft wants to have "connected standby"... And that is OK, as long as they provide a way for you to disable it. I do not want my laptop to be doing things when I put it in sleep mode. Nothing at all. Save battery life above all else when sleeping. But Microsoft does not appear to provide a way to do that. At least none that I can see. | |
| ▲ | TiredOfLife 6 hours ago | parent | prev [-] | | And the funny thing is that with Windows 10 they completely abandoned all the software that could take advantage of connected standby |
|
|
| ▲ | Neikius 7 hours ago | parent | prev | next [-] |
| Trying to reduce idle power use of a simple ESP32-based project I did a while back... Yeah, it is indeed tricky. Apple having full control of their hardware supply chain, firmware, and software helps a ton. And PC standardization issues do no good either. On the other hand, Framework is actually in a good position to do something about it, similar to Valve. I think they have more control than a regular PC vendor when also using Linux, and they have a very limited portfolio of devices and can actually upstream software fixes. |
|
| ▲ | jeffbee 7 hours ago | parent | prev [-] |
| I think it's just a vertical integration thing. They know what's in the machine and they can make sure that their suspend path puts every peripheral to sleep. Linux has no idea what's in your machine and there may be some device in there somewhere that freaks out if the machine goes to sleep without saying goodnight. Even a 50mW draw will destroy the suspend power budget. Chromebooks have similar vertical integration with respect to ChromeOS and they also enjoy long sleep life. Hypothetically an integrator like Framework can also guarantee this but I can't vouch for it being true, and they would not have any control over Ubuntu updates after the laptop is delivered to the customer. Just to beat my favorite dead horse, this is why the insistence on SO-DIMMs "BEcAuse it's rEpAIrAble" has wrecked the reputation of a lot of laptops. DDR on a stick is fundamentally hostile to sleep power draw. Soldered-down LPDDR memory has always been massively superior for energy savings, and LP-CAMM finally solves the issue. |
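To put the 50 mW figure in perspective, a rough back-of-the-envelope calculation (the 60 Wh battery and the baseline sleep draws are assumed round numbers, not measurements):

```shell
# A 60 Wh battery drained by a single stray 50 mW (0.05 W) draw:
# hours = Wh / W
awk 'BEGIN { printf "%.0f hours (~%.0f days)\n", 60/0.05, 60/0.05/24 }'
# -> 1200 hours (~50 days)

# But the suspend budget is the *total* across all rails. If a well-tuned
# machine sleeps at ~100 mW, one extra misbehaving 50 mW device cuts
# sleep life from 25 days to about 17:
awk 'BEGIN { printf "%.0f -> %.0f days\n", 60/0.10/24, 60/0.15/24 }'
# -> 25 -> 17 days
```

So a single peripheral that fails to reach its lowest power state doesn't just shave a little off the top; it can eat a third or more of the whole suspend budget.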
| |
| ▲ | Rohansi 7 hours ago | parent [-] | | How does soldering memory help reduce sleep power consumption vs. using a socket? What is different other than how they are physically connected to the board? | | |
| ▲ | jeffbee 6 hours ago | parent [-] | | It's not the form factor itself that is the problem. LPDDR is more efficient for various reasons and cannot be on a DIMM. It physically will not work with a socket. That is the problem that LP-CAMM solves: LPDDR but still removable. | | |
| ▲ | Koffiepoeder 5 hours ago | parent [-] | | You did not answer the question. | | |
| ▲ | jeffbee 4 hours ago | parent [-] | | Did I not? I'm trying my best here. The question is sort of off-target, though. What I am trying to say is: 1) DDR uses more power than LPDDR; 2) LPDDR cannot work on a DIMM socket, because of its lower voltage signals, and other reasons; 3) SO-DIMMs always contain the higher power DDR; QED) if you insist on SO-DIMMs, then you have to spend more energy. | | |
| ▲ | Koffiepoeder 4 hours ago | parent [-] | | Rohansi was basically asking 'why'. You keep reiterating that DDR uses more power than LPDDR but fail to explain why this is the case. Is it clock speed? Is it voltage? Is it a protocol/specification difference? 'Various reasons' is not an answer. | |
| ▲ | ua709 4 hours ago | parent | next [-] | | There is no physics based reason why it couldn't work. If the industry really wanted to do it they could. But they don't. The primary reason is LPDDR just has too many pins. A DDR5 SODIMM has 262 pins and is an unwieldy beast. LPDDR5 has 644 pins. LPCAMM2 really shows the trade-offs. It adds a lot of bulk and cost, and repairability hasn't been valued high enough by the market to cover that overhead for most consumers. That's why Micron exited the market they played a big part in founding. https://www.ifixit.com/News/95078/lpcamm2-memory-is-finally-... | |
| ▲ | jeffbee 4 hours ago | parent | prev [-] | | LPDDR is very different from DDR so I don't really feel like diving into it in this tiny box. It has its own oscillators so the CPU doesn't have to clock it while asleep; it adaptively refreshes less often according to temperature; during self-refresh the cells are charged to a lower voltage that wouldn't really work for high-speed I/O but works fine for retention. |
|
|
|
|
|
|