ankurdhama 2 days ago

So the "hardware failure" happening exactly at the same time the Windows update installation failed are not related? That sounds like a one in a billion kind of coincident.

eli 2 days ago | parent | next [-]

An upgrade process involves heavy CPU use, disk reads and writes, and at least a few power cycles in a short time period. Depending on what OP was doing with it otherwise, it could've been the highest temperature the device had ever seen. It's not so crazy.

My guess would've been SSD failure, which would plausibly surface after lots of writes. In the olden days I used to cross my fingers when rebooting spinning-disk servers with very long uptimes, because it was known there was a chance they wouldn't come back up even though they had been running fine.

jonathanlydall 2 days ago | parent | next [-]

Not for a server, but many years ago my brother's work desktop failed after he cold-booted it for the first time in a very long time.

Normally he would leave his work machine turned on but locked when leaving the office.

Office was having electrical work done and asked that all employees unplug their machines over the weekend just in case of a surge or something.

On the Monday my brother plugged in his machine and it wouldn't turn on. Initially the IT guy remarked that my brother hadn't followed the instructions to unplug it.

He later retracted the comment after it was determined the power supply capacitors had gone bad a while back, but the issue with them was not apparent until they had a chance to cool down.

GCUMstlyHarmls 2 days ago | parent | prev | next [-]

> In the olden days I used to cross my fingers when rebooting spinning-disk servers with very long uptimes, because it was known there was a chance they wouldn't come back up even though they had been running fine.

HA! Not just me then!

I still have an uneasy feeling in my gut doing reboots, especially on AM5, where the initial memory training can take 30s or so.

I think most of my "huh, it's broken now?" experiences as a youth were probably the install itself getting wonky rather than the rare "it exploded" hardware failure after a reboot, though that definitely happened.

zelon88 2 days ago | parent | prev | next [-]

This, 100%.

I'd like to add my reasoning for a similar failure of an HP ProLiant server I encountered.

Sometimes hardware can fail during a long uptime and not become a problem until the next reboot. Consider a piece of hardware with 100 features. During typical use, the system may only exercise 50 of them. Now imagine one of the unused features has failed. That causes no catastrophic failure during typical use, but on startup (which rarely occurs) the feature is necessary, and the system will not boot without it. If it could get past the boot phase, it could still perform its task, because the damaged feature isn't needed afterwards. But it can't get past the boot phase, where the feature is required.

Tl;dr: the system actually failed months ago, and the user didn't notice because the broken feature wasn't needed again until the next reboot.

startupsfail 2 days ago | parent [-]

Is there a good reason why upgrades need to stress-test the whole system? Can't they go slowly, throttling resource usage to background levels?

They involve heavy CPU use and stress the whole system completely unnecessarily; the system may well see the highest temperature it has ever reached during one of these stress tests. If something fails or gets corrupted under that strain, it's a system-level corruption...

Incidentally, Linux kernel upgrades are no better. During DKMS updates the CPU load skyrockets, and then the reboot is always sketchy. There's no guarantee that something won't go wrong; a secure boot issue after a kernel upgrade in particular can be a nightmare.

zelon88 2 days ago | parent [-]

To answer your question, it helps to explain what the upgrade process entails.

In the case of Linux DKMS updates: DKMS recompiles your installed kernel modules to match the new kernel. Sometimes a kernel update also updates the system compiler, in which case it can be beneficial for performance or stability to have all your existing modules recompiled with the new compiler version. The new kernel comes with a new build environment, which DKMS uses to recompile existing kernel modules, ensuring stability and consistency with that new kernel and build system.
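
To make that concrete, the rebuild-on-upgrade behavior is driven by a small dkms.conf shipped with each module's source. A minimal sketch (the package name and paths here are illustrative, not from any real module):

    PACKAGE_NAME="example"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="example"
    DEST_MODULE_LOCATION[0]="/kernel/extra"
    # AUTOINSTALL is what makes DKMS rebuild the module for every new kernel
    AUTOINSTALL="yes"

With AUTOINSTALL enabled, installing a new kernel triggers a fresh compile of the module against that kernel's headers, which is exactly the CPU spike described above.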

Also, kernel modules and drivers may have many code paths that should only be run on specific kernel versions. This is called 'conditional compilation', and it is a technique programmers use to develop cross-platform software. Think of it as one set of source files that generates wildly different binaries depending on the machine that compiles them. By recompiling the source after the new kernel is installed, the resulting binary may be drastically different from the one built for the previous kernel. Source code compiled against a 10-year-old kernel might contain different code paths and routines than the same source compiled against the latest kernel.
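
For a hedged illustration of what that looks like in practice, here is a toy kernel module in C; the 5.6.0 cutoff is arbitrary, chosen purely to show the mechanism:

    /* demo.c: conditional compilation keyed on the kernel version */
    #include <linux/module.h>
    #include <linux/version.h>

    static int __init demo_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 6, 0)
        /* this branch exists only in binaries built against 5.6+ headers */
        pr_info("demo: built for a 5.6+ kernel\n");
    #else
        /* older kernels get a different code path entirely */
        pr_info("demo: built for a pre-5.6 kernel\n");
    #endif
        return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

The preprocessor discards one of the two branches at compile time, so the binary DKMS produces genuinely depends on which kernel headers it was built against.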

Compiling source code is incredibly taxing on the CPU, and it takes significantly longer when CPU usage is throttled; compiling large modules on extremely slow systems can take hours. Managing hardware health and temperatures is mostly a hardware-level job handled by firmware on the hardware itself. That is usually abstracted away from software developers, who need to be able to assume that the machine running their code is functional and stable enough to run it. This is why we have "minimum hardware requirements."

Imagine if every piece of software contained code to monitor and manage CPU cooling. You would have programs fighting each other over hardware priorities, and competing control systems, some more effective and secure than others. Instead, the hardware is designed to do this job intrinsically, and developers are free to focus on the output of their code on a healthy, stable system. If a particular system is not stable, that falls on the administrator of that system. By separating responsibility between software, hardware, and implementation, we have clear boundaries around who cares about what, and a cohesive operating environment.

startupsfail a day ago | parent [-]

The default could be that a background upgrade should not be a foreground stress test.

Imagine you are driving a car and from time to time, without any warning, it suddenly starts accelerating and decelerating aggressively. Your powertrain, engine, and brakes take wear and tear, and at random the car also spins out and rolls, killing everyone inside (data loss).

This is roughly how current unattended upgrades work.
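
The CPU half of that default is cheap to express. A minimal sketch, assuming the updater runs as an ordinary Linux process (the actual upgrade work is elided):

    /* sketch: an updater demoting itself to background CPU priority */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* nice value 19 is the lowest CPU priority; the scheduler
           will favor interactive work over this process */
        if (setpriority(PRIO_PROCESS, 0, 19) != 0)
            perror("setpriority");

        /* ... the upgrade work itself would run here ... */
        return 0;
    }

A real updater could go further (I/O priority via ionice, or SCHED_IDLE scheduling), but even plain niceness would keep an upgrade from competing with foreground work for the CPU.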

SecretDreams 2 days ago | parent | prev [-]

> Depending on what OP was doing with it otherwise, it could've been the highest temperature the device had ever seen. It's not so crazy.

Kind of big doubt. This was probably not slamming the hardware.

refulgentis 2 days ago | parent [-]

That was absolutely slamming the hardware. (Source: I worked on Android, and GP's comments on this are 100% correct. I'd need a bit more, well, anything, to come around to the idea that the opposite is even plausible. The best steelman is naïveté, like "aren't updates just a few mvs and a reboot?")

tobyjsullivan 2 days ago | parent | prev | next [-]

Over my 35 years of computer use, most hardware failures (which were very, very rare) have happened during a reboot or power-on. And most of my reboots happen when installing updates. So in my limited experience this kind of coincidence is actually quite likely.

Of course, it's possible that the Windows update was a factor when combined with other conditions.

fc417fc802 2 days ago | parent [-]

There's also the case where the hardware has failed but the system is already up so it just keeps running. It's when you finally go to reboot that everything falls apart in a visible manner.

da_chicken 2 days ago | parent | next [-]

This is one of the reasons I am not a fan of uptime worship. It's not a stable system until it's able to cold boot.

Say you have a system that has been online continuously for 5 years until a power outage knocks it out. When power is restored, the system doesn't boot to a working state. How far back do you have to go in your backups to find a known-good system? And this isn't just about hardware failure; it's an issue of configuration changes, too.

phire 2 days ago | parent | prev [-]

I also notice that people with lots of experience with computers will automatically reboot when they encounter minor issues (have you tried turning it off and on again?).

When it then completely falls apart on reboot, they spend several hours trying to fix it and completely forget the "early warning signs" that motivated them to reboot in the first place.

I think the same applies to updates. I know the time I'm most likely to think about installing updates is when my computer is playing up.

ssl-3 2 days ago | parent [-]

I try to do the opposite, and reboot only as a last resort.

If I reboot it and it starts working again, then I haven't fixed it at all.

Whatever the initial problem was is likely to still be present after a reboot -- and it will tend to pop up again later even if things temporarily seem to be working OK.

fc417fc802 2 days ago | parent | next [-]

How do you avoid sinking time into chasing illusory bugs?

close04 2 days ago | parent | prev [-]

> Whatever the initial problem was is likely to still be present after a reboot

You only know this after the reboot. Reboot to fix the issue, and if it comes back then you know you have to dig deeper. Why sink hours of effort into fixing a random bit flip? I'll take the opposite position and say that, especially on consumer devices, most issues are caused by some random event resulting in a soft error. Those are very common, and when they happen you don't "troubleshoot" them.

ssl-3 a day ago | parent [-]

With any system: when I can find and correct the problem out of the gate, it stays corrected and the issue does not recur.

GranPC 2 days ago | parent | prev | next [-]

For all we know, this thing was on its last legs (these machines do run very hot!) and the update process might have been the final nail in the coffin. That doesn't mean Microsoft set out to kill OP's machine... The same thing could have happened if OP had run make -j8 -- we wouldn't blame GNU make.

wnevets 2 days ago | parent | prev | next [-]

This reminds me of the 3090 hardware problems exposed by Amazon's New World [1]. Everyone really wanted to blame the software.

https://www.pcgamer.com/amazon-new-world-killing-rtx-3090-gp...

Graziano_M 2 days ago | parent | prev | next [-]

I had a friend's dad's computer's HDD fail while I was installing Linux on it to show it to him. That was terrifying. I still remember the error, and I just left, with it (and Windows) unable to boot. Later my friend told me that the drive was toast.

Come to think of it, maybe it was me. I might have trashed the MBR? I remember the error, though, "Non system disk or disk error".

toast0 2 days ago | parent | next [-]

IIRC, that error text comes from the mbr. You may have trashed the partition table?

Graziano_M a day ago | parent [-]

Yeah, I think so. It's been ~25 years, and only while typing out that comment did I remember the error message and realize that's probably what I had done.

If I recall correctly, he ended up scrapping the drive.

justinclift 2 days ago | parent | prev [-]

Yeah, sounds like the drive was still physically detected but that the expected boot loader wasn't present any more.

wvenable 2 days ago | parent | prev | next [-]

If it had happened at any other time, there wouldn't be a blog post about it and we wouldn't be reading about it.

olyjohn 2 days ago | parent | prev | next [-]

I've fixed thousands of PCs and Macs over my career. Coincidences like this happen all the time. I mean, have you seen the frequency of updates these days? There are always updates of some kind happening, so the chances of your system breaking during an update are not actually that slim.

pdpi 2 days ago | parent | prev | next [-]

I think it's fair to say they're related, yes. But causality could well be the other way around: the Windows upgrade failed because of flaky hardware.

santoshalper 2 days ago | parent | prev | next [-]

Two bugs occurring at the same time is definitely not one in a billion, and with billions of computers in the world, weird shit is going to happen.

Aurornis 2 days ago | parent | prev | next [-]

> That sounds like a one-in-a-billion kind of coincidence

Hardware is more likely to fail under load than at idle.

Blaming the last thing that was happening before hardware failed isn't a good conclusion, especially when the failure mode manifests as random startup failures instead of a predictable stop at some software stage.

taneq 2 days ago | parent | prev | next [-]

A software update can absolutely trigger or unmask a hardware bug. It's not an either/or thing; it's usually (if a hardware issue is actually present) both in tandem.

nightfly 2 days ago | parent | prev | next [-]

Windows Update just doing a normal write could cause the chunk of flash memory holding part of the boot loader to be remapped to a different, failed or failing section.

ezfe 2 days ago | parent | prev | next [-]

This happens all the time, and people always doubt it, but the pattern is consistent: large updates kill hardware that's already in the process of failing.

justsomehnguy 2 days ago | parent | prev | next [-]

"Hardware failure" => "WinUpdate failure" => "Corrupted system" conforms the Occam's razor.

croes 2 days ago | parent | prev [-]

Like winning the lottery?

Happens quite often