| ▲ | netcoyote 2 days ago |
| I've told this story before on HN, but my biz partner at ArenaNet, Mike O'Brien (creator of battle.net) wrote a system in Guild Wars circa 2004 that detected bitflips as part of our bug triage process, because we'd regularly get bug reports from game clients that made no sense. Every frame (i.e. at ~60 FPS) Guild Wars would allocate random memory, run math-heavy computations, and compare the results with a table of known values. Around 1 out of 1000 computers would fail this test! We'd save the test result to the registry and include the result in automated bug reports. The common causes we discovered for the problem were: - overclocked CPU - bad memory wait-state configuration - underpowered power supply - overheating due to under-specced cooling fans or dusty intakes These problems occurred because Guild Wars was rendering outdoor terrain, and so pushed a lot of polygons compared to many other 3D games of that era (which can clip extensively using binary-space partitioning, portals, etc., techniques that don't work so well for outdoor stuff). So the game caused computers to run hot. Several years later I learned that Dell computers had larger-than-reasonable analog component problems because Dell sourced the absolute cheapest stuff for their computers; I expect that was also a cause. And then a few more years on I learned about RowHammer attacks on memory, which was likely another cause -- the math computations we used were designed to hit a memory row quite frequently. Sometimes I'm amazed that computers even work at all! Incidentally, my contribution to all this was to write code to launch the browser upon test-failure, and load up a web page telling players to clean out their dusty computer fan-intakes. |
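A minimal sketch of that kind of per-frame self-test (Python for illustration; the original was presumably C++ inside the game loop, and the buffer size, seed, and function names here are my invention, not the actual Guild Wars implementation):

```python
import hashlib
import random

BUF_SIZE = 1 << 20  # 1 MiB freshly allocated per check
SEED = 0xC0FFEE     # fixed seed so the "random" data is reproducible

def compute_digest():
    # Allocate a new buffer each call so successive checks land on
    # different memory, then fill it with deterministic pseudorandom data.
    rng = random.Random(SEED)
    buf = bytearray(rng.getrandbits(8) for _ in range(BUF_SIZE))
    # A math-heavy pass over the buffer; any flipped bit changes the result.
    return hashlib.sha256(bytes(buf)).hexdigest()

# Computed once on known-good hardware and shipped as the "table of known values".
KNOWN_GOOD = compute_digest()

def per_frame_self_test():
    """Return False if the hardware failed to reproduce the expected result."""
    return compute_digest() == KNOWN_GOOD
```

On sound hardware the test can never fail, since both the data and the computation are deterministic; a mismatch therefore points at flaky RAM, CPU, or power rather than a software bug.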
|
| ▲ | PunchyHamster 15 hours ago | parent | next [-] |
| > Several years later I learned that Dell computers had larger-than-reasonable analog component problems because Dell sourced the absolute cheapest stuff for their computers; I expect that was also a cause. Case in point: I was getting memory errors on my gaming machine that persisted even after replacing the sticks. It caused a Windows bluescreen maybe once a month, so I kind of lived with it since I couldn't afford to replace the whole setup (I theorized something on the motherboard was wrong). Then my power supply finally died (it was cheap-ish, not the cheapest, but it was a few years old already). I replaced it and, lo and behold, the memory errors were gone |
| |
| ▲ | versteegen 15 hours ago | parent [-] | | I'm surprised "faulty PSU" is not on GP's list of common problems. Almost every unstable computer I've ever experienced has been due to either a dying PSU (not an under-specced one) or dying power conversion capacitors on the motherboard. | | |
| ▲ | chedabob 14 hours ago | parent | next [-] | | Yeah, some of the weirdest issues I've fixed have been PSU related. I had a PC come to me that would boot fine, but if you opened the CD drive it'd shut off instantly. | |
| ▲ | urxvtcd 9 hours ago | parent | prev | next [-] | | There's a Polish electronics forum that's infamous for being actively hostile to noobs. "Blacklisted power supply, closing thread." is a micro-meme at this point. | |
| ▲ | drob518 10 hours ago | parent | prev | next [-] | | I concur. A lot of “flakey” issues can be traced to poor quality power supplies. That’s a component that doesn’t get any attention in spec sheets other than a max power rating and I think a lot of manufacturers skimp there. As long as the system boots up and runs for a few minutes, they ship it. | | |
| ▲ | MrDrMcCoy 6 hours ago | parent [-] | | Heck, even dirty power from the wall can contribute. I've seen improvements in stability from putting things behind power conditioners. | | |
| ▲ | drob518 6 hours ago | parent [-] | | Definitely that too, particularly in 2nd-world countries. I remember having a difficult time with dirty power for some hardware products I was responsible for at one time, where the customers were in the Middle East and Africa in the 1990s. We ended up having to have the PS manufacturer do a redesign to help compensate for dirty power. It can be done, but it costs a bit more. |
|
| |
| ▲ | likelystory 12 hours ago | parent | prev | next [-] | | I could see that: - Firefox may be more prevalent on those using Linux, since FF is less “corporate” than Chrome or Edge. - People using Linux are probably putting Linux on old machines that had versions of Windows that are no longer supported. However, what I can’t say next is “PSUs would get old and stop putting out as much” because that doesn’t tend to happen. They just die. Those running Linux on some old tower may hook up too many devices to an underpowered PSU which could cause problems, but I doubt this is the norm. If it’s not PSUs, what is it? It’s not electromagnetic radiation doing the bitflipping because that’s too rare. Maybe bitflips could be caused by low-quality peripherals. People also don’t vacuum out laptops like they used to vacuum out towers and desktops, so maybe it’s dust. Or maybe it’s all a ruse and FF is buggy, but they don’t have time to figure it out. | | |
| ▲ | sandworm101 9 hours ago | parent [-] | | >> People using Linux are probably putting Linux on old machines Maybe for Linux noobs. But I would suggest that most Linux users are not noobs booting a disused Pentium from a live CD. They are running Linux on the same hardware as Windows users. I would further suggest that, as anyone installing a not-Windows OS is more tech-savvy than average, Linux users actually take better care of their machines. Linux users take pride in their machines, whereas the average Windows user barely knows that computers have fans. Ask any Linux user for their specifications and they will quote system reports and memory figures like Marisa Tomei discussing engine timings. Ask a random Windows user and they will probably start with the name of the store that sold it. | |
| ▲ | PaulDavisThe1st 7 hours ago | parent [-] | | Unix user for 35 years, Linux for 30+ years ... my case fan died during the summer of last year ... just took the side panel off and kept things running. So much for taking pride in my machine :) |
|
| |
| ▲ | BorisMelnik 10 hours ago | parent | prev [-] | | yeah dell consumer pc psus were so awful | | |
| ▲ | mock-possum 8 hours ago | parent [-] | | Which is kinda crazy to me, in light of how durable their business laptops have been in my experience. I’ve owned maybe 6 PC laptops in my career, and the only 2 that’ve survived that nearly 20-year span are both Dells. |
|
|
|
|
| ▲ | dvngnt_ a day ago | parent | prev | next [-] |
| GW1 was my childhood. The MMO with no monthly fees appealed to my Mom, and I made friends there for years. The 8-skill build system was genius, as were the cutscenes featuring your player character. If there's ever a 3rd game I would love to see something allowing for more expression through build creation, though I can see how that's hard to balance. |
| |
| ▲ | alexchantavy 17 hours ago | parent | next [-] | | The PvP was so deep too. You would go 4v4 or 8v8 and coordinate a “3, 2, 1 spike” on a target so that all your damage would arrive at the same time regardless of spell windup times and be too much for the other team’s healer to respond to. Could also fake spike to force the other team’s healer to waste their good heal on the wrong player while you downed the real target. Good times. | |
| ▲ | ndesaulniers 21 hours ago | parent | prev | next [-] | | I still remember summoning flesh golems as a necromancer! Too much of my life sunk into GW1. Beat all 4(?) expansions. Logged in years later after I finally put it down to find someone had guessed my weak password, stole everything, then deleted all my characters. C'est la vie. | |
| ▲ | jiggunjer a day ago | parent | prev [-] | | Didn't they launch a remake of GW1 recently? Maybe I can get my kids hooked on that instead of this Roblox crap. | | |
| ▲ | pndy a day ago | parent | next [-] | | Yes, they did relaunch it as Guild Wars Reforged with Steam Deck and controller support and other changes https://wiki.guildwars.com/wiki/Guild_Wars_Reforged | |
| ▲ | hobofan 15 hours ago | parent | prev | next [-] | | Yes they did, but the social bump that was there shortly after release has significantly calmed down already. It did rekindle my love for the game, but most outposts are empty, even in the international districts, so I think it's hard to get hooked on it for new joiners. | |
| ▲ | post-it a day ago | parent | prev [-] | | For what it's worth, Roblox is how I discovered code at age 10. | | |
| ▲ | Cthulhu_ 15 hours ago | parent | next [-] | | It was ZZT for me, no idea how old I was, probably 8-10 or so. But when you take a bird's eye view, it's interesting and great to see how over the years, games where you can build your own games remain popular and a common entryway into software development. But also how Epic went from ZZT via Unreal to Fortnite, with the latter now being another platform (or what Zucc wanted to call a metaverse) for creativity. Other notable mentions off the top of my head where people can build or invent their own games (in-game, via an external editor or through community support) or go crazy in besides Roblox are Second Life (...I think), LittleBigPlanet, Warcraft/Starcraft (which led to the genre of MOBAs), Geometry Dash, Mario Maker, TES, Source engine games, Minecraft, etc etc. | |
| ▲ | youarentrightjr a day ago | parent | prev [-] | | How do you mean? Is there programming inside the game (à la Minecraft or Factorio)? | | |
| ▲ | cortesoft 21 hours ago | parent | next [-] | | Roblox is basically a developer platform for making games | |
| ▲ | LoganDark 21 hours ago | parent | prev [-] | | Roblox has a development environment for creating games (Roblox Studio) and the engine uses a fork of Lua as a scripting language. I also was introduced to programming through Roblox. |
|
|
|
|
|
| ▲ | dpe82 a day ago | parent | prev | next [-] |
| As a mobile dev at YouTube I'd periodically scroll through crash reports associated with code I owned, and the long-tail/non-clustered stuff usually made absolutely no sense; I always assumed at least some of it was random bit flips, dodgy hardware, etc. |
| |
| ▲ | Cthulhu_ 15 hours ago | parent | next [-] | | I heard the same thing from a colleague who worked on a Dutch banking app: they were quite diligent in fixing logic bugs, but said that once you fix all of those, the rest is space rays. As an aside, Apple's and Google's phone-home crash reporting is a really good system, and it's one factor that makes mobile app development fun / interesting. | |
| ▲ | grishka 16 hours ago | parent | prev [-] | | For the Mastodon Android app, I also sometimes see crashes that make no sense. For example, how about native crashes, on a thread that is created and run by the system, that only contains system libraries in its stack trace, and that never ran any of my code because the app doesn't contain any native libraries to begin with? Unfortunately I've never looked at crashes this way when I worked at VKontakte because there were just too many crashes overall. That app had tens of millions of users so it crashed a lot in absolute numbers no matter what I did. | | |
| ▲ | gf000 15 hours ago | parent | next [-] | | Well, vendors' randomly modified android systems are chock full of bugs, so it could have easily been some fancy os-specific feature failing not just in your case, but probably plenty other apps. | |
| ▲ | dpe82 7 hours ago | parent | prev | next [-] | | Usually I'd just look at clusters of crashes (those that had similar stack traces) but sometimes when you're running a very small % experiment there's not enough signal so you end up looking at everything. And oh boy was there a lot of noise. In an app with >billion users you get all kinds of wild stuff. | |
| ▲ | saagarjha 11 hours ago | parent | prev [-] | | Bugs in the system libraries? |
|
|
|
| ▲ | jodrellblank 7 hours ago | parent | prev | next [-] |
| This is getting off-topic but I’m amazed by this ability to reach out to computers around the world as a sensor array and infer things we can’t easily find out in other ways. It’s in popular culture and HN comments most often as spyware and mass surveillance of people, and that’s a bit of a shame. GPS location and movement data is what gives Google maps its near-real-time view of traffic on all roads, and busy-ness of all shops. I think they collect location data from people riding public transport so they can tell you how long people wait on average at bus stops before getting on a bus. Does Google collect atmospheric pressure readings from phone altimeters and use it for weather models? Could they? Kindle collects details on books people read, how far they read, where they stop, which sections they highlight and quote, which words they look up in dictionaries. I wonder if anyone’s curated a list of things like this which do happen or have been tried, excluding the “gathers user data for advertising” category which would become the biggest one, drowning out everything else. I think current phones use accelerometer data to detect possible car crashes and call emergency services. Google could use that in aggregate to identify accident blackspots but I don’t know if they do. But that would be less useful because the police already know everywhere a big accident happens because people call the police. So that’s data easily found a different way. |
| |
| ▲ | seanw444 6 hours ago | parent | next [-] | | > It’s in popular culture and HN comments most often as spyware and mass surveillance of people, and that’s a bit of a shame. I don't know whether you mean it's a shame that people consider it spyware, or if you meant that it's a shame that it manifests as spyware typically. I agree with the latter, not the former. It usually is spyware. If companies went for simple opt-in popups with a brief description of the reasoning, I'd be all for that. I sometimes opt-in to these requests myself, despite being a fairly privacy-conscious person, because I understand the benefit they have to the people collecting the data for good purposes. But when surveillance is opt-out (or no choice given), it's just spyware. | | |
| ▲ | jodrellblank 4 hours ago | parent [-] | | I mean what you did is a shame. I asked to put the spyware aside for one sub-thread and focus on the astonishing worldwide sensor array, and you talked about the spyware and nothing else. |
| |
| ▲ | MBCook 7 hours ago | parent | prev [-] | | Doesn’t Google also use the phone accelerometer to try and spot earthquakes? |
|
|
| ▲ | Helmut10001 19 hours ago | parent | prev | next [-] |
| I don't understand why ECC memory is not the norm these days. It is only slightly more expensive, but solves all these problems. Some consumer mainboards even support it already. |
| |
| ▲ | Agingcoder 17 hours ago | parent | next [-] | | No it doesn’t :-) I’ve had plenty of servers with faulty ECC DIMMs that didn’t trigger any errors, and would only show faults under actual memory testing. I had a hard time convincing some of our admins the first time (‘no ECC faults, you can’t be right’) but I won the bet. Edit: very old paper by Google on these topics. My issues were 6-7 years ago probably. https://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf | |
| ▲ | thebruce87m 16 hours ago | parent | next [-] | | That doesn’t make sense. It’s not like the ECC info is stored in additional bits separate from the data; it’s built in with the data, so you can’t “ignore” it. Hmm, off to read the paper. | |
| ▲ | smalley 3 hours ago | parent | next [-] | | The ECC information is stored in separate DRAM devices on the DIMM. This is responsible for some of the increased cost of DIMMs with ECC at a given size. When marketed, the extra memory for ECC is typically not included in the stated size, so a 32GB DIMM with and without ECC will have differing numbers of total DRAM devices. There's a pretty good set of diagrams and descriptions of the faults in this paper: https://dl.acm.org/doi/10.1145/3725843.3756089. Also to the parent: there's an updated public paper on DDR4-era fault observations: https://ieeexplore.ieee.org/document/10071066 | |
| ▲ | thebruce87m an hour ago | parent [-] | | I think you responded to the wrong person, unless you think I was implying that the extra bits needed for ECC didn’t need extra space at all? I wasn’t suggesting that - just that they aren’t like a checksum that is stored elsewhere or something that can be ignored - the whole 72 bits are needed to decode the 64 bits of data and the 64 bits of data cannot be read independently. |
| |
| ▲ | Agingcoder 12 hours ago | parent | prev [-] | | I fully agree with you! Neither soft nor hard memory errors reported, nothing… just bit flips, and reproducible at that. We scanned all our machines following this (a few thousand servers) and found that RAM issues were actually quite common, as said in the paper. | |
| |
| ▲ | close04 13 hours ago | parent | prev | next [-] | | If we’re being pragmatic, it solves enough problems that you could still call it an undisputed win for stability. | |
| ▲ | RealityVoid 10 hours ago | parent | prev | next [-] | | I'm sorry, but I, just like your admins, don't believe this. It's theoretically possible to have "undetectable" errors, but it's very unlikely, and you'd see a much higher incidence of detected-but-unrecoverable errors, and a much higher incidence still of repaired errors. I just don't buy the argument of "invisible errors". EDIT: took a look at the paper you linked and it basically says the same thing I did. The probability of these cases becomes increasingly small, and while ECC would indeed not reduce it to _zero_, it would greatly reduce it. | |
| ▲ | Agingcoder 9 hours ago | parent [-] | | Well, my admins eventually believed me, so I’m fairly comfortable with what I said. We also had a few thousand physical servers with about a terabyte of RAM each. You are right: we did see repaired errors, but we also saw (indirectly, and after testing) unrepaired ones. | |
| ▲ | RealityVoid 7 hours ago | parent [-] | | Ok, I am sure there is _some_ amount of unrepairable errors. But the initial discussion was that ECC RAM makes it go away, and your point that it doesn't. And the vast, vast majority of the errors, according to my understanding and to the paper you pointed to, are repairable. About 1 out of ~400 errors is non-repairable. That's a huge improvement! If you had ECC RAM, the failures Firefox sees here would drop from 10% to 0.025%! That is highly significant! Even more: you would now be informed of 2-bit errors! You would _know_ what is wrong. You could have 3(!)-bit errors that you might not see, but they'd be several orders of magnitude rarer still. So yes, it would not 100% go away, but 99.9% of it would. That's... making it go away in my book. And last but not least, this paper mentions uncorrectable errors. It says nothing of undetectable ECC errors! You said _undetectable_ errors. I'm sure they happen, but I would be surprised if you saw any meaningful incidence of them, even at terabytes of data. It's probably on the order of 0.000625 of errors (but if you want, I can do more solid math). | |
| ▲ | Agingcoder 7 hours ago | parent [-] | | We’re in agreement. I think we diverge on ‘making it go away in my book’. When you’re the one having to debug all these bizarre things ( there were real money numbers involved so these things mattered ), over millions of jobs every day , rare events with low probability don’t disappear - they just happen and take time to diagnose and fix. So in my book ecc improves the situation, but I still had to deal with bad dimms, and ecc wasn’t enough. We used not to see these issues because we already had too many software bugs, but as we got increasingly reliable, hardware issues slowly became a problem, just like compiler bugs or other elements of the chain usually considered reliable. I fully agree that there are lots of other cases where this doesn’t matter and ecc is good enough. Thanks for taking the time to reply ! | | |
| ▲ | RealityVoid 6 hours ago | parent [-] | | Oh, I get this point. If you have a sufficiently large amount of data, and you monitor the errors, and your software gets better and better, even low-probability cases will happen and will stand out. But this is sort of the march of nines. My knee-jerk reaction to blaming ECC is "naaah". Mostly because it's such a convenient scapegoat. It happens, I'm sure, but it would not be the first explanation I reach for. I once heard someone blame a bug that happened multiple times on "cosmic rays". You can imagine how irked I was at the dang cosmic rays hitting the same data with such consistency! Anyway, I'm sorry if my tone sounded abrasive; I, too, have appreciated the discussion. | |
| ▲ | Agingcoder 2 hours ago | parent [-] | | :-) never forget Occam’s razor ! No you were not abrasive at all - I’ve learned to assume good faith in forum conversations. In retrospect I should have started by giving the context ( march of 9s is a good description) actually, which would have made everything a lot clearer for everyone. |
|
|
|
|
| |
| ▲ | kasabali 16 hours ago | parent | prev [-] | | were they 3-bit flips? | | |
| ▲ | thfuran 11 hours ago | parent [-] | | It seems extremely unlikely that you’d end up with a lot of those but no smaller detectable errors. |
|
| |
| ▲ | hurfdurf 15 hours ago | parent | prev | next [-] | | Why? Intel making and keeping it workstation/Xeon-exclusive for a premium for too long. And AMD is still playing along not forcing the issue with their weird "yeah, Zen supports it, but your mainboard may or may not, no idea, don't care, do your own research" stance. These days it's a chicken and egg problem re: price and availability and demand. See also https://news.ycombinator.com/item?id=29838403 | | |
| ▲ | m000 15 hours ago | parent | next [-] | | Maybe it's high time for some regulation? E.g. EU enforced mandatory USB-C charging from 2025, and pushes for ending production of combustion engine cars by 2035. Why not just make ECC RAM mandatory in new computers starting e.g. from 2030? AMD is already one step away from being compliant. So, it's not an outlandish requirement. And regulating will also force Intel to cut their BS, or risk losing the market. | | |
| ▲ | funcDropShadow 12 hours ago | parent | next [-] | | OMG no. Politicians have no business making technological decisions. They make it harder to innovate, i.e. to invent the next generation of ECC under a different name. | |
| ▲ | m000 11 hours ago | parent | next [-] | | I would argue that in the present conditions, regulation can actually foster and guide real innovation. With no regulations in place, companies would rather innovate in profit extraction rather improving technology. And if they have enough market capture, they may actually prefer to not innovate, if that would hurt profits. | |
| ▲ | cestith 9 hours ago | parent | prev | next [-] | | ECC is like Ethernet. The name doesn’t have to change for the technology to update. | |
| ▲ | saagarjha 11 hours ago | parent | prev [-] | | Politicians don’t have to be dumb. |
| |
| ▲ | free652 12 hours ago | parent | prev [-] | | Cost. You'd be making computers 10-20% more expensive. Computers also aren't used much these days, and phones and tablets don't have ECC. | |
| ▲ | m000 11 hours ago | parent [-] | | ECC needs only 10-15% more transistors, so you're only making one component of the computer ~15% more expensive. This should have been a no-brainer, at least before the recent DRAM price hikes. Also, while any single computer may not be used enough for cosmic rays to be a risk factor, it is still susceptible to rowhammer-style attacks, which ECC memory makes much harder. Finally, if you account for the current performance loss due to rowhammer counter-measures, the extra cost of ECC memory is partially offset. |
|
| |
| ▲ | Helmut10001 15 hours ago | parent | prev [-] | | Thanks for the details. I agree, and had the same experience trying to figure out whether an AMD motherboard supports ECC or not. It is almost impossible to know before trying it. At least we have ZFS now for parity checks on cold storage. |
| |
| ▲ | Dylan16807 17 hours ago | parent | prev | next [-] | | Well for DDR5 that's 25% more chips which isn't great even if you don't get ripped off by market segmentation. It's possible DDR6 will help. If it gets the ability to do ECC over an entire memory access like LPDDR, that could be implemented with as little as 3% extra chip space. | | | |
| ▲ | epx 12 hours ago | parent | prev | next [-] | | And checksummed filesystems. | |
| ▲ | PunchyHamster 15 hours ago | parent | prev | next [-] | | In case of Intel it's mostly coz they want to sell it as enterprise/workstation feature and make people pay extra. AMD has been better on it but BIOS/mobo vendors not so much | |
| ▲ | sznio 14 hours ago | parent | prev | next [-] | | What I'm wondering, even without ECC, afaik standard ram still has a parity bit, so a single flip should be detected. With ECC it would be fixed, without ECC it would crash the system. For it to get through and cause an app to malfunction you need two bit flips at least. | | |
| ▲ | ciupicri 13 hours ago | parent | next [-] | | I think standard RAM had it a long, long time ago, but not anymore. DDR5 finally re-added it, sort of. | |
| ▲ | roryirvine 9 hours ago | parent [-] | | Yes, 30 pin SIMMs (the most common memory format from the mid-80s to the mid-90s) came in either '8 chip' or '9 chip' variants - the 9th chip being for the parity bit. Most motherboards supported both, and the choice of which to use came down to the cost differential at the time of building a particular machine. The wild swings in DRAM prices meant that this could go from being negligible to significant within the course of a year or two! When 72 pin SIMMs were introduced, they could in theory also come in a parity version but in reality that was fairly rare (full ECC was much better, and only a little more expensive). I don't think I ever saw an EDO 72 pin SIMM with parity, and it simply wasn't an option for DIMMs and later. |
| |
| ▲ | meindnoch 13 hours ago | parent | prev [-] | | Wrong. Regular RAM has no parity bit. |
| |
| ▲ | colechristensen 19 hours ago | parent | prev | next [-] | | Bit flips do not only happen inside RAM Also, in a game, there is a tremendously large chance that any particular bit flip will have exactly 0 effect on anything. Sure you can detect them, but one pixel being wrong for 1/60th of a second isn't exactly ... concerning. The chance for a bit flip to affect a critical path that is noticeable by the player is very low, and quite a bit lower if you design your game to react gracefully. There's a whole practice of writing code for radiation hardened environments that largely consists of strategies for recovering from an impossible to reach state. | | |
| ▲ | PunchyHamster 15 hours ago | parent | next [-] | | > The chance for a bit flip to affect a critical path that is noticeable by the player is very low, and quite a bit lower if you design your game to react gracefully. Nobody does > There's a whole practice of writing code for radiation hardened environments that largely consists of strategies for recovering from an impossible to reach state. And again, nobody except stuff that goes to space and few critical machines does. The closest normal user will get to code written like that are probably car ECUs, there are even automotive targeted MCUs that not only run ecc but also 2 cores in parallel and crash if they disagree | | |
| ▲ | colechristensen 6 hours ago | parent [-] | | Sure they do, you just have to think about it a different way. It boils down to exception handling: you don't expect all of your bugs or security vulnerabilities to be known, and you write your code to be able to react to unplanned states without crashing. Bugs or security vulnerabilities can look a lot like a cosmic ray: a buffer overflow putting garbage in unexpected memory locations vs. a cosmic ray putting garbage in unexpected memory locations; a lot of the mitigations are quite the same. |
| |
| ▲ | colinb 18 hours ago | parent | prev | next [-] | | > code for radiation hardened environments I’m aware of code that detects bit flips via unreasonable value detection (“this counter cannot be this high so quickly”). What else is there? | | |
| ▲ | gmueckl 17 hours ago | parent | next [-] | | For safety critical systems, one strategy is to store at least two copies of important data and compare them regularly. If they don't match, you either try to recover somehow or go into a safe state, depending on the context. | | |
| ▲ | d1sxeyes 17 hours ago | parent | next [-] | | At least three copies, so you can recover based on consensus. | | |
| ▲ | Dylan16807 17 hours ago | parent | next [-] | | If your pieces of important data are very tiny, that's probably your best option. If they're hundreds of bytes or more, then two copies plus two hashes will do a better job. | | |
| ▲ | d1sxeyes 12 hours ago | parent | next [-] | | Ah, true! You just restore the one that matches its hash. Elegant. | |
| ▲ | rixed 8 hours ago | parent | prev [-] | | A single hash should be enough. | | |
| ▲ | Dylan16807 5 hours ago | parent [-] | | Yes, but what's easier depends on layout. "Consensus" makes me think of multiple entire nodes, and in that situation you can have a nice symmetry by making each node store one copy and one small hash. If you're doing something that's more centralized then one hash might be simpler, but if you're centralized then you should probably use your own error correction codes instead of having multiple copies. |
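The two-copies-plus-hashes idea from this sub-thread can be sketched minimally (Python for illustration; the helper names are mine, and any strong hash would do in place of SHA-256):

```python
import hashlib

def protect(data: bytes):
    """Store two independent copies of the data, each paired with its own hash."""
    return [(bytes(data), hashlib.sha256(data).digest()) for _ in range(2)]

def recover(copies):
    """Return any copy that still matches its hash; fail if all are corrupt."""
    for data, digest in copies:
        if hashlib.sha256(data).digest() == digest:
            return data
    raise RuntimeError("all copies corrupted, cannot recover")
```

If a bit flips in one copy (or in its hash), the other copy still verifies against its own hash, so recovery works without needing a third full replica for consensus.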
|
| |
| ▲ | qznc 8 hours ago | parent | prev | next [-] | | In many cases the system is perfectly safe when it shuts off. Two is enough for that. | |
| ▲ | pizza 13 hours ago | parent | prev [-] | | “never go to sea with two chronometers, take one or three” | | |
| ▲ | DennisP 8 hours ago | parent [-] | | Seems like chronometers would be a case where two are better than one, because the mistakes are analog. If they don't exactly agree, just take the average. You'll have more error than if you were lucky enough to take the better chronometer, but less than if you had taken only the worse one. Minimizing the worst case is probably the best way to stay off the rocks. |
|
| |
| ▲ | Helmut10001 15 hours ago | parent | prev [-] | | I use ZFS even on consumer devices, these days. Parity checks all the way! |
| |
| ▲ | vntok 17 hours ago | parent | prev | next [-] | | You can have voting systems in place, where at least 2 out of 3 different code paths have to produce the same output for it to be accepted. This can be done with multiple systems (by multiple teams/vendors) or more simply with multiple tries of the same path, provided you fully reload the input in between. | |
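A minimal sketch of that kind of 2-out-of-3 voting (Python for illustration; the function names and the reload-the-input callback are my invention, and real systems would typically run the paths on independent hardware or implementations):

```python
from collections import Counter

def vote(run, reload_input, tries=3):
    """Run the computation independently `tries` times, reloading the
    input each time, and accept an output only if a majority agree."""
    results = [run(reload_input()) for _ in range(tries)]
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= tries:
        raise RuntimeError("no majority; possible transient fault")
    return value
```

Reloading the input between tries matters: if all runs read the same corrupted in-memory copy, they will agree unanimously on the wrong answer.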
| ▲ | qznc 17 hours ago | parent | prev [-] | | The simplest one is a watchdog: If something stops with regular notifications, then restart stuff. | | |
| ▲ | gmueckl 17 hours ago | parent [-] | | A watchdog guards against unresponsive software. It doesn't protect against bad data directly. Not all bad data makes a system freeze. |
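The watchdog idea boils down to a deadline on heartbeats; a minimal sketch (Python for illustration, names are mine; hardware watchdogs do the equivalent with a countdown timer that resets the chip on expiry):

```python
import time

class Watchdog:
    """Track heartbeats from a watched task; report when they stop arriving."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Called regularly by the watched task while it is healthy.
        self.last_beat = time.monotonic()

    def expired(self) -> bool:
        # True once the task has gone quiet for longer than the timeout.
        return time.monotonic() - self.last_beat > self.timeout_s
```

A supervisor polls `expired()` and restarts the task when it returns True. As the comment above notes, this catches hangs and crashes but not silent data corruption in a task that keeps running.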
|
| |
| ▲ | 19 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Helmut10001 19 hours ago | parent | prev [-] | | Interesting, I was not aware! Do you have statistics for what percentage of bit flips happen in RAM? My feeling would be that it's the majority, but I could be wrong. | |
| ▲ | Tomte 17 hours ago | parent | next [-] | | IEC 61508 estimates a soft error rate of about 700 to 1200 FIT (Failure in Time, i.e. 1E-9 failures/hour). That was in the 2000s though, and for embedded memory above 65nm. I would expect smaller sizes to be more error-prone. | |
| ▲ | colechristensen 19 hours ago | parent | prev [-] | | It would be quite hard to gather that data and would be highly dependent on hardware and source of bit flip. But there's volatile and nonvolatile memory all over in a computer and anywhere data is in flight be it inside the CPU or in any wires, traces, or other chips along the data path can be subject to interference, cosmic rays, heat or voltage related errors, etc. | | |
| ▲ | ZiiS 18 hours ago | parent [-] | | It should be fairly easy to see statistically if ECC helps; people do run Firefox on it. The number of bits in registers, buses, and cache layers is very small compared to the number in RAM. Obviously they might be hotter or more likely to flip. | | |
| ▲ | bpye 17 hours ago | parent [-] | | I believe caches and maybe registers often have ECC too though I'm sure there are still gaps. |
|
|
|
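The ECC bpye mentions for caches and DRAM is typically a SECDED extension of a Hamming code. A minimal single-error-correcting Hamming(7,4) sketch, enough to show how a flipped bit is located and repaired:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7, parity
    bits at 1, 2, 4). Returns a list indexed 1..7; index 0 is unused."""
    bits = [0] * 8
    for i, pos in enumerate((3, 5, 6, 7)):   # data bit positions
        bits[pos] = (nibble >> i) & 1
    syndrome = 0
    for pos in range(1, 8):
        if bits[pos]:
            syndrome ^= pos
    # set parity bits so the XOR of all set positions becomes zero
    bits[1] = syndrome & 1
    bits[2] = (syndrome >> 1) & 1
    bits[4] = (syndrome >> 2) & 1
    return bits

def hamming74_decode(bits):
    """Correct any single flipped bit, then extract the 4 data bits."""
    bits = bits[:]
    syndrome = 0
    for pos in range(1, 8):
        if bits[pos]:
            syndrome ^= pos
    if syndrome:                 # a nonzero syndrome names the bad position
        bits[syndrome] ^= 1
    return sum(bits[pos] << i for i, pos in enumerate((3, 5, 6, 7)))
```

Real SECDED adds one extra overall-parity bit so double-bit errors are at least detected rather than miscorrected.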
| |
| ▲ | bell-cot 14 hours ago | parent | prev [-] | | Talk to someone in consumer sales about customer priorities. A bit-cheaper computer? Or one which is, in theory, more resilient against some rare, random sort of problem that customers don't see as affecting them. |
|
|
| ▲ | mobilio a day ago | parent | prev | next [-] |
| Yup! I read this a decade ago... https://www.codeofhonor.com/blog/whose-bug-is-this-anyway |
| |
| ▲ | john_strinlai a day ago | parent [-] | | For people that don't know, www.codeofhonor.com is netcoyote's (the GP commenter's) blog, and there is some good reading to be had there |
|
|
| ▲ | Modified3019 a day ago | parent | prev | next [-] |
| Thanks to ASRock motherboards supporting ECC memory on AMD's Threadripper 1950X, that's what I learned to overclock on. I eventually discovered that with some timings I could pass all the usual tests for days, but would still end up seeing a few corrected errors a month, meaning I had to back off if I wanted true stability. Without ECC, I might never have known, attributing rare crashes to software. From then on I considered people who think you shouldn't overclock ECC memory to be a bit confused. It's the only memory you should be overclocking, because it's the only memory where you can prove you don't have errors. I found that DDR3 and DDR4 memory (on AMD systems at least) had quite a bit of extra "performance" available over the standard JEDEC timings. (Performance being a relative thing; in practice the performance gained is more a curiosity than a significant real-life benefit for most things. It should also be noted that higher stated timings can result in worse performance when things are on the edge of stability.) What I've noticed with DDR5 is that it's much harder to achieve true stability. Often even CPU mounting pressure being too high or low can result in intermittent issues and errors. I would never overclock non-ECC DDR5; I could never trust it, and the headroom available is way less than in previous generations. It's also much more sensitive to heat: it can start having trouble between 50-60 degrees C and basically needs dedicated airflow when overclocking. Note, I am not talking about the on-chip ECC; that's important but different in practice from full-fat classic ECC with an extra chip. I hate to think of how much effort will be spent debugging software in vain because of memory errors. |
| |
| ▲ | monster_truck a day ago | parent | next [-] | | DDR4 and 5 both have similar heat sensitivity curves which call for increased refresh timings past 45C. Some of the (legitimately) extreme overclockers have been testing what amounts to massive hunks of metal in place of the original mounting plates because of the boards bending from mounting pressure, with good enough results. On top of all of this, it really does not help that we are also at the mercy of IMC and motherboard quality too. To hit the world records they do and also build 'bulletproof', highest performance, cost is no object rigs, they are ordering 20, 50 motherboards, processors, GPUs, etc and sitting there trying them all, then returning the shit ones. We shouldn't have to do this. I had a lot of fun doing all of this myself and hold a couple very specific #1/top 10/100 results, but it's IMHO no longer worth the time or effort and I have resigned to simply buying as much ram as the platform will hold and leaving it at JEDEC. | |
| ▲ | golem14 a day ago | parent | prev | next [-] | | Hmm, I wonder if, now that we are in a RAM availability crisis, we'll see more borderline-to-bad RAM creep into the supply chain. If we had a time-series graph of this data, it might be revealing. | | |
| ▲ | monster_truck a day ago | parent [-] | | If you look around you'll see people already putting the new, Chinese-made DDR4 through its paces; it's holding up far better than anyone expected. Every single time I've had someone pay me to figure out why their build isn't stable, it's always some combination of cheap power supply with no noise filtering, cheap motherboard, and poor cooling. Can't cut corners like that if you want to go fast. That is to say, I've never encountered "almost ok" memory. They're quite good at validation. | | |
| ▲ | iamflimflam1 17 hours ago | parent | next [-] | | The danger is we’ll start to see more QA rejects coming into the market. The temptation to mix in factory rejects into your inventory is going to get very high for a lot of resellers. | |
| ▲ | kombine 17 hours ago | parent | prev [-] | | Where does one find these? I'm looking for DDR4 ECC for my homelab. |
|
| |
| ▲ | bpye 17 hours ago | parent | prev | next [-] | | Similar experience. I played with overclocking the DDR5 ECC memory I have on my system, it would appear to be stable and for quite a while it would be. But after a few days I'd notice a handful of correctable errors. I now just run at the standard 5600MHz timing, I really don't find the potential stability trade off worth it. We already have enough bugs. | |
| ▲ | kmeisthax a day ago | parent | prev [-] | | > From then on I considered people who think you shouldn’t overclock ECC memory to be a bit confused. It’s the only memory you should be overclocking, because it’s the only memory where you can prove you don’t have errors. This attitude is entirely corporate-serving cope from Intel to serve market segmentation. They wanted to trifurcate the market between consumers, business, and enthusiast segments. Critically, lots of business tasks demand ECC for reliability, and business has huge pockets, so that became a business feature. And while Intel was willing to sell product to overclockers[0], they absolutely needed to keep that feature quarantined from consumer and business product lines lest it destroy all their other segmentation. I suspect they figured a "pro overclocker" SKU with ECC and unlocked multipliers would be about as marketable as Windows Vista Ultimate, i.e. not at all, so like all good marketing drones they played the "Nobody Wants What We Aren't Selling" card and decided to make people think that ECC and overclocking were diametrically opposed. [0] In practice, if they didn't, they'd all just flock to AMD. | | |
| ▲ | gruez a day ago | parent | next [-] | | >[0] In practice, if they didn't, they'd all just flock to AMD. only when AMD had better price/performance, not because of ECC. At best you have a handful of homelabbers that went with AMD for their NAS, but approximately nobody who cares about performance switched to AMD for ECC ram, because ECC ram also tend to be clocked lower. Back in Zen 2/3 days the choice was basically DDR4-3600 without ECC, or DDR4-2400 with ECC. | |
| ▲ | pushedx a day ago | parent | prev [-] | | At the beginning of your comment I was wondering if the "attitude" that was corporate serving was the anti-ECC stance or the pro-ECC stance (based on the full chunk that you quoted). I'm glad that by the end of the comment you were clearly pro ECC. Any workstation where you are getting serious work done should use ECC |
|
|
|
| ▲ | aiiane 16 hours ago | parent | prev | next [-] |
| I remember one of the first impressions I had in GW1 during test events was the sense of scale in the world that still managed to avoid excessive harsh geometry angles for the most part. Not surprised to hear it was pushing more polygons than average. P.S. GW1 remains one of my favorite games and the source of many good memories from both PvP and PvE. From fun stories of holding the Hall of Heroes to some unforgettable GvG matches, y'all made a great game. |
|
| ▲ | jug a day ago | parent | prev | next [-] |
| As a community alpha tester of GW1, this was a fun read! Such an educational journey and what a well organized and fruitful one too. We could see the game taking shape before our eyes! As a European, I 100% relied on being young and single with those American time zones. :D Tests could end in my group at like 3 am, lol. |
| |
| ▲ | netcoyote 19 hours ago | parent [-] | | Oh yeah, those were some good times. It was great getting early feedback from you & the other alpha testers, which really changed the course of our efforts. I remember in the earlier builds we only had a “heal area” spell, which would also heal monsters, and no “resurrect” spell, so it was always a challenge to take down a boss and not accidentally heal it when trying to prevent a player from dying. |
|
|
| ▲ | samiv 13 hours ago | parent | prev | next [-] |
| Plot twist: the memory bit-flip checking code was actually buggy and contained UB. No, seriously: did you actually verify the code for correctness before relying on its results? |
|
| ▲ | pndy 2 days ago | parent | prev | next [-] |
| I didn't expect to read bits of GW story here from one of the founders - thanks! |
|
| ▲ | Dylan16807 17 hours ago | parent | prev | next [-] |
| > And then a few more years on I learned about RowHammer attacks on memory, which was likely another cause -- the math computations we used were designed to hit a memory row quite frequently. For that one I'd guess no, because under normal circumstances hot locations like that will stay in cache. |
|
| ▲ | arprocter a day ago | parent | prev | next [-] |
| >Sometimes I'm amazed that computers even work at all! Funny you say this, because for a good while I was running OC'd RAM and didn't see any instability, but Event Viewer was a bloodbath; reducing the speed a few notches stopped the entries (IIRC 3800MHz down to 3600) |
|
| ▲ | monster_truck a day ago | parent | prev | next [-] |
| Every interesting bug report I've read about Guild Wars is Dwarf Fortress tier. A very hardcore, longtime player who was recounting some of the better ones to me shared a most excellent one about spirits or ghosts, some sort of player-summoned thing that stuck around endlessly and caused OOM errors? |
|
| ▲ | nxobject 13 hours ago | parent | prev | next [-] |
| > Several years later I learned that Dell computers had larger-than-reasonable analog component problems because Dell sourced the absolute cheapest stuff for their computers; I expect that was also a cause Oh god yes… Dell OptiPlexes and bad caps went together in those days. I’m half convinced Valve put the gray towers in Counter-Strike so IT employees wasting time could shoot them up for therapy. |
|
| ▲ | Analemma_ a day ago | parent | prev | next [-] |
| There's a famous Raymond Chen post about how a non-trivial percentage of the blue screen of death reports they were getting appeared to be caused by overclocking, sometimes from users who didn't realize they had been ripped off by the person who sold them the computer: https://devblogs.microsoft.com/oldnewthing/20050412-47/?p=35.... Must've been really frustrating. |
| |
| ▲ | jnellis 21 hours ago | parent | next [-] | | This was a design choice by AMD at the time for their Athlon Slot A CPUs: the same Slot A board was used, and you could set the CPU speed by bridging connections. Since the Slot A CPU came in a package, you couldn't see the actual CPU etching. So shady CPU sellers would pull the cover off high-speed CPUs and put it on slow CPUs after overclocking them to unstable levels. | |
| ▲ | projektfu a day ago | parent | prev [-] | | E.g., running a Pentium 75, at 75MHz. |
|
|
| ▲ | Agentlien 19 hours ago | parent | prev | next [-] |
| That's a really cool anecdote. The overclock makes sense. When we released Need For Speed (2015) I spent some time in our "war room", monitoring incoming crash reports and doing emergency patches for the worst issues. The vast majority of crashes came from two buckets: 1. PCs running below our minimum specs 2. Bugs in MSI Afterburner. |
| |
| ▲ | kasabali 16 hours ago | parent [-] | | > Bugs in MSI Afterburner. Do you mean the OSD? | | |
| ▲ | Agentlien 13 hours ago | parent [-] | | It seemed to be the monitoring side of it which caused a lot of crashes. It was apparently a very common issue in many games around that time. |
|
|
|
| ▲ | fennecbutt 7 hours ago | parent | prev | next [-] |
| That's awesome. But also: guild waaars. I played GW2 from beta for years, but it just got boring; endless expansions with a weird story. We need GW3 already, but my fear is that MMOs as a genre are dying. |
| |
| ▲ | uncSoft 7 hours ago | parent [-] | | They just need to call it GW Classic apparently and it will sell |
|
|
| ▲ | PaulHoule 11 hours ago | parent | prev | next [-] |
| Back in the 90's I had an overclocked AMD486 machine which seemed OK most of the time but had segfaults compiling the Linux kernel. I sent in a bug report and Alan Cox closed it saying it was the fault of my machine being overclocked. I dialed the machine back to the rated speed but it failed completely within 6 months. |
|
| ▲ | sidewndr46 11 hours ago | parent | prev | next [-] |
| Well wow I wasn't expecting to see yet another story from Patrick Wyatt here in the comments! Much appreciated, I've enjoyed reading everything you've written over the years. |
|
| ▲ | danielEM 12 hours ago | parent | prev | next [-] |
| > problems because Dell sourced the absolute cheapest stuff for their computers; Price itself doesn't cause problems; it is either bad design, or false or incomplete data on datasheets, or all of the above. Please STOP spreading this narrative; the right thing is for ads, datasheets, marketing materials, etc. to tell you the truth that is necessary for you to make a proper decision as a client/consumer. |
|
| ▲ | taneq 20 hours ago | parent | prev | next [-] |
| Wow, that’s really interesting! I always suspected bit flips happened undetected way more than we thought, so it’s great to get some real life war stories about it. Also thanks for Guild Wars, many happy hours spent in GW2. :) |
|
| ▲ | just_testing a day ago | parent | prev | next [-] |
| I loved reading your comment and got curious: how did he detect the bitflips? |
| |
| ▲ | mayama 21 hours ago | parent [-] | | It looks like it computed a math-heavy process with a known answer, like the 301st prime, and compared the result. General memory-testing programs like memtest86 or memtester write known bit patterns into memory and verify them. |
|
|
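A rough sketch of the known-answer self-test mayama describes. The Guild Wars implementation details aren't public; here the reference value is computed once at startup rather than baked in as a precomputed table:

```python
def heavy_compute(size=1 << 14, seed=0x2545F4914F6CDD1D):
    """Fill a buffer with deterministic pseudo-random values (xorshift64)
    and run an integer reduction over it; any bitflip along the way
    changes the final accumulator."""
    mask = 0xFFFFFFFFFFFFFFFF
    x = seed
    buf = []
    for _ in range(size):
        x ^= (x << 13) & mask
        x ^= x >> 7
        x ^= (x << 17) & mask
        buf.append(x)
    acc = 0
    for v in buf:
        acc = (acc * 6364136223846793005 + v) & mask
    return acc

# reference computed once, on state believed to be healthy; a shipped
# game would embed this constant (or a table of them) at build time
KNOWN_GOOD = heavy_compute()

def ram_selftest():
    return heavy_compute() == KNOWN_GOOD
```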
| ▲ | Salgat a day ago | parent | prev | next [-] |
| Mike is such a legend. |
|
| ▲ | yownie 4 hours ago | parent | prev | next [-] |
| This is exactly the type of story I come to HN to read, thanks! |
|
| ▲ | benatkin 8 hours ago | parent | prev | next [-] |
| > Several years later I learned that Dell computers had larger-than-reasonable analog component problems because Dell sourced the absolute cheapest stuff for their computers; I expect that was also a cause. Yikes. Dude, you're getting a Packard Bell. |
|
| ▲ | SunnyNeon 14 hours ago | parent | prev | next [-] |
| How did you determine which of the causes it was? |
|
| ▲ | andrepd 11 hours ago | parent | prev | next [-] |
| Amazing story! Reminds me of old gamasutra posts like these https://web.archive.org/web/20170522151205/http://www.gamasu... |
|
| ▲ | cookiengineer a day ago | parent | prev | next [-] |
| I kind of wanted to confirm that. At that time I was still using a Compaq business laptop on which I played Guild Wars. The Turion64 chipset was the worst CPU I've ever bought. Even 10-year-old games had rendering artefacts all over the place: triangle strips being "disconnected", leading to big triangles appearing everywhere. It was such weird behavior, because it always happened around 10 minutes after I started playing. It didn't matter _what_ I was playing. Every game had rendering artefacts, one way or the other. The most obvious ones were 3D games like CS1.6, Guild Wars, NFSU(2), and CC Generals (though CCG ran better/longer for whatever reason). The funny part behind the VRAM(?) bitflips was that the triangles then connected to the next triangle strip, so you had e.g. large surfaces in between houses or other things, and the connections were always at the same z distance from the camera because game engines presorted them before issuing the actual GL calls. After that laptop I never bought these types of low-budget business laptops again, because the experience with the Turion64 was just so ridiculously bad. |
|
| ▲ | jiggawatts a day ago | parent | prev | next [-] |
| Some multiplayer real-time strategy (RTS) games used deterministic fixed-point maths and incremental updates to keep the players in sync. Despite this, there would be the occasional random de-sync kicking someone out of a game, more than likely because of bit flips. |
| |
| ▲ | netcoyote 19 hours ago | parent [-] | | For RTS games I wish we could blame bit flips, but more typically it is uninitialized memory, incorrectly-not-reinitialized static variables, memory overwrites, use-after-free, non-deterministic functions (eg time), and pointer comparisons. God I love C/C++. It’s like job security for engineers who fix bugs. | | |
| ▲ | blep-arsh 16 hours ago | parent [-] | | Some games are reliable enough. I found out the DRAM in my PC was going bad when Factorio started behaving weird. Did a memory test to confirm. Yep, bitflips. |
|
|
|
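The desync detection jiggawatts alludes to usually works by hashing the deterministic sim state each tick and comparing digests across peers. A toy sketch (real games hash far more state than unit id/position/hp):

```python
import hashlib, struct

FP = 1 << 16   # 16.16 fixed point: identical integer math on every client

def state_hash(units):
    """Digest of the simulation state for one tick. Peers exchange these
    small hashes and kick anyone whose digest disagrees, whether the
    cause is a bitflip, UB, or a nondeterministic function."""
    h = hashlib.sha256()
    for uid, x, y, hp in sorted(units):   # deterministic iteration order
        h.update(struct.pack('<qqqq', uid, x, y, hp))
    return h.hexdigest()[:16]
```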
| ▲ | hsbauauvhabzb a day ago | parent | prev | next [-] |
| Did you/he ever consider redundant allocation for high value content and hash checks for low value assets that are still important? I imagine the largest volume of game memory consumption is media assets which if corrupted would really matter, and the storage requirement for important content would be reasonably negligible? |
| |
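The hash-check idea above could look like a checksum stored alongside the in-memory bytes and verified on read; a sketch only, since a real engine would amortize or sample these checks rather than verify every access:

```python
import zlib

class CheckedAsset:
    """Keep a CRC32 alongside an in-memory asset and verify on read."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.crc = zlib.crc32(data)

    def get(self) -> bytes:
        # recompute and compare; a mismatch means in-memory corruption
        if zlib.crc32(bytes(self.data)) != self.crc:
            raise RuntimeError("asset corrupted in memory")
        return bytes(self.data)
```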
| ▲ | nomel a day ago | parent | next [-] | | I think the most reasonable take would be to just tell the users their hardware is borked, since they're going to have a bad time outside the game too, and point them to one of the many guides around this topic. I don't think engineering effort should ever be put into handling literal bad hardware. But, the user would probably love you for letting them know how to fix all the crashing they have while they use their broken computer! To counter that, we're LONG overdue for ECC in all consumer systems. | | |
| ▲ | AlotOfReading a day ago | parent | next [-] | | I put engineering effort into handling bad hardware all the time because safety critical, :) It significantly overlaps the engineering to gracefully handle non-hardware things like null pointers and forgetting to update one side of a communication interface. 80/20 rule, really. If you're thoughtful about how you build, you can get most of the benefits without doing the expensive stuff. | |
| ▲ | shakna a day ago | parent | prev [-] | | I think I sit in another camp. A lot of my engineering efforts are in working around bad hardware. Better the user sees some lag due to state rebuild versus a crash. Most consumers have what they have, and use what they have. Upgrading everything is now rare. If they got screwed, they'll remain screwed for a few years. | | |
| |
| ▲ | andai a day ago | parent | prev | next [-] | | That's an interesting idea. How might you implement that? Like RAID but on the level of variables? Maybe the one valid use case for getters/setters? :) | | |
| ▲ | hsbauauvhabzb a day ago | parent [-] | | As another user fairly pointed out, ECC. But a compiler-level flag would probably achieve the redundancy; sourcing stuff from disk, etc. would probably still need to happen twice to ensure that bit flips do not occur. |
| |
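The "RAID at the level of variables" idea can be sketched as triple modular redundancy behind a getter/setter. Purely illustrative: in CPython these are three references, not three independent memory cells, so a real implementation would need compiler or hardware support:

```python
class Redundant:
    """Three copies of one value; majority vote and self-repair on read."""
    def __init__(self, value):
        self.set(value)

    def set(self, value):
        self._a = self._b = self._c = value

    def get(self):
        # majority vote across the three copies
        if self._a == self._b or self._a == self._c:
            winner = self._a
        elif self._b == self._c:
            winner = self._b
        else:
            raise RuntimeError("all three copies disagree")
        self.set(winner)   # scrub: repair any corrupted copy
        return winner
```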
| ▲ | a day ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | rurban 17 hours ago | parent | prev [-] |
| I hate HW so much. To revise the list of the biggest problems in computing (besides running out of tokens): HW bugs |