| ▲ | Unpowered SSDs slowly lose data (xda-developers.com) |
| 714 points by amichail a day ago | 286 comments |
| |
|
| ▲ | carra a few seconds ago | parent | next [-] |
We may be facing a grim situation in a few years because of this. Right now most consumer-grade storage is flash memory, and all of it suffers from this. SSDs, pendrives, SD cards, CompactFlash... Apparently games for the Nintendo 3DS and PS Vita are already suffering from this, and people losing photos to faulty SD cards is hardly news. |
|
| ▲ | userbinator a day ago | parent | prev | next [-] |
One key point about retention which is not often mentioned, and which this article also omits, is that retention is inversely proportional to program/erase cycles and decreases exponentially with increasing temperature. That's why retention specs are usually X amount of time after Y cycles at Z temperature. Even a QLC SSD that has only been written to once, and kept in a freezer at -40, may hold data for several decades. Manufacturers have been playing this game with DWPD/TBW numbers too --- by reducing the retention spec, they can advertise a drive as having a higher endurance with the exact same flash. But if you compare the numbers over the years, it's clear that NAND flash has gotten significantly worse; the only thing that has gone up, multiplicatively, is capacity, while endurance and retention have both gone down by a few orders of magnitude. For a long time, 10 years after 100K cycles was the gold standard of SLC flash. Now we are down to several months after less than 1K cycles for QLC.
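As a back-of-the-envelope illustration of the temperature term (my numbers, using the Arrhenius model the JEDEC specs assume, with an activation energy of roughly 1.1 eV, not a figure from the article): the acceleration factor between two storage temperatures is

  AF = exp( (Ea/k) * (1/T_cool - 1/T_hot) )

With Ea/k ≈ 12,765 K, going from 25°C (298 K) to 55°C (328 K) gives AF = exp(12765 × (1/298 − 1/328)) ≈ 50, i.e. charge loss runs roughly 50× faster. That's how a retention figure quoted at room temperature can shrink from a year to something on the order of a week in a hot enclosure.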
| |
| ▲ | londons_explore 17 hours ago | parent | next [-] | | I'm sad that drives don't have a 'shutdown' command which writes a few extra bytes of ECC data per page into otherwise empty flash cells. It turns out that a few extra bytes can turn 1 year of retention into 100 years of retention. | | |
| ▲ | adrian_b 14 hours ago | parent | next [-] | | There are programs with which you can add any desired amount of redundancy to your backup archives, so that they can survive corruption affecting no more data than the added redundancy. For instance, on Linux there is par2cmdline. For all my backups, I create pax archives, which are then compressed, then encrypted, then augmented with par2create recovery data, then aggregated again in a single pax file (the legacy tar file formats are not good for faithfully storing all metadata of modern file systems and each kind of tar program may have proprietary non-portable extensions to handle this, therefore I use only the pax file format). Besides that, important data should be replicated and stored on 2 or even 3 SSDs/HDDs/tapes, which should preferably be stored themselves in different locations. | | |
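A minimal sketch of that kind of pipeline, with illustrative file names and a 10% redundancy figure of my own choosing (xz and gpg stand in for whatever compressor/encryptor you prefer):

  # archive with pax in POSIX pax format (keeps modern file metadata)
  pax -w -x pax -f backup.pax /data/important
  # compress, then encrypt
  xz -9 backup.pax
  gpg --symmetric --cipher-algo AES256 backup.pax.xz   # -> backup.pax.xz.gpg
  # add ~10% recovery data with par2cmdline
  par2create -r10 backup.pax.xz.gpg
  # bundle the payload and its .par2 recovery volumes into one pax file
  pax -w -x pax -f backup-bundle.pax backup.pax.xz.gpg backup.pax.xz.gpg*.par2

Later, par2verify detects corruption and par2repair can reconstruct the file as long as no more than the added redundancy has been damaged.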
| ▲ | antonkochubey 13 hours ago | parent | next [-] | | Unfortunately, some SSD controllers flatly refuse to return data they consider corrupted. Even if you have extra parity that could potentially restore the corrupted data, your entire drive might refuse to read. | | |
| ▲ | lazide 12 hours ago | parent [-] | | Huh? The issue being discussed is random blocks, yes? If your entire drive is bricked, that is an entirely different issue. | | |
| ▲ | jeremyvisser 12 hours ago | parent [-] | | Here’s the thing. That SSD controller is the interface between you and those blocks. If it decides, by some arbitrary measurement, as defined by some logic within its black box firmware, that it should stop returning all blocks, then it will do so, and you have almost no recourse. This is a very common failure mode of SSDs. As a consequence of some failed blocks (likely exceeding a number of failed blocks, or perhaps the controller’s own storage failed), drives will commonly brick themselves. Perhaps you haven’t seen it happen, or your SSD doesn’t do this, or perhaps certain models or firmwares don’t, but some certainly do, both from my own experience, and countless accounts I’ve read elsewhere, so this is more common than you might realise. | | |
| ▲ | cogman10 4 hours ago | parent | next [-] | | I really wish this responsibility was something hoisted up into the FS and not a responsibility of the drive itself. It's ridiculous (IMO) that SSD firmware is doing so much transparent work just to keep the illusion that the drive is actually spinning metal with similar sector write performance. | |
| ▲ | londons_explore 6 hours ago | parent | prev | next [-] | | The mechanism is usually that the SSD controller requires that some work be done before your read - for example rewriting some access tables to record 'hot' data. That work can't be done because there are no free blocks. However, no space can be freed up because every spare writable block is bad or is in some other unusable state. The drive is therefore dead - it will enumerate, but neither read nor write anything. | |
| ▲ | reactordev 11 hours ago | parent | prev | next [-] | | This is correct, you still have to go through firmware to gain access to the block/page on “disk” and if the firmware decides the block is invalid then it fails. You can sidestep this by bypassing the controller on a test bench though. Pinning wires to the chips. At that point it’s no longer an SSD. | |
| ▲ | lazide 8 hours ago | parent | prev [-] | | Yes, and? HDD controllers dying and head crashes are a thing too. At least in the ‘bricked’ case it’s a trivial RMA - corrupt blocks tend to be a harder fight. And since ‘bricked’ is such a trivial RMA, manufacturers have more of an incentive to fix it or go broke, or avoid it in the first place. This is why backups are important now; and always have been. | | |
| ▲ | mort96 6 hours ago | parent [-] | | We're not talking about the SSD controller dying. The SSD controller in the hypothetical situation that's being described is working as intended. |
|
|
|
| |
| ▲ | mywittyname 3 hours ago | parent | prev | next [-] | | This is fine, but I'd prefer an option to transparently add parity bits to the drive, even if it means losing access to capacity. Personally, I keep backups of critical data on a platter disk NAS, so I'm not concerned about losing critical data off of an SSD. However, I did recently have to reinstall Windows on a computer because of a randomly corrupted system file. Which is something this feature would have prevented. | |
| ▲ | casenmgreen 2 hours ago | parent | prev [-] | | Thank you for this. I had no knowledge of pax, or that par was an open standard, and I care about what they help with. Going to switch over to using both in my backups. |
| |
| ▲ | consp 16 hours ago | parent | prev | next [-] | | Blind question with no attempt to look it up: why don't filesystems do this? It won't work for most boot code but that is relatively easy to fix by plugging it in somewhere else. | | |
| ▲ | lxgr 14 hours ago | parent | next [-] | | Wrong layer. SSDs know which blocks have been written to a lot, have been giving a lot of read errors before etc., and often even have heterogeneous storages (such as a bit of SLC for burst writing next to a bunch of MLC for density). They can spend ECC bits much more efficiently with that information than a file system ever could, which usually sees the storage as a flat, linear array of blocks. | | |
| ▲ | adrian_b 14 hours ago | parent | next [-] | | This is true, but nevertheless you cannot place your trust only in the manufacturer of the SSD/HDD, as I have seen enough cases where the SSD/HDD reports no errors but nonetheless returns corrupted data. For any important data you should have your own file hashes, for corruption detection, and you should add some form of redundancy for file repair, either with a specialized tool or simply by duplicating the file on separate storage media. A database with file hashes can also serve other purposes than corruption detection, e.g. it can be used to find duplicate data without physically accessing the archival storage media. | |
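A minimal sketch of such a hash database with plain sha256sum (paths are illustrative; run the check from the same directory so the relative paths resolve):

  cd /mnt/archive
  # build the manifest once, store it somewhere else
  find . -type f -print0 | xargs -0 sha256sum > /safe/place/archive.sha256
  # later: re-read every file and report only the ones whose hash changed
  sha256sum --check --quiet /safe/place/archive.sha256

Since the manifest is tiny it can live on separate media, and searching it for repeated hashes finds duplicates without touching the archive drives at all.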
| ▲ | lxgr 13 hours ago | parent [-] | | Verifying at higher layers can be ok (it's still not ideal!), but trying to actively fix things below that are broken usually quickly becomes a nightmare. |
| |
| ▲ | DeepSeaTortoise 12 hours ago | parent | prev [-] | | IMO it's exactly the right layer, just like for ECC memory. There's a lot of potential for errors when the storage controller processes and turns the data into analog magic to transmit it. In practice, this is a solved problem, but only until someone makes a mistake, then there will be a lot of trouble debugging it between the manufacturer certainly denying their mistake and people getting caught up on the usual suspects. Doing all the ECC stuff right on the CPU gives you all the benefits against bitrot and resilience against all errors in transmission for free. And if all things go just right we might even be getting better instruction support for ECC stuff. That'd be a nice bonus | | |
| ▲ | lxgr 12 hours ago | parent | next [-] | | > There's a lot of potential for errors when the storage controller processes and turns the data into analog magic to transmit it. That's a physical layer, and as such should obviously have end-to-end ECC appropriate to the task. But the error distribution shape is probably very different from that of bytes in NAND data at rest, which is different from that of DRAM and PCI again. For the same reason, IP does not do error correction, but rather relies on lower layers to present error-free datagram semantics to it: Ethernet, Wi-Fi, and (managed-spectrum) 5G all have dramatically different properties that higher layers have no business worrying about. And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly. | | |
| ▲ | johncolanduoni 9 hours ago | parent [-] | | > And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly. Outside of wireless links (where FEC of some degree is necessary regardless) this is mostly because TCP’s checksum is so weak. QUIC for example handles this much better, since the packet’s authenticated encryption doubles as a robust error detecting code. And unlike TLS over TCP, the connection is resilient to these failures: a TCP packet that is corrupted but passes the TCP checksum will kill the TLS connection on top of it instead of retransmitting. | | |
| ▲ | lxgr 9 hours ago | parent [-] | | Ah, I meant go south in terms of performance, not correctness. Most TCP congestion control algorithms interpret loss exclusively as a congestion signal, since that's what most lower layers have historically presented to it. This is why newer TCP variants that use different congestion signals can deal with networks that violate that assumption better, such as e.g. Starlink: https://blog.apnic.net/2024/05/17/a-transport-protocols-view... Other than that, I didn't realize that TLS has no way of just retransmitting broken data without breaking the entire connection (and a potentially expensive request or response with it)! Makes sense at that layer, but I never thought about it in detail. Good to know, thank you. |
|
| |
| ▲ | johncolanduoni 10 hours ago | parent | prev [-] | | ECC memory modules don’t do their own very complicated remapping from linear addresses to physical blocks like SSDs do. ECC memory is also oriented toward fixing transient errors, not persistently bad physical blocks. |
|
| |
| ▲ | londons_explore 15 hours ago | parent | prev | next [-] | | The filesystem doesn't have access to the right existing ECC data to be able to add a few bytes to do the job. It would need to store a whole extra copy. There are potentially ways a filesystem could use hierarchical ECC to just store a small percentage extra, but it would be far from theoretically optimal and rely on the fact that just a few logical blocks of the drive become unreadable, and that those logical blocks aren't correlated in write time (which I imagine isn't true for most ssd firmware). | |
| ▲ | mrspuratic 11 hours ago | parent | next [-] | | CD storage has an interesting take: the available sector size varies by use, i.e. audio or MPEG-1 video (VideoCD) at 2352 data octets per sector (with two media-level ECCs), versus actual data at 2048 octets per sector, where the extra EDC/ECC can be exposed by reading "raw". I learned this the hard way with VideoPack's malformed VCD images; I wrote a tool to post-process the images to recreate the correct EDC/ECC per sector. Fun fact: ISO9660 stores file metadata simultaneously in big-endian and little-endian form (AFAIR VP used to fluff that up too). | |
| ▲ | xhkkffbf 7 hours ago | parent [-] | | Octets? Don't you mean "bytes"? Or is that word problematic now? | | |
| ▲ | theragra 5 hours ago | parent | next [-] | | I wonder if OP used "octets" because the physical pattern on a CD used to represent a byte is a sequence of 17 pits and lands. BTW, byte size has historically varied from 4 to 24 bits!
Even now, depending on interpretation, you can say 16-bit bytes exist: the char type can be 16 bits on some DSP systems. I was curious, so I checked. Before this comment, I only knew about 7-bit bytes. | |
| ▲ | asveikau 4 hours ago | parent | prev | next [-] | | The term octets is pretty common in network protocol RFCs, maybe their vocabulary is biased in the direction of that writing. | |
| ▲ | ralferoo 5 hours ago | parent | prev [-] | | Personally, I prefer the word "bytes", but "octets" is technically more accurate as there are systems that use differently sized bytes. A lot of these are obsolete but there are also current examples, for example in most FPGA that provide SRAM blocks, it's actually arranged as 9, 18 or 36-bit wide with the expectation that you'll use the extra bits for parity or flags of some kind. |
|
| |
| ▲ | lazide 12 hours ago | parent | prev [-] | | Reed-Solomon codes, or forward error correction, are what you’re discussing. All modern drives do it at low levels anyway. It would not be hard for a COW file system to use them, but it can easily get out of control paranoia-wise. Ideally you’d need them for every bit of data, including metadata. That said, I did have a computer that randomly bit flipped when writing to storage sometimes (eventually traced it to an iffy power supply), and PAR (a type of Reed-Solomon forward error correction library) worked great for getting a working backup off the machine. Every other thing I tried would end up with at least a couple bit flip errors per GB, which made it impossible. |
| |
| ▲ | DeepSeaTortoise 15 hours ago | parent | prev [-] | | You can still do this for boot code if the error isn't significant enough to make all of the boot fail. The "fixing it by plugging it in somewhere else" could then also be simple enough to the point of being fully automated. ZFS has "copies=2", but iirc there are no filesystems with support for single disk erasure codes, which is a huge shame because these can be several orders of magnitude more robust compared to a simple copy for the same space. | | |
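For reference, the ZFS knob mentioned above is just a per-dataset property; the pool/dataset names here are made up:

  # keep two copies of every block in this dataset, even on a single disk
  zfs create -o copies=2 tank/backups
  # or change it on an existing dataset (only applies to newly written data)
  zfs set copies=2 tank/backups

It roughly doubles the space every block consumes, which is exactly the overhead a proper single-disk erasure code could shrink.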
| |
| ▲ | victorbjorklund 16 hours ago | parent | prev | next [-] | | That does sound like a good idea (even if I’m sure some very smart people know why it would be a bad idea) | |
| ▲ | alex_duf 12 hours ago | parent | prev [-] | | I guess the only way to do this today is with a raid array? |
| |
| ▲ | ACCount37 16 hours ago | parent | prev | next [-] | | Because no one is willing to pay for SLC. Those QLC NAND chips? Pretty much all of them have an "SLC mode", which treats each cell as 1 bit, and increases both write speeds and reliability massively. But who wants to have 4 times less capacity for the same price? | | |
| ▲ | userbinator 16 hours ago | parent | next [-] | | 4 times less capacity but 100x or more endurance or retention at the same price looks like a great deal to me. Alternatively: do you want to have 4x more capacity at 1/100th the reliability? Plenty of people would be willing to pay for SLC mode. There is an unofficial firmware hack that enables it: https://news.ycombinator.com/item?id=40405578 1TB QLC SSDs are <$100 now. If the industry was sane, we would have 1TB SLC SSDs for less than $400, or 256GB ones for <$100, and in fact SLC requires less ECC and can function with simpler (cheaper, less buggy, faster) firmware and controllers. But why won't the manufacturers let you choose? The real answer is clearly planned obsolescence. I have an old SLC USB drive which is only 512MB, but it's nearly 20 years old and some of the very first files I wrote to it are still intact (I last checked several months ago, and don't expect it's changed since then.) It has probably had a few hundred full-drive-writes over the years --- well worn-out by modern QLC/TLC standards, but barely-broken-in for SLC. | | |
| ▲ | ACCount37 15 hours ago | parent | next [-] | | The real answer is: no one actually cares. Very few people have the technical understanding required to make such a choice. And of those, fewer people still would actually pick SLC over QLC. At the same time: a lot of people would, if facing a choice between a $50 1TB SSD and a $40 1TB SSD, pick the latter. So there's a big incentive to optimize on cost, and not a lot of incentive to optimize on anything else. This "SLC only" mode exists in the firmware for the sake of a few very specific customers with very specific needs - the few B2B customers that are actually willing to pay that fee. And they don't get the $50 1TB SSD with a settings bit flipped - they pay a lot more, and with that, they get better QC, a better grade of NAND flash chips, extended thermal envelopes, performance guarantees, etc. Most drives out there just use this "SLC" mode for caches, "hot spot" data and internal needs. | | |
| ▲ | volemo 15 hours ago | parent | next [-] | | Agreed. I have some technical understanding of SLC’s advantages, but why would I choose it over QLC? My file system has checksums on data and metadata, my backup strategy is solid, my SSD is powered most days, and before it dies I’ll probably upgrade my computer for other reasons. | |
| ▲ | Aurornis 9 hours ago | parent | prev [-] | | > Very few people have the technical understanding required to make such a choice. And of those, fewer people still would actually pick SLC over QLC. There was a period of time when you could still buy consumer SLC drives and pay a premium for them. I still have one. Anyone assuming the manufacturers are missing out on a golden market opportunity of hidden SLC drive demand is missing the fact that they already offered these. They know how well (or rather, how poorly) they sell. Even if consumers had full technical knowledge to make decisions, most would pick the TLC and QLC anyway. Some of these comments are talking about optimizing 20 year old drives for being used again two decades later, but ignoring the fact that a 20 year old drive is nearly useless and could be replaced by a superior option for $20 on eBay. The only thing that would change, practically speaking, is that people looking for really old files on drives they haven’t powered up for 20 years wouldn’t be surprised that they were missing. The rest of us will do just fine with our TLC drives and actual backups to backup services or backup mediums. I’ll happily upgrade my SSD every 4-5 years and enjoy the extra capacity over SLC while still coming out money ahead and not losing data. |
| |
| ▲ | Sohcahtoa82 6 hours ago | parent | prev | next [-] | | > But why won't the manufacturers let you choose? The real answer is clearly planned obsolescence. No, it's not. The real answer is that customers (Even B2B) are extremely price sensitive. Look, I know the prevailing view is that lower quality is some evil corporate plan to get you to purchase replacements on a more frequent basis, but the real truth is that consumers are price sensitive, short sighted, and often purchasing without full knowledge. There's a race to the bottom on price, which means quality suffers. You put your typical customer in front of two blenders at the appliance store, one is $20 and the other is $50, most customers will pick the $20 one, even when armed with the knowledge that the $50 version will last longer. When it comes to QLC vs SLC, buyers don't care. They just want the maximum storage for the smallest price. | | |
| ▲ | unethical_ban 5 hours ago | parent [-] | | For your specific example, I would buy the $20 one because I would assume the $50 one is just as bad. Having built computers casually for some time, I never recall being told by the marketing department or retailer that one kind of SSD was more reliable than another. The only thing that is ever advertised blatantly is speed and capacity. I saw the kind of SSD sometimes, but it was never explained what that meant to a consumer (the same way SMR hard drives were never advertised as having slow sustained writes). If I saw "this kind of SSD is reliable for 10 years and the other one is reliable for 2" then I may have made a decision based on that. |
| |
| ▲ | mort96 6 hours ago | parent | prev | next [-] | | > do you want to have 4x more capacity at 1/100th the reliability? Yes. QLC SSDs are reliable enough for my day-to-day use, but even QLC storage is quite expensive and I wouldn't want to pay 4x (or realistically, way more than 4x) to get 2TB SLC M.2 drives instead of 2TB QLC M.2 drives. | |
| ▲ | big-and-small 16 hours ago | parent | prev | next [-] | | Funny enough, I just managed to find this exact post and comment on Google 5 minutes ago when I started wondering whether it's actually possible to use 1/4 of the capacity in SLC mode. Though what makes me wonder is that some reviews of modern SSDs mention that pSLC is somewhat less than 25% of capacity, like a 400GB pSLC cache for a 2TB SSD: https://www.tomshardware.com/pc-components/ssds/crucial-p310... So you get more like 20% of SLC capacity, at least on some SSDs |
| ▲ | kvemkon 10 hours ago | parent | prev | next [-] | | The NVMe protocol introduced namespaces. Isn't that the perfect feature for letting users decide for themselves how to create 2 virtual SSDs, one TLC and one pseudo-SLC, choosing how much space to sacrifice for pSLC? | |
| ▲ | wmf 5 hours ago | parent [-] | | Most people want to use pSLC as cache or as the whole drive, not as a separate namespace. |
| |
| ▲ | Aurornis 9 hours ago | parent | prev | next [-] | | > Alternatively: do you want to have 4x more capacity at 1/100th the reliability? If the original drive has sufficient reliability, then yes I do want that. And the majority of consumers do, too. Chasing absolute extreme highest powered off durability is not a priority for 99% of people when the drives work properly for typical use cases. I have 5 year old SSDs where the wear data is still in the single digit percentages despite what I consider moderately heavy use. > I have an old SLC USB drive which is only 512MB, but it's nearly 20 years old and some of the very first files I wrote to it are still intact (I last checked several months ago, and don't expect it's changed since then.) It has probably had a few hundred full-drive-writes over the years --- well worn-out by modern QLC/TLC standards, but barely-broken-in for SLC. Barely broken in, but also only 512MB, very slow, and virtually useless by modern standards. The only positive is that the files are still intact on that old drive you dusted off. This is why the market doesn’t care and why manufacturers are shipping TLC and QLC: They aren’t doing a planned obsolescence conspiracy. They know that 20 years from now or even 10 years from now that drive is going to be so outdated that you can get a faster, bigger new one for pocket change. | |
| ▲ | throwaway290 12 hours ago | parent | prev | next [-] | | > I have an old SLC USB drive which is only 512MB, but it's nearly 20 years old and some of the very first files I wrote to it are still intact (I last checked several months ago It's not about age of drive. It's how much time it spent without power. | |
| ▲ | justsomehnguy 13 hours ago | parent | prev [-] | | > If the industry was sane Industry is sane in both the common and capitalist sense. The year 2025 and people still buy 256Tb USB thumbdrives for $30, because nobody cares except for the price. |
| |
| ▲ | big-and-small 16 hours ago | parent | prev | next [-] | | To be honest you can buy a 4TB SSD for $200 now, so I guess the market would be larger if people were aware of how easy it would be to make such SSDs work exclusively in SLC mode. |
| ▲ | anthk 13 hours ago | parent | prev [-] | | Myself wants. I remember when the UBIFS module (or some kernel settings) for the Debian kernel went with MLC instead of SLC. You could store 4X more data now, but at the cost of really bad reliability: a SINGLE bad shutdown and your partitions would be corrupted to the point of not being able to properly boot any more, having to reflash the NAND. | |
| ▲ | moffkalast 9 hours ago | parent [-] | | Well then buy an industrial SSD, they're something like 80-240 GB and you get power loss protection capacitors too. Just not the datacenter ones, those melt immediately without rack airflow. |
|
| |
| ▲ | RachelF 21 hours ago | parent | prev | next [-] | | Endurance going down is hardly a surprise given that the feature size has gone down too. The same goes for logic and DRAM memory. I suspect that in 2035, hardware from 2010 will still work, while that from 2020 will be less reliable. | |
| ▲ | lotrjohn 20 hours ago | parent | next [-] | | Completely anecdotal, and mostly unrelated, but my NES from 1990 is still going strong. Two PS3’s that I have owned simply broke. CRTs from 1994 and 2002 still going strong. LCD tvs from 2012 and 2022 just went kaput for no reason. Old hardware rocks. | | |
| ▲ | userbinator 17 hours ago | parent | next [-] | | LCD tvs from 2012 and 2022 just went kaput for no reason. Most likely bad capacitors. The https://en.wikipedia.org/wiki/Capacitor_plague may have passed, but electrolytic capacitors are still the major life-limiting component in electronics. | | |
| ▲ | londons_explore 16 hours ago | parent | next [-] | | MLCC's look ready to take over nearly all uses of electrolytics. They still degrade with time, but in a very predictable way. That makes it possible to build a version of your design with all capacitors '50 year aged' and check it still works. Sadly no engineering firm I know does this, despite it being very cheap and easy to do. | |
| ▲ | DoesntMatter22 3 hours ago | parent | prev [-] | | Looks like that plague stopped in 2007? I have an 8 year old LCD that died out of nowhere as well, so I'm guessing it wouldn't be affected by this. Could still be a capacitor issue though |
| |
| ▲ | theragra 5 hours ago | parent | prev | next [-] | | I had an LCD that worked from around 2005 to 2022. It became very yellow closer to 2022 for some reason. It was Samsung PVA, I think it was model 910T. | | | |
| ▲ | Dylan16807 18 hours ago | parent | prev | next [-] | | For what it's worth my LCD monitor from 2010 is doing well. I think the power supplied died at one point but I already had a laptop supply to replace it with. | |
| ▲ | dfex 18 hours ago | parent | prev [-] | | Specifically old Japanese hardware from the 80s and 90s - this stuff is bulletproof | | |
| ▲ | jacquesm 17 hours ago | parent [-] | | I still have a Marantz amp from the 80's that works like new, it hasn't even been recapped. |
|
| |
| ▲ | bullen 17 hours ago | parent | prev | next [-] | | I concur; in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead, started dying after 5 years, last one died 9 years later. Around 10 drives in each group. All older drives are below 100GB (SLC), all newer ones are above 200GB (MLC). I reverted back to older drives for all my machines in 2021 after scoring 30x unused X25-E on eBay. The only MLC I use today are Samsung's best industrial drives and they work sort of... but no promises. And SanDisk SD cards: if you buy the cheapest ones, they last a surprising amount of time. 32GB lasted 11-12 years for me. Now I mostly install 500GB-1TB ones (recently = only been running for 2-3 years) after installing some 200-400GB ones that still work after 7 years. | |
| ▲ | Aurornis 9 hours ago | parent [-] | | > in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead, As a counter anecdote, I have a lot of SSDs from the late 2010s that are still going strong, but I lost some early SSD drives to mysterious and unexpected failures (not near the wear-out level). | | |
| |
| ▲ | Dylan16807 21 hours ago | parent | prev | next [-] | | As far as I'm aware flash got a bit of a size boost when it went 3D and hasn't shrunk much since then. If you use the same number of bits per cell, I don't know if I would expect 2010 and 2020 or 2025 flash to vary much in endurance. For logic and DRAM the biggest factors are how far they're being pushed with voltage and heat, which is a thing that trends back and forth over the years. So I could see that go either way. | |
| ▲ | robotnikman 6 hours ago | parent | prev | next [-] | | I recently found a 1GB USB drive from around 2006 I used to use. I plugged it in and most of the files were still readable! There were some that were corrupted and unreadable unfortunately. |
| ▲ | tensility 7 hours ago | parent | prev [-] | | Oh, it would be nice if it were just feature size. Over the prior 15 years, the nand industry has doubled its logical density three times over with the trick of encoding more than one bit per physical voltage well, making the error bounds on leaking wells tighter and tighter and amplifying the bit rot impact, in number of ECC corrections consumed, per leaked voltage well. |
| |
| ▲ | hxorr 21 hours ago | parent | prev | next [-] | | I also seem to remember reading retention is proportional to temperature at time of write. Ie, best case scenario = write data when drive is hot, and store in freezer. Would be happy if someone can confirm or deny this. | | |
| ▲ | pbmonster 18 hours ago | parent | next [-] | | I know we're talking theoretical optimums here, but: don't put your SSDs in the freezer. Water ingress because of condensation will kill your data much quicker than NAND bit rot at room temperature. | | |
| ▲ | cesaref 11 hours ago | parent | next [-] | | I'm interested in why SSDs would struggle with condensation. What aspect of the design is prone to issues? I routinely repair old computer boards, replace leaky capacitors, that sort of thing, and have cleaned boards with IPA and rinsed in tap water without any issues to anything for many years. | | | |
| ▲ | dachris 17 hours ago | parent | prev | next [-] | | Would an airtight container and liberal addition of desiccants help? | |
| ▲ | pbmonster 16 hours ago | parent | next [-] | | Sure. Just make sure the drive is warm before you take it out of the container - because this is when the critical condensation happens: you take out a cold drive and expose it to humid room temperature air. Then water condenses on (and in) the cold drive. Re-freezing is also critical, the container should contain no humid air when it goes into the freezer, because the water will condense and freeze as the container cools. A tightly wrapped bag, desiccant and/or purging the container with dry gas would prevent that. | |
| ▲ | mkesper 15 hours ago | parent [-] | | A vacuum sealer would probably help to avoid the humid air, too. | | |
| ▲ | mort96 6 hours ago | parent [-] | | Only if you wait for the drive to heat up before you remove the vacuum seal. |
|
| |
| ▲ | Aurornis 9 hours ago | parent | prev [-] | | Be careful, airtight doesn’t mean it’s not moisture permeable over time. Color changing desiccant is a good idea to monitor it. |
| |
| ▲ | Onavo 17 hours ago | parent | prev [-] | | What about magnetic tape? | | |
| ▲ | pbmonster 17 hours ago | parent [-] | | For long term storage? Sure, everybody does it. In the freezer? Better don't, for the same reason. There are ways to keep water out of frozen/re-frozen items, of course, but if you mess up you have water everywhere. |
|
| |
| ▲ | userbinator 20 hours ago | parent | prev | next [-] | | That's probably this: https://www.sciencedirect.com/science/article/abs/pii/S00262... | |
| ▲ | CarVac 20 hours ago | parent | prev [-] | | I definitely remember seeing exactly this. |
| |
| ▲ | aidenn0 3 hours ago | parent | prev | next [-] | | On the other hand when capacity goes up, the cycle-count goes down for the same workload. A 4TB drive after 1K cycles has written the same amount of data as a 100GB drive after 40K cycles. |
| ▲ | karczex 17 hours ago | parent | prev | next [-] | | That's how it has to work. To increase capacity you have to make smaller cells, where charge may more easily diffuse from one cell to another. Also, to make the drive faster, the stored charge has to be smaller, which also decreases endurance. The SLC vs. QLC comparison is even worse, as QLC is basically a clever hack to store 4 times more data in the same number of physical cells - it's a tradeoff. | |
| ▲ | bullen 17 hours ago | parent [-] | | Yes, but that tradeoff comes with a hidden cost: complexity! I much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB. The spread functions that move bits around to even the writes or caches will also fail. The best compromise is of course to use both kinds for different purposes: SLC for small main OS (that will inevitably have logs and other writes) and MLC for slowly changing large data like a user database or files. The problem is now you cannot choose because the factories/machines that make SLC are all gone. | | |
| ▲ | userbinator 16 hours ago | parent | next [-] | | The problem is now you cannot choose because the factories/machines that make SLC are all gone. You can still get pure SLC flash in smaller sizes, or use TLC/QLC in SLC mode. I much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB. It's more like 1TB of SLC vs. 3TB of TLC or 4TB of QLC. All three take the same die area, but the SLC will last a few orders of magnitude longer. | | |
| ▲ | karczex 15 hours ago | parent [-] | | SLC is produced, but the issue is that there are no SLC products (that I'm aware of) for the consumer market |
| |
| ▲ | mort96 6 hours ago | parent | prev [-] | | My problem is: I have more than 64GB of data |
|
| |
| ▲ | awei 7 hours ago | parent | prev | next [-] | | So AWS S3 Glacier might actually be cold | |
| ▲ | nutjob2 20 hours ago | parent | prev [-] | | > Even a QLC SSD that has only been written to once, and kept in a freezer at -40, may hold data for several decades. So literally put your data in cold storage. | | |
|
|
| ▲ | dale_glass a day ago | parent | prev | next [-] |
So on the off-chance that there's a firmware engineer in here, how does this actually work? Like does an SSD do some sort of refresh on power-on, or every N hours, or you have to access the specific block, or...? What if you interrupt the process, e.g. having an NVMe in an external case that you just plug in once a month for a few minutes to use it as a huge flash drive, is that a problem? What about the unused space, is a 4 TB drive used to transport 1 GB of stuff going to suffer anything from the unused space decaying? It's all very unclear what all of this means in practice and how a user is supposed to manage it. |
| |
| ▲ | fairfeather a day ago | parent | next [-] | | SSD firmware engineer here. I work on enterprise stuff, so ymmv on consumer grade internals. Generally, the data refresh will all happen in the background when the system is powered (depending on the power state). Performance is probably throttled during those operations, so you just see a slightly slower copy while this is happening behind the scenes. The unused space decaying is probably not an issue, since the internal filesystem data is typically stored on a more robust area of media (an SLC location) which is less susceptible to data loss over time. As far as how a user is supposed to manage it, maybe do an fsck every month or something? Using an SSD like that is probably ok most of the time, but might not be super great as a cold storage backup. | | |
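If you want a quick look at whether the drive itself has had to correct or give up on anything, the SMART/health counters are exposed through standard tools (device names here are examples):

  # SATA / generic, via smartmontools
  sudo smartctl -a /dev/sda
  # NVMe health log, via nvme-cli: look at media_errors and num_err_log_entries
  sudo nvme smart-log /dev/nvme0

This only reports what the controller chooses to count, so it complements rather than replaces a full read or a filesystem-level check.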
| ▲ | easygenes a day ago | parent | next [-] | | So say I have a 4TB USB SSD from a few years ago, that's been sitting unpowered in a drawer most of that time. How long would it need to be powered on (ballpark) for the full disk refresh to complete? Assume fully idle. (As a note: I do have a 4TB USB SSD which did sit in a drawer without being touched for a couple of years. The data was all fine when I plugged it back in. Of course, this was a new drive with very low write cycles and stored climate controlled. Older worn out drive would probably have been an issue.) Just wondering how long I should keep it plugged in if I ever have a situation like that so I can "reset the fade clock" per se. | | |
| ▲ | gblargg 17 hours ago | parent [-] | | More certain to just do a full read of the drive to force error correction and updating of any weakening data. | | |
| ▲ | geokon 17 hours ago | parent [-] | | noob question... how do i force a full read? | | |
| ▲ | homebrewer 16 hours ago | parent | next [-] | | The most basic solution, which will work for every filesystem and every type of block device without even mounting anything, but won't actually check much except device-level checksums:
  sudo pv -X /dev/sda
or even just:
  sudo cat /dev/sda >/dev/null
It's pretty inefficient if the device doesn't actually have much data, because it also reads (and discards) empty space. For copy-on-write filesystems that store checksums along with the data, you can request proper integrity checks and also get a nicely formatted report about how well that went. For btrfs:
  sudo btrfs scrub start -B /
or ZFS:
  sudo zpool scrub -a -w
For classic (non-copy-on-write) filesystems that mostly consist of empty space I sometimes do this:
  sudo tar -cf - / | cat >/dev/null
The `cat` and redirection to /dev/null are necessary because GNU tar contains an optimization that doesn't actually read anything when it detects /dev/null as the target. | |
| ▲ | medoc 16 hours ago | parent [-] | | Just as a note, and I checked that it's not the case with the GNU coreutils: on some systems, cp (and maybe cat) would mmap() the source file. When the output is the devnull driver, no read occurs because of course its write function does nothing... So, using a pipe (or dd) may be a good idea in all cases (I did not check the current BSDs). |
| |
| ▲ | easygenes 12 hours ago | parent | prev | next [-] | | This is the most straightforward reliable option: > sudo dd if=/dev/sdX of=/dev/null bs=4M status=progress iflag=direct | | | |
| ▲ | fainpul 17 hours ago | parent | prev [-] | | On macOS / Linux you can use `dd` to "copy" everything from /dev/yourssd to /dev/null. Just be careful not to do it the other way! https://www.man7.org/linux/man-pages/man1/dd.1.html I have no idea if forcing a read is good / the right way. I'm just answering how to do it. | | |
|
|
| |
| ▲ | gruez a day ago | parent | prev | next [-] | | >Generally, the data refresh will all happen in the background when the system is powered (depending on the power state). How does the SSD know when to run the refresh job? AFAIK SSDs don't have an internal clock so it can't tell how long it's been powered off. Moreover does doing a read generate some sort of telemetry to the controller indicating how strong/weak the signal is, thereby informing whether it should refresh? Or does it blindly refresh on some sort of timer? | | |
| ▲ | fairfeather 20 hours ago | parent | next [-] | | Pretty much, but it depends a lot on the vendor and how much you spent on the drive. A lot of the assumptions about enterprise SSDs is that they’re powered pretty much all the time, but are left in a low power state when not in use. So, data can still be refreshed on a timer, as long as it happens within the power budget. There are several layers of data integrity that are increasingly expensive to run. Once the drive tries to read something that requires recovery, it marks that block as requiring a refresh and rewrites it in the background. | |
| ▲ | rasz 19 hours ago | parent | prev [-] | | https://www.techspot.com/news/60501-samsung-addresses-slow-8... samsung fix was aggressive scanning and rewriting in the background |
| |
| ▲ | BrenBarn 19 hours ago | parent | prev | next [-] | | So you need to do an fsck? My big question after reading this article (and others like it) is whether it is enough to just power up the device (for how long?), or if each byte actually needs to be read. The case an average user is worried about is where they have an external SSD that they back stuff up to on a relatively infrequent schedule. In that situation, the question is whether just plugging it in and copying some stuff to it is enough to ensure that all the data on the drive is refreshed, or if there's some explicit kind of "maintenance" that needs to be done. |
| ▲ | bullen 16 hours ago | parent | prev | next [-] | | Ok, so all bits have to be rotated, even when powered on, to not lose their state? Edit: found this below: "Powering the SSD on isn't enough. You need to read every bit occasionally in order to recharge the cell." Hm, so does the firmware have a "read bits to refresh them" logic? | |
| ▲ | ACCount37 16 hours ago | parent [-] | | Kind of. It's "read and write back" logic, and also "relocate from a flaky block to a less flaky block" logic, and a whole bunch of other things. NAND flash is freakishly unreliable, and it's up to the controller to keep this fact concealed from the rest of the system. |
| |
| ▲ | rendaw 18 hours ago | parent | prev | next [-] | | > maybe do an fsck every month or something Isn't that what periodic "scrub" operations are on modern fs like ZFS/BTRFS/BCacheFS? > the data refresh will all happen in the background when the system is powered This confused me. If it happens in the background, what's the manual fsck supposed to be for? | |
| ▲ | whitepoplar a day ago | parent | prev | next [-] | | How long does the data refresh take, approx? Let's say I have an external portable SSD that I keep stored data on. Would plugging the drive into my computer and running dd if=/dev/sdX of=/dev/null bs=1M status=progress
work to refresh any bad blocks internally? | | |
| ▲ | fairfeather 20 hours ago | parent [-] | | A full read would do it, but I think the safer recommendation is to just use a small hdd for external storage. Anything else is just dealing with mitigating factors | | |
| ▲ | whitepoplar 19 hours ago | parent [-] | | Thanks! I think you're right about just using an HDD, but for my portable SSD situation, after a full read of all blocks, how long would you leave the drive plugged in for? Does the refresh procedure typically take a while, or would it be completed in roughly the time it would take to read all blocks? | | |
|
| |
| ▲ | divan 13 hours ago | parent | prev [-] | | I had to google what 'ymmv' means. To save other people's time – it's 'your mileage may vary'. |
| |
| ▲ | rossjudson 18 hours ago | parent | prev | next [-] | | Keep in mind that when flash memory is read, you don't get back 0 or 1. You get back (roughly) a floating point value -- so you might get back 0.1, or 0.8. There's extensive code in SSD controllers to reassemble/error correct/compensate for that, and LDPC-ish encoding schemes. Modern controllers have a good idea how healthy the flash is. They will move data around to compensate for weakness. They're doing far more to detect and correct errors than a file system ever will, at least at the single-device level. It's hard to get away from the basic question, though -- when is the data going to go "poof!" and disappear? That is when your restore system will be tested. | | |
| ▲ | londons_explore 16 hours ago | parent | next [-] | | Unless I am misunderstanding the communication protocol between the flash chip and the controller, there is no way for the controller to know that analogue value. It can only see the digital result. Maybe as a debug feature some registers can be set up adjust the threshold up and down and the same data reread many times to get an idea of how close certain bits are to flipping, but it certainly isn't normal practice for every read. | |
| ▲ | spixy 4 hours ago | parent | prev [-] | | return value < 0.5 ? 0 : 1; |
| |
| ▲ | zozbot234 a day ago | parent | prev [-] | | Typically unused empty space is a good thing, as it will allow drives to run in MLC or SLC mode instead of their native QLC. (At least, this seems to be the obvious implication from performance testing, given the better performance of SLC/MLC compared to QLC.) And the data remanence of SLC/MLC can be expected to be significantly better than QLC. | | |
| ▲ | gruez a day ago | parent [-] | | >as it will allow drives to run in MLC or SLC mode instead of their native QLC That depends on the SSD controller implementation, specifically whether it proactively moves stuff from the SLC cache to the TLC/QLC area. I expect most controllers to do this, given that if they don't, the drive will quickly lose performance as it fills up. There's basically no reason not to proactively move stuff over. | |
| ▲ | kasabali 17 hours ago | parent [-] | | Cheap DRAM-less controllers usually wait until the drive is almost full to start folding. And then they'll only be folding just enough to free up some space. Most benchmark results are consistent with this behavior. |
|
|
|
|
| ▲ | traceroute66 a day ago | parent | prev | next [-] |
I assume this blog is a re-hash of the JEDEC retention standards[1]. The more interesting thing to note from those standards is that the required retention period differs between the "Client" and "Enterprise" categories. The Enterprise category only has a power-off retention requirement of 3 months. The Client category has a power-off retention requirement of 1 year. Of course there are two sides to every story... The Enterprise category standard assumes power-on active use of 24 hours/day, but the Client category is only intended for 8 hours/day. As with many things in tech.... it's up to the user to pick which side they compromise on. [1]https://files.futurememorystorage.com/proceedings/2011/20110...
| |
| ▲ | throw0101a a day ago | parent | next [-] | | > I assume this blog is a re-hash of the JDEC retention standards[1]. Specifically in JEDEC JESD218. (Write endurance in JESD219.) | |
| ▲ | Springtime 17 hours ago | parent | prev | next [-] | | In the longer JEDEC overview document[1] it explains that in the ideal 'direct' testing method retention testing is only performed after the endurance testing. Which is only after the drive has had its max spec'd TBW written to it. While if the endurance testing would exceed 1000 hours an extrapolated approach can be used to stress below the TBW but using accelerated techniques (including capping the max writable blocks to increase wear on the same areas). Which is less dramatic than the retention values seem at first and than what gets communicated in articles I've seen. Even in the OP's linked article it takes a comment to also highlight this, while the article itself only cites its own articles that contain no outside links or citations. [1] https://www.jedec.org/sites/default/files/Alvin_Cox%20%5BCom... | |
| ▲ | tcfhgj a day ago | parent | prev [-] | | With 1 year power-off retention you still lose data, so it's still a compromise on data retention |
|
|
| ▲ | leo_e 10 hours ago | parent | prev | next [-] |
| We learned this the hard way with "cold" backups stored in a literal safe. We treated NVMe drives like digital stone tablets. A year later, we tried to restore a critical snapshot and checksums failed everywhere. We now have a policy to power-cycle our cold storage drives every 6 months just to refresh the charge traps. It's terrifying how ephemeral "permanent" storage actually is. Tape is annoying to manage, but at least it doesn't leak electrons just sitting on a shelf. |
| |
| ▲ | deltoidmaximus 8 hours ago | parent | next [-] | | What does power-cycle mean in this case? I've seen this topic come up a lot on a datahoarder forum but there was never any consensus on what it would take to assure the data was refreshed, because no one really knew how it worked. People would assume anything from a short power up to simply rewriting all data as possibilities. The firmware engineer responding up thread is actually the best information I've seen on this. And it kind of confirmed my suspicions. He works on enterprise drives and they run a refresh cycle as a background process that seems to start on a random timer. I didn't see any response (yet) on how long this process takes but I think we can infer that it takes a while based on it being a background process. Probably hours or more. And this is all for an enterprise drive; I wouldn't be surprised if consumer drives have a less robust or even non-existent refresh function. I'm of the opinion that, based on the little information about some implementations of this function, the logical conclusion is you should just rewrite all of the data on your cold backups each cycle. Powering it on isn't enough, and even powering it on for a day might not do anything. As you said this is a pretty big drawback for using flash as a cold backup. |
| ▲ | ianburrell 4 hours ago | parent | prev | next [-] | | I think we should stop considering flash as permanent storage. It is temporary storage that keeps working as long as it is given power. I wish there were an archival storage medium, something long-lasting and large capacity. That would make a good complement to flash for writing backups and long-lasting data. |
| ▲ | thenthenthen 5 hours ago | parent | prev | next [-] | | What is the goldilocks area here, spinning rust? My 20 year old IDE HDDs seem to be… ok |
| ▲ | qingcharles 6 hours ago | parent | prev [-] | | I was talking to a lawyer recently and somehow this subject came up and he confessed he has been scanning and storing all his files onto SSDs for years which he keeps in a safe to meet a 7-year retention requirement. |
|
|
| ▲ | breput 16 hours ago | parent | prev | next [-] |
| The spinrite[0] user group has noticed some of these effects, even on in-service drives. The theory is that operating system files, which rarely change, are written and almost never re-written. So the charges begin to decay over time and while they might not be unreadable, reads for these blocks require additional error correction, which reduces performance. There have been a significant number of (anecdotal) reports that a full rewrite of the drive, which does put wear on the cells, greatly increases the overall performance. I haven't personally experienced this yet, but I do think a "every other year" refresh of data on SSDs makes sense. [0] https://www.grc.com/sr/spinrite.htm |
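One way to do that kind of read-and-rewrite refresh on a drive that isn't mounted is badblocks in non-destructive read-write mode, which reads each block, writes test patterns, and restores the original contents (device name is an example; don't run it against a mounted filesystem, and keep a backup regardless):

  sudo badblocks -nsv /dev/sdX

Imaging the drive off and dd'ing the image back achieves much the same thing with more manual steps.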
| |
| ▲ | 2WSSd-JzVM 13 hours ago | parent | next [-] | | If a full rewrite helps, doesn't it sound like the TRIM implementation in SSDs is buggy or insufficient? Or the internal cell wear-maps aren't detailed enough. Anyway, plenty of ways it can go wrong; SSD firmware has also had plenty of high-profile bugs, including total bricking. | |
| ▲ | londons_explore 16 hours ago | parent | prev [-] | | There are lots of other potential causes for the same effect... E.g. data structures used for page mapping getting fragmented, so that accessing a single page written a long time ago requires checking hundreds of versions of mapping tables. |
|
|
| ▲ | tzs a day ago | parent | prev | next [-] |
| What about powered SSDs that contain files that are rarely read? My desktop computer is generally powered except when there is a power failure, but among the million+ files on its SSD there are certainly some that I do not read or write for years. Does the SSD controller automatically look for used blocks that need to have their charge refreshed and do so, or do I need to periodically do something like "find / -type f -print0 | xargs -0 cat > /dev/null" to make sure every file gets read occasionally? |
| |
| ▲ | markhahn a day ago | parent | next [-] | | no, the firmware does any maintenance.
good firmware should do gradual scrub whenever it's idle.
unfortunately, there's no real way to know whether the firmware is good, or doing anything. I wonder if there's some easy way to measure power consumed by a device - to detect whether it's doing housekeeping. | | |
| ▲ | mbreese 20 hours ago | parent [-] | | Honestly this is one of my favorite things about ZFS. I know that a disk scan is performed every week/month (whatever schedule). And I also know that it has verified the contents of each block. It is very reassuring in that way. | | |
| ▲ | ziml77 18 hours ago | parent | next [-] | | You've validated that the scrub is actually running, right? I know that the lack of a default schedule for ZFS scrubs caused Linus Media Group to lose a bunch of archived videos to bitrot. | |
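Checking and scheduling it is straightforward (pool name and schedule are examples; many distros ship a systemd timer or cron job for this already):

  # when did the last scrub run, and did it find anything?
  zpool status tank
  # kick one off manually
  sudo zpool scrub tank
  # e.g. a monthly cron entry
  0 3 1 * * /usr/sbin/zpool scrub tank

The point is just to verify the schedule actually exists on your system rather than assuming it does.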
| ▲ | dotancohen 15 hours ago | parent | prev [-] | | In threads like this I keep hearing about ZFS. What would be the drawbacks of running ZFS as a home user? I keep my OS on the SSD and my files on spinning rust, if that's relevant. | | |
| ▲ | mbreese 13 hours ago | parent [-] | | 1) you have to have an OS that supports it. 2) even if your OS supports it, you may have difficulty using it for your root volume, so partitioning is probably required. 2a) in your case you may not want to use it on your boot volume which would negate the SSD benefit for you. 3) it is recommended that you have ECC RAM due to the checksums. This isn’t a hard and fast requirement, but it does make you more resilient to bitflips. 4) it isn’t the absolute fastest file system. But it’s not super slow. There are caching options for read and write that benefit from SSDs, but you’re just adding costs here to get speed increases. I only use it on servers or NASs. The extra hassles of using it on a workstation keep me from running it on a laptop. Unless you want to use FreeBSD that is… then you’d be fine (and FreeBSD is pretty usable as a daily driver). Realistically, I’m not sure how practical it is for most home users. But it is an example of what a filesystem can offer when it is well designed. | | |
| ▲ | deltoidmaximus 8 hours ago | parent | next [-] | | I'm always surprised how often ZFS is recommended when this comes up but not BTRFS which also has checksumming and scrubs and doesn't suffer some of ZFS's drawbacks of complexity and OS integration. | | |
| ▲ | mbreese 5 hours ago | parent [-] | | This is a fair point. I think that the instability of early releases of BTRFS and the (lack of) commitment of especially RedHat made me not spend too much time working with it. The lack of a RAID solution made it not feasible for my purposes for a long time, and I was already quite familiar with ZFS through working with Solaris and FreeBSD. Trust in filesystems is hard won and easily lost[0]. I also think the popularity of FreeNAS especially contributed to the popularity of ZFS. [0] I still look at XFS skeptically after a crash I suffered nearly 20 years ago. It’s not a rational fear, but it’s still there. |
| |
| ▲ | dotancohen 11 hours ago | parent | prev [-] | | I use Debian at home, with separate boot, /, and /home/ partitions. I have no idea what type of cheap memory is stuffed into the motherboard - it's certainly not homogeneous. I do prioritise resiliency over speed, or even space. Still something I should look into? Thank you! | | |
| ▲ | mbreese 9 hours ago | parent | next [-] | | The servers I use ZFS on are Debian, so it’s well supported in that way. I’m pretty sure ZFS on Debian uses dkms, so if you want to try it on a data partition, it will work. Still, unless you want to tinker with something new I can’t really recommend it. Would it work? Yes. Do you need it? No. You’re probably fine with whatever FS you currently have running. ZFS works on Debian, but it’s not first-party support (due to licensing). Do I think you’d have issues if you wanted to try it? Probably not. I’m just conservative in what I’d recommended for a daily use machine. I prioritize working over everything else, so I’d hate for you to try it and end up with a non working system. Here’s what I’d recommend instead - try it in a VM first. See if you can get it to work in a system setup like yours. See if it’s something that you like. If you want to use it on your primary machine, then you’ll be able to make a more informed decision. | |
| ▲ | chungy 9 hours ago | parent | prev [-] | | I use ZFS on both my desktop and laptop, each with Linux (in addition to a server, also running ZFS, but on FreeBSD). It's actually really not terribly hard, but I might be biased since I've been doing it since 2011 :) If you can/are willing to use UEFI, ZFSBootMenu is a Linux oriented solution that replicates the power of FreeBSD's bootloader, so you can manage snapshots and boot environments and rollback checkpoints all at boot without having to use recovery media (that used to be required when doing ZFS on Linux). Definitely worth looking into: https://zfsbootmenu.org/ |
|
|
|
|
| |
| ▲ | poolnoodle 15 hours ago | parent | prev | next [-] | | Maybe tangentially related to my Pixel phone losing photos [0]? [0] https://news.ycombinator.com/item?id=46033131 | |
| ▲ | JensenTorp a day ago | parent | prev | next [-] | | I also need an answer to this. | | |
| ▲ | dboreham a day ago | parent [-] | | It's fine. But the whole drive can turn to dust at any time, of course. | | |
| |
| ▲ | seg_lol a day ago | parent | prev [-] | | You should absolutely be doing a full block read of your disk, e.g. dd if=/dev/disk of=/dev/null, every couple of weeks | |
|
|
| ▲ | kevstev 20 hours ago | parent | prev | next [-] |
| Is there a real source that confirms this with data? I generally like xda, but the quality of their articles is uneven and they tend towards clickbait headlines that try to shock/surprise you with thin content underneath. There has been a string of "Here is the one piece of software you didn't know you needed for your NAS" articles where it turns out to be something extremely popular like Home Assistant. This article just seems to link to a series of other xda articles with no primary source. I wouldn't ever trust any single piece of hardware to store my data forever, but this feels like clickbait. At one point they even state "...but you shouldn't really worry about it..." |
|
| ▲ | testartr a day ago | parent | prev | next [-] |
| what is the exact protocol to "recharge" an ssd which was offline for months? do I just plug it in and leave the computer on for a few minutes? does it need to stay on for hours? do I need to run a special command or TRIM it? |
| |
| ▲ | PaulKeeble a day ago | parent | next [-] | | We really don't know. One thing I wish some of these sites would do is actually test how long it takes for the drives to decay, and then retest after they have been left powered for, say, 10 minutes to an hour, read completely, written to a bit, etc., to see if they can determine what the likely requirement is. The problem is that the test would take years, be out of date by the time it's released, and new controllers would be out by then with potentially different needs/algorithms. | |
| ▲ | unsnap_biceps a day ago | parent [-] | | There was one guy who tested this https://www.tomshardware.com/pc-components/storage/unpowered... The data on this SSD, which hadn't been used or powered up for two years, was 100% good on initial inspection. All the data hashes verified, but it was noted that the verification time took a smidgen longer than two years previously. HD Sentinel tests also showed good, consistent performance for a SATA SSD.
Digging deeper, all isn't well, though. Firing up Crystal Disk Info, HTWingNut noted that this SSD had a Hardware ECC Recovered value of over 400. In other words, the disk's error correction had to step in to fix hundreds of data-based parity bits.
...
As the worn SSD's data was being verified, there were already signs of performance degradation. The hashing audit eventually revealed that four files were corrupt (hash not matching). Looking at the elapsed time, it was observed that this operation astonishingly took over 4x longer, up from 10 minutes and 3 seconds to 42 minutes and 43 seconds.
Further investigations in HD Sentinel showed that three out of 10,000 sectors were bad and performance was 'spiky.' Returning to Crystal Disk Info, things look even worse. HTWingNut notes that the uncorrectable sectors count went from 0 to 12 on this drive, and the hardware ECC recovered value went from 11,745 before to 201,273 after tests on the day.
| | |
| ▲ | zozbot234 a day ago | parent [-] | | Note that the SSD that showed corrupted files was the one that had been worn out well beyond the manufacturer's max TBW rating (4× the TBW or so). There was a difference of two-to-three orders of magnitude in the ECC count between the "fresh" and the worn-out SSD; I'd call that very significant. It will be interesting to see if there's any update for late 2025. |
|
| |
| ▲ | PunchyHamster a day ago | parent | prev | next [-] | | I'd imagine a full read of the whole device might trigger any self-preservation/refresh logic, but I'd also imagine it's heavily dependent on the manufacturer and firmware | |
| ▲ | reflexe 11 hours ago | parent | prev | next [-] | | I think that reading all of the information from the SSD should "recharge" it in most cases. The SSD controller should detect any bit flips and be able to correct them. However, this is an implementation detail of the SSD firmware. For Linux UBI devices, this will suffice. | |
| ▲ | nixpulvis a day ago | parent | prev | next [-] | | I too wonder about this. I'd love to see someone build a simulated "fast decay" SSD which can show how various firmware actually behaves. | |
| ▲ | nrhrjrjrjtntbt 21 hours ago | parent | prev | next [-] | | Back it up. 1 2 3. | |
| ▲ | tensility 11 hours ago | parent | prev | next [-] | | Read off all live data on the drive. This should cause the nand management firmware to detect degrading cells via ECC and move the data in order to refresh isolated cell voltage levels. | |
| ▲ | antisthenes a day ago | parent | prev | next [-] | | I would run something like CHKDSK, or write a script to calculate a hash of every file on disk. No idea if that's enough, but it seems like a reasonable place to start. | |
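A rough sketch of the hash-every-file idea, assuming GNU coreutils and a placeholder mount point:
# build a manifest once, while the data is known good (this also forces a full read of every file)
find /mnt/archive -type f -exec sha256sum {} + > ~/archive-manifest.sha256
# on each later check, re-read every file and print only the mismatches
sha256sum --check --quiet ~/archive-manifest.sha256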
| ▲ | beefnugs 17 hours ago | parent | prev [-] | | If you are using a backup program like "kopia" there are special commands to recheck all hash blocks. You just can't trust the hardware to know how to do this; you need backup software with multiple backup locations that knows how to recheck integrity |
|
|
| ▲ | lizknope 8 hours ago | parent | prev | next [-] |
| I've been hearing about this for years and it makes sense theoretically but has anyone ever actually seen it? What are the errors reported? Or does the drive return bad data but reports no error? There was a guy on reddit that took about 20 cheap USB flash drives and checked 1 every 6 months. I think after 3 years nothing was bad yet. I've copied OS ISO images to USB flash drives and I know they sat for at least 2 years unused. Then I used it to install the OS and it worked perfectly fine with no errors reported. I still have 3 copies of all data and 1 of those copies is offsite but this scare about SSDs losing data is something that I've never actually seen. |
|
| ▲ | cobertos 19 hours ago | parent | prev | next [-] |
| How does one keep an SSD powered without actually mounting it or working with it? I have backups on SSDs and old drives I would like to keep somewhat live. Should I pop them in an old server? Is there an appliance that just supplies power? Is there a self-hosted tool for monitoring disks that I have zero day-to-day use for and don't want connected to anything, but want to keep "live"? |
| |
| ▲ | nirui 19 hours ago | parent | next [-] | | The simplest trick is just don't use an SSD for long-term backup; use a normal magnetic hard drive instead, those things last way _longer_ (but not forever, even on a human timescale). I have an HDD that hadn't been powered on for 17+ years. I dug it out recently to re-establish some memories, and discovered that it still reads. But of course you need to take care of them well: put them in an anti-static bag or something similar, and make sure the storage environment is dry. It's not perfect, but at least you don't have to struggle that much maintaining SSDs. | |
| ▲ | ziml77 18 hours ago | parent [-] | | I'm seeing threads where even for HDDs people are recommending you mount them yearly to do a full check of the data and to ensure that everything keeps moving freely. | | |
| ▲ | procaryote 6 hours ago | parent | next [-] | | regardless of whether it actually helps the longevity, it's probably a pretty good way to notice when you need another copy of the data. If you have your precious data on three hard drives and one starts throwing errors during your yearly check, you can get a replacement in good time | |
| ▲ | lofaszvanitt 17 hours ago | parent | prev [-] | | Superstition.... | | |
| ▲ | akimbostrawman 14 hours ago | parent [-] | | While unlikely, solar flares or other environmental conditions could damage data over a long time. |
|
|
| |
| ▲ | Brybry 18 hours ago | parent | prev [-] | | There are cheap USB<->PATA/SATA or USB<->NVME adapters out there that usually also come with 120v AC -> 3/5/12v DC PATA/SATA power supplies (and if the SATA SSDs only need 5v then some adapters might work with USB alone). I use them for working with old unmounted hard drives or for cloning drives for family members before swapping them. But they would probably work for just supplying power too? The one I use the most is an 18-year-old Rosewill RCW-608. I don't know if the firmware/controller would do what it needs to do with only power connected. I wonder if there's some way to use SMART value tracking to tell? Like if power-on hours keeps incrementing, surely the controller was doing the things it needs to do? |
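One way to watch that, assuming smartmontools and a SATA drive behind a bridge that supports SMART passthrough (the device name is a placeholder):
smartctl -d sat -A /dev/sdX | grep -i power_on
If Power_On_Hours keeps climbing across sessions, the controller is at least awake and running its housekeeping; whether that housekeeping includes refreshing weak cells is still firmware-specific.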
|
|
| ▲ | cosmic_cheese 21 hours ago | parent | prev | next [-] |
| Is there any type of flash-based storage (preferably accessible to end users) that focuses on long-term data retention? If not, that feels like a substantial hole in the market. Non-flash durable storage tends to be annoying or impractical for day-to-day use. I want to be able to find a 25-year-old SD card hiding in some crevice and unearth an unintentional time capsule, much like how one can pick up 20+ year old MiniDiscs and be able to play the last thing their former owners recorded to them perfectly. |
| |
| ▲ | 55873445216111 19 hours ago | parent | next [-] | | Yes, NOR Flash guarantees 20 years data retention. It's about $30/GiB. | |
| ▲ | tensility 11 hours ago | parent | prev [-] | | There are also expensive high-grade SLC NAND flash chips available that offer significantly higher retention time than the cheaper commodity channel TLC NAND (i.e. 10 years versus 3 years). In general, though, whether NAND or NOR, the fundamental way that flash works is by creating an isolated voltage charge for each bit (or several bits, for TLC), making it effectively a vast grid of very tiny batteries. Like all batteries, no matter how well-stored, they will eventually leak energy to the point where the voltage levels change enough to matter. Further, it's not enough to simply make power available to this grid since the refresh of the cells requires active management by a nand controller chip and its associated software stack. |
|
|
| ▲ | m0dest 20 hours ago | parent | prev | next [-] |
| So, product idea: A powered "cold storage box" for M.2 SSDs. 2 to 8 M.2 slots. Periodically, an internal computer connects one of the slots, reads every byte, waits for some period of time, then powers off. Maybe shows a little green light next to each drive when the last read was successful. Could be battery-powered. |
| |
| ▲ | boznz 19 hours ago | parent | next [-] | | here's a better idea, buy a mechanical hard disk. | |
| ▲ | loeg 20 hours ago | parent | prev | next [-] | | How much are you saving vs an always-on ARM board with M.2 slots? Is it worth the pennies? | |
| ▲ | throwaway894345 20 hours ago | parent | prev [-] | | > Could be battery-powered. How often does it need to run? If it could be solar powered you could probably avoid a whole bunch of complexity per unit longevity. |
|
|
| ▲ | brian-armstrong a day ago | parent | prev | next [-] |
| Powering the SSD on isn't enough. You need to read every bit occasionally in order to recharge the cell. If you have them in a NAS, then using a monthly full volume check is probably sufficient. |
| |
| ▲ | derkades a day ago | parent | next [-] | | Isn't that the SSD controller's job? | | |
| ▲ | brian-armstrong a day ago | parent | next [-] | | It would surely depend on the SSD and the firmware it's running. I don't think you can entirely count on it. Even if it were working perfectly, and your strategy was to power the SSD on periodically to refresh the cells, how would you know when it had finished? | |
| ▲ | ethin a day ago | parent [-] | | NVMe has read recovery levels (RRLs) and two different self-test modes (short and long) but what both of those modes do is entirely up to the manufacturer. So I'd think the only way to actually do this is to have host software do it, no? Or would even that not be enough? I mean, in theory the firmware could return anything to the host but... That feels too much like a conspiracy to me? |
| |
| ▲ | seg_lol a day ago | parent | prev [-] | | Do you know any firmware engineers? |
| |
| ▲ | Izkata a day ago | parent | prev [-] | | Huh. I wonder if this is why I'd sometimes get random corruption on my laptop's SSD. I'd reboot after a while and fsck would find issues in random files I haven't touched in a long time. | | |
| ▲ | gruez a day ago | parent | next [-] | | If you're getting random corruption like that, you should replace the SSD. SSDs (and also hard drives) already have built-in ECC, so if you're getting errors on top, it's not just random cosmic rays. It's your SSD being extra broken, and that doesn't bode too well for the health of the SSD as a whole. | |
| ▲ | Izkata a day ago | parent [-] | | I bought a replacement but never bothered swapping it. The weird thing is the random corruption stopped happening a few years ago (confirmed against old backups, so it's not like I'm just not noticing). |
| |
| ▲ | brian-armstrong a day ago | parent | prev | next [-] | | It's quite possible. Some SSDs are worse offenders for this than others. I have some Samsung 870 EVOs that lost data the way you described. Samsung knew about the issue and quietly swept it under the rug with a firmware update, but once the data was lost, it was gone for good. | | |
| ▲ | arprocter 7 hours ago | parent | next [-] | | I got bit by this. RMA'd the bad drive and the replacement hasn't had problems (iirc it was made in a different country to the faulty one) Long thread here: https://www.techpowerup.com/forums/threads/samsung-870-evo-b... | |
| ▲ | PunchyHamster a day ago | parent | prev | next [-] | | Huh, I thought I got some faulty one, mine died shortly after warranty ended (and had a bunch of media errors before that) | |
| ▲ | ethin a day ago | parent | prev [-] | | I ran into this firmware bug with the two drives in my computer. They randomly failed after a while -- and by "a while" I mean less than a year of usage. Took two replacements before I finally realized that I should check for an fw update |
| |
| ▲ | formerly_proven a day ago | parent | prev [-] | | Unless your setup is a very odd Linux box, fsck will never check the consistency of file contents. | | |
| ▲ | Izkata a day ago | parent | next [-] | | It found problems in the tree - lost files, wrong node counts, other stuff - which led to me finding files that didn't match previous backups (and when opened were obviously corrupted, like the bottom half of an image being just noise). Once I knew this was a problem, I also caught files that couldn't be read (IOError), which fsck would then delete on the next run. I may not have noticed had fsck not alerted me that something was wrong. | |
| ▲ | suspended_state a day ago | parent | prev | next [-] | | But metadata is data too, right? I guess the next question is, would it be possible for parts of the FS metadata to remain untouched for a time long enough for the SSD data corruption process to occur. | |
| ▲ | giantrobot 20 hours ago | parent | prev [-] | | A ZFS scrub (default scheduled monthly) will do it. |
|
|
|
|
| ▲ | sevensor a day ago | parent | prev | next [-] |
| Flash is programmed by increasing the probability that electrons will tunnel onto the floating gate and erased by increasing the probability they will tunnel back off. Those probabilities are never zero. Multiply that by time and the number of cells, and the probability you don’t end up with bit errors gets quite low. The difference between slc and mlc is just that mlc has four different program voltages instead of two, so reading back the data you have to distinguish between charge levels that are closer together. Same basic cell design. Honestly I can’t quite believe mlc works at all, let alone qlc. I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it. |
| |
| ▲ | 55873445216111 a day ago | parent | next [-] | | All the big 3D NAND makers have already switched from floating gate to charge trapping. Basically the same as what you describe, but the electrons get stuck in a non-conductive region instead of on an insulated gate. | |
| ▲ | GCUMstlyHarmls 14 hours ago | parent | next [-] | | > The grand artificers of the Unseen University—those who insist on calling themselves “makers” despite mostly making trouble—gave up on the old Floating Gate enchantments ages ago. Too fiddly, too explosive, and prone to drifting off when no one was looking. Now they use Charge Trapping. Same idea in principle: coax a few errant lightning-spirits into doing something useful. But instead of perching them on a precariously insulated sigil like nervous pigeons, the wizards now shove the little blighters into a quiet, rubbery pocket of reality where they can’t conduct, escape, or start a small civil war. Everyone agrees it’s much safer, except for the pocket. ~ Terry Pratchett's Disklessworld, book 8. (Re: Any sufficiently advanced technology is indistinguishable from magic) | |
| ▲ | sevensor 11 hours ago | parent | prev [-] | | Cool! I’ve been out of the industry for over a decade now, so my process knowledge is a little quaint at this point. |
| |
| ▲ | Someone a day ago | parent | prev | next [-] | | > I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it. You can run an error-correcting code on top of the regular blocks of memory, storing, for example (really an example; I don’t know how large the ‘blocks’ that you can erase are in flash memory), 4096 bits in every 8192 bits of memory, and recovering those 4096 bits from each block of 8192 bits that you read in the disk driver. I think that would be better than a simple “map low levels to 0, high levels to 1” scheme. | |
| ▲ | userbinator a day ago | parent | prev | next [-] | | I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it. There is a way to turn QLC into SLC: https://news.ycombinator.com/item?id=40405578 | | | |
| ▲ | bobmcnamara a day ago | parent | prev | next [-] | | > I do wonder why there's no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it. Loads of drives do this (or SLC) internally. Though it would be handy if a physical format could change the provisioning at the kernel-accessible layer. | |
| ▲ | em500 a day ago | parent | prev | next [-] | | > Honestly I can’t quite believe mlc works at all, let alone qlc. I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it. Manufacturers often do sell such pMLC or pSLC (p = pseudo) cells as "high endurance" flash. | |
| ▲ | testartr a day ago | parent | prev | next [-] | | the market mostly demands higher capacity. tlc/qlc works just fine; it's really difficult to consume the erase cycles unless you really are writing to the disk 24/7 at hundreds of megabytes a second | |
| ▲ | tcfhgj a day ago | parent [-] | | I have a MLC SSD with TBW/GB much higher than the specified TBW/GB guarantee of usual qlc SSDs |
| |
| ▲ | tensility 10 hours ago | parent | prev [-] | | At the physical device layer (i.e. what a nand controller vendor programs to), a nand flash device that supports ONFI 2 SLC Mode can be configured to use (some of) its blocks in SLC mode rather than MLC/TLC/QLC/etc. This allows one to divide the array into high-reliability versus high-capacity regions. |
|
|
| ▲ | buildbot 5 hours ago | parent | prev | next [-] |
| Making sure all your important data can be read, and is checked as it's read, is basically the sole purpose of a ZFS scrub. At least a monthly scrub seems like a very good idea for SSDs. |
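For anyone not already doing it, the whole thing is two commands (the pool name is a placeholder, and many distros already ship a monthly scrub cron job or timer with their ZFS packages):
zpool scrub tank          # walks every allocated block and, given redundancy, repairs anything whose checksum fails
zpool status -v tank      # shows scrub progress and lists any files with unrecoverable errors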
|
| ▲ | reflexe 11 hours ago | parent | prev | next [-] |
| Also, FYI for the one person here who uses raw NAND flash: run ubihealthd (https://lwn.net/Articles/663751/). It will trigger reads in random areas of the flash and try to correct any errors found. Without it, the same issue as in the original article will happen (even if the device is powered on): areas of the NAND that were not read for a long time will accumulate more and more errors, eventually becoming unrecoverable. |
|
| ▲ | mghackerlady 9 hours ago | parent | prev | next [-] |
| This is a really good case for better file systems with built-in error correction and self-healing. On Linux there's btrfs, which kinda does this, and some support for ZFS. In BSD land we have ZFS and HAMMER2. Does NT or the Mac have anything like this? I think the Mac might have some unofficial ZFS support, but I don't know what state that's in |
| |
| ▲ | johnebgd 9 hours ago | parent [-] | | The self-healing only works if you have redundant drives. If all SSDs are like this, maybe they need alarms that sound as an onboard battery gets low, to ensure someone plugs them in. |
|
|
| ▲ | jcynix a day ago | parent | prev | next [-] |
| Hmm, so what about these modern high-density hard drives which store track parameters for their servos in on-board flash (aka OptiNAND)? Do we get "spinning rust" which might lose the information about where exactly it stored the data? https://blog.westerndigital.com/optinand-explained/ |
| |
| ▲ | userbinator 16 hours ago | parent | next [-] | | I think you're talking about two different things; "adaptives" are usually stored in EEPROM or the MCU's NOR flash, which is basically better-than-SLC levels of reliability, especially as they're written once and treated as ROM after that. OptiNAND is a "SSHD" and thus has the same concerns with retention as an SSD. https://en.wikipedia.org/wiki/Hybrid_drive | |
| ▲ | binaryturtle a day ago | parent | prev | next [-] | | I wonder too. I don't trust SSDs/ flash for my archives, hence I'm stuck on max. 18TB drives atm. | |
| ▲ | markhahn a day ago | parent | prev [-] | | shouldn't really be a problem, since the capacity required for this use is so low. the real issue here is QLC in which the flash cell's margins are being squeezed enthusiastically... |
|
|
| ▲ | Wowfunhappy 21 hours ago | parent | prev | next [-] |
| > But, most people don't need to worry about it. [...] You should always have a backup anyway. [...] Backing up your data is the simplest strategy to counteract the limitations of storage media. Having multiple copies of your data on different types of storage ensures that any unexpected incidents protect your data from vanishing forever. This is exactly what the 3-2-1 backup rule talks about: 3 copies of data on at least 2 different storage media, with 1 copy stored off-site. Um. Backups seem like exactly why I might have data on an unpowered SSD. I use HDDs right now because they're cheaper, but that might not be true some day. Also, I would expect someone less technically inclined than I am to just use whatever they have lying around, which may well be an SSD. |
| |
| ▲ | jollyllama 9 hours ago | parent [-] | | Yeah. That part was so out of touch, I have to wonder who they're carrying water for. Most people backup their data? What planet does the author live on? |
|
|
| ▲ | groestl 15 hours ago | parent | prev | next [-] |
| A strong belief of mine: there is no storage, only communication. I've held that thought since I first heard of SRAM, and I think it applies to everything: knowledge, technology, societies, our universe in general... |
| |
| ▲ | rafterydj 9 hours ago | parent [-] | | Interesting statement. I suppose you could argue that long term archival storage is communication with a long tailed, sometimes ill-defined window for receiving the message. Few people write books they don't want anyone to read. |
|
|
| ▲ | EVAN1098 12 hours ago | parent | prev | next [-] |
| It’s true that SSDs can lose data over time without power, but it usually takes a long period and depends on storage temperature and drive quality. For normal users who power their devices regularly, it’s not a big concern. Still, it’s a good reminder to keep backups if the data really matters. |
|
| ▲ | lxgr 15 hours ago | parent | prev | next [-] |
| As far as I understand, this even applies to some seemingly read-only storage such as game cartridges, e.g. those for the Nintendo Switch. Flash storage is apparently cheaper (especially for smaller production runs) and/or higher density these days, so these cartridges just use that and make it appear ROM-like via a controller. |
|
| ▲ | hosh 13 hours ago | parent | prev | next [-] |
| Does this also apply to thumbdrives? |
| |
| ▲ | pja 13 hours ago | parent [-] | | Yes, it applies to all flash memory. Timescales will vary depending on the fabrication type, ambient temperature etc etc. |
|
|
| ▲ | behringer 20 hours ago | parent | prev | next [-] |
| The article implies this is not a concern for "regular" people. That is absolutely false. How many people get their family photos when they finally decide to recycle that 15-year-old PC in the basement? How many people have a device that they may only power up every few years, like on vacation? In fact, I have a device that I've only used on rare occasions these days (an arcade machine) that I now suspect I'll have to reinstall, since it's been 2 or 3 years since I last used it. This is a pretty big deal that they don't put on the box. |
|
| ▲ | nubinetwork a day ago | parent | prev | next [-] |
| I've got some old SSDs just to test this myself. The old 256GB Corsairs I tested previously were fine after a year and a half, but I might have misplaced them... (they only had 10% write life left, so no huge loss). The 512GB Samsungs on my desk should be getting pretty ripe soon though, I'll have to check those too. |
|
| ▲ | 2rsf 15 hours ago | parent | prev | next [-] |
| I was in a team that wrote the firmware to handle that 15 years ago, with a focus on automotive implementations where temperatures might be high and access to the data is harder. |
|
| ▲ | BaardFigur a day ago | parent | prev | next [-] |
| I don't use my drive much. I still boot it up and write some data, just not to the long-term one. Am I at risk? |
| |
| ▲ | zozbot234 a day ago | parent [-] | | AIUI, informal tests have demonstrated quite a bit of data corruption in Flash drives that are literally so worn out that they might as well be about to fail altogether - well beyond any manufacturer's actual TBW specs - but not otherwise, least of all in new drives that are only written once over for the test. It seems that if you don't wear out your drive all that much you'll have far less to worry about. | | |
|
|
| ▲ | storus a day ago | parent | prev | next [-] |
| I thought that was an ancient issue with the Samsung 740? I had that one, and it was slowly losing speed when unpowered due to an accumulation of errors; rewriting every sector of the whole drive once made it work fine for a year. |
|
| ▲ | spoaceman7777 18 hours ago | parent | prev | next [-] |
| A solution I haven't yet seen in this thread is to buy multiple drives, and sacrifice the capacity of one of those drives to maintain single parity via a raidz1 configuration with zfs. (raidz2 or raidz3 are likely better, as you can guard against full drive failures as well, but you'd need to increase the number of drives' capacity that you're using for parity.) zfs in these filesystem-specific parity-raid implementations also auto-repairs corrupted data whenever read, and the scrub utility provides an additional tool for recognizing and correcting such issues proactively. This applies to both HDDs and SSDs. So, a good option for just about any archival use case. |
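A minimal sketch of that layout (pool and device names are placeholders):
zpool create archive raidz1 /dev/sda /dev/sdb /dev/sdc              # one drive's worth of capacity becomes parity
zpool create archive raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd     # or: two drives of parity, so a whole-drive failure plus scattered bit rot is still recoverable
Either way, a periodic zpool scrub is what actually walks the data and rewrites anything whose checksum no longer matches.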
| |
| ▲ | fsckboy 18 hours ago | parent | next [-] | | this is about drives that are not plugged in. are you saying parity would let you simply detect that the data had gone bad? increasing the number of drives would increase the decay rate, more possibilities for a first one to expire. if your parity drive expired first, you would think you had errors when you didn't yet. | | |
| ▲ | spoaceman7777 17 hours ago | parent [-] | | No, I'm talking about parity raid (raidz1/z2/z3, or, more familiarly, raid 5 and 6). In a raidz1, you save one of the n drives' worth of space to store parity data. As long as you don't lose that same piece of data on more than one drive, you can reconstruct it when it's brought back online. And, since the odds of losing the same piece of data on more than one drive is much lower than the odds of losing any piece of data at all, it's safer. Upping it to two drives worth of data, and you can even suffer a complete drive failure, in addition to sporadic data loss. |
| |
| ▲ | fweimer 18 hours ago | parent | prev [-] | | How would this work? Wouldn't all these drives start losing data at roughly the same time? | |
| ▲ | spoaceman7777 17 hours ago | parent [-] | | Yes, but different pieces of data. The stored parity allows you to reconstruct any piece of data as long as it is only lost on one of the drives (in the single parity scenario). The odds of losing the same piece of data on multiple drives is much lower than losing any piece of data at all. | | |
| ▲ | danparsonson 17 hours ago | parent [-] | | But the data is not disappearing, it's corrupted - so how do you know which bits are good and which are not? |
|
|
|
|
| ▲ | paulkrush a day ago | parent | prev | next [-] |
| I had to search around and feel like a dork for not knowing this. I have my data backed up, but I keep the SSDs because it's nice to have the OS running like it was... I guess I need to be cloning the drives to disk images and storing those on spinning rust. |
| |
| ▲ | pluralmonad a day ago | parent | next [-] | | I learned this when both my old laptops would no longer boot after an extended time powered off (a couple of years). They were both stored in working condition, and later both had SSDs that were totally dead. | |
| ▲ | justin66 a day ago | parent [-] | | Were the SSDs toasted, or were you able to reinstall to them? | | |
| |
| ▲ | dpoloncsak a day ago | parent | prev | next [-] | | I could be wrong, but I believe the general consensus is along the lines of "SSDs for in-use data, since they're quicker and want to be powered on often; HDDs for long-term storage, as they don't degrade when not in use nearly as fast as SSDs do." | |
| ▲ | PunchyHamster a day ago | parent | next [-] | | I'd imagine HDDs also don't like not spinning for years (as mechanical elements generally like to be used from time to time). But at least the platters themselves stay intact | |
| ▲ | joezydeco a day ago | parent | prev [-] | | I've been going through a stack of external USB drives with laptop disks in them. They're all failing in some form or another. I'm going to have to migrate it all to a NAS with server-class drives, I guess | |
| ▲ | Yokolos a day ago | parent [-] | | At the very least, you can usually still get the data off of them. Most SSDs I've encountered with defects failed catastrophically, rendering the data completely inaccessible. |
|
| |
| ▲ | gosub100 a day ago | parent | prev [-] | | or you could power them on 1-2x /year. | | |
| ▲ | ggm a day ago | parent [-] | | Power them on and run something to exercise the read function over every bit. That's why a ZFS filesystem integrity check/scrub is the useful model. I'm unsure if dd if=/the/disk of=/dev/null does the read function. | |
| ▲ | fragmede a day ago | parent [-] | | why would it not? it's a low level tool to do exactly that. you could "of" it to somewhere else if you're worried it's not. I like to | hexdump -C, on an xterm set to a green font on a black background for a real matrix movie kind of feel. |
|
|
|
|
| ▲ | stuxnet79 18 hours ago | parent | prev | next [-] |
| Welp, new fear unlocked. I need to move all my backups to ZFS sooner rather than later ... |
|
| ▲ | fsckboy 18 hours ago | parent | prev | next [-] |
| >unpowered SSDs slowly lose data so it's as if the data... rusts, a little bit at a time |
| |
|
| ▲ | pmarreck 20 hours ago | parent | prev | next [-] |
| shameless plug of my anti-bitrot tool, which I am actually enhancing with a --daemon mode currently https://github.com/pmarreck/bitrot_guard |
|
| ▲ | fuzztester 13 hours ago | parent | prev | next [-] |
| Does the same apply to USB thumb drives, i.e. do they lose their data if not plugged in? |
| |
| ▲ | zozbot234 12 hours ago | parent [-] | | USB thumb drives (and SD cards) are bottom-of-the-barrel flash, so yes, absolutely. |
|
|
| ▲ | coppsilgold 21 hours ago | parent | prev | next [-] |
| My scrub script: dd if=$1 of=/dev/null iflag=direct bs=16M status="progress"
smartctl -a $1
If someone wants to properly study SSD data retention, they could encrypt the drive using plain dm-crypt, fill the encrypted volume with zeroes, and check at some later point whether any non-zero blocks have appeared. This is an accessible way (no programming involved) to write effectively random data to the SSD and verify it without actually saving the whole thing - just the key. It also ensures maximum variance in the charge levels of all the cells, and prevents the SSD from playing tricks such as compression. |
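A sketch of that experiment (plain mode has no on-disk header, so the cipher options and key file must be kept to reproduce the mapping; this wipes the drive, and the device name is a placeholder):
head -c 64 /dev/urandom > key.bin
cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 --key-file key.bin /dev/sdX rettest
dd if=/dev/zero of=/dev/mapper/rettest bs=16M status=progress   # runs until the device is full
sync; cryptsetup close rettest
# months later: open again with the same options and key, then count bytes that no longer decrypt to zero
cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 --key-file key.bin /dev/sdX rettest
cmp -l /dev/mapper/rettest /dev/zero | wc -l   # 0 means every block came back intact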
|
| ▲ | tensility 11 hours ago | parent | prev | next [-] |
| Good advice; however, past experience suggests that conventional magnetic hard drives suffer problems of stiction when left in cold storage for too long. I wouldn't trust either technology for long-term archival purposes. |
|
| ▲ | dboreham a day ago | parent | prev | next [-] |
| Quick note to not store any valuable data on a single drive. And when you store it on two drives, don't use the same kind of drive. (Speaking from bitter experience using spinning drives in servers that had a firmware bug where they all died at the same number of seconds of power-on time.) |
| |
| ▲ | burnt-resistor 21 hours ago | parent [-] | | That's kicking the can down the road for double the cost. Only a backup on spinning rust is actually a backup. Furthermore, replication isn't a backup. |
|
|
| ▲ | canadiantim a day ago | parent | prev | next [-] |
| What is the best way to store data for a long time then? |
| |
| ▲ | markhahn a day ago | parent | next [-] | | all the major players say "tape".
(but that's partly for practical issues like scaling and history) | | |
| ▲ | octorian 17 hours ago | parent [-] | | And yet nobody wants to actually offer a tape-based solution that's practical, easy to get, holds enough data, and doesn't cost a blithering fortune. Even if you are willing to spend that small fortune, good luck actually getting all the parts together without enterprise contracts. |
| |
| ▲ | schneehertz 20 hours ago | parent | prev [-] | | Depending on your time requirements, carving it into a stone tablet is generally a good choice |
|
|
| ▲ | KPGv2 2 hours ago | parent | prev | next [-] |
| > SSDs have all but replaced hard drives when it comes to primary storage. Really? I could have sworn that primary storage was the one place they weren't going to replace HDDs. Aren't they more of a thing for cache? I've aged and busied myself beyond keeping track of this stuff anymore. I'm going to buy a packable NAS in the next couple months and be done with it. Hopefully ZFS since apparently that's the bee's knees and I won't have to think about RAIDs anymore. |
|
| ▲ | roschdal 16 hours ago | parent | prev | next [-] |
| Shitty Storage Device |
|
| ▲ | burnt-resistor 21 hours ago | parent | prev | next [-] |
| Not having a (verified) backup is driving without a seatbelt. |
|
| ▲ | bossyTeacher a day ago | parent | prev | next [-] |
| This is why I would rather pay someone a couple of dollars per year to handle all this for me. If need be pay two providers to have a backup. |
| |
| ▲ | loloquwowndueo a day ago | parent | next [-] | | Who do you pay for this? (To rephrase : which cloud storage vendors do you use?) interested in the $2/month price point :) | | |
| ▲ | Terr_ a day ago | parent | next [-] | | I assume "couple of" was figurative, to indicate the cost is substantially less than managing your own bank of SSDs and ensuring it is periodically powered etc. [Edit: LOL, I see someone else posted literally the same example within the same minute. Funny coincidences.] That said, they could also be storing relatively small amounts. For example, I back up to Backblaze B2, advertised at $6/TB/month, so ~300 GB at rest will be a "couple" bucks a month. | |
| ▲ | Dylan16807 18 hours ago | parent [-] | | > managing your own bank of SSDs If I have enough data to need multiple SSDs (more than 8TB) then the cloud cost is not going to be substantially less. B2 is going to be above $500 a year. I can manage to plug a backup SSD into a phone charger a couple times a year, or leave it plugged into one when it's not in my computer being updated. Even if I rate that handful of minutes of labor per year at a gratuitous $100, I'm still saving money well before the 18 month mark. |
| |
| ▲ | ggm a day ago | parent | prev | next [-] | | tell me about this $2/week filestore option. I'm interested. | | |
| ▲ | 867-5309 a day ago | parent [-] | | continuing the bizarre trend, I'm here for the $2/day deal | | |
| ▲ | bigstrat2003 a day ago | parent | next [-] | | That would be Tarsnap. Cool product and the owner is a good dude, but good Lord is it expensive. I would love to support him but just can't afford it. | |
| ▲ | topato a day ago | parent | prev [-] | | I'D love to be paying $2/minute! |
|
| |
| ▲ | PunchyHamster a day ago | parent | prev [-] | | Backblaze B2 is $6/TB/mo, so if you have around 300GB... stuff like restic or kopia backs up nicely to it | |
| ▲ | Terr_ a day ago | parent | next [-] | | Recently started fiddling with restic and B2, it worked fairly seamlessly once I stopped trying too hard to be fancy with permissions and capabilities (cap_dac_read_search). There were some conflicts trying to have both "the way that works interactively" [0] and "the way that works well with systemd" [AmbientCapabilities=]. One concern I have is that B2's download costs mean verifying remote snapshots could get expensive. I suppose I could use `restic check --read-data-subset X` to do a random spot-check of smaller portions of the data, but I'm not sure how valuable that would be. I like how it resembles LUKS encryption, where I can have one key for the automated backup process, and a separate memorize-only passphrase for if things go Very Very Wrong. [0] https://restic.readthedocs.io/en/latest/080_examples.html#ba... | |
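For the spot check, the sort of thing I have in mind (the repo URL is a placeholder, and credentials are assumed to be in the environment):
restic -r b2:my-bucket:backups check --read-data-subset=5%     # download and verify a random 5% of pack files
restic -r b2:my-bucket:backups check --read-data-subset=1/12   # or verify group 1 of 12; bump the numerator each run to cover everything over a year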
| ▲ | markhahn 21 hours ago | parent | prev [-] | | $72/yr is somewhere around 3x purchase price (per year). BB seems to be pretty smart about managing their exposure, infrastructure overhead, etc. |
|
| |
| ▲ | djtango 21 hours ago | parent | prev [-] | | Do we actually know the clouds will do this? S3 is just about coming up on its 20th anniversary. That's long enough to experience data rot to a small degree, but realistically, what proportion of users have archived things away for 10+ years and then audited the fidelity of their data on retrieval after fetching it from Glacier? |
|
|
| ▲ | lofaszvanitt 17 hours ago | parent | prev | next [-] |
| xda-developers..... reliable source. Nonetheless, time to ask the SSD makers what their take on this is. They have support; time to write to them. |
|
| ▲ | yapyap a day ago | parent | prev | next [-] |
| good to know, but apart from some edge cases this doesn't matter that much |
|
| ▲ | fnord77 15 hours ago | parent | prev | next [-] |
| well poop, I was just about to buy an 8TB SSD to use as a backup device |
|
| ▲ | TacticalCoder a day ago | parent | prev | next [-] |
| So basically if you like to put SSDs on shelves (for offline backups), you should read them in full once in a while? I religiously rotate my offline SSDs and HDDs (I store backups on both SSDs and HDDs): something like four at home (offline onsite) and two (one SSD, one HDD) in a safe at the bank (offline offsite). Every week or so I rsync (a bit more advanced than rsync, in that I wrap rsync in a script that detects potential bitrot using a combination of an rsync "dry-run" and known-good cryptographic checksums before doing the actual rsync [1]) to the offline disks at home, and then every month or so I rotate by swapping the SSD and HDD at the bank with those at home. Maybe I should add to the process, for SSDs, once every six months: ... $ dd if=/dev/sda | xxhsum
I could easily automate that in my backup'ing script by adding a file lastknowddtoxxhash.txt containing the date of the last full dd-to-xxhsum, verifying that, and then asking, if an SSD is detected (I take it that on an HDD it doesn't matter), whether a full read-to-hash should be done. Note that I'm already using random sampling on files containing checksums in their name, so I'm already verifying x% of the files anyway. So I'd probably detect a fading SSD quite easily. Additionally, I've also got a server with ZFS mirroring, so this, too, helps keep a good copy of the data. FWIW I still have most of my personal files from my MS-DOS days, so I must be doing something correctly when it comes to backing up data. But yeah: adding a "dd to xxhsum" of the entire disk once every six months to my backup'ing script seems like a nice little addition. Heck, I may go hack that feature now. [1] otherwise rsync would happily trash good files with bitrotten ones |
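Something like this is what I'm picturing for the script hook (untested; assumes GNU date, xxhsum on the PATH, and the timestamp stored as epoch seconds):
last=$(cat lastknowddtoxxhash.txt 2>/dev/null || echo 0)
age_days=$(( ( $(date +%s) - last ) / 86400 ))
if [ "$age_days" -ge 180 ]; then
    dd if=/dev/sda bs=16M status=progress | xxhsum   # full read of the device, hashed so runs can be compared
    date +%s > lastknowddtoxxhash.txt
fi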
|
| ▲ | formerly_proven a day ago | parent | prev [-] |
| > Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year of being completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively. This is somewhat confused writing. Consumer SSDs usually do not have a data retention spec, even in this very detailed Micron datasheet you won't find it: https://advdownload.advantech.com/productfile/PIS/96FD25-S2T...
Meanwhile the data retention spec for enterprise SSDs applies at the end of their rated life, which is usually a DWPD/TBW intensity you won't reach in actual use anyway - that's where numbers like "3 months @ 50 °C" or whatever come from. In practice, SSDs don't tend to lose data over realistic time frames. Don't hope for a "guaranteed by design" spec on that though; some pieces of silicon are more equal than others. |
| |
| ▲ | Yokolos a day ago | parent [-] | | Any given TBW/DWPD values are irrelevant for unpowered data retention. Afaik, nobody gives these values in their datasheet and I'm wondering where their numbers are from, because I've never seen anything official. At this point I'd need to be convinced that the manufacturers even know themselves internally, because it's never been mentioned by them and it seems to be outside the intended use cases for SSDs | | |
|