| ▲ | Unpowered SSD endurance investigation finds data loss and performance issues(tomshardware.com) |
| 123 points by progval 3 days ago | 87 comments |
| |
|
| ▲ | jauntywundrkind 2 days ago | parent | next [-] |
Note: the worn drives were very worn. A 128GB drive with 280TB written is ~2400 cycles, >5x its 480-cycle rating! Even though it's a cheap drive, its rated endurance wasn't really that low. 600 cycles (1200TBW/2TB) is pretty common for consumer SSDs: that's higher than 480, but not vastly higher. Glad folks are chiming in with the temperature sensitivity notes. I have parts coming in for a much bigger home-cloud system, and was planning on putting it in the attic. But it's often >110°F up there in the summer! I don't know how much of a difference that would make, given that the system will be on 24/7; some folks seem to say having it powered on should be enough, others note that it's usually during reads that cells are refreshed. Doing an annual dd if=/dev/nvme0n1 of=/dev/zero bs=$((1024*1024)) hadn't been the plan, but maybe it needs to be! |
| |
▲ | ahofmann 2 days ago | parent | next [-] | | Just my 2 cents: I've installed quite a few servers and racks of hardware in very unsuitable spaces. LTO drives in dusty rooms, important disks in hot and cold (summer and winter), dusty and vibrating rooms and so on. I was very worried about them dying way sooner than people would expect. The only things that died after two years were the LTO drives. Everything else is still running. So I wouldn't worry much about your hot attic. 43° Celsius outside means constantly over 50° inside the server, which sounds horrible, but in my last 15 years of experience nothing died because the room was too hot, or cold, or humid, or shaky. I've installed dust filters in front of all air intakes, though. My reasoning is that dust kills way faster than temperature. | |
| ▲ | ilikepi 2 days ago | parent | prev | next [-] | | Ok I gotta ask...why of=/dev/zero and not of=/dev/null ? | | |
| ▲ | jauntywundrkind 2 days ago | parent [-] | | That's probably much better, oops! | | |
| ▲ | gruez 2 days ago | parent [-] | | You don't even need dd since you're reading the whole file end to end. Something like cat /dev/nvme0n1 > /dev/null
would suffice. |
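For anyone scripting that refresh read, a minimal sketch of the idea, assuming nvme-cli is installed and the drive is the first NVMe device (the device names, and the notion that a full read nudges the controller into refreshing weak cells, are assumptions rather than anything from the article):

    # snapshot the media error counter before the read
    nvme smart-log /dev/nvme0 | grep -i media_errors
    # read every block end to end; the controller may rewrite data it had to error-correct
    cat /dev/nvme0n1 > /dev/null
    # check the counter again afterwards; a growing value means the flash is degrading
    nvme smart-log /dev/nvme0 | grep -i media_errors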
|
| |
| ▲ | userbinator a day ago | parent | prev [-] | | 15 years ago the average MLC flash was rated for closer to 5k cycles, not 600. The fact that even the "fresh" drive, with just 1 cycle, is already showing some degradation is also concerning. |
|
|
| ▲ | asdefghyk 4 hours ago | parent | prev | next [-] |
I wonder how SSD endurance is affected, if at all, by storage temperature. I'm thinking of keeping it at 3 degrees or even colder, like frozen, since chemical reactions proceed more slowly at lower temperatures. Deterioration of photographic negatives can be greatly slowed by storing them at low temperatures. You would need to take steps to avoid condensation. Would be interesting to know for hard drives too. |
|
| ▲ | csdvrx 2 days ago | parent | prev | next [-] |
For long term storage, prefer hard drives (careful about CMR vs SMR). If you have specific random IO high performance needs, you can either - get an SLC drive like https://news.solidigm.com/en-WW/230095-introducing-the-solid... - make one yourself by hacking the firmware: https://news.ycombinator.com/item?id=40405578 Be careful when you use something "exotic", and do not trust drives that are too recent to be fully tested: I learned my lesson with M.2 2230 drives https://www.reddit.com/r/zfs/comments/17pztue/warning_you_ma... which seems validated by the large number of similar experiences like https://github.com/openzfs/zfs/discussions/14793 |
| |
| ▲ | vlovich123 2 days ago | parent | next [-] | | > - make one yourself by hacking the firmware: https://news.ycombinator.com/item?id=40405578
Be careful when you use something "exotic", and do not trust drives that are too recent to be fully tested Do you realize the irony of cautioning about buying off the shelf hardware but recommending hacking firmware yourself? | | |
▲ | userbinator 2 days ago | parent | next [-] | | That "firmware hack" is just enabling an option that manufacturers have always had (effectively a 100% "SLC cache") but almost never use, for reasons likely to do with planned obsolescence. | |
▲ | vlovich123 2 days ago | parent [-] | | Converting a QLC chip into an SLC is not planned obsolescence. It's a legitimate tradeoff: after analyzing the marketplace, they concluded that existing write endurance is within acceptable consumer limits and consumers would rather have more storage. Edit: and to preempt the "but make it an option". That requires support software they may not want to build and support requests from users complaining that toggling SLC mode lost all the data or toggling QLC mode back on did similarly. It's a valid business decision to not support that kind of product feature. | |
| ▲ | Dylan16807 2 days ago | parent | next [-] | | And for the vast majority of use cases, even if QLC wears out TLC would be fine indefinitely. Limiting it to SLC capacity would be ridiculous. | |
▲ | userbinator 2 days ago | parent | prev [-] | | I have USB drives with good old SLC flash, whose data is still intact after several decades (rated for retention of 10 years at 55C after 100K cycles - and they have not been cycled anywhere near that much.) > and consumers would rather have more storage No one from the manufacturers tells them that the "more storage" - multiplicatively more - lasts exponentially less. For the same price, would you rather have a 1TB drive that will retain data for 10 years after having written 100PB, or a 4TB one that will only hold that data for 3 months after having written 2PB? > That requires support software they may not want to build The software is already there if you know where to look. > and support requests from users complaining that toggling SLC mode lost all the data or toggling QLC mode back on did similarly Do they also get support requests from users complaining that they lost all data after reformatting the drive? > It's a valid business decision to not support that kind of product feature. The only "valid business decision" is to make things that don't last as long, so recurring revenue is guaranteed. Finally, the "smoking gun" of planned obsolescence: SLC flash requires nowhere near as much ECC and thus controller/firmware complexity as MLC/TLC/QLC. It is also naturally faster. The NRE costs of controllers supporting SLC flash are a fraction of those for >1 bit per cell flash. QLC in particular, according to one datasheet I could find, requires ECC that can handle a bit error rate of 1E-2. One in a hundred bits read will be incorrect in normal operation of a QLC storage device. That's how idiotic it is --- they're operating at the very limits of error correction, just so they can have a measly 4x capacity increase over SLC, which is nearly perfect and needs very minimal ECC. All this energy and resource usage dedicated to making things more complex and shorter-lasting can't be considered anything other than planned obsolescence. Contrast this with SmartMedia, the original NAND flash memory card format, rated for 100K-1M cycles, using ECC that only needs to correct at most 1 bit in 2048, and with such high endurance that it doesn't even need wear leveling. Also consider that SLC drives should cost a little less than 4x the price of QLC ones of the same capacity, given the lower costs of developing controllers and firmware, and the same price of NAND die, yet those rare SLC drives which are sold cost much more --- they're trying to price them out of reach of most people, given how much better they actually are. | |
| ▲ | vlovich123 2 days ago | parent [-] | | No you’re right. You’ve uncovered a massive conspiracy where they’re out to get you. > No one from the manufacturers tells them that the "more storage" - multiplicatively more - lasts exponentially less.
For the same price, would you rather have a 1TB drive that will retain data for 10 years after having written 100PB, or a 4TB one that will only hold that data for 3 months after having written 2PB? These numbers seem completely made up since these come with a 1 year warranty and such a product would be a money loser. > Also consider that SLC drives should cost a little less than 4x the price of QLC ones of the same capacity, given the lower costs of developing controllers and firmware, and the same price of NAND die, yet those rare SLC drives which are sold cost much more --- they're trying to price them out of reach of most people, given how much better they actually are. You have demonstrated a fundamental lack of understanding in economics. When there’s less supply (ie these products aren’t getting made), things cost more. You are arguing that it’s because these products are secretly too good whereas the simpler explanation is just that the demand isn’t there. | | |
▲ | userbinator a day ago | parent [-] | | > When there's less supply (ie these products aren't getting made), things cost more. SLC and QLC are literally the same silicon these days, just controlled by an option in the firmware; the former doesn't even need the more complex sense and program/erase circuitry of the latter, and yields of die which can function acceptably in TLC or QLC mode are lower. If anything, SLC can be made from reject or worn MLC/TLC/QLC, something that AFAIK only the Chinese are attempting. Yet virgin SLC die are priced many times more, and drives using them are nearly impossible to find. > such a product would be a money loser. You just admitted it yourself - they don't want to make products that last too long, despite them actually costing less. Intel's Optane is also worth mentioning as another "too good" technology. | |
| ▲ | vlovich123 a day ago | parent [-] | | I think you’re casually dismissing the business costs associated with maintaining a SKU and assuming manufacturing cost is the only thing that drives the final cost which isn't strictly true. The lower volumes specifically are why costs are higher regardless of it “just” being a firmware difference. |
|
|
|
|
| |
| ▲ | Brian_K_White 2 days ago | parent | prev [-] | | They did not recommend. They listed. |
| |
▲ | sitkack 2 days ago | parent | prev | next [-] | | Tape is extremely cheap now. I booted up a couple of laptops that had been sitting unpowered for over 7 years and the SATA SSD in one of them has missing sectors. It had zero issues when it was shut down. | |
| ▲ | seszett 2 days ago | parent | next [-] | | Is tape actually cheap? Tape drives seem quite expensive to me, unless I don't have the right references. | | |
| ▲ | wtallis 2 days ago | parent | next [-] | | Tapes are cheap, tape drives are expensive. Using tape for backups only starts making economic sense when you have enough data to fill dozens or hundreds of tapes. For smaller data sets, hard drives are cheaper. | | |
| ▲ | sitkack 2 days ago | parent | next [-] | | Used LTO5+ drives are incredibly cheap, you can get a whole tape library with two drives and many tape slots for under 1k. Tapes are also way more reliable than hard drives. | |
| ▲ | AStonesThrow 2 days ago | parent | prev [-] | | HDDs are a pragmatic choice for “backup” or offline storage. You’ll still need to power them up, just for testing, and also so the “grease” liquefies and they don’t stick. Up through 2019 or so, I was relying on BD-XL discs, sized at 100GB each. The drives that created them could also write out M-DISC archival media, which was fearsomely expensive as a home user, but could make sense to a small business. 100GB, spread over one or more discs, was plenty of capacity to save the critical data, if I were judiciously excluding disposable stuff, such as ripped CD audio. |
| |
▲ | dogma1138 2 days ago | parent | prev [-] | | If you don't have a massive amount of data to back up, used LTO5/6 drives are quite cheap, though software and drivers are another issue with a lot of enterprise kit. The problem ofc is that with a tape you need to also have a backup tape drive on hand. Overall if you get a good deal you can have a reliable backup setup for less than $1000 with 2 drives and a bunch of tape. But this is only good if you have single digit of TBs or low double digit of TBs to back up since it's slow and with a single tape drive you'll have to swap tapes manually. LTO5 is 1.5TB and LTO6 is 2.5TB (more with compression), which should be enough for most people. | |
| ▲ | Dylan16807 2 days ago | parent | next [-] | | > But this is only good if you have single digit of TBs or low double digit of TBs That's not so enticing when I could get 3 16TB hard drives for half the price, with a full copy on each drive plus some par3 files in case of bad sectors. | | |
▲ | dogma1138 2 days ago | parent [-] | | You could, it's really a question of what your needs are and what your backup strategy is. Most people don't have that much data to back up; I don't back up movies and shows I download because I can always rebuild the library from scratch. I only back up stuff I create, so personal photos, videos etc. I'm not using a tape backup either; cloud backup is enough for me and it's cheap as long as you focus your backups on what matters most. |
| |
▲ | sitkack 2 days ago | parent | prev [-] | | I have used LTO5 drives under FreeBSD and Linux. Under Linux I used both LTFS and tar. There were zero issues with the software. | |
| ▲ | dogma1138 2 days ago | parent [-] | | Older drives are a bit better but still ymmv. Had quite a few issues with Ethernet based drives on Linux in the past. |
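For anyone curious what the tar-to-tape workflow mentioned a couple of comments up looks like, a minimal sketch assuming an LTO drive exposed as /dev/nst0 (the non-rewinding device node) and the mt-st utilities; the backup path is a placeholder:

    mt -f /dev/nst0 rewind                  # position the tape at the beginning
    tar -cvf /dev/nst0 /srv/backup          # stream one tar archive straight onto the tape
    mt -f /dev/nst0 rewind
    tar -tvf /dev/nst0 > /dev/null \
        && echo "archive lists back cleanly"    # cheap read-back check
    mt -f /dev/nst0 offline                 # rewind and eject the cartridge

LTFS is the friendlier alternative: the tape mounts like a filesystem, at the cost of some extra tooling.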
|
|
| |
| ▲ | CTDOCodebases 2 days ago | parent | prev | next [-] | | The issue with tape is that you have to store it in a temperature controlled environment. | |
| ▲ | matheusmoreira 2 days ago | parent | prev | next [-] | | Tape sucks unless you've got massive amounts of money to burn. Not only are tape drives expensive, they only read the last two tape generations. It's entirely possible to end up in a future where your tapes are unreadable. | | |
| ▲ | Dylan16807 2 days ago | parent [-] | | There's a lot of LTO drives around. I strongly doubt there will be any point in the reasonable lifetime of LTO tapes (let's say 30 years) where you wouldn't be able to get a correct-generation drive pretty easily. |
| |
▲ | fpoling 2 days ago | parent | prev [-] | | While the tape is relatively cheap, the tape drives are not. The new ones typically start at 4K USD, although for older models the prices can sometimes drop below 2K. | |
| ▲ | sitkack 2 days ago | parent [-] | | You can get LTO5+ drives on ebay for $100-400. Buying new doesn't make sense for homelab. |
|
| |
▲ | dragontamer 2 days ago | parent | prev | next [-] | | If you care about long term storage, make a NAS and run ZFS scrub (or equivalent) every 6 months. That will check for errors and fix them as they come up. All error correction has a limit. If too many errors build up, they become unrecoverable. But as long as you reread and fix them while they are still within the error correction limit, it's fine. | |
▲ | csdvrx 2 days ago | parent | next [-] | | > run ZFS scrub (or equivalent) every 6 months zfs in mirror mode offers redundancy at the block level, but scrub requires plugging in the device > All error correction has a limit. If too many errors build up, they become unrecoverable There are software solutions. You can specify the redundancy you want. For long term storage, if using a single medium that you can't plug in and scrub, I recommend par2 (https://en.wikipedia.org/wiki/Parchive?useskin=vector) over NTFS: there are many NTFS file recovery tools, and it shouldn't be too hard to roll your own solution to use the redundancy when a given sector can't be read |
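As a concrete example of the par2 approach, a minimal sketch using par2cmdline; the file names and the 10% redundancy level are arbitrary choices, not recommendations from the parent comment:

    par2 create -r10 archive.par2 photos/*.jpg   # generate recovery blocks with ~10% redundancy
    par2 verify archive.par2                     # later: check the files against the recovery data
    par2 repair archive.par2                     # reconstruct damaged or missing files, if within the redundancy budget

Keeping the .par2 files on the same archive medium as the data means a partially readable disk can still repair itself.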
| ▲ | WalterGR 2 days ago | parent | prev | next [-] | | What hardware, though? I want to build a NAS / attached storage array but after accidentally purchasing an SMR drive[0] I’m a little hesitant to even confront the project. A few tens of TBs. Local, not cloud. [0] Maybe 7 years ago. I don’t know if anything has changed since, e.g. honest, up-front labeling. [0*] For those unfamiliar, SMR is Shingled Magnetic Recording. https://en.m.wikipedia.org/wiki/Shingled_magnetic_recording | | |
| ▲ | justinclift a day ago | parent | next [-] | | I have a homelab with a bunch of old HP Gen 8 Microservers. They hold 4x 3.5" hdds and also an ssd (internally, replacing the optical slot): https://www.ebay.com/itm/156749631079 These are reasonably low power, and can take up to 16GB of ECC ram which is fine for small local NAS applications. The cpu is socketed, so I've upgraded most of mine to 4 core / 8 thread Xeons. From rough memory of the last time I measured the power usage at idle, it was around 12w with the drives auto-spun down. They also have a PCIe slot in the back, though it's older gen, but you'll be able to put a 10GbE card in it if that's your thing. Software wise, TrueNAS works pretty well. Proxmox works "ok" too, but this isn't a good platform for virtualisation due to the maximum of 16GB ram. | |
| ▲ | matheusmoreira 2 days ago | parent | prev | next [-] | | > What hardware, though? Good question. There seems to be no way to tell whether or not we're gonna get junk when we buy hard drives. Manufacturers got caught putting SMR into NAS drives. Even if you deeply research things before buying, everything could change tomorrow. Why is this so hard? Why can't we have a CMR drive that just works? That we can expect to last for 10 years? That properly reports I/O errors to the OS? | |
| ▲ | code_biologist 2 days ago | parent | prev | next [-] | | The Backblaze Drive Stats are always a good place to start: https://www.backblaze.com/blog/backblaze-drive-stats-for-202... There might be SMR drives in there, but I suspect not. | |
| ▲ | wmf 2 days ago | parent | prev | next [-] | | Nothing can really save you from accidentally buying the wrong model other than research. For tens of TBs you can use either 4-8 >20TB HDDs or 6-12 8TB SSDs (e.g. Asustor). The difference really comes down to how much you're willing to pay. | |
| ▲ | 3np 2 days ago | parent | prev | next [-] | | Toshi Nx00/MG/MN are good picks. The company never failed us and I don't believe they've had the same kinds of controversies as the US competition. Please don't tell everyone so we can still keep buying them? ;) | |
| ▲ | dragontamer 2 days ago | parent | prev [-] | | SMR will store your data, just slowly. It was a mistake for the Hard Drive business community to push them so hard IMO. But these days the 20TB+ drives are all HAMR or other heat/energy assisted tech. If you are buying 8TB or so, just make sure to avoid SMR but otherwise you're fine. Even then, SMR stores data fine, it's just really really slow. |
| |
| ▲ | ErneX 2 days ago | parent | prev [-] | | I use TrueNAS and it does a weekly scrub IIRC. |
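For reference, the scrub routine suggested upthread is a one-liner plus a schedule; a minimal sketch assuming a pool named tank (the pool name, the cron schedule, and the zpool path are placeholders):

    zpool scrub tank        # re-read every block and repair from redundancy where possible
    zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors

    # crontab entry: kick off a scrub at 03:00 on the 1st of January and July
    0 3 1 1,7 * /sbin/zpool scrub tank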
| |
| ▲ | AshamedCaptain 2 days ago | parent | prev [-] | | > (careful about CMR vs SMR) Given the context of long term storage... why? | | |
| ▲ | 0cf8612b2e1e 2 days ago | parent | next [-] | | After I was bamboozled with a SMR drive, always great to just make the callout to those who might be unaware. What a piece of garbage to let vendors upsell higher numbers. (Yes, I know some applications can be agnostic to SMR, but it should never be used in a general purpose drive). | |
▲ | whoopdedo 2 days ago | parent | prev [-] | | Untested hypothesis, but I would expect the wider spacing between tracks in CMR makes it more resilient against random bit flips. I'm not aware of any experiments to prove this and it may be worth doing. If the HD manufacturers can convince us that SMR is just as reliable for archival storage, it would help them sell those drives, since right now lots of people are avoiding SMR due to poor performance and the infamy of the bait-and-switch that happened a few years back. |
|
|
|
| ▲ | gnabgib 2 days ago | parent | prev | next [-] |
| Discussion on the original source: (20 points, 3 days ago, 5 comments) https://news.ycombinator.com/item?id=43702193 Related: SSD as Long Term Storage Testing (132 points, 2023, 101 comments) https://news.ycombinator.com/item?id=35382252 |
|
| ▲ | ein0p 2 days ago | parent | prev | next [-] |
This is a known issue. You have to power up your SSDs (and flash cards, which are based on an even more flimsy, cost-optimized version of the same tech) every now and then for them to keep data. SSDs are not suitable for long term cold storage or archiving. Corollary: don't lose that recovery passphrase you've printed out for your hardware crypto key, the flash memory in it is also not eternal. |
| |
| ▲ | jsheard 2 days ago | parent | next [-] | | A not-so-fun fact is that this even applies to modern read-only media, most notably Nintendo game carts. Back in the day they used mask ROMs which ought to last more or less forever, but with the DS they started using cheaper NOR or NAND flash for larger games, and then for all games with the 3DS onwards. Those carts will bit-rot eventually if left unpowered for a long time. | | |
| ▲ | vel0city 2 days ago | parent | next [-] | | I've noticed a number of GBA carts I've picked up used (and probably not played in a long while) fail to load on the first read. Sometimes no logo, sometimes corrupted logo. Turning it off and on a couple of times solved the issue, and once it boots OK it'll boot OK pretty much every time after. Probably until it sits on the shelf for a long while. | | |
| ▲ | jsheard 2 days ago | parent [-] | | I think GBA games were all MaskROMs, so with those it's probably just due to the contacts oxidizing or something. |
| |
| ▲ | Dylan16807 2 days ago | parent | prev [-] | | Do we have confirmation that the carts are able to refresh the data? |
| |
| ▲ | computator 2 days ago | parent | prev | next [-] | | > You have to power up your SSDs every now and then for them to keep data. What is the protocol you should use with SSDs that you’re storing? Should you: - power up the SSD for an instant (or for some minutes?) without needing to read anything? - or power up the cells where your data resides by reading the files you had created on the SSD? - or rewrite the cells by reading your files, deleting them, and writing them back to the SSD? | | |
| ▲ | gblargg 2 days ago | parent | next [-] | | I'd at least just read all the used blocks on the drive. partclone is the most efficient that comes to mind, because it just copies used sectors. Just redirect to /dev/null. partclone.filesystem --source /dev/sdx --clone --output /dev/null
| | |
| ▲ | justinclift a day ago | parent [-] | | If you just need to read all of the sectors, then couldn't you just dd or cat the source drive instead? |
| |
| ▲ | rapjr9 a day ago | parent | prev | next [-] | | Maybe someone should design and sell a "drivekeeper" that you can plug all your backup SSD's into and it will power them up on a time table and do whatever is necessary to cause them to maintain integrity. Could be something like a Raspberry Pi with a many-port USB hub, or with a PCB with a bunch of connectors the raw drives can plug into. Could maybe even give a warning if a drive is degrading. Possibly it could be a small device with a simple MCU and a battery that you snap directly onto the SSD's connector? | |
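The software half of that "drivekeeper" is roughly a cron job plus a loop; a rough sketch assuming the archive disks appear under /dev/disk/by-id/ and smartmontools is installed (the paths, schedule and log location are made up for illustration):

    #!/bin/sh
    # read every attached archive disk end to end and log its SMART health verdict
    LOG=/var/log/drivekeeper.log
    for dev in /dev/disk/by-id/ata-*; do
        [ -e "$dev" ] || continue
        case "$dev" in *-part*) continue ;; esac      # skip partition entries, keep whole disks
        echo "$(date -Is) checking $dev" >> "$LOG"
        smartctl -H "$dev" >> "$LOG" 2>&1             # overall SMART health assessment
        dd if="$dev" of=/dev/null bs=4M status=none   # full read pass to exercise the media
    done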
| ▲ | mikequinlan 2 days ago | parent | prev [-] | | >What is the protocol you should use with SSDs that you’re storing? The correct protocol is to copy the data to a more durable medium and store that. | | |
| ▲ | hedora 2 days ago | parent [-] | | Or leave the drive on all the time in an enclosure that keeps the nand cool (near room temperature). Any decent SSD will background rewrite itself over time at an appropriate rate. Detecting that it needs to do so after 2 years in storage seems trickier to get right (no realtime clock) and trickier to test. I’d be curious to know how quickly the drives in the article “heal” themselves if left plugged in. There’s nothing wrong with the hardware, even if they did lose a few sectors. If you really want to create a correct protocol, you can accelerate nand aging by sticking them in an oven at an appropriate temperature. (Aging speedup vs temperature curves are available for most nand devices.) |
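Those aging-speedup curves are usually Arrhenius-style acceleration factors, AF = exp[(Ea/k) * (1/T_use - 1/T_stress)] with temperatures in kelvin; the ~1.1 eV activation energy below is a commonly quoted figure for NAND retention, not a number from the article, so treat the result as a ballpark:

    # acceleration factor for a retention bake: 85C oven vs 40C shelf, Ea assumed to be 1.1 eV
    awk 'BEGIN { Ea=1.1; k=8.617e-5; Tu=273.15+40; Ts=273.15+85;
                 printf "%.0fx\n", exp((Ea/k)*(1/Tu - 1/Ts)) }'

With those assumptions the factor comes out around 170x, i.e. a week in the oven stands in for roughly three years on the shelf.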
|
| |
▲ | zamadatix 2 days ago | parent | prev | next [-] | | The article states as much, but to sum it all up as just that is leaving most of the good stuff out. Perhaps the most interesting part of the experiment series has been just how much longer these cheap drives with tons of writes have been lasting compared to the testing requirements (especially with the one just now starting to exhibit trouble being so far past its rated write endurance). Part of the impetus for the series seemed to be the many claims about how quickly to expect massive problems, without any actual experimental tests of consumer drives to back them up. Of course it's n=4 with 1 model from 1 brand, but it's taken ~20x longer than some common assumptions to start seeing problems on a drive at 5x its endurance rating. | |
▲ | RicoElectrico 2 days ago | parent | prev | next [-] | | Please explain to me how that is supposed to work. For all I know the floating gate is, well, isolated, and only writes (which SSDs don't like if they're repeated on the same spot) touch it, through mechanisms not unlike MOSFET aging, i.e. carrier injection.
Reading, on the other hand, depends on the charge in the floating gate altering the Vt of the transistor below, and this action cannot drain any charge from the floating gate. | |
| ▲ | ein0p 2 days ago | parent | next [-] | | If you at least read the data from the drive from time to time, the controller will "refresh" the charge by effectively re-writing data that can't be read without errors. Controllers will also tolerate and correct _some_ bit flips on the fly, topping up cells, or re-mapping bad pages. Think of it as ZFS scrub, basically, except you never see most of the errors. | |
| ▲ | wmf 2 days ago | parent | prev [-] | | According to a local expert (ahem), leakage can occur through mechanisms like Fowler-Nordheim tunneling or Poole-Frenkel emission, often facilitated by defects in the oxide layers. |
| |
| ▲ | matheusmoreira 2 days ago | parent | prev [-] | | > don't lose that recovery passphrase you've printed out for your hardware crypto key, the flash memory in it is also not eternal Yeah. Paper is the best long term storage medium, known to last for centuries. https://wiki.archlinux.org/title/Paperkey It's a good idea to have a backup copy of the encryption keys. Losing signing keys is not a big deal but losing encryption keys can lead to severe data loss. | | |
▲ | smittywerben 2 days ago | parent | next [-] | | > Paper is the best long term storage medium, known to last for centuries. Carve it into stone if you need it to last longer than the Romans. | |
| ▲ | hedora 2 days ago | parent | next [-] | | Clay works well. The Rosicrucian museum in San Jose has a cuneiform tax receipt stating that a goat was eaten by wolves, and therefore no tax is due on it. This is similar, from 1634BC, but I don’t know what it says: https://archive.org/details/mma_cuneiform_tablet_impressed_w... | |
| ▲ | matheusmoreira 2 days ago | parent | prev [-] | | They say M-Discs are analogous to carving data onto stone. I have my doubts though. Searched the web and found posts claiming they are just regular Blu-ray discs now. |
| |
▲ | bobmcnamara 2 days ago | parent | prev [-] | | Good paper too: low acid content, kept in an environment low in cellulose-eating insects. |
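To make the paperkey link above concrete, a minimal sketch assuming GnuPG and paperkey are installed; the key ID and file names are placeholders:

    gpg --export-secret-keys 0xDEADBEEF > secret-key.gpg        # placeholder key ID
    paperkey --secret-key secret-key.gpg --output to-print.txt  # reduce to just the secret parts, as printable text
    # recovery: re-type or OCR the printout, then recombine it with the public key
    gpg --export 0xDEADBEEF > public-key.gpg
    paperkey --pubring public-key.gpg --secrets from-paper.txt --output restored-secret.gpg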
|
|
|
| ▲ | rkagerer 2 days ago | parent | prev | next [-] |
| I would never buy a no-name SSD. Did it once long ago and got bit, wrote a program to sequentially write a pseudorandom sequence across the whole volume then read back and verify, and proved all 8 Pacer SSD's I had suffered corruption. |
| |
| ▲ | WalterGR 2 days ago | parent [-] | | That’s also fairly common for cheap ‘thumb drives’, as I understand it. I’ve been bitten by that before. (Edit: Allegedly if you use low-numbered storage blocks you’ll be okay, but the advertised capacity (both packaging and what it reports to OS) is a straight-up lie.) | | |
| ▲ | WalterGR 2 days ago | parent [-] | | (Edit: Retail packaging, I mean. Too late to edit my comment.) |
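The write-then-verify test rkagerer describes can be improvised with a deterministic keystream; a destructive sketch for a disposable test device (the /dev/sdX path and the seed passphrase are placeholders, and everything on the device is overwritten):

    # fill the device with a repeatable pseudorandom stream (AES-CTR keystream over zeros)
    openssl enc -aes-128-ctr -pbkdf2 -nosalt -pass pass:testseed -in /dev/zero \
        | dd of=/dev/sdX bs=1M iflag=fullblock oflag=direct status=progress
    # regenerate the identical stream and compare; any "differ" report before the EOF notice means corruption
    openssl enc -aes-128-ctr -pbkdf2 -nosalt -pass pass:testseed -in /dev/zero \
        | cmp - /dev/sdX

dd exiting with "No space left on device" is expected once the drive is full; tools like f3 automate the same idea for spotting fake-capacity flash.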
|
|
|
| ▲ | bityard 2 days ago | parent | prev | next [-] |
| I didn't think it was controversial that SSDs are terrible at long term storage? |
| |
| ▲ | creatonez 2 days ago | parent | next [-] | | Right, but the news is that someone finally actually tested the unplugged-is-worse theory in a long term real world test. So much of the existing data about SSD endurance has been exclusively SSDs in datacenters plugged in 24/7, so it's nice to see some data showing what the difference actually looks like. | |
| ▲ | wmf 2 days ago | parent | prev [-] | | I wouldn't say it's controversial but I suspect most people don't know about it. There's been a lot of discussion about SSD write endurance but almost none about retention. |
|
|
| ▲ | stego-tech 2 days ago | parent | prev | next [-] |
| I mean, this reads as I expected it to: SSDs as cold storage are a risky gamble, and the longer the drive sits unpowered (and the more use it sees over its life), the less reliable it becomes. Man, I’d (figuratively) kill for a startup to build a prosumer-friendly LTO drive. I don’t have anywhere near the hardware expertise myself, but I’d gladly plunk down the dosh for a drive if they weren’t thousands of dollars. Prosumers and enthusiasts deserve shelf-stable backup solutions at affordable rates. |
| |
▲ | 0manrho 2 days ago | parent [-] | | I agree with the general sentiment, but just a heads-up for those who might be unaware... > thousands of dollars I know this varies a lot based on location, but if you're stateside, you can get perfectly functional used LTO6 drives for $200-$500 that support 6TB RW cartridges that you can still buy new for quite cheap (20pk of 6TB RW LTO6 carts[0] for $60), much cheaper than any other storage medium I'm aware of and still reasonably dense and capacious by modern standards. Sure it's 3-4 generations behind the latest LTOs, and you might need to get yourself a SAS HBA/AIC to plug it in to a consumer system (those are quite cheap as well, don't need to get a full RAID controller), but all in that's still quite an affordable and approachable cold storage solution for prosumers. Is there a reason this wouldn't work for you? Granted, the autoloader/enclosed library units are still expensive as all hell, even the old ones, and I'd recommend sticking to Quantum over IBM/HPE for the drives given how restrictive the latter can be over firmware/software, but just wanted to put it out there that you don't necessarily need to spend 4 figures to get a perfectly functional tape solution with many, many TBs worth of storage. 0: https://buy.hpe.com/us/en/storage/storage-media/tape-media/l... | |
▲ | throwaway81523 2 days ago | parent [-] | | LTO6 capacity is 2.5TB. 6TB is "compressed", which these days is near worthless because data collections that size are likely to already be compressed (example: video). If the idea is to back up a hard drive, it's not nice to require big piles of tape to do it. Right now even the current LTO generation (LTO9, 18TB native capacity) can't back up the current highest capacity hard drives (28TB or more, not sure). In fact HDD is currently cheaper per TB (20TB drive at $229 at Newegg) than LTO6 tape ($30 for 2.5TB) even if the tape drive is free. LTO would be interesting for regular users if the current generation drive cost $1000 or less new with warranty rather than as crap from ebay. It's too much of an enterprise thing now. Also I wonder what is up with LTO tape density. IBM 3592 drives currently have capacity up to 50TB per tape in a cartridge the same size as an LTO cartridge, so they are a couple generations ahead of LTO. Of course that stuff is even more ridiculously expensive. | |
| ▲ | fluidcruft 2 days ago | parent | next [-] | | Unless I misunderstood something, the comment you replied to has a link to HPE selling packs of 20 new LTO-6 cartridges (2.5TB x 20 = 50TB) for $60 (i.e. $3 per 2.5TB cartridge) which is far cheaper than a hard drive. | | |
▲ | 0manrho 2 days ago | parent [-] | | Correct, it also misses the core point of the discussion: SSD's and HDD's are unreliable cold storage long term. Cheap HDD's are better than cheap SSD's (debatable if you're willing to spring for high end parts, but that's outside the scope of value/affordability), but if that data is truly important, it's well established best practice to replicate (be it cloning, mirroring, parity, what have you) your storage media. Sometimes, unfortunately, a user will value the data more than they can afford to properly replicate/secure it, and compromises must be made. Also, tape does require a higher buy-in than most (individual) SSD's/HDD's demand before you can even start investing in actual storage media, even if going used, so there are absolutely valid contexts where the "right"/best available approach is just throwing $200 at a second HDD and cloning/mirroring it, but the best available compromise in any given specific context is a separate discussion from general best practices or best value in broad terms. |
| |
▲ | 0manrho 2 days ago | parent | prev [-] | | Correct, and absolutely worth noting, but the point still stands. Had no intention of misleading; I called it a 6TB drive because that's what they're called (technically 6.25TB if we really want to get pedantic). Whether using LTO's compression or not, whether your data is already compressed or not, it's still a reasonably affordable, dense, reliable, approachable cold storage offering. Same is true even for LTO5. It only starts to go sideways when you step up to LTO7 and above or try to get an autoloading all-in-one library unit. Though you can get lucky if you're patient/persistent in your bargain hunting. | |
▲ | stego-tech 2 days ago | parent [-] | | You're both beating around the bush that is the core issue, though, and that's a lack of backup media that isn't a HDD for storing large amounts of data indefinitely, never mind on a medium that doesn't have to be powered on every X interval to ensure it's still functional. Prosumers/enthusiasts generally have three options for large-scale data backups (18TB+), and none are remotely as affordable as the original storage medium: * A larger storage array to hold backups and/or versions as needed (~1.25x the $ cost of your primary array to account for versions) * Cloud-based storage (~$1300/yr from Backblaze B2 for 18TB; AWS Glacier Deep Archive is far cheaper, but the egress costs per year for testing are comparable to B2) * LTO drives ($3300 for an mLogic LTO-8 drive, plus media costs) Of those, LTO drives are (presently) the only ones capable of having a stable "shelf life" at a relatively affordable rate. Most consumers with datasets that size likely aren't reading that data more than once or twice a year to test the backup itself, and even then maybe restoring once or twice in their lifetime. LTO is perfect for this operating model, letting users create WORM tapes for the finished stuff (e.g., music and video collections), and use a meager rotation of tapes for infrequent backups (since more routinely-accessed data could be backed up to cloud providers for cheaper than the cost of an associated daily LTO backup rotation). LTO is also far more resilient to being shipped than HDDs, making it easier to keep offline copies with family or friends across the country to protect your data from large-scale disasters. It's the weird issue of making it cheaper than ever for anyone to hoard data, but more expensive than ever to back it up safely. It's a problem that's unlikely to go away anytime soon, given Quantum's monopoly on LTO technology and IBM's monopoly on drive manufacturing, making it a ripe market for a competitor. I'd still love to see someone take a crack at it though. The LTO Consortium could use a shake-up, and the market for shelf-stable tape backup could do with some competition in general to depress prices a bit. | |
▲ | 0manrho 2 days ago | parent | next [-] | | I'm absolutely perplexed at how I'm beating around the bush regarding > a lack of backup media that isn't a HDD for storing large amounts of data indefinitely When I recommended LTO, which you yourself say that > LTO is perfect for this operating model Agreed. Which is why I recommended it. As did you. Because it's a solved issue. > * LTO drives ($3300 for an mLogic LTO-8 drive, plus media costs) LTO-8/9 aren't the only options. LTO5/6/7 aren't defunct/unusable/unavailable. That's like complaining that SSD's are too expensive because you're only looking at Micron 9550 or Optane P5800's and their ilk. > making it a ripe market for a competitor. You'd have to engineer your own controllers and drives and likely cartridges as well, including drivers, firmware and software, which is neither cheap nor easy, which is why no one has done this. It's doable, but the initial CapEx is astronomical, and the target market outside of enterprise is small, meaning the return is unlikely to make it worth it, so you'd also have to spend big on advertising to appeal to said prosumers to try to sell them on something that most people would think of as cumbersome or obsolete ("Tape?! This isn't the 80's!", sure, we know better, but does the layman? that's not an easy sell), or find some way to make inroads against IBM/HPE/Quantum in the enterprise space, which is unlikely for a not-already established big name. Even in the remote chance that they can beat IBM/HPE/Quantum on $/TB on new latest gen products, they almost certainly can't do that meaningfully cheaper than buying used Quantums from a few generations ago. Would it be nice? Sure. In the meantime, price sensitive prosumers absolutely do not have to pay multiple thousands to get into tape storage. And if the data being backed up is truly that important, and we only limit it to new on shelf/current gen solutions, a one-time (or once per decade or less) low 4 figures expense for a tape unit and media that is a fraction of HDD's $/TB value even at the cutting edge is not an unreasonable expense. Shit, people pay more than that for some QNAP/Synology junk with spinning disks and end up with less capacity and resilience with more headaches. If the goal is "I want to back up the data on a single HDD and don't want to spend thousands to do it" the answer is to buy another HDD and mirror/clone it. The reality is tape is still around because it already is and continues to be quite affordable (in addition to its shelf-life/reliability), and in all likelihood (barring some breakthrough) going to outlast HDD's. | |
| ▲ | throwaway81523 2 days ago | parent [-] | | > LTO-8/9 aren't the only options. LTO5/6/7 aren't defunct/unusable/unavailable. LTO 5 and 6 have too little capacity to be really viable these days. LTO 7 is more interesting but you're still looking at drive cost of $1000+ and media cost almost as much as HDD's per TB. |
| |
▲ | throwaway81523 2 days ago | parent | prev [-] | | For $1392/year you can get a Hetzner SX65, which is a fairly beefy server with 4x 22TB hard drives, so that beats your Backblaze figure by about 2x, but still, it's far more than the cost of the drives. There are also bigger models with more drives, where raid-6 overhead becomes less of an issue. https://www.hetzner.com/dedicated-rootserver/matrix-sx/ A 20TiB Hetzner StorageBox (managed Raid-6 storage with scp/Borg access) is $552 a year, which is also pretty good compared to Backblaze. I have a 5TiB one and it has been solid, if a little bit slow some of the time. I think the StorageBox line is about due for a refresh since there has been a big drop in HDD prices lately, despite tariffs or whatever. Are Seagate Barracudas terrible drives, or what? They are $229 for a 20TB unit at Newegg right now. | |
| ▲ | 0manrho 2 days ago | parent [-] | | Hetzner is a great service, but what you're pitching is in no way a solution tailored for the aforementioned usecase. We're talking long term cold storage backup medium, as in meant to last many years. SX65, Storagebox, and Backblaze are not cold storage. SX65 would be $7000 over 5 years for 80TB without redundancy. You could get an LTO-7 or even 8 drive and many times SX65's storage for less, and have literal hundreds if not thousands left over for compute or whatever else with no recurring cost. Hell you could get an autoloader all-in-one tape library with tapes to fill it for less than that. There are absolutely scenarios where SX65/Storagebox/Backblaze/Cloud-hosted-storage makes sense and is a decent value, but this isn't one of them. If you want off-site cold storage, Glacier handily beats them with money and TB's to spare. If you want always-on and available "warmer" storage, great, but that's an entirely separate discussion/usecase. | | |
| ▲ | throwaway81523 2 days ago | parent [-] | | Yes I guess for 80TB and 5 years, LTO starts looking better. For 20TB, StorageBox is still ahead. Per https://aws.amazon.com/s3/glacier/pricing/ it looks like Glacier costs $3.6 per TB per month, which is a lot more than StorageBox even not counting egress fees. Is there a cheaper class of Glacier that I missed? Another idea is simply to buy a 29TB hard drive or pair of them to keep spinning, doing occasional integrity checks. I've had terrible luck using HDD's for cold storage but by now have had a few spinning in servers for 5 years without failures. Those are hosted servers in data centers though. Environmental conditions at home might not be as good. Two HDD's idling might use $100 of electricity over 5 years. An LTO7 drive on ebay is $1000+, while new is still close to $4000. I'm dubious of used tape drives but maybe it is an ok idea. Hmm. New LTO7 tapes are around $50 (6TB capacity) so just barely ahead of HDD. I wonder why there is no "LTO as a service" where someone has an LTO drive in a data center, and for a fee will write a tape for you and ship it to you. |
|
|
|
|
|
|
|
|
| ▲ | jeffbee 2 days ago | parent | prev | next [-] |
| Endurance is proportional to programming temperature. In the video, when all four SSDs are installed at once, the composite device temperature ranges over 12º. This should be expected to influence the outcomes. |
|
| ▲ | ksec 2 days ago | parent | prev | next [-] |
That is basically saying don't store your data on USB memory. Is there any way we could fix those corrupted, bit-flipped files on USB flash? |
|
| ▲ | 0manrho 2 days ago | parent | prev [-] |
Not surprising. Flash is for hot/warm storage, not for cold storage, but using literal bottom of the bargain-bin barrel no-name drives that are already highly worn really doesn't tell us anything. Even if these were new or in powered-on systems for their whole life, I wouldn't have high confidence in their data retention, reliability or performance quite frankly. Granted, there is something to be said about using budget/used drives en masse and brute forcing your way to reliability/performance/capacity on a budget through sheer scale, but that requires some finesse and understanding of storage array concepts, best-practices, systems and software. By no means beyond the skills of an average homelabber/HN reader if you're willing to spend a few hours of research, but importantly you would want to evaluate them as an array/bulk, not individually in that instance, else you lose context. That also typically requires a total monetary investment beyond what most homelabbers/consumers/prosumers are willing to invest even if the end-of-the-day TB/$ ratio ends up quite competitive. There are also many, many types of flash drives (well beyond just MLC/TLC/QLC), and there's a huge difference between no names, white labels, and budget ADATA's and the like, and an actual proper high end/enterprise SSD (and a whole spectrum in between). And no, 990/9100 Pro's from Samsung and other similar prosumer drives are not high end flash. Good enough for home gamers and most prosumers, absolutely! They would also likely yield significant improvements vs the Levens in the OP. I'm not trying to say those prosumer drives are bad drives. They aren't (The Levens though, even new in box, absolutely are). I'm merely saying that a small sample of some of the worst drives you can buy that are already beyond their stated wear and tear is frankly a poor sample to derive any real informed opinion on flash's potential or abilities. TL;DR: This really doesn't tell us much other than "bad flash is bad flash". |