| ▲ | csdvrx 3 days ago |
| For long-term storage, prefer hard drives (careful about CMR vs SMR). If you have specific high-performance random-IO needs, you can either:

- get a SLC drive like https://news.solidigm.com/en-WW/230095-introducing-the-solid...
- make one yourself by hacking the firmware: https://news.ycombinator.com/item?id=40405578

Be careful when you use something "exotic", and do not trust drives that are too recent to be fully tested: I learned my lesson with M.2 2230 drives https://www.reddit.com/r/zfs/comments/17pztue/warning_you_ma... which seems validated by the large number of similar experiences, like https://github.com/openzfs/zfs/discussions/14793 |
|
| ▲ | vlovich123 3 days ago | parent | next [-] |
| > - make one yourself by hacking the firmware: https://news.ycombinator.com/item?id=40405578

> Be careful when you use something "exotic", and do not trust drives that are too recent to be fully tested

Do you realize the irony of cautioning against buying off-the-shelf hardware while recommending that people hack the firmware themselves? |
| |
| ▲ | userbinator 3 days ago | parent | next [-] | | That "firmware hack" is just enabling an option that manufacturers have always had (effectively a 100% "SLC cache") but almost never use, for reasons likely to do with planned obsolescence. | | |
| ▲ | vlovich123 3 days ago | parent [-] | | Converting a QLC chip into an SLC one is not planned obsolescence. It's a legitimate tradeoff made after analyzing the marketplace: existing write-endurance lifetimes are within acceptable consumer limits, and consumers would rather have more storage. Edit: and to preempt the "but make it an option" reply: that requires support software they may not want to build, plus support requests from users complaining that toggling SLC mode lost all their data, or that toggling QLC mode back on did the same. It's a valid business decision to not support that kind of product feature. | | |
| ▲ | Dylan16807 3 days ago | parent | next [-] | | And for the vast majority of use cases, even if QLC wears out, TLC would be fine indefinitely. Limiting it to SLC capacity would be ridiculous. | |
| ▲ | userbinator 2 days ago | parent | prev [-] | | I have USB drives with good old SLC flash whose data is still intact after several decades (rated for retention of 10 years at 55C after 100K cycles, and they have not been cycled anywhere near that much).

> and consumers would rather have more storage

No one from the manufacturers tells them that the "more storage" - multiplicatively more - lasts exponentially less. For the same price, would you rather have a 1TB drive that will retain data for 10 years after having written 100PB, or a 4TB one that will only hold that data for 3 months after having written 2PB?

> that requires support software they may not want to build

The software is already there if you know where to look.

> plus support requests from users complaining that toggling SLC mode lost all their data, or that toggling QLC mode back on did the same

Do they also get support requests from users complaining that they lost all their data after reformatting the drive?

> It's a valid business decision to not support that kind of product feature.

The only "valid business decision" here is to make things that don't last as long, so recurring revenue is guaranteed.

Finally, the "smoking gun" of planned obsolescence: SLC flash requires nowhere near as much ECC, and thus controller/firmware complexity, as MLC/TLC/QLC. It is also naturally faster. The NRE costs of controllers supporting SLC flash are a fraction of those for >1-bit-per-cell flash. QLC in particular, according to one datasheet I could find, requires ECC that can handle a bit error rate of 1E-2: one in a hundred bits read will be incorrect in normal operation of a QLC storage device. That's how idiotic it is --- they're operating at the very limits of error correction, just to get a measly 4x capacity increase over SLC, which is nearly perfect and needs very minimal ECC. All this energy and resource usage dedicated to making things more complex and shorter-lasting can't be considered anything other than planned obsolescence.

Contrast this with SmartMedia, the original NAND flash memory card format: rated for 100K-1M cycles, using ECC that only needs to correct at most 1 bit in 2048, and with such high endurance that it doesn't even need wear leveling.

Also consider that SLC drives should cost a little less than 4x the price of QLC ones of the same capacity, given the lower cost of developing controllers and firmware and the same price per NAND die. Yet those rare SLC drives which are sold cost much more --- they're trying to price them out of reach of most people, given how much better they actually are. | | |
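A back-of-the-envelope sketch of that ECC gap, using only the figures quoted above (2048 bits is the SmartMedia ECC unit mentioned in the comment; nothing else is assumed):

    # Expected raw bit errors per 2048-bit ECC chunk, from the figures above.
    chunk_bits = 2048
    qlc_ber = 1e-2  # QLC datasheet figure: ~1 bad bit per 100 bits read

    print(f"QLC: ~{chunk_bits * qlc_ber:.0f} expected raw errors per chunk")
    print("SmartMedia-era SLC: ECC only ever had to fix 1 bit per chunk")

That is roughly 20 expected raw errors in every chunk for QLC, against a format whose ECC never needed to correct more than one.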
| ▲ | vlovich123 2 days ago | parent [-] | | No, you're right. You've uncovered a massive conspiracy where they're out to get you.

> No one from the manufacturers tells them that the "more storage" - multiplicatively more - lasts exponentially less. For the same price, would you rather have a 1TB drive that will retain data for 10 years after having written 100PB, or a 4TB one that will only hold that data for 3 months after having written 2PB?

These numbers seem completely made up, since these drives come with a 1-year warranty and such a product would be a money loser.

> Also consider that SLC drives should cost a little less than 4x the price of QLC ones of the same capacity, given the lower cost of developing controllers and firmware and the same price per NAND die. Yet those rare SLC drives which are sold cost much more --- they're trying to price them out of reach of most people, given how much better they actually are.

You have demonstrated a fundamental lack of understanding of economics. When there's less supply (i.e. these products aren't getting made), things cost more. You are arguing that it's because these products are secretly too good, whereas the simpler explanation is just that the demand isn't there. | | |
| ▲ | userbinator 2 days ago | parent [-] | | > When there's less supply (i.e. these products aren't getting made), things cost more

SLC and QLC are literally the same silicon these days, just controlled by an option in the firmware; the former doesn't even need the more complex sense and program/erase circuitry of the latter, and yields of die which can function acceptably in TLC or QLC mode are lower. If anything, SLC can be made from reject or worn MLC/TLC/QLC, something that AFAIK only the Chinese are attempting. Yet virgin SLC die are priced many times higher, and drives using them are nearly impossible to find.

> such a product would be a money loser

You just admitted it yourself - they don't want to make products that last too long, despite them actually costing less to make. Intel's Optane is also worth mentioning as another "too good" technology. | | |
| ▲ | vlovich123 2 days ago | parent [-] | | I think you're casually dismissing the business costs associated with maintaining a SKU, and assuming manufacturing cost is the only thing that drives the final cost, which isn't strictly true. The lower volumes specifically are why costs are higher, regardless of it "just" being a firmware difference. |
|
|
|
|
| |
| ▲ | Brian_K_White 2 days ago | parent | prev [-] | | They did not recommend. They listed. |
|
|
| ▲ | sitkack 3 days ago | parent | prev | next [-] |
| Tape is extremely cheap now. I booted up a couple of laptops that had been sitting unpowered for over 7 years, and the SATA SSD in one of them has missing sectors. It had zero issues when it was shut down. |
| |
| ▲ | seszett 3 days ago | parent | next [-] | | Is tape actually cheap? Tape drives seem quite expensive to me, unless I don't have the right references. | | |
| ▲ | wtallis 3 days ago | parent | next [-] | | Tapes are cheap, tape drives are expensive. Using tape for backups only starts making economic sense when you have enough data to fill dozens or hundreds of tapes. For smaller data sets, hard drives are cheaper. | | |
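A rough break-even sketch of that claim; every price below is an illustrative assumption, not a quote:

    # When does tape beat hard drives? Toy model with assumed prices.
    drive_cost = 400        # one used LTO-6 drive (assumed)
    tape_cost = 20          # per LTO-6 tape (assumed)
    tape_tb = 2.5           # LTO-6 native capacity
    hdd_cost_per_tb = 15    # bulk HDD pricing (assumed)

    for n_tapes in (5, 20, 50, 100):
        per_tb = (drive_cost + n_tapes * tape_cost) / (n_tapes * tape_tb)
        print(f"{n_tapes:3d} tapes: ${per_tb:5.2f}/TB  (HDD: ${hdd_cost_per_tb}/TB)")

With these numbers, tape only drops below HDD pricing somewhere past a few dozen tapes, because the fixed drive cost dominates small libraries.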
| ▲ | sitkack 2 days ago | parent | next [-] | | Used LTO5+ drives are incredibly cheap; you can get a whole tape library with two drives and many tape slots for under $1K. Tapes are also way more reliable than hard drives. | |
| ▲ | AStonesThrow 3 days ago | parent | prev [-] | | HDDs are a pragmatic choice for “backup” or offline storage. You’ll still need to power them up, just for testing, and also so the “grease” liquefies and they don’t stick. Up through 2019 or so, I was relying on BD-XL discs, sized at 100GB each. The drives that created them could also write out M-DISC archival media, which was fearsomely expensive for a home user but could make sense for a small business. 100GB, spread over one or more discs, was plenty of capacity to save the critical data if I judiciously excluded disposable stuff such as ripped CD audio. |
| |
| ▲ | dogma1138 3 days ago | parent | prev [-] | | If you don’t have a massive amount of data to back up, used LTO5/6 drives are quite cheap; software and drivers are another issue, however, as with a lot of enterprise kit. The problem of course is that with tape you also need a backup tape drive on hand. Overall, if you get a good deal, you can have a reliable backup setup for less than $1000 with 2 drives and a bunch of tape. But this is only good if you have single-digit or low double-digit TBs to back up, since it’s slow, and with a single tape drive you’ll have to swap tapes manually. LTO5 is 1.5TB and LTO6 is 2.5TB (more with compression), which should be enough for most people. | |
| ▲ | Dylan16807 3 days ago | parent | next [-] | | > But this is only good if you have single-digit or low double-digit TBs to back up

That's not so enticing when I could get three 16TB hard drives for half the price, with a full copy on each drive plus some par3 files in case of bad sectors. | |
| ▲ | dogma1138 2 days ago | parent [-] | | You could; it’s really a question of what your needs are and what your backup strategy is. Most people don’t have that much data to back up. I don’t back up movies and shows I download, because I can always rebuild the library from scratch; I only back up stuff I create, so personal photos, videos, etc. I’m not using a tape backup either; cloud backup is enough for me, and it’s cheap as long as you focus your backups on what matters the most. |
| |
| ▲ | sitkack 2 days ago | parent | prev [-] | | I have used LTO5 drives under FreeBSD and Linux. Under Linux I used both LTFS and tar. There were zero issues with software. | |
| ▲ | dogma1138 2 days ago | parent [-] | | Older drives are a bit better, but still, YMMV. I had quite a few issues with Ethernet-based drives on Linux in the past. |
|
|
| |
| ▲ | CTDOCodebases 3 days ago | parent | prev | next [-] | | The issue with tape is that you have to store it in a temperature-controlled environment. | |
| ▲ | matheusmoreira 3 days ago | parent | prev | next [-] | | Tape sucks unless you've got massive amounts of money to burn. Not only are tape drives expensive, but they also only read the last two tape generations. It's entirely possible to end up in a future where your tapes are unreadable. | |
| ▲ | Dylan16807 2 days ago | parent [-] | | There's a lot of LTO drives around. I strongly doubt there will be any point in the reasonable lifetime of LTO tapes (let's say 30 years) where you wouldn't be able to get a correct-generation drive pretty easily. |
| |
| ▲ | fpoling 3 days ago | parent | prev [-] | | While tape is relatively cheap, tape drives are not. New ones typically start at $4K, although prices for older models can sometimes drop below $2K. | |
| ▲ | sitkack 3 days ago | parent [-] | | You can get LTO5+ drives on eBay for $100-400. Buying new doesn't make sense for a homelab. |
|
|
|
| ▲ | dragontamer 3 days ago | parent | prev | next [-] |
| If you care about long-term storage, make a NAS and run a ZFS scrub (or equivalent) every 6 months. That will check for errors and fix them as they come up. All error correction has a limit: if too many errors build up, they become unrecoverable. But as long as you reread and fix them while they are still within the error-correction limit, it's fine. |
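A minimal automation sketch of the above, assuming a hypothetical pool named "tank" and the stock zpool CLI; schedule it via cron or a systemd timer every 6 months:

    #!/usr/bin/env python3
    # Kick off a ZFS scrub (it runs in the background) and print pool status.
    import subprocess

    POOL = "tank"  # hypothetical pool name

    subprocess.run(["zpool", "scrub", POOL], check=True)
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True, check=True)
    print(status.stdout)  # shows scrub progress and any repaired data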
| |
| ▲ | csdvrx 3 days ago | parent | next [-] | | > run a ZFS scrub (or equivalent) every 6 months

ZFS in mirror mode offers redundancy at the block level, but a scrub requires plugging in the device.

> All error correction has a limit: if too many errors build up, they become unrecoverable

There are software solutions, and you can specify the redundancy you want. For long-term storage on a single medium that you can't plug in and scrub, I recommend par2 (https://en.wikipedia.org/wiki/Parchive?useskin=vector) over NTFS: there are many NTFS file-recovery tools, and it shouldn't be too hard to roll your own solution that uses the redundancy when a given sector can't be read (a minimal sketch follows below). | |
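A minimal sketch, assuming the standard par2cmdline tool is installed; the directory path and the 10% redundancy level are arbitrary example choices:

    #!/usr/bin/env python3
    # Create ~10% PAR2 recovery data alongside each file in an archive dir.
    # Later, "par2 verify" / "par2 repair" can detect and fix unreadable parts.
    import pathlib
    import subprocess

    archive = pathlib.Path("/mnt/archive")  # example path

    for f in sorted(archive.rglob("*")):
        if f.is_file() and ".par2" not in f.suffixes:
            # -r10 = generate roughly 10% recovery data for this file
            subprocess.run(["par2", "create", "-r10", f"{f}.par2", str(f)],
                           check=True)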
| ▲ | WalterGR 3 days ago | parent | prev | next [-] | | What hardware, though? I want to build a NAS / attached storage array but after accidentally purchasing an SMR drive[0] I’m a little hesitant to even confront the project. A few tens of TBs. Local, not cloud. [0] Maybe 7 years ago. I don’t know if anything has changed since, e.g. honest, up-front labeling. [0*] For those unfamiliar, SMR is Shingled Magnetic Recording. https://en.m.wikipedia.org/wiki/Shingled_magnetic_recording | | |
| ▲ | justinclift a day ago | parent | next [-] | | I have a homelab with a bunch of old HP Gen 8 Microservers. They hold 4x 3.5" HDDs and also an SSD (internally, replacing the optical slot): https://www.ebay.com/itm/156749631079 These are reasonably low power, and can take up to 16GB of ECC RAM, which is fine for small local NAS applications. The CPU is socketed, so I've upgraded most of mine to 4-core/8-thread Xeons. From rough memory of the last time I measured the power usage at idle, it was around 12W with the drives auto-spun down. They also have a PCIe slot in the back (though it's an older gen), and you'll be able to put a 10GbE card in it if that's your thing. Software-wise, TrueNAS works pretty well. Proxmox works "ok" too, but this isn't a good platform for virtualisation due to the maximum of 16GB RAM. | |
| ▲ | matheusmoreira 3 days ago | parent | prev | next [-] | | > What hardware, though? Good question. There seems to be no way to tell whether or not we're gonna get junk when we buy hard drives. Manufacturers got caught putting SMR into NAS drives. Even if you deeply research things before buying, everything could change tomorrow. Why is this so hard? Why can't we have a CMR drive that just works? That we can expect to last for 10 years? That properly reports I/O errors to the OS? | |
| ▲ | code_biologist 3 days ago | parent | prev | next [-] | | The Backblaze Drive Stats are always a good place to start: https://www.backblaze.com/blog/backblaze-drive-stats-for-202... There might be SMR drives in there, but I suspect not. | |
| ▲ | wmf 3 days ago | parent | prev | next [-] | | Nothing can really save you from accidentally buying the wrong model other than research. For tens of TBs you can use either 4-8 >20TB HDDs or 6-12 8TB SSDs (e.g. Asustor). The difference really comes down to how much you're willing to pay. | |
| ▲ | 3np 3 days ago | parent | prev | next [-] | | Toshiba Nx00/MG/MN are good picks. The company has never failed us, and I don't believe they've had the same kinds of controversies as the US competition. Please don't tell everyone so we can still keep buying them? ;) | |
| ▲ | dragontamer 2 days ago | parent | prev [-] | | SMR will store your data, just slowly. It was a mistake for the hard-drive business to push them so hard, IMO. But these days the 20TB+ drives are all HAMR or other heat/energy-assisted tech. If you are buying 8TB or so, just make sure to avoid SMR; otherwise you're fine. Even then, SMR stores data fine, it's just really, really slow. |
| |
| ▲ | ErneX 3 days ago | parent | prev [-] | | I use TrueNAS and it does a weekly scrub IIRC. |
|
|
| ▲ | AshamedCaptain 3 days ago | parent | prev [-] |
| > (careful about CMR vs SMR)

Given the context of long term storage... why? |
| |
| ▲ | 0cf8612b2e1e 3 days ago | parent | next [-] | | After I was bamboozled by an SMR drive, I think it's always great to call this out for those who might be unaware. What a piece of garbage technology, just to let vendors upsell higher capacity numbers. (Yes, I know some applications can be agnostic to SMR, but it should never be used in a general-purpose drive.) | |
| ▲ | whoopdedo 3 days ago | parent | prev [-] | | Untested hypothesis, but I would expect the wider spacing between tracks in CMR to make it more resilient against random bit flips. I'm not aware of any experiments to prove this, and it may be worth doing. If the HD manufacturers can convince us that SMR is just as reliable for archival storage, it would help them sell those drives, since right now lots of people are avoiding SMR due to poor performance and the infamy of the bait-and-switch that happened a few years back. |
|