▲ | simonw 5 days ago |
I wonder if it's where old S3 hard drives go to die? Presumably AWS has the world's single largest collection of used storage devices - if you RAID them up you can probably get reliable performance out of them for Glacier?
▲ | donavanm 4 days ago | parent | next [-]
Not quite. Hardware carrying customer data or corp IP (e.g. any sort of storage or NVRAM) doesn't leave the "red zone"[1] without being destroyed. And reusing EOL hardware is a nightmare of failure rates and consistency issues. It's usually more cost effective to scrap the entire rack once it's depreciated, or at the 4-5 year mark at most. More revenue is generated by replacing the entire rack with new hardware that will make better use of the monthly recurring cost (MRC) of that rack position/power whips/etc.

[1] https://www.aboutamazon.com/news/aws/aws-data-center-inside
▲ | bob1029 5 days ago | parent | prev | next [-]
I still don't know whether it's possible to make this kind of arrangement profitable with old drives, especially if they intend to hit their extreme durability figures. The cost of keeping drives spinning is low, but it amounts to a double-digit percentage of margin in this context. You can't leave drives unpowered in a warehouse for years on end and claim 11+ nines of durability.
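Back-of-the-envelope version of that durability point (all numbers here are hypothetical, not AWS figures): "nines" of durability fall directly out of the per-replica failure rate, so letting the effective failure rate creep up on unpowered, unrepaired drives destroys nines fast.

```python
import math

def durability_nines(annual_failure_rate: float, replicas: int) -> float:
    """Nines of durability for n-way replication with independent failures
    and, crucially, no repair: an unpowered warehouse can't re-replicate.
    p_loss = AFR ** replicas, nines = -log10(p_loss)."""
    p_loss = annual_failure_rate ** replicas
    return -math.log10(p_loss)

# Healthy powered drives at an assumed 2% AFR, 3 replicas: ~5.1 nines.
print(round(durability_nines(0.02, 3), 1))
# Aged drives whose effective failure rate drifts to 20%: ~2.1 nines.
print(round(durability_nines(0.20, 3), 1))
```

Even this toy model shows that 11 nines is unreachable without active scrubbing and repair; real systems get there by continuously detecting failures and re-replicating, which requires the drives to be powered.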
| ||||||||||||||
▲ | lijok 5 days ago | parent | prev [-]
You don't RAID old drives, as it creates cascading failures: rebuilding a failed drive adds major wear to the surviving drives.
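A rough sketch of that cascade risk (the AFR, group size, rebuild time, and wear multiplier below are all assumed for illustration): during a rebuild the surviving drives see elevated stress right when the array has lost its redundancy, so worn drives with longer rebuilds face a sharply higher chance of a second failure.

```python
def p_second_failure(afr: float, drives: int, rebuild_hours: float,
                     wear_multiplier: float = 3.0) -> float:
    """Probability that at least one surviving drive fails during a rebuild,
    modeling a constant hourly hazard rate scaled up by rebuild wear."""
    hourly_rate = afr / (365 * 24)
    stressed_rate = hourly_rate * wear_multiplier
    # P(at least one of the drives-1 survivors fails in the window)
    return 1 - (1 - stressed_rate * rebuild_hours) ** (drives - 1)

# Newer drives: assumed 2% AFR, 12-drive group, 24h rebuild: ~0.2% risk.
print(f"{p_second_failure(0.02, 12, 24):.4f}")
# Worn drives: assumed 10% AFR and a slower 72h rebuild: ~2.7% risk.
print(f"{p_second_failure(0.10, 12, 72):.4f}")
```

The model is crude (it ignores correlated failures from shared age and batch, which make the real picture worse), but it captures why rebuild-induced wear on an array of old drives compounds instead of averaging out.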