| ▲ | spiralcoaster 3 days ago |
The amount of self-aggrandizement and lack of self-awareness tells me this author is going to do all of this again. This post could be summed up with "I should have had backups. Lesson learned", but instead they deflect to whining about how their local desktop is a mess and they NEED to store everything remotely to stay organized. They're going to dazzle you with all of their hardened bunker this, and multiple escape route that, not realizing all of their complex machinery is metaphorically running off a machine with no battery backup. One power outage and POOF! |
|
| ▲ | gblargg 3 days ago | parent | next [-] |
| The author doesn't grasp what putting all your eggs into one basket means: > Before anyone says “you put all your eggs in one basket,” let me be clear: I didn’t. I put them in one provider, with what should have been bulletproof redundancy: That's one basket. A single point of failure. "But it should have been impossible to fail!" Backups are to handle the "impossible" failure (in reality nothing is 100% reliable). |
| |
▲ | davidhyde 3 days ago | parent | next [-] | | This one time, traveling through Asia, a simple merchant transaction triggered a fraud alarm on my card. The default for my bank at the time was to cancel my card automatically. This was before the days when cards could be unblocked. I had to travel to another city to pick up a new card in 10 working days. This was a Mastercard credit card. I thought I was smart traveling with both a Mastercard and a Visa card. Well, the Visa card was automatically cancelled too, due to the same event. No cards for me to use to get to that city, and I had to resort to a dodgy Western Union transfer to move forward. Also, try booking a flight with cash, it’s not fun. My point is that the basket that eggs are put in is not always obvious until hindsight. I wasn’t even aware that Mastercard and Visa shared fraud alerts and that they were automatically linked. The author’s article is not about backups, it’s about accountability. | | |
| ▲ | mcv 3 days ago | parent | next [-] | | Visa and Mastercard are also an excellent example of two companies we trust way too much despite having proven many times they don't deserve that trust. | |
▲ | seuros 3 days ago | parent | prev [-] | | My post is not about backup strategy, it’s about what happens when the infrastructure itself becomes hostile, and support throws you from one team to another. AWS didn't just delete files. They gaslit me for 20 days while violating their own stated policies. |
| |
▲ | Tohsig 3 days ago | parent | prev [-] | | Fully agree. It's the same reason why you wouldn't put a repo on GitHub and then mirror that repo to GitHub. At a minimum any good mirror would be on a different provider, and ideally (if we get really picky) on a completely different cloud service. | | |
▲ | seuros 3 days ago | parent [-] | | When you put a repo on GitHub, everybody who forked or cloned that repo becomes a mirror. |
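For what it's worth, a mirror on a second host is cheap to automate. A minimal sketch in Python, assuming git is installed and the script runs from inside the clone; the remote name and URL below are placeholders, not anything from the thread.

    # mirror_repo.py -- push a full mirror of a local clone to a second git host
    # (sketch only; remote name and URL are placeholders, run inside the clone)
    import subprocess

    BACKUP_REMOTE = "backup"
    BACKUP_URL = "git@gitlab.example.com:me/myrepo.git"  # hypothetical second provider

    # add the second remote; fails harmlessly if it already exists (no check=True)
    subprocess.run(["git", "remote", "add", BACKUP_REMOTE, BACKUP_URL])

    # --mirror pushes every ref (branches, tags, notes) so the copy tracks origin
    subprocess.run(["git", "push", "--mirror", BACKUP_REMOTE], check=True)

Run on a schedule, this keeps a full-history copy with a provider that fails independently of the first one, which is the point Tohsig is making.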
|
|
|
| ▲ | Jedd 3 days ago | parent | prev | next [-] |
Yeah, that post was hard to read. I'll concede that I'm hugely empathetic toward people who suffer data loss. The pithy aphorism about there being two types of people -- those who haven't lost data, and those who do backups -- is doubly droll because only the second group really appreciates the phrase. But it's surprising to find people with more than a decade in IT who don't appreciate the risks here. The timeline reveals there were 13 days from when the first signs of trouble surfaced to when the account was deleted. So a fortnight of very unsubtle reminders to do something AND a fortnight in which to act. (I recently learned the phrase BATNA[0] and in modern <sic> IT where it's Turtles as a Service, all the way down, it's amazing how often this concept is applicable.) Author seems very keen to blame his part-time sysadmin rather than his systems architect. I can understand the appeal of that blame distribution algorithm, but it's nonetheless misguided. The phrasing: > But here’s the dilemma they’ve created: What if you have petabytes of data? How do you backup a backup? inverts the horse & cart. If you have a petabyte of data that's important, that you can't recreate from other sources, your concern is how to keep your data safe. If you're paying someone to keep a copy, pay at least one other person to keep another copy. Even that isn't something I'd call safe though. [0] https://en.wikipedia.org/wiki/Best_alternative_to_a_negotiat... |
|
| ▲ | mcv 3 days ago | parent | prev | next [-] |
Yeah, at some point the article says: > I’d done everything right. Vault encryption keys stored separately from my main infrastructure. Defense in depth. Zero trust architecture. The works. Did you? Is putting all your eggs in one basket "defense in depth"? Is total trust in AWS "zero trust architecture"? I'm not defending AWS here; they fully deserve all the fallout they can get from this, and I do feel for the dev who lost all their stuff through AWS's fuckup. Lots of people do the same. My current employer does the same. It's a major bank, and all of their stuff is Microsoft. Azure, SharePoint, Office, Teams, the works. I think it's foolish to trust a single foreign company with all your vital data and infrastructure, operating in a country where the government demands access to everything, but this is what everybody does now. We trust "the cloud" way too much, and expose ourselves to these sorts of fuckups. |
| |
▲ | seuros 3 days ago | parent [-] | | I don't disagree with your broader point: centralizing everything in one provider is a systemic risk. The architecture was built assuming infrastructure within AWS might fail. What I didn’t plan for was the provider itself turning hostile, skipping their own retention policy, and treating verification as a deletion trigger. | | |
▲ | throwaway413121 3 days ago | parent | next [-] | | > The architecture was built assuming infrastructure within AWS might fail. From what I gather it was not. Or did you have a strategy for a 0-warning complete AWS service closure? Just imagine AWS closing their doors from one day to the next due to economic losses, or due to judicial inquiries into their illegal practices: were you really prepared for their failure? The cloud was never data living in tiny rain droplets and swimming across the earth to our clients. The cloud was always somebody else's computer(s) that they control, and we don't. I'm sorry you learnt that lesson the hard way. | |
▲ | darkwater 21 hours ago | parent | prev | next [-] | | If you have data that is very important to you and you don't pay very high bills to AWS, you should really keep at least a cold backup somewhere else (even on your own hardware). If you have a big cloud account and pay big money every month, then at least with AWS you are in a pretty safe spot, even if people here will say otherwise. And if you have a similar horror story with a monthly AWS invoice in the tens or hundreds of thousands of dollars (or more), please speak up, I'm very curious to learn what happened. | |
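A cold copy on hardware you own does not need to be elaborate either. A rough sketch with boto3, pulling every object in a bucket down to a local disk; the bucket name and destination path are made up for illustration.

    # cold_backup.py -- pull every object in a bucket down to a local disk
    # (sketch only; bucket name and destination path are hypothetical)
    import os
    import boto3

    BUCKET = "my-important-bucket"      # placeholder
    DEST = "/mnt/backup-disk/s3-cold"   # e.g. an external drive you own

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):       # skip zero-byte "folder" placeholder keys
                continue
            local_path = os.path.join(DEST, key)
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(BUCKET, key, local_path)
            print("saved", key)

Run from cron on a machine the provider cannot touch, this is the cheapest version of "the account dies, the data does not".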
▲ | mcv 3 days ago | parent | prev [-] | | It probably wasn't even hostility; it's just that the account system is also an infrastructure component. And when that fails, everything fails. Tying everything to a single account creates a single point of failure. It's one of the reasons I don't use my Google account for everything (another is that I don't want them to know everything about me), and I strongly dislike Google's and Microsoft's attempts to force their accounts on me for everything. |
|
|
|
| ▲ | lamontcg 3 days ago | parent | prev | next [-] |
| > Me: “You’re answering like I’m Piers Morgan asking ‘Do you condemn October 7th?’ and you reply with historical complexity dating to 1948.” Yeah... If I'm working tickets at AWS that kind of dickishness is going to ensure that I don't do more than the least amount of effort for you. Maybe I could burn my entire weekend trying to see if I can rescue your data... or maybe I'm going to do nothing more than strictly follow procedure and let my boss know that I tried... |
|
| ▲ | 2 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | seuros 3 days ago | parent | prev [-] |
I did have backups. Multi-region. Redundant. I followed AWS’s own best practices to the letter. The only failure I didn’t plan for? AWS becoming the failure. The provider nuking everything in violation of their own retention policies. That’s not a backup problem, that is a provider trust problem. The reason I didn't keep a local copy was that I formatted my computer after a hardware failure, after the nurse dropped the laptop in the hospital I was in. Since I had an AWS backup, I just started with a fresh OS while waiting to be discharged so I could return home and redownload everything. When I returned 6 days later, the backup was gone. |
| |
▲ | throwaway413121 3 days ago | parent [-] | | As someone who has lost data myself, I'm really sorry this happened to you. I refrained from commenting on your article originally, but you seem stuck in a mental state blaming AWS for deleting your "backups" that you established with "best practices". But you need to be aware that you never had backups in the way most sysadmins mean. If I need a friend to take care of a loved one while I'm away, and my backup plan is having the same person take care of them but in a different house or with a different haircut, that's no backup plan: that's bus factor = 1. Backups mean having a second (or third, etc.) copy of your data stored with a third party. Backup assumes you have an original copy of the entirety of the data to begin with. From that standpoint, and I'm sorry it bit you like this, you never followed any good sysadmin practices about backups and disaster recovery. I have no idea what AWS best practices say, but trusting a single actor (whether a hardware manufacturer or a services provider) with all your data has always been against the 3-2-1 golden rule of backups, and what happened to you was inevitable. Blame AWS all you want, but Google does exactly the same thing all the time, deleting 15-year-old accounts with all associated data and no recourse. Some of us thought the cloud was safe and had all their "cross-region" backups go up in flames in OVH Strasbourg. We could never trust cloud companies, and some of us never did. I never trusted AWS with my data, and I'm sorry you made that mistake, but you may also take the opportunity to learn how to handle backups properly in the future and never trust a single egg basket, or whatever metaphor is more appropriate. Good luck to you in the future! |
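For anyone unfamiliar with the 3-2-1 rule mentioned above (three copies, on two different kinds of media, one of them off-site), a minimal sketch; every path below is hypothetical, and the off-site leg could just as well be rclone or restic pointed at a non-AWS target.

    # three_two_one.py -- sketch of the 3-2-1 rule: live data plus two more copies,
    # one of them off-site (every path here is made up for illustration)
    import datetime
    import os
    import shutil
    import tarfile

    SOURCE = "/home/me/projects"            # copy 1: the live data
    LOCAL_ARCHIVE_DIR = "/mnt/second-disk"  # copy 2: a different physical medium
    OFFSITE_DIR = "/media/usb-offsite"      # copy 3: a drive that leaves the building

    os.makedirs(LOCAL_ARCHIVE_DIR, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = os.path.join(LOCAL_ARCHIVE_DIR, f"projects-{stamp}.tar.gz")

    # pack the source tree into a dated archive on the second disk
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname="projects")

    # the third copy goes off-site: a rotated drive here, but it could just as
    # well be a push of the same archive to a provider other than the first one
    os.makedirs(OFFSITE_DIR, exist_ok=True)
    shutil.copy(archive, OFFSITE_DIR)

The tooling matters less than the split: no single account, provider, or building can take out all three copies at once.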
|