mmooss 11 hours ago

In a situation like this one, good crisis leadership is essential. dang, HN will help you with tips from its vast collected experience (please chip in):

1. Blame: The first thing to do is to point the finger. That doesn't mean analysing the technical issue, which can delay this step and limit your options, but figuring out who is politically easiest to blame. Often that's the new guy, but outside contractors and vendors without good connections are also a common solution. Even if you are technically responsible for hiring them, you can always throw them under the bus with a little skill. This small sacrifice helps unify, focus, and motivate the rest of the team.

2. Emotion: Inject your emotion into the situation and make it the implicit but indisputable priority. In particular, outrage and anger - This is completely _____. These people are utterly _____ (I'd use all caps, but that's not allowed on HN). Make sure everyone's attention is over their shoulder, on your emotion, and infect the team with it. Threats are an effective tool here - this is a crisis, and anyone who is calm is not emotionally engaged and won't care enough about the problem; without you driving them, they probably wouldn't care much at all. Anyway, you don't have time for niceties like empathy or even basic respect.

3. Speed: Responsiveness to stakeholders is very important. People need answers now. Give them the answers they want to hear and outcomes they will be comfortable with. Don't worry if different groups hear different things; your team will find a way to make it all work - that's their job.

4. Communication: Good communication is essential. Make sure you clearly tell your team what they should be doing, and repeat it several times to prevent misunderstanding. People with experience, especially, can have minds of their own; keep them on track. The situation is a crisis, so you can't take any risks: stay on top of them and everything they do, and give input whenever you're not certain they are doing exactly what you would be doing.

5. Victimhood: Find a way to turn the tables: make it about you and how you're the victim here, and feed the fire with more outrage. Between the victimhood and the outrage, nobody will undermine the team by challenging your ideas or authority, which is the most essential component of a successful outcome. Remember: without you, this all falls apart.

Have I missed anything?

yearolinuxdsktp 6 hours ago

Engagement: make sure that every member of the team is either on the incident bridge or has dropped what they are doing to watch you diagnose the problem. The more eyes on the problem, the more fully the pain will be absorbed by all. If team members need to leave to get food or put children to bed, tell them to order delivery and to ask their spouse to handle bedtime. Demonstrate the human touch by allowing them to turn off their cameras while they eat.

Comprehensiveness: propose extreme, sweeping solutions, such as a lights-out restart of all services, shutting down all incoming requests, and restoring everything from yesterday's backup. This demonstrates that you are ready to address the problem in a maximally comprehensive way. If someone suggests a config-change rollback or a roll-forward patch, ask them why they are gambling company time on localized changes, and why they are willing to gamble company time on technical analysis.

Root Cause Analysis Meeting: spend the entire meeting rehashing the events, pointing fingers, and assigning blame. Be sure to mention how the incident could've been over sooner if you had just restarted and rolled back every single thing. Demonstrate out-of-the-box thinking by discussing unrealistically grandiose solutions. When the time is up, run the meeting over by 30 minutes and force everyone to stay while realistic solution ideas are finally discussed in overtime. This makes it clear to the team that nothing is more important than this incident's RCA--their time surely is not. If someone asks to tap out to pick their kids up after school, remind them that they make enough money to call the kids an Uber.

Alerting: be sure to identify anything remotely resembling a leading indicator, and add Critical-level wake-you-up alerts with sensitive thresholds for each one. Database exceeding 50% CPU? Critical! Filesystem queue length exceeding 5? Critical! Heap usage over 50%? Critical! 100 errors in one minute on a 100,000-requests-per-minute service? Critical! A single log line indicating DNS resolution failure anywhere in the system? Critical! (What if AWS's DNS is down again?) Service request rate 10% higher than typical peak? Critical! If anyone objects to such critical alerts, ask them why they want to be responsible for not preventing the next incident.
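
For the skeptics, a minimal sketch of what this looks like in practice (Python; the metric names, thresholds, and the page_everyone helper are all invented for illustration, not any real monitoring API):

    # Hypothetical paging rules -- every metric name and threshold below is
    # made up for illustration. All of them page at Critical, naturally.
    CRITICAL_ALERTS = [
        ("db_cpu_percent",        lambda v: v > 50),    # half of capacity? panic
        ("fs_queue_length",       lambda v: v > 5),     # five whole requests waiting
        ("heap_used_percent",     lambda v: v > 50),    # the GC might cope, but why risk it
        ("errors_per_minute",     lambda v: v > 100),   # 0.1% of 100k req/min
        ("dns_failure_log_lines", lambda v: v >= 1),    # one log line, anywhere, ever
        ("req_rate_vs_peak",      lambda v: v > 1.10),  # success is also an emergency
    ]

    def page_everyone(metrics):
        """Return every Critical page fired by one metrics snapshot."""
        return [name for name, trips in CRITICAL_ALERTS
                if trips(metrics.get(name, 0))]

Keep tuning the thresholds down until page_everyone never returns an empty list.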