rcrowley | 4 days ago
You don't (typically) lose the data on the ephemeral drive across a reboot, but you definitely can (and do!) when there are more permanent hardware failures. (They really happen!) That's why PlanetScale always maintains at least three copies of the data. We guarantee durability via replication, not by trusting the (slow, network-attached) block device. I did an interview all about PlanetScale Metal a couple of months ago: https://www.youtube.com/watch?v=3r9PsVwGkg4
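For anyone picturing how "durability via replication" works in general terms, here is a minimal sketch of a majority-acknowledged write. This is not PlanetScale's actual protocol; the `Replica` class, the three-AZ setup, and the failure model are invented purely for illustration.

```python
# Toy majority-ack write, illustrating durability via replication.
# Not PlanetScale's implementation; the names and failure model are invented.

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []        # stands in for the replica's local NVMe drive
        self.alive = True

    def append(self, record):
        if not self.alive:
            raise ConnectionError(f"{self.name} is unreachable")
        self.log.append(record)
        return True

def replicated_write(replicas, record):
    """Acknowledge the write only after a majority of replicas hold a copy."""
    quorum = len(replicas) // 2 + 1
    acks = 0
    for r in replicas:
        try:
            if r.append(record):
                acks += 1
        except ConnectionError:
            continue         # a down replica simply doesn't count toward the quorum
    if acks < quorum:
        raise RuntimeError("not durable: quorum not reached")
    return acks

replicas = [Replica(az) for az in ("az-1", "az-2", "az-3")]
replicas[2].alive = False    # one machine (and its local drive) is gone for good
print(replicated_write(replicas, {"id": 1, "value": "hello"}))  # 2 acks: still durable
```

The point of the pattern is that an acknowledged write already exists on multiple machines, so losing any one machine's local drive cannot lose acknowledged data.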
n_u | 4 days ago
Hi, thank you for your work on this and for being willing to answer questions about it. "We guarantee durability via replication" - I've started noticing this pattern more, where distributed systems provide durability by replicating data across machines rather than relying on a durable write to a single disk, getting the best of both worlds. I'm curious:

1. Is there a name for this technique?

2. How do you calculate your availability? This blog post [1] has some rough details but I'd love to see the math.

3. I'm guessing a key part of this is putting the replicas in different AZs and assuming failures aren't correlated, so you can multiply the probabilities directly. How do you validate that failures across AZs are statistically independent?

Thanks!

[1] https://planetscale.com/blog/planetscale-metal-theres-no-rep...
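On question 2, a back-of-the-envelope version of the math under the independence assumption that question 3 pokes at (the failure probability below is a made-up placeholder, not a PlanetScale figure):

```python
# Back-of-the-envelope durability math under an independence assumption.
# p_fail is a made-up placeholder, not a PlanetScale number.

p_fail = 1e-3     # assumed chance one replica is permanently lost during a repair window
replicas = 3

# If AZ failures are truly independent, the chance of losing every copy in the
# same window is the product of the individual probabilities:
p_lose_all = p_fail ** replicas
print(f"P(lose all {replicas} copies) = {p_lose_all:.0e}")   # 1e-09

# If failures are correlated (shared power, shared control plane, a bad firmware
# rollout), the multiplication no longer holds and the true risk is higher.
```

If AZ failures are correlated, the product underestimates the real risk, which is exactly why question 3 about validating independence matters.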