Ephemeral Infrastructure: Why Short-Lived Is a Good Thing (lukasniessen.medium.com)
30 points by birdculture 5 days ago | 13 comments
hermitcrab a minute ago
All software is ephemeral on a human timescale, isn't it? | |||||||||||||||||||||||
cortesoft 34 minutes ago
I do appreciate the way Kubernetes forces you to plan for instance failure from the beginning, and that it creates standards for how to deal with it. However, I feel like this article glosses over the challenge of stateful workloads by simply handing that responsibility over to the cloud providers. A lot of us have to run our own servers in our own datacenters for various reasons, so we have to solve that problem ourselves.

Luckily, the same principles apply to stateful workloads; it's just more challenging. You have to plan for instance failures while still preserving your data. Even more luckily, the tools for this have gotten better and better. Various database controllers are getting much better at handling clustering and failover for you, so you can handle instances and nodes going down without losing data and without having to outsource the management to the cloud.
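For illustration, a sketch of one small piece of that: a PodDisruptionBudget that keeps a clustered database at quorum while nodes get drained and replaced. This uses the official Kubernetes Python client; the three-replica Postgres StatefulSet and the db namespace are made-up stand-ins, not anything from the article.

    # Sketch: cap voluntary disruptions so a clustered database keeps
    # quorum while nodes are drained and replaced one at a time.
    # Assumes a 3-replica StatefulSet labeled app=postgres (hypothetical).
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() in-cluster

    pdb = client.V1PodDisruptionBudget(
        api_version="policy/v1",
        kind="PodDisruptionBudget",
        metadata=client.V1ObjectMeta(name="postgres-pdb", namespace="db"),
        spec=client.V1PodDisruptionBudgetSpec(
            max_unavailable=1,  # evict at most one replica at a time
            selector=client.V1LabelSelector(match_labels={"app": "postgres"}),
        ),
    )
    client.PolicyV1Api().create_namespaced_pod_disruption_budget(
        namespace="db", body=pdb
    )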
xyzzy_plugh 3 hours ago
I've written this about four times for two employers and two clients: ABC: Always Be Cycling.

The basic premise is to encode, via lifecycle rules or a cron, behavior such that instances are cycled after at most 7 days, but there should always be an instance cycling (with some cool-down period, of course). It has never not improved overall system stability, and in a few cases it even decreased costs significantly.
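A minimal sketch of the cron flavor, assuming AWS, boto3, and an ASG that replaces whatever gets terminated; the abc:cycle tag and the 7-day threshold are illustrative choices, not gospel:

    # ABC sketch: run on a schedule; terminate the single oldest tagged
    # instance once it passes MAX_AGE, so the fleet cycles continuously.
    from datetime import datetime, timedelta, timezone
    import boto3

    MAX_AGE = timedelta(days=7)
    ec2 = boto3.client("ec2")

    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:abc:cycle", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instances = [i for r in resp["Reservations"] for i in r["Instances"]]
    oldest = min(instances, key=lambda i: i["LaunchTime"], default=None)

    if oldest and datetime.now(timezone.utc) - oldest["LaunchTime"] > MAX_AGE:
        # An ASG (or similar) is assumed to replace what we remove.
        ec2.terminate_instances(InstanceIds=[oldest["InstanceId"]])

Running it on a schedule (say, every 30 minutes) gives you the cool-down for free, since at most one instance dies per run.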
kennethwolters an hour ago
For me it feels like: everything is stateful by default, or at least by convenience. Building robust systems is in part about confining statefulness to as few parts as possible; containing it buys you some time and capacity. Yet the toughest problems often arise in the stateful parts of the system, as well as in quasi-stateless parts that develop hidden statefulness (think of syncing web-client and server state). So being good at handling stateful systems is valuable. Maybe one should even embrace statefulness. The AWS Solutions Architect will tell you otherwise, however.
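To make "confining statefulness" concrete, a toy sketch of my own (nothing from the article): handlers hold nothing between requests and push all mutable state into one external store, so only that store needs careful failover. Redis and the key scheme are arbitrary choices here.

    # Sketch: a stateless request handler; the only stateful part is the
    # external store, so app instances can be killed and replaced freely.
    import redis

    store = redis.Redis(host="sessions.internal", port=6379)

    def handle_request(session_id: str, item: str) -> int:
        # All state lives in the store; this process holds nothing
        # between requests and can be cycled at any time.
        store.rpush(f"cart:{session_id}", item)
        return store.llen(f"cart:{session_id}")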
drob518 2 hours ago
This seems to be rediscovering "pets vs. cattle." | |||||||||||||||||||||||
N_Lens 4 hours ago
I think most of us learned this from an early age - computer systems often degrade as they keep running and need to be reset from time to time. I remember when I had my first desktop PC at home (Windows 95) and it would need a fresh install of Windows every so often as things went off the rails. | |||||||||||||||||||||||
godber 2 hours ago
Nice post. One more thing to keep in mind with your StatefulSets is how long the service running in the pod takes to come back up. Many will scan the on-disk state for integrity and perform recovery tasks. These can take a while, during which the overall service is in a degraded state. Manage these things and any stateful distributed service can run easily in Kubernetes.
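One way to budget for that recovery window is a generous startupProbe, so the kubelet doesn't kill a pod that is legitimately replaying its state. A sketch with the Kubernetes Python client; the endpoint, image, and numbers are placeholders, and you'd embed this in the StatefulSet's pod template:

    # Sketch: give a stateful pod up to ~30 minutes to scan/recover its
    # on-disk state before liveness checks can kill it.
    from kubernetes import client

    startup_probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        period_seconds=10,
        failure_threshold=180,  # 180 * 10s = 30 min of recovery headroom
    )
    container = client.V1Container(
        name="db",
        image="example/db:1.0",  # hypothetical image
        startup_probe=startup_probe,
        # Liveness/readiness probes only take over once startup succeeds,
        # so a long recovery no longer looks like a crash loop.
    )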
preisschild 4 hours ago
Have been doing this in production for years now with Cluster API + Talos. When I update the Kubernetes or Talos version, new nodes are created, and after the existing pods are rescheduled onto the new nodes, the old nodes are deleted. Works pretty well.
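For anyone who hasn't seen it, the trigger is usually just patching the version field on a Cluster API MachineDeployment; CAPI then rolls the nodes. A sketch via the Kubernetes Python client, with placeholder names and namespace:

    # Sketch: bump the Kubernetes version on a CAPI MachineDeployment.
    # Cluster API creates new Machines at the new version and deletes
    # the old ones once workloads have rescheduled.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    api.patch_namespaced_custom_object(
        group="cluster.x-k8s.io",
        version="v1beta1",
        namespace="default",
        plural="machinedeployments",
        name="workers",
        body={"spec": {"template": {"spec": {"version": "v1.30.0"}}}},
    )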