wutwutwat 6 hours ago:
It's not just low level; in most cases it's also overkill. Most companies aren't "web scale"™ and don't need an orchestrator built for Google-level elasticity. They need a VM autoscaling group, if anything.

Most apps don't need such granular control over filesystem access, network policies, root access, etc. They need `ufw allow 80 && ufw enable`.

Most apps don't need a 15-stage, Docker-layer-caching-optimized, archive-promotion build pipeline that takes 30 minutes to ship a copy change to prod. They need `git clone me@github.com:me/mine.git release_01 && ln -s release_01 /var/www/me/mine/current`.

This is coming from someone who has had roles both as a backend product engineer and as a devops/platform engineer, and who has been around long enough to remember when "deploy" to prod meant Eclipse FTPing PHP files straight to the prod server on file save. I manage clusters for a living for companies that went full k8s and never should have gone full k8s. ECS would have worked for 99% of these apps, if they even needed that.

Just like the JS ecosystem went batshit insane until things started to swing back toward sanity and people started to trim the needless bloat, the same correction is coming, or overdue, for the overcomplexity of devops/backend deployments.
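The symlink-style deploy above can be sketched end to end. This is a minimal illustration, not a prescription: the paths, release names, and `mktemp` sandbox are stand-ins (a real setup would `git clone` into the release directory and point at `/var/www/me/mine`):

```shell
# Stand-in for /var/www/me/mine; a real deploy would use the actual docroot.
APP_ROOT="$(mktemp -d)"

# "Build" release 1 (in practice: git clone into this directory).
mkdir -p "$APP_ROOT/release_01"
echo "v1" > "$APP_ROOT/release_01/index.html"

# Cut over: -sfn replaces the "current" link itself rather than
# creating a link inside the target directory.
ln -sfn "$APP_ROOT/release_01" "$APP_ROOT/current"

# Ship release 2 the same way; rollback is just re-pointing the link.
mkdir -p "$APP_ROOT/release_02"
echo "v2" > "$APP_ROOT/release_02/index.html"
ln -sfn "$APP_ROOT/release_02" "$APP_ROOT/current"

cat "$APP_ROOT/current/index.html"   # prints: v2
```

The web server only ever serves `current`, so a deploy or rollback is one `ln -sfn` away and old releases stay on disk for instant reverts.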
valzam 4 hours ago:
If `git clone me@github.com:me/mine.git release_01 && ln -s release_01 /var/www/me/mine/current` works for you, then your Docker builds should also be extremely quick. Where I have seen extremely slow Docker builds is with Python services using ML libraries, but those I really don't want to be building on the production servers.

"ECS would have worked for 99% of these apps, if they even needed that." I used to agree with that, but is EKS really that much more complicated? Yes, you pay for the k8s control plane, but you gain tooling that is IMHO much easier to work with than IaC.