FridgeSeal 19 hours ago

FWIW I’ve been using ECS at my current work (previously K8s) and to me it feels just flat-out worse:

- only some of the features

- none of the community

- all of the complexity but none of the upsides.

It was genuinely a bit shocking that it was considered a serious product seeing as how chaotic it was.

avandekleut 18 hours ago | parent

Can you elaborate on some of the issues you faced? I was considering deploying to ECS Fargate as we are all-in on AWS.

FridgeSeal 18 hours ago | parent | next

Any kind of git-ops style deployment was out.

ECS merges “AWS config” and “app/deployment config” together, so it was difficult to separate what should go in TF from what is runtime app configuration. In comparison this is basically trivial ootb with K8s.
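
To make that concrete, here’s a rough sketch with boto3 (names and values are made up): the task definition is one blob holding both the infra-level settings Terraform should own and the app-level runtime config, so there’s no natural seam to split them on.

```python
# Hypothetical sketch: an ECS task definition mixes infra config (CPU, memory,
# IAM role, network mode) with app runtime config (image tag, env vars) in a
# single object, which is what makes the TF vs app-config split awkward.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-service",                       # hypothetical service name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",                                 # infra-level sizing...
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest",
        "environment": [                       # ...sitting right next to app-level runtime config
            {"name": "LOG_LEVEL", "value": "debug"},
            {"name": "FEATURE_FLAG_X", "value": "on"},
        ],
    }],
)
```

With K8s the equivalent runtime config would typically sit in a ConfigMap or Secret owned by the app team and referenced from the Deployment, which is the separation you get basically for free.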

I personally found a lot of the moving parts and names needlessly confusing. A “Task”, for example, is not the equivalent of a “Deployment”.

Want to just deploy something like Prometheus Agent? Well, too bad, the networking doesn’t work the same, so here’s some overly complicated guide where you have to deploy some extra stuff which will no doubt not work right the first dozen times you try. Admittedly, Prom can be a right pain to manage, but the fact that ECS makes you do _extra_ work on top of an already fiddly piece of software left a bad taste in my mouth.

I think ECS gets a lot of airtime because of Fargate, but you can use Fargate on K8s these days, or, if you can afford the small increase in initial setup complexity, you can just have Fargate’s less expensive, less restrictive, better sibling: Karpenter on Spot instances.
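
For reference, the Karpenter-on-Spot setup is roughly the following, sketched here with the official kubernetes Python client. This assumes Karpenter is already installed with an EC2NodeClass named “default”; the exact API version and field names depend on your Karpenter release, so treat it as illustrative rather than copy-paste.

```python
# Rough sketch: a Karpenter NodePool that only requests Spot capacity.
# Assumes Karpenter and an EC2NodeClass called "default" already exist.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

node_pool = {
    "apiVersion": "karpenter.sh/v1",            # may be v1beta1 on older releases
    "kind": "NodePool",
    "metadata": {"name": "spot-general"},       # hypothetical name
    "spec": {
        "template": {
            "spec": {
                "nodeClassRef": {               # points at the pre-existing EC2NodeClass
                    "group": "karpenter.k8s.aws",
                    "kind": "EC2NodeClass",
                    "name": "default",
                },
                "requirements": [
                    # the key bit: restrict provisioning to Spot capacity
                    {"key": "karpenter.sh/capacity-type",
                     "operator": "In",
                     "values": ["spot"]},
                ],
            }
        },
    },
}

api.create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=node_pool
)
```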

physicsguy 10 hours ago | parent

I think the initial setup complexity is less with ECS personally, and the ongoing maintenance cost is significantly worse on K8s when you run anything serious, which leads to people taking shortcuts.

Every time you have a cluster upgrade with K8s there’s a risk something breaks. For any product at scale, you’re likely to be using things like Istio and Metricbeat. You have a whole level of complexity in adding auth to your cluster on top of your existing SSO for the cloud provider. We’ve had to spend quite some time changing the plugin for AKS/EntraID recently, which has also meant a change in workflow for users. Upgrading clusters can break things since plenty of stuff (less these days) lives in beta API versions, and there’s no LTS.

Again, it’s less bad than it was, but many core things live(d) in cluster plugins, which have a risk of breaking when you upgrade the cluster.

My view was that the initial startup cost for ECS is lower and once it’s done, that’s kind of it - it’s stable and doesn’t change. With K8s it’s much more of a moving target, and it requires someone to actively be maintaining it, which takes time.

In a small team I don’t think that cost and complexity is worth it - there are so many more concepts that you have to learn even on top of the cloud-specific ones. It requires a real level of expertise, so if you try to adopt it without someone who’s already worked with it for some time, you can end up in a real mess.

andycowley 18 hours ago | parent | prev

If your workloads are fairly static, ECS is fine. Bringing up new containers and nodes takes ages with very little feedback as to what's going on. It's very frustrating when iterating on workloads.

Also, Fargate is very expensive and inflexible. If you fit its narrow use case it's quicker for bringing up workloads, but you pay extra for it.