drzaiusx11 10 hours ago

We all have personal AWS environments at my org and use them as the need arises. That doesn't change the fact that CloudFormation deployments take inordinate amounts of time for seemingly no reason. Basic shit like pushing a new ECS task takes 10+ minutes on its own. Need to push an IAM policy change by itself? 5 minutes. Maybe it's the CDK, but we've only been on that a couple of years; before that we used Ansible and CloudFormation templates directly, and it wasn't any better. This compounds with each dev and each change across multiple stacks.

On top of that, CloudFormation easily gets "stuck" in unrecoverable states when a rollback fails, and you have to clean up the drift manually, which can easily eat your entire day. I'll note that our stacks have good separation of concerns; it doesn't matter. A full deployment of a single ECS service easily takes 30 minutes. This is so wasteful it's absurd. I'd love to NOT have to use a shim like LocalStack, but what's the alternative?

raw_anon_1111 10 hours ago | parent [-]

I have been using a modified version of this for 8 years. I didn’t write it

https://github.com/1Strategy/fargate-cloudformation-example/...

It’s never taken 30 minutes to pass in a new parameter value for the Docker container.

Also, as far as rollbacks go, just use --disable-rollback.
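(The same option exists in the API as the DisableRollback argument to update_stack. A minimal sketch with boto3; the stack name, template URL, and parameter key are placeholders:)

```python
def update_stack_kwargs(stack_name, template_url, disable_rollback=True):
    """Build arguments for CloudFormation update_stack so a failed deploy
    pauses in place (UPDATE_FAILED) instead of rolling back automatically."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "DisableRollback": disable_rollback,  # API equivalent of --disable-rollback
    }

# Usage (needs AWS credentials, so shown commented out):
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.update_stack(**update_stack_kwargs("my-service", "https://bucket/template.yml"))
```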

The only time I've had CFT get stuck was with custom resources, when I didn't have proper error handling and didn't send the failure signal back to CFT.
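(For anyone who hasn't hit this: a Lambda-backed custom resource has to PUT a SUCCESS or FAILED status to the pre-signed ResponseURL CloudFormation gives it; if an exception escapes before that happens, the stack sits there until it times out. A minimal sketch of the convention; do_work is a hypothetical stand-in for the resource's real logic:)

```python
import json
import urllib.request

def cfn_response_body(event, status, reason="", physical_id=None):
    """JSON body CloudFormation expects at the pre-signed ResponseURL.
    status is "SUCCESS" or "FAILED"."""
    return json.dumps({
        "Status": status,
        "Reason": reason or "See CloudWatch Logs for details",
        "PhysicalResourceId": physical_id or event["LogicalResourceId"],
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    })

def do_work(event):
    """Placeholder for the resource's actual create/update/delete logic."""
    raise NotImplementedError

def handler(event, context):
    try:
        do_work(event)
        body = cfn_response_body(event, "SUCCESS")
    except Exception as exc:
        # Always report failure; a swallowed exception leaves the stack stuck.
        body = cfn_response_body(event, "FAILED", reason=str(exc))
    req = urllib.request.Request(event["ResponseURL"], data=body.encode(),
                                 method="PUT", headers={"Content-Type": ""})
    urllib.request.urlopen(req)
```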

This is with raw CFT using SAM.

drzaiusx11 9 hours ago | parent [-]

Failed deployments without rollbacks still leave you in an unusable state, and manually rolling back a failed service deployment can take as long to clean up as the automatic rollback you just disabled, especially when persistent resources are involved.

That linked Fargate stack is fairly bare-bones compared to what we run in ECS. We maintain our own AMIs, built nightly for security updates, plus ECR resources from Docker build pipelines, and those need to go together in a real AWS environment to have any hope of actually working; a failure in one has cascading effects on the others, and cleanup is a pain. Passing a new parameter isn't a real exercise; we need a new Docker build with every code change. Glad you have a minimalist setup and can get by with what, 10-minute deployments end to end? Sadly that's not the world I live in...

raw_anon_1111 9 hours ago | parent [-]

Why are you running your own AMIs for ECS instead of just using Fargate?

The build pipeline I used in CodeBuild was to build the Docker container and a sidecar Nginx container.

The parameter you pass in is the new Docker container you just built.
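(A sketch of how that looks in API terms, assuming the template exposes an image parameter; the key name ImageUri is a placeholder. Every other parameter uses UsePreviousValue, so only the container image changes between deploys:)

```python
def image_only_parameters(all_keys, image_key, image_uri):
    """CloudFormation Parameters list that sets only the container image
    and tells CFN to reuse the previous value for everything else."""
    return [
        {"ParameterKey": k, "ParameterValue": image_uri} if k == image_key
        else {"ParameterKey": k, "UsePreviousValue": True}
        for k in all_keys
    ]

# Example: only ImageUri changes; VpcId and DesiredCount keep prior values.
# params = image_only_parameters(
#     ["VpcId", "DesiredCount", "ImageUri"],
#     "ImageUri",
#     "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:abc123",
# )
```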

But how would LocalStack help?

You also don’t have massive CDK apps. The Docker images are going to change much more frequently than your persistent layer. You’re not going to be bringing up and down your VPCs, database clusters etc.

drzaiusx11 6 hours ago | parent [-]

Own AMIs? Simply cost, no other reason. Although we're evaluating Fargate again, so we'll see.

We actually have several "massive" CDK projects now, depending on what metric you use to determine size. Our largest CDK app has more than 60 stacks, though a cellular architecture artificially inflates that number a bit (n unique stacks deployed against k AWS accounts, where k > n and 20 < n < 100). Maybe the speed at which we change persistent layers (99% additive) will slow down someday, but when you maintain a large number of services (>14) with constantly changing external contracts, it probably won't; it hasn't in the last 6 years, and it only gets faster.