raw_anon_1111 10 hours ago

I have been using a modified version of this for 8 years. I didn’t write it

https://github.com/1Strategy/fargate-cloudformation-example/...

It’s never taken 30 minutes to pass in a new parameter value for the Docker container.

Also, as far as rollbacks go, just use --disable-rollback.
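For reference, on the AWS CLI that flag is a single switch; the stack and template names below are placeholders:

```shell
# Leave the stack in its failed state for debugging instead of
# automatically rolling back. Stack/template names are illustrative.
aws cloudformation deploy \
  --stack-name my-fargate-service \
  --template-file template.yaml \
  --disable-rollback
```

The same flag exists on create-stack; you then fix the problem and re-deploy rather than waiting out a rollback.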

The only time I’ve had CFT get stuck is with custom resources, when I didn’t have proper error handling and didn’t send the failure signal back to CFT.
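The stuck-stack case comes from a custom resource handler that crashes before replying to CloudFormation, which then waits until it times out. A minimal sketch of a Lambda-backed custom resource that always signals back, even on failure (the `do_work` logic is a placeholder; the PUT-a-JSON-document-to-`ResponseURL` contract is the documented custom resource protocol):

```python
# Sketch of a CloudFormation custom resource Lambda that always responds.
import json
import urllib.request


def build_response(event, status, reason="", physical_id=None):
    """Build the JSON body CloudFormation expects at the pre-signed ResponseURL."""
    return json.dumps({
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        # Falling back to the logical ID is a simplification for this sketch.
        "PhysicalResourceId": physical_id or event.get("LogicalResourceId"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    })


def send(event, body):
    """PUT the response to the pre-signed S3 URL CloudFormation provided."""
    req = urllib.request.Request(
        event["ResponseURL"], data=body.encode(), method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)


def do_work(event):
    # Placeholder for the real create/update/delete logic.
    return "my-physical-id"


def handler(event, context):
    try:
        physical_id = do_work(event)
        send(event, build_response(event, "SUCCESS", physical_id=physical_id))
    except Exception as exc:
        # Without this branch, a crash leaves the stack hanging until
        # CloudFormation times out -- the "stuck" case described above.
        send(event, build_response(event, "FAILED", reason=str(exc)))
```

The essential point is the except branch: every code path, including the failure path, reports a status back to CloudFormation.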

This is with raw CFT using SAM.

drzaiusx11 9 hours ago | parent [-]

Failed deployments without rollbacks still leave you in an unusable state, and manually rolling back a failed service deployment can take as long to clean up as the automatic rollback you just disabled, especially when dealing with persistent resources. That linked Fargate stack is fairly bare-bones compared to what we run in ECS. We maintain our own AMIs, rebuilt nightly for security updates, plus ECR resources from Docker build pipelines, and in a real AWS environment those need to go together to have any hope of actually working. A failure in one has cascading effects on the others, and cleanup is a pain. Passing a new parameter isn't a real exercise; we need a new Docker build with every code change. Glad you have a minimalist setup and can get by with what, 10m deployments end to end? Sadly that's not the world I live in...

raw_anon_1111 9 hours ago | parent [-]

Why are you running your own AMIs for ECS instead of just using Fargate?

The build pipeline I used in CodeBuild was to build the Docker container and a sidecar Nginx container.

The parameter you pass in is the new Docker container you just built.
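Concretely, that update is a single CLI call; the stack name, parameter names, and image URI below are illustrative (check the linked template for its actual parameter names):

```shell
# Redeploy with only the container image parameter changed; every other
# parameter keeps its previous value. All names here are illustrative.
aws cloudformation update-stack \
  --stack-name my-fargate-service \
  --use-previous-template \
  --parameters \
      ParameterKey=ImageUrl,ParameterValue=123456789012.dkr.ecr.us-east-1.amazonaws.com/app:abc123 \
      ParameterKey=ContainerPort,UsePreviousValue=true
```

Because only the task definition's image changes, CloudFormation triggers a rolling ECS service deployment rather than touching the persistent layer.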

But how would LocalStack help?

You also don’t have massive CDK apps. The Docker images are going to change much more frequently than your persistent layer. You’re not going to be bringing up and down your VPCs, database clusters etc.

drzaiusx11 6 hours ago | parent [-]

Own AMIs? Simply cost. No other reason, although we're evaluating it again, so we'll see.

We actually have several "massive" CDK projects now, depending on what metric you use for determining size. Our largest CDK app has more than 60 stacks, though with a cellular architecture that's artificially inflating the numbers a bit (n unique stacks deployed against k AWS accounts, where k > n, with 20 < n < 100). Maybe the speed at which we change persistent layers (99% additive) will slow down someday, but when you maintain a large number of services (>14) with constantly changing external contracts, it probably won't; it hasn't in the last 6 years, it only gets faster.
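The cellular layout described above can be sketched as plain enumeration: n unique stack definitions instantiated once per cell, so the deployed count is n x k even though only n are unique. All names here are hypothetical:

```python
# Hypothetical sketch of a cellular deployment: each unique stack
# definition is instantiated once per cell (AWS account), inflating
# the deployed-stack count relative to the unique-stack count.
def cell_stack_names(unique_stacks, cells):
    """Enumerate every (cell, stack) instantiation across all cells."""
    return [f"{cell}-{stack}" for cell in cells for stack in unique_stacks]


stacks = ["network", "service-api", "service-worker"]  # n = 3, illustrative
cells = ["cell-a", "cell-b", "cell-c", "cell-d"]       # k = 4, illustrative
names = cell_stack_names(stacks, cells)
# 12 deployed stacks, but only 3 unique definitions.
```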