jpalepu33 11 hours ago

Great write-up on the evolution of your architecture. The progression from 200ms → 14ms is impressive.

The lesson about "delete code to improve performance" resonates. I've been down similar paths where middleware/routing layers seemed like good abstractions but ended up being the performance bottleneck.

A few thoughts on this approach:

1. Warm pools are brilliant but expensive - how are you handling the economics? With multi-region pools, you're essentially paying for idle capacity across multiple data centers. I'm curious how you balance pool size vs. cold start probability.

2. Fly's replay mechanism is clever, but that initial bounce still adds latency. Have you considered using GeoDNS to route users to the correct regional endpoint from the start? Though I imagine the caching makes this a non-issue after the first request. (For anyone unfamiliar with the replay bounce, there's a sketch of it right after this list.)

3. For the JWT approach - are you rotating these tokens per-session? Just thinking about the security implications if someone intercepts the token.
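
A minimal sketch of the replay bounce mentioned in point 2, not the article's code: the app checks whether it is running in the region that owns the task and, if not, returns Fly's fly-replay response header so the edge proxy redelivers the request in the right region. The task-to-region lookup, region codes, and port are invented for illustration; FLY_REGION is the environment variable Fly sets on each machine.

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical lookup of which region a task's machine lives in.
    TASK_REGIONS = {"task-123": "ord", "task-456": "fra"}

    class ReplayHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            task_id = self.path.strip("/")
            target = TASK_REGIONS.get(task_id)
            here = os.environ.get("FLY_REGION", "local")

            if target and target != here:
                # Wrong region: ask Fly's edge proxy to replay the original
                # request on a machine in the target region. The client only
                # sees the replayed response, plus one extra hop on the first
                # request (the "initial bounce" above).
                self.send_response(204)
                self.send_header("fly-replay", f"region={target}")
                self.end_headers()
                return

            # Right region: serve the task directly.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"served {task_id} from {here}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("", 8080), ReplayHandler).serve_forever()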

The 79ms → 14ms improvement is night and day for developer experience. Latency under 20ms feels instant to humans, so you've hit that sweet spot.

mnazzaro 11 hours ago | parent

1. The pools are very shallow: two machines per pool. While it's certainly possible for 3 tasks to get requested in the same region within 30 seconds, we handle that by falling back to the next closest region if a pool is empty. This is uncommon, though.

2. I haven't considered it, but yeah, the caching seems to work great for us.

3. The tokens are generated per-task, so if you are worried about your token getting leaked, you can just delete the task!
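
A minimal sketch of the strategy in answer 1, shallow warm pools per region with fallback to the next-closest region when the local pool is drained. The pool depth matches the two-machines-per-pool figure above, but the regions, proximity ordering, and machine ids are made up:

    from collections import deque

    POOL_DEPTH = 2  # "two machines per pool"

    # Warm, already-booted machines waiting in each region (illustrative ids).
    pools = {
        "ord": deque(["m-ord-1", "m-ord-2"]),
        "iad": deque(["m-iad-1", "m-iad-2"]),
        "fra": deque(["m-fra-1", "m-fra-2"]),
    }

    # Regions ordered from closest to farthest, per requesting region.
    NEAREST = {
        "ord": ["ord", "iad", "fra"],
        "iad": ["iad", "ord", "fra"],
        "fra": ["fra", "iad", "ord"],
    }

    def claim_machine(user_region: str) -> tuple[str, str]:
        """Pop a warm machine from the closest non-empty pool."""
        for region in NEAREST[user_region]:
            if pools[region]:
                return region, pools[region].popleft()
        # Every pool is drained: fall back to a cold start (rare by design).
        raise RuntimeError("no warm machines anywhere; cold-start a new one")

    def refill(region: str, new_machine_id: str) -> None:
        """Background step: boot a replacement so the pool stays at POOL_DEPTH."""
        if len(pools[region]) < POOL_DEPTH:
            pools[region].append(new_machine_id)

With a depth of two, a third request in the same region inside the refill window gets a warm machine from the next region over rather than a cold start, which matches the fallback described above.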

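On the per-task tokens in answer 3, a rough sketch of the revocation property, assuming the PyJWT library; the claim names, signing key, and task registry are invented for illustration, not taken from the post:

    import time
    import jwt  # PyJWT (pip install pyjwt)

    SECRET = "replace-me"          # hypothetical signing key
    live_tasks = {"task-123"}      # hypothetical registry of tasks that still exist

    def mint_task_token(task_id: str) -> str:
        # One token per task: the subject is the task, not a user session.
        return jwt.encode({"sub": task_id, "iat": int(time.time())},
                          SECRET, algorithm="HS256")

    def authorize(token: str) -> str:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        task_id = claims["sub"]
        # A leaked token dies with its task: deleting the task revokes it.
        if task_id not in live_tasks:
            raise PermissionError("task no longer exists; token is useless")
        return task_id
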
hinkley 10 hours ago | parent

One of the perennial problems I encountered with on-call situations was that at some point everyone knew a production incident was going on, and people trying to help, or to learn by following along, would run the same diagnostics the on-point people were running and exhaust the very resources needed to diagnose the problem.

Splunk was a particular problem that way, but I also started seeing it with Grafana, at least in extremis, once we migrated from a vendor to self-hosted on AWS. Most times it was fine, but if we had a bug that none of the teams could quickly disavow as theirs, we had a lot of chefs in the kitchen and things would start to hiccup.

There can be thundering herds in dev, and a bunch of people trying a repro case within a thirty-second window can be one of them. The question is whether anyone has the spare bandwidth to notice that it's happening, or whether everyone trudges along making the same mistakes every time.