| ▲ | domk 6 hours ago |
| One of our interviews is a technical design question that asks the candidate to design a web-based system for public libraries. It explicitly tests for how simple they can keep it, starting at "a single small town library" scale and then changing the requirements to "every library in the country". The best-ever performance was someone who answered that by estimating that even at max theoretical scale, all you need is a medium-sized server and Postgres. |
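The "max theoretical scale" estimate might look something like the sketch below. Every figure here is an illustrative assumption (branch count, active patrons, actions per day, peak factor), not data from the interview itself:

```python
# Back-of-envelope capacity estimate for a national library system.
# All input figures are assumptions chosen for illustration.

libraries = 17_000            # rough count of public library branches
patrons_per_library = 2_000   # assumed active patrons per branch
actions_per_day = 2           # assumed checkouts/renewals/searches per patron

requests_per_day = libraries * patrons_per_library * actions_per_day
avg_rps = requests_per_day / 86_400   # spread evenly over 24 hours
peak_rps = avg_rps * 10               # assume a 10x peak-to-average factor

print(f"{requests_per_day:,} requests/day")
print(f"~{avg_rps:,.0f} req/s average, ~{peak_rps:,.0f} req/s peak")
```

Under these assumptions the peak lands in the single-digit thousands of requests per second, mostly simple indexed reads and writes, which is comfortably within reach of one well-provisioned Postgres box.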
|
| ▲ | vrosas 6 hours ago | parent | next [-] |
| I have 100% failed interviews by giving that answer when their definition of scale was 10,000!!!! req/sec. Like, sorry dude, in 2026 that's not much different than 10 req/sec and my original design would work just fine... But that's what happens when your interviewer is a "senior" 24-year-old just reading off the prompt. |
| |
| ▲ | Sohcahtoa82 an hour ago | parent | next [-] |
| > 10,000!!!! req/sec |
| I've forgotten how to count that low. I'm gonna need a Kubernetes cluster with a distributed database with a caching layer, RabbitMQ/Kafka/whatever, and... |
| ▲ | silveraxe93 5 hours ago | parent | prev | next [-] |
| 10,000!!!! is such a huge number I don't think we could even represent it with a computer. Being obviously pedantic here; I agree with what you meant. |
| ▲ | IshKebab 5 hours ago | parent | prev [-] |
| Well, it depends what those requests are doing, surely? I always thought it was weird to treat "request" as a unit of measurement. Are you requesting a static help page, or a GraphQL search query? |
|
|
| ▲ | milkshakeyeah 6 hours ago | parent | prev | next [-] |
| Wait, so you are telling me that not every company builds Spotify in their system design interview? Impossible |
| |
| ▲ | uberduper 4 hours ago | parent [-] |
| An AWS loop a long while back wanted me to design a playlist system, so my dumbass brain snapped to m3u files or w/e people were using back then, and I designed a system to host/share playlist files. The teenager (ok, probably in their 20s) interviewing me seemed more and more confused as we went on, but he never tried to redirect me to what he really intended. |
|
|
| ▲ | withinboredom 6 hours ago | parent | prev [-] |
| Most people forget that the early web was built in server closets on-site, handling hundreds of requests per second. Businesses were sold on hyperscalers because devs wanted more servers and were tired of arguing about WHY they wanted more servers. Then they were sold on highly available services because every second you're down is a dollar, or more, lost. Nobody mentioned that building and maintaining all that costs more than the money you'd lose, except for the largest of organizations. Don't even get me started on the resume-driven development that came along with it. And maybe I'm completely wrong; this is the perspective of one. |
| |
| ▲ | busterarm 5 hours ago | parent [-] |
| Honestly, I think the real result of this is developers who don't really understand the underlying tooling and invent all sorts of bad architectures. One common example I cite: at one job I owned the Kafka and RabbitMQ clusters. Zero consideration was given to message size recommendations, and we had incidents on the regular because some application was shoving multi-hundred-megabyte messages into RMQ. They'd do other stupid shit like not ack their messages, which would cause them to never be removed from local disk. This was a huge org, a public company, hiring "only the best and brightest". Management endlessly just threw more hardware at it rather than make the engineers fix their obviously bad architecture. What a headache. Some companies take the "prioritize engineer happiness" thing right off a cliff. |
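The two failure modes described above (oversized messages and never-acked messages) can be sketched broker-agnostically. The names here (`MAX_MESSAGE_BYTES`, `handle_delivery`, `process`) are hypothetical, not from any real codebase; the `ack`/`reject` callbacks stand in for RabbitMQ's `basic_ack`/`basic_reject` semantics, where a message that is never acknowledged is retained by the broker indefinitely:

```python
# Sketch of a consumer that guards against the two mistakes above:
# huge payloads in the queue, and messages left unacknowledged.

MAX_MESSAGE_BYTES = 128 * 1024  # assumed app-level limit; keep queue
                                # payloads small, store big blobs elsewhere

def process(body: bytes) -> None:
    pass  # placeholder for real business logic

def handle_delivery(body: bytes, ack, reject) -> bool:
    """Process one message, always resolving it so the broker can drop it.

    Returns True if the message was processed and acked, False otherwise.
    """
    if len(body) > MAX_MESSAGE_BYTES:
        # Oversized payloads belong in object storage with a pointer in
        # the queue, not in the queue itself. Reject without requeueing.
        reject(requeue=False)
        return False
    try:
        process(body)
    except Exception:
        reject(requeue=True)  # let another consumer retry
        return False
    ack()  # the broker may now delete the message from disk
    return True
```

The key design point is that every code path ends in exactly one `ack` or `reject`; a path that returns without either is what leaves messages piling up on the broker's disk.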
|