Swizec 12 hours ago

> if you are not too write heavy you can scale Postgres to very high read throughput with read replicas and only a single master

The number of interviewees (I do the sys design question) who want to jump straight into massively distributed eventually consistent complicated systems for 5 reads/second is too damn high. 1,000,000 users is not a lot.

I wish we did better at teaching folks that while we (as an industry) were focused on horizontal this and that, computers got fast. And huge. Amazon will rent you a 32 TB RAM server these days. Your database will scale just fine, and ACID is far too valuable to throw away.
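
To make the single-primary point concrete, here's a minimal sketch of read/write routing in Python with psycopg2. The hostnames and the naive round-robin policy are illustrative assumptions, not anything the comment specifies:

    import itertools
    import psycopg2  # pip install psycopg2-binary

    # Hypothetical hosts: one primary takes every write,
    # read replicas absorb the read traffic.
    PRIMARY_DSN = "host=pg-primary.internal dbname=app"
    REPLICA_DSNS = itertools.cycle([
        "host=pg-replica-1.internal dbname=app",
        "host=pg-replica-2.internal dbname=app",
    ])

    def write_conn():
        return psycopg2.connect(PRIMARY_DSN)

    def read_conn():
        # Naive round-robin over replicas. Replicas lag the primary
        # slightly under async replication, so only route reads here
        # when slightly stale data is acceptable.
        return psycopg2.connect(next(REPLICA_DSNS))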

cryptonector 11 hours ago

> The number of interviewees (I do the sys design question) who want to jump straight into massively distributed eventually consistent complicated systems for 5 reads/second is too damn high. 1,000,000 users is not a lot.

Not only can you get quite far w/ PG and w/o sharding, but if you run out of write bandwidth you might be able to shard your PGs (depending on what your schema looks like). For example, suppose you have a ride-sharing-type app: you might have one DB per market holding only the user and driver data for that market, then have the apps go to the user's home market for authentication and to the local market for rides. (Similarly, if a driver ends up driving someone to a different market, you can add them to that market's DB temporarily.)
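
A rough sketch of what that per-market routing could look like, again in Python with psycopg2. The market names, DSNs, table shapes, and the is_guest flag are all hypothetical:

    import psycopg2  # pip install psycopg2-binary

    # One independent Postgres per market; DSNs are made up.
    MARKET_DSNS = {
        "sf":  "host=pg-sf.internal dbname=rides",
        "nyc": "host=pg-nyc.internal dbname=rides",
    }

    def conn_for_market(market):
        return psycopg2.connect(MARKET_DSNS[market])

    def fetch_user(user_id, home_market):
        # Authentication always goes to the user's home market,
        # since that's the only shard holding their account row.
        with conn_for_market(home_market) as conn, conn.cursor() as cur:
            cur.execute("SELECT id, password_hash FROM users WHERE id = %s",
                        (user_id,))
            return cur.fetchone()

    def add_guest_driver(driver_id, local_market):
        # A driver who crosses into another market gets a temporary
        # row in that market's DB.
        with conn_for_market(local_market) as conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO drivers (id, is_guest) VALUES (%s, true) "
                "ON CONFLICT (id) DO NOTHING",
                (driver_id,),
            )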