| ▲ | saisrirampur 7 hours ago |
| I’m a huge Postgres fan. That said, I don’t agree with the blanket advice of “just use Postgres.” That stance often comes from folks who haven’t been exposed enough to (newer) purpose-built technologies and the tremendous value they can create. The argument, as in this blog, is that a single Postgres stack is simpler and reduces complexity. What’s often overlooked is the CAPEX and OPEX required to make Postgres work well for workloads it wasn’t designed for, at even reasonable scale. At Citus Data, we saw many customers with solid-sized teams of Postgres experts whose primary job was constant tuning, operating, and essentially babysitting the system to keep it performing at scale. Side note: we’re seeing purpose-built technologies show up much earlier in a company’s lifecycle, likely accelerated by AI-driven use cases. At ClickHouse, many customers using Postgres replication are seed-stage companies that have grown extremely quickly. We pulled together some data on these trends here:
https://clickhouse.com/blog/postgres-cdc-year-in-review-2025... A better approach would be to embrace the integration of purpose-built technologies with Postgres, making it easier for users to get the best of both worlds, rather than making overgeneralized claims like “Postgres for everything” or “Just use Postgres.” |
|
| ▲ | pimlottc 7 hours ago | parent | next [-] |
| I took it to mean “make Postgres your default choice”, not “always use Postgres no matter what” |
| |
| ▲ | SOLAR_FIELDS 4 hours ago | parent | next [-] | | This is my philosophy. When the engineer comes to me and says that they want to use NotPostgres, they have to justify why, with data and benchmarks, Postgres is not good enough. And that’s how it should be | |
| ▲ | saisrirampur 6 hours ago | parent | prev [-] | | I personally see a difference between “just use Postgres” and “make Postgres your default choice.” The latter leaves room to evaluate alternatives when the workload calls for it, while the former does not. When that nuance gets lost, it can become misleading for teams that are hitting, or even close to hitting, the limits of Postgres, who may keep tuning Postgres and spend not only time but also significant $$. IMO a better world is one where developers can have a mindset of using best-in-class where needed. This is where embracing integrations with Postgres will be helpful! | | |
| ▲ | jghn 6 hours ago | parent | next [-] | | I think that the key point being made by this crowd, of which I'm one, is somewhere in the middle. The way I mean it is "Make Postgres your default choice. Also *you* probably aren't doing anything special enough to warrant using something different". In other words, there are people and situations where it makes sense to use something else. But most people believing they're in that category are wrong. | | |
| ▲ | cortesoft 2 hours ago | parent [-] | | > Also you probably aren't doing anything special enough to warrant using something different". I always get frustrated by this because it is never made clear where the transition occurs to where you are doing something special enough. It is always dismissed as, "well whatever it is you are doing, I am sure you don't need it" Why is this assumption always made, especially on sites like HackerNews? There are a lot of us here that DO work with scales and workloads that require specialized things, and we want to be able to talk about our challenges and experiences, too. I don't think we need to isolate all the people who work at large scales to a completely separate forum; for one thing, a lot of us work on a variety of workloads, where some are big enough and particular enough to need a different technology, and some that should be in Postgres. I would love to be able to talk about how to make that decision, but it is always just "nope, you aren't big enough to need anything else" I was not some super engineer who already knew everything when I started working on large enough data pipelines that I needed specialized software, with horizontal scaling requirements. Why can't we also talk about that here? | | |
| ▲ | SenHeng 2 hours ago | parent [-] | | And another related one, you’ll know when you’ll need it. No I don’t. I’ve never used the thing so I don’t know when it’ll come in useful. |
|
| |
| ▲ | PunchyHamster 6 hours ago | parent | prev [-] | | The point is really that you can only evaluate which alternative is better once you have a working product with data big enough; otherwise it's basically following trends and hoping your barely informed decision won't be wrong. | | |
| ▲ | SOLAR_FIELDS 4 hours ago | parent | next [-] | | Postgres is widely used enough with enough engineering company blog posts that the vast majority of NotPostgres requests already have a blog post that either demonstrates that pg falls over at the scale that’s being planned for or it doesn’t. If they don’t, the trade off for NotPostgres is such that it’s justifiable to force the engineer to run their own benchmarks before they are allowed to use NotPostgres | |
| ▲ | saisrirampur 6 hours ago | parent | prev [-] | | Agree to disagree here. I see a world where developers need to think about (reasonable) scale from day one, or at least very early. We’ve been seeing this play out at ClickHouse - the time before teams need purpose-built OLAP is shrinking from years to months. Also, integrating with ClickHouse is a few weeks of effort for potentially significantly faster analytics performance. | | |
| ▲ | bcrosby95 5 hours ago | parent | next [-] | | Reasonable scale means... what exactly? Here's my opinion: just use Postgres. If you're experienced enough to know when to ignore that advice, go for it - the advice isn't for you. If you aren't, I'm probably saving you from yourself. "Reasonable scale" to these people could mean dozens of inserts per second, which is why people talking in vagaries around scale is maddening to me. If you aren't going to actually say what that means, you will lead people who don't know better down the wrong path. | |
| ▲ | strken 2 hours ago | parent | prev | next [-] | | I see a world where developers need to think about REASONABLE scale from day one, with all caps and no parentheses. I've sat in on meetings about adding auth rate limiting, using Redis, to an on-premise Electron client/Node.js server where the largest installation had 20 concurrent users and the largest foreseeable installation had a few thousand, in which every existing installation had an average server CPU utilisation of less than a percent. Redis should not even be a possibility under those circumstances. It's a ridiculous suggestion based purely on rote whiteboard interview cramming. Stick a token_bucket table in Postgres. I'm also not convinced that thinking about reasonable scale would lead to a different implementation for most other greenfield projects. The nice thing about shoving everything into Postgres is that you nearly always have a clear upgrade path, whereas using Redis right from the start might actually make the system less future-proof by complicating any eventual migration. | |
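For anyone curious what a token_bucket table buys you: the logic is tiny. Here's a minimal in-memory Python sketch of the refill-and-take step (in Postgres you'd keep one row per client and do this atomically in a single UPDATE ... RETURNING; the class, names, and numbers here are illustrative, not from any particular implementation):

```python
import time

class TokenBucket:
    """Sketch of the logic a Postgres token_bucket table would hold:
    one bucket per client, refilled continuously, one token per request."""

    def __init__(self, capacity: float, refill_per_sec: float, now=None):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity  # start full
        self.last_refill = now if now is not None else time.monotonic()

    def allow(self, now=None) -> bool:
        now = now if now is not None else time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token for this request
            return True
        return False  # rate limited
```

The Postgres version is the same arithmetic in SQL, and the row update gives you the atomicity that Redis would otherwise be bought for.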
| ▲ | hermanzegerman 6 hours ago | parent | prev [-] | | [flagged] |
|
|
|
|
|
| ▲ | cheriot 6 hours ago | parent | prev | next [-] |
| > I don’t agree with the blanket advice of “just use Postgres.” I take it as meaning use Postgres until there's a reason not to. I.e., build for the scale / growth rate you have, not "how will this handle the 100 million users I dream of?" A simpler tech stack will be simpler to iterate on. |
| |
| ▲ | pclmulqdq 4 hours ago | parent | next [-] | | Postgres on modern hardware can likely service 100 million users unless you are doing something data intensive with them. You can get a few hundred TB of flash in one box these days. You need to average over 1 MB of database data per user to get over 100 TB with only 100 million users. Even then, you can mostly just shard your DB. | | |
| ▲ | direwolf20 3 hours ago | parent [-] | | What about throughput? How many times can postgres commit per second on NVMe flash? | | |
| ▲ | pclmulqdq 3 hours ago | parent [-] | | You can do about 100k commits per second, but this also partly depends on the CPU you attach to it. It also varies with how complicated the queries are. With 100 million DAU, you're often going to have problems with this rate unless you batch your commits. With 100 million user accounts (or MAU), you may be fine. | | |
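The back-of-envelope arithmetic behind that DAU-vs-MAU distinction is worth making explicit. Assuming a hypothetical 200 writes per active user per day (purely illustrative; real workloads vary wildly):

```python
DAU = 100_000_000
writes_per_user_per_day = 200   # assumed for illustration
seconds_per_day = 86_400

avg_writes_per_sec = DAU * writes_per_user_per_day / seconds_per_day
# ~231,000 writes/sec on average - already above a ~100k commits/sec
# single-node budget, and peaks typically run several times the average.

# Batching N writes per transaction divides the commit rate:
batch_size = 10
commits_per_sec = avg_writes_per_sec / batch_size  # ~23,000 commits/sec
```

With 100 million MAU the same arithmetic lands one to two orders of magnitude lower, which is why the account count alone doesn't tell you whether a single box suffices.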
|
| |
| ▲ | quotemstr 5 hours ago | parent | prev [-] | | Yes. That's a good framing. PostgreSQL is a good default for online LOB-y things. There are all sorts of reasons to use something other than PostgreSQL, but raw performance at scale becomes such a reason later than you think. Cloud providers will rent you enormous beasts of machines that, while expensive, will remain cheaper than rewriting for a migration for a long time. |
|
|
| ▲ | groundzeros2015 6 hours ago | parent | prev | next [-] |
| In my experience the functionality of “purpose built systems” is found in Postgres but you have to read the manual. I personally think reading manuals and tuning is a comparably low risk form of software development. |
|
| ▲ | direwolf20 3 hours ago | parent | prev | next [-] |
| Postgres is infinitely extensible, more than MariaDB. But it's very painful to write or configure extensions and you might as well use something different instead of reaching for an extension mechanism. |
|
| ▲ | Fairburn 6 hours ago | parent | prev | next [-] |
| Exactly. Use cases differ.
https://www.geeksforgeeks.org/mysql/difference-between-mysql... |
| |
| ▲ | Dan42 4 hours ago | parent [-] | | That article was clearly written by AI, based on data from 20 years ago. |
|
|
| ▲ | LaGrange 6 hours ago | parent | prev | next [-] |
| > At Citus Data, we saw many customers with solid-sized teams of Postgres experts whose primary job was constant tuning, operating, and essentially babysitting the system to keep it performing at scale. Oh no, not a company hiring a team of specialists in a core technology you need! What next, paying them a good wage? C'mon, it's so much better to get a bunch of random, excuse me, "specialized" SaaS tools that will _surely_ not lead to requiring five teams of specialists in random technologies that will eventually be discontinued once Google acquires the company running them. OK but seriously, yeah, sometimes "specialized" is good, though much more rarely than people pretend. Having specialists ain't bad, and I'd say it's better than telling a random developer to become a specialist in some cloud tech and pretending you didn't just turn a - hopefully decent - developer into a poor DBA. Not to mention that a small team of Postgres specialists can maintain a truly stupendous amount of Postgres. |
| |
| ▲ | bloaf 3 hours ago | parent [-] | | At my company I saw a team of devs pay for a special purpose "query optimized" database with "exabyte capability" to handle... their totally ordinary HR data. I queried said database... it was slow. I looked to see what indexes they had set up... there were none. That team should have just used postgres and spent all the time and money they poured into this fancy database tech on finding someone who knew even a little bit about database design to help them. |
|
|
| ▲ | jongjong 5 hours ago | parent | prev [-] |
| I hate how developers are often very skeptical but all the skepticism goes out the window if the tech is sufficiently hyped up. And TBH, developers are pretty dumb not to realize that the tech tools monoculture is a way for business folks to make us easily replaceable... If all companies use the same tech, it turns us into exchangeable commodities which can easily be replaced and sourced across different organizations. Look at the typical React dev. They have zero leverage and can be replaced by vibe coding kiddies straight out of school or sourced from literally any company on earth. And there are some real negatives to using silver bullet tools. They're not even the best tools for a lot of cases! The React dev is a commodity and they let it happen to them. Outsmarted by dumb business folks who dropped out of college. They probably didn't even come up with the idea; the devs did. Be smarter, people. They're going to harvest you like Cavendish bananas. |
| |
| ▲ | pimlottc 4 hours ago | parent [-] | | Sure, but the world is vast. I would love to be able to test every UI framework and figure out which is the best, but who’s got time for that? You have to rely on heuristics for some things, and popularity is often a decent indicator. | | |
| ▲ | mattgreenrocks 2 hours ago | parent [-] | | Popularity’s flip side is that it can fuel commodification. I argue popularity is insufficient signal. React as tech is fine, but the market of devs who it is aimed at may not be the most discerning when it comes to quality. |
|
|