n_u 4 days ago

From my perspective:

MySQL pros:

The MySQL docs on how the default storage engine, InnoDB, locks rows to support the transaction isolation levels are fantastic. [1] They can help you architect your system to avoid lock contention, or understand why existing queries are contending for locks. As far as I know, Postgres does not have docs like that.
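To make that concrete, here's a rough sketch (table name made up) of two sessions contending for the same row lock, and how you can inspect the locks in MySQL 8.0+:

    -- session 1: take an exclusive row lock inside a transaction
    START TRANSACTION;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

    -- session 2: blocks until session 1 commits or rolls back
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;

    -- either session: see who holds and who waits on which record locks
    SELECT engine_transaction_id, lock_mode, lock_status, lock_data
    FROM performance_schema.data_locks;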

MySQL uses direct I/O, so it bypasses the OS page cache and uses its own buffer pool instead. [2] Postgres, on the other hand, doesn't use direct I/O, so the OS page cache ends up duplicating pages that are already in its buffer pool (the "double buffering" problem). That makes it harder to estimate how large a dataset you can keep in memory with Postgres. They are working on it though. [3]
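Roughly, the settings involved look like this (a sketch; whether InnoDB actually bypasses the OS cache depends on innodb_flush_method being O_DIRECT):

    -- MySQL: InnoDB manages its own cache and (with O_DIRECT) skips the OS page cache
    SHOW VARIABLES LIKE 'innodb_flush_method';
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    -- Postgres: shared_buffers is its own cache, but reads still go through the OS page cache;
    -- effective_cache_size is only a planner hint about what the OS might additionally cache
    SHOW shared_buffers;
    SHOW effective_cache_size;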

If you delete a row in MySQL and then insert another row, MySQL will look through the page for empty slots and insert there. This keeps your pages more compact. Postgres will always insert at the bottom of the page, and deleted rows linger as dead tuples until they are cleaned up. If you have a workload that deletes often, Postgres will not use memory as efficiently because the pages are fragmented, and you will have to run the VACUUM command to reclaim that space. [4]
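You can watch this happen in Postgres; a rough sketch (table name made up):

    -- deleted rows show up as dead tuples until VACUUM reclaims the space
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';

    VACUUM (VERBOSE) mytable;   -- marks the dead space reusable for future inserts
    -- VACUUM FULL mytable;     -- rewrites the table to actually shrink it, but takes a heavy lock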

Vitess supports MySQL[5] and not Postgres. Vitess is a system for sharding MySQL that, as I understand it, is much more mature than the sharding options for Postgres. Obviously this GA announcement may change that.

Uber switched from MySQL to Postgres only to switch back to MySQL. It's a bit old, but it's worth a read. [6]

Postgres pros:

Postgres supports 3rd-party extensions, which let you add features like columnar storage, geospatial data types, vector search, proxies, etc. [7]
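Assuming the extension is installed on the server, enabling one is a single statement; a sketch with PostGIS and pgvector, for example:

    -- extensions plug in at the SQL level (they must be installed on the host first)
    CREATE EXTENSION IF NOT EXISTS postgis;   -- geospatial types and indexes
    CREATE EXTENSION IF NOT EXISTS vector;    -- pgvector: vector similarity search

    -- e.g. pgvector
    CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
    SELECT id FROM items ORDER BY embedding <-> '[1,2,3]' LIMIT 5;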

You are more likely to find developers who have worked with Postgres.[8]

Many modern distributed database offerings target Postgres compatibility rather than MySQL compatibility (YugabyteDB[9], AWS Aurora DSQL[10], pgfdb[11]).

My take:

I would highly recommend you read the docs on InnoDB locking then pick Postgres.

[1] https://dev.mysql.com/doc/refman/8.4/en/innodb-locking.html

[2] https://dev.mysql.com/doc/refman/8.4/en/memory-use.html

[3] https://pganalyze.com/blog/postgres-18-async-io

[4] https://www.percona.com/blog/postgresql-vacuuming-to-optimiz...

[5] https://vitess.io/

[6] https://www.uber.com/blog/postgres-to-mysql-migration/

[7] https://www.tigerdata.com/blog/top-8-postgresql-extensions

[8] https://survey.stackoverflow.co/2024/technology#1-databases

[9] https://www.yugabyte.com/

[10] https://aws.amazon.com/rds/aurora/dsql/

[11] https://github.com/fabianlindfors/pgfdb

bri3d 4 days ago

> I would highly recommend you read the docs on InnoDB locking then pick Postgres.

This made me laugh pretty hard, but it's basically my take too.

I'd pretty much go with the same thing. It's interesting to me, though, that people see Postgres as the "big database" and MySQL as the "hobby database." I basically see things as the exact opposite - Postgres is incredibly flexible, very nice to use, and these days, has fewer foot guns at small scale (IMO) than MySQL. It's more academically correct and it generally tends to "work better" at almost any achievable "normal" scale.

On the other hand, Postgres is full of pitfalls and becomes very difficult at exceptionally large scale (no, not "your startup got traction" scale). Postgres also doesn't offer nearly the same quality of documentation or recipes for large scale optimization.

Almost everything in the 2016 Uber article you link, which is a _great_ read, is still true to some extent with vanilla Postgres, although there are more proprietary scale-out options available now. Postgres simply has not been "hyper-scaled" to the extent that MySQL has and most massive globally sharded/replicated systems started as MySQL at some point.

For this same reason, you are likely to be able to hire a MySQL-family DBA with more experience at hyper-scale than a Postgres one.

With all that said, I still agree - I'd almost universally start with Postgres, with MySQL as a back-pocket scale-up-and-out option for specific very large use-cases that don't demand complex query execution or transactional workload properties. Unless you have an incredibly specific workload that is essentially a combination of heavy UPDATE and `SELECT * FROM x WHERE id=y`, Postgres will do better at any achievable scale you will find today.

n_u 4 days ago

> This made me laugh pretty hard, but it's basically my take too.

Haha glad you enjoyed it.

> It's interesting to me, though, that people see Postgres as the "big database" and MySQL as the "hobby database." I basically see things as the exact opposite

I agree. As I understand it, Postgres started as a challenger to SQL[1][2], with support for more complicated data types, but then in the mid '90s SQL support was added and it was renamed PostgreSQL.

Anecdotally, I have heard from people working in industry in the 2000s-2010s that Postgres was viewed as less mature, so many of the large web applications were on MySQL. This is a bit confusing to me because MySQL was released around the same time Postgres added SQL support, but maybe it was because MySQL had a company behind it.

Many large-scale applications of those days were using MySQL. Facebook developed RocksDB and then MyRocks[3], which puts RocksDB underneath MySQL as a storage engine. YouTube built Vitess[4], a sharding layer for MySQL that was later used by Slack[5], Square, Pinterest and others.

> It's more academically correct

I'm curious about this. I know that Postgres implements MVCC in a comparatively wasteful way and uses the OS page cache in addition to its own buffer pool, resulting in double buffering rather than direct I/O. I feel like the more I learn about database internals, the more I learn how MySQL did things the "right" way and Postgres's approach is a bit odd. But perhaps I'm missing something.

[1] https://en.wikipedia.org/wiki/PostgreSQL#History

[2] https://db.cs.cmu.edu/papers/2024/whatgoesaround-sigmodrec20...

[3] https://engineering.fb.com/2016/08/31/core-infra/myrocks-a-s...

[4] https://vitess.io/docs/22.0/overview/history/

[5] https://slack.engineering/scaling-datastores-at-slack-with-v...

bri3d 4 days ago

> I feel like the more I learn about database internals the more I learn about how MySQL did things the "right" way and Postgres's approach is a bit odd.

This is a good distinction too; I was thinking from the end-user’s standpoint, where Postgres has historically been seen as more faithful to both SQL standards and consistency guarantees.

jedberg 3 days ago

> This is a bit confusing to me because MySQL was released around the same time Postgres added SQL support but maybe it was because MySQL had a company behind it.

I think the main reason MySQL took off faster than Postgres originally is because it had better defaults. MySQL worked out of the box on modern hardware. Postgres assumed you only had 4MB of memory until well into the 2010s, in part so it would keep running on everything it had ever run on in the past.

So when you first installed Postgres, it would perform terribly until you optimized it.
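The first-day tuning pass was basically this (values purely illustrative; on older versions you edited postgresql.conf by hand instead of using ALTER SYSTEM):

    -- bump the tiny shipped defaults to match the actual hardware
    ALTER SYSTEM SET shared_buffers = '4GB';   -- needs a restart to take effect
    ALTER SYSTEM SET work_mem = '64MB';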

It's really a fantastic case study in setting good defaults.

pests 3 days ago

My reason for choosing MySQL in the early days was that it was the default choice for PHP apps back in the day: every tutorial, the mysql_* functions, every app like WordPress, and phpMyAdmin. Postgres was seen as more corporate / official / big boys club in my view… similar to how ClickHouse was viewed here until recently.

apavlo 4 days ago

You are conflating MySQL and InnoDB. The latter does a lot of good things, much more than the former.

n_u 3 days ago

Haha, that's probably correct; databases are a huge topic and I still know very little. I learned most of what I know about databases from you and from my work in industry.

Could you perhaps share some of the good/bad things InnoDB/MySQL does?

tracker1 4 days ago

To me, every time I've touched MySQL, I've found an issue that just rubbed me the wrong way... starting with the fact that utf8 isn't actually UTF-8 (it's an alias for the 3-byte utf8mb3, not utf8mb4), and even across new major versions it hasn't been changed to alias the real thing. (VAR)BINARY does case-sensitive collation based on the db default; it shouldn't do any type of textual comparison even if the data is "text". You can't do foreign keys with ANSI quotes for the table/field names...
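The utf8 gotcha, roughly (a sketch):

    -- MySQL: "utf8" is an alias for 3-byte utf8mb3, so 4-byte characters (e.g. emoji) don't fit
    CREATE TABLE t_old (s VARCHAR(10)) CHARACTER SET utf8;      -- really utf8mb3
    INSERT INTO t_old VALUES ('😀');   -- errors or mangles, depending on sql_mode
    CREATE TABLE t_new (s VARCHAR(10)) CHARACTER SET utf8mb4;   -- actual UTF-8
    INSERT INTO t_new VALUES ('😀');   -- fine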

Note: some of this may have changed in the past 6+ years I've avoided looking at it again.

sgarland 3 days ago

> (VAR)BINARY

Close [0]: (VAR)CHAR BINARY, which is its own type, uses the `_bin` collation for the column or table character set. (VAR)BINARY stores binary strings, and uses the `binary` character set and collation.

In fairness, the amount of options, gotchas, and WTF involved with collations in any DB is mind-boggling.

[0]: https://dev.mysql.com/doc/refman/8.4/en/binary-varbinary.htm...
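A quick way to see the difference (sketch, MySQL 8.x):

    -- VARCHAR(10) BINARY is still a character type, just with the _bin collation of the charset;
    -- VARBINARY(10) is raw bytes with no character semantics
    CREATE TABLE c (a VARCHAR(10) BINARY, b VARBINARY(10)) CHARACTER SET utf8mb4;
    SHOW CREATE TABLE c;
    -- column a comes back as varchar(10) COLLATE utf8mb4_bin, column b as varbinary(10)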

tracker1 3 days ago

I just know that I was storing converted UUIDs and was getting collisions on different binary values. This was admittedly a couple decades ago though.

By "converted" I mean these were legacy records where the original int value was injected into a UUID format, which was then stored in a binary field as the primary key.
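For what it's worth, the way I'd sketch that kind of scheme in modern MySQL (8.0+; table and column names made up):

    -- store UUIDs as 16 raw bytes; UUID_TO_BIN / BIN_TO_UUID exist as of MySQL 8.0
    CREATE TABLE legacy_keys (
      id BINARY(16) PRIMARY KEY,
      legacy_int BIGINT
    );
    INSERT INTO legacy_keys (id, legacy_int) VALUES (UUID_TO_BIN(UUID()), 42);
    SELECT BIN_TO_UUID(id), legacy_int FROM legacy_keys;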