teraflop 2 days ago

> The checksums in WAL are likely not meant to check for random page corruption in the middle; maybe they’re just to check if the last write of a frame was fsynced properly or not?

This is the correct explanation. The purpose is to detect partial writes, not to detect arbitrary data corruption. If detecting corruption was the goal, then checksumming the WAL without also checksumming the database itself would be fairly pointless.

In fact, it's not accurate to say "SQLite does not do checksums by default, but it has checksums in WAL mode." SQLite always uses checksums for its journal, regardless of whether that's a rollback journal or a write-ahead log. [1]

For the purpose of tolerating and recovering from crashes/power failures, writes to the database file itself are effectively idempotent. It doesn't matter if only a subset of the DB writes are persisted before a crash, and you don't need to know which ones succeeded, because you can just roll all of them forward or backward (depending on the mode). But for the journal itself, distinguishing partial journal entries from complete ones matters.

No matter what order the disk physically writes out pages, the instant when the checksum matches the data is the instant at which the transaction can be unambiguously said to commit.

[1]: https://www.sqlite.org/fileformat.html

kentonv 2 days ago | parent | next [-]

Exactly. To put it another way:

Imagine the power goes out while sqlite is in the middle of writing a transaction to the WAL (before the write has been confirmed to the application). What do you want to happen when power comes back, and you reload the database?

If the transaction was fully written, then you'd probably like to keep it. But if it was not complete, you want to roll it back.

How does sqlite know if the transaction was complete? It needs to see two things:

1. The transaction ends with a commit frame, indicating the application did in fact perform a `COMMIT TRANSACTION`.

2. All the checksums are correct, indicating the data was fully synced to disk when it was committed.

If the checksums are wrong, the assumption is that the transaction wasn't fully written out. Therefore, it should be rolled back. That's exactly what sqlite does.

This is not "data loss", because the transaction was not ever fully committed. The power failure happened before the commit was confirmed to the application, so there's no way anyone should have expected that the transaction is durable.

The checksum is NOT intended to detect when the data was corrupted by some other means, like damage to the disk or a buggy app overwriting bytes. Myriad other mechanisms should be protecting against those already, and sqlite is assuming those other mechanisms are working, because if not, there's very little sqlite can do about it.
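
To make that concrete, here's a rough sketch of the recovery scan (simplified frame format and checksum, nothing like sqlite's actual WAL layout):

    import zlib

    def recover(frames):
        """frames: list of (payload: bytes, stored_checksum: int, is_commit: bool)."""
        running = 0
        last_good_commit = -1
        for i, (payload, stored, is_commit) in enumerate(frames):
            running = zlib.crc32(payload, running)  # checksum chained over everything so far
            if running != stored:
                break  # torn/partial write: nothing from here on can be trusted
            if is_commit:
                last_good_commit = i
        # Keep everything up to the last commit frame whose chain verified;
        # roll back (ignore) the rest.
        return frames[:last_good_commit + 1]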

malone 2 days ago | parent | next [-]

Why is the commit frame not sufficient to determine whether the transaction was fully written or not? Is there a scenario where the commit frame is fsynced to disk but the preceding data isn't?

adambb 2 days ago | parent [-]

The disk controller may decide to write out blocks in a different order than the logical layout in the log file itself, and be interrupted before completing this work.

grumbelbart2 a day ago | parent | next [-]

Just wondering how SQLite would ever work if it had zero control over this. Surely there must be some "flush" operation that guarantees that everything so far is written to disk? Otherwise, any "old" block that contains data might not have been written. SQLite says:

> Local devices also have a characteristic which is critical for enabling database management software to be designed to ensure ACID behavior: When all process writes to the device have completed, (when POSIX fsync() or Windows FlushFileBuffers() calls return), the filesystem then either has stored the "written" data or will do so before storing any subsequently written data.
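
In code, the pattern that guarantee enables looks roughly like this minimal sketch (not SQLite's actual code, and assuming the drive actually honors the flush):

    import os

    # fsync() is the "everything so far is on disk" barrier the quote describes.
    def append_transaction(wal_path, frames, commit_record):
        fd = os.open(wal_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
        try:
            for f in frames:
                os.write(fd, f)
            os.fsync(fd)                # barrier: frames durable before the commit record is written
            os.write(fd, commit_record)
            os.fsync(fd)                # barrier: commit record durable -> transaction is committed
        finally:
            os.close(fd)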

mschuster91 a day ago | parent [-]

A "flush" command does indeed exist... but disk and controller vendors are like patients in Dr. House [1] - everybody lies. Especially if there are benchmarks to be "optimized". Other people here have written up that better than I ever could [2].

[1] https://house.fandom.com/wiki/Everybody_lies

[2] https://news.ycombinator.com/item?id=30371403

johncolanduoni 2 days ago | parent | prev | next [-]

It’s worth noting this is also dependent on filesystem behavior; most that do copy-on-write will not suffer from this issue regardless of drive behavior, even if they don’t do their own checksumming.

hinkley 2 days ago | parent | prev [-]

We still have the elevator algorithm on NVMe?

jrockway 2 days ago | parent | next [-]

NVMe drives do their own manipulation of the datastream. Wear leveling, GC, trying to avoid rewriting an entire block for your 1 bit change, etc. NVMe drives have CPUs and RAM for this purpose; they are full computers with a little bit of flash memory attached. And no, of course they're not open source even though they have full access to your system.

djfivyvusn a day ago | parent [-]

Skynet gotta start somewhere.

bob1029 2 days ago | parent | prev | next [-]

Anything that uses NAND storage technology is going to be optimized in some way like this. NVMe is just the messenger.

lxgr 2 days ago | parent | prev [-]

SQLite runs on anything from servers to Internet-connected lightbulbs.

jrockway 2 days ago | parent [-]

Which lightbulbs include SQLite? I kind of want one.

natebc 2 days ago | parent [-]

these guys have a Cree logo on their homepage so maybe Cree?

https://imaginovation.net/case-study/cree/

At least that's what I could turn up with a quick web search.

hinkley 2 days ago | parent | prev [-]

For instance, running on ZFS or one of its peers.

zaarn 2 days ago | parent | next [-]

ZFS isn't viable for SQLite unless you turn off fsyncs in ZFS, because otherwise you will have the same experience I had for years: SQLite may randomly hang for up to a few minutes with no visible cause if there aren't enough write txgs filling up in the background. If your app depends on SQLite, it'll randomly die.

Btrfs is a better choice for SQLite; I haven't seen that issue there.

Modified3019 2 days ago | parent | next [-]

Interesting. Found a GitHub issue that covers this bug: https://github.com/openzfs/zfs/issues/14290

The latest comment seems to be a nice summary of the root cause, with earlier comments in the thread pointing to ftruncate (rather than fsync) as the trigger:

>amotin

>I see. So ZFS tries to drop some data from pagecache, but there seems to be some dirty pages, which are held by ZFS till them either written into ZIL, or to disk at the end of TXG. And if those dirty page writes were asynchronous, it seems there is nothing that would nudge ZFS to actually do something about it earlier than zfs_txg_timeout. Somewhat similar problem was recently spotted on FreeBSD after #17445, which is why newer version of the code in #17533 does not keep references on asynchronously written pages.

Might be worth testing zfs_txg_timeout=1 or 0
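
On Linux/OpenZFS that's a module parameter, so roughly (paths and persistence mechanism vary by platform):

    echo 1 > /sys/module/zfs/parameters/zfs_txg_timeout   # immediate, lost on reboot
    # persistently, e.g. in /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_txg_timeout=1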

jclulow a day ago | parent | prev | next [-]

This isn't an inherent property of ZFS at all. I have made heavy use of SQLite for years (on illumos systems) without ever hitting this, and I would never counsel anybody to disable sync writes: it absolutely can lead to data loss under some conditions and is not safe to do unless you understand what it means.

What you're describing sounds like a bug specific to whichever OS you're using that has a port of ZFS.

zaarn a day ago | parent [-]

I wouldn't recommend SQLite on ZFS (or in general for other reasons), for the precise reason that it either lags or is unsafe.

I've encountered this bug both on illumos, specifically OpenIndiana, and Linux (Arch Linux).

throw0101b 2 days ago | parent | prev [-]

> ZFS isn’t viable for SQLite unless you turn off fsync’s in ZFS

Which you can do on a per dataset ('directory') basis very easily:

    zfs set sync=disabled mydata/mydb001
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...

Meanwhile all the rest of your pools / datasets can keep the default POSIX behaviour.

ezekiel68 2 days ago | parent | next [-]

You know what's even easier than doing that? Neglecting to do it or meaning to do it then getting pulled in to some meeting (or other important distraction) and then imagining you did it.

throw0101b 2 days ago | parent [-]

> Neglecting to do it or meaning to do it then getting pulled in to some meeting (or other important distraction) and then imagining you did it.

If your job is to make sure your file system and your database (SQLite, Pg, My/MariaDB, etc.) are tuned together, and you don't tune them, then you should be called into a meeting. Or at least the no-fault RCA should bring up remediation methods to make sure it's part of the SOP so that it won't happen again.

The alternative the GP suggests is using Btrfs, which I find even more irresponsible than your non-tuning situation. (Heck, if someone on my sysadmin team suggested we start using Btrfs for anything I would think they were going senile.)

johncolanduoni 2 days ago | parent [-]

Facebook is apparently using it at scale, which surprised me. Though that's not necessarily an endorsement, and who knows what their kernel patchset looks like.

zaarn a day ago | parent | prev | next [-]

Disabling sync corrupts SQLite databases on power loss; I've personally experienced this after disabling sync (which I'd done because otherwise SQLite hangs).

You cannot have SQLite keep your data and run well on ZFS unless you make a zvol and format it as btrfs or ext4 so they solve the problem for you.

kentonv 2 days ago | parent | prev [-]

Doesn't turning off sync mean you can lose confirmed writes in a power failure?

jandrewrogers 2 days ago | parent | prev [-]

Apropos this use case, ZFS is usually not recommended for databases. Competent database storage engines have their own strong corruption detection mechanisms regardless. What filesystems in the wild typically provide for this is weaker than what is advisable for a database, so databases should bring their own implementation.

tetha 2 days ago | parent | next [-]

Hm.

On the other hand, I've heard people recommend running Postgres on ZFS so you can enable on-the-fly compression. This increases CPU utilization on the Postgres server by quite a bit and read latency of uncached data a bit, but it decreases the necessary write IOPS a lot. And as long as the compression happens largely in parallel (which it should, if your database has many parallel queries), it's much easier to throw more compute threads at it than to speed up the write speed of a drive.

And after a certain size, you start to need atomic filesystem snapshots to be able to get a backup of a very large and busy database without everything exploding. We already see the more efficient backup strategies from replicas struggle on some systems, and we're at our wits' end over how to create proper backups and archives without reducing the backup frequency to weeks. ZFS has mature mechanisms and zfs-send to move this data around with limited impact on the production dataflow.
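
For a rough idea of what that looks like in practice (dataset names and values are illustrative, not a recommendation):

    zfs set compression=lz4 tank/pgdata
    zfs set recordsize=16k tank/pgdata     # often matched to the DB page size or a small multiple
    zfs snapshot tank/pgdata@nightly       # atomic, crash-consistent snapshot
    zfs send tank/pgdata@nightly | ssh backuphost zfs receive backup/pgdata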

supriyo-biswas 2 days ago | parent | next [-]

Is an incremental backup of the database not possible? Pgbackrest etc. can do this by creating a full backup followed by incremental backups from the WAL.

For Postgres specifically you may also want to look at using hot_standby_feedback, as described in this recent HN article: https://news.ycombinator.com/item?id=44633933
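
For reference, the pgbackrest side of that is roughly (stanza name is a placeholder):

    pgbackrest --stanza=main --type=full backup    # periodic full backup
    pgbackrest --stanza=main --type=incr backup    # frequent incrementals on top of it
    # and on the standby, so vacuum on the primary keeps rows the standby still needs:
    #   ALTER SYSTEM SET hot_standby_feedback = on;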

tetha 2 days ago | parent [-]

On the big product clusters, we have incremental pgbackrest backups running for 20 minutes. Full backups take something between 12 - 16 hours. All of this from a sync standby managed by Patroni. Archiving all of that takes 8 - 12 hours. It's a couple of terabytes of noncompressible data that needs to move. It's fine though, because this is an append-log-style dataset and we can take our time backing it up.

We also have decently sized clusters with very active data on them, and rather spicy recovery targets. On some of them, a full backup from the sync standby takes 4 hours, we need to pull an incremental backup at most 2 hours afterwards, but the long-term archiving process needs 2-3 hours to move the full backup to the archive. This is the first point at which filesystem snapshots (admittedly, of the pgbackrest repo) become necessary to adhere to SLOs and keep the system functioning.

We do all of the high-complexity, high-throughput things recommended by postgres, and it's barely enough on the big systems. These things are getting to the point of needing a lot more storage and network bandwidth.

hinkley 2 days ago | parent | prev [-]

This was my understanding as well, color me also confused.

wahern 2 days ago | parent | prev | next [-]

But what ZFS provides isn't weaker, and in SQLite page checksums are opt-in: https://www.sqlite.org/cksumvfs.html

EDIT: It seems they're opt-in for PostgreSQL, too: https://www.postgresql.org/docs/current/checksums.html
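
For the record, enabling them looks roughly like this (data directory and the cksumvfs build path are placeholders):

    # PostgreSQL: at cluster creation, or offline on an existing cluster (v12+)
    initdb --data-checksums -D /path/to/data
    pg_checksums --enable -D /path/to/data

    # SQLite: load the cksumvfs extension before creating/opening the database;
    # new databases then reserve 8 bytes per page for the checksum
    sqlite3 -cmd ".load ./cksumvfs" mydata.db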

avinassh 2 days ago | parent [-]

you might like my other post - https://avi.im/blag/2024/databases-checksum/

bad news is, most databases don't do checksums by default.

lxgr 2 days ago | parent | next [-]

This is in fact good news.

Redundantly performing the same performance-intensive tasks on multiple layers makes latency less predictable and just generally wastes resources.

jandrewrogers 2 days ago | parent [-]

Actually bad news. Most popular filesystems and filesystem configurations have limited and/or weak checksums, certainly much worse than you'd want for a database. 16-bit and 32-bit CRCs are common in filesystems.

This is a major reason databases implement their own checksums. Unfortunately, many open source databases have weak or non-existent checksums too. It is sort of an indefensible oversight.

fc417fc802 a day ago | parent | next [-]

Assuming that you expect corruption to be exceedingly rare what's wrong with a 1 in 2^16 or 1 in 2^32 failure rate? That's 4 9s and 9 9s respectively for detecting an event that you hardly expect to happen in the first place.

At 32 bits you're well into the realm of tail risks which include things like massive solar flares or the data center itself being flattened in an explosion or natural disaster.

Edit: I just checked a local drive for concrete numbers. It's part of a btrfs array. Relevant statistics since it was added are 11k power on hours, 24 TiB written, 108 TiB read, and 32 corruption events at the fs level (all attributable to the same power failure, no corruption before or since). I can't be bothered to compute the exact number but at absolute minimum it will be multiple decades of operation before I would expect even a single corruption event to go unnoticed. I'm fairly certain that my house is far more likely to burn down in that time frame.
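
Back-of-the-envelope version of that, treating the miss probability as 2^-32 per corruption event (drive numbers from above, purely illustrative):

    corruption_events = 32          # detected over ~11k power-on hours
    hours = 11_000
    p_miss = 2.0 ** -32             # chance a 32-bit checksum fails to flag one event

    events_per_year = corruption_events / hours * 24 * 365     # ~25 per year
    years_per_miss = 1 / (events_per_year * p_miss)
    print(round(events_per_year), f"{years_per_miss:.1e}")     # 25  1.7e+08 years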

lxgr 19 hours ago | parent | prev [-]

> Most popular filesystems and filesystem configurations have limited and/or weak checksums,

Because filesystems, too, mainly use them to detect inconsistencies introduced by partial or reordered writes, not random bit flips. That's also why most file systems only have them on metadata, not data.

hawk_ 2 days ago | parent | prev [-]

So when checksums are enabled and the DB process restarts or the host reboots, does the DB run the checksum over all the stored data? Sounds like it would take forever for the database to come online. But if it doesn't, it may not detect bitrot in time...?

TheDong 2 days ago | parent | prev | next [-]

> ZFS is usually not recommended for databases

Say more? I've heard people say that ZFS is somewhat slower than, say, ext4, but I've personally had zero issues running postgres on zfs, nor have I heard any well-reasoned arguments not to.

> What filesystems in the wild typically provide for this is weaker than what is advisable for a database, so databases should bring their own implementation.

Sorry, what? Just yesterday matrix.org had a post about how they (using ext4 + postgres) had disk corruption which led to postgres returning garbage data: https://matrix.org/blog/2025/07/postgres-corruption-postmort...

The corruption was likely present for months or years, and postgres didn't notice.

ZFS, on the other hand, would have noticed during a weekly scrub and complained loudly, letting you know a disk had an error, letting you attempt to repair it if you used RAID, etc.

It's stuff like that post that is exactly why I run postgres on ZFS.

If you've got specifics about what you mean by "databases should bring their own implementation", I'd be happy to hear it, but I'm having trouble thinking of any sorta technically sound reason for "databases actually prefer it if filesystems can silently corrupt data lol" being true.
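
For completeness, the scrub workflow is just (pool name is a placeholder):

    zpool scrub tank        # read every block in the pool and verify its checksum
    zpool status -v tank    # report checksum error counts and any files with permanent errors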

zaarn 2 days ago | parent | next [-]

SQLite on ZFS needs the Fsync behaviour to be off, otherwise SQLite will randomly hang the application as the fsync will wait for the txg to commit. This can take a minute or two, in my experience.

Btrfs is a better choice for SQLite.

supriyo-biswas 2 days ago | parent | next [-]

Btw this concern also applies to other databases, although it probably manifests in the worst way in SQLite. Essentially, you're doing a WAL on top of the filesystem's own WAL-like recovery mechanism.

zaarn a day ago | parent [-]

I've not observed other databases locking up on ZFS; Postgres and MySQL both function just fine without needing to modify any settings.

throw0101b 2 days ago | parent | prev [-]

> SQLite on ZFS needs the Fsync behaviour to be off […]

    zfs set sync=disabled mydata/mydb001
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
zaarn a day ago | parent [-]

As noted in a sibling comment, this causes corruption on power failure.

jandrewrogers 2 days ago | parent | prev [-]

The point is that a database cannot rely on being deployed on a filesystem with proper checksums.

Ext4 uses 16-/32-bit CRCs, which is very weak for storage integrity in 2025. Many popular filesystems for databases are similarly weak. Even if they have a strong option, the strong option is not enabled by default. In real-world Linux environments, the assumption that the filesystem has weak checksums is usually true.

Postgres has (IIRC) 32-bit CRCs, but they are not enabled by default. That is also much weaker than you would expect from a modern database. Open source databases do not have a good track record of providing robust corruption detection generally, nor do the filesystems they often run on. It is a systemic problem.

ZFS doesn't support features that high-performance database kernels use and is slow, particularly on high-performance storage. Postgres does not use any of those features, so it matters less if that is your database. XFS has traditionally been the preferred filesystem for databases on Linux and Ext4 will work. Increasingly, databases don't use external filesystems at all.

mardifoufs 2 days ago | parent [-]

I know MySQL has checksums by default; how does it compare? Is it useful or is it similarly weak?

jandrewrogers 2 days ago | parent [-]

I don't know but LLMs seem to think it uses a 32-bit CRC like e.g. Postgres.

In fairness, 32-bit CRCs were the standard 20+ years ago. That is why all the old software uses them and CPUs have hardware support for computing them. It is a legacy thing that just isn't a great choice in 2025.

lxgr 2 days ago | parent | prev | next [-]

No, competent systems just need layers that, taken together, prevent data corruption.

One possible instance of that is a database providing its own data checksumming, but another perfectly valid one is running a database that does not on top of a lower layer with a sufficiently low data corruption rate.

johncolanduoni 2 days ago | parent | prev [-]

It's not great for databases that do updates in place. Log-structured merge databases (which most newer DB engines are) work fine with its copy-on-write semantics.

lxgr 2 days ago | parent | prev [-]

I believe it's also because of this (from https://www.sqlite.org/wal.html):

> [...] The checkpoint does not normally truncate the WAL file (unless the journal_size_limit pragma is set). Instead, it merely causes SQLite to start overwriting the WAL file from the beginning. This is done because it is normally faster to overwrite an existing file than to append.

Without the checksum, a new WAL entry might cleanly overwrite an existing longer one in a way that still looks valid (e.g. "A|B" -> "C|B", where the stale "B" still parses as a valid frame, rather than "C|<garbage>"), at least without doing an (expensive) scheme of overwriting B with invalid data, fsyncing, and then overwriting A with C and fsyncing again.

In other words, the checksum allows an optimized write path with fewer expensive fsync/truncate operations; it's not a sudden expression of mistrust of lower layers that doesn't exist in the non-WAL path.
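
A toy illustration of that (not SQLite's real frame format; the real WAL uses salt values in the header plus a cumulative checksum over the frames): after the overwrite, the stale frame's checksum chain no longer verifies, so readers stop at it without any truncate or extra fsync.

    import zlib

    def frame(payload, running, salt):
        # each frame carries a checksum chained over everything before it, mixed with the WAL's salt
        running = zlib.crc32(salt + payload, running)
        return (payload, running), running

    # old WAL generation: frames A then B
    run = 0
    A, run = frame(b"A", run, b"salt-gen1")
    B, run = frame(b"B", run, b"salt-gen1")

    # after a checkpoint the WAL restarts from the beginning with a new salt and
    # overwrites A with C -- the stale B is still physically present after it
    run = 0
    C, run = frame(b"C", run, b"salt-gen2")
    wal = [C, B]

    # a reader replays the chain with the current salt and stops at the first mismatch
    run, valid = 0, []
    for payload, stored in wal:
        run = zlib.crc32(b"salt-gen2" + payload, run)
        if run != stored:
            break
        valid.append(payload)
    print(valid)   # [b'C'] -- the stale frame is ignored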