Testing Postgres race conditions with synchronization barriers (lirbank.com)
77 points by lirbank 9 hours ago | 40 comments
anvevoice 7 hours ago | parent | next [-]

One pattern I've found useful alongside this: Postgres advisory locks (pg_advisory_xact_lock) for cases where the contention isn't row-level but logic-level. For example, if two requests try to create the "first" resource of a type - there's no existing row to SELECT FOR UPDATE against.

Advisory locks let you serialize on an arbitrary key (like a hash of the entity type + parent ID) without needing a dummy row or separate lock table. They auto-release on transaction end, so no cleanup.
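
A minimal sketch of the shape (names assumed; hashtext() is Postgres's built-in text hash):

    BEGIN;
    -- serialize on an arbitrary key derived from entity type + parent id
    SELECT pg_advisory_xact_lock(hashtext('widget:' || $1::text));
    -- safe to check-then-insert the "first" row here;
    -- the lock auto-releases on COMMIT or ROLLBACK
    INSERT INTO widgets (parent_id, name) VALUES ($1, $2);
    COMMIT;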

The barrier testing approach from the article would work nicely here too - inject the barrier between acquiring the advisory lock and the subsequent insert, then verify the second transaction blocks until the first commits.

lirbank 7 hours ago | parent | next [-]

Nice - that's a good case for barriers too. When there's no row to SELECT FOR UPDATE against, you'd inject the barrier after acquiring the advisory lock and verify the second transaction blocks until the first commits.

klysm 6 hours ago | parent | prev | next [-]

Seems like a good way to materialize the conflict.

deepsun 5 hours ago | parent | prev [-]

I always did "INSERT ... ON CONFLICT DO NOTHING".
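
Roughly, assuming a unique constraint backs the "first resource" rule (names made up):

    INSERT INTO widgets (parent_id, name)
    VALUES ($1, $2)
    ON CONFLICT (parent_id) DO NOTHING;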

piskov 6 hours ago | parent | prev | next [-]

That whole article should have been:

Use transactions table (just a name, like orders)

On it have an Insert trigger.

It should make a single update with a simple “update … set balance = balance + amount where accountId = id”. This will be atomic thanks to the db engine itself.

Also add a check constraint (balance >= 0) so it can never become negative, even if you have thousands of simultaneous payments. If it would become negative, the check throws, the insert trigger rethrows, no insert happens, and your backend code catches it.

That’s it: insert-trigger and check constraint.

No need for explicit locking, no stored procedures, no locks in your backend either, nada. Just a simple insert of a row. No matter the load and concurrent users, it will work like magic. Blazingly fast too.
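
Roughly, with assumed table and column names:

    ALTER TABLE accounts
      ADD CONSTRAINT balance_nonnegative CHECK (balance >= 0);

    CREATE FUNCTION apply_transaction() RETURNS trigger AS $$
    BEGIN
      -- atomic read-modify-write; the check constraint aborts the
      -- whole insert if the balance would go negative
      UPDATE accounts SET balance = balance + NEW.amount
      WHERE id = NEW.account_id;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER on_transaction_insert
      BEFORE INSERT ON transactions
      FOR EACH ROW EXECUTE FUNCTION apply_transaction();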

That’s why there is ACID in DBs.

Shameless plug: learn your tool. Don’t approach Postgresql/Mssql/whathaveyousql like you’re a backend engineer. DB is not a txt file.

mfcl 11 minutes ago | parent | next [-]

The point of the article is to explain barriers and demonstrate them.

The logic used for crediting amounts of money is not important.

valenterry an hour ago | parent | prev | next [-]

> Shameless plug: learn your tool. Don’t approach Postgresql/Mssql/whathaveyousql like you’re a backend engineer.

Erm, knowing and understanding how to use your database is a bread and butter skill of a backend engineer.

shshshsjjjj 3 hours ago | parent | prev [-]

> no locks in you backend

PG is still handling the locks for you, so this isn't a bulletproof solution, and - as always - depending on your use case, scale, etc., it may or may not work.

> No matter the load and concurrent users it will work like magic

Postgres will buckle updating a single row at a certain scale.

————-

Regardless, this article was about testing a type of scenario that is commonly not tested. You don't always have a great tool like PG on hand handing you solutions, so this kind of testing is needed.

Arifcodes an hour ago | parent | prev | next [-]

Using SERIALIZABLE everywhere sounds nice in theory but it tanks throughput on write-heavy workloads. In practice, most teams I've worked with use a mix: SERIALIZABLE for the critical financial paths, READ COMMITTED + explicit locking for the rest.

The advisory lock pattern mentioned in the thread is underrated. We used pg_advisory_xact_lock for exactly the "first resource creation" race condition on a multi-tenant SaaS. Cleaner than dummy rows and no cleanup needed.

throwaway2ge5hg 7 hours ago | parent | prev | next [-]

Postgres has SERIALIZABLE transaction isolation level. Just use it and then you never have to worry about any of these race conditions.

And if for some reason you refuse to, then this "barrier" or "hooks" approach to testing will in practice not help. It requires you to already know the potential race conditions, but if you are already aware of them then you will already write your code to avoid them. It is the non-obvious race conditions that should scare you.

To find these, you should use randomized testing that runs many iterations of different interleavings of transaction steps. You can build such a framework that will hook directly into your individual DB query calls. Then you don't have to add any "hooks" at all.

But even that won't find all race condition bugs, because it is possible to have race conditions surface even within a single database query.

You really should just use SERIALIZABLE and save yourself all the hassle and effort and spending hours writing all these tests.

LgWoodenBadger 2 hours ago | parent | next [-]

“because it is possible to have race conditions surface even within a single database query.”

This brought back awful memories of MS SQLServer and JDBC. Way back when, maybe Java 1.5 or so, SQLServer would deadlock between connections when all they were doing was executing the exact same statement. Literally. Not the same general statement with different parameters.

danielheath 5 hours ago | parent | prev | next [-]

SERIALIZABLE is really quite hard to retrofit to existing apps; deadlocks, livelocks, and “it’s slow” show up all over the place when you switch it on.

Definitely recommend starting new codebases with it enabled everywhere.

sealeck 4 hours ago | parent [-]

Do you have examples of deadlocks/livelocks you've encountered using SERIALIZABLE? My understanding was that the transaction will fail on conflict (and should then be retried by the application - wrapping existing logic in a retry loop can usually be done without _too_ much effort)...

lirbank 7 hours ago | parent | prev | next [-]

Good call, SERIALIZABLE is a strong option - it eliminates a whole class of bugs at the isolation level. The trade-off is your app needs to handle serialization failures with retry logic, which introduces its own complexity. That retry logic itself needs testing, and barriers work for that too.

On randomized testing - that actually has the same limitation you mentioned about barriers: you need to know where to point it. And without coordination, the odds of two operations overlapping at exactly the wrong moment are slim. You'd need enormous pressure to trigger the race reliably, and even then a passing run doesn't prove much. Barriers make the interleaving deterministic so a pass actually means something.
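
For reference, a minimal sketch of such a retry wrapper (the db client shape and Tx type are placeholders; 40001/40P01 are Postgres's serialization_failure and deadlock_detected codes):

    type Tx = unknown; // stand-in for the client's transaction handle
    declare const db: {
      transaction<T>(
        fn: (tx: Tx) => Promise<T>,
        opts?: { isolationLevel: string },
      ): Promise<T>;
    };

    async function withSerializableRetry<T>(
      fn: (tx: Tx) => Promise<T>,
      maxAttempts = 5,
    ): Promise<T> {
      for (let attempt = 1; ; attempt++) {
        try {
          return await db.transaction(fn, { isolationLevel: "serializable" });
        } catch (err: any) {
          // 40001 = serialization_failure, 40P01 = deadlock_detected
          const retryable = err?.code === "40001" || err?.code === "40P01";
          if (!retryable || attempt >= maxAttempts) throw err;
        }
      }
    }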

theptip 3 hours ago | parent | prev [-]

Not a silver bullet. When you use serializable you have more opportunities for deadlocks (cases which would otherwise be logic errors at weaker isolation levels).

Serializable just means that within the transaction your logic can naively assume it’s single threaded. It doesn’t magically solve distributed system design for you.

“Just use random testing” isn’t really an answer. Some race conditions only show up with pathological delays on one thread.

theptip 3 hours ago | parent | prev | next [-]

I’ve idly toyed with this problem as well, I think there’s a good opportunity to build a nice framework in Python with monkeypatching (or perhaps in other languages using DB/ORM middleware) so you don’t need to modify the code under test.

I think you can do better than explicit barrier() calls. My hunch is the test middleware layer can intercept calls and impose a deterministic ordering.
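
Something in this shape, maybe (all names hypothetical):

    // A test-side scheduler gates every query so the test picks the
    // interleaving deterministically. Labels assumed unique per pending query.
    type Release = () => void;

    class Scheduler {
      private waiting = new Map<string, Release>();

      // called by the patched client before each query executes
      gate(label: string): Promise<void> {
        return new Promise((resolve) => this.waiting.set(label, resolve));
      }

      // called by the test to let a specific query proceed
      release(label: string): void {
        const go = this.waiting.get(label);
        if (!go) throw new Error(`no query waiting at ${label}`);
        this.waiting.delete(label);
        go();
      }
    }

    // Monkeypatch idea: wrap the client's query method in place.
    function instrument(
      client: { query: (sql: string) => Promise<unknown> },
      sched: Scheduler,
    ) {
      const original = client.query.bind(client);
      client.query = async (sql: string) => {
        await sched.gate(sql); // block until the test releases this query
        return original(sql);
      };
    }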

(There are a few papers around looking into more complex OS level frameworks to systematically search for concurrency bugs, but these would be tough to drop into the average web app.)

scottlamb 7 hours ago | parent | prev | next [-]

It'd be interesting to see a version of this that tries all the different interleavings of PostgreSQL operations between the two (or N) tasks. https://crates.io/crates/loom does something like this for Rust code that uses synchronization primitives.

lirbank 7 hours ago | parent [-]

Interesting! The barrier approach is more targeted: you specify the exact interleaving you want to test rather than exploring all of them. Trade-off is you need to know which interleavings matter, but you get deterministic tests that run against a real database instead of a simulated runtime. Exploring exhaustive interleaving testing against a real Postgres instance could be a fun follow-up - I'd be curious if it's practical.

scottlamb 5 hours ago | parent [-]

I think you could still do it against a real database—you're already setting it up to a known state before each test, right? Obviously there'd be more runs but I'd expect (hope) that each task would be sufficiently small that the number of permutations would stay within reason.

There would be some challenges for sure. Likely optimistic concurrent patterns would require an equivalent of loom's `yield_now` [1] to avoid getting stuck. And you'd probably need a way to detect one transaction waiting for another's lock to get out of situations like your update lock vs barrier example. I vaguely recall PostgreSQL might have some system catalog table for that or something.

[1] https://docs.rs/loom/0.7.2/loom/#yielding
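
(For the lock-waiting detection, pg_stat_activity plus pg_blocking_pids() seems like the relevant piece - a sketch:)

    -- sessions currently blocked on another session's lock
    SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;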

lirbank 4 hours ago | parent [-]

Yeah, the more I think about it, the more exciting this idea gets. The walkthrough in the article shows exactly why - I intentionally (to later show why that is wrong) place the barrier between the SELECT and UPDATE, which deadlocks instead of triggering the race. Getting the placement right requires reasoning about where the critical interleaving point is. An exhaustive approach would surface both outcomes automatically: this placement deadlocks, this one exposes the bug, this one passes. That would remove the hardest part of writing these tests.

reitzensteinm 4 hours ago | parent [-]

Martin Kleppmann has this tool that's quite relevant: https://martin.kleppmann.com/2014/11/25/hermitage-testing-th...

lirbank 4 hours ago | parent [-]

Oh that is super cool. Great prior art to study in combo with Loom. Very excited to dig in - imagine if there was an easy-to-use data race tester where you didn't have to figure out the interleaving points up front? Just point it at your code and let it find them. Exciting.

reitzensteinm 3 hours ago | parent [-]

Loom does exhaustive search, with clever methods to prune it. On real world programs, you have to set a limit to that because it obviously grows extremely quickly even with the pruning.

I've built something similar to Loom, except it's more focused on extensively modeling the C++11/Rust memory model (https://github.com/reitzensteinm/temper). My experience is that fairly shallow random concurrent fuzzing yields the vast majority of all concurrency bugs.

Antithesis (https://antithesis.com/) are probably the leaders of the pack in going deeper.

fake-name 24 minutes ago | parent | prev | next [-]

I'm confused.

> The simplest case: no transaction, just a SELECT and an UPDATE with a barrier between them:

There is no context where you do not have a transaction. Postgres requires them.

It's likely that the library the author is using is doing automatic implicit transactions, but it seems like the author needs to understand their tools a bit better.

lirbank a minute ago | parent [-]

You're right, Postgres wraps every statement in an implicit transaction. The point of that first example is that the SELECT and UPDATE are in separate auto-committed transactions - there's no explicit transaction block wrapping both. So another connection can change the value between your SELECT and your UPDATE.
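
In other words, roughly (accounts/balance standing in for the article's example table):

    -- without BEGIN, each statement autocommits on its own:
    SELECT balance FROM accounts WHERE id = 1;
    -- another connection can commit a change right here
    UPDATE accounts SET balance = 900 WHERE id = 1; -- possibly stale write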

haliliceylan 8 hours ago | parent | prev | next [-]

That's not a PostgreSQL problem, that's your code.

IMHO you should never write code like that. You can either do UPDATE employees SET salary = salary + 500 WHERE employee_id = 101;

Or, if it's more complex, just use a STORED PROCEDURE - there's no point in using a database if you're going to do all the transactional things in JS.

lirbank 8 hours ago | parent | next [-]

Fair point - atomic updates like SET salary = salary + 500 sidestep the race condition entirely for simple cases. The examples are intentionally simplified to isolate the concurrency behavior. The barrier pattern is more relevant when you have read-modify-write operations that involve application logic between the read and the write - those can't always collapse into a single UPDATE.

Diggsey 6 hours ago | parent | prev | next [-]

Stored procedures don't eliminate serialization anomalies unless they are run inside a transaction that is itself SERIALIZABLE.

There's essentially no difference between putting the logic in the app vs a stored procedure (other than round trip time)

lirbank 8 hours ago | parent | prev | next [-]

Here's a real-world example where atomic updates aren't an option - an order status transition that reads the current status from one table, validates the transition, and inserts into another:

    await db().transaction(async (tx) => {
      await hooks?.onTxBegin?.();

      const [order] = await tx.select().from(orders)
        .where(eq(orders.id, input.id))
        .for("update");

      const [status] = await tx.select().from(orderStatuses)
        .where(eq(orderStatuses.orderId, input.id))
        .orderBy(desc(orderStatuses.createdAt))
        .limit(1);

      if (input.status === status.code)
        throw new Error("Status already set");

      await tx.insert(orderStatuses).values({ ... });
    });

You need the transaction + SELECT FOR UPDATE because the validation depends on current state, and two concurrent requests could both pass the duplicate check. The hooks parameter is the barrier injection point from the article - that's how you test that the lock actually prevents the race.

erpellan 7 hours ago | parent | next [-]

The standard pattern to avoid select for update (which can cause poor performance under load) is to use optimistic concurrency control.

Add a numeric version column to the table being updated, read it in the application layer, and use the value you saw in the WHERE clause of the UPDATE while incrementing it in the SET. If you see ‘0 rows updated’, it means you were beaten in a race and should replay the operation.
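
Roughly, assuming a version column on the table:

    UPDATE orders
    SET status = $1, version = version + 1
    WHERE id = $2
      AND version = $3;  -- the version read earlier
    -- 0 rows updated => beaten in the race; re-read and retry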

tux3 7 hours ago | parent | next [-]

I don't think such a broad recommendation will be good for most people; it really depends.

Optimistic updates look great when there is no contention, and they will beat locking in a toy benchmark, but if you're not very careful they can cause insane amplification under load.

It's a similar trap to spinlocks. People keep re-discovering this great performance hack that avoids the slow locks in the standard library. And some day the system has a spike that creates contention, and now you have 25 instances with 24 of them spinning like crazy, slowing to a crawl the only one that could be making progress.

It's possible to implement this pattern correctly, and it can be better in some specific situations. But a standard FOR UPDATE lock will beat the average badly implemented retry loop nine times out of ten.

lirbank 7 hours ago | parent | prev [-]

Good point. The barrier pattern from the article applies to both approaches - whether you're using pessimistic locks or optimistic version checks, it's good to verify that the concurrency handling actually works. Barriers let you test that your version check correctly rejects the stale update, the same way they test that your lock prevents the race.

codys 7 hours ago | parent | prev [-]

Seems you could use a single SQL statement for that particular formulation. Something like this, using CTEs, is possible, though you could alternatively reformulate them as subqueries. (Note: not sure how the select of orders is intended to be used, so the below doesn't use it, but it does obtain it as an expression to be used.)

    WITH
     o AS (
      SELECT id FROM orders
      WHERE orders.id = $1
     ),
     os AS (
      SELECT code FROM orderStatuses
      WHERE orderStatuses.orderId = $1
      ORDER BY orderStatuses.createdAt DESC
      LIMIT 1
     )
     INSERT INTO orderStatuses (...)
     SELECT ...
     WHERE EXISTS (SELECT 1 FROM os WHERE os.code != $2)
     RETURNING ...something including the status-differ check...
Does something like this work with postgres's default behavior?

lirbank 7 hours ago | parent [-]

Absolutely - if you can express the whole operation as a single atomic statement, that's the best outcome. No locks needed, no race to test for. The article is about what comes next: when the logic can't collapse into one query, how do you verify your concurrency handling actually works?

andrenotgiant 6 hours ago | parent | prev [-]

Is there any good reason to use stored procedures in 2026?

scottlamb 4 hours ago | parent [-]

I'd think so. Stored procedures let you do multi-statement sequences in fewer round trips. In 2026 larger systems are as likely as ever to run PostgreSQL on a different machine (or machines) than the application server. While latency between the two generally goes down over time, it's still not nothing. You may care about the latency of individual operations or the throughput impact of latency while holding a lock (see Amdahl's law).
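
For instance, collapsing a two-statement sequence into a single CALL (a sketch; Postgres 11+, names assumed):

    CREATE PROCEDURE transfer(from_id int, to_id int, amount numeric)
    LANGUAGE sql AS $$
      UPDATE accounts SET balance = balance - amount WHERE id = from_id;
      UPDATE accounts SET balance = balance + amount WHERE id = to_id;
    $$;

    CALL transfer(1, 2, 500);  -- one round trip instead of several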

Of course, the reasons not to use stored procedures still apply. They're logic, but they're versioned with the database schema, not with your application, which can be a pain.

jijji 5 hours ago | parent | prev | next [-]

To avoid these conditions I have usually inserted a row into a lock table used for this purpose, creating a lock with a unique key and a few-minute expiry. Once the transaction is complete, it deletes the lock row. This way, simultaneous users only get the first lock and all other requests fail; if the timer expires, we assume the transaction never completed and it can be retried after a few minutes.
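
Something in this direction (schema assumed):

    CREATE TABLE locks (
      key text PRIMARY KEY,           -- unique key: second insert fails
      expires_at timestamptz NOT NULL
    );

    DELETE FROM locks WHERE key = $1 AND expires_at < now();  -- reap stale
    INSERT INTO locks (key, expires_at)
    VALUES ($1, now() + interval '5 minutes');
    -- ... do the work, then release:
    DELETE FROM locks WHERE key = $1;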

egedev 3 hours ago | parent | prev | next [-]

We hit exactly this kind of race condition in our Go + Postgres SaaS when handling concurrent waitlist signups. Two requests would read the current count, both pass the limit check, and both insert — exceeding the waitlist cap.

Ended up using SELECT FOR UPDATE on the waitlist row before the count check. Simple but effective. The barrier testing approach described here would have caught this much earlier in development instead of discovering it under load.

One thing I'd add: in Go, it's tempting to handle this at the application level with mutexes, but that breaks the moment you have multiple instances. Pushing the serialization down to Postgres is almost always the right call for correctness.

lirbank 3 hours ago | parent [-]

Hey, thanks for sharing this - these bugs are so easy to miss because everything works fine until you get real concurrent traffic. And yeah, the moment you have multiple instances, app-level mutexes can't save you.

HackerThemAll 7 hours ago | parent | prev [-]

Javascript developers learn kindergarten basics of transactions and SQL. LOL. Is it the camp "we don't need a degree to be programmers"?