akorotkov 4 days ago
We will eventually add the SERIALIZABLE isolation level to OrioleDB, but right now it's not our highest priority. Let me explain why. SSI (serializable snapshot isolation) in PostgreSQL comes with significant shortcomings:

1) Overhead. SSI takes a heavyweight lock on every involved index page or heap tuple, even for reads. The overhead of SSI was initially measured at ~10%, but core scalability has come a long way since then, and that many heavyweight locks can slow a typical workload on a multicore machine down several times over.

2) Retries. SSI requires the user to be able to repeat any transaction after a serialization failure. Even a read-only transaction must be DEFERRABLE, or it might be aborted with a serialization failure because it "saw impossible things" and needs a retry.

In contrast, the concurrency problems of writing transactions are usually not hard to resolve with explicit row-level and advisory locks, while REPEATABLE READ is enough for reporting. Frankly speaking, in my whole career I have not seen a single case where the SERIALIZABLE isolation level was justified.
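Roughly the pattern I have in mind, as a sketch only: explicit locks for writers, REPEATABLE READ for reporting. It assumes PostgreSQL accessed through psycopg2, and the accounts table, ids, and amounts are made up for illustration.

    import psycopg2
    from psycopg2 import extensions

    conn = psycopg2.connect("dbname=app")  # placeholder DSN

    # Writer: serialize conflicting writers explicitly instead of relying on SSI.
    with conn, conn.cursor() as cur:
        # Row-level lock: blocks concurrent writers touching the same account row.
        cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (42,))
        (balance,) = cur.fetchone()
        # Advisory lock: serializes a whole logical operation that no single row
        # covers; released automatically at commit.
        cur.execute("SELECT pg_advisory_xact_lock(%s)", (42,))
        cur.execute("UPDATE accounts SET balance = %s WHERE id = %s", (balance - 10, 42))

    # Reporting: REPEATABLE READ gives one consistent snapshot for the whole report,
    # with no risk of a serialization failure.
    conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_REPEATABLE_READ,
                     readonly=True)
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*), sum(balance) FROM accounts")
        print(cur.fetchone())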
tux3 4 days ago
I had a case just recently where we needed to enforce a no-overlap constraint too complicated to express in an index (recurring time ranges). Any time you have to check a constraint manually, you can't just do it before or after the write, because two REPEATABLE READ write transactions will not see each other's INSERTs. You need something like a lock, a two-phase commit, or SERIALIZABLE isolation for the writes. Advisory locks have sharp edges, and 2PC is not so simple either; there is a lot that can go wrong. With SERIALIZABLE you do need to retry on conflict, but the serialization anomalies can usually be kept to a reasonably fine granularity, and an explicit retry feels safer than risking a livelock under contention.
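To sketch what the SERIALIZABLE variant looks like (assuming PostgreSQL via psycopg2; the bookings schema and the simplified overlap predicate are placeholders, since the real check would expand the recurrence rules):

    import psycopg2
    from psycopg2 import extensions

    conn = psycopg2.connect("dbname=scheduler")  # placeholder DSN
    # Under REPEATABLE READ two racing transactions can both pass the check and
    # both insert. Under SERIALIZABLE one of them aborts with SQLSTATE 40001
    # (psycopg2.errors.SerializationFailure) and gets retried instead.
    conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_SERIALIZABLE)

    def book(resource_id, starts, ends):
        with conn, conn.cursor() as cur:
            # Manual constraint check: a plain range comparison stands in for
            # the real recurrence-aware overlap test.
            cur.execute(
                """SELECT 1 FROM bookings
                   WHERE resource_id = %s AND starts < %s AND ends > %s
                   LIMIT 1""",
                (resource_id, ends, starts))
            if cur.fetchone():
                raise ValueError("overlapping booking")
            cur.execute(
                "INSERT INTO bookings (resource_id, starts, ends) VALUES (%s, %s, %s)",
                (resource_id, starts, ends))
        # On SerializationFailure the caller simply retries the whole book() call.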
| ||||||||||||||||||||||||||||||||||||||||||||
amluto 4 days ago
I use SERIALIZABLE on a database for which I have very low writer concurrency (I can count the total writer connections on one or two hands) and where I really, really, really don't want to deal with the fallout if a transaction commits and results in an impossible state.

I use MySQL, not Postgres, for this application (for better or for worse), and I can absolutely generate a bad state if I drop MySQL to a level below SERIALIZABLE; I've tried it. (Yes, I could probably avoid this with SELECT FOR UPDATE, but I don't trust MySQL enough, and I get adequate performance with SERIALIZABLE.) To make SERIALIZABLE work well, I wrap all the transactions in retry loops and deal with MySQL's obnoxious reporting of which errors are worthy of retries.

(Aside from bad committed states, I've also seen MySQL return results from a single straightforward query that cannot correspond to any state of the table. It was something like selecting MIN, MAX and COUNT(*) from a table and getting min and max that differed while count = 1. It's been a while, so I could be remembering wrong.)
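The retry loop is roughly this shape (a sketch, not my actual code; it assumes mysql-connector-python, and treating deadlock 1213 and lock wait timeout 1205 as the retryable errors is an assumption you would want to tune):

    import mysql.connector
    from mysql.connector import errors

    conn = mysql.connector.connect(user="app", database="app")  # placeholder credentials

    RETRYABLE_ERRNOS = {1213, 1205}  # ER_LOCK_DEADLOCK, ER_LOCK_WAIT_TIMEOUT

    def run_serializable(conn, txn_body, max_attempts=5):
        """Run txn_body(cursor) as one SERIALIZABLE transaction, retrying on conflicts."""
        for attempt in range(1, max_attempts + 1):
            cur = conn.cursor()
            try:
                # Applies to the next transaction only.
                cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
                cur.execute("START TRANSACTION")
                txn_body(cur)
                conn.commit()
                return
            except errors.DatabaseError as exc:
                conn.rollback()
                if exc.errno in RETRYABLE_ERRNOS and attempt < max_attempts:
                    continue  # lost a conflict with a concurrent transaction; try again
                raise
            finally:
                cur.close()

    # Example: the whole unit of work goes inside the retried function.
    run_serializable(conn, lambda cur: cur.execute(
        "UPDATE counters SET n = n + 1 WHERE id = 1"))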