storystarling 4 hours ago
You hit those write limits surprisingly early if you use background workers, though. I had a project with very little user traffic that choked on SQLite simply because a few Celery workers were updating job statuses concurrently. It wasn't the volume of data, just the contention from the workers, that forced the switch to Postgres.
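[Editor's note: a minimal sketch of the contention pattern described above, assuming the usual setup of several worker processes updating a job-status table in one SQLite file. Table, column, and file names are hypothetical, not the commenter's actual project.]

```python
import multiprocessing
import sqlite3
import time

DB = "jobs.db"  # placeholder path

def worker(worker_id: int) -> None:
    # timeout=0 makes lock contention fail fast instead of silently retrying,
    # so the "database is locked" errors become visible.
    conn = sqlite3.connect(DB, timeout=0)
    for i in range(200):
        try:
            with conn:  # each UPDATE is its own write transaction
                conn.execute(
                    "UPDATE jobs SET status = ?, updated_at = ? WHERE id = ?",
                    (f"step-{i}", time.time(), worker_id),
                )
        except sqlite3.OperationalError as e:  # typically "database is locked"
            print(f"worker {worker_id}: {e}")

if __name__ == "__main__":
    conn = sqlite3.connect(DB)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, status TEXT, updated_at REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO jobs VALUES (?, ?, ?)",
        [(i, "new", 0.0) for i in range(4)],
    )
    conn.commit()

    procs = [multiprocessing.Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
```

Even tiny data volumes hit this, because the errors come from writers colliding on the single write lock, not from the amount of data.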
liuliu 4 hours ago
Are you sure it choked on writes, and not on reads plus writes? SQLite's default setup is inefficient in many ways (as are its default compilation options), and that often causes issues. (I am just asking: are you sure WAL is on?)
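[Editor's note: for reference, the settings usually meant by "fixing the defaults" look roughly like this from Python; the database path and the timeout values are placeholders.]

```python
import sqlite3

conn = sqlite3.connect("jobs.db", timeout=5.0)
conn.execute("PRAGMA journal_mode=WAL")    # persistent per database; readers no longer block the writer
conn.execute("PRAGMA synchronous=NORMAL")  # far fewer fsyncs; generally considered safe in WAL mode
conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5s on a lock instead of failing immediately
```

WAL still allows only one writer at a time, but readers stop blocking that writer, which is often the difference this question is getting at.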
adityaathalye 3 hours ago
SQLite emphatically warns against concurrent writes; it is not designed for that. I'm seeing these numbers on my current scratch benchmark:

- Events append to a 10M+ record table (~4+ GiB database).
- Reads are fetched from a separate computed table, which is trigger-updated from the append-only table.
- WAL mode ON, auto-vacuum ON.
- 1,600 sequential (single-process) read-after-write transactions, append-only, no batching.
- With a separate (sequential) writer process and, concurrently, two reader processes, I'm seeing 400+ append transactions/second into the append-only table (no batching), and a total of 41,000 reads per second doing `select *` on the trigger-updated table.

My schema is (deliberately) poor --- most of it is TEXT. (edit: add clarifying text)
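[Editor's note: a rough sketch of that layout, assuming an append-only event log plus a trigger-maintained computed table that readers query. Table, column, and trigger names are hypothetical, not the commenter's actual schema.]

```python
import sqlite3

conn = sqlite3.connect("events.db")  # placeholder path
conn.executescript("""
PRAGMA journal_mode=WAL;

-- Append-only event log; deliberately TEXT-heavy, echoing the benchmark above.
CREATE TABLE IF NOT EXISTS events (
    id        INTEGER PRIMARY KEY,
    entity_id TEXT NOT NULL,
    payload   TEXT NOT NULL
);

-- Computed table that reads go against, so readers never scan the big log.
CREATE TABLE IF NOT EXISTS entity_state (
    entity_id TEXT PRIMARY KEY,
    payload   TEXT NOT NULL,
    version   INTEGER NOT NULL
);

-- Trigger keeps the computed table in sync on every append.
CREATE TRIGGER IF NOT EXISTS events_to_state
AFTER INSERT ON events
BEGIN
    INSERT OR REPLACE INTO entity_state (entity_id, payload, version)
    VALUES (
        NEW.entity_id,
        NEW.payload,
        COALESCE((SELECT version FROM entity_state WHERE entity_id = NEW.entity_id), 0) + 1
    );
END;
""")

# Writer: a single sequential process appending events, one transaction each (no batching).
with conn:
    conn.execute("INSERT INTO events (entity_id, payload) VALUES (?, ?)", ("job-1", "..."))

# Readers: separate connections/processes selecting from the small computed table.
print(conn.execute("SELECT * FROM entity_state").fetchall())
```

The key design point is that there is exactly one writer; the concurrency is all on the read side, which WAL handles well.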