faangguyindia 4 hours ago

I went from thinking "SQLite is a toy product, not reliable for real data" to "let's use SQLite for almost everything".

SQLite is very good if you can fit into the single-writer, multiple-readers pattern; you won't lose data if you use the correct settings, which take a minute of Googling to figure out.

Today, most of my apps are simply a Go binary + SQLite + a systemd service file.

I've yet to lose data. Performance is great and plenty for most apps.

michaelchisari 2 hours ago | parent [-]

The single writer is less of an issue in practice than it's made out to be. Modern NVMe drives are incredible, and it's trivial to get 5k writes per second in an optimized WAL setup, way more than most apps will ever need.
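A hypothetical micro-benchmark along these lines, using Python's sqlite3 module (the numbers you get depend entirely on your drive; the table name and row shape here are made up for illustration):

```python
import os
import sqlite3
import tempfile
import time

# Time N single-row transactions against a WAL-mode database file.
db_path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit: one txn per INSERT
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("CREATE TABLE kv (k INTEGER, v TEXT)")

n = 2000
start = time.perf_counter()
for i in range(n):
    conn.execute("INSERT INTO kv VALUES (?, ?)", (i, "x" * 64))
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} writes/sec")
```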

And even then, I've used a batch writer pattern to get 180k writes per second on a commodity VPS.
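A minimal sketch of one way to do a batch writer (this is my own illustration, not the commenter's code): a single writer thread drains a queue and commits up to batch_size rows per transaction, amortizing the per-commit cost across many rows.

```python
import os
import queue
import sqlite3
import tempfile
import threading

def batch_writer(db_path, q, batch_size=500):
    """Drain the queue, committing up to batch_size rows per transaction."""
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS events (payload TEXT)")
    done = False
    while not done:
        batch = [q.get()]                # block for the first item
        if batch[0] is None:             # sentinel: shut down
            break
        while len(batch) < batch_size:   # greedily fill the batch
            try:
                item = q.get_nowait()
            except queue.Empty:
                break
            if item is None:
                done = True
                break
            batch.append(item)
        with conn:                       # one transaction per batch
            conn.executemany("INSERT INTO events (payload) VALUES (?)", batch)
    conn.close()

db_path = os.path.join(tempfile.mkdtemp(), "batched.db")
q = queue.Queue()
t = threading.Thread(target=batch_writer, args=(db_path, q))
t.start()
for i in range(10_000):
    q.put((f"event-{i}",))
q.put(None)                              # signal shutdown
t.join()
```

Every other thread just puts rows on the queue, so the single-writer constraint never surfaces in application code.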

0123456789ABCDE 43 minutes ago | parent | next [-]

all* of that + sharding -> https://sqlite.org/lang_attach.html

ex: main.db + fts.db. reading and writing to main.db is always available; updating the fts index can be done without blocking the main database — it only needs to read, the reads can be chunked, and delayed. fts.db keeps the index + a cursor table — an id or last change ts

could also use a shard to handle tables for metrics, or simply move old data out of main.db
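a sketch of the main.db + fts.db split described above, using two in-memory databases as stand-ins (table names and the cursor layout here are illustrative; this assumes your sqlite build ships the fts5 extension, which cpython's usually does):

```python
import sqlite3

conn = sqlite3.connect(":memory:")                       # stand-in for main.db
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("ATTACH DATABASE ':memory:' AS fts")        # stand-in for fts.db
conn.execute("CREATE VIRTUAL TABLE fts.docs_idx USING fts5(body)")
conn.execute("CREATE TABLE fts.cursor (last_id INTEGER)")  # last row indexed

conn.executemany("INSERT INTO docs (body) VALUES (?)",
                 [("hello world",), ("goodbye world",)])

# Catch the index up in chunks; this only reads from main.db, so it never
# blocks writers there, and it can be delayed or rerun at any time.
last = conn.execute("SELECT COALESCE(MAX(last_id), 0) FROM fts.cursor").fetchone()[0]
rows = conn.execute("SELECT id, body FROM docs WHERE id > ? ORDER BY id LIMIT 1000",
                    (last,)).fetchall()
if rows:
    conn.executemany("INSERT INTO fts.docs_idx (rowid, body) VALUES (?, ?)", rows)
    conn.execute("DELETE FROM fts.cursor")
    conn.execute("INSERT INTO fts.cursor VALUES (?)", (rows[-1][0],))
    conn.commit()
```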

* some examples:

  conn = sqlite3.connect("data.db")
  conn.execute("PRAGMA journal_mode=WAL")        # concurrent reads (see above)
  conn.execute("PRAGMA synchronous=NORMAL")      # fsync at checkpoint, not every commit
  conn.execute("PRAGMA cache_size=-62500")       # ~61 MB page cache (negative = KiB)
  conn.execute("PRAGMA temp_store=MEMORY")       # temp tables and indexes in RAM
  conn.execute("PRAGMA busy_timeout=5000")       # wait 5s on lock instead of failing
edit: ORMs will obliterate your performance, use raw queries instead — just make sure to run static analysis on your codebase to catch SQLi bugs

my replies are being rate-limited, so let me add here

the heavy-duty server process that other databases have is doing the load-bearing work that folks complain sqlite can't do

the real DBMSs are doing mostly the same work that sqlite does, you just don't have to think about it once they're set up. behind that chunky server process the database is still writing data to a filesystem, dealing with transaction locks, etc.

by default sqlite gives you a very stable database file: when it tells you the transaction completed, it really did complete, and there will be no data loss if the machine crashes.

you can decide to waive some or all of those guarantees in exchange for performance, and it doesn't even have to be an all-or-nothing situation.
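to make the "not all-or-nothing" point concrete, a sketch of the durability dial (the pragma names are standard sqlite; the commentary is my reading of the docs):

```python
import os
import sqlite3
import tempfile

conn = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "data.db"))
conn.execute("PRAGMA journal_mode=WAL")
# synchronous=FULL   (2): fsync on every commit; durable even across power loss
# synchronous=NORMAL (1): in WAL mode, fsync only at checkpoints; a power cut
#                         can lose the most recent commits but cannot corrupt
#                         the database file
# synchronous=OFF    (0): no fsync at all; fastest, unsafe on power loss
conn.execute("PRAGMA synchronous=NORMAL")
level = conn.execute("PRAGMA synchronous").fetchone()[0]
print(level)  # 1 == NORMAL
```

each connection picks its own spot on the dial, so a metrics shard can run looser than the shard holding user data.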

hparadiz 32 minutes ago | parent [-]

Oh fun, something I have some metrics on. I just made this benchmark for every PHP ORM a few weeks ago for fun.

https://the-php-bench.technex.us/

There's a huge performance difference between memory and file storage within sqlite itself. Not even getting into tuning specifics.

Ringz 44 minutes ago | parent | prev [-]

I usually try to explain it like this: “Single writer” is rarely a real problem, because a writer is not slow. It writes exclusively, but very quickly.

"Batch writer pattern" is a good idea to get rid of expensive commits.