all* of that + sharding -> https://sqlite.org/lang_attach.html
ex: main.db + fts.db. reads and writes to main.db stay available; updating the fts index never blocks the main database — it only needs to read, and those reads can be chunked and delayed. fts.db keeps the index plus a cursor table (an id or a last-change timestamp)
could also use a shard to handle tables for metrics, or simply move old data out of main.db
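a rough sketch of that main.db + fts.db pattern (schema, table names, and chunk size are all made up for illustration; in-memory dbs stand in for the two files, and a plain table stands in for a real fts5 index):

```python
import sqlite3

# in production these would be 'main.db' and 'fts.db' on disk
main = sqlite3.connect(":memory:")
main.execute("ATTACH DATABASE ':memory:' AS fts")

main.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
# stand-in for: CREATE VIRTUAL TABLE fts.docs_idx USING fts5(body)
main.execute("CREATE TABLE fts.docs_idx (rowid INTEGER PRIMARY KEY, body TEXT)")
# the cursor table: remembers the last main.db row we indexed
main.execute("CREATE TABLE fts.cursor (id INTEGER)")
main.execute("INSERT INTO fts.cursor(id) VALUES (0)")

def index_chunk(limit=100):
    # read rows newer than the cursor, push them into the fts shard,
    # then advance the cursor — runs whenever, in small delayed batches
    last = main.execute("SELECT id FROM fts.cursor").fetchone()[0]
    rows = main.execute(
        "SELECT id, body FROM docs WHERE id > ? ORDER BY id LIMIT ?",
        (last, limit),
    ).fetchall()
    if rows:
        main.executemany(
            "INSERT INTO fts.docs_idx(rowid, body) VALUES (?, ?)", rows)
        main.execute("UPDATE fts.cursor SET id = ?", (rows[-1][0],))
        main.commit()
    return len(rows)
```

the same cursor trick works for moving old rows into an archive shard: select past the cursor, insert into the attached db, delete from main, advance.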
* some examples:
conn = sqlite3.connect("data.db")
conn.execute("PRAGMA journal_mode=WAL") # concurrent reads (see above)
conn.execute("PRAGMA synchronous=NORMAL") # fsync at checkpoint, not every commit
conn.execute("PRAGMA cache_size=-62500") # ~61 MB page cache (negative = KB)
conn.execute("PRAGMA temp_store=MEMORY") # temp tables and indexes in RAM
conn.execute("PRAGMA busy_timeout=5000") # wait 5s on lock instead of failing
edit: orms will obliterate your performance, use raw queries instead — just make sure to run static analysis on your code base to catch sqli bugs

my replies are being ratelimited, so let me add here:
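to make the raw-query point concrete, parameterized queries are what keep raw sql safe, and string interpolation is the sqli bug static analysis should flag (table and values here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users(name) VALUES ('alice')")

# bad: f-string interpolation — attacker-controlled input becomes sql
# conn.execute(f"SELECT id FROM users WHERE name = '{user_input}'")

# good: ? placeholders — sqlite binds the value, never parses it as sql
user_input = "alice'; DROP TABLE users; --"
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

the malicious string just matches nothing; the users table survives.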
the heavy-duty server process other databases have is doing the load-bearing work that folks tend to complain sqlite can't do
the real dbms's are doing mostly the same work that sqlite does, you just don't have to think about it once they're set up. behind that chunky server process the database is still writing data to a filesystem, dealing with transaction locks, etc.
by default sqlite gives you a very stable database file: when it tells you a transaction completed, it really did complete, and there will be no data loss if the machine crashes.
you can decide to waive some or all of those guarantees in exchange for performance, and it doesn't even have to be an all-or-nothing situation.
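for example, the synchronous pragma is a per-connection dial, not a global switch (a sketch — which level fits is entirely workload-dependent, not a recommendation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# full durability: fsync on every commit — slowest, safest
conn.execute("PRAGMA synchronous=FULL")

# faster: fsync only at wal checkpoints — a machine crash may lose the
# most recent commits, but the database file won't be corrupted
conn.execute("PRAGMA synchronous=NORMAL")

# fastest: no fsync at all — fine for scratch or bulk-load data you
# can rebuild from somewhere else
conn.execute("PRAGMA synchronous=OFF")

# you can read the current setting back (OFF=0, NORMAL=1, FULL=2)
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
```

a bulk-import connection can run at OFF while the rest of the app keeps NORMAL or FULL — that's the not-all-or-nothing part.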