mickeyp 6 days ago

SQLite is a cracking database -- I love it -- that is let down by its awful defaults in service of 'backwards compatibility.'

You need a brace of PRAGMAs to get it to behave reasonably sanely if you do anything serious with it.

tejinderss 6 days ago | parent | next [-]

Do you know any good default PRAGMAs that one should enable?

mickeyp 6 days ago | parent | next [-]

These are my PRAGMAs and not your PRAGMAs. Be very careful about blindly copying something that may or may not match your needs.

    PRAGMA foreign_keys=ON
    PRAGMA recursive_triggers=ON
    PRAGMA journal_mode=WAL
    PRAGMA busy_timeout=30000
    PRAGMA synchronous=NORMAL
    PRAGMA cache_size=10000
    PRAGMA temp_store=MEMORY
    PRAGMA wal_autocheckpoint=1000
    PRAGMA optimize <- run on tx start
Note that I do not use auto_vacuum, as DELETEs are uncommon in my workflows and I am fine with the trade-off; if I do need it, I can always run the PRAGMA.

defer_foreign_keys is useful if you understand the pros and cons of enabling it.
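Roughly, applying a per-connection PRAGMA list like the one above with Python's stdlib sqlite3 module might look like this (a sketch; the path is whatever your app uses, and you should trim the list to your own needs):

```python
import sqlite3

# Per-connection PRAGMAs from the list above. journal_mode and
# wal_autocheckpoint are database-level, but re-issuing them is harmless.
PRAGMAS = [
    "PRAGMA foreign_keys=ON",
    "PRAGMA recursive_triggers=ON",
    "PRAGMA journal_mode=WAL",
    "PRAGMA busy_timeout=30000",
    "PRAGMA synchronous=NORMAL",
    "PRAGMA cache_size=10000",
    "PRAGMA temp_store=MEMORY",
    "PRAGMA wal_autocheckpoint=1000",
]

def connect(path):
    """Open a connection and apply the PRAGMAs before handing it out."""
    conn = sqlite3.connect(path)
    for pragma in PRAGMAS:
        conn.execute(pragma)
    return conn
```

The point is to route every connection through one helper like this, so no code path ends up with the (unsafe) defaults.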

porridgeraisin 6 days ago | parent | next [-]

You should PRAGMA optimize before tx end, not at tx start.

Except for long lived connections where you do it periodically.

https://www.sqlite.org/lang_analyze.html#periodically_run_pr...
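In code that means running it on the way out, not the way in; a minimal sketch with the stdlib sqlite3 module (the helper name is made up):

```python
import sqlite3

def close_optimized(conn):
    # Run PRAGMA optimize just before closing, so the analysis can use
    # whatever the query planner learned over the connection's lifetime.
    conn.execute("PRAGMA optimize")
    conn.close()
```

Long-lived connections would instead issue `PRAGMA optimize` periodically, as the linked docs describe.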

masklinn 6 days ago | parent [-]

Also foreign_keys has to be set per connection but journal_mode is sticky (it changes the database itself).

porridgeraisin 6 days ago | parent [-]

Yes, if journal_mode were not sticky, a new process opening the db would not know to look for the WAL and SHM files and read the latest unflushed data from there. On the other hand, foreign key enforcement has nothing to do with the file itself; it's a per-connection thing.

In any case, there is no harm in setting sticky pragmas every connection.
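A quick way to see the difference (a sketch using the stdlib sqlite3 module and a throwaway file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sticky.db")

# journal_mode=WAL is recorded in the database file itself...
first = sqlite3.connect(path)
first.execute("PRAGMA journal_mode=WAL")
first.execute("PRAGMA foreign_keys=ON")
first.close()

# ...so a fresh connection still sees WAL, while foreign_keys has
# fallen back to its default of OFF and must be set again.
second = sqlite3.connect(path)
journal_mode = second.execute("PRAGMA journal_mode").fetchone()[0]  # 'wal'
foreign_keys = second.execute("PRAGMA foreign_keys").fetchone()[0]  # 0
second.close()
```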

mikeocool 6 days ago | parent | prev | next [-]

Using strict tables is also a good thing to do, if you value your sanity.
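For example (STRICT tables need SQLite 3.37+; a sketch via Python's sqlite3 module with a made-up schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rejected = None
if sqlite3.sqlite_version_info >= (3, 37, 0):
    # In a STRICT table, a value that doesn't match the declared column
    # type is rejected instead of being silently stored with another type.
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT) STRICT")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")  # fine
    try:
        conn.execute("INSERT INTO users VALUES ('not-a-number', 'bob')")
        rejected = False
    except sqlite3.IntegrityError:
        rejected = True
conn.close()
```

Without STRICT, that second insert would succeed and store the string in the INTEGER column.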

adzm 6 days ago | parent | prev [-]

Really, no mmap?

metrix 6 days ago | parent [-]

I'm curious what your suggested mmap PRAGMA would be.

adzm 5 days ago | parent [-]

PRAGMA mmap_size=268435456;

for example? I'm surprised by the downvotes. Using mmap significantly reduced my average read query time; reads now take about 70% as long!

leetrout 6 days ago | parent | prev | next [-]

Explanation of sqlite performance PRAGMAs

https://kerkour.com/sqlite-for-servers

e2le 6 days ago | parent | prev [-]

Although not what you asked for, the SQLite authors maintain a list of recommended compilation options that should be used where applicable.

https://sqlite.org/compile.html#recommended_compile_time_opt...

mkoubaa 6 days ago | parent | prev [-]

Seems like it's asking to be forked

kbolino 6 days ago | parent | next [-]

SQLite is fairly fork-resistant due to much of its test suite being proprietary: https://www.sqlite.org/testing.html

justin66 6 days ago | parent | prev | next [-]

It has been forked at least once:

https://docs.turso.tech/libsql

fulafel 5 days ago | parent [-]

How are the defaults there?

justin66 5 days ago | parent [-]

The default is, don't use it.

pstuart 6 days ago | parent | prev [-]

In a way, the real fork is DuckDB: it has SQLite compatibility and so much more.

The SQLite team also has two branches that address concurrency and may someday merge to trunk, but the team is by nature quite conservative and it may never happen unless they feel the work passes muster.

https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...

https://sqlite.org/hctree/doc/hctree/doc/hctree/index.html

As to the problem that prompted the article, there's another way of addressing it that is kind of a kludge but guaranteed to work in scenarios like theirs: have each thread in the parallel scan write to its own temporary database, then bulk import them once the scan is done.

It's easy to get hung up on having "a database" but sharding to different files by use is trivial to do.
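A sketch of that merge step in Python (the function name and `results` schema are made up; assumes every shard was written with the same table definition as the main database):

```python
import sqlite3

def merge_shards(main_path, shard_paths, table="results"):
    """Bulk-import each per-thread shard database into the main database."""
    conn = sqlite3.connect(main_path)
    for i, shard in enumerate(shard_paths):
        # ATTACH lets one connection read the shard and write the main
        # db in a single INSERT ... SELECT, no row-by-row copying needed.
        conn.execute(f"ATTACH DATABASE ? AS shard{i}", (shard,))
        conn.execute(f"INSERT INTO {table} SELECT * FROM shard{i}.{table}")
        conn.commit()
        conn.execute(f"DETACH DATABASE shard{i}")
    conn.close()
```

Since each worker only ever touches its own file during the scan, there is zero lock contention until the final, single-writer merge.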

Another thing to bear in mind with a lot of SQLite use cases is that the data is effectively read only save for occasional updates. Read only databases are a lot easier to deal with regarding locking.

jitl 6 days ago | parent | next [-]

DuckDB is similar as an in-process SQL database, but the lack of btree-style ordered indexes makes it a poor performer at key lookups and order-by / range scans once your table grows beyond trivial size.

It’s the classic OLAP (DuckDB) vs OLTP (SQLite) trade off between the two. DuckDB is very good at many things but most applications that need a traditional SQL DB will probably not perform well if you swap it over to DuckDB.

geysersam 6 days ago | parent | next [-]

DuckDB has optional adaptive radix tree (ART) indexing (https://duckdb.org/docs/stable/sql/indexes.html)

jitl 6 days ago | parent [-]

Oops, I stand corrected!

What I remember of our evaluation of DuckDB in 2024 is that we concluded (1) the major limitations were lack of range-scan and index-lookup performance (maybe w/ joins? or update where?), and (2) the DuckDB Node.js module segfaulted too much. Perhaps the engineers somehow missed the ART index; it could also have been the restriction that data fit in memory to create an index on it (our test dataset was about 50 GB).

Kinrany 6 days ago | parent | prev [-]

That's surprising, surely OLAP use cases also need key lookups?

Kinrany 6 days ago | parent | prev | next [-]

> Read only databases are a lot easier to deal with regarding locking.

"A lot easier" sounds like an understatement. What's there to lock when the data is read only?

pstuart 3 days ago | parent [-]

> What's there to lock when the data is read only?

That's my point (it was an intentional understatement). Data that is rarely updated can be treated as read only and updates are done offline and swapped in as needed. This is very much a cope for SQLite's limited concurrency but I think it's worthy of consideration for when the situation merits it.
