| ▲ | Many Small Queries Are Efficient in SQLite(sqlite.org) |
| 117 points by tosh 7 hours ago | 81 comments |
| |
|
| ▲ | daitangio 5 hours ago | parent | next [-] |
| I am using SQLite on paperless-ngx (an app to manage PDFs [4]).
It is quite difficult to beat SQLite unless you have a very high parallelism factor in writes. SQLite is an embedded database: there is no socket to open; you access it directly via the file system. If you do not plan on big data with a high number of writers, you will have a hard time beating SQLite on modern hardware for average use cases. I have written a super simple search engine [1] using Python asyncio, and SQLite is not the bottleneck so far. If you are hitting SQLite's limits, I have happy news: a PostgreSQL upgrade will be enough for a lot of use cases [2]: you can use it to play with a schemaless Mongo-like database, a simple queue system [3], or a search engine with stemming. After a while you can decide whether you need a specialized component (e.g. Kafka, Elasticsearch) for one of your services. [1]: https://github.com/daitangio/find [2]: https://gioorgi.com/2025/postgres-all/ [3]: https://github.com/daitangio/pque [4]: https://docs.paperless-ngx.com |
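(Not pque's actual code [3]; just a minimal sketch of the canonical Postgres queue-pop pattern the comment alludes to, with a made-up jobs(id, payload) table:)

    import psycopg2  # a common Postgres driver; any client works

    con = psycopg2.connect("dbname=app")  # hypothetical DSN
    with con, con.cursor() as cur:
        # Claim one job atomically; SKIP LOCKED lets concurrent workers
        # grab different rows instead of blocking on each other.
        cur.execute("""
            DELETE FROM jobs
            WHERE id = (
                SELECT id FROM jobs
                ORDER BY id
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            )
            RETURNING id, payload
        """)
        job = cur.fetchone()  # None when the queue is empty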
| |
| ▲ | CuriouslyC 2 hours ago | parent | next [-] | | The pattern I like to advocate for now is customer sharding with SQLite. Cloudflare makes this easy with D1; you can tie Durable Objects to a user as an afterthought. The nice thing about this pattern is that you can create foreign data wrappers for your customer SQLite databases and query them as if they were in Postgres. Cross-customer aggregations are slow, but individual customer analytics are quite fast, and this gives you near-infinite scalability. | |
| ▲ | storystarling 2 hours ago | parent | prev [-] | | You hit those write limits surprisingly early if you use background workers though. I had a project with very little user traffic that choked on SQLite simply because a few Celery workers were updating job statuses concurrently. It wasn't the volume of data, just the contention from the workers that forced the switch to Postgres. | | |
| ▲ | liuliu 2 hours ago | parent | next [-] | | Are you sure it choked on writes and not on reads and writes? SQLite's default setup is inefficient in many ways (as are its default compilation options), and that often causes issues. (I am just asking: are you sure WAL is on?) | |
| ▲ | conradkay an hour ago | parent [-] | | I'd imagine that's it. With WAL you can probably hit >1000 writes a second |
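(For anyone unsure whether WAL is on: it's opt-in per database file. A minimal sketch of the usual pragmas via Python's stdlib sqlite3; the file name and values are illustrative, not the poster's setup:)

    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical file name
    # WAL lets readers proceed concurrently with a single writer.
    con.execute("PRAGMA journal_mode=WAL")
    # Wait up to 5s on a locked database instead of failing with SQLITE_BUSY.
    con.execute("PRAGMA busy_timeout=5000")
    # With WAL, NORMAL syncs at checkpoints only: a common speed/durability trade.
    con.execute("PRAGMA synchronous=NORMAL")
    print(con.execute("PRAGMA journal_mode").fetchone())  # -> ('wal',)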
| |
| ▲ | adityaathalye an hour ago | parent | prev [-] | | SQLite emphatically warns against concurrent writes. It is not designed for that. I'm seeing these numbers on my current scratch benchmark:

- Events append to a 10M+ record table (~4+ GiB database).
- Reads are fetched from a separate computed table, which is trigger-updated from the append-only table.
- WAL mode ON, auto-vacuum ON:

    {:dbtype "sqlite",
     :auto_vacuum "INCREMENTAL",
     :connectionTestQuery "PRAGMA journal_mode;",
     :preferredTestQuery "PRAGMA journal_mode;",
     :dataSourceProperties
     {:journal_mode "WAL",
      :limit_worker_threads 4,
      :page_size 4096,
      :busy_timeout 5000,
      :enable_load_extension true,
      :foreign_keys "ON",
      :journal_size_limit 0,
      :cache_size 15625,
      :maximumPoolSize 1,
      :synchronous "NORMAL"}}

- 1,600 sequential (in a single process) read-after-write transactions, append-only, no batching.
- With a separate writer process (sequential) and, concurrently, two reader processes, I'm seeing 400+ append transactions/second (into the append-only table, no batching), and a total of 41,000 reads per second, doing `select *` on the trigger-updated table.

My schema is (deliberately) poor --- most of it is TEXT. (edit: add clarifying text) | | |
| ▲ | TylerE an hour ago | parent [-] | | Isn’t that schema actually the opposite of poor for SQLite, since it stores everything as text internally? |
|
|
|
|
| ▲ | cyanmagenta 5 hours ago | parent | prev | next [-] |
| There is some risk that, if you design your website to use a local database (sqlite, or a traditional database over a unix socket on the same machine), then switching later to a networked database is harder. In other words, once you design a system to do 200 queries per page, you’d essentially have to redesign the whole thing to switch later. It seems like it mostly comes down to how likely it is that the site will grow large enough to need a networked database. And people probably wildly overestimate this. HackerNews, for example, runs on a single computer. |
| |
| ▲ | itopaloglu83 an hour ago | parent | next [-] | | Most of us, the majority of the time, don't need that level of optimization, because not every project is destined to grow 10x quickly. LLMs also have a tendency toward premature optimization, where they start writing very complex classes for users who only wanted to extract some information to resolve a quick problem. | |
| ▲ | andersmurphy 4 hours ago | parent | prev | next [-] | | The thing is, SQLite can scale further vertically than most network databases. In some contexts, like writes and interactive transactions, it outright scales further. [1] That's before you even get into sharding SQLite. [1] - https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio... | |
| ▲ | 63stack 5 hours ago | parent | prev | next [-] | | I don't see how anyone would design a system that executes 200 queries per page. I understand that a system which is in use for many, many years and accumulates a lot of legacy code eventually ends up there, but designing one that way? Never. That's not design, that's doing a bad job of design. | | |
| ▲ | ctxc 5 hours ago | parent | next [-] | | Sounds a bit like me, reading the comments before the article! | |
| ▲ | 9rx 3 hours ago | parent | prev | next [-] | | > I don't see how anyone would design a system that executes 200 queries per page. They call it the n+1 problem. 200 queries is the theoretically correct approach, but due to the high network latency of networked DBMSes you have to hack around it. But if the overhead is low, like when using SQLite, then you would not introduce hacks in the first place. The parent is saying that if you correctly design your application, but then move to a system that requires hacks to deal with its real-world shortcomings, you won't be prepared. Although I think that's a major overstatement. If you have correctly designed the rest of your application too, introducing the necessary hacks into a couple of isolated places is really not a big deal at all. | | |
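(A minimal sketch of both shapes, with a made-up posts(id, author) table; Python's stdlib sqlite3 stands in for any client:)

    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical database

    # The "theoretically correct" shape: one small query per item.
    # Cheap in-process; painful when each call is a network round trip.
    def author_of(post_id):
        return con.execute(
            "SELECT author FROM posts WHERE id = ?", (post_id,)
        ).fetchone()

    # The hack a networked DBMS pushes you toward: batch the lookups.
    def authors_of(post_ids):
        marks = ",".join("?" * len(post_ids))
        return con.execute(
            f"SELECT id, author FROM posts WHERE id IN ({marks})", post_ids
        ).fetchall()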
| ▲ | PaulHoule 2 hours ago | parent [-] | | I'd point to the difference between vector-based and scalar-based systems in numerics. If your web programming language is more like MATLAB or APL than PHP, then maybe it can naturally generate the code to do it all with sets. As it is, we are usually writing set-based implementations in scalar-based languages. Part of the "object-relational mapping" problem has always been that SQL is superior to conventional programming languages in many ways. | |
| ▲ | 9rx 2 hours ago | parent [-] | | Of course, the "object-relational mapping" problem is simply one of latency. In a theoretical world where latency isn't a thing, there is no such thing as the "object-relational mapping" problem. In the real world, where you have something like SQLite, it isn't a practical problem either. SQL was originally designed to run on the same machine as the user, so it was never envisioned as a problem. It wasn't until Oracle decided to slap networking protocols on top of an SQL engine that it became one. They should have exposed a language more conducive to the limitations of the network, performing the mapping in the same place as the database. But such is the life of commercial computing. Oracle has that now; it was just several decades too late, and by that time everyone else had copied their bad ideas. |
|
| |
| ▲ | anamexis 5 hours ago | parent | prev [-] | | Did you read the OP? |
| |
| ▲ | Kinrany 5 hours ago | parent | prev | next [-] | | There's also the alternative of having a cluster with one local DB in each node | | |
| ▲ | direwolf20 4 hours ago | parent [-] | | Then you have massive synchronization problems if your data isn't almost read-only. | |
| ▲ | Gabrys1 3 hours ago | parent | next [-] | | If your data isn't mostly read-only, then you're going to have an issue with SQLite: it doesn't nicely support parallel writers. | |
| ▲ | CuriouslyC 2 hours ago | parent | prev [-] | | Not if you're sharding correctly. |
|
| |
| ▲ | luckylion 5 hours ago | parent | prev [-] | | The same is true for regular databases though, isn't it? Network adds latency and while it might be fine to run 500 queries with the database being on the same machine, adding 1-5ms per query makes it feel not okay. | | |
| ▲ | magicalhippo 2 hours ago | parent | next [-] | | > adding 1-5ms per query makes it feel not okay Or going from ~1ms over a local wired network to ~10ms over a wireless network. Had a customer performance complaint that boiled down to that: something that should take minutes took hours. Could not reproduce it internally. After a lot of back and forth I asked if the user's machine was wired. Nope, wireless laptop. Got them to plug in like their colleagues and it was fast again. | |
| ▲ | cyanmagenta 3 hours ago | parent | prev [-] | | Yes, that is why I said “local database (sqlite, or a traditional database over a unix socket on the same machine).” This isn’t an sqlite-specific point, although sqlite often runs faster on a single machine because local sockets have some overhead. |
|
|
|
| ▲ | rustybolt 5 hours ago | parent | prev | next [-] |
| This feels like a very elaborate way of saying that doing O(N) work is not a problem, but doing O(N) network calls is. |
| |
| ▲ | password4321 5 hours ago | parent | next [-] | | As another example, a SQL Server optimization per https://learn.microsoft.com/en-us/sql/t-sql/statements/set-n...: > For stored procedures that contain several statements that don't return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced. | |
| ▲ | Neywiny 5 hours ago | parent | prev | next [-] | | Rather I think their point is that since O(N) is really X * N, it's not the N that gets you, it's the X. | | |
| ▲ | direwolf20 4 hours ago | parent | next [-] | | Right — the network database is also doing O(N) work to return O(N) results from one query but the multiplier is much lower because it doesn't include a network RTT. | |
| ▲ | ahartmetz 4 hours ago | parent | prev [-] | | ...and the difference between "a fancy hash table" (in-process SQLite) and doing a network roundtrip is a few orders of magnitude. |
| |
| ▲ | jstummbillig 5 hours ago | parent | prev | next [-] | | If it is so obvious, why is SQLite not the de facto standard? | |
| ▲ | chuckadams 5 hours ago | parent | next [-] | | No network, no write concurrency, no types to speak of... Where those things aren't needed, sqlite is the de facto standard. It's everywhere. | | |
| ▲ | mickeyp 5 hours ago | parent | next [-] | | Perfect summary. I'll add: insane defaults that'll catch you unaware if you're not careful! Like foreign keys being opt-in; sure, it'll create 'em, but it won't enforce them by default! | | |
| ▲ | ogogmad 4 hours ago | parent [-] | | Is it possible to fix some of these limitations by building DBMSes on top of SQLite, which might fix the sloppiness around types and foreign keys? | | |
| ▲ | Polizeiposaune 3 hours ago | parent [-] | | Using the API with discipline goes a long way. Always send "pragma foreign_keys=on" first thing after opening the db. Some of the type sloppiness can be worked around by declaring tables to be STRICT. You can also add CHECK constraints that a column value is consistent with the underlying representation of the type -- for instance, if you're storing IP addresses in a column of type BLOB, you can add a CHECK that the blob is either 4 or 16 bytes. |
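(A minimal sketch of all three tips via Python's stdlib sqlite3; the hosts table is a made-up example:)

    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical database
    # Off by default for backwards compatibility; set it on every connection.
    con.execute("PRAGMA foreign_keys=ON")

    con.execute("""
        -- STRICT makes SQLite reject values that don't match the declared
        -- column types, instead of coercing them or storing them as-is.
        CREATE TABLE IF NOT EXISTS hosts (
            id   INTEGER PRIMARY KEY,
            addr BLOB NOT NULL
                 -- IPv4 addresses are 4 bytes, IPv6 addresses are 16
                 CHECK (length(addr) IN (4, 16))
        ) STRICT
    """)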
|
| |
| ▲ | BenjiWiebe 3 hours ago | parent | prev [-] | | SQLite did add 'STRICT' tables for type enforcement. Still doesn't have a huge variety of types though. | | |
| ▲ | mikeocool 2 hours ago | parent [-] | | The fact that they didn't make STRICT the default is really a shame. I understand maintaining backwards compatibility, but the non-strict behavior is just so insane that I have a hard time imagining it doesn't bite most developers who use SQLite at some point. |
|
| |
| ▲ | jerf 3 hours ago | parent | prev | next [-] | | Isn't SQLite a de facto standard? Seems like it to me. If I want an embedded SQL engine, it is the "nobody got fired for selecting" choice. A competitor needs to offer something very compelling to unseat it. | | |
| ▲ | jstummbillig 3 hours ago | parent [-] | | I mean as in: Most web stacks do not default to sqlite over MySQL or postgres. Why not? Best default for most users, apparently. | | |
| ▲ | conradkay an hour ago | parent [-] | | I think in the past it was more obvious. Rails switched to SQLite as the default somewhat recently |
|
| |
| ▲ | skrebbel 5 hours ago | parent | prev | next [-] | | I haven't investigated this so I might be behind the times, but last I checked remotely managing an SQLite database, or having some sort of dashboarding tool run management reporting queries and the likes, or make a Retool app for it, was very messy. The benefit of not being networked becomes a downside. Maybe this has been solved though? Anybody here running a serious backend-heavy app with SQLite in production and can share? How do you remotely edit data, do analytics queries etc on production data? | |
| ▲ | Cthulhu_ 5 hours ago | parent | prev | next [-] | | It is for use cases like local application storage, but it doesn't do well in (or isn't designed for) concurrent use cases like networked services. SQLite is not like the other databases. |
| ▲ | dahart 3 hours ago | parent | prev [-] | | Partly for the same reason it’s fast for small sites. In their words: “SQLite is not client/server” |
| |
| ▲ | zffr 5 hours ago | parent | prev [-] | | IMO the page is concise and well written. I wouldn't call it very elaborate. Maybe the page could have been shorter, but not by much. | |
| ▲ | sodapopcan 4 hours ago | parent [-] | | It's in line with what I perceive as the more informal tone of the SQLite documentation in general. It's slightly wordier but fun to read, and it feels like the people who wrote it had a good time doing so. |
|
|
|
| ▲ | charcircuit 5 hours ago | parent | prev | next [-] |
| >For a 50-entry timeline, the latency is usually less than 25 milliseconds. Profiling shows that few of those milliseconds were spent inside the database engine. Were they instead spent blocking on the disk for all of the extra queries that were made? Or is it trying to say that concatenating a handful of strings takes 22 ms? Considering how much games can render with a 16 ms budget, I don't see where that time is going rendering HTML. |
| |
|
| ▲ | nchmy 5 hours ago | parent | prev | next [-] |
| The article doesn't make it at all clear what it is comparing against - MySQL running remotely or on the same server? I'm sure SQLite still has less "latency" than MySQL on localhost or a unix socket, but surely not meaningfully so. So, is SQLite really just that much faster at any SELECT query, or are they just comparing apples and oranges? Or am I mistaken in thinking that communicating with MySQL on localhost is comparable in latency to SQLite? |
| |
| ▲ | Cthulhu_ 5 hours ago | parent | next [-] | | Even if you're on the same local server, you're still going over a socket to a different service, whereas with sqlite you remain in the same application / address space / insert words I don't fully understand here. So while client/server SQL servers are faster locally than on a remote server, they can (theoretically) never be as fast as SQLite in the same process. Of course, SQLite and client/server database servers have different use cases, so it is kind of an apples and oranges comparison. | |
| ▲ | Neywiny 5 hours ago | parent | prev | next [-] | | I think they're trying not to shame other services, but yes, the comparison is vs. networked databases, whether that's local on loopback or not. For a small query, which is what they're talking about, it's not inconceivable that formatting into a network packet, passing through the userspace networking functions, into and through the kernel, all back out the other side, then again for the response, is indeed meaningfully slower than a simple function call within the program. | |
| ▲ | wild_egg 5 hours ago | parent | prev | next [-] | | Connecting to localhost still involves the network stack and a fair bit of overhead. SQLite is embedded in your program's address space. You call its functions directly like any other function. Depending on your language, there is probably some FFI overhead, but it's a lot less than an external localhost connection. | |
| ▲ | zffr 5 hours ago | parent | prev [-] | | I think the most common set up is to have your application server and DB on different hosts. That way you can scale each independently. |
|
|
| ▲ | maxpert 3 hours ago | parent | prev | next [-] |
| A lot of skepticism in the comments. Let me remind them that doing N loops over a local disk with in-memory cached pages is absolutely different from doing round trips over a typical VPS network. Having said that, there is no silver bullet for dumb code! So let's not conflate dumb code with the argument the author is trying to make. |
| |
|
| ▲ | yomismoaqui 4 hours ago | parent | prev | next [-] |
| Also SQLite is 35% faster than the filesystem: https://sqlite.org/fasterthanfs.html |
|
| ▲ | Sesse__ 3 hours ago | parent | prev | next [-] |
| Well, it depends. I vividly remember removing 200 small SQLite queries from a routing algorithm in a mobile app (by moving the metadata into a small in-memory data store instead) and roughly doubling its speed. :-) It was a pretty easy call after seeing sqlite3_step being the top CPU user by a large margin. |
| |
| ▲ | ilumanty 3 hours ago | parent [-] | | Yeah, en/decoding results and parameters from and to JS types is also quite the timewaster | | |
|
|
| ▲ | hnlmorg 2 hours ago | parent | prev | next [-] |
| This might be true for SELECTs, but I found that individual INSERTs are massively slower than INSERTs grouped into transactions. Which should be obvious. But I could see some people reading this blog post and jumping to the wrong conclusion. |
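(A minimal sketch of the difference, with a made-up kv table; the slow loop commits per row, the fast one commits once for the whole batch:)

    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical database
    con.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER, v TEXT)")
    rows = [(i, f"value-{i}") for i in range(10_000)]

    # Slow: every INSERT is its own transaction, so every row pays a sync.
    for row in rows:
        con.execute("INSERT INTO kv VALUES (?, ?)", row)
        con.commit()

    # Fast: one transaction, one sync for the whole batch.
    with con:  # commits on success, rolls back on exception
        con.executemany("INSERT INTO kv VALUES (?, ?)", rows)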
| |
| ▲ | PaulHoule 2 hours ago | parent [-] | | It's not the cost of protecting one transaction from another transaction so much as the cost of flushing a transaction to storage to survive a crash. In the bad old days you had to wait for a lever to move and for the disk to rotate at least once! | | |
| ▲ | hnlmorg 2 hours ago | parent [-] | | > It's not the cost of protecting one transaction from another transaction I know it’s not and never suggested it was. I was making the point that writes contain more overhead than reads (which should be obvious) so people should bear that in mind when reading this blog post. Edit: is it “bear” or “bare”? I’m never sure with that phrase haha |
|
|
|
| ▲ | philipodonnell 5 hours ago | parent | prev | next [-] |
| I’ve been experimenting with LiveStoreJS, which uses a custom SQLite WASM binary for event sync, so for simplicity I’ve also used it for regular application data in the browser and found no issues (yet). It surprised me that a full database engine in memory could perform well versus native JS objects, but perhaps at scale is when it starts to shine. Just be wary of size limits beyond 16-20 MB. |
|
| ▲ | NorwegianDude 4 hours ago | parent | prev | next [-] |
| I don't have time to test myself right now, but it would be interesting to see a proper benchmark. We all know it's not suitable for high write concurrency, but SQLite should be a good amount faster for reads because of the lack of overhead. But how much faster is it really? |
| |
| ▲ | adzm 3 hours ago | parent | next [-] | | As an in-memory database, I got around 40,000,000 reads per second. Using WAL and a file rather than in-memory, around 900,000 reads per second. This is single-threaded, for a simple integer-ID-to-string-value query, a few years old at this point, and with only minor config optimizations, e.g. not even using memory-mapped I/O, on a ~3 GB database with a million or so rows on a Windows machine. The performance really is amazing. | |
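(A rough harness for anyone who wants their own numbers. The parent's figures presumably came from a compiled client; Python's interpreter overhead will dominate here, so expect much lower throughput:)

    import sqlite3, time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE kv (id INTEGER PRIMARY KEY, value TEXT)")
    con.executemany("INSERT INTO kv VALUES (?, ?)",
                    ((i, f"v{i}") for i in range(1_000_000)))

    n = 100_000
    start = time.perf_counter()
    for i in range(n):
        con.execute("SELECT value FROM kv WHERE id = ?", (i,)).fetchone()
    print(f"{n / (time.perf_counter() - start):,.0f} point reads/sec")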
| ▲ | solumunus 3 hours ago | parent | prev [-] | | Orders of magnitude I would imagine. Very significantly faster. |
|
|
| ▲ | meken 4 hours ago | parent | prev | next [-] |
| Side note - is this post accessible from the site somewhere? I don’t see where you’d find it (along with the C is Best post [1] shared here recently). [1] https://sqlite.org/whyc.html |
| |
|
| ▲ | delbronski 2 hours ago | parent | prev | next [-] |
| How does one go about deployment and backups with a local db? Let’s say I have a web app hosted on a cloud service like App Engine or Elastic… if I redeploy my web app, how do I make sure my current local db does not get wiped? How are periodic backups handled? I can think of many hacks to do this, but is there a best practice for this kind of stuff? I’m curious how people do this. |
| |
|
| ▲ | causalscience 5 hours ago | parent | prev | next [-] |
| Make sure you click this link: https://sqlite.org/src/timeline So the SQLite developers use their own versioning system, which uses SQLite for storage. Funny. |
| |
| ▲ | kmeisthax 5 hours ago | parent [-] | | Yes. Git is the same way: it uses the Linux kernel for storage, and the Linux kernel is managed with Git. :P |
|
|
| ▲ | flipped 4 hours ago | parent | prev | next [-] |
| Has anyone tried using distributed versions of sqlite, such as rqlite? How reliable is it? |
| |
|
| ▲ | lifetimerubyist 4 hours ago | parent | prev | next [-] |
| Definitely something surprising that I discovered when building with SQLite recently. We're taught to avoid N+1 queries at almost any cost in RDBMSes, but in SQLite the N+1 can actually be the best option in most cases. I had to build some back-office tools and used Ruby on Rails with SQLite and didn't bother with doing "efficient" joins or anything. Just index the foreign keys, do N+1s everywhere - you'll be fine. The app is incredibly easy to maintain and add features to because of this, and the db is super easy to back up - literally just scp the SQLite db file somewhere else. Couldn't be happier with this setup. |
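(What that looks like stripped to its bones, in Python's stdlib sqlite3 for neutrality; the posts/comments schema is made up, and ActiveRecord's lazy loading does the equivalent under the hood:)

    import sqlite3

    con = sqlite3.connect("backoffice.db")  # hypothetical database
    # Index the foreign key so each per-parent lookup is a cheap B-tree probe.
    con.execute(
        "CREATE INDEX IF NOT EXISTS idx_comments_post_id ON comments(post_id)")

    for post_id, title in con.execute("SELECT id, title FROM posts"):
        # The classic N+1: one extra query per post. In-process it's just a
        # function call plus an index lookup, with no round trip to amortize.
        comments = con.execute(
            "SELECT body FROM comments WHERE post_id = ?", (post_id,)
        ).fetchall()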
| |
| ▲ | beagle3 4 hours ago | parent [-] | | scp works as long as the app is not making changes at the same time. If there's a chance someone is writing to the database during the copy, you should run "sqlite3 database.sqlite .backup" (or ".dump") first. Or, alternatively, on a new enough sqlite3, you have the bundled sqlite3_rsync, which is like rsync except it cooperates with sqlite3 updates to guarantee a good copy at the other end. | |
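(Python's stdlib exposes the same online backup machinery as the CLI's .backup command, so a nightly job can be a few lines; file names assumed:)

    import sqlite3

    # SQLite's online backup API copies the database page by page and is
    # safe to run even while other connections are writing.
    src = sqlite3.connect("database.sqlite")  # the live database
    dst = sqlite3.connect("backup.sqlite")    # the destination file
    src.backup(dst)
    dst.close()
    src.close()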
| ▲ | lifetimerubyist 2 hours ago | parent [-] | | Great tips and you’re right. We just flip into an app-side maintenance mode before we run the backup so we know there’s no writes, scp the file and then flip it back. We only do nightlies so it’s not a problem. The shell script is super simple and we’ve only needed to do nightly backups so far so we run it in a cron at midnight when no one is working. Ezpz. Literally took us an hour to implement and been chugging along without issues for nearly 2 years now without fail. If we ever need more than that I’d probably just setup litestream replication. |
|
|
|
| ▲ | pmbanugo 5 hours ago | parent | prev | next [-] |
| Quite interesting. So SQL patterns can be optimised differently in SQLite. |
|
| ▲ | hahahahhaah 5 hours ago | parent | prev [-] |
| One index scan beats 200 index lookups though, surely? I.e. sometimes one query is cheaper, and it's not about the network anymore. Also, you can run your "big" DB like Postgres on the same machine too. No law against that. |
| |
| ▲ | wenc 4 hours ago | parent | next [-] | | For analytic queries, yes, a single SQL query often beats many small ones. The query optimizer is allowed to see more opportunities to optimize and avoid unnecessary work. Most SQLite queries, however, are not analytic queries. They're more like record retrievals. So hitting a SQLite table with 200 "queries" is similar to hitting a webserver with 200 "GET" commands. In terms of ergonomics, SQLite feels more like an application file format with a SQL interface (though it is an embedded relational database). https://www.sqlite.org/appfileformat.html | |
| ▲ | dahart 3 hours ago | parent | prev | next [-] | | Depends. Throughput is probably higher, but the latency of a big scan might be larger than a small one, so many small lookups might feel more responsive if they’re each rendered independently. The example on the page doesn’t look like it can be merged into a single scan. I’m not a SQL expert but at a glance it does look like it could maybe be compressed into one or two dozen larger lookups. | |
| ▲ | Kinrany 5 hours ago | parent | prev [-] | | One query isn't cheaper than two queries that do the same amount of IO and processing and operate in the same memory space | | |
| ▲ | silon42 3 hours ago | parent [-] | | Yes, (index) scans are rarely faster for typical web apps. Unless you have toy amounts of data... or are doing batch operations, which is not typical (and can be problematic for other transactions due to locking, etc...) |
|
|