andersmurphy 10 hours ago

> SQLite on the same machine is akin to calling fwrite.

Actually 35% faster than fwrite [1].

> This is also a system constraint as it forces a one-database-per-instance design

You can scale incredibly far on a single node and have much better uptime than GitHub or Anthropic. At this rate maybe even AWS/Cloudflare.

> you need to serve traffic beyond your local region

Postgres still has a single node that can write. So most of the time you end up region sharding anyway. Sharding SQLite is straightforward.
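To illustrate the "sharding SQLite is straightforward" point, here is a minimal sketch (names and shard count are hypothetical, not from the comment): a stable hash of the shard key picks one SQLite file per shard, so each shard keeps SQLite's single-writer model locally.

```python
import sqlite3
import zlib

SHARD_COUNT = 4  # hypothetical: one SQLite file per shard


def shard_path(user_id: str) -> str:
    """Stable hash of the shard key picks the database file."""
    n = zlib.crc32(user_id.encode()) % SHARD_COUNT
    return f"shard_{n}.db"


def connect(user_id: str) -> sqlite3.Connection:
    """Open the shard that owns this user's data."""
    conn = sqlite3.connect(shard_path(user_id))
    conn.execute("PRAGMA journal_mode=WAL")  # concurrent readers per shard
    return conn
```

Region sharding works the same way, just with the shard key being a region rather than a user hash.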

> This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundreds TPS

It's actually pretty good for running a real time multiplayer app with a billion datapoints on a $5 VPS [2]. There's nothing clever going on here: all the state is on the server and the backend is fast.
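A minimal sketch of the kind of single-node setup described above (the table and pragmas here are illustrative assumptions, not the actual app's schema): WAL mode lets one writer proceed alongside many concurrent readers, which is what makes "all the state on the server" workable at high request rates.

```python
import sqlite3

conn = sqlite3.connect("app.db")
# WAL mode: readers don't block the writer and vice versa.
conn.execute("PRAGMA journal_mode=WAL")
# NORMAL syncs on checkpoint rather than every commit: a common
# durability/throughput trade-off on a single node.
conn.execute("PRAGMA synchronous=NORMAL")

conn.execute(
    "CREATE TABLE IF NOT EXISTS checkbox ("
    "  id INTEGER PRIMARY KEY,"
    "  checked INTEGER NOT NULL DEFAULT 0)"
)
# A write is a tiny local transaction, no network round trip.
conn.execute("UPDATE checkbox SET checked = 1 - checked WHERE id = ?", (42,))
conn.commit()
```

Because the database is in-process, each request's queries cost microseconds, not a round trip to a database server.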

> but you're now compelled to find "clever" strategies to sync state across nodes.

That's the neat part: you don't. Because for most things that aren't uplink limited (being a CDN, Netflix, Dropbox), a single node is all you need.

- [1] https://sqlite.org/fasterthanfs.html

- [2] https://checkboxes.andersmurphy.com

shimman 3 hours ago | parent | next [-]

May be an "out there" question, but any tech book suggestions you'd recommend that can teach an average dev how to build highly performant software with minimal systems?

I feel like the advice from people with your experience is worth way, way more than what you'd hear from big tech. Like you said yourself, big tech tends to recommend extremely complicated systems that only seem worth maintaining if you have a trillion-dollar monopoly behind them.

andersmurphy 15 minutes ago | parent [-]

Not specific books per se. Though I'd advise starting with some constraints, as that really helps you focus.

Your reading/learning material can spin out of those constraints.

So for me my recent constraints were:

1. Multiplayer/collaborative web apps built by small teams.

2. Single box.

3. I like writing lisp.

So single box pushes me towards a faster language, and something that's easy to deploy. Go would be the natural choice here, but I want a lisp, so Clojure is probably the best option (helps that I already know it). The JVM is fast enough and has a pretty good deployment story. Multiplayer web apps pushed me to explore distributed state vs streaming with centralised state. This became a whole journey which ended with Datastar [1]. Thing is, immediate mode streaming HTML needs your database queries to be fast, and that's how I ended up on SQLite (I was already a fan and had used it in production before), but the constraints of streaming HTML forced me to revisit it in anger.

Your constraints could be completely different. They could be:

1. Fast to market.

2. Minimise risk.

3. Mobile + Web

4. Try something new.

Fast to market might mean you go with something like Rails/Django. Minimise risk might mean you go with Rails because you have a load of experience with it. Mobile + web means you read up on Hotwire. Try something new might mean you push more logic into stored procedures and SQL queries so you can get the most out of Postgres and make your Rails app faster. So you read The Art of PostgreSQL [2] (great book). Or maybe you try hosting Rails on a VPS and set up/manage your own Postgres instance.

A few companies back, mine were:

1. JVM but with a more ruby/rails like development experience.

2. Mobile but not separate iOS/Android projects.

3. Avoid the pain of app store releases.

4. You can't innovate everywhere.

That meant Clojure. React Native. Minimal clients with as much driven from the backend as possible. Sticking to Postgres and Heroku because it's what we knew and worked well enough.

- [1] https://data-star.dev

- [2] https://theartofpostgresql.com

There's no right answer. Hope that's helpful.

wookmaster 8 hours ago | parent | prev | next [-]

How do you manage HA?

andersmurphy 7 hours ago | parent | next [-]

Backups: Litestream gives you streaming replication to the second.

Deployment: Caddy holds open incoming connections while your app drains the current request queue and restarts. This is all sub-second and imperceptible. You can do fancier things than this with two versions of the app running on the same box if that's your thing. In my case I can also hot patch the running app, as it's on the JVM.
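The Caddy side of this can be sketched with a small Caddyfile (the domain and port are hypothetical): `lb_try_duration` makes Caddy retry the upstream for a while instead of failing the request, so a sub-second app restart goes unnoticed by clients.

```
# Hypothetical Caddyfile sketch: while the app is restarting, Caddy
# keeps retrying the upstream for up to 10s rather than returning an
# error, which covers a sub-second restart with plenty of margin.
example.com {
    reverse_proxy localhost:8080 {
        lb_try_duration 10s
        lb_try_interval 250ms
    }
}
```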

For the server hard drive failing etc. you have a few options:

1. Spin up a new server/VPS and litestream the backup (the application automatically does this on start).

2. If your data is truly colossal have a warm backup VPS with a snapshot of the data so litestream has to stream less data.

Pretty easy to have 3 to 4 9s of availability this way (which is more than GitHub, Anthropic etc).
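Option 1 above ("litestream the backup on start") is roughly this ops sketch (bucket and paths are hypothetical): restore the latest replica from object storage if no local database exists yet, then start the app.

```
# Hypothetical startup script for a fresh VPS: pull the replica down
# only when there's no local database, then hand off to the app.
litestream restore -if-db-not-exists -if-replica-exists \
  -o /srv/app/app.db s3://my-bucket/app.db
exec /srv/app/app
```

Baking this into the app's startup path is what makes "spin up a new server" a one-step recovery.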

rienbdj 6 hours ago | parent | next [-]

My understanding is Litestream can lose data if a crash occurs before the backup replicates to object storage. Doesn't that make it an unfair comparison to Postgres on RDS, for example?

andersmurphy 6 hours ago | parent | next [-]

Last I checked, RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. Litestream by default does it every second (you can go sub-second with Litestream if you want).
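For reference, the replication interval lives in Litestream's config; a minimal sketch (bucket and path are hypothetical) lowering it below the one-second default might look like:

```
# Hypothetical litestream.yml: sync-interval controls how often WAL
# segments are shipped to the replica (default is 1s).
dbs:
  - path: /srv/app/app.db
    replicas:
      - url: s3://my-bucket/app.db
        sync-interval: 100ms
```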

sudodevnull 6 hours ago | parent | prev [-]

Your understanding is very wrong. Please read the docs, or better yet, the actual code.

locknitpicker 6 hours ago | parent | prev [-]

> Backups, litestream gives you streaming replication to the second.

You seem terribly confused. Backups don't buy you high availability. At best, they buy you disaster recovery. If your node goes down in flames, your users don't continue to get service just because you have an external HD with last week's db snapshots.

andersmurphy 5 hours ago | parent [-]

If anything backups are the key to high availability.

Streaming replication lets you spin up new nodes quickly with sub-second data loss in the event of anything happening to your server. It makes having a warm standby/failover trivial (if your dataset is large enough to warrant it).

If your backups are week-old snapshots, you have bigger problems to worry about than HA.

rovr138 7 hours ago | parent | prev [-]

No offense: you wait. Like everyone's been doing for years on the internet and still does.

- When AWS/GCP goes down, how do most handle HA?

- When a database server goes down, how do most handle HA?

- When Cloudflare goes down, how do most handle HA?

The downtime here is the server crashing, routing failing, or some other issue with the host. You wait.

One may run Pingdom or something similar to alert you.

locknitpicker 6 hours ago | parent [-]

> When AWS/GCP goes down, how do most handle HA?

This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as postgres to a small provider/self-host.

Do you actually have any concrete scenario that supports your belief?

runako 5 hours ago | parent [-]

> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP

This is...not true of many hyperscaler outages? Frequently, outages will leave individual VMs running and affect only the higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.

And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.

locknitpicker 6 hours ago | parent | prev [-]

> You can scale incredibly far on a single node

Nonsense. You can't outrun physics. The latency across the Atlantic is already ~100ms, and from the US to Asia Pacific it can be ~300ms. If you are interested in performance and need to shave off ~200ms of latency, you deploy an instance closer to your users. It makes absolutely no sense to frame the rationale around performance if your systems architecture imposes a massive networking penalty just to shave a couple of ms off round trips to a data store. Absurd.

klooney 5 hours ago | parent | next [-]

You need regional state, or you're still backhauling to the db with all the lag.

andersmurphy 6 hours ago | parent | prev [-]

That only solves read latency, not write latency. Unless you don't care about consistency.