sourcegrift 8 hours ago

We have everything optimized, and yet somehow DB queries need to be "interpreted" at runtime. There's no reason for DB queries to not be precompiled.

jpfr 3 hours ago | parent | next [-]

The "byte-code" coming from the query planner typically only has a handful of steps in a linear sequence. Joins, filters, and such. But the individual steps can be very costly.

So there is not much to gain from JITing the query plan execution only.

JITing begins to make more sense when the individual query plan steps (join, filter, ...) can themselves be specialized, recompiled, or merged using knowledge of the surrounding query plan.
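A toy Python sketch of that point (all names and the plan format are invented for illustration, not taken from any real engine): a generic interpreter dispatches through each plan step per batch, while a "specialized" version merges the steps of one concrete plan into a single fused loop with no per-row dispatch.

```python
# Generic interpreter: the plan is a short linear list of steps, each an
# indirect call. This is the "byte-code" style execution described above.
def run_interpreted(plan, rows):
    out = rows
    for op, fn in plan:                 # a handful of steps: filters, projections, ...
        if op == "filter":
            out = [r for r in out if fn(r)]
        elif op == "project":
            out = [fn(r) for r in out]
    return out

# "JIT" in the sense described above: specialize/merge the steps of one
# concrete plan into a single fused loop. (Assumes filters precede
# projections, which holds for the sample plan below.)
def specialize(plan):
    filters = [fn for op, fn in plan if op == "filter"]
    projects = [fn for op, fn in plan if op == "project"]
    def fused(rows):
        out = []
        for r in rows:
            if all(f(r) for f in filters):
                for p in projects:
                    r = p(r)
                out.append(r)
        return out
    return fused

plan = [("filter", lambda r: r["price"] > 10),
        ("project", lambda r: {"name": r["name"]})]
rows = [{"name": "a", "price": 5}, {"name": "b", "price": 20}]
assert run_interpreted(plan, rows) == specialize(plan)(rows) == [{"name": "b"}]
```

The win is not in the dispatch loop itself (only a handful of steps) but in the fused inner loop, which is what real codegen can then optimize further.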

catlifeonmars 7 hours ago | parent | prev | next [-]

This is a neat idea. I want to take it further and precompile the entire DBMS binary for a specific schema.

menaerus 28 minutes ago | parent | next [-]

Someone is already working on it: https://arxiv.org/pdf/2603.02081

WJW 6 hours ago | parent | prev [-]

How will you handle ALTER TABLE queries without downtime?

catlifeonmars 5 hours ago | parent [-]

That would definitely present a bit of a challenge, but:

- not all databases need migrations (or migrations without downtime)

- alternatively, ship the migrations as part of the binary

Ad-hoc modifications would still be more difficult, but tbh that’s not necessarily a bug

array_key_first an hour ago | parent | prev | next [-]

DB queries do get precompiled and cached if you use prepared statements. This is why you should always use prepared statements when you can.
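As a minimal illustration of the pattern, here is a parameterized query in Python's stdlib `sqlite3` module (a stand-in; the thread is about Postgres, where a driver would issue a server-side PREPARE instead): the SQL text is compiled once and the compiled statement is reused across executions via the module's internal statement cache.

```python
import sqlite3

# sqlite3 keeps an internal cache of compiled statements, sized by the
# cached_statements argument, keyed by the SQL text.
conn = sqlite3.connect(":memory:", cached_statements=128)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("bob",)])

query = "SELECT name FROM users WHERE id = ?"   # one SQL text, many executions
names = [conn.execute(query, (uid,)).fetchone()[0] for uid in (1, 2)]
# names == ["ada", "bob"]; only the parameter changes between executions
```

Using placeholders rather than interpolating values into the SQL string is what makes the cached/compiled form reusable (and avoids injection as a bonus).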

kbolino 24 minutes ago | parent [-]

It is not always necessary to explicitly use prepared statements, though. For example, the pgx library for Go [1] and the psycopg3 library for Python [2] will automatically manage prepared statements for you.

[1]: https://pkg.go.dev/github.com/jackc/pgx/v5#hdr-Prepared_Stat...

[2]: https://www.psycopg.org/psycopg3/docs/advanced/prepare.html

Asm2D 6 hours ago | parent | prev | next [-]

Many SQL engines have JIT compilers.

The problems related to PostgreSQL are pretty much all described here. It's very difficult to do low-latency queries if you cannot cache the compiled code and reuse it over and over again. And once your JIT is slow, you need logic to decide whether to interpret or compile.

I think it would be best to start interpreting the query while kicking off compilation in another thread; once compilation finishes and the interpreter is still running, stop the interpreter and run the JIT-compiled code. This would give you the best latency, because there would be no waiting for the JIT compiler.
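That suggestion can be sketched in a few lines of Python (a toy model: a callable stands in for the JIT compiler, and execution proceeds batch by batch so there is a natural point to switch tiers):

```python
import threading

def run_query(batches, interpret_batch, compile_query):
    """Interpret batch by batch; switch to compiled code once it's ready."""
    compiled = [None]                   # cell written by the background thread
    done = threading.Event()

    def background_compile():
        compiled[0] = compile_query()   # expensive JIT work off the hot path
        done.set()

    threading.Thread(target=background_compile).start()

    results = []
    for batch in batches:
        if done.is_set():               # compiled code ready: switch tiers
            results.extend(compiled[0](batch))
        else:
            results.extend(interpret_batch(batch))
    return results

doubled = run_query(
    batches=[[1, 2], [3, 4]],
    interpret_batch=lambda b: [x * 2 for x in b],
    compile_query=lambda: (lambda b: [x * 2 for x in b]),
)
assert doubled == [2, 4, 6, 8]   # same answer whichever tier ran each batch
```

In a real engine the compiled artifact would also be cached for subsequent executions of the same statement, so the compilation cost amortizes.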

aengelke 5 hours ago | parent | next [-]

> It's very difficult to do low-latency queries if you cannot cache the compiled code

This is not too difficult, it just requires a different execution style. Salesforce's Hyper for example very heavily relies on JIT compilation, as does Umbra [1], which some people regard as one of the fastest databases right now. Umbra doesn't cache any IR or compiled code and still has an extremely low start-up latency; an interpreter exists but is practically never used.

Postgres is very robust and very powerful, but simply not designed for fast execution of queries.

Disclosure: I work in the group that develops Umbra.

[1]: https://umbra-db.com/

chrisaycock 5 hours ago | parent | prev [-]

> I think it would be the best to start interpreting the query and start compilation in another thread

This technique is known as a "tiered JIT". It's how production virtual machines operate for high-level languages like JavaScript.

There can be many tiers, like an interpreter, baseline compiler, optimizing compiler, etc. The runtime switches into the faster tier once it becomes ready.

More info for the interested:

https://ieeexplore.ieee.org/document/10444855

levkk 5 hours ago | parent | prev | next [-]

See prepared statements.

SigmundA 7 hours ago | parent | prev [-]

PostgreSQL uses a process-per-connection model, and it has no way to serialize a query plan into a form that can be shared between processes, so the time it takes to build the plan, including JIT compilation, is very important.

Most other DBs cache query plans, including JITted code, so repeated executions of the same statement are basically precompiled from one request to the next.

zaphirplane 7 hours ago | parent | next [-]

What do you mean? Because the obvious thing is a shared cache, and if there is one thing the writers of a DB know, it is locking.

SigmundA 6 hours ago | parent [-]

Sharing executable code between processes is not as easy as sharing data. AFAIK, unless something has changed recently, PG shares nothing about plans between processes and can't even share a cached plan between sessions/connections.

_flux 5 hours ago | parent | next [-]

Write the binary to a file, call it `libquery-id1234.so`, and link it into whichever processes need it?

vladich an hour ago | parent | next [-]

Won't work well if the server executes 20k+ queries per second; the filesystem will be a bottleneck, among other things.

SigmundA 2 hours ago | parent | prev [-]

Might want to take a look at some research like this [1] that goes over the issues:

"This obvious drawback of the current software architecture motivates our work: sharing JIT code caches across applications. During the exploration of this idea, we have encountered several challenges. First of all, most JIT compilers leverage both runtime context and profile information to generate optimized code. The compiled code may be embedded with runtime-specific pointers, simplified through unique class-hierarchy analysis, or inlined recursively. Each of these "improvements" can decrease the shareability of JIT compiled code."

Anything's doable here with enough dev time. It would be nice if PG could just serialize the query plan itself, maybe as an .so along with non-process-specific executable code, which then has to be dynamically linked again in other processes.

1. https://dl.acm.org/doi/10.1145/3276494

llm_nerd 6 hours ago | parent | prev [-]

Executable code is literally just data that you mark as executable. The process already did the JIT compilation, and the idea that it can't then share the result between processes is incomprehensible.

I was actually confused by this submission, as it puts so much emphasis on initial compilation time, when every DB (apparently except for pgsql) caches that result and shares/reuses it until invalidation. Invalidation can occur for a wide variety of reasons (data composition changing, age, etc.), but still, the idea of redoing it on every query, when most DBs see the same queries endlessly, is insane.

vladich 2 hours ago | parent | next [-]

The emphasis on compilation time there is because the JIT provider that comes with Postgres (LLVM-based) is broken in that particular area. But you're right, JITed code can be cached if some conditions are met (it's position-independent, for one). Not all JIT providers do that, but many do. Caching is on the table, but if your JIT compilation takes microseconds, caching could be more of a burden in many cases. Still useful for some cases, though.

SigmundA 2 hours ago | parent | prev [-]

No: a lot of JITted code has pointers to addresses specific to that process, which make no sense in another process.

To make code shareable between processes takes effort and carries a performance tradeoff, since the code is no longer specialized to the process.

If the query plan were at least serializable (it is more like an AST), then at least that part could be reused, and each process could cache JITted code in memory that the plan references by some key.

DBs like MSSQL avoid the problem because they run a single OS process with multiple threads instead. This is also why they can handle more connections easily, since each connection is not a whole process.
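The "serializable plan + per-process code cache keyed by the plan" idea might look like this toy Python sketch (the plan format, the hashing choice, and the codegen stand-in are all invented for illustration):

```python
import hashlib
import json

# Per-process cache: hash of the serialized plan -> "JITted" callable.
# The serialized plan itself (JSON here, standing in for an AST) is the
# part that could be shared between processes.
_jit_cache = {}

def plan_key(plan):
    blob = json.dumps(plan, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def get_compiled(plan):
    key = plan_key(plan)
    fn = _jit_cache.get(key)
    if fn is None:
        # Stand-in for codegen: build a predicate from the serialized plan.
        col, op, val = plan["filter"]
        assert op == ">", "toy sketch only handles one operator"
        fn = _jit_cache[key] = lambda row: row[col] > val
    return fn

plan = {"scan": "users", "filter": ["age", ">", 30]}
f = get_compiled(plan)
assert f({"age": 40}) and not f({"age": 20})
assert get_compiled(plan) is f        # cache hit via the plan key
```

Each process pays the codegen cost at most once per distinct plan, while the serialized plan is the cheap, shareable artifact.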

hans_castorp 6 hours ago | parent | prev [-]

> and it has no way to serialize a query plan to some form that can be shared between processes

https://www.postgresql.org/docs/current/parallel-query.html

"PostgreSQL can devise query plans that can leverage multiple CPUs in order to answer queries faster."

SigmundA 6 hours ago | parent [-]

Nothing to do with plan caching; that's just talking about parallel execution of plan operations. Is that thread-based or process-based in PG?

If process-based, then they can already send small parts of a plan across processes.

hans_castorp 6 hours ago | parent [-]

Ah, didn't see the caching part.

Plans for prepared statements are cached though.

SigmundA 2 hours ago | parent | next [-]

Yes, if the client manually prepares the statement, it will be cached for just that connection, because in PG a connection is a process; it won't survive from one connection to the next.

Other databases like MSSQL have prepared statements, but they are rarely used nowadays, since plan caching based on query text was introduced decades ago.
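The difference can be sketched in Python (a toy model; the "plan" is just a dict): a text-keyed cache is shared by every connection that submits the same SQL, whereas a per-connection prepared statement dies with its connection.

```python
# Toy sketch of plan caching keyed by query text: any connection that
# submits the same SQL text reuses the cached plan, unlike a prepared
# statement, which lives and dies with one connection.
plan_cache = {}   # query text -> plan, shared across all connections

def get_plan(sql):
    plan = plan_cache.get(sql)
    if plan is None:
        # Stand-in for the (expensive) planner.
        plan = plan_cache[sql] = {"sql": sql, "steps": ("scan", "filter")}
    return plan

p1 = get_plan("SELECT * FROM t WHERE id = ?")   # "connection 1": plans the query
p2 = get_plan("SELECT * FROM t WHERE id = ?")   # "connection 2": cache hit
assert p1 is p2
```

A real engine also has to invalidate such a cache on schema changes or stale statistics, which is where most of the complexity lives.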

AlisdairO 3 hours ago | parent | prev [-]

Only on a per-connection basis