eru 9 hours ago

> However, standard LLVM-based JIT is notoriously slow at compilation. When it takes tens to hundreds of milliseconds, it may be suitable only for very heavy, OLAP-style queries, in some cases.

I don't know anything here, but this seems like a good case for ahead-of-time compilation? Or at least caching your JIT results? I can imagine that much of the time you are getting more or less the same query again and again?

olau 8 hours ago | parent | next [-]

Yes.

Some years ago we ported some code from querying the data out and tallying it in Python (how many are in each bucket) to doing that in SQL. It didn't speed up execution. I was surprised by that, but I guess the Postgres interpreter is roughly the same speed as Python, which, when you think about it, perhaps isn't that surprising.
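For concreteness, the kind of tallying described might look like this (a toy sketch; the table and column names are made up, not the commenter's actual code):

```python
from collections import Counter

# Hypothetical rows fetched from the database: (bucket, value) pairs.
rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5)]

# Tallying client-side in Python: how many rows fall in each bucket.
counts = Counter(bucket for bucket, _ in rows)

# The equivalent pushed down into SQL would be roughly:
#   SELECT bucket, count(*) FROM measurements GROUP BY bucket;
print(counts)  # Counter({'a': 3, 'b': 1, 'c': 1})
```

Either way, the per-row work is dominated by interpretive dispatch, which is the point being made: moving it into SQL swaps one interpreter for another.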

But Python is truly general purpose, while the core query machinery in SQL is really specialized (we were not using stored procedures). So if PyPy can get a 5x speedup, it seems to me that the same kind of speedup should be possible in Postgres. I guess it needs funding and someone as smart as the PyPy people.
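The gap between interpreting and compiling a query expression can be sketched in plain Python: a tree-walking evaluator (roughly what an interpreted executor does for every row) versus turning the same expression into a closure once and reusing it (roughly what a JIT buys). The expression shape here is invented for illustration, not Postgres's actual node format:

```python
# Toy predicate: price * qty > 100, evaluated once per row.

def eval_tree(node, row):
    """Tree-walking interpreter: re-dispatches on node type for every row."""
    op = node[0]
    if op == "col":
        return row[node[1]]
    if op == "const":
        return node[1]
    if op == "mul":
        return eval_tree(node[1], row) * eval_tree(node[2], row)
    if op == "gt":
        return eval_tree(node[1], row) > eval_tree(node[2], row)
    raise ValueError(op)

def compile_tree(node):
    """'Compile' the tree into nested closures once; no dispatch per row."""
    op = node[0]
    if op == "col":
        i = node[1]
        return lambda row: row[i]
    if op == "const":
        v = node[1]
        return lambda row: v
    if op == "mul":
        lhs, rhs = compile_tree(node[1]), compile_tree(node[2])
        return lambda row: lhs(row) * rhs(row)
    if op == "gt":
        lhs, rhs = compile_tree(node[1]), compile_tree(node[2])
        return lambda row: lhs(row) > rhs(row)
    raise ValueError(op)

expr = ("gt", ("mul", ("col", 0), ("col", 1)), ("const", 100))
pred = compile_tree(expr)  # pay the compilation cost once
rows = [(10, 5), (30, 4), (2, 3)]
print([pred(r) for r in rows])            # [False, True, False]
print([eval_tree(expr, r) for r in rows]) # same answers, more dispatch work
```

The specialization step is cheap here, but in an LLVM-based JIT it is the expensive part, which is why the compile cost only pays off on queries that touch many rows.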

bob1029 7 hours ago | parent | prev [-]

At some level the application needs to participate in the performance conversation too.

https://www.postgresql.org/docs/current/sql-prepare.html
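The linked page describes session-level prepared statements, which let the application pay the parse/plan cost once per connection. A minimal sketch (the statement and table names are hypothetical):

```sql
-- Prepare once per connection; the statement is parsed and planned a single time.
PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;

-- Subsequent executions bind parameters and skip parse/plan work.
EXECUTE get_user (42);
EXECUTE get_user (7);

-- Prepared statements live only for the duration of the session.
DEALLOCATE get_user;
```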

masklinn 4 hours ago | parent | next [-]

Postgres’s PREPARE is per-connection, so it’s pretty limited, and then connection poolers enter the fray and often can’t track SQL-level prepares.

And then the issue is not dissimilar to Postgres’s planner issues.

SigmundA 6 hours ago | parent | prev [-]

If you cache query plans like other RDBMSs do, then the client no longer has to manage that manually, and it's not limited to a single connection.

MS SQL still has prepared statements, but they really haven't been needed in 20 years, since it gained the ability to cache plans based on statement text.
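The server-side approach described above can be sketched as a cache keyed on the statement text and shared across all connections (a toy illustration; a real server also parameterizes the text and invalidates plans on schema or statistics changes):

```python
from functools import lru_cache

PLAN_STATS = {"compiles": 0}

@lru_cache(maxsize=1024)
def get_plan(sql_text: str):
    """Server-side plan cache: plan once per distinct statement text,
    no matter which connection submits it."""
    PLAN_STATS["compiles"] += 1
    return ("plan-for", sql_text)  # stand-in for a real query plan

# Two different "connections" issue the same statement text:
get_plan("SELECT * FROM users WHERE id = ?")
get_plan("SELECT * FROM users WHERE id = ?")
print(PLAN_STATS["compiles"])  # 1 -- planned once, reused thereafter
```

Because the key is the text itself rather than a per-session handle, no client-side PREPARE bookkeeping is needed and poolers are a non-issue.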