| ▲ | fabian2k 8 hours ago |
| The last time I looked into it, my impression was that disabling the JIT in PostgreSQL was the better default choice. I had a massive slowdown in some queries, and that doesn't seem to be an entirely unusual experience. It does not seem worth it to me to add such large variability to query performance by default. The JIT seemed like something that could be useful if you benchmark its effect on your actual queries, but not as a default for everyone. |
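(For anyone who wants to try this themselves: a minimal sketch of the relevant PostgreSQL settings. The table name is hypothetical; the cost thresholds shown are the documented defaults, which you can raise instead of disabling JIT outright.)

```sql
-- Disable JIT for the current session only, to benchmark a specific query:
SET jit = off;
EXPLAIN (ANALYZE) SELECT count(*) FROM my_table;  -- compare against a run with jit = on

-- Alternatively, keep JIT enabled but raise the cost thresholds
-- (defaults shown; plans cheaper than jit_above_cost are never JIT-compiled):
SET jit_above_cost = 100000;
SET jit_optimize_above_cost = 500000;
SET jit_inline_above_cost = 500000;

-- To change the default cluster-wide, set jit = off in postgresql.conf
-- or run: ALTER SYSTEM SET jit = off;
```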
|
| ▲ | pjmlp 8 hours ago | parent [-] |
| That is quite strange, given that the big-boy RDBMSs (Oracle, SQL Server, DB2, Informix, ...) have all had JIT capabilities for several decades now. |
| |
| ▲ | SigmundA 6 hours ago | parent [-] | | The big boys all cache query plans, so the amount of time it takes to compile is not really a concern. | | |
| ▲ | vladich an hour ago | parent | next [-] | | Postgres caches query plans too. The problem is you can only cache what you can share, and if your planner works well, you can share very little; there can be a lot of unique plans even for the same query. | |
| ▲ | aengelke 4 hours ago | parent | prev [-] | | That's not generally correct. Compile time is a concern for several databases. | | |
| ▲ | SigmundA 2 hours ago | parent [-] | | Most systems submit many of the same queries over and over again. Ad-hoc, one-off queries can usually accept a higher up-front compile cost, because the query itself usually takes much longer anyway, versus worrying about an extra 100 ms of compile time. Maybe it was too strong to say it's not a concern at all, but nothing like PG, where every single request needs to replan and potentially JIT unless the client manually prepares the statement and keeps the connection open. |
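(The manual-prepare workaround mentioned above looks roughly like this. A sketch, assuming a hypothetical "users" table; the prepared statement lives only for the current session, which is why the connection has to stay open.)

```sql
-- Plan once per session; EXECUTE reuses the stored statement.
PREPARE find_user (int) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE find_user(42);
-- After the first several executions, the planner may switch to a cached
-- generic plan instead of replanning for each parameter value.
```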
|
|
|