iamcreasy 2 days ago:
Very cool! Was there any inherent limitation with PostgreSQL or its extension system that forced pg_lake to use DuckDB as its query engine?
mslot 2 days ago:
I gave a talk on that at Data Council, back when we were still discussing the pg_lake extensions as part of Crunchy Data Warehouse: https://youtu.be/HZArjlMB6W4?si=BWEfGjMaeVytW8M1 There is also a nicer recording from POSETTE: https://youtu.be/tpq4nfEoioE?si=Qkmj8o990vkeRkUa It comes down to the trade-offs made by operational and analytical query engines being fundamentally different at every level.
pgguru 2 days ago:
DuckDB provided a lot of infrastructure for reading and writing Parquet files and other common formats. It is also inherently multi-threaded and supports being embedded in a larger program (similar to SQLite), which made it a good basis for something that works outside the traditional process model of Postgres. Additionally, the Postgres extension system supports most of the current project, so I wouldn't say it was forced in this case; it was a design decision. :)
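As a rough illustration of the "embedded, multi-threaded" point (a minimal sketch, not pg_lake's actual code; the file name events.parquet is hypothetical), DuckDB runs in-process like SQLite and parallelizes a Parquet scan on its own:

```python
import duckdb

con = duckdb.connect()            # in-process engine, no separate server
con.execute("SET threads TO 4")   # DuckDB parallelizes within this one process

# read_parquet() is DuckDB's built-in table function for Parquet files.
rows = con.execute(
    "SELECT count(*) FROM read_parquet('events.parquet')"
).fetchall()
print(rows)
```

A Postgres backend, by contrast, is a single-threaded process, which is part of why embedding an analytical engine alongside it is attractive.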