| ▲ | loa_observer 5 days ago |
DuckDB is super fast for analytic tasks, especially paired with a visual EDA tool like PyGWalker: it lets you explore and visualize millions of rows in seconds. That said, comparing DuckDB and SQLite is a little unfair. I would still use SQLite to build systems in most cases and reserve DuckDB for analytics only — it's hard to get a smooth deployment across a lot of platforms if your app bundles DuckDB.
| ▲ | trueno 5 days ago | parent |
Depending on the size and needs of the distributed system or application, I'm really excited about Postgres + pg_lake. Postgres has blown my mind at how well it handles concurrent writes, at least for the kinds of things I build and support for my org, and the pg_lake extension then adds the ability to work like a data-lake-style analytics engine. It transparently decides whether a transaction goes down the normal query path or through DuckDB, which brings giant-aggregation-type queries to massive datasets. Someone should smush SQLite and DuckDB together and do that kind of switching based on query type.
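The switching idea above can be sketched in a few lines. This is a hypothetical illustration, not how pg_lake actually routes queries: a crude heuristic classifies a statement as analytic (aggregates, GROUP BY, window functions) or transactional and dispatches it to the matching engine. For a self-contained demo both paths share one stdlib `sqlite3` connection; in a real build the analytic branch would be a `duckdb.connect()` reading the same tables.

```python
import re
import sqlite3

# Illustrative heuristic only: aggregation/window keywords -> analytic path.
AGG_PATTERN = re.compile(
    r"\b(GROUP\s+BY|SUM|AVG|COUNT|MIN|MAX|OVER)\b", re.IGNORECASE
)

def is_analytic(sql: str) -> bool:
    """Return True if the query looks like an analytic (OLAP-style) query."""
    return bool(AGG_PATTERN.search(sql))

class HybridDB:
    """Route each statement to an OLTP or an analytics engine by query shape."""

    def __init__(self) -> None:
        self.oltp = sqlite3.connect(":memory:")
        # Hypothetical: in a real hybrid this would be a DuckDB connection
        # over the same data; here both paths share sqlite3 for the demo.
        self.olap = self.oltp

    def execute(self, sql: str, params=()):
        conn = self.olap if is_analytic(sql) else self.oltp
        return conn.execute(sql, params)

db = HybridDB()
db.execute("CREATE TABLE t (k TEXT, v INTEGER)")
db.execute("INSERT INTO t VALUES ('a', 1), ('a', 2), ('b', 3)")
print(is_analytic("SELECT * FROM t WHERE k = 'a'"))       # point lookup -> OLTP path
print(is_analytic("SELECT k, SUM(v) FROM t GROUP BY k"))  # aggregation -> analytic path
print(db.execute("SELECT k, SUM(v) FROM t GROUP BY k ORDER BY k").fetchall())
```

A production version would need a real parser rather than a regex (column names like `max_count` would false-positive here), which is part of why doing this inside the database, as pg_lake does for Postgres, is more robust than doing it in application code.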