| ▲ | cjonas 3 hours ago |
| We just create mini data "ponds" on the fly by copying tenant-isolated gold-tier data to parquet in S3. The user/agent queries are executed with DuckDB. We run this process when the user starts a session and generate an STS token scoped to their tenant bucket path. It's extremely simple and works well (at least with our data volumes). |
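A minimal sketch of the tenant-scoped STS step the comment describes, assuming a bucket layout of `s3://<bucket>/<tenant_id>/...`. The bucket name, tenant id, role ARN, and function name here are all hypothetical; only the session-policy shape (passed to `AssumeRole`'s `Policy` parameter) is the real AWS mechanism.

```python
import json


def tenant_session_policy(bucket: str, tenant_id: str) -> dict:
    """Session policy restricting an STS token to one tenant's prefix.

    Assumed layout: s3://<bucket>/<tenant_id>/... -- adjust the
    Resource ARNs to your actual pond layout.
    """
    prefix = f"{tenant_id}/*"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Read-only access to objects under the tenant's prefix.
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{prefix}"],
            },
            {
                # Allow listing, but only within the tenant's prefix.
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {"StringLike": {"s3:prefix": [prefix]}},
            },
        ],
    }


# With boto3 (not executed here), the policy is passed as a session policy
# to AssumeRole, so the returned credentials can only see that tenant's path:
#
#   sts = boto3.client("sts")
#   creds = sts.assume_role(
#       RoleArn=POND_READER_ROLE_ARN,          # hypothetical role ARN
#       RoleSessionName=f"pond-{tenant_id}",
#       Policy=json.dumps(tenant_session_policy("pond-bucket", tenant_id)),
#       DurationSeconds=3600,
#   )["Credentials"]

policy = tenant_session_policy("pond-bucket", "tenant-42")
print(json.dumps(policy, indent=2))
```

Because the scoping lives in the session policy, the effective permissions are the intersection of the role's permissions and this policy, so even a shared "pond reader" role can't cross tenant prefixes.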
|
| ▲ | mattaitken an hour ago | parent | next [-] |
| This is cool. I think for our use case this wouldn't work. We're dealing with billions of rows for some tenants. We're about to introduce alerts where users can write their own TRQL queries and then define alerts from them. That requires evaluating them regularly, so the data effectively needs to be continuously up to date. |
|
| ▲ | Waterluvian 2 hours ago | parent | prev | next [-] |
| Is that why it's called DuckDB? Because data ponds? |
| |
| ▲ | QuantumNomad_ 2 hours ago | parent | next [-] |
| The DuckDB website has the following to say about the name: |
| > Why call it DuckDB? |
| > Ducks are amazing animals. They can fly, walk and swim. They can also live off pretty much everything. They are quite resilient to environmental challenges. A duck's song will bring people back from the dead and inspires database research. They are thus the perfect mascot for a versatile and resilient data management system. |
| https://duckdb.org/faq#why-call-it-duckdb |
| ▲ | cjonas 2 hours ago | parent | prev | next [-] | | Idk but I named everything in the related code "duckpond" :) | |
| ▲ | mritchie712 2 hours ago | parent | prev [-] | | Hannes (one of the creators) had a pet duck |
|
|
| ▲ | otterley an hour ago | parent | prev | next [-] |
| How large are these data volumes? How long does it take to prepare the data when a customer request comes in? |
| |
| ▲ | cjonas an hour ago | parent [-] | | Small. We're dealing with financial accounts, holdings and transactions. So a user might have 10 accounts, thousands of holdings, and tens of thousands of transactions, plus a handful of supplemental data tables. Then there is market data that is shared across tenants and updated on an interval; this data is maybe 10-20M rows. Just to clarify, the data is prepared when the user (agent) analytics session starts. Right now it takes 5-10s, which means it's typically ready well before the agent has actually determined it needs to run any queries. I think for larger volumes, pg_duckdb would allow this to scale to tens of millions of rows pretty efficiently. |
|
|
| ▲ | boundlessdreamz 2 hours ago | parent | prev [-] |
| How do you copy all the relevant data? Doesn't this create unnecessary load on your source DB? |
| |
| ▲ | cjonas 2 hours ago | parent [-] | | We have various data sources (which is another benefit of this approach). Data from the application DB is currently pulled using the FE APIs, which handle tenant isolation and let the application database deal with the load. I think pg_duckdb could be a good solution here as well, but I haven't gotten around to testing it. Other data comes from an analytics DB; most of it is landed on an interval via pipeline scripts. |
|