Quack: The DuckDB Client-Server Protocol (duckdb.org)
67 points by aduffy 3 hours ago | 9 comments
simlevesque an hour ago
I like DuckDB, but I'm not sure what it wants to be. There are always new ways to use it, and it's not easy to see which one is right.
NortySpock 21 minutes ago
Sounds useful for small-ball internal analytics datasets you want to place on a shared team server. I can definitely see exploring this for some homelab use.
ozgrakkurt 11 minutes ago
> It would be rather misguided not to build a database protocol on top of HTTP in 2026

This is wrong: HTTP is bad for transferring large amounts of data, and it is also bad for streaming. It is bad for large transfers because some clients have timeout issues, you hit request/response size limits, etc. It is obviously bad for streaming because there is no concept of streaming in it. It is comical to take the path of least resistance so lazy people can put a reverse proxy on top of it, and then claim HTTP is the only relevant way to do it in 2026.

The benchmark doesn't seem to mean much, since TCP can max out 50 GB/s on a single thread; pretty sure it can do more than that, even. So you could be using anything that isn't terrible and you should get max performance out of this.

Also, the protocol is something separate from the format. For example, if you transfer mp4 over FTP and over HTTP, you can compare the two; if you transfer different things over different protocols, the comparison means nothing.

The benchmark graph for bulk transfer should show more granularity, so it is possible to understand what percentage of the hardware limit it is reaching, similar to how BLAS GEMM routines are benchmarked as a percentage of the theoretical max FLOPS of the hardware.

> 60 million rows (76 GB in CSV format!)

This reads a bit disingenuous. It is disappointing to see this instead of something like the PostgreSQL protocol with support for a columnar format.
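The utilization-style reporting the comment asks for is just measured throughput divided by the hardware ceiling. A minimal sketch in Python, where the 50 GB/s single-thread TCP ceiling is the figure cited in the comment and the measured value is hypothetical:

```python
# Report a benchmark result as a percentage of the hardware limit,
# the way GEMM results are reported against theoretical peak FLOPS.
# Both numbers below are illustrative, not real measurements.

tcp_ceiling_gbps = 50.0   # assumed single-thread TCP ceiling (from the comment)
measured_gbps = 12.5      # hypothetical bulk-transfer result

utilization_pct = measured_gbps / tcp_ceiling_gbps * 100
print(f"{utilization_pct:.1f}% of hardware limit")  # 25.0% of hardware limit
```

A graph plotted in these units makes it immediately obvious whether a protocol is bottlenecked by the wire or by its own overhead.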