koverstreet | 2 hours ago
No multithreaded write benchmarks. That's a major omission, given that's where you'll see the biggest difference between b-trees and LSM trees.

The paper also talks about the overhead of the mapping table for node lookups, and says "Bf-Tree by default pins the inner nodes in memory and uses direct pointer addresses to reference them. This allows a simpler inner node implementation, efficient node access, and reduced contention on the mapping table". But you don't have to pin nodes in memory to use direct pointer lookups. Reserve a field in your key/value pair for a direct in-memory pointer, and after chasing it check that you got the node you expect; only fall back to the mapping table (i.e. the hash table of cached nodes) if the pointer is uninitialized or you don't get the node you expect. (Rough sketch at the end of this comment.)

"For write, conventional B-Tree performs the worst, as a single record update would incur a full page write, as evidenced by the highest disk IO per operation." Only with a random distribution, but early on the paper talks about benchmarking with a Zipf distribution. Err? The benchmark does look like a purely random distribution, which is not terribly realistic for most use cases. The line about "a single record update incurring a full page write" also ignores the effect of cache size vs. working set size, which is a rather important detail. I can't say I trust the benchmark numbers.

Prefix compression - nice to see this popping up.

"Hybrid latching" - basically, they're doing what the Linux kernel calls seqlocks for interior nodes (second sketch below). This is smart, but given that the b-tree implementation they compare against doesn't use it, you shouldn't trust the b-tree benchmarks. However, I found that approach problematic - it's basically software transactional memory, with all the complexity that implies, and it bleeds out into too much of the rest of your b-tree code. Using a different type of lock for interior nodes, where read locks only touch per-CPU counters, gives the same performance (read locks write no cachelines shared with other CPUs) for much lower complexity (third sketch below).

Not entirely state of the art, and I see a lot of focus on optimizations that likely wouldn't survive in a larger system, but it does look like a real improvement over LSM trees.
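To make the "direct pointer + verify" idea above concrete, here's a minimal sketch in C. The struct layout and mapping_table_lookup are invented for illustration, not the paper's or any real b-tree's API, and it assumes node memory is type-stable (or reclaimed under RCU/epochs) so that chasing a possibly stale pointer is safe to attempt:

    #include <stdint.h>
    #include <stddef.h>

    struct node {
        uint64_t node_id;          /* stable identity, e.g. on-disk address */
        /* ... keys, children, lock ... */
    };

    struct child_ref {
        uint64_t     node_id;      /* identity of the child we want */
        struct node *cached;       /* direct pointer hint; may be NULL or stale */
    };

    /* assumed helper: the mapping table, i.e. hash table of cached nodes */
    struct node *mapping_table_lookup(uint64_t node_id);

    static struct node *child_node(struct child_ref *ref)
    {
        struct node *n = ref->cached;

        /* Fast path: chase the raw pointer, then verify it's the node we
         * expected; the check is what makes a stale pointer harmless. */
        if (n && n->node_id == ref->node_id)
            return n;

        /* Slow path: hint uninitialized or stale - go through the mapping
         * table and refresh the hint for next time. */
        n = mapping_table_lookup(ref->node_id);
        ref->cached = n;
        return n;
    }

Eviction then only has to make sure a reused node gets a new identity; nothing needs to be pinned.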
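For reference, the seqlock pattern being described, again just a sketch with made-up names rather than the paper's code. Writers make the sequence odd while modifying and even again when done; readers retry if the sequence was odd or changed under them. The writer-side increments default to seq_cst to keep the sketch simple, and strictly the protected fields would need relaxed atomic accesses to be race-free under the C11 model:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct seqlatch {
        atomic_uint_fast64_t seq;   /* even = stable, odd = writer active */
    };

    static uint64_t read_begin(struct seqlatch *l)
    {
        uint64_t s;

        /* Wait until no writer is in the middle of an update. */
        while ((s = atomic_load_explicit(&l->seq, memory_order_acquire)) & 1)
            ;
        return s;
    }

    static bool read_retry(struct seqlatch *l, uint64_t s)
    {
        /* True if a writer ran during the read; the reader must restart. */
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&l->seq, memory_order_relaxed) != s;
    }

    /* Writers would also hold a mutex to exclude each other. */
    static void write_begin(struct seqlatch *l)
    {
        atomic_fetch_add(&l->seq, 1);   /* seq becomes odd */
    }

    static void write_end(struct seqlatch *l)
    {
        atomic_fetch_add(&l->seq, 1);   /* seq becomes even again */
    }

The reader loop is do { s = read_begin(&latch); copy out whatever you need; } while (read_retry(&latch, s)); - i.e. every read has to be restartable and work on copied-out data, which is the STM-ish complexity that leaks into the rest of the tree code.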
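And the alternative: reader counts that live in per-CPU cachelines, so taking and dropping a read lock never writes a line shared with other CPUs, similar in spirit to Linux's percpu_rw_semaphore. This is also an invented sketch, with spin loops where a real implementation would sleep, and default (seq_cst) atomics because the reader/writer flag handshake needs the strong ordering:

    #include <stdatomic.h>

    #define NR_CPUS 64

    struct pcpu_rwlock {
        struct {
            atomic_int readers;
            char pad[64 - sizeof(atomic_int)];  /* one cacheline per CPU */
        } cpu[NR_CPUS];
        atomic_bool writer;
    };

    static void read_lock(struct pcpu_rwlock *l, int cpu)
    {
        for (;;) {
            atomic_fetch_add(&l->cpu[cpu].readers, 1);
            if (!atomic_load(&l->writer))
                return;     /* fast path: no shared cacheline written */

            /* A writer is active: back out and wait for it to finish. */
            atomic_fetch_sub(&l->cpu[cpu].readers, 1);
            while (atomic_load(&l->writer))
                ;
        }
    }

    static void read_unlock(struct pcpu_rwlock *l, int cpu)
    {
        atomic_fetch_sub(&l->cpu[cpu].readers, 1);
    }

    /* A real version also needs a mutex to serialize writers. */
    static void write_lock(struct pcpu_rwlock *l)
    {
        atomic_store(&l->writer, true);

        /* Wait for every CPU's reader count to drain. */
        for (int i = 0; i < NR_CPUS; i++)
            while (atomic_load(&l->cpu[i].readers))
                ;
    }

    static void write_unlock(struct pcpu_rwlock *l)
    {
        atomic_store(&l->writer, false);
    }

Write locks are expensive, but interior nodes are written rarely; readers get the cheap path without any of the retry/restart machinery.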
fuzzybear3965 | 2 hours ago
Sure, but on principle, looking at the paper, I'd expect it to outperform B-trees, since write amplification is generally reduced. Are you thinking about cases that require ordering writes to a given record (lock contention)?
| ||||||||||||||||||||||||||