marginalia_nu 4 days ago
In the index-lookup problem domain, issuing multiple requests at the same time is not possible, except via some entirely guess-based readahead scheme that may well drive up disk utilization but is unlikely to do much else. Large blocks are a solution with that constraint as a given. That paper seems to mostly focus on throughput across concurrent independent queries rather than single-query performance. It's arriving at a different solution because it's optimizing for a different variable.
throwaway81523 4 days ago
In most search engines the top few tree layers are in a RAM cache, and those cached nodes can also hold the disk addresses of the next levels. So maybe that lets you start some concurrent requests.
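A minimal sketch of that idea, with a hypothetical `CACHED_TOP` map standing in for the in-RAM top layers and `read_node` standing in for a real disk read: because the cached layers already name the disk addresses of the next-level nodes, lookups for several keys can issue their reads concurrently instead of one at a time.

```python
import concurrent.futures

# Hypothetical in-RAM cache of the top tree layers: maps a key range
# to the disk address of the next-level node covering that range.
CACHED_TOP = {range(0, 100): 0x1000, range(100, 200): 0x2000}

def read_node(disk_addr):
    # Stand-in for an actual disk read; here it just echoes the address.
    return disk_addr

def concurrent_lookup(keys):
    # The top layers are cached, so the next-level disk addresses are
    # known up front and the reads can all be issued at the same time.
    addrs = []
    for k in keys:
        for key_range, addr in CACHED_TOP.items():
            if k in key_range:
                addrs.append(addr)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(read_node, addrs))
```

This only buys concurrency across the levels the cache can resolve; once a lookup is below the cached layers, each further hop still depends on the previous read.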
Veserv 4 days ago
Large block reads are just a readahead scheme where you prefetch the next N small blocks. So you are really saying that contiguous readahead is close enough to arbitrary readahead, especially if you tune your data structure to create larger regions of locality.
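The equivalence being claimed can be shown directly; the block sizes below are assumptions for illustration, not anything from the thread. One large read of N*SMALL bytes returns exactly the same data as prefetching the next N contiguous small blocks one by one:

```python
SMALL = 4096  # small block size in bytes (assumed)
N = 16        # readahead factor (assumed)

def read_large_block(buf, offset):
    # One large read of N * SMALL contiguous bytes.
    return buf[offset:offset + N * SMALL]

def readahead_small_blocks(buf, offset):
    # Prefetching the next N small blocks individually, then
    # concatenating them, yields byte-for-byte the same data.
    return b"".join(buf[offset + i * SMALL: offset + (i + 1) * SMALL]
                    for i in range(N))
```

The difference is purely operational: one request versus N, which is exactly why large blocks look like a baked-in contiguous readahead policy.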