codeaether 4 days ago

Actually, to fully utilize NVMe performance, you really need to avoid OS overhead by leveraging async I/O such as io_uring. In fact, 4 KB pages work quite well if you can issue enough outstanding requests. See the paper linked below by the TUM folks.

https://dl.acm.org/doi/abs/10.14778/3598581.3598584
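
For what it's worth, here's a minimal sketch of the "enough outstanding requests" idea with liburing (my own illustration, not code from the paper; the device path, queue depth, and random offsets are placeholders, and error handling/buffer cleanup are trimmed):

    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <liburing.h>

    #define QD 256               /* outstanding requests ("queue depth") */
    #define BS 4096              /* 4 KB reads */

    int main(void) {
        struct io_uring ring;
        if (io_uring_queue_init(QD, &ring, 0) < 0) return 1;

        /* hypothetical target; any file or block device opened with O_DIRECT */
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        if (fd < 0) return 1;

        for (int i = 0; i < QD; i++) {
            void *buf = NULL;
            posix_memalign(&buf, BS, BS);                 /* O_DIRECT wants aligned buffers */
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            off_t off = (off_t)(rand() % (1 << 20)) * BS; /* some random 4 KB-aligned offset */
            io_uring_prep_read(sqe, fd, buf, BS, off);
        }
        io_uring_submit(&ring);                           /* one syscall submits all QD reads */

        for (int i = 0; i < QD; i++) {                    /* reap completions */
            struct io_uring_cqe *cqe;
            io_uring_wait_cqe(&ring, &cqe);
            io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        return 0;
    }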

dataflow 4 days ago | parent | next [-]

SPDK is what folks who really care about this use, I think.

jandrewrogers 4 days ago | parent | next [-]

The only thing SPDK buys you is somewhat lower latency, which isn't that important for most applications because modern high-performance I/O schedulers usually are not that latency sensitive anyway.

The downside of SPDK is that it is unreasonably painful to use in most contexts. When it was introduced there were few options for doing high-performance storage I/O but a lot has changed since then. I know many people that have tested SPDK in storage engines, myself included, but none that decided the juice was worth the squeeze.

electricshampo1 4 days ago | parent | next [-]

Depending on the IOPS rate of your app, SPDK can result in less CPU time spent issuing I/O and reaping completions compared to, e.g., io_uring.

See, e.g., https://www.vldb.org/pvldb/vol16/p2090-haas.pdf ("What Modern NVMe Storage Can Do, And How To Exploit It: High-Performance I/O for High-Performance Storage Engines") for actual data on this.

Of course, if your block size is large enough and/or your design batches enough that you already don't spend much time issuing I/O or reaping completions, then, as you say, SPDK will not provide much of a gain.
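
For cases where submission cost does matter, one knob on the io_uring side is SQPOLL mode, where a kernel thread polls the submission queue so submitting I/O usually needs no syscall at all. A minimal setup sketch (assuming liburing; the queue depth and idle timeout are arbitrary, and it trades a busy kernel thread for cheaper submission, so it's not equivalent to SPDK's userspace driver):

    #include <string.h>
    #include <liburing.h>

    int main(void) {
        struct io_uring ring;
        struct io_uring_params p;
        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_SQPOLL;   /* kernel-side submission-queue polling */
        p.sq_thread_idle = 2000;         /* poll thread sleeps after 2 s of idle */
        if (io_uring_queue_init_params(256, &ring, &p) < 0)
            return 1;                    /* may need privileges / a recent kernel */
        /* ...prepare and submit SQEs as usual; io_uring_submit() stays (mostly) */
        /* syscall-free while the poll thread is awake...                        */
        io_uring_queue_exit(&ring);
        return 0;
    }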

__turbobrew__ 3 days ago | parent | prev [-]

I believe seastar uses it and that is the base of scylladb storage engine: https://seastar.io/

I believe the next generation ceph OSD is built on seastar as well: https://docs.ceph.com/en/reef/dev/crimson/crimson/

With something like Ceph, latency is everything, as writes need to be synchronously committed to each OSD replica before the writing client is unblocked. I think for Ceph they are trying to move to NVMe-oF to basically bypass the OS for remote NVMe access. I'm not sure how this will work security-wise, however, as you cannot just have any node on the network reading and writing random blocks of NVMe-oF devices.

lossolo 3 days ago | parent [-]

> I believe seastar uses it and that is the base of scylladb storage engine: https://seastar.io/

They use DPDK (optionally) for network IO, not SPDK.

vlovich123 4 days ago | parent | prev [-]

SPDK requires taking over the device. OP is correct if you want a multi-tenant application where the disk is also used for other things.

dataflow 4 days ago | parent [-]

Not an expert on this but I think that's... half-true? There is namespace support which should allow multiple users I think (?), but it does still require direct device access.

vlovich123 4 days ago | parent [-]

Namespaces are a hack device manufacturers came up with to try to make this work anyway. Namespaces at the device level are a terrible idea IMO because it's still not multi-tenant: you're just carving up a single drive into logically separated chunks that you have to decide on up front. So you have to say "application X gets Y% of the drive while application A gets B%". It's an expensive static allocation that isn't self-adjusting based on actual dynamic usage.

10000truths 4 days ago | parent | next [-]

Dynamic allocation implies the ability to shrink as well as grow. How do you envision shrinking an allocation of blocks to which your tenant has already written data that is (naturally) expected to be durable in perpetuity?

vlovich123 3 days ago | parent [-]

You mean something filesystems do as a matter of course? Setting aside resizing, which is also supported through supporting technologies, I'm not talking about partitioning a drive. You can have different applications sharing a filesystem just fine, with each application growing or shrinking the space it uses naturally as usage changes. Partitioning and namespaces are similar (namespaces being significantly more static) in that you have to make decisions about the future really early, versus a normal file on a filesystem growing over time.

10000truths 3 days ago | parent [-]

If you're assuming that every tenant's block device is storing a filesystem, then you're not providing your tenant a block device, you're providing your tenant a filesystem. And if you're providing them a filesystem, then you should use something like LVM for dynamic partitioning.

The point of NVMe namespaces is to partition at the block device layer. To turn one physical block device into multiple logical block devices, each with their own queues, LBA space, etc. It's for when your tenants are interacting with the block device directly. That's not a hack, that's intended functionality.

marginalia_nu 4 days ago | parent | prev | next [-]

In this problem domain (index lookups), issuing multiple requests at the same time is not possible, except via some entirely guess-based readahead scheme that may indeed drive up disk utilization but is unlikely to do much else. Large blocks are a solution with that constraint as a given.

That paper seems to mostly focus on throughput via concurrent independent queries, rather than single-query performance. It's arriving at a different solution because it's optimizing for a different variable.

throwaway81523 4 days ago | parent | next [-]

In most search engines the top few tree layers are in a RAM cache and can also hold disk addresses for the next levels, so maybe that lets you start some concurrent requests.

Veserv 4 days ago | parent | prev [-]

Large block reads are just a readahead scheme where you prefetch the next N small blocks. So you are just stating that contiguous readahead is close enough to arbitrary readahead especially if you tune your data structure appropriately to optimize for larger regions of locality.

marginalia_nu 4 days ago | parent [-]

Well, I mean, yes, you can use io_uring to read the 128 KB blocks as 32 separate 4 KB reads, but that's a very roundabout way of doing it that doesn't significantly improve your performance, since with either method the operation time is more or less the same. If a 128 KB read takes roughly the same time as a 4 KB read, 32 parallel 4 KB reads aren't going to be faster with io_uring.

Also, an index with larger block sizes is not equivalent to a structure with smaller block sizes plus readahead. The index structure is not the same: larger coherent blocks give you better precision in your indexing structure for the same total number of forward pointers. Since there's no need to index within each 128 KB block, the forward-pointer resolution that would have gone to distinguishing between 4 KB blocks can instead help you rapidly find the next relevant 128 KB block.
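
As a back-of-the-envelope illustration of that resolution argument (my numbers, not marginalia_nu's actual index layout): a node with a fixed number of forward pointers simply addresses 32x more data when each pointer refers to a 128 KB block instead of a 4 KB block.

    #include <stdio.h>

    int main(void) {
        long fanout = 1024;                    /* hypothetical forward pointers per node */
        long small  = fanout * 4L   * 1024;    /* 4 KB leaves   ->   4 MiB per node */
        long large  = fanout * 128L * 1024;    /* 128 KB leaves -> 128 MiB per node */
        printf("%ldx more data per node\n", large / small);   /* prints 32x */
        return 0;
    }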

ozgrakkurt 4 days ago | parent | prev [-]

4 KB is much slower than 512 KB if you are using all of the data. Smaller should be better if there is read amplification.