jiggawatts | 3 days ago
It looks bizarre to see Azure Storage support included, because there's no need for fancy SlateDB+LSM-tree code to simulate a block device on Azure Storage: it natively provides "page blobs", which can be mounted as standard disks and used for boot volumes, databases, or whatever. Page blobs are the backing storage for all Azure virtual machine disks and can also be used directly via the HTTP API.

For example, SQL Server can store its database files on page blobs without having to attach them as disks and format them as NTFS volumes. See: https://learn.microsoft.com/en-us/azure/storage/blobs/storag...
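To illustrate the "directly via the HTTP API" part: page blobs are created at a fixed size and then read and written in place at 512-byte-aligned offsets, which is what makes them behave like a block device. A minimal sketch using the azure-storage-blob Python SDK (the container and blob names are placeholders, and the connection string is assumed to come from an environment variable):

    import os
    from azure.storage.blob import BlobClient

    # Placeholder names; credentials come from a connection string in the environment.
    blob = BlobClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"],
        container_name="vhds",
        blob_name="example-disk.bin",
    )

    # Page blobs are created with a fixed size (a multiple of 512 bytes)...
    blob.create_page_blob(size=1024 * 1024)

    # ...and then written and read in place at 512-byte-aligned offsets.
    blob.upload_page(b"\x00" * 512, offset=0, length=512)
    first_page = blob.download_blob(offset=0, length=512).readall()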
Eikon | 3 days ago | parent
What's so bizarre about it? Are we prevented from implementing features when the platform itself provides adjacent tech?

Page blobs are 2x+ more expensive ($0.045/GB vs $0.02/GB) and would require Azure-specific code. We have no desire to maintain separate implementations for each cloud provider; the approach works identically on S3, GCS, Azure, and local storage. If anything, ZeroFS helps you distance yourself from vendor lock-in, especially if you start running "usually managed" components, such as databases, on top of it.

Speaking of Azure's "native" solutions - we benchmarked ZeroFS against Azure Files. ZeroFS is 35-41x faster for most operations and 38% cheaper. If Azure Files performs that poorly, I don't hold out much hope for page blobs either: https://www.zerofs.net/zerofs-vs-azure-files
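The "works identically everywhere" point is the same idea that backend-agnostic object-store layers expose in other ecosystems. This is not ZeroFS code (ZeroFS is Rust), just a Python/fsspec sketch of the concept: one calling convention, swappable backends, with credentials and account configuration omitted and bucket/container names as placeholders.

    import fsspec

    # Each protocol needs its adapter package installed (s3fs, gcsfs, adlfs);
    # credentials are assumed to be configured via the environment.
    for url in (
        "s3://my-bucket/zerofs-demo.txt",      # Amazon S3 (or S3-compatible)
        "gs://my-bucket/zerofs-demo.txt",      # Google Cloud Storage
        "abfs://my-container/zerofs-demo.txt", # Azure Blob Storage
        "file:///tmp/zerofs-demo.txt",         # local filesystem
    ):
        with fsspec.open(url, "wb") as f:
            f.write(b"same code path, different backend")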