infogulch 6 hours ago

How suitable would this be as a zfs send target to back up your local zfs datasets to object storage?

suprasam 4 hours ago | parent | next [-]

Yes, this is a core use case it fits nicely. See slide 31, "Multi-Cloud Data Orchestration", in the talk.

Not only backup but also DR site recovery.

  The workflow:

  1. Server A (production): zpool on local NVMe/SSD/HD
  2. Server B (same data center): another zpool backed by objbacker.io → remote object storage (Wasabi, S3, GCS)
  3. zfs send from A to B - data lands in object storage

  Key advantage: no continuously running cloud VM. You're just paying for object storage (cheap) not compute (expensive). Server B is in your own data center - it can be a VM too.
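A minimal sketch of step 3, assuming a dataset tank/data on Server A and a pool named objpool on Server B (both names are illustrative, not from the talk):

  # on Server A: snapshot, then stream the snapshot to Server B over SSH
  zfs snapshot tank/data@backup-2024-01-01
  zfs send tank/data@backup-2024-01-01 | ssh serverB zfs receive objpool/backup/data

  # later runs send only the delta between snapshots
  zfs snapshot tank/data@backup-2024-01-02
  zfs send -i @backup-2024-01-01 tank/data@backup-2024-01-02 | ssh serverB zfs receive objpool/backup/data

Because Server B's pool sits on objbacker, the received blocks land in the object store as they arrive.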
For DR, when you need the data in the cloud:

  - Spin up a MayaNAS VM only when needed
  - Import the objbacker-backed pool - data is already there
  - Use it, then shut down the VM
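In rough outline (pool and dataset names are illustrative, and the exact import flags depend on how the objbacker device is exposed inside the VM):

  # on the freshly booted MayaNAS VM
  zpool import                          # scan for importable pools
  zpool import -o readonly=on objpool   # import read-only (drop the option for a real failover)
  zfs list -r objpool/backup            # the replicated datasets are already there
  # ... use the data, then detach cleanly and shut the VM down
  zpool export objpool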
p_l 4 hours ago | parent | prev [-]

Quite probably should work just fine.

The secret is that ZFS internally implements an object storage layer (the DMU) on top of block devices, and only then implements ZVOL and the ZPL (the ZFS POSIX Layer, i.e. the filesystem) on top of that.

A "zfs send" is essentially a serialized stream of objects sorted by dependency (objects later in stream will refer to objects earlier in stream, but not the other way around).