| ▲ | rostayob 3 days ago |
| (Disclaimer: I'm one of the authors of TernFS, and while we evaluated Ceph I am not intimately familiar with it.) Main factors:
| * Ceph stores both metadata and file contents using the same object store (RADOS). TernFS uses a specialized database for metadata which takes advantage of various properties of our datasets (immutable files, few moves between directories, etc.); a rough sketch follows below.
| * While Ceph is capable of storing PBs, we currently store ~600PB on a single TernFS deployment. Last time we checked, this was an order of magnitude more than even very large Ceph deployments.
| * More generally, we wanted a system that we knew we could easily adapt to our needs and, more importantly, quickly fix when something went wrong, and we estimated that building out something new rather than adapting Ceph (or some other open-source solution) would be less costly overall. |
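A rough illustration of the kind of metadata layout that immutability makes possible. This is a hypothetical sketch, not TernFS's actual schema; every type and field name below is made up.

    // Hypothetical sketch, not TernFS's actual schema: because files are
    // immutable, the per-file metadata record is written once at creation
    // and never updated in place, so a simple write-once key/value layout
    // is enough. All type and field names here are made up.
    package metasketch

    // FileID is assigned once at creation; immutability means the record
    // keyed by it never changes afterwards.
    type FileID uint64

    // FileRecord is write-once metadata for a single file.
    type FileRecord struct {
        Size     uint64   // fixed at creation; contents are never rewritten
        MTimeNs  int64    // creation time in nanoseconds
        BlockIDs []uint64 // IDs of the blocks holding the file contents
    }

    // DirEntry maps a name within a directory to a FileID. Moves between
    // directories are rare, so cross-directory renames can take a slow
    // path without hurting the common case.
    type DirEntry struct {
        ParentDir uint64
        Name      string
        File      FileID
    }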
|
| ▲ | mgrandl 3 days ago | parent | next [-] |
| There are definitely insanely large Ceph deployments; I have seen hundreds of PBs in production myself. Also, your use case sounds like something that should be quite manageable for Ceph, given the limited metadata activity, which tends to be the main pain point with CephFS. |
| |
| ▲ | rostayob 3 days ago | parent | next [-]
| I'm not fully up to date since we looked into this a few years ago, but at the time the CERN deployments of Ceph were cited as particularly large examples, and they topped out at ~30PB. Also note that when I say "single deployment" I mean that the full storage capacity is not subdivided in any way (i.e. there are no "zones" or "realms" or similar concepts). We wanted this to be the case after experiencing situations where we had significant overhead due to having to rebalance different storage buckets (albeit with a different piece of software, not Ceph). If there are EB-scale Ceph deployments I'd love to hear more about them. |
| ▲ | mrngm 2 days ago | parent | next [-]
| Ceph has had opt-in telemetry for a couple of years. This dashboard panel [0] suggests there are about 4-5 clusters (that send telemetry) in the 32-64 PiB range. It would be really interesting to see larger clusters join in on the telemetry as well.
| [0] https://telemetry-public.ceph.com/d/ZFYuv1qWz/telemetry?orgI... |
| ▲ | mgrandl 3 days ago | parent | prev [-]
| There are much larger Ceph clusters, but they are enterprise-owned and not really talked about publicly. Sadly I can’t share what I personally worked on. |
| ▲ | rostayob 3 days ago | parent [-]
| The question is whether there are single Ceph deployments that are that large. I believe Hetzner uses Ceph for its cloud offering, and that's probably very large, but I'd imagine that no single tenant is storing hundreds of PBs in it, so it's very easy to shard across many Ceph instances. In our use case we have a single tenant which stores 100s of PBs (and soon EBs). |
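A minimal sketch of the tenant-level sharding described above, assuming tenants are assigned to whole clusters by hashing; the cluster names and the hashing scheme are made up for illustration.

    // Sketch of tenant-level sharding across independent Ceph clusters: a
    // cloud provider can hash each tenant onto one cluster, so no single
    // cluster ever has to hold everything. This only caps cluster size
    // while no single tenant outgrows one cluster; a tenant storing
    // hundreds of PBs cannot be split this way without pushing sharding
    // logic up into the application.
    package shardsketch

    import "hash/fnv"

    // Hypothetical pool of independent Ceph clusters.
    var clusters = []string{"ceph-a", "ceph-b", "ceph-c"}

    // ClusterFor picks the cluster that holds all of a given tenant's data.
    func ClusterFor(tenantID string) string {
        h := fnv.New32a()
        h.Write([]byte(tenantID))
        return clusters[h.Sum32()%uint32(len(clusters))]
    }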
| ▲ | ttfvjktesd 3 days ago | parent [-]
| Digital Ocean is also using Ceph [1]. I think these cloud providers could easily have clusters of 100s of PBs at their size, but it's not public information. Even smaller companies (< 500 employees) in today's age of big data collection often have more than 1 PB of total data in their enterprise pool, and hosters like Digital Ocean host thousands of these companies. I do think that Ceph will hit performance issues at that size, and going into the EB range will likely require code changes. My best guess would be that Hetzner, Digital Ocean and similar providers maintain their own internal forks of Ceph, with customizations that tightly address their particular needs.
| [1]: https://www.digitalocean.com/blog/why-we-chose-ceph-to-build... |
|
| ▲ | kachapopopow 3 days ago | parent | prev [-]
| Ceph is more of "here's a raw block of data, do whatever the hell you want with it"; it's not really good for immutable data. |
| ▲ | mgrandl 3 days ago | parent [-]
| Well, sure, you would have to enforce immutability on the client side. |
| ▲ | kachapopopow 3 days ago | parent [-]
| It's more that it has all the machinery to allow mutability, which adds a lot of overhead when it's used as an immutable system. |
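A minimal sketch of what client-side enforcement can look like, and why it doesn't remove the underlying cost; the ObjectStore interface and all names here are hypothetical, not a real RADOS API.

    // Sketch of "enforce immutability at the client side": the object
    // store itself (e.g. a RADOS pool) still supports overwrites, so
    // write-once behaviour only holds if every writer goes through a
    // wrapper like this. The ObjectStore interface is hypothetical.
    package immutsketch

    import "errors"

    var ErrAlreadyExists = errors.New("object already exists")

    // ObjectStore abstracts a fully mutable store.
    type ObjectStore interface {
        Exists(oid string) (bool, error)
        WriteFull(oid string, data []byte) error
    }

    // PutOnce refuses to overwrite an existing object. The check-then-write
    // is racy unless the store offers an exclusive-create primitive, and the
    // store still pays for all of its mutability machinery even though
    // callers never use it, which is the overhead described above.
    func PutOnce(s ObjectStore, oid string, data []byte) error {
        exists, err := s.Exists(oid)
        if err != nil {
            return err
        }
        if exists {
            return ErrAlreadyExists
        }
        return s.WriteFull(oid, data)
    }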
|
| ▲ | eps 3 days ago | parent | prev | next [-] |
| The last point is an extremely important advantage that is often overlooked and denigrated. But having a complex system that you know inside out, because you built it from scratch, pays off handsomely in the long term. |
|
| ▲ | pwlm 3 days ago | parent | prev [-] |
| Any compression at the filesystem level? |
| |
| ▲ | rostayob 3 days ago | parent [-]
| No, we have our own custom compressor as well, but it's outside the filesystem. |
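A minimal sketch of compression layered outside the filesystem; gzip here is just a stand-in for the custom compressor, whose details aren't public.

    // Sketch of application-level compression outside the filesystem: the
    // application compresses payloads before writing and decompresses after
    // reading, so the filesystem only ever sees opaque bytes.
    package compresssketch

    import (
        "bytes"
        "compress/gzip"
        "io"
    )

    // Compress gzips a payload before it is handed to the filesystem.
    func Compress(payload []byte) ([]byte, error) {
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        if _, err := zw.Write(payload); err != nil {
            return nil, err
        }
        if err := zw.Close(); err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }

    // Decompress reverses Compress on the read path.
    func Decompress(stored []byte) ([]byte, error) {
        zr, err := gzip.NewReader(bytes.NewReader(stored))
        if err != nil {
            return nil, err
        }
        defer zr.Close()
        return io.ReadAll(zr)
    }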
|