▲ | positisop 3 days ago |
Longhorn is a poorly implemented distributed storage layer. You are better off with Ceph.
|
▲ | willbeddow 3 days ago |
Have not used Longhorn, but we are currently in the process of migrating off of Ceph after an extremely painful relationship with it. Ceph has fundamental design flaws (like the way it handles subtree pinning) that, IMO, make more modern distributed filesystems look very appealing. SeaweedFS is also cool, and for high-performance use cases, Weka is expensive but good.
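For context, "subtree pinning" is the CephFS mechanism for nailing a directory tree to a particular MDS rank, set through an extended attribute. A minimal sketch (the mount path and rank here are made up):

    # pin everything under this directory to MDS rank 1
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects/build-cache
    # a value of -1 hands the subtree back to the default balancer
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects/build-cache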
▲ | q3k 3 days ago |
That sounds more like a CephFS issue than a Ceph issue. (A lot of us distrust distributed 'POSIX-like' filesystems for good reasons.)
▲ | __turbobrew__ 3 days ago |
Are there any distributed POSIX filesystems which don't suck? I think part of the issue is that POSIX-compliant filesystems just don't scale, and you are just seeing that?
▲ | scheme271 3 days ago |
I think Lustre works fairly well. At the very least, it's used in a lot of HPC centers to handle large filesystems that get hammered by lots of nodes concurrently. It's open source, so nominally free, although getting a support contract from a specialized consulting firm might be pricey.
▲ | huntaub 3 days ago |
Basically, we are building this at Archil (https://archil.com). The reason these things are generally super expensive is that they're incredibly hard to build.
▲ | willbeddow 3 days ago |
Weka seems to Just Work from our tests so far, even under pretty extreme load with hundreds of mounts on different machines, lots of small files, etc. Unfortunately it's ungodly expensive.
|
▲ | yupyupyups 3 days ago |
I've heard Ceph is expensive to run. But maybe that's not true?
▲ | keeperofdakeys 3 days ago |
Ceph overheads aren't that large for a small cluster, but they grow as you add more hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times, on different machines, which is going to lead to a large overhead compared with local storage. Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab-sized.
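That replica count is a per-pool setting. A minimal sketch of the usual 3-copy setup (the pool name is made up):

    # keep 3 copies of each object, keep serving I/O while at least 2 are available
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2

Dropping size to 2 halves the raw-capacity overhead, but it's generally discouraged because a single failure leaves you with one copy.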
▲ | jauntywundrkind 3 days ago |
I'm only just wading in, after years of intent. I don't feel like Ceph is particularly demanding. It does want a decent amount of RAM: 1GB each for the monitor, manager, and metadata daemons, up to 16GB total for larger clusters, according to the docs. But then each disk's OSD defaults to 4GB, which can add up fast! And some users can use more. 10GbE is recommended and more is better here, but that seems not unique to Ceph: syncing storage will want bandwidth.

https://docs.ceph.com/en/octopus/start/hardware-recommendati...
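The per-OSD figure is the osd_memory_target option, and 4GiB is its documented default. A minimal sketch of raising it for all OSDs cluster-wide (value is in bytes):

    # 6 GiB per OSD instead of the 4 GiB default
    ceph config set osd osd_memory_target 6442450944

It's a target rather than a hard cap, so OSDs can still spike past it, e.g. under recovery load.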
▲ | master_crab 3 days ago |
It's going to do a good job of saturating your LAN maintaining quorum on the data.