| ▲ | bojangleslover 7 hours ago |
| Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that. I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still have SeaweedFS for a scorching hot, 1GiB/s colocated object storage, but Wasabi is our bread and butter object storage now. |
|
| ▲ | Ensorceled 4 hours ago | parent | next [-] |
| > > Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun.
|
| > Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).
|
| MinIO is dealing with two out of the three issues, and the company is partially providing work for free, so how is that "completely different"? |
| |
| ▲ | mbreese 4 hours ago | parent | next [-] | | The MinIO business model was a freemium model (well, Open Source + commercial support, which is slightly different). They used the free OSS version to drive demand for the commercially licensed version. It’s not like they had a free community version with users they needed to support thrust upon them — this was their plan. They weren’t volunteers. You could argue that they got to the point where the benefit wasn’t worth the cost, but this was their business model. They would not have gotten to the point where they could have a commercial-only operation without the adoption and demand generated from the OSS version. Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that. | | |
| ▲ | Ensorceled 4 hours ago | parent [-] | | > Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that. No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting dealing with bullshit is just peachy if you are being paid. | | |
| ▲ | merb 2 hours ago | parent [-] | | If that is the case, why did MinIO start with the open source version, if there were only downsides? Sounds like a stupid business plan. | | |
| ▲ | throwaway894345 an hour ago | parent [-] | | They wanted adoption and a funnel into their paid offering. They were looking out for their own self-interest, which is perfectly fine; however, it’s very different from the framing many are giving in this thread of a saintly company doing thankless charity work for evil homelab users. |
|
|
| |
| ▲ | throwaway894345 an hour ago | parent | prev [-] | | “I don’t want to support free users” is completely different than “we’re going all-in on AI, so we’re killing our previous product for both open source and commercial users and replacing it with a new one” |
|
|
| ▲ | hobofan 7 hours ago | parent | prev | next [-] |
| I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour when using S3-compatible storage. That's what I mainly used MinIO for before, and SeaweedFS, especially with their new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes. |
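For CI, something like the following compose file gives a throwaway S3-compatible endpoint. This is a sketch, not a verified setup: the image tag, `server -s3` flags, and port are assumptions worth checking against the SeaweedFS docs (the `weed mini` command mentioned above plays a similar all-in-one role).

```yaml
# Sketch: single-container SeaweedFS for local dev / CI.
# `weed server -s3` bundles master, volume, filer and the S3 gateway
# in one process. Image tag and port are assumptions -- verify
# against the SeaweedFS documentation.
services:
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    command: server -s3
    ports:
      - "8333:8333"   # S3-compatible endpoint
```

Any S3 client can then be pointed at it, e.g. `aws --endpoint-url http://localhost:8333 s3 ls`.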
| |
| ▲ | dizhn 4 hours ago | parent [-] | | I've been using rustfs for some very light local development and it looks... fine :) | | |
| ▲ | antonvs 2 hours ago | parent [-] | | Ironically rustfs.com is currently failing to load on Firefox, with 'Uncaught TypeError: can't access property "enable", s is null'. They shoulda used a statically checked language for their website... | | |
|
|
|
| ▲ | codegladiator 7 hours ago | parent | prev | next [-] |
| Can vouch for SeaweedFS; been using it since the time it was called weedfs, and my managers were like "are you sure you really want to use that?" |
|
| ▲ | sshine 5 hours ago | parent | prev | next [-] |
| Wasabi looks like a service. Any recommendation for an in-cluster alternative in production? Is that SeaweedFS? |
| |
| ▲ | jodrellblank 4 hours ago | parent [-] | | I’ve never heard of SeaweedFS, but the Ceph cluster storage system has an S3-compatible layer (Object Gateway). It’s used by CERN to build petabyte-scale storage capable of ingesting data from particle collider experiments; they're now up to 17 clusters and 74PB, which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place to VMware VSAN. Ceph has been pretty good to us for ~1PB of scalable backup storage for many years, except that it’s a non-trivial system administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We’re moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance: it's great at being massively parallel and not so rapid at single-threaded, high-IOPS workloads. https://ceph.io/en/users/documentation/ https://docs.ceph.com/en/latest/ https://indico.cern.ch/event/1337241/contributions/5629430/a... | | |
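To illustrate what "S3-compatible layer" means in practice: standard S3 tooling works against the Ceph Object Gateway (RGW) with only an endpoint change. A hypothetical rclone remote might look like the fragment below; the hostname and keys are placeholders, and 7480 is RGW's default HTTP port.

```ini
# Hypothetical rclone remote for a Ceph Object Gateway (RGW).
# rclone treats RGW as an ordinary S3 provider; only the endpoint
# differs from AWS. Endpoint and credentials are placeholders.
[ceph-rgw]
type = s3
provider = Ceph
access_key_id = PLACEHOLDER_KEY
secret_access_key = PLACEHOLDER_SECRET
endpoint = http://rgw.example.internal:7480
```

The same endpoint-swap approach applies to aws-cli, boto3, or any other S3 client.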
| ▲ | ranger_danger 3 hours ago | parent [-] | | Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of gluster's async geo-replication feature to keep two storage arrays in sync that were far away over a slow link. This was done after getting fed up with rsync being so slow and always thrashing the disks having to scan many TBs every day. While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case if anyone knows of a solution. | | |
| ▲ | jodrellblank 36 minutes ago | parent [-] | | > "Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS" I became a Ceph admin by accident, so I wasn't involved in choosing it and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem; you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS), or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS I guess, but that sounds like a lot of layers). > "While there is a geo-replication feature for Ceph" Several; the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal based and snapshot based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync. I've not used any of them, they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN or a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features would I go in that direction. https://docs.ceph.com/en/latest/rados/operations/stretch-mod... https://docs.ceph.com/en/latest/rbd/rbd-mirroring/ https://docs.ceph.com/en/latest/cephfs/cephfs-mirroring/ https://docs.ceph.com/en/latest/radosgw/multisite/ |
|
|
|
|
| ▲ | phoronixrly 4 hours ago | parent | prev [-] |
| Nothing wrong? Does minio grant the basic freedoms of being able to run the software, study it, change it, and distribute it? Did minio create the impression to its contributors that it will continue being FLOSS? |
| |
| ▲ | ufocia 4 hours ago | parent [-] | | Yes, the software is under the AGPL. Go forth and forkify. The choice of AGPL tells you that they wanted to be the only commercial source of the software from the beginning. | | |
| ▲ | phoronixrly 3 hours ago | parent [-] | | > the software is under AGPL. Go forth and forkify. No, what was minio is now aistor, a closed-source proprietary software. Tell me how to fork it and I will. > they wanted to be the only commercial source of the software The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects. | | |
| ▲ | tracker1 4 minutes ago | parent | next [-] | | So fork the last minio, and work from there... nobody is stopping you. | |
| ▲ | regularfry an hour ago | parent | prev | next [-] | | > Tell me how to fork it and I will. https://github.com/minio/minio/fork The fact that new versions aren't available does nothing to stop you from forking versions that are. Or were; they'll be available somewhere, especially if it got packaged for an OS distribution. | 
|
|
|