psviderski 14 hours ago

Hey, creator here. Thanks for sharing this!

Uncloud[0] is a container orchestrator without a control plane. Think multi-machine Docker Compose with automatic WireGuard mesh, service discovery, and HTTPS via Caddy. Each machine just keeps a p2p-synced copy of cluster state (using Fly.io's Corrosion), so there's no quorum to maintain.

I’m building Uncloud after years of managing Kubernetes in small envs and at a unicorn. I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines with decent networking, rollouts, and HTTPS. The operational overhead of k8s is brutal for what they actually need.

A few things that make it unique:

- uses the familiar Docker Compose spec, no new DSL to learn

- builds and pushes your Docker images directly to your machines without an external registry (via my other project unregistry [1])

- imperative CLI (like Docker) rather than declarative reconciliation. Easier mental model and debugging

- works across cloud VMs, bare metal, even a Raspberry Pi at home behind NAT (all connected together)

- minimal resource footprint (<150 MB RAM)
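
To give a rough feel for the workflow, here's a simplified sketch (placeholder image and server address; see the docs for the exact commands and options):

    # compose.yaml - a plain Compose file, nothing Uncloud-specific required
    services:
      web:
        image: nginx
    # then, roughly:
    #   uc machine init root@<server-ip>   # bootstrap the first machine over SSH
    #   uc deploy                          # deploy the services above to the cluster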

[0]: https://github.com/psviderski/uncloud

[1]: https://github.com/psviderski/unregistry

topspin 13 hours ago | parent | next [-]

"I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines"

Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".

jabr 12 hours ago | parent | next [-]

I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.

It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.

For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

1dom 6 hours ago | parent | next [-]

> For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everyone said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everyone said I should just use kubernetes.

Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.

And now, here we are, again, lots of people saying "just use k8s for everything".

I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".

It's like the JavaScript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or, the me-problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.

hhh 19 minutes ago | parent [-]

It’s a nice portable target, with very well defined interfaces. It’s easy to start with and pretty easy to manage if you don’t try to abuse it.

tw04 3 hours ago | parent | prev [-]

How many flawless, painless major version upgrades have you had with literally any flavor of k8s? Because in my experience, that's always a science experiment that results in such pain that people end up just sticking with their original deployed version while praying they don't hit any critical bugs or security vulnerabilities.

mxey 3 hours ago | parent | next [-]

I’ve run Kubernetes since 2018 and I can count on one hand the times there were major issues with an upgrade. Have sensible change management and read the release notes for breaking changes. The amount of breaking changes has also gone way down in recent years.

jauntywundrkind 2 hours ago | parent | prev [-]

I applaud you for having a specific complaint. 'You might not need it', 'it's complex', and 'for some reason it bothers me' are all these vibes-based whinges that are so abundant. But with nothing specific, nothing contestable.

nullpoint420 13 hours ago | parent | prev | next [-]

100%. I’m really not sure why K8S has become the complexity boogeyman. I’ve seen CDK apps or docker compose files that are way more difficult to understand than the equivalent K8S manifests.

this_user 8 hours ago | parent | next [-]

Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

With k8s you write a bunch of manifests that are 70% repetitive boilerplate. But actually, there is something you need that cannot be achieved with pure manifests, so you reach for Kustomize. But Kustomize actually doesn't do what you want, so you need to convert the entire thing to Helm.

You also still need to spin up your k8s cluster, which itself consists of half a dozen pods just so you have something where you can run your service. Oh, you wanted your service to be accessible from outside the cluster? Well, you need to install an ingress controller in your cluster. Oh BTW, the nginx ingress controller is now deprecated, so you have to choose from a handful of alternatives, all of which have certain advantages and disadvantages, and none of which are ideal for all situations. Have fun choosing.

stego-tech 3 hours ago | parent | next [-]

Literally got it in one, here. I’m not knocking Kubernetes, mind, and I don’t think anyone here is, not even the project author. Rather, we’re saying that the excess of K8s can sometimes get in the way of simpler deployments. Even streamlined Kubernetes (microk8s, k3s, etc) still ultimately bring all of Kubernetes to the table, and that invites complexity when the goal is simplicity.

That’s not bad, but I want to spend more time trying new things or enjoying the results of my efforts than maintaining the underlying substrates. For that purpose, K8s is consistently too complicated for my own ends - and Uncloud looks to do exactly what I want.

quectophoton 5 hours ago | parent | prev | next [-]

> Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

And if you want to use more than one machine then you run `docker swarm init`, and you can keep using the Compose file you already have, almost unchanged.

It's not a K8s replacement, but I'm guessing for some people it would be enough and less effort than a full migration to Kubernetes (e.g. hobby projects).
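
For example, a rough sketch (names and ports are illustrative) of the swarm-specific bits, which live under `deploy:` in the same file:

    services:
      web:
        image: nginx
        ports:
          - "80:80"
        deploy:
          replicas: 3            # spread 3 replicas across the swarm nodes
          update_config:
            order: start-first   # start a new task before stopping the old one
    # deployed with: docker stack deploy -c compose.yaml mystack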

horsawlarway 5 hours ago | parent | prev [-]

This is some serious rose-colored glasses happening here.

If you have a service with a simple compose file, you can have a simple k8s manifest to do the same thing. Plenty of tools convert right between the two (incl kompose, which k8s literally hands you: https://kubernetes.io/docs/tasks/configure-pod-container/tra...)

Frankly, you're messing up by including kustomize or helm at all in 80% of cases. Just write the (admittedly tedious boilerplate - the manifest format is not my cup of tea) yaml and be done with the problem.

And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).
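
For reference, the whole thing is only a few lines of YAML (a sketch; names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web              # matches your pod labels
      ports:
        - port: 80            # cluster-internal port
          targetPort: 8080    # container port
          nodePort: 30080     # published on every node, like a compose port mapping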

You don't need to touch an ingress until you actually want external traffic using a specific hostname (and optionally tls), which is... the same as compose. And frankly - at that point you probably SHOULD be thinking about the actual tooling you're using to expose that, in the same way you would if you ran it manually in compose. And sure - arguably you could move to gateways now, but in no way is the ingress api deprecated. They very clearly state...

> "The Ingress API is generally available, and is subject to the stability guarantees for generally available APIs. The Kubernetes project has no plans to remove Ingress from Kubernetes."

https://kubernetes.io/docs/concepts/services-networking/ingr...

---

Plenty of valid complaints for K8s (yaml config boilerplate being a solid pick) but most of the rest of your comment is basically just FUD. The complexity scale for K8s CAN get a lot higher than docker. Some organizations convince themselves it should and make it very complex (debatably for sane reasons). For personal needs... Just run k3s (or minikube, or microk8s, or k3d, or etc...) and write some yaml. It's at exactly the same complexity as docker compose, with a slightly more verbose syntax.

Honestly, it's not even as complex as configuring VMs in vsphere or citrix.

KronisLV 2 hours ago | parent [-]

> And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).

https://kubernetes.io/docs/concepts/services-networking/serv...

You might need to redefine the port range from the default 30000-32767. Actually, if you want to avoid the ingress abstraction and maybe want to run a regular web server container of your choice to act as one (maybe you just prefer a config file, maybe that's what your legacy software is built around, maybe you need/prefer Apache2, go figure), you'd probably want to be able to run it on 80 and 443. Or 3000 or 8080 for some other software, out of convenience and simplicity.

Depending on what kind of K8s distro you use, it's thankfully not insanely hard to change: https://docs.k3s.io/cli/server#networking But again, that's kind of going against the grain.
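
For k3s that looks something like this (a sketch, assuming the config-file form of the server flag from the linked docs):

    # /etc/rancher/k3s/config.yaml
    service-node-port-range: "80-32767"   # default is 30000-32767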

everforward 2 hours ago | parent | prev | next [-]

It's not the manifests so much as the mountain of infra underlying it. k8s is an amazing abstraction over dynamic infra resources, but if your infra is fairly static then you're introducing a lot of infra complexity for not a ton of gain.

The network is complicated by the overlay network, so "normal" troubleshooting tools aren't super helpful. Storage is complicated by k8s wanting to fling pods around so you need networked storage (or to pin the pods, which removes almost all of k8s' value). Databases are annoying on k8s without networked storage, so you usually run them outside the cluster and now you have to manage bare metal and k8s resources.

The manifests are largely fine, outside of some of the more abnormal resources like setting up the nginx ingress with certs.

esseph 11 hours ago | parent | prev [-]

Managing hundreds or thousands of containers across hundreds or thousands of k8s nodes has a lot of operational challenges.

Especially in-house on bare metal.

lnenad 10 hours ago | parent | next [-]

But that's not what anyone is arguing here, nor what (to me it seems at least) uncloud is about. It's about a simpler HA multi-node setup with a single-digit or low double-digit number of containers.

Glemkloksdjf 8 hours ago | parent | prev | next [-]

Which is fine, because it absolutely matches the result.

You would not be able to operate hundreds or thousands of nodes of anything without operational complexity, and k8s helps you a lot here.

nullpoint420 10 hours ago | parent | prev | next [-]

Talos has made this super easy in my experience.

sceptic123 10 hours ago | parent | prev [-]

I don't think that argument matches with the premise that they "just need to run a bunch of containers across a few machines"

psviderski 13 hours ago | parent | prev | next [-]

That’s awesome if k3s works for you, nothing wrong with this. You’re simply not the target user then.

PunchyHamster an hour ago | parent | prev | next [-]

k3s makes it easy to deploy, not to debug any problems with it. It's still essentially adding a few hundred thousand lines of code to your infrastructure, and if it's a small app you need to deploy, also wasting a bit of RAM.

matijsvzuijlen 12 hours ago | parent | prev | next [-]

If you already know k8s, this is probably true. If you don't, it's hard to know what bits you need, and need to learn about, to get something simple set up.

epgui 9 hours ago | parent [-]

you could say that about anything…

morcus 9 hours ago | parent [-]

I don't understand the point? You can say that about anything, and that's the whole reason why it's good that alternatives exist.

The clear target of this project is a k8s-like experience for people who are already familiar with Docker and docker compose but don't want to spend the energy to learn a whole new thing for low stakes deployments.

Glemkloksdjf 8 hours ago | parent [-]

Uncloud is so far away from k8s, it's not k8s-like.

A normal person wouldn't think 'hey, let's use k8s for the low-stakes deployment over here'.

tevon 5 hours ago | parent | prev | next [-]

Perhaps it feels so easy given your familiarity with it.

I have struggled to get things like this stood up and hit many footguns along the way

_joel 12 hours ago | parent | prev [-]

Indeed, it seems a knee jerk response without justification. k3s is pretty damn minimal.

tex0 10 hours ago | parent | prev | next [-]

This is a cool tool, I like the idea. But the way `uc machine init` works under the hood is really scary. Lots of `curl | bash` run as root.

While I would love to test this tool, this is not something I would run on any machine :/

psviderski 9 hours ago | parent | next [-]

Totally valid concern. That was a shortcut to iterate quickly in early development. It’s time to do it properly now. Appreciate the feedback. This is exactly the kind of thing I need to hear before more people try it.

redrove 9 hours ago | parent | prev | next [-]

+1 on this

I wanted to try it out but was put off by this[0]. It’s just straight up curl | bash as root from raw.githubusercontent.com.

If this is the install process for a server (and not just for the CLI) I don’t want to think about security in general for the product.

Sorry, I really wanted to like this, but pass.

[0] https://github.com/psviderski/uncloud/blob/ebd4622592bcecedb...

jabr 4 hours ago | parent | prev | next [-]

There is a `--no-install` flag on both `uc machine init` and `uc machine add` that skips that `curl | bash` install step.

You then need to prepare the machine some other way first, but it's just installing Docker and the uncloud service.

I use the `--no-install` option with my own cluster, as I have my own pre-provisioning process that includes some additional setup beyond the docker/uncloud elements.

tontony 9 hours ago | parent | prev [-]

Curious, what would be an ideal (secure) approach for you to install this (or similar) tool?

yabones an hour ago | parent | next [-]

The correct way would be to publish packages on a proper registry/repository and install them with a package manager. For example, create a 3rd party Debian repository, and import the config & signing key on install. It's more work, sure, but it's been the best practice for decades and I don't see that changing any time soon.

tontony 27 minutes ago | parent [-]

Sure, but it all boils down to trust at the end of the day. Why would you trust a third-party Debian repository (that e.g. has a different user namespace and no identity linking to GitHub) more than running something from evidently the same user from GitHub, in this specific case?

I'm not arguing that a repository isn't nice (versioning, signing, version yanking, etc.), and I do agree that the process should be more transparent and verifiable for people who care about it.

rovr138 9 hours ago | parent | prev [-]

It's deploying a script, which then downloads uncloud using curl.

The alternative is deploying the script along with the uncloud files it needs.

INTPenis 3 hours ago | parent | prev | next [-]

We have similar backgrounds, and I totally agree with your k8s sentiment.

But I wonder what this solves?

Because I stopped abusing k8s and started using more container hosts with quadlets instead, using Ansible or Terraform depending on what the situation calls for.

It works just fine imho. The CI/CD pipeline triggers a podman auto-update command, and just like that all containers are running the latest version.

So what does uncloud add to this setup?

zbuttram 13 hours ago | parent | prev | next [-]

Very cool! I think I'll have some opportunity soon to give it a shot; I have just the set of projects that have been needing a tool like this. One thing I think I'm missing after perusing the docs, however: how does one onboard other engineers to the cluster after it has been set up? And similarly, how does deployment from a CI/CD runner work? I don't see anything about how to connect to an existing cluster from a new machine, or at least not that I'm recognizing.

jabr 12 hours ago | parent [-]

There isn't a CLI function for adding a connection (independently of adding a new machine/node) yet, but the connections live in a simple config file (`~/.config/uncloud/config.yaml`) that you can copy or easily create manually for now. It looks like this:

    current_context: default
    contexts:
      default:
        connections:
          - ssh: admin@192.168.0.10
            ssh_key_file: ~/.ssh/uncloud
          - ssh: admin@192.168.0.11
            ssh_key_file: ~/.ssh/uncloud
          - ssh: administrator@93.x.x.x
            ssh_key_file: ~/.ssh/uncloud
          - ssh: sysadmin@65.x.x.x
            ssh_key_file: ~/.ssh/uncloud
And you really just need one entry for typical use. The subsequent entries are only used if the previous node(s) are down.

psviderski 10 hours ago | parent [-]

For CI/CD, check out this GitHub Action: https://github.com/thatskyapplication/uncloud-action.

You can either specify one of the machine SSH targets in the config.yaml or pass it directly to the 'uc' CLI command, e.g.

    uc --connect user@host deploy

sam-cop-vimes 6 hours ago | parent | prev | next [-]

I really like what is on offer here - thank you for building it. Re the private network it builds with WireGuard: how are services running within this private network supposed to access AWS services such as RDS securely? Tailscale has this: https://tailscale.com/kb/1141/aws-rds

olegp 13 hours ago | parent | prev | next [-]

How's this similar to and different from Kamal? https://kamal-deploy.org/

psviderski 13 hours ago | parent [-]

I took some inspiration from Kamal, e.g. the imperative model, but Kamal is more of a deployment tool.

In addition to deployments, uncloud handles clustering - connects machines and containers together. Service containers can discover other services via internal DNS and communicate directly over the secure overlay network without opening any ports on the hosts.

As far as I know, Kamal doesn't provide an easy way for services to communicate across machines.

Services can also be scaled to multiple replicas across machines.
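
To make the service discovery part concrete, a rough sketch (image and names are placeholders) - services in the same Compose file reach each other by service name over the overlay network:

    services:
      api:
        image: ghcr.io/acme/api:latest   # placeholder image
        environment:
          # "db" resolves via the internal DNS to the db container(s)
          DATABASE_URL: postgres://db:5432/app
      db:
        image: postgres:16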

olegp 13 hours ago | parent | next [-]

Thanks! I noticed afterwards that you mention Kamal in your readme, but you may want to add a comparison section (that you can link to) where you compare your solution to others.

Are you working on this full time and if so, how are you funding it? Are you looking to monetize this somehow?

psviderski 13 hours ago | parent [-]

Thank you for the suggestion!

I'm working full time on this, yes. It's funded from my savings at the moment and I don't have plans for any external funding or VC.

For monetisation, I'm considering building a self-hosted and managed (SaaS) web UI for managing remote clusters and the apps on them, with value-added PaaS-like features.

olegp 13 hours ago | parent [-]

That sounds interesting, maybe I could help on the business side of things somehow. I'll email you my calendar link.

psviderski 11 hours ago | parent [-]

Awesome, will reach out!

cpursley 9 hours ago | parent | prev [-]

This is neat. Regarding clustering - can this work with distributed Erlang/Elixir?

jabr 3 hours ago | parent | next [-]

I haven't tried it, but EPMD with DNS discovery should work just fine, and should be similar to this NATS example: https://github.com/psviderski/uncloud-recipes/blob/main/nats...

Basically just configure it with `{service-name}.internal` to find other instances of the service.

psviderski 8 hours ago | parent | prev [-]

I don't know the specific requirements for distributed Erlang/Elixir, but I believe the networking should support it. Containers get unique IPs on a WireGuard mesh with direct connectivity and DNS-based service discovery.

avan1 11 hours ago | parent | prev | next [-]

Thanks for both of the great tools. I just didn't understand one thing: the request flow. Imagine we have 10 servers, where we choose that this request goes to server 1 and another goes to server 7, for example. And since it's zero-downtime, how does it know that server 5 is updating, so that no request goes there until it's back up?

psviderski 10 hours ago | parent [-]

I think there are two different cases here. Not sure which one you’re talking about.

1. External requests, e.g. from the internet via the reverse proxy (Caddy) running in the cluster.

The rollout works at the container level, not the server level. Each container registers itself in Caddy, so Caddy knows which containers to forward and distribute requests to.

When doing a rollout, a new version of the container is started first and registers in Caddy, then the old one is removed. This is repeated for each service container. This way, at any time there are running containers that serve requests.

It doesn't tell any server that requests shouldn't go to it. It just updates the upstreams in the Caddy config to send requests to the containers that are up and healthy.

2. Service to service requests within the cluster. In this case, a service DNS name is resolved to a list of IP addresses (running containers). And the client decides which one to send a request to or whether to distribute requests among them.

When the service is updated, the client needs to resolve the name again to get the up-to-date list of IPs. Many http clients handle this automatically so using http://service-name as an endpoint typically just works. But zero downtime should still be handled by the client in this case.

unixfox 12 hours ago | parent | prev | next [-]

Awesome tool! Does it provide some of the basic features that you would get from running a control plane?

Like automatically rescheduling a container on another server if a server goes down? Deploying on the least filled server first if you have set limits on your containers?

psviderski 9 hours ago | parent [-]

Thank you! That's actually the trade-off.

There is no automatic rescheduling in uncloud by design. At least for now. We will see how far we can get without it.

If you want your service to tolerate a host going down, you should deploy multiple replicas for that service on multiple machines in advance. The 'uc scale' command can be used to run more replicas of an already deployed service.

Longer term, I'm thinking we can have a concept of primary/standby replicas for services that can only have one running replica, e.g. databases. Something similar to how Fly.io does this: https://fly.io/docs/apps/app-availability/#standby-machines-...

Deploying on the least filled machine first is doable but not supported right now. By default, it picks the first machine randomly and tries to distribute replicas evenly among all available machines. You can also manually specify what target machine(s) each service should run on in your Compose file.
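
For illustration, something along these lines (simplified sketch; the exact extension key and syntax are in the docs):

    services:
      db:
        image: postgres:16
        x-machines:            # illustrative placement key; see the docs for the exact name
          - machine-1
      web:
        image: nginx           # no constraint: replicas are spread across available machines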

I want to avoid recreating the complexity of placement constraints, (anti-)affinity, etc. that makes K8s hard to reason about. There is a huge class of apps that need more or less static infra, manual placement, and a certain level of redundancy. That's what I'm targeting with Uncloud.

mosselman 13 hours ago | parent | prev | next [-]

You have a graph that shows a multi-provider setup for a domain. Where would routing to either machine happen? As in, which IP would you use on the DNS side?

psviderski 8 hours ago | parent | next [-]

For a public cluster with multiple ingress (Caddy) nodes, you'd need a load balancer in front of them to properly handle routing and the outage of any of them. You'd use the IP of the load balancer on the DNS side.

Note that a DNS A record with multiple IPs doesn't provide failover, only round robin. But you can use the Cloudflare DNS proxy feature as a poor man's LB. Just add 2+ proxied A records (orange cloud) pointing to different machines. If one goes down with a 52x error, Cloudflare automatically fails over to the healthy one.

calgoo 12 hours ago | parent | prev [-]

Not OP, but you could do "simple" dns load balancing between both endpoints.

11mariom 8 hours ago | parent | prev | next [-]

> - uses the familiar Docker Compose spec, no new DSL to learn

But this comes with the assumption that one already knows the Docker Compose spec. For the exact same reason I'm in love with `podman kube play`: just use k8s manifests to quickly test-run on the local machine, and not bother with some "legacy" compose.

(I never liked Docker Inc. so I never learned THEIR tooling, it's not needed to build/run containers)

TingPing 7 hours ago | parent [-]

podman-compose works fine. It’s a very simple format.

oulipo2 10 hours ago | parent | prev | next [-]

So it's a kind of better Docker Swarm? It's interesting, but honestly I'd rather have something declarative so I can use it with Pulumi. Would it be complicated to add a declarative engine on top of the tool, which discovers what services are already up, does a diff with the new declaration, and handles changes?

psviderski 8 hours ago | parent [-]

This is exactly how it works now. The Compose file is the declarative specification of the services you want to run.

When you run the 'uc deploy' command:

- it reads the spec from your compose.yaml

- inspects the current state of the services in the cluster

- computes the diff and deployment plan to reconcile it

- executes the plan after confirmation

Please see the docs and demo: https://uncloud.run/docs/guides/deployments/deploy-app

The main difference with Docker Swarm is that the reconciliation process is run on your local/CI machine as part of the 'uc deploy' CLI command execution, not on the control plane nodes in the cluster.

And it's not running in a loop automatically. If the command fails, you get instant feedback with the errors, so you can address them and rerun the command.

It should be pretty straightforward to wrap the CLI logic in a Terraform or Pulumi provider. The design principles are very similar and it's written in Go.

utopiah 12 hours ago | parent | prev | next [-]

Neat. As you include quite a few tools for services to be reachable together (not necessarily from the outside), do you also have tooling to make those services more interoperable?

jabr 11 hours ago | parent [-]

Do you have an example of what you mean? I'm not entirely clear on your question.

woile 13 hours ago | parent | prev | next [-]

does it support ipv6?

psviderski 13 hours ago | parent [-]

There is an open issue that confirms enabling ipv6 for containers works: https://github.com/psviderski/uncloud/issues/126 But this hasn’t been enabled by default.

What specifically do you mean by ipv6 support?

woile 8 hours ago | parent | next [-]

I'm no expert, so I'm not sure if I'll explain it correctly. But I've been using Docker Swarm on a server, with Traefik as a reverse proxy, and IPv6 just doesn't seem to work (I've tried a lot). This issue might be related: https://github.com/moby/moby/issues/24379

miyuru 12 hours ago | parent | prev [-]

> What specifically do you mean by ipv6 support?

This question does not make sense. This is equivalent to asking "What specifically do you mean by ipv4 support"

These days both protocols must be supported, and if there is a blocker it should be clearly mentioned.

justincormack 11 hours ago | parent [-]

How do you want to allocate ipv6 addresses to containers? Turns out there are lots of answers. Some people even want to do ipv6 NAT.

lifty 8 hours ago | parent | next [-]

A really cool way to do it is how Yggdrasil project does it (https://yggdrasil-network.github.io/implementation.html#how-...). They basically use public keys as identities and they deterministically create an IPv6 address from the public key. This is beautiful and works for private networks, as well as for their global overlay IPv6 network.

What do you think about the general approach in Uncloud? It almost feels like a cousin of Swarm. Would love to get your take on it.

GoblinSlayer 3 hours ago | parent | prev [-]

Like docker? --fixed-cidr-v6=2001:db8:1::/64
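
The Compose-level equivalent would be something like this (a sketch using the standard Compose network keys; the subnet is the IPv6 documentation prefix):

    networks:
      default:
        enable_ipv6: true
        ipam:
          config:
            - subnet: 2001:db8:1::/64   # substitute your own prefix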

knowitnone3 5 hours ago | parent | prev | next [-]

but if they already know how to use k8s, then they should use it. Now they have to know k8s AND know this tool?

doctorpangloss 11 hours ago | parent | prev | next [-]

haha, uncloud does have a control plane: the mind of the person running "uc" CLI commands

> I’m building Uncloud after years of managing Kubernetes

did you manage Kubernetes, or did you make the fateful mistake of managing microk8s?

Glemkloksdjf 8 hours ago | parent | prev [-]

So you built an insecure version of Nomad/Kubernetes and co?

If you do anything professional, you'd better choose proven software like Kubernetes, or managed Kubernetes, or whatever else the hyperscalers provide.

And the complexity you are solving now, or will have to solve, k8s has already solved: IaC for example, cloud provider support for provisioning an LB out of the box, cert-manager, all the Helm charts for observability and logging, an ecosystem to fall back on (operators), ArgoCD <3, storage provisioning, proper high availability, kind for e2e testing in CI/CD, etc.

I'm also always lost as to why people think k8s is so hard to operate. Just take a managed k8s. There are so many options out there and they are all compatible with the whole k8s ecosystem.

Look, if you don't get Kubernetes, its use cases, advantages, etc., fine, absolutely fine, but your solution is not an alternative to k8s. It's another container orchestrator, like Nomad and k8s and co., with its own advantages and disadvantages.

bluepuma77 5 hours ago | parent | next [-]

It's not a k8s replacement. It's for the small dev team with no k8s experience. For people who might not use Docker Swarm because they see it's a pretty dead project. For people who think "everyone uses k8s, so we should, too."

I need to run on-prem, so managed k8s is not an option. Experts tell me I should have 2 FTEs to run k8s, which I don't have. k8s has so many components - how should I debug them in case of issues without k8s experience? k8s APIs change continuously - how should I manage that without k8s experience?

It's not a k8s replacement. But I do see a sweet spot for such a solution. We still run Docker Swarm on 5 servers, no hyperscalers, no API changes expected ;-)


mgaunard 5 hours ago | parent | prev [-]

Those are all sub-par cloud technologies which perform very badly and do not scale at all.

Some people would rather build their own solutions to do these things, with fine-grained control and the ability to handle workloads more complex than a shopping cart website.
