jiggawatts 3 days ago

> Then just roll normal actual long lived VMs the way we've done it for the past 15 years.

This is easy to say if your wallet exploded because it had too much money in it, and if you don't care about the speed of operations.

Just today I'm investigating hosting options for migrating a legacy web farm with about 200 distinct apps to the cloud.

If I put everything into one VM image, then patching/upgrades and any system-wide setting changes become terrifying. The VM image build itself takes hours because this is 40 GB of apps, dependencies, frameworks, etc... There is just no way to "iterate fast" on a build script like that. Packer doesn't help.

Not to mention that giant VM images are incompatible with per-app DevOps deployment automation. How does developer 'A' roll back their app in a hurry while developer 'B' is busy rolling theirs out?

Okay, sure, let's split this into an image-per-app and a VM scale set per app. No more conflicts, each developer gets their own pipeline and image!

But now the minimum cost of an app is 4x VMs because you need 2x in prod, 1x each in test and development (or whatever). I have 200 apps, so... 800 VMs. With some apps needing a bit of scale-out, let's round this up to 1,000 VMs. In public clouds you can't really go below $50/VM/mo so that's an eye-watering $50,000 per month to replace half a dozen VMs that were "too cheap to meter" on VMware!

Wouldn't it be nicer if we could "somehow" run nested VMs with a shared base VM disk image that was thin-cloned so that only the app-specific differences need to be kept? Better yet, script the builds somehow to utilise VM snapshots so that developers can iterate fast on app-specific build steps without having to wait 30 minutes for the base platform build steps each time they change something.
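Roughly, at the hypervisor level (image names, sizes, and paths here are made up, assuming qcow2), that idea is just backing files plus snapshots:

    # build the shared platform image once (the slow part)
    qemu-img create -f qcow2 base-platform.qcow2 40G
    # ...run the common platform build steps against it...

    # each app gets a thin copy-on-write clone that stores only its own differences
    qemu-img create -f qcow2 -b base-platform.qcow2 -F qcow2 app-foo.qcow2

    # checkpoint after the shared steps so app-specific steps can be retried in minutes
    qemu-img snapshot -c after-platform app-foo.qcow2
    qemu-img snapshot -a after-platform app-foo.qcow2   # revert and iterate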

Uh-oh, we've reinvented Docker!

charcircuit 3 days ago | parent | next [-]

When deploying to a VM you don't need to build a new image. If set up right, you can just copy the updated files over and then trigger a reload or restart of the service. Different teams' services live in different directories and don't conflict.
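For example (service name, host, and paths are placeholders), a deploy can be as small as:

    # push only the files that changed for this team's app
    rsync -az ./build/ deploy@web-01:/srv/apps/foo/

    # pick up the new code
    ssh deploy@web-01 'sudo systemctl restart foo.service'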

twunde 3 days ago | parent | next [-]

This is much more viable than it was in the past thanks to the advent and adoption of nvm, pyenv, etc., but the limiting factor becomes system dependencies. The classic example from yesteryear was upgrading OpenSSL: inevitably you'll find that some dependency silently auto-updates a system dependency, or requires a newer version that in turn requires upgrading the OS.

DiabloD3 3 days ago | parent [-]

So why are you using a Linux that forces that on you?

Sane people use Debian; Debian packages are compatible with the Debian release they come from. I do not have to worry about accidentally installing an incompatible deb; even if I try to, apt won't let me install a package whose dependencies cannot be satisfied because they're too new (and thus not in my release's package repo).

I know other distros have problems with release management, but this is why I've used Debian for the past 20 years and will continue to use Debian.

curt15 2 days ago | parent [-]

>Debian packages are compatible with the Debian release they are from.

That won't save you if an application requires newer tooling or libraries than whatever is in Debian stable. Once that happens the application needs to bundle its dependencies. But that is precisely why people use containers.

DiabloD3 2 days ago | parent [-]

You say container, but if you're at AWS, for example, and you're using any API of theirs (the original deprecated EC2 API, the Docker API, or the Kubernetes API), it's all a real, honest-to-god VM underneath, provided by Firecracker.

In other words, the technology I have used non-stop since it was mainlined in 2008 is the technology all of you use today. I can deploy a VM, you can deploy a VM; the only difference is the API we use to do it, but they're all VMs.

And yes, my VMs are not required to be Debian Stable; I can deploy whatever version is needed. That is why we have VMs: so we do not need to dedicate entire machines to a single application with unusual dependencies. Big companies like Amazon, Google, and Microsoft eventually realized the wisdom sysadmins like me have always known: even the damned kernel is a dependency that should be tracked, it sometimes matters, and the only way to deploy kernels on a per-application basis is with VMs.
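To be concrete, the per-application kernel really is just a field you set per microVM. Something like this against the Firecracker API socket (socket, kernel, and rootfs paths are placeholders):

    # each microVM boots the kernel you give it, not the host's
    curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
        -H 'Content-Type: application/json' \
        -d '{"kernel_image_path": "./vmlinux-6.1", "boot_args": "console=ttyS0 reboot=k panic=1"}'

    # attach the app's root filesystem and start the VM
    curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/rootfs' \
        -H 'Content-Type: application/json' \
        -d '{"drive_id": "rootfs", "path_on_host": "./app-foo.ext4", "is_root_device": true, "is_read_only": false}'
    curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
        -H 'Content-Type: application/json' \
        -d '{"action_type": "InstanceStart"}'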

Something container jockeys will never understand: containers (OCI workloads running on the kernel's namespace and cgroup machinery) give you multiple discrete userlands, but one shared kernel. You do not have a hypervisor, and you do not get a kernel of your own.

Docker, real name-brand Docker, is an OCI consumer. Using something that merely implements a compatible API is not Docker; it's a compatibility shim.

jiggawatts 3 days ago | parent | prev [-]

Let's say you have image "ProdWebAppFoo-2025-08-01" and you used this to deploy three VMs in a scale set or whatever.

Then a developer deploys their "loose files" on top of it a couple of times, so now you have the image plus god-knows-what.

The VM scale set scales out.

What version of the app is running on which instance?

Answer: Mixed versions.

charcircuit 3 days ago | parent [-]

>so now you have the image plus god-knows-what.

Rsync is smart enough to figure out how to get parity.
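For example (placeholder paths), something like

    # compare by checksum and delete strays so the instance matches the release exactly
    rsync -ac --delete release/2025-08-01/ deploy@web-03:/srv/apps/foo/

brings an instance back to parity with a known release, whatever state it drifted into.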

>What version of the app is running on which instance?

There is always going to be a time during a rollout where there are mixed versions running.

jiggawatts 3 days ago | parent [-]

Sure, rolling forward you expect mixed versions.

Nobody expects a month-old version to occasionally turn up and then mysteriously disappear when some rsync job catches up.

DiabloD3 3 days ago | parent | prev [-]

Why would you be wasting money going into the cloud? The cloud is not appropriate for small-time users; it will always be cheaper to go with dedicated or semi-dedicated servers and build the infrastructure you need.

saagarjha 3 days ago | parent | next [-]

The cloud is perfectly appropriate for a lot of use cases. To be honest, it seems like you are really bad at making decisions for organizations (but really good at being a commenter on Hacker News!)

DiabloD3 2 days ago | parent [-]

Funny, but no. I've merely watched smaller companies evaporate because they fell for the cloud trap. I wish I could have helped them, but they were very gung-ho about the cloud being the future. Many of them I only discovered because their goodbye message got linked here on HN; they imploded not because they failed to launch or failed to find a following, but because they couldn't overcome AWS eating all their profits.

saagarjha 2 days ago | parent [-]

Yes, but plenty of companies are also successful using cloud services.

nope1000 2 days ago | parent | prev [-]

I'm actually thinking the opposite: if you are a small company, the cloud makes sense, and once you grow big it makes sense to build your own infra. For example, my company of 10 people does B2B SaaS, and we couldn't do that if we hosted everything ourselves. We would need people with the skills to set something like this up, develop physical security concepts, handle backup duplication, disaster recovery, etc. We would spend more time working on the infra than on the actual product we are selling.