KronisLV 13 hours ago

> I use Linode or DigitalOcean. Pay no more than $5 to $10 a month. 1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing.

If you get one dedicated server for multiple separate projects, you can still keep the costs down but relax those constraints.

For example, look at the Hetzner server auction: https://www.hetzner.com/sb/

I pay about 40 EUR a month for this:

  Disk: 736G / 7.3T (11%)
  CPU: Intel Core i7-7700 @ 8x 4.2GHz [42.0°C]
  RAM: 18004MiB / 64088MiB

I put Proxmox on it and can run as many VMs as the IO pressure of the OSes will permit: https://www.proxmox.com/en/ (I cared mostly about storage, so I got HDDs in RAID 0; others might just get a server with SSDs.)

You could have 15 VMs, each with 4 GB of RAM, and it would still come out to around 2.66 EUR per month per VM. It's just way more cost-efficient at any sort of scale (number of projects) compared to regular VPSes, and as long as you don't put any trash on it, Proxmox itself is fairly stable, aside from being a single point of failure.

Of course, with refurbished gear you'd want backups, but you really need those anyway.

Aside from that, Hetzner and Contabo (opinions vary about that one though) are going to be more affordable even when it comes to regular VPS hosting. I think Scaleway also had those small Stardust instances if you want something really cheap, but they go out of stock pretty quickly as well.

nchmy 6 hours ago | parent | next [-]

Agreed. Though, now that Hetzner has increased pricing, OVH is quite competitively priced and has some newer hardware available.

doubleorseven 2 hours ago | parent [-]

Every time I want to put something in my dishwasher, I pray to God it's not full and clean. Same with OVH, prayer-wise.

utopiah 7 hours ago | parent | prev | next [-]

Why VMs over containers?

KronisLV 4 hours ago | parent [-]

Mostly to have stronger separation; I'm sure the person who prefers VM-per-project also has their own reasons.

I just have a few large VMs, each a different environment, with slightly different ways of treating them: the prod ones get more due diligence and care, whereas in the dev ones (including where I host Gitea, Woodpecker CI, Nextcloud, Kanboard, Uptime Kuma, etc.) I mess around with the configuration and do restarts more often. I personally used to run a Docker Swarm cluster, but now I just use Docker Compose with Ansible directly, still with multiple stacks per server. Dead simple.

So my setup ended up being:

  * VPS / VMs - an environment, since I don't really need replication/distributed systems at my scale
  * container stack (Compose/Swarm) - a project with all its dependencies, though ingress is a shared web server container per environment
  * single container - the applications I build; my own are built on top of a common Ubuntu LTS base more often than not, while external ones (like Nextcloud and, tbh, most DBs) are just run directly

Works very well, plus containers allow me to easily have consistent configuration management, networking, resource limits and persistent storage.

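The Compose-with-Ansible flow described above can be sketched with ad-hoc Ansible commands. The `vms` inventory group, the `stacks/myapp` path and the compose file name are all made-up placeholders, and a real setup would more likely use a playbook:

```shell
# Hypothetical sketch: copy one project's Compose stack to every VM in the
# "vms" inventory group, then bring it up. All names/paths are placeholders.
ansible vms -i inventory.ini -m copy \
  -a "src=stacks/myapp/ dest=/opt/stacks/myapp/"
ansible vms -i inventory.ini -m shell \
  -a "docker compose -f /opt/stacks/myapp/compose.yml up -d"
```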
compounding_it 11 hours ago | parent | prev | next [-]

What do you do about IPv4? Do you also use a routing VM to manage all that?

It’s very interesting how people rent large servers and run a hypervisor on them. I’m wondering whether VPS licenses have any clauses preventing this at commercial scale.

mbesto 4 hours ago | parent | next [-]

Why not just Nginx Proxy Manager? It solves both the proxy issue and TLS/SSL.

https://nginxproxymanager.com/

deniska 6 hours ago | parent | prev | next [-]

I help my dad run a Proxmox setup on a server he got from a local Craigslist analog and colocated in a datacenter. It only uses a single public IP. All VMs are in a "virtual intranet", and the host itself acts like a router (handing out local IP addresses to VMs via dnsmasq, routing VM internet access via NAT, and forwarding specific outside ports to specific VMs). For example, ports 80 and 443 are forwarded to a dedicated "nginx VM", which then routes each request to a specific VM depending on the hostname.
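A rough sketch of that single-IP routing, assuming iptables on the host; the subnet, interface name and the nginx VM's address are made up, and an actual setup might use nftables or Proxmox's own masquerading instead:

```shell
# Assumed layout: the host's public NIC is eth0, VMs live on 10.0.0.0/24,
# and the "nginx vm" sits at 10.0.0.2 (dnsmasq on the host hands out the
# 10.0.0.x leases). Must be run as root.

echo 1 > /proc/sys/net/ipv4/ip_forward   # let the host route between interfaces

# Give VMs internet access by NATing their outbound traffic through eth0.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# Forward HTTP/HTTPS from the single public IP to the nginx VM, which then
# proxies by hostname to the other VMs.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.0.0.2:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
```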

KronisLV 10 hours ago | parent | prev | next [-]

Hetzner has some docs: https://docs.hetzner.com/robot/dedicated-server/ip/additiona...

Since I only needed about 3 VMs (though each is a bit beefier, running containers, with a web server sitting in front of those as ingress using vhosts), I could give each VM its own IPv4 address, and it didn’t end up being too expensive for my use case. It would be a bit different for someone who wants many small VMs.

hkpack 9 hours ago | parent | prev [-]

There are security benefits to not having public IPs on every VM.

I assign a few VMs public IPs and use them as ingress / SSL termination / load balancers for my workloads running on VMs with only internal IPs.

I personally use KVM with libvirt and manage all of this with Ansible.

DeathArrow 3 hours ago | parent | prev [-]

Wouldn't it be easier and more efficient to just run Docker containers?

sbarre 3 hours ago | parent [-]

It depends on what you're doing. Proxmox gives you the flexibility to figure it out as you go.

If you have a plan from the start and you know what you'll need and you're pretty confident it won't change, then sure.

If you want a box that you can slice and dice however you want (VMs, containers, etc) then something like Proxmox might be worth it.