adamcharnock 7 hours ago

This is an industry we're[0] in. Owning is at one end of the spectrum, cloud is at the other, and there are broadly a couple of options in between:

1 - Cloud – This minimises cap-ex, hiring, and risk, while largely maximising operational costs (it's expensive) and cost variability (usage-based).

2 - Managed Private Cloud - What we do. Still minimal-to-no cap-ex, hiring, and risk, with a medium-sized operational cost (around 50% cheaper than AWS et al). We rent or colocate bare metal, manage it for you, handle software deployments, deploy only open-source, etc. Only really makes sense above €$5k/month spend.

3 - Rented Bare Metal – Let someone else handle the hardware financing for you. Still minimal cap-ex, but with greater hiring/skilling and risk. Around 90% cheaper than AWS et al (plus time).

4 - Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills, scale, cap-ex, and if you plan to run the servers for at least 3-5 years.

A good provider for option 3 is someone like Hetzner. Their internal ROI on server hardware seems to be around the 3-year mark, after which I assume the hardware is either still running with a client or goes into their server auction system.

Options 3 & 4 generally become more appealing either at scale, or when infrastructure is part of the core business. Option 1 is great for startups who want to spend very little initially, but then grow very quickly. Option 2 is pretty good for SMEs with baseline load, regular-sized business growth, and maybe an overworked DevOps team!

[0] https://lithus.eu, adam@

torginus 4 hours ago | parent | next [-]

I think the issue with this formulation is that what drives the cost at cloud providers isn't necessarily that their hardware is too expensive (which it is), but that they push you towards overcomplicated and inefficient architectures that cost too much to run.

At the core of this are all the 'managed' services - if you have a server box, it's in your financial interest to squeeze as much performance out of it as possible. If you're using something like ECS or serverless, AWS gains nothing by optimizing the servers to make your code run faster - their hard work results in less billed infrastructure hours.

This 'microservices' push usually means that instead of having an on-server session where you can serve stuff from a temporary cache, all the data that persists between requests needs to be stored in a db somewhere, all the auth logic needs to re-check your credentials, and something needs to direct the traffic and load balance these endpoints, and all this stuff costs money.

I think if you have 4 Java boxes as servers with a redundant DB with read replicas on EC2, your infra is so efficient and cheap that even paying 4x for it rather than going for colocation is well worth it because of the QoL and QoS.

These crazy AWS bills usually come from using every service under the sun.

bojangleslover 3 hours ago | parent | next [-]

The complexity is what gets you. One of AWS's favorite situations is

1) Senior engineer starts on AWS

2) Senior engineer leaves because our industry does not value longevity or loyalty at all whatsoever (not saying it should, just observing that it doesn't)

3) New engineer comes in and panics

4) Ends up using a "managed service" to relieve the panic

5) New engineer leaves

6) Second new engineer comes in and not only panics but outright needs help

7) Paired with some "certified AWS partner" who claims to help "reduce cost" but who actually gets a kickback from the extra spend they induce (usually 10% if I'm not mistaken)

Calling it ransomware is obviously hyperbolic, but there are definitely some parallels one could draw.

On top of it all, AWS pricing is about to massively go up due to the RAM price increase. There's no way it won't, since AWS is over half of Amazon's profit while only around 15% of its revenue.

coliveira 3 hours ago | parent | next [-]

The end result of all this is that the percentage of people who know how to implement systems without AWS/Azure will be a single digit. From that point on, this will be the only "economic" way, no matter what the prices are.

couscouspie 2 hours ago | parent [-]

That's not a factual statement about reality, but more of a normative judgement to justify resignation. Yes, professionals who know how to actually do these things are not abundantly available, but they are available enough to achieve the transition. The talent exists and is absolutely passionate about software freedom, and hence highly intrinsically motivated to work on it. The only thing that is lacking so far is demand; the available talent will skyrocket when the market starts demanding it.

eitally an hour ago | parent | next [-]

They actually are abundantly available and many are looking for work. The volume of "enterprise IT" sysadmin labor dwarfs that of the population of "big tech" employees and cloud architects.

organsnyder 28 minutes ago | parent [-]

I've worked with many "enterprise IT" sysadmins (in healthcare, specifically). Some are very proficient generalists, but most (in my experience) are fluent in only their specific platforms, no different than the typical AWS engineer.

toomuchtodo 13 minutes ago | parent [-]

Perhaps we need bootcamps for on prem stacks if we are concerned about a skills gap. This is no different imho from the trades skills shortage many developed countries face. The muscle must be flexed. Otherwise, you will be held captive by a provider "who does it all for you".

"Today, we are going to calculate the power requirements for this rack, rack the equipment, wire power and network up, and learn how to use PXE and iLO to get from zero to operational."

friendzis 2 hours ago | parent | prev | next [-]

> the available talent will skyrocket when the market starts demanding it

Part of what clouds are selling is experience. A "cloud admin" bootcamp graduate can be a useful "cloud engineer", but it takes some serious years of experience to become a talented on-prem SRE. So it becomes an ouroboros: moving towards clouds makes it easier to move to the clouds.

SahAssar 20 minutes ago | parent [-]

> A "cloud admin" bootcamp graduate can be a useful "cloud engineer"

That is not true. It takes a lot more than a bootcamp to be useful in this space, unless your definition is to copy-paste some CDK without knowing what it does.

bix6 2 hours ago | parent | prev [-]

> The only thing that is lacking so far is demand; the available talent will skyrocket when the market starts demanding it.

But will the market demand it? AWS just continues to grow.

bluGill 2 hours ago | parent [-]

Only time will tell. It depends on when someone with an MBA starts asking questions about cloud spending and runs the real numbers. People promoting self-hosting often are not counting all the costs of self-hosting (AWS has people working 24x7 so that if something fails, someone is there to take action).

infecto 3 hours ago | parent | prev [-]

It’s all anecdotal, but in my experience it’s usually the opposite. Bored senior engineer wants to use something new and picks a bespoke AWS service for a new project.

I am sure it happens in a multitude of ways, but I have never seen the case you are describing.

alpinisme 3 hours ago | parent [-]

I’ve seen your case more than the ransom scenario too. But also even more often: early-to-mid-career dev saw a cloud pattern trending online, heard it was a new “best practice,” and so needed to find a way to move their company to using it.

coredog64 41 minutes ago | parent | prev | next [-]

> If you're using something like ECS or serverless, AWS gains nothing by optimizing the servers to make your code run faster - their hard work results in less billed infrastructure hours.

If ECS is faster, then you're more satisfied with AWS and less likely to migrate. You're also open to additional services that might bring up the spend (e.g. ECS Container Insights or X-Ray)

Source: Former Amazon employee

mrweasel 3 hours ago | parent | prev | next [-]

Just this week a friend of mine was spinning up some AWS managed service, complaining about the complexity and how any reconfiguration took 45 minutes to reload. It's a service you can just install with apt; the default configuration is fine. Not only are many services no longer cheaper in the cloud, the management overhead also exceeds that of on-prem.

mystifyingpoi 3 hours ago | parent | next [-]

I'd gladly use (and maybe even pay for!) an open-source reimplementation of AWS RDS Aurora. All the bells and whistles with failover, clustering, volume-based snaps, cross-region replication, metrics etc.

As far as I know, nothing comes close to Aurora functionality. Even in the vibecoding world. No, 'apt-get install postgres' is not enough.

SOLAR_FIELDS 2 hours ago | parent | next [-]

Serverless v2 is one of the products that I was skeptical about, but it is genuinely one of the most robust solutions out there in that space. It has its warts, but I usually default to it for fresh installs because you get so much out of the box with it.

sgarland 43 minutes ago | parent | prev [-]

Nitpick (I blame Amazon for their horrible naming): Aurora and RDS are separate products.

What you’re asking for can mostly be pieced together, but no, it doesn’t exist as-is.

Failover: this has been a thing for a long time. Set up a synchronous standby, then add a monitoring job that checks heartbeats and promotes the standby when needed. Optionally use something like heartbeat to have a floating IP that gets swapped on failover, or handle routing with pgbouncer / pgcat etc. instead. Alternatively, use pg_auto_failover, which does all of this for you.
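
As a rough sketch of that heartbeat-and-promote job (assuming Postgres 12+ for pg_promote(), psycopg2, and placeholder hostnames/credentials; a real setup must also fence the old primary, which is exactly the sort of thing pg_auto_failover handles for you):

    import time
    import psycopg2

    PRIMARY_DSN = "host=db-primary dbname=postgres user=monitor connect_timeout=3"
    STANDBY_DSN = "host=db-standby dbname=postgres user=monitor connect_timeout=3"
    FAILURES_BEFORE_PROMOTE = 5

    def primary_alive() -> bool:
        try:
            with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
                cur.execute("SELECT 1")
                return cur.fetchone() == (1,)
        except psycopg2.Error:
            return False

    def promote_standby() -> None:
        # pg_promote() asks the standby to exit recovery and become the new primary.
        # The monitor role needs EXECUTE on pg_promote (superuser-only by default).
        with psycopg2.connect(STANDBY_DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT pg_promote()")

    failures = 0
    while True:
        failures = 0 if primary_alive() else failures + 1
        if failures >= FAILURES_BEFORE_PROMOTE:
            promote_standby()  # a real setup must also fence/stop the old primary
            break
        time.sleep(5)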

Clustering: you mean read replicas?

Volume-based snaps: assuming you mean CoW snapshots, that’s a filesystem implementation detail. Use ZFS (or btrfs, but I wouldn’t, personally). Or Ceph if you need a distributed storage solution, but I would definitely not try to run Ceph in prod unless you really, really know what you’re doing. Lightbits is another solution, but it isn’t free (as in beer).

Cross-region replication: this is just replication? It doesn’t matter where the other node[s] are, as long as they’re reachable, and you’ve accepted the tradeoffs of latency (synchronous standbys) or potential data loss (async standbys).

Metrics: Percona Monitoring & Management if you want a dedicated DB-first, all-in-one monitoring solution, otherwise set up your own scrapers and dashboards in whatever you’d like.

What you will not get from this is Aurora’s shared cluster volume. I personally think that’s a good thing, because I think separating compute from storage is a terrible tradeoff for performance, but YMMV. What that means is you need to manage disk utilization and capacity, as well as properly designing your failure domain. For example, if you have a synchronous standby, you may decide that you don’t care if a disk dies, so no messing with any kind of RAID (though you’d then miss out on ZFS’ auto-repair from bad checksums). As long as this aligns with your failure domain model, it’s fine - you might have separate physical disks, but co-locate the Postgres instances in a single physical server (…don’t), or you might require separate servers, or separate racks, or separate data centers, etc.

tl;dr you can fairly closely replicate the experience of Aurora, but you’ll need to know what you’re doing. And frankly, if you don’t, then even if someone built an OSS product that does all of this, you shouldn’t be running it in prod - how will you fix issues when they crop up?

vel0city 32 minutes ago | parent [-]

> you can fairly closely replicate the experience of Aurora

Nobody doubts one could build something similar to Aurora given enough budget, time, and skills.

But that's not replicating the experience of Aurora. The experience of Aurora is that I can have all of that in like 30 lines of Terraform and a few minutes. And then I don't need to worry about managing the zpools, I don't need to ensure the heartbeats are working fine, I don't need to worry about hardware failures (to a large extent), I don't need to drive to multiple different physical locations to set up the hardware, I don't need to worry about handling patching, etc.

You might replicate the features, but you're not replicating the experience.
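
To make the contrast concrete, here is roughly what that "few API calls" experience looks like - sketched with boto3 rather than the Terraform I mentioned, with made-up identifiers, password and instance class; a real setup also needs subnet groups, security groups and parameter groups:

    import boto3

    rds = boto3.client("rds", region_name="eu-west-1")

    # One encrypted Aurora PostgreSQL cluster with 7 days of automated backups.
    rds.create_db_cluster(
        DBClusterIdentifier="app-aurora",     # hypothetical name
        Engine="aurora-postgresql",
        MasterUsername="appadmin",
        MasterUserPassword="change-me",       # use Secrets Manager in practice
        BackupRetentionPeriod=7,
        StorageEncrypted=True,
    )

    # A writer and a reader instance; Aurora handles failover between them.
    for name in ("app-aurora-writer", "app-aurora-reader"):
        rds.create_db_instance(
            DBInstanceIdentifier=name,
            DBClusterIdentifier="app-aurora",
            Engine="aurora-postgresql",
            DBInstanceClass="db.r6g.large",
        )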

infecto 3 hours ago | parent | prev [-]

What managed service? Curious - I don’t use the full suite of AWS services, but I'm wondering what would take 45 minutes. Maybe it was a large cluster of some sort that needed rolling changes?

coliveira 2 hours ago | parent | next [-]

My observation is that all these services are exploding in complexity, and they justify it by saying that there are more features now, so everyone needs to accept spending more and more time and effort for the same results.

patrick451 2 hours ago | parent [-]

It's basically the same dynamic as hedonic adjustment in the CPI calculations. Cars may cost twice as much, but now they have USB chargers built in, so inflation isn't really that bad.

mrweasel 2 hours ago | parent | prev [-]

I think this was MWAA

jdmichal 24 minutes ago | parent | prev | next [-]

It's about fitting your utilization to the model that best serves you.

If you can keep 4 "Java boxes" fed with work 80%+ of the time, then sure EC2 is a good fit.

We do a lot of batch processing and save money over having EC2 boxes always on. Sure we could probably pinch some more pennies if we managed the EC2 box uptime and figured out mechanisms for load balancing the batches... But that's engineering time we just don't really care to spend when ECS nets us most of the savings advantage and is simple to reason about and use.

nthdesign 39 minutes ago | parent | prev | next [-]

Agreed. There is a wide price difference between running a managed AWS or Azure MySQL service and running MySQL on a VM that you spin up in AWS or Azure.

re-thc 3 hours ago | parent | prev [-]

> your infra is so efficient and cheap that even paying 4x for it rather than going for colocation is well worth it because of the QoL and QoS.

You don’t need colocation to save 4x, though. Bandwidth pricing is 10x. EC2 is 2-4x, especially outside the US. EBS, for the IOPS you get, is just bad.

bojangleslover 3 hours ago | parent | prev | next [-]

Great comment. I agree it's a spectrum, and for those of us who are comfortable with (4), like yourself and probably us at Carolina Cloud [0] as well, (4) seems like a no-brainer. But there's a long tail of semi-technical users who are more comfortable in 2-3 or even 1, which is what ultimately traps them into the ransomware-adjacent situation that is a lot of the modern public cloud. I would push back on "usage-based", though. Yes, it is technically usage-based, but the base fee also goes way up, and there are sometimes retainers on these services (i.e. minimum spend). So of course "usage-based" is not wrong, but what it usually means is "more expensive and potentially far more expensive".

[0] https://carolinacloud.io, derek@

spwa4 3 hours ago | parent [-]

The problem is that clouds have easily become 3 to 5 times the price of managed services, 10x the price of option 3, and 20x the price of option 4. To say nothing of the fact that almost all businesses could run fine on a "PC under the desk" type of setup.

So in practice cloud has become the more expensive option the second your spend goes over the price of 1 engineer.
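
A back-of-the-envelope version of that break-even claim, with every number an assumption to be replaced by your own:

    cloud_monthly = 20_000        # assumed current cloud bill, EUR/month
    bare_metal_ratio = 0.10       # "10x the price of option 3" from above
    engineer_yearly = 90_000      # assumed fully-loaded ops engineer, EUR/year

    bare_metal_monthly = cloud_monthly * bare_metal_ratio
    yearly_savings = (cloud_monthly - bare_metal_monthly) * 12

    print(f"Yearly savings moving off cloud: {yearly_savings:,.0f} EUR")
    print(f"Covers an extra engineer: {yearly_savings > engineer_yearly}")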

boplicity an hour ago | parent | prev | next [-]

I don't know. I rent a bare metal server for $500 a month, which is way overkill. It takes almost no time to manage -- maybe a few hours a year -- and can handle almost anything I throw at it. Maybe my needs are too simple though?

edge17 an hour ago | parent [-]

Just curious, what is the spec you pay $6000/year for? Where/what is the line between rent vs buy?

boplicity 43 minutes ago | parent [-]

It's a server with:

- 2x Intel Xeon 5218

- 128 GB RAM

- 2x 960 GB SSD

- 30 TB monthly bandwidth

I pay around an extra $200/month for "premium" support and Acronis backups, both of which have come in handy, but are probably not necessary. (Automated backups to AWS are actually pretty cheap.) It definitely helps with peace of mind, though.
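
For anyone curious what "automated backups to AWS" amounts to here: a minimal sketch, assuming boto3, an existing S3 bucket, and a dump file produced beforehand (bucket name and paths are placeholders), run from cron or a systemd timer:

    import datetime
    import boto3

    BUCKET = "example-server-backups"             # hypothetical bucket
    DUMP_PATH = "/var/backups/db-latest.sql.gz"   # produced by your dump job beforehand

    s3 = boto3.client("s3")
    key = f"nightly/{datetime.date.today():%Y-%m-%d}/db.sql.gz"
    s3.upload_file(DUMP_PATH, BUCKET, key)
    print(f"Uploaded {DUMP_PATH} to s3://{BUCKET}/{key}")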

Lucasoato 6 hours ago | parent | prev | next [-]

Hetzner is definitely an interesting option. I’m a bit scared of managing the services on my own (like Postgres, Site2Site VPN, …) but the price difference makes it so appealing. From our financial models, Hetzner can win over AWS when you spend over 10-15K per month on infrastructure and you’re hiring really well. It’s still a risk, but a risk that can definitely be worth it.

mrweasel 3 hours ago | parent | next [-]

> I’m a bit scared of managing the services on my own

I see it from the other direction: if something fails, I have complete access to everything, meaning that I have a chance of fixing it. That goes right down to the hardware. When I run stuff in the cloud, things get abstracted away, hidden behind APIs, and data lives beyond my reach.

Security and regular mistakes are much the same in the cloud, but I then have to layer whatever complications the cloud provider comes with on top. The cost has to be much, much lower if I'm going to trust a cloud provider over running something in my own data center.

adamcharnock 6 hours ago | parent | prev | next [-]

You sum it up very neatly. We've heard this from quite a few companies, and that's kind of why we started ours.

We figured, "Okay, if we can do this well, reliably, and de-risk it; then we can offer that as a service and just split the difference on the cost savings"

(plus we include engineering time proportional to cluster size, and also do the migration on our own dime as part of the de-risking)

wulfstan 4 hours ago | parent | prev | next [-]

I've just shifted my SWE infrastructure from AWS to Hetzner (literally in the last month). My current analysis looks like it will be about 15-20% of the cost - £240 vs 40-50 euros.

Expect a significant exit expense, though, especially if you are shifting large volumes of S3 data. That's been our biggest expense. I've moved this to Wasabi at about 8 euros a month (vs about $70-80 a month on S3), but I've paid transit fees of about $180 - and it was more expensive because I used DataSync.

Retrospectively, I should have just DIYed the transfer, but maybe others can benefit from my error...
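
For reference, the DIY version is not much code, since Wasabi speaks the S3 API too. A naive sketch, assuming boto3, placeholder bucket names/credentials, and an assumed Wasabi endpoint (check their docs for your region); large buckets would want parallel or multipart copies (or a tool like rclone), and AWS egress fees still apply:

    import boto3

    src = boto3.client("s3")  # AWS credentials from the usual env/config chain
    dst = boto3.client(
        "s3",
        endpoint_url="https://s3.eu-central-1.wasabisys.com",  # assumed region endpoint
        aws_access_key_id="WASABI_KEY",
        aws_secret_access_key="WASABI_SECRET",
    )

    SRC_BUCKET, DST_BUCKET = "my-aws-bucket", "my-wasabi-bucket"

    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SRC_BUCKET):
        for obj in page.get("Contents", []):
            body = src.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"]
            dst.upload_fileobj(body, DST_BUCKET, obj["Key"])
            print("copied", obj["Key"])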

adamcharnock 3 hours ago | parent | next [-]

FYI, AWS offers free egress when leaving them (because they were forced to by EU regulation, but they chose to offer it globally):

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-i...

But. Don't leave it until the last minute to talk to them about this. They don't make it easy, and require some warning (think months, IIRC)

wulfstan 2 hours ago | parent [-]

Extremely useful information - unfortunately I just assumed this didn't apply to me because I am in the UK and not the EU. Another mistake, though given it's not huge amounts of money I will chalk it up to experience.

Hopefully someone else will benefit from this helpful advice.

baby 5 hours ago | parent | prev | next [-]

I’m wondering if it makes sense to distribute your architecture so that workers who do most of the heavy lifting are in Hetzner, while the other stuff is in costly AWS. On the other hand, this means you don’t have easy access to S3, etc.

rockwotj 5 hours ago | parent [-]

Networking costs are so high in AWS that I doubt this makes sense.

iso1631 5 hours ago | parent | prev | next [-]

> I’m a bit scared of managing the services on my own (like Postgres, Site2Site VPN, …)

Out of interest, how old are you? This was quite a normal expectation of a technical department even 15 years ago.

christophilus 3 hours ago | parent | next [-]

I’m curious to know the answer, too. I used to deploy my software on-prem back in the day, and that always included an installation of Microsoft SQL Server. So, all of my clients had at least one database server they had to keep operational. Most of those clients didn’t have an IT staff at all, so if something went wrong (which was exceedingly rare), they’d call me and I’d walk them through diagnosing and fixing things, or I’d Remote Desktop into the server if their firewalls permitted and fix it myself. Backups were automated and would produce an alert if they failed to verify.

It’s not rocket science, especially when you’re talking about small amounts of data (small credit union systems in my example).

infecto 3 hours ago | parent | prev [-]

No, it was not. 15 years ago Heroku was all the rage. Even the places that had bare metal usually had someone running something similar to DevOps, and at least core infra was not being touched. I am sure such places existed, but 15 years ago, while far away, things were already pretty far along from what you describe. At least in SV.

acdha 2 hours ago | parent [-]

Heroku was popular with startups who didn’t have infrastructure skills but the price was high enough that anyone who wasn’t in that triangle of “lavish budget, small team, limited app diversity” wasn’t using it. Things like AWS IaaS were far more popular due to the lower cost and greater flexibility but even that was far from a majority service class.

infecto 2 hours ago | parent [-]

I am not sure if you are trying to refute my lived experience or what exactly the point is. Heroku was wildly popular with startups at the time, not just those with lavish budgets. I was already touching RDS at this point, and even before RDS came around, no organization I worked at had me jumping on bare metal to provision services myself. There was always a system in place where someone helped engineering deploy systems. I know this was not always the case, but the person I was responding to made it sound like 15 years ago all engineers were provisioning their own databases and doing other kinds of dev/sys ops on a regular basis. It’s not true, at least in SV.

objektif 3 hours ago | parent | prev [-]

No amount of money will make me maintain my own dbs. We tried it at first and it was a nightmare.

g8oz 2 hours ago | parent [-]

It's worth becoming good at.

eru 4 hours ago | parent | prev | next [-]

> 4 - Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills, scale, cap-ex, and if you plan to run the servers for at least 3-5 years.

Is it still the cheapest after you take into account that skills, scale, cap-ex and long term lock-in also have opportunity costs?

graemep 4 hours ago | parent [-]

That is why the second "if" is there.

You can get locked into cloud too.

The lock-in is not really long term, as it is easy to migrate off.

whiplash451 an hour ago | parent | prev | next [-]

> Option 1 is great for startups

Unfortunately, (successful) startups can quickly get trapped in this option. If they're growing fast, everyone on the board will ask why you'd move to another option in the first place. The cloud becomes a very deep local minimum that's hard to get out of.

weavie 6 hours ago | parent | prev | next [-]

What is the upper limit of Hetzner? Say you have an AWS bill in the $100s of millions, could Hetzner realistically take on that scale?

adamcharnock 5 hours ago | parent | next [-]

An interesting question, so time for some 100% speculation.

It sounds like they probably have revenue in the €500mm range today. And given that the bare metal cost of AWS-equivalent bills tends to be a 90% reduction, we'll say a €10mm+ bare metal cost.

So I would say a cautious and qualified "yes". But I know even for smaller deployments of tens or hundreds of servers, they'll ask you what the purpose is. If you say something like "blockchain," they're going to say, "Actually, we prefer not to have your business."

I get the strong impression that while they naturally do want business, they also aren't going to take a huge amount of risk on board themselves. Their specialism is optimising on cost, which naturally has to involve avoiding or mitigating risk. I'm sure there'd be business terms to discuss, put it that way.

StilesCrisis 3 hours ago | parent [-]

Why would a client who wants to run a blockchain be risky for Hetzner? I'm not a fan, I just don't see the issue. If the client pays their monthly bill, who cares if they're using the machine to mine for Bitcoin?

Symbiote 3 hours ago | parent [-]

They are certain to run the machines at 100% continually, which will cost more than a typical customer who doesn't do this, and leave the old machines with less second-hand value for their auction thing afterwards.

mbreese 2 hours ago | parent [-]

I’d bet the main reason would be power. Running machines at 100% doesn’t add much extra on its own, but a server running hard for 24 hours a day will use more power than a bursty workload.

(While we’re all speculating)

geocar 5 hours ago | parent | prev [-]

Who are you thinking of?

Netflix might be spending as much as $120m (but probably a little less), and I thought they were probably Amazon's biggest customer. Does someone (single-buyer) spend more than that with AWS?

Hetzner's revenue is somewhere around $400m, so probably a little scary taking on an additional 30% revenue from a single customer, and Netflix's shareholders would probably be worried about the risk of relying on a vendor that is much smaller than them.

Sometimes if the companies are friendly to the idea, they could form a joint venture, or maybe Netflix could just acquire Hetzner (and compete with Amazon?), but I think it unlikely Hetzner could take on a Netflix-sized customer, for nontechnical reasons.

However, increasing PoP capacity by 30% within 6 months is pretty realistic, so I think they'd probably be able to physically service Netflix without changing too much, if management could get comfortable with the idea.

phiresky 5 hours ago | parent | next [-]

A $120M spend on AWS is equivalent to around a $12M spend on Hetzner Dedicated (likely even less, the factor is 10-20x in my experience), so that would be 3% of their revenue from a single customer.

direwolf20 5 hours ago | parent | prev | next [-]

That $120m will become $12m when they're not using AWS.

Quarrel 3 hours ago | parent | prev | next [-]

> Hetzner's revenue is somewhere around $400m, so probably a little scary taking on an additional 30% revenue from a single customer

A little scary for both sides.

Unless we're misunderstanding something I think the $100Ms figure is hard to consider in a vacuum.

objektif 2 hours ago | parent | prev [-]

Figma apparently spends around 300-400k/day on AWS. I think this puts them up there.

mbreese 2 hours ago | parent [-]

How is this reasonable? At what point do they pull a Dropbox and de-AWS? I can’t think of what they would gain with AWS over in-house hosting at that point.

I’m not surprised, but you’d think there would be some point where they would decide to build a data center of their own. It’s a mature enough company.

mgaunard 6 hours ago | parent | prev | next [-]

You're missing 5, which is what they are doing.

There is a world of difference between renting some cabinets in an Equinix datacenter and operating your own.

adamcharnock 6 hours ago | parent [-]

Fair point!

5 - Datacenter (DC) - Like 4, except also take control of the space/power/HVAC/transit/security side of the equation. Makes sense either at scale, or if you have specific needs. Specific needs could be: specific location, reliability (higher or lower than a DC), resilience (conflict planning).

There are actually some really interesting use cases here. For example, reliability: if your company is in a physical office, how strong is the need to run your internal systems in a data centre? If you run your servers in your office, then there are no connectivity reliability concerns. If the power goes out, then the power is out to your staff's computers anyway (still get a UPS though).

Or perhaps you don't need as high reliability if you're doing only batch workloads? Do you need to pay the premium for redundant network connections and power supplies?

If you want your company to still function in the event of some kind of military conflict, do you really want to rely on fibre optic lines between your office and the data center? Do you want to keep all your infrastructure in such a high-value target?

I think this is one of the more interesting areas to think about, at least for me!

jermaustin1 2 hours ago | parent | next [-]

When I worked IT for a school district at the beginning of my career (2006-2007), I was blown away that every school had a MASSIVE server room (my office at each school - the MDF). There were 3-5 racks filled (depending on school size and connection speed to the central DC - data closet): 50-75% was networking equipment (5 PCs per class, hardwired), 10% was the Novell NetWare server(s) and storage, and the other 15% was application storage for app distributions on login.

mgaunard 6 hours ago | parent | prev | next [-]

Personally I haven't seen a scenario where it makes sense beyond a small experimental lab where you value the ability to tinker physically with the hardware regularly.

Offices are usually very expensive real estate in city centers, with very limited cooling capabilities.

Then again the US is a different place, they don't have cities like in Europe (bar NYC).

kryptiskt 2 hours ago | parent [-]

If you are a bank or a bookmaker or similar you may well want to have total control of physical access to the machines. I know one bookmaker I worked with had their own mini-datacenter, mainly because of physical security.

tomcam 2 hours ago | parent [-]

I am pretty forward-thinking but even when I started writing my first web server 30+ years ago I didn’t foresee the day when the phrase “my bookie’s datacenter” might cross my lips.

direwolf20 5 hours ago | parent | prev | next [-]

If you have less than a rack of hardware, if you have physical security requirements, and/or your hardware is used in the office more than from the internet, it can make sense.

noosphr 6 hours ago | parent | prev [-]

5 was a great option for ML work last year, since rented colo didn't come with a 10 kW cable. With RAM, SSD and GPU prices the way they are now, I have no idea what you'd need to do.

Thank goodness we did all the cap-ex before the OpenAI RAM deal, when expensive Nvidia GPUs were the worst we had to deal with.

Schlagbohrer 5 hours ago | parent | prev | next [-]

Can someone explain 2 to me? How is a managed private cloud different from full cloud? Like, you are still using AWS or Azure, but you are keeping all your operations in a bundled, portable way, so you can leave that provider easily at any time, rather than becoming very dependent on them? Is it like staying provider-agnostic but still cloud-based?

adamcharnock 4 hours ago | parent | next [-]

To put it plainly: We deploy a Kubernetes cluster on Hetzner dedicated servers and become your DevOps team (or a part thereof).

It works because bare metal is about 10% the cost of cloud, and our value-add is in 1) creating a resilient platform on top of that, 2) supporting it, 3) being on-call, and 4) being or supporting your DevOps team.

This starts with us providing a Kubernetes cluster which we manage, but we also take responsibility for the services run on it. If you want Postgres, Redis, Clickhouse, NATS, etc, we'll deploy it and be SLA-on-call for any issues.

If you don't want to deal with Kubernetes then you don't have to. Just have your software engineers hand us the software and we'll handle deployment.

Everything is deployed on open-source tooling, and you have access to all the configuration for the services we deploy. You have server root access. If you want to leave, you can.

Our customers have full root access, and our engineers (myself included) are in a Slack channel with your engineers.

And, FWIW, it doesn't have to be Hetzner. We can colocate or use other providers, but Hetzner offer excellent bang-per-buck.

Edit: And all this is included in the cluster price, which comes out cheaper than the same hardware on the major cloud providers

mancerayder 2 hours ago | parent | next [-]

You give customers root but you're on call when something goes tits up?

You're a brave DevOps team. That would cause a lot of friction in my experience, since people with root or other administrative privileges do naughty things, but others are getting called in on Saturday afternoon.

belthesar an hour ago | parent [-]

From a platform risk perspective, each tenant has dedicated resources, so it's their platform to blow up. If a customer with root access blows up their own system, then the resources from the MSP to fix it are billable, and the after-action meetings would likely include a review of whether that access is appropriate, whether additional training is needed to prevent those issues in the future (also billable), or whether the customer-provider relationship is the right fit.

Will the on-call resource be having a bad time fixing someone else's screw-up? Yeah, and having been that guy before, I empathize. The business can and should manage this relationship, however, so that it doesn't become an undue burden on their support teams. A customer platform that is always getting broken at 4pm on a Friday, because an overzealous customer admin went in and decided to run arbitrary kubectl commands, takes support capacity away from other customers when a major incident happens, regardless of how much you're making in support billing.

victorbjorklund 4 hours ago | parent | prev [-]

Instead of using the cloud's own Kubernetes service, for example, you just buy the compute and run your own Kubernetes cluster. At a certain scale that is going to be cheaper, if you have the know-how. And since you are no longer tied to which services are provided and just need access to compute and storage, you can also shop around for better prices than Amazon or Azure, since you can really go to any VPS provider.

Archelaos 3 hours ago | parent | prev | next [-]

I am using something in between 2 and 3: a hosted website and database service with excellent customer support. On shared hardware it is 22 €/month. A managed server on dedicated hardware starts at about 50 €/month.

CrzyLngPwd 5 hours ago | parent | prev | next [-]

#2.5ish

We rent hardware and also some VPS, as well as use AWS for cheap things such as S3 fronted with Cloudflare, and SES for priority emails.

We have other services we pay for, such as AI content detection, disposable email detection, a small postal email server, and more.

We're only a small business, so having predictable monthly costs is vital.

Our servers are far from maxed out, and we process ~4 million dynamic page and API requests per day.

jgalt212 an hour ago | parent | prev | next [-]

We looked at option 4. And colocation is not cheap. It was cheaper for us to lease VMs from Hetzner than to buy boxes and colocate at Equinix.

preisschild 6 hours ago | parent | prev | next [-]

Been using Hetzner Cloud for Kubernetes and generally like it, but it has its limitations. The network is highly unpredictable. At best you get 2 Gbit/s, but at worst a few hundred Mbit/s.

https://docs.hetzner.com/cloud/technical-details/faq/#what-k...

victorbjorklund 4 hours ago | parent [-]

Is that for the virtual private network? I heard some people say that you actually get higher bandwidth if you're using the public network instead of the private network within Hetzner, which is a little bit crazy.

direwolf20 4 hours ago | parent [-]

Hetzner dedicated is pretty bad at private networks, so bad you should use a VPN instead. Don't know about the cloud side of things.

DyslexicAtheist 6 hours ago | parent | prev | next [-]

This is what we did in the '90s into the mid-2000s:

> Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills

Back then this type of "skill" was abundant. You could easily get sysadmin contractors who would drive down to the data center (probably rented facilities in a building that belonged to a bank or an insurer) to swap some disks that had died for some reason. Such a person was full-stack in the sense that they covered backups, networking and firewalls, and knew how to source hardware.

The argument was that this was too expensive and the cloud was better. So hundreds of thousands of SMEs embraced the cloud - most of them never needed Google-type scale, but got sucked into the "recurring revenue" grift that is SaaS.

If you opposed this mentality you were basically saying "we as a company will never scale this much" which was at best "toxic" and at worst "career-ending".

The thing is these ancient skills still exist. And most orgs simply do not need AWS type of scale. European orgs would do well to revisit these basic ideas. And Hetzner or Lithus would be a much more natural (and honest) fit for these companies.

belorn 6 hours ago | parent | next [-]

I wonder how much companies pay yearly in order to avoid having an employee pick up a drive from a local store, drive to the data center, pull the disk drive, unscrew the failing hard drive and put in the new one, add it to the RAID, verify the repair process has started, and then return to the office.

Symbiote 5 hours ago | parent | next [-]

I don't think I've ever seen a non-hot-swap disk in a normal server. The oldest I dealt with had 16 HDDs per server, and only 12 were accessible from the outside, but the 4 internal ones were still hot-swap after taking the cover off.

Even some really old (2000s-era) junk I found in a cupboard at work was all hot-swap drives.

But more realistically in this case, you tell the data centre "remote hands" person that a new HDD will arrive next-day from Dell, and it's to go in server XYZ in rack V-U at drive position T. This may well be a free service, assuming normal failure rates.

belorn 2 hours ago | parent [-]

Yes, I did write that a bit hastily. I changed the above to the normal process. As it happened, we just installed a server without hot-swap disks, but to be fair that is the first one I have personally seen in the last 20 years.

Remote hands is indeed a thing. Servers also tend to be mostly pre-built nowadays by server retailers, even when buying more custom-made ones like Supermicro where you pick each component. There aren't that many parts to a generic server purchase: it's a chassis, motherboard, CPU, memory, and disks. The PSU tends to be determined by the motherboard/chassis choice, same with disk backplanes/RAID/IPMI/network/cables/ventilation/shrouds. The biggest work is in making the correct purchase, not in the assembly. Once delivered, you put on the rails, install any additional items not pre-built, put it in the rack and plug in the cables.

amluto 5 hours ago | parent | prev [-]

In the Bay Area there are little datacenters that will happily colocate a rack for you and will even provide an engineer who can swap disks. The service is called “remote hands”. It may still be faster to drive over.

theodric 6 hours ago | parent | prev [-]

> ancient skills https://youtu.be/ZtYU87QNjPw?&t=10

It baffles me that my career trajectory somehow managed to insulate me from ever having to deal with the cloud, while such esoteric skills as swapping a hot swap disk or racking and cabling a new blade chassis are apparently on the order of finding a COBOL developer now. Really?

I can promise you that large financial institutions still have datacenters. Many, many, many datacenters!

direwolf20 5 hours ago | parent [-]

We had two racks in our office of mostly developers. If you have an office, you already have a rack for switches and patch panels. Adding a few servers is obvious.

Software development isn't a typical SME however. Mike's Fish and Chips will not buy a server and that's fine.

bpavuk 6 hours ago | parent | prev [-]

If someone on the DevOps team knows Nix, option 3 becomes a lot cheaper time-wise! Yeah, Nix flakes still need maintenance, especially on the `nixos-unstable` branch, but you get the quickest disaster-recovery route possible!

Plus, infra flexibility removes random constraints that e.g. Cloudflare Workers have.

slyall 6 hours ago | parent | next [-]

There are a bunch of ways to manage bare-metal servers apart from Nix. People have been doing it for years: Kickstart, The Foreman, MAAS, etc. [0]. There are many to choose from according to your needs and the layers you want them to manage.

The reality is that these days you just boot a basic image that runs containers.

[0] Longer list here: https://github.com/alexellis/awesome-baremetal

adamcharnock 6 hours ago | parent | prev | next [-]

Indeed! We've yet to go down this route, but it's something we're thinking on. A friend and I have been talking about how to bring Nix-like constructs to Kubernetes as well, which has been interesting. (https://github.com/clotodex/kix, very much in the "this is fun to think about" phase)

aequitas 6 hours ago | parent | prev | next [-]

This is what we do, I gave a talk about our setup earlier this week at CfgMgmtCamp: https://www.youtube.com/watch?v=DBxkVVrN0mA&t=8457s

muvlon 6 hours ago | parent | prev | next [-]

Option 4 as well; that's how we do it at work and it's been great. However, it can't really be "someone on the team knows Nix" - anyone working on ops will need Nix skills in order to be effective.

preisschild 6 hours ago | parent | prev [-]

I'm a NixOS fan, but I've been using Talos Linux on Hetzner nodes (using Cluster API) to form a Kubernetes cluster. Works great too!