| ▲ | speedgoose 7 hours ago |
| I would suggest using both on-premise hardware and cloud computing, which is probably what comma is doing. For critical infrastructure, I would rather pay a competent cloud provider than be responsible for reliability issues myself. Maintaining one server room at headquarters is one thing, but two server rooms in different locations, with resilient power and network, is a bit too much effort IMHO. For running many Slurm jobs on good servers, cloud computing is very expensive, and buying your own hardware can pay for itself in a matter of months. And who cares if the server room is a total loss after a while; worst case, you write some more YAML and Terraform and deploy a temporary replacement in the cloud. Another option in between is colocation, where you put hardware you own in a managed data center. It's a bit old fashioned, but it may make sense in some cases. I can also mention that research HPCs may be worth considering. In research, we have some of the world's fastest computers at a fraction of the cost of cloud computing. It's great as long as you don't mind not being root and having to use Slurm. I don't know about the USA, but in Norway you can run your private company's Slurm AI workloads on research HPCs, though you will pay quite a bit more than universities and research institutions do. You can also have research projects together with universities or research institutions, and everyone will be happy if your business benefits a lot from the collaboration. |
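To put the "pay for itself in a matter of months" claim in concrete terms, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder for illustration, not a figure from the thread or from any provider:

```python
# Break-even estimate: owning a GPU node vs. renting the equivalent in the cloud.
# All prices below are hypothetical assumptions; substitute your own quotes.

CLOUD_PRICE_PER_GPU_HOUR = 2.50    # assumed on-demand price per GPU (USD)
GPUS_PER_NODE = 8
UTILISATION = 0.8                  # fraction of the month the node is actually busy

NODE_CAPEX = 90_000                # assumed purchase price of one 8-GPU node
NODE_OPEX_PER_MONTH = 1_500        # assumed power, rack space, remote hands

HOURS_PER_MONTH = 730

cloud_per_month = CLOUD_PRICE_PER_GPU_HOUR * GPUS_PER_NODE * UTILISATION * HOURS_PER_MONTH
monthly_saving = cloud_per_month - NODE_OPEX_PER_MONTH
break_even_months = NODE_CAPEX / monthly_saving

print(f"equivalent cloud spend: ${cloud_per_month:,.0f}/month")
print(f"break-even after:       {break_even_months:.1f} months")
```

With these made-up figures a well-utilised node pays for itself in under a year; bursty or low-utilisation workloads shift the math back towards the cloud or shared HPC allocations.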
|
| ▲ | epolanski 6 hours ago | parent | next [-] |
| > but two server rooms in different locations, with resilient power and network, is a bit too much effort IMHO I worked at a company with two server farms in Italy (essentially a main one and a backup one), located in two different regions, and we had a total of 5 employees taking care of them. We never heard from them and didn't even know their names, but we had almost 100% uptime and terrific performance. There was a single person out of 40 developers whose main responsibility was deploys, and that's it. It cost my company 800k euros per year to run both server farms (hardware, salaries, energy), and it spared the company around 7-8M in cloud costs. Now I work for clients that spend multiple millions on cloud for a fraction of the output and traffic, and employ, I think, around 15+ DevOps engineers. |
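A quick sanity check of the figures quoted above, taking them at face value (7.5M is simply the midpoint of the quoted 7-8M cloud estimate):

```python
# Figures from the comment above, EUR per year.
on_prem_total = 800_000        # hardware, salaries, energy for both server farms
cloud_equivalent = 7_500_000   # midpoint of the quoted 7-8M avoided cloud spend

saving = cloud_equivalent - on_prem_total
ratio = cloud_equivalent / on_prem_total
print(f"annual saving: {saving:,.0f} EUR (roughly {ratio:.0f}x cheaper on-prem)")
```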
|
| ▲ | olavgg 7 hours ago | parent | prev | next [-] |
| > I would rather pay a competent cloud provider than be responsible for reliability issues myself. Why do so many developers and sysadmins think they're not competent to host services? It is a lot easier than you think, and it's also fun to solve the technical issues you run into. |
| |
| ▲ | pageandrew 7 hours ago | parent | next [-] | | The point was about redundancy / geo spread / HA. It’s significantly more difficult to operate two physical sites than one. You can only be in one place at a time. If you want true reliability, you need redundant physical locations, power, networking. That’s extremely easy to achieve on cloud providers. | | |
| ▲ | PunchyHamster 7 hours ago | parent | next [-] | | You can just rent rack space in a datacenter and have that covered. It's still much cheaper than running it in the cloud. It doesn't make sense if you only have a few servers, but if you are renting the equivalent of multiple racks of servers from the cloud and running them for most of the day, on-prem is staggeringly cheaper. We have a few racks and we do a "move to cloud" calculation every few years, and without fail the cloud comes out at least 3x the cost. And before the "but you need to do more work" whining I hear from people who have never done it: it's not much more work than navigating the forest of cloud APIs and dealing with random black-box issues in the cloud that you can't really debug, only work around. |
| ▲ | direwolf20 5 hours ago | parent | prev | next [-] | | How often does your single site actually go down? In the cloud, it's out of your control when an AZ goes down. When it's your own server, you can do things to increase reliability. Most colos have redundant power feeds and internet. On-prem that's a bit harder, but you can buy a UPS. If your head office is hit by a meteor, your business is over anyway; you don't need to prepare for that. |
| ▲ | account42 7 hours ago | parent | prev [-] | | You don't need full "cloud" providers for that, colocation is a thing. | | |
| |
| ▲ | tomcam an hour ago | parent | prev | next [-] | | Because when I’m running a busy site and I can’t figure out what went wrong, I freak out. I don’t know whether the problem will take 2 hours or 2 days to diagnose. | | |
| ▲ | MaKey an hour ago | parent [-] | | Usually you can figure out what went wrong pretty quickly. Freaking out doesn't help with the "quickly" part though. |
| |
| ▲ | jim180 7 hours ago | parent | prev | next [-] | | I'd also add this question: why do so many developers and sysadmins think that cloud companies always hire competent/non-lazy/non-pissed-off employees? |
| ▲ | faust201 6 hours ago | parent | prev | next [-] | | > Why do so many developers and sysadmins think they're not competent to host services? It is a lot easier than you think, and it's also fun to solve the technical issues you run into. It is a different skillset. SRE is also under-valued and under-paid (unless one is at a FAANG). | |
| ▲ | clickety_clack 2 hours ago | parent [-] | | It’s all downside. If nothing goes wrong, then the company feels like they’re wasting money on a salary. If things go wrong they’re all your fault. | | |
| |
| ▲ | infecto 2 hours ago | parent | prev | next [-] | | Maybe you find it fun. I don't; I prefer building software, not running and setting up servers. It's also nontrivial once you go past a certain level of complexity and volume. I have made my career building software, and part of that requires understanding the limitations and specifics of the underlying hardware, but at the end of the day I simply want to provision and run a container. I don't want to think about the security and networking setup; it's not worth my time. |
| ▲ | speedgoose 6 hours ago | parent | prev | next [-] | | At a previous job, the company ran its critical IT infrastructure in its own data centers. It was not in the IT industry, but it was large and rich enough to justify two small data centers, notably with batteries, diesel generators, 24/7 teams, and some advanced security (for valid reasons). I agree that solving technical issues is very fun, and hosting services is usually easy, but having resilient infrastructure is costly, and I simply don't like being woken up at night to fix stuff while the company is bleeding money and customers. |
| ▲ | rvz 7 hours ago | parent | prev [-] | | > Why do so many developers and sysadmins think they're not competent to host services? Because those services solve the problem for them. It is the same thing with GitHub. However, as predicted half a decade ago when GitHub started becoming unreliable [0], and as price increases begin to happen, self-hosting starts to make more sense: you get complete control of the infrastructure, it has never been easier to self-host, and you keep costs under control. > it's also fun to solve the technical issues you run into. What you have just seen with coding agents is going to have a similar effect: "developers" will see their skills decline the moment they become over-reliant on coding agents, and they won't be able to write a single line of code to fix a problem they don't fully understand. [0] https://news.ycombinator.com/item?id=22867803 |
|
|
| ▲ | bigfatkitten 6 hours ago | parent | prev | next [-] |
| > Maintaining one server room at headquarters is one thing, but two server rooms in different locations, with resilient power and network, is a bit too much effort IMHO. Speaking as someone who does this, it is very straightforward. You can rent space from people like Equinix or Global Switch for very reasonable prices. They then take care of power, cooling, the cabling plant, etc. |
|
| ▲ | Schlagbohrer 4 hours ago | parent | prev [-] |
| Unfortunately we experienced an issue where our Slurm pool was contaminated by a misconstrained Postgres Daemon. Normally the contaminated slurm pool would drain into a docker container, but due to Rust it overloaded and the daemon ate its own head. Eventually we returned it to a restful state so all's well that ends well. (hardware engineer trying to understand wtaf software people are saying when they speak) |