roncesvalles 3 hours ago

Putting consumer grade (aka "commodity") hardware in a datacenter and running your infra on it is a bit of a meme, in the sense that it's not the only way of doing things. It was probably pioneered/popularized by Google, but that's because writing great software was their "hammer", i.e. they framed every computing problem as a software problem. It was probably easier for them (= Jeff Dean) to take mediocre hardware and write a robust distributed system on top, instead of the other way around.

There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).

IBM has a monopoly in this second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.

vbezhenar 2 hours ago | parent | next [-]

What I think people do today:

1. They run complicated infrastructure software, written by third-party developers.

2. And they run their own simple programs on top of them.

So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server on it. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will light up some alerts, and eventually people will learn about it and fix it.
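The restart behaviour described above can be sketched as a plain supervision loop. This is a toy illustration, not how kubelet is actually implemented; the `supervise` function and its parameters are made up for this sketch, but the shape (restart on exit, back off after quick crashes) mirrors Kubernetes' `restartPolicy: Always` / CrashLoopBackOff behaviour:

```python
import subprocess
import time

def supervise(cmd, max_restarts=5, backoff=1.0, max_backoff=32.0):
    """Restart a process whenever it exits, roughly what a Kubernetes
    restartPolicy of Always does for a crashed container. The delay
    doubles after each short-lived run (the CrashLoopBackOff pattern)
    and resets once the process has stayed up for a while."""
    delay = backoff
    for _ in range(max_restarts):
        started = time.monotonic()
        code = subprocess.call(cmd)  # run the child until it exits
        if time.monotonic() - started > 10:
            delay = backoff  # a healthy long run resets the backoff
        print(f"process exited with {code}; restarting in {delay:.2f}s")
        time.sleep(delay)
        delay = min(delay * 2, max_backoff)
```

The simple HTTP server itself needs none of this logic; crash handling lives entirely in the supervisor, which is the division of labour the comment describes.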

Another example: your simple program makes some REST GET query, and the query fails for some reason. But the query was intercepted by a middleware proxy, and that proxy sees that the HTTP response was 5xx, so it can retry it. It retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the machinery that made it work; it just sent an HTTP query and got a response.
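The retry-on-5xx behaviour can be sketched in a few lines. This is a minimal illustration of the pattern, not the code of any particular service mesh; `get_with_retries` and its parameters are invented for the sketch:

```python
import random
import time
import urllib.error
import urllib.request

def get_with_retries(url, max_attempts=4, base_delay=0.5):
    """Retry a GET on 5xx responses, with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            # Only retry server errors; a 4xx means the request itself
            # is wrong and retrying it won't help.
            if e.code < 500 or attempt == max_attempts - 1:
                raise
        # Exponential backoff with jitter, so many clients retrying the
        # same failing backend don't all hammer it in lockstep.
        time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

In a real deployment this logic sits in the proxy/sidecar, so the "simple program" issues a plain GET and never sees the retries.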

There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.

throwaway27448 2 hours ago | parent | prev | next [-]

> Credit card transactions and banking software run on this model for example

TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.

g947o 2 hours ago | parent [-]

Source? Interested in learning more about this

esseph 32 minutes ago | parent [-]

Red Hat OpenShift (IBM) is what a lot of banks have settled on. Red Hat went all in on capturing those institutions maybe 5+ years ago.

zozbot234 2 hours ago | parent | prev | next [-]

> There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software.

You actually need both: the point of the extremely resilient hardware is that it can act as the single source of truth when you need it, including perhaps hosting some web-based transactions that directly affect that single source of truth. (Calling this a "model" for web infrastructure in general would be misleading, though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can run on ephemeral open systems, which are orders of magnitude cheaper.

Nursie 2 hours ago | parent | prev [-]

> Credit card transactions and banking software run on this model for example

Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.

Banks’ central systems, maybe; I have less experience there. IBM also tried for a while to ride the Linux virtualisation wave, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.

mghackerlady an hour ago | parent [-]

x86 servers weren't that common in the 90s and early 2000s; that was all Sun or the other commercial Unix vendors' machines