le-mark a day ago

I'd add this: servers used to be viewed as pets; sysadmins spent a lot of time on snowflake configurations, managing each one individually. When we started standing up tens of servers to host the nodes of our app (early 2000s), the sheer admin overhead was huge. One thing I have not seen mentioned here is how powerful Ansible and similar tools were at simplifying server management. IIRC, being able to provision and stand up servers with known configurations was a huge win AWS provided.
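For example, a minimal sketch of driving Ansible from Python, assuming the third-party ansible-runner package plus a hypothetical provision.yml playbook and inventory.ini host list (neither is from the comment, just illustration of "many hosts, one known configuration"):

    # Sketch: push one known configuration to many hosts at once.
    import ansible_runner  # third-party: pip install ansible-runner

    result = ansible_runner.run(
        private_data_dir=".",        # working dir for artifacts and project files
        playbook="provision.yml",    # hypothetical playbook: packages, users, configs
        inventory="inventory.ini",   # hypothetical inventory listing the tens of hosts
    )
    print(result.status, result.rc)  # e.g. "successful 0" when every host converges

The point being that the playbook, not any individual box, becomes the source of truth: rerunning it converges drifted servers back to the known state.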

zejn 20 hours ago

Also, it was a very very different landscape.

You were commonly given a network uplink and a list of public IP addresses to set up on your box or boxes. IPMI/BMC was not a given on a server, so if you broke something, you needed remote hands, and probably remote brains too.

Virtualisation was in its early days, and most services were co-hosted directly on the server.

Software-defined networking and Open vSwitch were also not a thing back then. There were switches with VLAN support, and you might have had a private network linking the frontend and backend boxes together.

Servers today can be configured remotely: they have their own management interfaces, so you can access the console and install an OS without being on site. Network switches can be reconfigured on the fly, making the network topology changeable online. Even storage can be mapped over a SAN. The only remaining hands-on issue is hardware malfunction.
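As a concrete sketch of that out-of-band management, here is the standard ipmitool CLI driven from Python; the BMC address and credentials are hypothetical placeholders:

    # Sketch: power-cycle a server and check its status via the BMC,
    # without ever touching the hardware.
    import subprocess

    def ipmi(host: str, user: str, password: str, *args: str) -> str:
        """Run an ipmitool command against a server's BMC over the network."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", user, "-P", password, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Hypothetical BMC address and credentials, for illustration only.
    print(ipmi("10.0.0.42", "admin", "secret", "chassis", "power", "status"))
    ipmi("10.0.0.42", "admin", "secret", "chassis", "power", "cycle")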

If I were to compare it with today, it was like having a wardrobe full of Raspberry Pis on a dumb switch, plugging in cables whenever changes were needed.

BirAdam a day ago

Even if you don't go Ansible/Chef/Puppet/Salt, just having Git is good. You can put your configs in Git, use a Git action to substitute whatever variables, and deploy to the target. No extra tools needed, and you get versioned configs.
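A minimal sketch of that idea, using only the Python standard library; the file names, hosts, per-host variables, and the scp transport are illustrative assumptions, not a fixed recipe:

    # Sketch: render a templated config with per-host variables and push it out.
    import subprocess
    from string import Template

    # Hypothetical per-host variables; in practice these would live in the repo too.
    HOSTS = {
        "web1.example.com": {"port": "8080"},
        "web2.example.com": {"port": "8081"},
    }

    with open("app.conf.tmpl") as f:      # template file tracked in Git
        template = Template(f.read())

    for host, variables in HOSTS.items():
        with open("app.conf", "w") as f:
            f.write(template.substitute(variables))
        # Push the rendered config; Git history gives you rollback for free.
        subprocess.run(["scp", "app.conf", f"root@{host}:/etc/app/app.conf"],
                       check=True)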