| ▲ | tqi 2 days ago |
| I'd be more interested to understand (from folks who were there) what the conditions were that made AWS et al such a runaway hit. What did folks gain, and have those conditions meaningfully changed in some way that makes it less of a slam dunk? My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time-consuming, estimating the right hardware configuration for your business was tricky, and scaling different services independently was impossible. Also, multi-regional redundancy was rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy? [1]). AWS fixed much of that. But maybe things have changed in ways that meaningfully change the calculus? [1] https://www.squarespace.com/press-coverage/2012-11-1-after-s... |
|
| ▲ | jgb1984 2 days ago | parent | next [-] |
| You're falling into the false dichotomy that always comes up with these topics: as if the choice is between the cloud and renting rack space while applying your own thermal paste on the CPUs.
In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware).
You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU; the on-site engineers take care of that for you.
I ordered an extra server today from Hetzner. It was available 90 seconds afterwards. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, all in all from the moment of ordering to fully operational was about 10 minutes tops. If I no longer need the server I just cancel it; the billing is per hour these days. Bang for the buck is unmatched, and there are none of the endless layers of cloud abstraction getting in the way. A fixed, predictable price, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be.
I find it a blissful way to work. |
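For anyone who hasn't used this workflow: the "Ansible playbooks to finish configuration" step can be a single short playbook run against the fresh install. The sketch below is a minimal, hypothetical example, not the commenter's actual setup; the inventory group, package list, and user name are invented for illustration.

```yaml
# site.yml - minimal post-install sketch for a freshly imaged dedicated server.
# Host group, packages, and user names here are illustrative assumptions.
- hosts: new_dedicated_box      # defined in your own inventory file
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Install baseline packages
      ansible.builtin.apt:
        name: [nginx, fail2ban, unattended-upgrades]
        state: present

    - name: Create an unprivileged deploy user
      ansible.builtin.user:
        name: deploy
        groups: sudo
        shell: /bin/bash
```

Run it with something like ansible-playbook -i inventory site.yml; once a playbook like this is idempotent, adding or replacing a server really is just "order, wait a minute, run the playbook".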
| |
| ▲ | le-mark a day ago | parent | next [-] | | I'd add this. Servers used to be viewed as pets; the system admins spent a lot of time on snowflake configurations and managing each one. When we started standing up tens of servers to host the nodes of our app (early 2000s), the simple admin overhead was huge. One thing I have not seen mentioned here was how powerful Ansible and similar tools were at simplifying server management. IIRC, being able to provision and stand up servers simply, with known configurations, was a huge win AWS provided. | | |
| ▲ | zejn 20 hours ago | parent | next [-] | | Also, it was a very, very different landscape. You were commonly given a network uplink and a list of public IP addresses you were to set up on your box or boxes. IPMI/BMC were not a given on a server, so if you broke it, you needed to have remote hands and probably brains too. Virtualisation was in its early days and most of the services were co-hosted on the server. Software-defined networks and Open vSwitch were also not a thing back then. There were switches with support for VLANs and you might have had a private network to link together frontend and backend boxes. Servers today can be configured remotely. They have their own management interfaces so you can access the console and install the OS remotely. The network switches can be reconfigured on the fly, making the network topology reconfigurable online. Even storage can be mapped via SAN. The only hands-on issue is hardware malfunction. If I were to compare with today, it was like having a wardrobe of Raspberry Pis on a dumb switch, plugging in cables when changes were needed. | |
| ▲ | BirAdam a day ago | parent | prev [-] | | Even if you don't go ansible/chef/puppet/salt, just having git is good. You can put your configs in git, use a git action to change whatever variables, and deploy to the target. No extra tools needed, and you get versioned configs. |
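"Git action" here could mean a server-side hook or a hosted CI workflow; as one possible reading, a minimal GitHub Actions sketch that pushes versioned configs to a target box might look like the following. The repository layout, host name, service name, and secret name are all invented for illustration.

```yaml
# .github/workflows/deploy-configs.yml - hypothetical sketch, not a drop-in file.
name: Deploy configs
on:
  push:
    branches: [main]
    paths: ["configs/**"]     # only run when versioned configs change

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install the SSH key for the target host
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan app01.example.com >> ~/.ssh/known_hosts

      - name: Sync configs and reload the service
        run: |
          rsync -az --delete configs/ deploy@app01.example.com:/etc/myapp/
          ssh deploy@app01.example.com 'sudo systemctl reload myapp'
```

Either way the point stands: the configs live in git, every change is versioned and reviewable, and deployment is just copying files to the target.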
| |
| ▲ | lelanthran a day ago | parent | prev | next [-] | | > all in all from the moment of ordering to fully operational was about 10 minutes tops. I think this is an important point. It's quick. When cloud got popular, doing what you did could take upwards of 3 months in an organisation, with some being closer to 8 months. The organisational bureaucracy meant that any asset purchase was a long procedure. So, yeah, the choices were: 1. Wait 6 months to spend out of the capex budget, or 2. Use the opex budget and get something in 10 minutes. We are no longer in that phase, so cloud services make very little sense now, because you can still use the opex budget to get a VPS and have it going in minutes with automation. | |
| ▲ | mattstainton001 a day ago | parent | prev | next [-] | | True, but I think you're touching on something important regarding value. Value is different depending on the consumer: for you, you're willing and able to manage more of the infrastructure than someone who has a narrower skillset.
Being able to move the responsibility for areas of the service onto the provider is what we're paying for, and for some, paying more money to offload more of the responsibility actually results in more value for the organization/consumer. | |
| ▲ | alphager 2 days ago | parent | prev [-] | | > I ordered an extra server today from Hetzner. It was available 90 seconds afterwards. Back when AWS was starting, this would have taken 1-3 days. |
|
|
| ▲ | throwup238 2 days ago | parent | prev | next [-] |
| AWS also made huge inroads in big companies because engineering teams could run their own account off of their budget and didn’t have to go through IT to requisition servers, which was often red tape hell. In my experience it was just as much about internal politics as the technical benefits. |
| |
| ▲ | ericbarrett 2 days ago | parent | next [-] | | > which was often red tape hell Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, i.e. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers. The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc. And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months. | |
| ▲ | rand846633 2 days ago | parent | prev [-] | | Absolutely. At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card. With AWS, leadership suddenly accepted that Ops/Dev could provision what we thought was right. It isn’t logically compelling, but that’s why the cloud gained traction so quickly: it removed friction. | | |
| ▲ | dabockster 2 days ago | parent [-] | | > At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card. That's not a startup if you can't go straight to the founder and get a definite yes/no answer in a few minutes. |
|
|
|
| ▲ | saulpw 2 days ago | parent | prev | next [-] |
| Computing power (compute, memory, storage) has increased 100x or more since 2005, but AWS prices are not proportionally cheaper. So where you were getting a reasonable value in ~2012, that value is no longer reasonable, and by an increasing margin. |
| |
| ▲ | noosphr 2 days ago | parent [-] | | This is the big one. In 2006 when the first EC2 instances showed up they were on par with an OK laptop and would take 24 months to pay enough in rent to cover the cost of the hardware. Today the smallest instance is a joke and the medium instances are the size of a 5-year-old phone. It takes 3 to 6 months to pay enough in rent to cover the cost of the hardware. What was a great deal in 2006 is a terrible one today. |
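To make that arithmetic concrete with assumed, illustrative numbers (not quotes): an on-demand instance billed at roughly $0.50-0.70/hour comes to about $370-510/month at ~730 hours/month, while a comparably specced machine can often be bought outright for somewhere in the $1,500-2,500 range, which puts the rent-equals-hardware crossover in roughly the 3-6 month window described above. Exact figures vary widely by instance family, region, and discounts, so treat this as an order-of-magnitude sketch only.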
|
|
| ▲ | ascorbic a day ago | parent | prev | next [-] |
| raises hand I ran a small SaaS business in the early 2000s, pre-AWS. Renting dedicated servers was really expensive. To the extent that it was cheaper for us to buy a 1U server and host it in a local datacenter. Maintaining that was a real pain. Getting the train to London to replace a hard drive was so much fun. CDNs were "call for pricing". EC2 was a revelation when it launched. It let us expand as needed without buying servers or paying for rack space, and try experiments without shoving everything onto one server and fiddling with Apache configs in production. Lambda made things even easier (at the expense of needing new architectures). The thing that has changed is that renting bare metal is orders of magnitude cheaper, and comparable in price to shared hosting in the bad old days. |
|
| ▲ | dabockster 2 days ago | parent | prev | next [-] |
| > But maybe things have changed in ways that meaningfully change the calculus? I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall of my own experience, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS. Also, AWS used to give out free credits like free candy. I bet most of what keeps companies there now is vendor lock-in and a lot of institutional brain drain. |
|
| ▲ | baobun 2 days ago | parent | prev | next [-] |
| One factor was huge amounts of free credits for the first year or more for any startup that appeared above-board and bothered to ask properly. Second, egress data being very expensive while ingress is free has contributed to making them sticky gravity wells. |
| |
| ▲ | dabockster 2 days ago | parent [-] | | The free credits... what a WILD time! Just show up to a hackathon booth, ask nicely, and you'd get months/years worth of "startup level" credits. Nothing super powerful - basically the equivalent of a few quad core boxes in a broom closet. But still for "free". |
|
|
| ▲ | Macha 13 hours ago | parent | prev | next [-] |
| The problem it really solved was that your sysadmins were still operating by SSHing into the physical servers and running commands meticulously typed out in a release doc or stored on a local MediaWiki instance, and acquiring new compute resources involved a battle with finance for the capex, which would delay pretty much any project for weeks. Cloud vendors let engineers at many companies sidestep both processes. Everything else was just reference material for how to sell it to your management. |
|
| ▲ | fennecbutt 2 days ago | parent | prev | next [-] |
| Et al is for people, Et cetera is for things. Edit: although actually many people on here are American so I guess for you AWS is legally a person... |
| |
| ▲ | nemo 2 days ago | parent | next [-] | | As an American who studied Latin: Et al. = et alii, "and other things", "among other things". Etc. = et cetera, "and so on". Either may or may not apply to people depending on context. | |
| ▲ | dragonwriter 2 days ago | parent | prev | next [-] | | > although actually many people on here are American so I guess for you aws is legally a person... Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law. | | |
| ▲ | fennecbutt 2 days ago | parent | next [-] | | Oh I didn't know that, thanks for the lesson. Tbf it just sounds...so American, so I assumed, my bad. But East India Company was involved...whew I guess that does make sense, oof. | |
| ▲ | dragonwriter 2 days ago | parent [-] | | What is unique in the US is the interaction between corporate personhood and our First Amendment and the way that our courts have applied that to limit political campaign finance laws, and a lot of “corporate personhood” controversy is really about that, not actually about corporate personhood as a broad concept. | | |
| ▲ | sarchertech a day ago | parent | next [-] | | People also get confused about the Citizens United ruling. It had nothing to do with corporate personhood. The ruling said that since a person has first amendment rights, those same rights extend to a group of people—any group—whether it’s a non profit organization, a corporation, or something else. | |
| ▲ | BirAdam a day ago | parent | prev [-] | | Well, it's also what allows the executives, boards, and owners of companies to be divorced from the consequences of their actions in a legal sense much of the time. |
|
| |
| ▲ | LtWorf a day ago | parent | prev [-] | | Source of it being older than Christianity? |
| |
| ▲ | tqi 2 days ago | parent | prev [-] | | I don't think that's a hard and fast rule? I think et al is for named, specific entities of any kind. You might say "palm trees, evergreen trees, etc" but "General Sherman, Grand Oak, et al" |
|
|
| ▲ | throwaway201606 a day ago | parent | prev | next [-] |
| Answers here: https://learn.microsoft.com/en-us/azure/cloud-adoption-frame... TL;DR version - it's about money and business balance sheets, not about technology. For businesses past a certain size, going to cloud is a decision ALWAYS made by business, not by technology. From a business perspective, having a "relatively fixed" ongoing cost (an operational expense, i.e. OpEx), even if it is significantly higher than what it would cost to buy and build out internally (a capital expense, i.e. CapEx), makes financial planning, taxes, and managing EBITDA much easier. Note that no one on the business side really cares what the tech implications are as long as "tech still sorta runs mostly OK". It also, via financial wizardry, makes tech cost "much less" on a quarter-over-quarter and year-over-year basis. |
|
| ▲ | rufus_foreman 2 days ago | parent | prev [-] |
| It was the accountants. CapEx vs. OpEx. |