| ▲ | Sebb767 2 days ago |
| I dislike those black and white takes a lot. It's absolutely true that most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients. That being said, the cloud does have a lot of advantages:

- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.

- You're not paying up-front costs (vs. investing hundreds of dollars in buying server hardware) and everything is available right now [0]

- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM

- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers

Also, engineering time and time in general can be expensive. If you are a solo entrepreneur or a slow-growth company, you have a lot of engineering time for basically free. But in a quick-growth or prototyping phase, not to speak of venture funding, things can be quite different. Buying engineering time for >150€/hour can quickly offset a lot of savings [1].

Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.

[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.

[1] Just to be fair, debugging cloud errors can be time-consuming, too, and experienced AWS engineers will not be cheaper. But an RDS-with-solid-backups equivalent will usually not amortize quickly if you need to pay someone to set it up. |
|
| ▲ | John23832 2 days ago | parent | next [-] |
| You don't actually need any of those things until you no longer have a "project", but a business which will allow you to pay for the things you require. You'd be amazed by how far you can get with a home linux box and cloudflare tunnels. |
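For a flavor of how little plumbing that takes, exposing a local service through a Cloudflare tunnel is roughly this (a sketch; the tunnel name, hostname, and port are placeholders):

    cloudflared tunnel login
    cloudflared tunnel create myapp
    cloudflared tunnel route dns myapp app.example.com
    cloudflared tunnel run --url http://localhost:8080 myapp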
| |
| ▲ | koito17 2 days ago | parent | next [-] | | On this site, I've seen these kinds of takes repeatedly over the past years, so I went ahead and built a little forum that consists of a single Rust binary and SQLite. The binary runs on a Mac Mini in my bedroom with Cloudflare tunnels. I get continuous backups with Litestream, and testing backups is as trivial as running `litestream restore` on my development machine and then running the binary.

Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4-5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I can further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency is to just run the binary in a VPS close to my users. However, the current setup works so well that I just see no point in changing what little "infrastructure" I've built.

The other cool thing is the fact that the backend + litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare. It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub Actions, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base model M4 is really fast at compiling code compared to just about every single cloud computer I've ever used at previous jobs. | | |
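For anyone curious, the Litestream side of a setup like that is only a couple of commands (a sketch; the bucket and paths are placeholders):

    # continuously replicate the SQLite file to any S3-compatible bucket
    litestream replicate /data/forum.db s3://my-bucket/forum

    # test the backup by restoring it on a dev machine
    litestream restore -o /tmp/forum.db s3://my-bucket/forum

And the mmap trick is a single pragma, e.g. `PRAGMA mmap_size = 17179869184;` for 16 GB.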
| ▲ | a day ago | parent | next [-] | | [deleted] | |
| ▲ | busterarm 2 days ago | parent | prev [-] | | Just one of the couple dozen databases we run for our product in the dev environment alone is over 12 TB. How could I not use the cloud? | | |
| ▲ | maccard a day ago | parent | next [-] | | 12TB is $960/month in gp3 storage alone. You can buy 12TB of NVMe storage for less than $960, and it will be orders of magnitude faster than AWS. Your use case is the _worst_ use case for the cloud. | | |
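(For reference, that figure is just the arithmetic on gp3's typical US-region list price of $0.08/GB-month: 12,000 GB × $0.08 = $960/month, before any provisioned IOPS or throughput.)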
| ▲ | pnutjam a day ago | parent [-] | | The most consistent misunderstanding I see about the cloud is disk I/O. Nobody understands how slow your standard cloud disk is under load. They see good performance and assume that will always be the case.

They don't realize that most cloud disks use a form of token tracking, where they build up I/O credits over time; if you have bursts or sustained high I/O load, you will very quickly notice that your disk speeds are garbage. For some reason people more easily understand the limits of CPU and memory, but constantly overlook disk. | |
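An easy way to see this for yourself is a sustained random-read test with fio, run long enough to drain any burst credits (parameters are illustrative):

    fio --name=burst-test --filename=/mnt/data/testfile --size=10G \
        --rw=randread --bs=4k --iodepth=32 --direct=1 \
        --time_based --runtime=1800

On burstable volumes the reported IOPS often starts high and then collapses to the baseline once the credits run out.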
| ▲ | maccard a day ago | parent | next [-] | | Even without that, at the heart of it you are still accessing storage over a SAN-like interface with some sort of local cache. With an actual local drive on AWS, the performance is night and day. | | |
| ▲ | pnutjam a day ago | parent [-] | | Sure, you can work around it; but it blows up the savings a lot of people expect when they don't include this in their math. Also, a SAN is often faster than local disk if you have a local SAN. | |
| |
| ▲ | anktor a day ago | parent | prev | next [-] | | What could I read to inform myself better on this topic? It is true I had not seen this angle before | | | |
| ▲ | immibis a day ago | parent | prev [-] | | At one time I had a project to run a cryptocurrency node for BSC (this is basically a fork of Ethereum with all the performance settings cranked up to 11, and blocks centrally issued instead of being mined). It's very sensitive to random access disk throughput and latency. At the time I had a few tiny VPS on AWS and a spinning drive at home, so I evaluated running it there. Even besides the price, you simply cannot run it on AWS EBS because the disk is just too slow to validate each block before the next one arrives. I spent a few hundred dollars and bought an NVMe SSD for my home computer instead. |
|
| |
| ▲ | sgarland a day ago | parent | prev | next [-] | | First of all, if you have a dev DB that’s 12 TB, I can practically guarantee that it is tremendously unoptimized. But also, that’s extremely easily handled with physical servers - there are NVMe drives that are 10x as large. | | |
| ▲ | fx1994 a day ago | parent | next [-] | | That's what I always nag my devs about: why is our DB 1 TB when only 20 users are working in our app? They are collecting all kinds of garbage and saving it to the DB. Poor development skills, I would say. Our old app did the same thing, and after 15 years it was barely 100 GB with tens of users. Devs today are SELECT *. If it does not work, they say we need more resources. That's why I hate cloud. | |
| ▲ | xpe 14 hours ago | parent [-] | | Nothing like piling transactions, analytics, and logs into the same database. /s |
| |
| ▲ | John23832 12 hours ago | parent | prev [-] | | Eh, please find me a 120 TB NVMe. | | |
| |
| ▲ | mootothemax a day ago | parent | prev | next [-] | | > Just one of the couple dozen databases we run for our product in the dev environment alone is over 12 TB. > How could I not use the cloud? Funnily enough, one of my side projects has its (processed) primary source of truth at that exact size. It updates itself automatically every night, adding a further ~18-25 million rows. Big but not _big_ data, right? Anyway, that's sitting running happily with instant access times (yay solid DB background) on a dedicated OVH server that's somewhere around £600/mo (+VAT) and shared with a few other projects. OVH's virtual rack tech is pretty amazing too; replicating that kind of size across the internal network is trivial. | |
| ▲ | wheybags 2 days ago | parent | prev | next [-] | | https://www.seagate.com/products/enterprise-drives/exos/exos... | | |
| ▲ | selcuka 2 days ago | parent [-] | | > one of the couple dozen databases I guess this is one of those use cases that justify the cloud. It's hard to host that reliably at home. | | |
| ▲ | c0balt a day ago | parent [-] | | Not to push the point too hard, but a "dev environment" for a product is for a business (not an individual consumer). Having a server (rack) in an office is not that hard, but alas the cloud might be better here for ease of administration. | |
| ▲ | mcny a day ago | parent | next [-] | | My understanding is that aws exists because we can't get any purchase approved in under three months. | | |
| ▲ | darkwater a day ago | parent [-] | | I don't think so. An organization so big and bureaucratic that it needs 3 months to authorize a server purchase will for sure need a few weeks of paperwork to authorize a new AWS account creation, and will track the spending per OU and will cut budget and usage if they think you deserve it. |
| |
| ▲ | wongarsu a day ago | parent | prev [-] | | And plenty of datacenters will be happy to give you some space in one of their racks. Not wanting to deal with backups or HA are decent reasons to put a database in the cloud (as long as you are aware how much you are overpaying). Not having a good place to put the server is not a good reason | | |
| ▲ | immibis a day ago | parent [-] | | If anyone's curious about the ballpark cost, a carrier-owned (?) DC near me that publishes prices (most don't) advertises a full rack for 650€ per month, including internet @ 20TB/month @ 1 Gbps, and 1kW power. Though both of those are probably less than you'd need if you needed a full rack of space, which I assume is part of the reason that pricing is almost always "contact us". I did not bother getting a quote just for the purpose of this comment. But another thing people need to be less afraid of, when they're looking to actually spend a few digits of money and not just comment about it, is asking for quotes. |
|
|
|
| |
| ▲ | koito17 2 days ago | parent | prev | next [-] | | 12 TB fits entirely into the RAM of a 2U server (cf. Dell PowerEdge R840). However, I think there's an implicit point in TFA; namely, that your personal and side projects are not scaling to a 12 TB database.

With that said, I do manage approximately 14 TB of storage in a RAIDZ2 at my home, for "Linux ISOs". The I/O performance is "good enough" for streaming video and BitTorrent seeding. However, I am not sure what your latency requirements and access patterns are. If you are mostly reading from the 12 TB database and don't have specific latency requirements on writes, then I don't see why the cloud is a hard requirement? To the contrary, most cloud providers provide remarkably low IOPS in their block storage offerings. Here is an example of Oracle Cloud's block storage for 12 TB:

    Max Throughput: 480 MB/s
    Max IOPS: 25,000

https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/bl...

Those are the kind of numbers I would expect of a budget SATA SSD, not "NVMe-based storage infrastructure". Additionally, the cost for 12 TB in this storage class is ~$500/mo. That's roughly the cost of two 14 TB hard drives in a mirror vdev on ZFS (not that this is a good idea btw). This leads me to guess most people will prefer a managed database offering rather than deploying their own database on top of a cloud provider's block storage. But 12 TB of data in the gp3 storage class of RDS costs about $1,400/mo. That is already triple the cost of the NAS in my bedroom.

Lastly, backing up 12 TB to Backblaze B2 is about $180/mo. Given that this database is for your dev environment, I am assuming that backup requirements are simple (i.e. 1 off-site backup).

The key point, however, is that most people's side projects are unlikely to scale to a 12 TB dev environment database. Once you're at that scale, sure, consider the cloud. But even at the largest company I worked at, a 14 TB hard drive was enough storage (and IOPS) for on-prem installs of the product. The product was an NLP-based application that automated due diligence for M&As. The storage costs were mostly full-text search indices on collections of tens of thousands of legal documents, each document could span hundreds to thousands of pages. The backups were as simple as having a second 14 TB hard drive around and periodically checking the data isn't corrupt. | | |
| ▲ | busterarm 2 days ago | parent [-] | | Still missing the point. This is just one server, and just in the dev environment. How many pets do you want to be tending to? I have 10^5 servers I'm responsible for... The quantity and methods the cloud affords me allow me to operate the same infrastructure with 1/10th as much labor. At the extreme ends of scale this isn't a benefit, but for large companies in the middle this is the only move that makes any sense. 99% of posts I read talking about how easy and cheap it is to be in the datacenter all have a single-digit number of racks' worth of stuff. Often far less.

We operate physical datacenters as well. We spend multiple millions in the cloud per month. We just moved another full datacenter into the cloud, and the difference in cost between the two is less than $50k/year. Running in physical DCs is really inefficient for us for a lot of annoying and insurmountable reasons. And we no longer have to deal with procurement and vendor management. My engineers can focus their energy on more valuable things. | |
| ▲ | rowanG077 a day ago | parent | next [-] | | What is this ridiculous bait and switch. First you talk about a 12 TB dev databases and "How could I not use the cloud?". And you rightfully get challenged on that and then suddenly it's about the number of servers you have to manage and you don't have the energy to do that with your team. Those two have nothing to do with each other. | |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | CyberDildonics a day ago | parent | prev | next [-] | | Why do people think it takes "labor" to have a server up and running? Multiple millions in the cloud per month? You could build a room full of giant servers and pay multiple people for a year just on your monthly server bill. | |
| ▲ | jamesnorden a day ago | parent | prev [-] | | Found the "AWS certified cloud engineer". |
|
| |
| ▲ | Aeolun a day ago | parent | prev | next [-] | | Buy a pretty basic HDD? These days 12 TB isn’t all that much? | |
| ▲ | dublinben a day ago | parent | prev | next [-] | | 12 TB is easy. https://yourdatafitsinram.net/ | |
| ▲ | n3t 2 days ago | parent | prev | next [-] | | What's your cloud bill? | |
| ▲ | dragonelite a day ago | parent | prev | next [-] | | Sounds more like your use case is in the 1~2% of cases where a simple server and SQLite is maybe not the correct answer. | |
| ▲ | cultofmetatron a day ago | parent | prev | next [-] | | What are you doing that you have 12 TB in dev??? My startup isn't even using a TB in production, and we handle multiple millions of dollars in transactions every month. | |
| ▲ | esseph a day ago | parent | prev | next [-] | | A high end laptop now can come with double that amount of storage. | |
| ▲ | cess11 a day ago | parent | prev | next [-] | | My friends run hundreds of TBs served onto the Internet for hobby and pleasure reasons. It's not all HA, NVMe, web-scale stuff, but it's not like a few hundred TBs is a huge undertaking even for individual nerds with a bit of money to spend, or connections at corporations that regularly decommission hardware and are happy not to have to spend resources getting rid of it. This summer I bought a used server for 200 euros from an acquaintance; I plan on shoving 140 TB into it and expect some of my future databases to exceed 10 TB in size. | |
| ▲ | a day ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | bambax a day ago | parent | prev | next [-] | | Exactly! I've been self hosting for about two years now, on a NAS with Cloudflare in front of it. I need the NAS anyway, and Cloudflare is free, so the marginal cost is zero. (And even if the CDN weren't free it probably wouldn't cost much.) I had two projects reach the front page of HN last year, everything worked like a charm. It's unlikely I'll ever go back to professional hosting, "cloud" or not. | | |
| ▲ | John23832 a day ago | parent [-] | | If you have explosive growth, sure, cloud. The vast majority of us that are actually technically capable are better served self-hosting. Especially with tools like Cloudflare tunnels and Tailscale. |
| |
| ▲ | fragmede 2 days ago | parent | prev [-] | | You can get quite far without that box, even, and just use Cloudflare R2 as free static hosting. | |
| ▲ | selcuka 2 days ago | parent [-] | | CloudFlare Pages is even easier for static hosting with automatic GitHub pulls. | | |
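For reference, even without the GitHub integration, deploying a built static site is a single Wrangler command (a sketch; the directory and project name are placeholders):

    npx wrangler pages deploy ./dist --project-name my-site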
| ▲ | jen729w 2 days ago | parent [-] | | Happy Netlify customer here, same deal. $0. (LOL 'customer'. But the point is, when the day comes, I'll be happy to give them money.) | | |
| ▲ | 0cf8612b2e1e a day ago | parent | next [-] | | Careful what you wish for. Netlify sent a guy a $104k bill from the free plan. Thankfully social media outrage saved the guy. https://news.ycombinator.com/item?id=39520776 | |
| ▲ | ascorbic a day ago | parent [-] | | Netlify changed their pricing after that so that free accounts are always free. | | |
| ▲ | franciscop a day ago | parent [-] | | Could you give a reference please? I was literally going to recommend Netlify at work, but didn't after I saw that story. | | |
| ▲ | selcuka 15 hours ago | parent | next [-] | | https://www.netlify.com/pricing/#faq > The free plan is always free, with hard monthly limits that cannot be exceeded or incur any costs. | |
| ▲ | immibis 15 hours ago | parent | prev [-] | | Is $0.55/GB not enough reason to avoid them? I guess not if your business is making more than that - bandwidth expense for a shopping site shouldn't be a problem when the customers are spending $100 for every 0.1GB - but that price should realistically be closer to $0.01/GB or even $0.002/GB. Sounds like they're forwarding you AWS's extremely excessive bandwidth pricing. | | |
| ▲ | selcuka 15 hours ago | parent [-] | | > Is $0.55/GB not enough reason to avoid them? Where did you read that? The pricing page says 10 credits per GB, and extra credits can be purchased at $10 per 1500 credit. So it's more like $0.067/GB. |
|
|
|
| |
| ▲ | fvdessen a day ago | parent | prev [-] | | FYI I just migrated from Netlify to Cloudflare pages and Cloudflare is massively faster across all metrics. |
|
|
|
|
|
| ▲ | ksec a day ago | parent | prev | next [-] |
>most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients. That being said, the cloud does have a lot of advantages:

When did Linode and DO get dropped from being part of the cloud? What used to separate VPS and cloud was resources at per-second billing, which DO and Linode, along with a lot of 2nd-tier hosts, also offer. They are part of the cloud.

Scaling used to be an issue, because buying and installing your hardware, or sending it to a DC to be installed and made ready, takes too much time. Dedicated server offerings weren't big enough at the time, and the highest core count in 2010 was an 8-core Xeon CPU. Today we have EPYC Zen 6c at 256 cores and likely double the IPC. Scaling jobs that used to require a rack of servers can now be done with 1 single server, fitting everything inside RAM. Managed database? PlanetScale or Neon.

A lot of the issues for medium-to-large-size projects that "Cloud" managed to solve are no longer an issue in 2025, unless you are in the top 5-10% of projects that require these sorts of flexibilities.
| |
| ▲ | bobdvb a day ago | parent [-] | | For a lot of people (not me), if it's not from AWS, Azure, GCP or Oracle then it's not cloud, it's just a sparkling hosting provider. I had someone on this site arguing that Cloudflare isn't a cloud provider... |
|
|
| ▲ | fhd2 a day ago | parent | prev | next [-] |
My pet peeves are:

1. For small stuff, AWS et al aren't that much more expensive than Hetzner; mostly in the same ballpark, maybe 2x in my experience.

2. What's easy to underestimate for _developers_ is that your self-hosted setup is most likely harder to get third-party support for. If you run software on AWS, you can hire someone familiar with AWS, and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.

I absolutely prefer self-hosting on root servers; it has always been my go-to approach for my own companies, big and small stuff. But for people that can't or don't want to mess with their infrastructure themselves, I do recommend the cloud route, even with all the current anti-hype.
| |
| ▲ | makeitdouble a day ago | parent | next [-] | | > 2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.

If you're at an early/smaller stage, you're not doing anything too fancy either way. Even self-hosted, it will probably be easy enough to understand that you're just deploying a Rails instance, for example. It only becomes trickier if you're handling a ton of traffic or applying a ton of optimizations, and end up in a state where a team of sysadmins would be needed while you're doing it alone and ad hoc. IMHO the important part is to properly realize when things will get complicated and move on to a proper org or stack before you're stuck. | |
| ▲ | fhd2 a day ago | parent [-] | | You'd think that, but from what I've seen, some people come up with pretty nasty self hosting setups. All the way from "just manually set it all up via SSH last year" to Kubernetes. Of course, people can and also definitely do create a mess on AWS. It's just that I've seen that _far_ less. |
| |
| ▲ | matt-p a day ago | parent | prev | next [-] | | One way of solving this is to just use K3s or even plain Docker. It is then just Kubernetes/containers, and you can hire a lot of people who understand that. | |
| ▲ | amtamt a day ago | parent [-] | | Absolutely recommend k3s. Start with a single node and keep on scaling as your customer base increases. |
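For the uninitiated, bootstrapping k3s really is a one-liner per node (from the k3s quick-start docs; the server address and token are yours):

    # on the first node
    curl -sfL https://get.k3s.io | sh -

    # on each additional node
    curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -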
| |
| ▲ | jimbokun a day ago | parent | prev [-] | | > mostly in the same ballpark, maybe 2x in my experience. 2x is the same ballpark??? |
|
|
| ▲ | tbeseda 2 days ago | parent | prev | next [-] |
| > But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error. Agreed.
These sorts of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr. |
|
| ▲ | zigzag312 2 days ago | parent | prev | next [-] |
| > A few clicks. Getting through AWS documentation can be fairly time consuming. |
| |
| ▲ | rtpg 2 days ago | parent | next [-] | | Figuring out how to do db backups _can_ also be fairly time consuming. There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!) I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though. | | |
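To be fair, the hand-rolled route can start absurdly small; a minimal sketch for Postgres would be a single cron entry (paths and DB name are placeholders, and it ignores off-site copies and retention):

    0 3 * * * pg_dump -Fc mydb > /var/backups/mydb_$(date +\%F).dump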
| ▲ | Dylan16807 2 days ago | parent | next [-] | | > Figuring out how to do db backups _can_ also be fairly time consuming. apt install automysqlbackup autopostgresqlbackup Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots. | | |
| ▲ | ngc248 a day ago | parent [-] | | Nah, filesystem snapshots may not lead to consistent DB backups. DB backup software usually uses a plugin to tell the DB to coalesce data before taking a snapshot. | |
| ▲ | Dylan16807 a day ago | parent | next [-] | | Databases have to be ready for power loss, don't they? They might not be happy about it, but if that corrupts anything then the design has failed. And again I'll emphasize proper snapshot, cutting off writes at an exact point in time. A normal file copy cannot safely back up an active database. | |
| ▲ | enronmusk a day ago | parent | prev | next [-] | | > filesystem snapshots may not lead to consistent DB backups Only if your database files are split across multiple file systems, which is atypical. | |
| ▲ | baq a day ago | parent | prev [-] | | At least one OS you’ve heard of can quiesce the file system to allow taking a consistent snapshot; I’d be surprised if this wasn’t widely available everywhere. |
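For a concrete flavor of the snapshot route, a minimal sketch assuming Postgres on a ZFS dataset named tank/pgdata (a real production setup would lean on WAL archiving or a battle-tested backup tool instead):

    psql -c "CHECKPOINT;"              # optional: flush dirty pages to shorten crash recovery
    zfs snapshot tank/pgdata@nightly   # atomic, point-in-time snapshot
    zfs send tank/pgdata@nightly | ssh backup-host zfs recv pool/pgdata

Because WAL-based databases are designed to survive power loss, restoring a truly atomic snapshot behaves like recovering from an unclean (but safe) shutdown.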
|
| |
| ▲ | anotherevan 2 days ago | parent | prev | next [-] | | Hmmm, I think you have to figure out how to do your database backups anyway as trying to get a restorable backup out of RDS to use on another provider seems to be a difficult task. Backups that are stored with the same provider are good, providing the provider is reliable as a whole. (Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.") | | |
| ▲ | bdangubic 2 days ago | parent [-] | | aws would never do that :) plus you can also do it in aws with like 75 clicks around UI which makes no sense even when you are tripping on acid | | |
| ▲ | happymellon a day ago | parent [-] | | > 75 clicks

Well, 2 commands... (strictly, the CLI verb is start-export-task, and it wants an identifier and a KMS key too)

    aws rds start-export-task \
        --export-task-identifier <ExportId> \
        --source-arn <SnapshotArn> \
        --s3-bucket-name <Bucket> \
        --iam-role-arn <Role> \
        --kms-key-id <KmsKeyId>

Then copy it down:

    aws s3 cp <S3 Location> <Local Dir> --recursive

The biggest effort would then be running the Apache Parquet to CSV tool on it. | |
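Even that last step is close to a one-liner these days, e.g. with DuckDB (paths illustrative):

    duckdb -c "COPY (SELECT * FROM read_parquet('export/*.parquet')) TO 'dump.csv' (HEADER);"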
| ▲ | prmoustache a day ago | parent | next [-] | | Those buckets and IAM policies and roles also have to be managed. There are also turnkey solutions that allow one to spin up a DB and set up replication and backups inside or outside of the big cloud vendors. That is the point of DB Kubernetes operators, for instance. | |
| ▲ | darkwater a day ago | parent | prev [-] | | Plus the s3 bucket creation and definition commands, and the IAM role and attached policy commands. If you do all in the webUI it's not going to be 75 clicks either but 30 for sure. | | |
| ▲ | happymellon a day ago | parent [-] | | It could easily be 30 clicks. But creating an S3 bucket, an IAM role and attaching policies isn't 30 commands. |
|
|
|
| |
| ▲ | bdangubic 2 days ago | parent | prev [-] | | most definitely do not want to spend time learning aws… would rather learn about typewriter maintenance |
| |
| ▲ | JuniperMesos 2 days ago | parent | prev | next [-] | | And making sure you're not making a security configuration mistake that will accidentally leak private data to the open internet because of a detail of AWS you were unaware of. | | | |
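The classic example being a world-readable S3 bucket; guarding against that is itself one more incantation you have to know about, e.g.:

    aws s3api put-public-access-block --bucket <Bucket> \
        --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true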
| ▲ | fulafel a day ago | parent | prev | next [-] | | And learning TypeScript and CDK, if we're comparing scripted, repeatable setups which you should be doing from the start. | | |
| ▲ | sofixa a day ago | parent [-] | | > repeatable setups which you should be doing from the start

Yes, but not with

> TypeScript and CDK

Unless your business includes managing infrastructure with your product, for whatever reason (like you provision EC2 instances for your customers and that's all you do), there is no reason to shoot yourself in the foot with a fully-fledged programming language for something that needs to be as stable as infrastructure. The saying is Infrastructure as Code, not with code. Even assuming you need to learn Terraform from scratch but already know TypeScript, you would still save time compared to learning CDK, figuring out what is possible with it, and debugging issues down the line. | |
| ▲ | fulafel a day ago | parent [-] | | I think declarative is nicer too, but choosing a non-mainstream tech here takes a self-confidence in the matter that inexperienced AWSers are unlikely to have. And learning something arguably better, like CloudFormation / Terraform / SST, is still a hurdle. |
|
| |
| ▲ | hughw 2 days ago | parent | prev [-] | | gotta say, Amazon Q can do the details for you in many cases. |
|
|
| ▲ | graemep a day ago | parent | prev | next [-] |
| > You're getting a lot of services readily available. Need offsite backups? A few clicks

I think it is a lot safer for backups to be with an entirely different provider. It protects you in case of account compromise, account closure, or disputes. If using cloud and you want to be safe, you should be multi-cloud. People have been saved from disaster by multi-cloud setups.

> You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware)

Not true for VPSes or rented dedicated servers either.

> Peak-heavy loads can be a lot cheaper.

They have to be very spiky indeed, though. LLMs might fit, but a lot of compute-heavy spiky loads do not. I saved a client money on video transcoding that only happened once per upload, and only over a month or two a year, by renting a dedi all year round rather than using the AWS transcoding service.

> Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.

You have to do work to ensure things run across multiple availability zones (and preferably regions) anyway.

> But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.

You get more forced upgrades. An unmanaged database will only need a lot of work if operating at large scale; if you are, then it's probably well worth employing a DBA anyway, as an AWS or similar managed DB is not going to do all the optimising and tuning a DBA will do. |
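Keeping backups with a second provider can be as small as a nightly cron'd rclone sync (remote and bucket names are placeholders):

    rclone sync /var/backups b2:my-offsite-bucket/backups --transfers 8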
|
| ▲ | gtech1 2 days ago | parent | prev | next [-] |
| Any serious business will (might?) have hundreds of TBs of data. I store that in our DC, with a 2nd-DC backup, for about 1/10 the price of what it would cost in S3. When does the cloud start making sense? |
| |
| ▲ | presentation 2 days ago | parent [-] | | In my case we have a B2B SaaS where access patterns are occasional, revenue per customer is high, general server load is low. Cloud bills just don’t spike much. Labor is 100x the cost of our servers so saving a piddly amount of money on server costs while taking on even just a fraction of one technical employee’s worth of labor costs makes no sense. |
|
|
| ▲ | sz4kerto a day ago | parent | prev | next [-] |
| I think compliance is one of the key advantages of cloud. When you go through SOC2 or ISO27001, you can just tick off entire categories of questions by saying 'we host on AWS/GCP/Azure'. It's really shitty that we all need to pay this tax, but I've been just asked about whether our company has armed guards and redundant HVAC systems in our DC, and I wouldn't know how to do that apart from saying that 'our cloud provider has all of those'. |
| |
| ▲ | prmoustache a day ago | parent [-] | | In my experience you still have to provide an awful lot of "evidence". I guess the advantage of AWS/GCP/Cloud is that they are so ubiquitous you could literally ask an LLM to generate fake evidence to speed up the process. |
|
|
| ▲ | locknitpicker a day ago | parent | prev | next [-] |
| > That being said, the cloud does have a lot of advantages: Another advantage is that if you aim to provide a global service consumed throughout the world then cloud providers allow you to deploy your services in a multitude of locations in separate continents. This alone greatly improves performance. And you can do that with a couple of clicks. |
|
| ▲ | winddude 2 days ago | parent | prev | next [-] |
| Linode was better and had cheaper pricing before being bought by Akamai. |
| |
| ▲ | Aeolun 2 days ago | parent | next [-] | | I don’t feel like anything really changed? Fairly certain the prices haven’t changed. It’s honestly been pleasantly stable. I figured I’d have to move after a few months, but we’re a few years into the acquisition and everything still works. | | | |
| ▲ | jonway 2 days ago | parent | prev | next [-] | | Akamai has some really good infrastructure, and an extremely competent global cdn and interconnects. I was skeptical when linode was acquired, but I value their top-tier peering and decent DDoS mitigation which is rolled into the cost. | |
| ▲ | busterarm 2 days ago | parent | prev | next [-] | | No longer getting DDOSed multiple years in a row on Christmas Eve is worth whatever premium Akamai wants to charge over old Linode. | |
| ▲ | mcmcmc 2 days ago | parent | prev [-] | | Whoa, an acquisition made things worse for everyone but the people who cashed out? Crazy, who could have seen that coming | | |
| ▲ | presentation a day ago | parent [-] | | Guess you came for the hot take without actually using the service or participating in any intelligent conversation. All the sibling comments observe that nothing you are talking about happened. Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all. |
|
|
|
| ▲ | hshdhdhehd a day ago | parent | prev | next [-] |
| To me DO is a cloud. It is pricey (for the performance) and convenient. It is possibly a wiser bet than AWS for a startup that wants to spend less developer (read: expensive!) time on infra. |
|
| ▲ | dabockster 2 days ago | parent | prev | next [-] |
| You're literally playing into what the author is criticizing. |
|
| ▲ | hinkley a day ago | parent | prev | next [-] |
| I want more examples of people running the admin interface on-prem and the user-visible parts in the cloud. |
|
| ▲ | matt-p a day ago | parent | prev | next [-] |
| I mean, there are many places that sell multi-AZ, hourly-billed VPS/bare metal/GPU at a fraction of the cost of AWS. I would personally have an account at one of those places, back up there with everything ready to spin up instances and fail over if you lose your rack, and use them for any bursty loads. |
|
| ▲ | EGreg 2 days ago | parent | prev [-] |
| I started out with Linode, a decade ago. It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.

AWS has a bunch of startup credits you can use, if you're smart. But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use Cloudflare Workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a Cloudflare tunnel. I've done this for Telegram bots for example.

Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!

LEVEL 2: And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever. Oh, do you want to do distributed inference? Wasmcloud: https://wasmcloud.com/blog/2025-01-15-running-distributed-ml... ... but I'd recommend just paying Google for AI workloads.

Want livestreaming that's peer to peer? We've got that too: https://github.com/Qbix/Media/blob/main/web/js/WebRTC.js

PS: For WebRTC livestreaming, you can't get around having to pay for TURN servers, though.

LEVEL 3: Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers. https://pears.com/news/building-apocalypse-proof-application... |
| |
| ▲ | foldr a day ago | parent [-] | | >It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous. You can easily scale hard drive space independently of RAM by buying block storage separately and then mounting it on your Linode. | | |
| ▲ | graemep a day ago | parent [-] | | I think every VPS provider I have looked at any time recently (and I have been moving things in the last few weeks) offers some option for block storage separate from compute. Most offer an object storage option too. | | |
|
|