lkrubner 15 hours ago

Interesting that the mania for over-investment in devops is beginning to abate. Here on Hacker News I was a steady critic of both Docker and Kubernetes going back to at least 2017, but most of those posts were unpopular. I have to go back to 2019 to find one that sparked a conversation:

https://news.ycombinator.com/item?id=20371961

The stuff I posted about Kubernetes did not draw a conversation, but I was simply documenting what I was seeing: vast over-investment in devops even at tiny startups that were just getting going and could have easily dumped everything on a single server, exactly as we used to do things back in 2005.

OtomotO 15 hours ago | parent | next [-]

It's just the hype moving on.

Every generation has to make similar mistakes again and again.

I am sure if we had the opportunity and the hype was there we would've used k8s in 2005 as well.

The same thing is true for e.g. JavaScript on the frontend.

I am currently migrating a project from React to HTMX.

Suddenly there is no build step anymore.

Some people were like: "That's possible?"

Yes, yes it is. And it turns out that, for that project, it increases stability and makes everything less complex while delivering the exact same business value.

Does that mean that React is always the wrong choice?

Well, yes, React sucks, but solutions like React? No! It depends on what you need, on the project!

Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)

sarchertech 6 hours ago | parent | next [-]

>Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)

The problem is that most devs don’t view themselves as carpenters. They view themselves as hammer carpenters or saw carpenters etc…

It’s not entirely their fault, some of the tools are so complex that you really need to devote most of your time to 1 of them.

I realize that this kind of tool specialization is sometimes required, but I think it’s overused by at least an order of magnitude.

The vast majority of companies that are running k8s, react, kafka etc… with a team of 40+, would be better off running rails (or similar) on heroku (or similar), or a VPS, or a couple servers in the basement. Most of these companies could easily replace their enormous teams of hammer carpenters and saw carpenters with 3-4 carpenters.

But devs have their own gravity. The more devs you have the faster you draw in new ones, so it’s unclear to me if a setup like the above is sustainable long term outside of very specific circumstances.

But if it were simpler there wouldn’t be nearly as many jobs, so I really shouldn’t complain. And it’s not like every other department isn’t also bloated.

ajayvk 15 hours ago | parent | prev | next [-]

Along those lines, I am building https://github.com/claceio/clace for teams to deploy internal tools. It provides a Cloud Run type interface to run containers, including scaling down to zero. It implements an application server that runs containerized apps.

Since HTMX was mentioned, Clace also makes it easy to build Hypermedia driven apps.

MortyWaves 8 hours ago | parent [-]

Would you be open to non-Python support as well? This tool seems useful, very useful in fact, but I mainly use .NET (which, yes, can run very well in containers).

ajayvk 7 hours ago | parent [-]

Starlark (a Python-like config language) is used to configure Clace. For containerized apps, Python frameworks are supported without a Dockerfile being required. All other languages currently require a user-provided Dockerfile; the `container` spec can be used.

I do plan to add specs for other languages. New specs have to be added here: https://github.com/claceio/appspecs. New specs can also be created locally in the config; see https://clace.io/docs/develop/#building-apps-from-spec

esperent 15 hours ago | parent | prev | next [-]

> Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job

I think this is a gross misunderstanding of the complexity of tools available to carpenters. Use a saw. Sure, electric, hand powered? Bandsaw, chop saw, jigsaw, scrollsaw? What about using CAD to control the saw?

> Suddenly there is no build step anymore

How do you handle making sure the JS you write works on all the browsers you want to support? Likewise for CSS: do you use something like autoprefixer? Or do you just memorize all the vendor prefixes?

creesch 10 hours ago | parent | next [-]

As far as browser prefixes go, you know that browser vendors have largely stopped using those? Not even recently; that process started way back in 2016. Chances are that if you are using prefixes in 2024 you are supporting browser versions that, by all logic, should no longer have internet access because of all the security implications...

OtomotO 15 hours ago | parent | prev [-]

Htmx works on all browsers I want to support.

I don't use any prefixed CSS and haven't for many years.

Last time I did knowingly and voluntarily was about a decade ago.

augbog 15 hours ago | parent | prev | next [-]

It's actually kinda hilarious how RSC (React Server Components) is pretty much going back to what PHP was. But yeah, it proves your point: as the hype moves on, people begin to realize why certain things were good and others weren't.

fud101 13 hours ago | parent | prev | next [-]

Where does Tailwind stand on this? You can use it without a build step, but a build step is strongly recommended in production.

fer 11 hours ago | parent [-]

A build step in your pipeline is fine because, chances are, you already have a build step in there.

harrall 13 hours ago | parent | prev | next [-]

People gravely misunderstand containerization and Docker.

All it lets you do is put shell commands into a text file and be able to run it self-contained anywhere. What is there to hate?

You still use the same local filesystem, the same host networking; you still rsync your data dir, and you can still use the same external MySQL server if you want. Nothing has changed.

You do NOT need a load balancer, a control plane, networked storage, Kubernetes or any of that. You ADD ON those things when you want them like you add on optional heated seats to your car.
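
To make that concrete, here is a minimal sketch (the image name, package list, paths and DB host are all made up): the Dockerfile is just the shell commands you would have run on the box anyway, and the running container still uses the host's network and a plain directory on the host's filesystem.

    # "shell commands in a text file":
    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl
    COPY ./app /srv/app
    CMD ["/srv/app/run.sh"]
    EOF

    docker build -t myapp .

    # same host networking, same local data dir, same external MySQL server:
    docker run -d --name myapp \
      --network host \
      -v /var/lib/myapp:/srv/app/data \
      -e DB_HOST=mysql.internal.example \
      myapp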

skydhash 6 hours ago | parent [-]

Why would you want to run it anywhere? People mostly select an OS and just update that. It may be great when distributing applications for others to host, but not when it’s the only strategy. I have had to reverse-engineer Dockerfiles when the developer wouldn’t provide proper documentation.

sobellian 15 hours ago | parent | prev | next [-]

I've worked at a few tiny startups, and I've both manually administered a single server and run small k8s clusters. k8s is way easier. I think I've spent 1, maybe 2 hours on devops this year. It's not a full-time job, it's not a part-time job, it's not even an unpaid internship. Perhaps at a bigger company with more resources and odd requirements...

nicce 13 hours ago | parent [-]

But how much extra does this cost? Sounds like you are using cloud-provided k8s.

sobellian 13 hours ago | parent | next [-]

EKS is priced at $0.10 per cluster-hour, which works out to $876 / yr / cluster at current rates.

Negligible for me personally, it's much less than either our EC2 or RDS costs.

fer 11 hours ago | parent [-]

Yeah, using EKS isn't the same thing as "administering k8s", unless I misread you above. Actual administration is already done for you, it's batteries included, turn-key, and integrated with everything AWS.

A job ago we had our own k8s cluster in our own DC, and it required a couple of teams to keep it running and reasonably integrated with everything else in the rest of the company. It was probably cheaper overall than cloud given the compute capacity we had, but also probably not by much given the number of people dedicated to it.

Even my 3-node k3s at home requires more attention than what you described.

sobellian 6 hours ago | parent [-]

You did misread me, I never said I administered k8s. The quoted phrase does not exist :)

p_l 4 hours ago | parent | prev [-]

I currently use k8s to control bunch of servers.

The amount of work/cost of using k8s for handling them in comparison to doing it "old style" is probably negative by now.

valenterry 15 hours ago | parent | prev | next [-]

So, let's say you want to deploy server instances. Let's keep it simple and say you want to have 2 instances running. You want to have zero-downtime-deployment. And you want to have these 2 instances be able to access configuration (that contains secrets). You want load balancing, with the option to integrate an external load balancer. And, last, you want to be able to run this setup both locally and also on at least 2 cloud providers. (EDIT: I meant to be able to run it on 2 cloud providers. Meaning, one at a time, not both at the same time. The idea is that it's easy to migrate if necessary)

This is certainly a small subset of what Kubernetes offers, but I'm curious: what would be your go-to solution for those requirements?

bruce511 15 hours ago | parent | next [-]

That's an interesting set of requirements though. If that is indeed your set of requirements then perhaps Kubernetes is a good choice.

But the set seems somewhat arbitrary. Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

Indeed, given that you have 4 machines (2 instances x 2 providers), could a human manage this? Is Kubernetes overkill?

I ask this merely to wonder. Naturally if you are rolling out hundreds of machines you should use something like Kubernetes, and no doubt by then you have significant revenue (and are thus able to pay for dedicated staff), but where is the cross-over?

Because to be honest most startups don't have enough traction to need 2 servers, never mind 4, never mind 100.

I get the aspiration to be large. I get the need to spend that VC cash. But I wonder if Devops is often just premature and that focus would be better spent getting paying customers.

valenterry 14 hours ago | parent [-]

> Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

I think the "2 cloud providers" criterion is maybe negotiable. Also, maybe there was a misunderstanding: I didn't mean that I want to run it on two cloud providers at once, but rather that I run it on one of them and could easily migrate to the other one if necessary.

The zero-downtime one isn't. It's not necessarily so much about actually having zero downtime; it's that I don't want to have to think about it. Anything besides zero-downtime deployment actually adds complexity to the development process. It has nothing to do with trying to be large.

AznHisoka 14 hours ago | parent [-]

I disagree with that last part. By default, having a few seconds of downtime is not complex. The easiest thing you could do to a server is restart it. It's literally just a restart!

valenterry 13 hours ago | parent | next [-]

It's not. Imagine there is a bug that stops the app from starting. It could be anything, from a configuration error (e.g. against the database) to a problem with warmup (if necessary) or any kind of other bug like an exception that only triggers in production for whatever reasons.

EDIT: and worse, it could be something that just started and would even happen when trying to deploy the old version of the code. Imagine a database configuration change that allows the old connections to stay open until they are closed but prevents new connections from being created. In that case, even an automatic roll back to the previous code version would not resolve the downtime. This is not theory, I had those cases quite a few times in my career.

globular-toast 11 hours ago | parent | prev [-]

I managed a few production services like this and it added a lot of overhead to my work. On the one hand I'd get developers asking me why their stuff hasn't been deployed yet. But then I'd also have to think carefully about when to deploy and actually watch it to make sure it came back up again. I would often miss deployment windows because I was doing something else (my real job).

I'm sure there are many solutions but K8s gives us both fully declarative infrastructure configs and zero downtime deployment out of the box (well, assuming you set appropriate readiness probes etc)
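
For anyone wondering what that looks like in practice, a minimal sketch (names, image, port and health path are invented; real configs have more to them): two replicas, a rolling update that never drops below two ready pods, and a readiness probe gating traffic.

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels: { app: myapp }
      strategy:
        rollingUpdate:      # old pods stay until new ones pass the readiness probe
          maxUnavailable: 0
          maxSurge: 1
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:1.2.3
            ports: [{ containerPort: 8080 }]
            readinessProbe:
              httpGet: { path: /healthz, port: 8080 }
    EOF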

So now I (a developer) don't have to worry about server restarts or anything for normal day to day work. We don't have a dedicated DevOps/platforms/SRE team or whatnot. Now if something needs attention, whatever it is, I put my k8s hat on and look at it. Previously it was like "hmm... how does this service deployment work again..?"

osigurdson 15 hours ago | parent | prev | next [-]

"Imagine you are in a rubber raft, you are surrounded by sharks, and the raft just sprung a massive leak - what do you do?". The answer, of course, is to stop imagining.

Most people on the "just use bash scripts and duct tape" side of things assume that you really don't need these features, that your customers are ok with downtime, and generally that the project you are working on is just your personal cat photo catalog anyway and doesn't need such features. So, stop pretending that you need anything at all and get a job at the local grocery store.

The bottom line is there are use cases, that involve real customers, with real money that do need to scale, do need uptime guarantees, do require diverse deployment environments, etc.

QuiDortDine 14 hours ago | parent | next [-]

Yep. I'm one of 2 DevOps engineers at an R&D company with about 100 employees. They need these services for development; if an important service goes down you can multiply that downtime by 100, turning hours into man-days and days into man-months. K8s is simply the easiest way to reduce the risk of having to plead for your job.

I guess most businesses are smaller than this, but at what size do you start to need reliability for your internal services?

ozim 11 hours ago | parent | prev [-]

You know that you can scale servers just as well; you can use good practices with bash scripts and deployments, having them documented and in version control.

Equating bash scripts and running servers with duct tape and poor engineering, versus k8s YAML being "proper engineering", is just wrong.

caseyohara 15 hours ago | parent | prev | next [-]

I think you are proving the point; there are very, very few applications that need to run on two cloud providers. If you do, sure, use Kubernetes if that makes your job easier. For the other 99% of applications, it’s overkill.

Apart from that requirement, all of this is very doable with EC2 instances behind an ALB, each running nginx as a reverse proxy to an application server with hot restarting (e.g. Puma) launched with a systemd unit.
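
For concreteness, a rough sketch of the per-instance half of that setup (service name, paths and port are invented; the ALB and its health check live on the AWS side):

    # app server as a systemd unit
    cat >/etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=My app server
    After=network.target

    [Service]
    WorkingDirectory=/srv/myapp
    ExecStart=/usr/local/bin/bundle exec puma -C config/puma.rb
    Restart=always

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable --now myapp

    # nginx in front, reverse-proxying to the app server
    cat >/etc/nginx/conf.d/myapp.conf <<'EOF'
    upstream app { server 127.0.0.1:3000; }
    server {
      listen 80;
      location / { proxy_pass http://app; }
    }
    EOF
    nginx -s reload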

osigurdson 15 hours ago | parent | next [-]

To me that sounds harder than just using EKS. Also, other people are more likely to understand how it works, can run it in other environments (e.g. locally), etc.

valenterry 14 hours ago | parent | prev | next [-]

Sorry, that was a misunderstanding. I meant that I want to be able to run it on two cloud providers, but one at a time is fine. It just means that it would be easy to migrate/switch over if necessary.

globular-toast 10 hours ago | parent | prev [-]

Hmm, let's see, so you've got to know: EC2, ALB, Nginx, Puma, systemd, then presumably something like Terraform and Ansible to deploy those configs, or write a custom set of bash scripts. And after all of that, you're still tied to one cloud provider.

Or, instead of reinventing the same wheels for Nth time, I could just use a set of abstractions that work for 99% of network services out there, on any cloud or bare metal. That set of abstractions is k8s.

tootubular 15 hours ago | parent | prev | next [-]

My personal go-to solution for those requirements -- well, 1 cloud provider, I'll follow up on that in a second -- would be using ECS or an equivalent service. I see the OP was a critic of Docker as well, but for me, ECS hits a sweet spot. I know the compute is at a premium, but at least in my use-cases it's so far been a sensible trade.

About the 2 cloud providers bit. Is that a common thing? I get wanting to migrate away from one to another, but having a need for running on more than 1 cloud simultaneously just seems alien to me.

mkesper 8 hours ago | parent | next [-]

Last time I checked, ECS was even more expensive than using Lambda but without the ability to start your container quickly, so I really don't get the niche it fits into, compared to Lambda on one side and self-hosting Docker on minimal EC2 instances on the other.

tootubular 6 hours ago | parent [-]

I may need to look at Lambda closer! At least way back, I thought it was a no-go since the main runtime I work with is Ruby. As for minimal EC2 instances, definitely, I do that for environments where it makes sense and that's the case fairly often.

valenterry 14 hours ago | parent | prev [-]

Actually, I totally agree. ECS (in combination with Secrets Manager) basically fulfills all those needs, except that it's not so easy to reproduce/simulate locally, and of course there's the vendor lock-in.

shrubble 15 hours ago | parent | prev | next [-]

Do you know of actual (not hypothetical) cases, where you could "flip a switch" and run the exact same Kubernetes setups on 2 different cloud providers?

InvaderFizz 14 hours ago | parent | next [-]

I run clusters on OKE, EKS, and GKE. Code overlap is like 99%, with the only real differences being around ingress load balancers.

Kubernetes is what has provided us the abstraction layer to do multicloud in our SaaS. Once you are outside the k8s control plane, it is wildly different, but inside is very consistent.

threeseed 15 hours ago | parent | prev | next [-]

Yes. I've worked on a number of very large banking and telco Kubernetes platforms.

All used multi-cloud and it was about 95% common code with the other 5% being driver style components for underlying storage, networking, IAM etc. Also using Kind/k3d for local development.

devops99 15 hours ago | parent | prev | next [-]

Both EKS (Amazon) and GKE (Google Cloud) run Cilium for the networking part of their managed Kubernetes offerings. That's the only real "hard part". From the users' point of view, the S3 buckets, the network-attached block devices, and compute (CRIO container runtime) are all the same.

If you are using some other cloud provider or want uniformity, there's https://Talos.dev

brodo 6 hours ago | parent | prev | next [-]

If you are located in Germany and run critical IT infrastructure (banks, insurance companies, energy companies), you have to be able to deal with a cloud provider completely going down within 24 hours. Not everyone who has to can really do it, but the big players can.

hi_hi 15 hours ago | parent | prev [-]

Yes, but it would involve first setting up a server instance and then installing k3s :-)

valenterry 13 hours ago | parent [-]

I actually also think that k3s probably comes closest to that. But I have never used it, and ultimately it is still k8s.

kccqzy 14 hours ago | parent | prev | next [-]

I've worked at tiny startups before. Tiny startups don't need zero-downtime-deployment. They don't have enough traffic to need load balancing. Especially when you are running locally, you don't need any of these.

anon7000 14 hours ago | parent | next [-]

Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

Tiny startups are rarely trying to build projects for small customer bases (e.g. little scaling required). They’re trying to be the next unicorn. So they should probably make sure they can easily scale away from tossing everything on the same server.

lmm 13 hours ago | parent | next [-]

> Tiny startups can’t afford to loose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

Having too many (or too big) customers to handle is a nice problem to have, and one you can generally solve when you get there. There are a handful of giant customers that would want you to be giant from day 1, but those customers are very difficult to land and probably not worth the effort.

jdlshore 13 hours ago | parent | prev [-]

Startups need product-market fit before they need scale. It’s incredibly hard to come by and most won’t get it. Their number one priority should be to run as many customer acquisition experiments as possible for as little as possible. Every hour they spend on scale before they need it is an hour less of runway.

lkjdsklf 12 hours ago | parent [-]

While true, zero-downtime deployments are... trivial... even for a tiny startup. So you might as well do it.

p_l 4 hours ago | parent | prev [-]

Tiny startups don't have money to spend on too much PaaS or too many VMs, or time to faff around with custom scripts for all sorts of work.

Admittedly, if you don't know k8s, it might be a non-starter... but if you have some knowledge, k3s plus a cheap server is a wonderful combo.

whatever1 15 hours ago | parent | prev | next [-]

Why does a startup need zero-downtime-deployment? Who cares if your site is down for 5 seconds? (This is how long it takes to restart my Django instance after updates).

valenterry 14 hours ago | parent | next [-]

Because it increases development speed. It's maybe okay to be down for 5 seconds. But if I screw up, I might be down until I fix it. With zero-downtime deployment, if I screw up, then the old instances are still running and I can take my time to fix it.

everfrustrated 8 hours ago | parent | prev [-]

If you're doing CD, where every push is an automated deploy, a small company might easily have a hundred deploys a day.

So you need seamless deployments.

xdennis an hour ago | parent [-]

I think it's a bit of an exaggeration to say a "small" company easily does 100 deployments a day.

lkjdsklf 14 hours ago | parent | prev | next [-]

We’ve been deploying software like this for a long ass time before kubernetes.

There’s shitloads of solutions.

It’s like minutes of clicking in a UI of any cloud provider to do any of that. So doing it multiple times is a non-issue.

Or automate it with like 30 lines of bash. Or chef. Or puppet. Or salt. Or ansible. Or terraform. Or or or or or.

Kubernetes brings in a lot of nonsense that isn’t worth the tradeoff for most software.

If you feel it makes your life better, then great!

But there’s way simpler solutions that work for most things

valenterry 13 hours ago | parent [-]

I'm actually not using kubernetes because I find it too complex. But I'm looking for a solution for that problem and I haven't found one, so I was wondering what OP uses.

Sorry, but I don't want to "click in a UI". And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

lkjdsklf 12 hours ago | parent [-]

> And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

Maybe not literally 30... I didn't bother actually writing it. Also, bash was just a single example; it's way less Terraform code to do the same thing. You just need an ELB backed by an autoscaling group. That's not all that much to set up. That gets you the two load-balanced servers and zero-downtime deploys. When you want to deploy, you just create a new scaling group and launch configuration, attach them to the ELB, and ramp down the old one. Easy peasy. For the secrets, you need at least KMS, and maybe Secrets Manager if you're feeling fancy. That's not much to set up. I know for sure AWS and Azure provide nice CLIs that would let you do this in not that many commands, or just use Terraform.
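
Roughly the shape it takes with the AWS CLI, as a hedged sketch: names, the target group ARN and subnets are placeholders, and the flags are from memory, so check them against the docs rather than copy-pasting.

    # bring up the new version behind the existing load balancer target group
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name myapp-v2 \
      --launch-template LaunchTemplateName=myapp,Version=2 \
      --min-size 2 --max-size 2 --desired-capacity 2 \
      --target-group-arns arn:aws:elasticloadbalancing:...:targetgroup/myapp/... \
      --vpc-zone-identifier "subnet-aaa,subnet-bbb"

    # once the new instances are healthy, ramp the old group down
    aws autoscaling update-auto-scaling-group \
      --auto-scaling-group-name myapp-v1 \
      --min-size 0 --desired-capacity 0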

Personally if I really cared about multi cloud support, I'd go terraform (or whatever it's called now).

valenterry 11 hours ago | parent [-]

> You just need an ELB backed by an autoscaling group

Sure, and then you can neither 1.) test your setup locally nor 2.) easily move to another cloud provider. So that doesn't really fit what I asked.

If the answer is "there is nothing, just accept the vendor lock-in" then fine, but please don't reply with "30 lines of bash" and make me have expectations. :-(

rozap 13 hours ago | parent | prev | next [-]

A script that installs some dependencies on an Ubuntu VM. A script that rsyncs the build artifact to the machine. The script can drain connections and restart the service using the new build, then move on to the next VM. The cloud load balancer points at those VMs and has a health check. It's very simple. Nothing fancy.
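
Something in the spirit of those scripts, as a sketch (host names, paths, service name and health endpoint are made up):

    set -euo pipefail
    for host in web1.internal web2.internal; do
      # ship the new build
      rsync -az build/ "deploy@$host:/srv/myapp/"
      # restart; the load balancer's health check pulls the node out while it's down
      ssh "deploy@$host" 'sudo systemctl restart myapp'
      # wait until this node serves traffic again before touching the next one
      until curl -fsS "http://$host:8080/healthz" >/dev/null; do sleep 2; done
    done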

Our small company uses this setup. We migrated from GCP to AWS when our free GCP credits from YC ran out and then we used our free AWS credits. That migration took me about a day of rejiggering scripts and another of stumbling around in the horrible AWS UI and API. Still seems far, far easier than paying the kubernetes tax.

valenterry 12 hours ago | parent [-]

I guess the cloud load balancer is the most custom part. Do you use the ALB from AWS?

wordofx 8 hours ago | parent | prev | next [-]

0 downtime. Jesus Christ. Nginx and HAProxy solved this shit decades ago. You can drop out a server or group. Deploy it. Add it back in. With a single telnet command. You don’t need junk containers to solve things like “0 downtime deployments”. That was a solved problem.
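
For reference, the "drop a server out, deploy, add it back" dance looks roughly like this against HAProxy's admin socket (backend and server names plus the socket path are made up, and the socket has to be enabled with a `stats socket ... level admin` line in haproxy.cfg):

    echo "disable server be_app/web1" | socat stdio UNIX-CONNECT:/var/run/haproxy.sock
    # ...deploy and restart the app on web1, wait for it to come back up...
    echo "enable server be_app/web1"  | socat stdio UNIX-CONNECT:/var/run/haproxy.sock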

valenterry 6 hours ago | parent [-]

Calm down my friend!

You are not wrong, but that only covers a part of what I was asking. How about the rest? How do you actually bring your services to production? I'm curious.

And, PS, I don't use k8s. Just saying.

amluto 14 hours ago | parent | prev | next [-]

For something this simple, multi-cloud seems almost irrelevant to the complexity. If I’m understanding your requirements right, a deployment consists of two instances and a load balancer (which could be another instance or something cloud-specific). Does this really need to have fancy orchestration to launch everything? It could be done by literally clicking the UI to create the instances on a cloud and by literally running three programs to deploy locally.

CharlieDigital 15 hours ago | parent | prev | next [-]

Serverless containers.

Effectively using Google and Azure managed K8s. (Full GKE > GKE Autopilot > Google Cloud Run). The same containers will run locally, in Azure, or AWS.

It's fantastic for projects big and small. The free monthly grant makes it perfect for weekend projects.

gizzlon 6 hours ago | parent | prev [-]

Cloud Run. Did you read the article?

Migrating to another cloud should be quite easy. There are many PaaS solutions. The hard parts will be things like migrating the data, making sure there's no downtime AND no drift/diff in the underlying data when some clients write to Cloud-A and some write to Cloud-B, etc. But k8s does not fix these problems, so..

htgb 5 hours ago | parent [-]

Came here to say the same thing: PaaS. Intriguing that none of the other 12 sibling comments mention this… each in their bubble I guess (including me). We use Azure App Service at my day job and it just works. Not multi-cloud obviously, but the other stuff: zero downtime deploys, scale-out with load balancing… and not having to handle OS updates etc. And containers are optional, you can just drop your binaries and it runs.

pclmulqdq 15 hours ago | parent | prev | next [-]

The attraction of this stuff is mostly the ability to keep your infrastructure configurations as code. However, I have previously checked in my systemd config files for projects and set up a script to pull them on new systems.
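
In case it helps anyone, that "script to pull them" amounts to something like this (repo URL, unit names and paths are invented):

    git clone https://git.example.com/infra/systemd-units.git /tmp/units
    sudo cp /tmp/units/*.service /etc/systemd/system/
    sudo systemctl daemon-reload
    sudo systemctl enable --now myapp.service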

It's not clear that docker-compose or even kubernetes* is that much more complicated if you are only running 3 things.

* if you are an experienced user

honkycat 15 hours ago | parent [-]

Having done both: running a small Kubernetes cluster is simpler than managing a bunch of systemd files.

worldsayshi 11 hours ago | parent [-]

Yeah this is my impression as well which makes me not understand the k8s hate.

pclmulqdq 5 hours ago | parent [-]

The complexity of k8s comes the moment you need to hold state of some kind. Now instead of one systemd entry, we have to worry about persistent volume claims and other such nonsense. When you are doing things that are completely stateless, it's simpler than systemd.
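
To illustrate the extra moving part (names and size invented): instead of "it's a file at this path", state becomes a claim the cluster has to satisfy, plus the mount wiring it into the pod.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myapp-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    EOF
    # ...and the Deployment then has to reference it:
    #   volumes:
    #   - name: data
    #     persistentVolumeClaim: { claimName: myapp-data }
    #   and, in the container spec:
    #   volumeMounts:
    #   - { name: data, mountPath: /srv/app/data }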

p_l 4 hours ago | parent [-]

If you need to care about state with systemd you still have the "nonsense" of persistent volume claims; they are just something you keep in notes somewhere, in my experience usually in the heads of the sysadmins, or an Excel sheet, or a text file that tries to track which server has what data connected how.

pclmulqdq 3 hours ago | parent [-]

Understand that in the hypothetical system we are discussing, there are something like 1-2 servers. In that case the "volume claim" is just "it's a file on the obvious filesystem" and does not actually need to be spelled out the way you need to spell it out in k8s. The file path you give in environment variables is where the most up-to-date version of the volume claim is. And that file is free to expand to hundreds of GB without bothering you.

p_l 3 hours ago | parent [-]

Things get iffier when you start doing things like running multiple instances of something (maybe you're sticking two test environments on the same box for your developers), or you suddenly grow a bit, no longer fit on the server, and start migrating around.

The complexity of PVCs in my experience isn't really that big compared to this, possibly lower, and I did stuff both ways.

sunshine-o 6 hours ago | parent | prev | next [-]

Kubernetes, as an industry standard that a lot of people complain about, is just a sitting duck waiting to be disrupted.

Anybody who doesn't have the money, time or engineering resources will jump on whatever appears to be a decent alternative.

My intuition is that the alternative already exists, but I can't see it...

A bit like Spring emerged as an alternative to J2EE or what HTMX is to React & co.

Is it k3s or something more radical?

Is it on a chinese Github?

santoshalper 15 hours ago | parent | prev | next [-]

As an industry, we spent so much time sharpening our saw that we nearly forgot to cut down the tree.

rozap 13 hours ago | parent | prev | next [-]

ZIRP is over.

honkycat 15 hours ago | parent | prev [-]

Start-ups that don't need to scale will quickly go away, because how else are you going to make a profit?

How have you been going since 2005 and still don't understand the economics of software?

ndriscoll 14 hours ago | parent | next [-]

CPUs are ~300x more powerful and storage offers ~10,000x more IOPS than 2005 hardware. More efficient server code exists today. You can scale very far on one server. If you were bootstrapping a startup, you could probably plan to use a pair of gaming PCs until at least the first 1-10M users.

shakiXBT 7 hours ago | parent [-]

10 million users on a pair of gaming PCs is ridiculous. What's your product, a website that tells the current time?

ndriscoll 3 hours ago | parent [-]

How many requests do you expect users to actually make? Especially if you're serving a B2B market; not everything is centered around addiction/"engagement". My 8 year old PC can do over 10k page requests/second for a reddit or myspace clone (without getting into caching). A modern high end gaming PC should be around 10x more capable (in terms of both CPU and storage IOPS). The limit in terms of needing to upgrade to "unusual" hardware for a PC would likely be the NIC. Networking is one place where typical consumer gear is stuck in 2005.
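
A back-of-envelope under made-up but not unreasonable assumptions:

    10,000,000 registered users
      x ~1% online at any moment        = ~100,000 concurrent users
      x 1 page request every 30 s each  = ~3,300 requests/second
      vs. ~10,000 requests/second measured on one 8-year-old PC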

Webapps might make it hard to tell, but a modern computer (or even an old computer like mine) is mindbogglingly fast.

Vespasian 13 hours ago | parent | prev | next [-]

Just to make it clear: There are a million use cases that don't involve scaling fast.

For example B2B businesses where you have very few but extremely high value customers for specialized use cases.

Another one is building bulky hardware. Your software infrastructure does not need to grow any faster than your shop floor is building it.

Whether you want to call that a "startup" is up for debate (and mostly semantics if you ask me), but at one point they were all a zero-employee company and needed to survive their first 5 years.

In general you won't find their products on the app store.

infecto 6 hours ago | parent | prev [-]

It's disappointing to see how tone-deaf some users like yourself are. Such an immature way to speak.