| ▲ | adamtulinius 9 hours ago |
| If you spin up Kubernetes for "a couple of containers to run your web app", I think you're doing something wrong in the first place, especially coupled with your comment about adding SDN to Kubernetes. People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes. |
|
| ▲ | ownagefool 6 hours ago | parent | next [-] |
| It depends what you're doing with it. My app is a fairly simple node process with some sidecar worker processes. k8s enables me to deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup (rough sketch below). Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS ( or your preferred cloud provider ). Where things get complicated is that k8s itself is borderline cloud-provider software. So teams who were previously good at using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere. So you really have three obvious use cases: a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user.
b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip.
c) You want cloud semantics without being on a cloud provider. However, if you're a single developer with a single machine, or a very small team that's happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim until the devops heroes start their work. |
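For anyone who wants a concrete picture, a minimal sketch of the pattern (names and labels are made up, not my actual setup): every PR gets its own namespace, the same app manifests get applied into it with the PR's image tag, and teardown is deleting the namespace.

    # one namespace per PR; all of the app's manifests get applied into it
    apiVersion: v1
    kind: Namespace
    metadata:
      name: pr-1234
      labels:
        preview: "true"   # lets a cleanup job find and delete stale previews

On PR open it's kubectl apply -n pr-1234 -f manifests/, on close it's kubectl delete namespace pr-1234, and the namespace takes everything inside it down with it. That's the "standard cleanup" part.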
| |
| ▲ | shimman 3 hours ago | parent | next [-] | | Call me old-fashioned but I prefer tools like Dokploy that make deployment across different VPSes extremely easy. Dokploy allows me to utilize my home media server, using local instances of Forgejo to deploy code, to great effect. k8s appears to be a corporate welfare jobs program that only trillion-dollar multinational monopolistic companies can collectively spend 100s of millions sustaining. Since most companies aren't trillion-dollar monopolies, adopting such measures seems like an extremely poor fit. All it signals to me is that we have to stop letting SV + VC dictate the direction of tech in our industry, because their solutions are unsustainable and borderline useless for the vast majority of use cases. I'll never forget the insurance company I worked at that orchestrated every single repo with a k8s deployment, whose cloud spend was easily in the high six figures a month to handle a workload of 100k MAU where the concurrent peak never went above 5,000 users, something the company knew well with 40 years of records. They literally had a 20-person team whose entire existence was managing the company's k8s setup. The only reason the company could sustain this was that it's an insurance company (insurance companies are highly profitable, don't let them convince you otherwise; so profitable that the government has to regulate how much profit they're legally allowed to make). Absolute insanity, unsustainable, and a tremendous waste of limited human resources. Glad you like it for your node app tho, happy for you. | | |
| ▲ | wredcoll an hour ago | parent | next [-] | | K8s is just a standardized API for running "programs" on hardware, which is a really difficult problem that it solves fairly well. Is it complex? Yes, but so is the problem it's trying to solve. Is its complexity still nicer and easier to use than the previous generation of multi-machine deployment systems? Also yes. | |
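To make "standardized API" concrete, here's a minimal sketch (the image name is made up); the same declarative spec runs unchanged on k3s on a laptop, on EKS, or on GKE:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                        # "run three copies of this program"
      selector:
        matchLabels: { app: myapp }
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
            - name: myapp
              image: ghcr.io/example/myapp:1.0   # hypothetical image
              ports:
                - containerPort: 8080

You declare the desired state; the cluster decides which machines actually run it and restarts it when it dies.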
| ▲ | johnmaguire 3 hours ago | parent | prev [-] | | Just as a quick aside, I tried Coolify, Dokploy, Dockge, and Komodo, and if you're trying to do a Heroku-style PaaS, Dokploy is really good. Hands down the best UX for delivering apps & databases. It's too bad about the licensing. (e.g. OIDC + audit logs behind a paid enterprise license.) Coolify is full of features, but the UX suffers and they had a nasty breaking bug at one point (related to Traefik if you want to search it.) Dockge is just a simple interface into your running Docker containers and Komodo is a bit harder to understand/come up with a viable deployment model, and has no built-in support for things like databases. | | |
| ▲ | evanphx 2 hours ago | parent | next [-] | | If you're open, I'd love to get your thoughts on https://miren.dev. We're doing similar things, but leaning into the small-team aspects of these systems, along with giving folks an optional cloud tie-in to help with auth, etc. | |
| ▲ | indigodaddy 2 hours ago | parent | prev [-] | | I use Cosmos Cloud on a free 24GB Oracle VM. Nice UI, solid system. | | |
| ▲ | johnmaguire 2 hours ago | parent [-] | | Cosmos Cloud looks neat! At a first glance from looking at the web page, it looks more focused on delivering a "personal cloud" or "1-click deploy apps." Dokploy is more Heroku-styled: while you can deploy third-party apps (it's just Docker after all), it seems really geared towards and intended for you to be deploying your own apps that you developed, alongside a "managed" database (meaning, the DB is exposed in the UI, includes backup functionality, and can even be temporarily exposed publicly on the internet for debugging.) Coolify feels a bit like a mix of the two deployment models, while Dockge is "bring your own deployment" and Komodo offers to replace Terraform/Ansible/docker-compose through its own declarative GitOps-style file-based config but lacks features like managed databases, or built-in subdomain provisioning. |
|
|
| |
| ▲ | electroly an hour ago | parent | prev | next [-] | | > I'd argue the k8s APIs and interfaces are better than trying to do this on AWS I think Amazon ECS is within striking distance, at least. It does less than K8S, but if it fits your needs, I find it an easier deployment target than K8S. There's just a lot less going on. | | | |
| ▲ | evanphx 2 hours ago | parent | prev | next [-] | | Totally, it's all about the primitives. I'm curious whether exe.dev is gonna build on the base, or just leave it up to folks to add all their own bespoke stuff to do containers, logs, etc. The last 20 years have given us a lot of great primitives for folks to plug in; I think lots of people don't want to wrangle those primitives, they just want to use them. | |
| ▲ | wredcoll an hour ago | parent | prev | next [-] | | > a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user. b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip. c) You want cloud semantics without being on a cloud provider. This is well put, and it's very similar to the arguments made when comparing programming languages. At the end of the day you can accomplish the same tasks no matter which interface you choose. Personally I've never found kubernetes that difficult to use[1]. It has some weird, unpredictable bits, but so do sysvinit and docker; that just ends up being whatever you're used to. [1] except for having to install your own network mesh plugin. That part sucked. | |
| ▲ | bharat1010 2 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | sdevonoes 6 hours ago | parent | prev | next [-] |
| Depends. For personal projects, yeah, definitely. But at work? Typically the “Platform” team can only afford to support 1 (maybe 2) ways of deployment, and k8s is quite versatile, so even if you need 1 small service, you’ll go with the self-service-k8s approach your Platform team offers. Because the alternative is for you (or your team) to own the whole infrastructure stack for your new deployment model (ECS? Lambda? Whatever): you need to set up service accounts, secret paths, firewalls, security, pipelines, registries, and so on. And most likely, no one will give you access rights for all of that, and your PM won’t accept the overhead either. So having everyone use the same deployment model (and that’s typically k8s) saves effort. I don’t like it, for sure. |
| |
| ▲ | limaho 4 hours ago | parent [-] | | This is where I'm at. Using Podman daily to run Python scripts and apps and it's been going great! However, trying to build things like monitoring, secure secret injection, centralized inventory, remote logging, etc. has fallen on us. It has led to some shadow IT (running our own container image registry, HashiCorp Vault instance, etc.), which makes me hesitant to share with others in the company how we're operating. I like to think that if we had a K8s environment, a lot of this would be built out within it. Having that functionality abstracted away from the developer would be a huge win in my opinion. | | |
|
|
| ▲ | moooo99 an hour ago | parent | prev | next [-] |
| > People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes. This is a problem I've run into in enterprise deployments. K8s is often the lowest common denominator that semi-small platform engineering teams arrive at. At my current employer, a platform-managed K8s namespace is the only thing we got in terms of a PaaS offering, so it is what we use. Is it overpowered? Yes. Is it overly complex for our use case? Definitely. Could we basically get by hosting our services on a few cheap mini computers with no performance penalty? Also yes. |
|
| ▲ | dajonker 9 hours ago | parent | prev | next [-] |
| I totally agree, but that's not what happens in reality: the average devops knows k8s and will slap it onto anything they see (if only so they can put it on their resume). The average manager hears about k8s, gets convinced they need it, and hires the aforementioned devops to build it. |
| |
| ▲ | goombaskoop 9 hours ago | parent | next [-] | | > the average devops knows k8s and will slap it onto anything they see This is certainly the case in all the third-person accounts I hear. Online. I've never actually met a single one like that; if anything, those same people are the first to tell me about their Hetzner setups. | |
| ▲ | hkt 8 hours ago | parent | next [-] | | DevOps here. The trouble is that we are literally expected to do this everywhere we go. I've personally advocated for approaches which use, say, a pair of dedicated servers, or VMs as in GP's example. If you want it outside of AWS/GCP/Azure, you're regarded as a crazy person. If you don't adopt "best practices" (as defined by vendors) then management are scared. Management very often trust the sales and marketing departments of big vendors more than their own staff. Many of us have given up fighting this, because what it comes down to is a massive asymmetry of information and trust. | |
| ▲ | regularfry 7 hours ago | parent | next [-] | | There is a kernel of validity lurking in the heart of all this, which is that immutable images you can throw away and refresh regularly are genuinely better than long-running VMs with an OS you've got to maintain, with scope for vulnerabilities unrelated to the app you actually want to run. Management has absorbed this one good thing and slapped layer after layer of pointless rubbish on it, like a sort of inverse pearl. Being able to say "we've minimised our attack surface with a scratch image" (or alpine, or something from one of the secure image vendors) is a genuinely valuable thing. It's just the all of the everything that goes along with it... | |
| ▲ | dijit 6 hours ago | parent [-] | | Sure. The challenge is convincing people that "golden images" and containers share a history, and that kubernetes didn't invent containers: it just solved load balancing and storage abstraction for stateless message architectures in a nice way. If you're doing something highly stateful, or that requires a heavy deployment (game servers are typically tens of GB and have rich dynamic configuration in my experience), then kubernetes starts to become round-peg-square-hole. But people buy into it because the surrounding tooling is just so nice; and like GP says: those cloud sales guys are really good at their jobs, and kubernetes is so difficult to run reliably yourself that it gets you hooked on cloud. There's a literal army of highly charismatic, charming people who are economically incentivised to push this technology, and it can be made to work, so the odds, as they say, are against you. |
| |
| ▲ | vladvasiliu 5 hours ago | parent | prev | next [-] | | > If you want it outside of AWS/GCP/Azure, you're regarded as a crazy person. If you don't adopt "best practices" (as defined by vendors) then management are scared. Management very often trust the sales and marketing departments of big vendors more than their own staff. Many of us have given up fighting this, because what it comes down to is a massive asymmetry of information and trust. I think this is the crux of the matter. Also, "everybody is doing it, so they must be right" is a very common way of thinking amongst this population. | |
| ▲ | nz 5 hours ago | parent | prev | next [-] | | The following happened to a friend. Around the time of the pandemic, a company wanted to make some Javascript code do a kind of transformation over a large number of web-pages (a billion or so, fetched as WARC files from the web archive). Their engineers suggested setting up SmartOS VMs and deploying Manta (which would have allowed the use of the Javascript code in a totally unmodified way -- map-reduce from the command-line, scaling with the number of storage/processing nodes), which should have taken a few weeks at most. After a bit of googling and meeting, the higher-ups decided to use AWS Lambdas and Google Cloud Functions, because that's what everyone else was doing, and they figured that this was a sensible business move because the job-market must be full of people who know how to modify/maintain Lambda/GCF code. Needless to say, Lambda/GCF were not built for this kind of workload, and they could not scale. In fact, the workload was so out-of-distribution that the GCP folks moved the instances (if you can call them that) to a completely different data-center, because the workload was causing performance problems for _other_ customers in the original data-center. Once it became clear that this approach could not scale to a billion or so web-pages, it was decided -- no, not to deploy Manta or an equivalent -- but to build a custom "pipeline" from scratch that would do this. This system was in development for 6 months or so, and never really worked correctly/reliably. This is the kind of thing that happens when non-engineers can override or veto engineering decisions -- and the only reason they can do that is because the non-engineers sign the paychecks (it does not matter how big the paycheck is, because the market will find a way to extract all of it). One of the fallacies of the tech-industry (I do not mean to paint with too broad a brush; there are obviously companies out there that know what they are doing) is that there are trade-offs to be made between business decisions and engineering decisions. I think this is more a kind of psychological distortion or a false choice (forcing an engineering decision on the basis of what the job market will be like some day in the future -- during a pandemic no less -- is practically delusional). Also, if such trade-offs are true trade-offs, then maybe the company is not really an engineering company (which is fine, but that is kind of like a shoe-store having a few podiatrists on staff -- it is wasteful, but they can now walk around in white lab-coats and pretend to be a healthcare institution instead of a shoe-store). Personally, I believe that the tech industry sustains itself via technical debt, much like the real economy sustains itself on real debt. In some sense, everyone is trying to gaslight everyone else into incurring as much technical debt as possible, so that a way to service the debt can be sold. Most of the technical debt is not necessary, and if people were empowered to just not incur it, I suspect it would orient tech companies towards making things that actually push the state of the art forward. | |
| ▲ | 3 hours ago | parent | next [-] | | [deleted] | |
| ▲ | jcgrillo 5 hours ago | parent | prev [-] | | There was a moment ca. 2020 when everyone was losing their minds over Lambda and other cloud services like SQS and S3 because they're "so cheap!!11". Innumeracy is a hell of a drug. | | |
| ▲ | p_l 4 hours ago | parent [-] | | Still is, just the details change. A lot of criticism of k8s is always centered on some imagined perfect PaaS, or related to being in a very narrow goldilocks zone where the costs of "serverless" are easier to bear... |
|
| |
| ▲ | jcgrillo 5 hours ago | parent | prev [-] | | > Management very often trust the sales and marketing departments of big vendors more than their own staff. They're getting kickbacks from cloud vendors. Prove me wrong. | | |
| ▲ | r_lee 2 hours ago | parent [-] | | Not sure if this is a thing with Cloud vendors, but e.g. in Finance, you'll definitely get the opportunity to call your rep over for free fancy dinners or whatever you want, because those are "customer meetings". Better than nothing; I don't blame 'em. |
|
| |
| ▲ | ownagefool 5 hours ago | parent | prev [-] | | To be fair, I have k8s on my hetzner :p |
| |
| ▲ | darkwater 8 hours ago | parent | prev | next [-] | | And the average developer doesn't even know where to start to deploy things in prod. Once the feature the product side asked for passes QA... on to the next sprint! We are done! | |
| ▲ | chrisweekly 6 hours ago | parent [-] | | Whose responsibility is it to establish the prerequisite CI/CD pipelines, HITL workflows, and observability infra in order for devs to shepherd changes to prod (and track their impact)? Hint: it's not the developer's. | |
| ▲ | philipallstar 5 hours ago | parent | next [-] | | This was the point of "devops" (the concept, not the job title): the team should be responsible for development and operations, so one isn't prioritised hugely over the other. | |
| ▲ | liveoneggs 6 hours ago | parent | prev [-] | | But those things all require more pods on the cluster! We've looped back around to the beginning. | | |
| ▲ | darkwater 6 hours ago | parent [-] | | Exactly my point.
But then developers: "I just want my Heroku days back!" But with a sufficiently big company there are maaany developers doing things their own slightly different way, and then other effects start compounding, and then costs go up because 15 different teams are using 27 different solutions and and and... But yeah, let's just spin up a shadow IT VM with Debian like GP said, it's easy! | |
| ▲ | throwup238 5 hours ago | parent [-] | | > But yeah, let's just spin up a shadow IT VM with Debian like GP said, it's easy! That’s literally how they sold AWS in the beginning. Cloud won not because of costs or flexibility but because it allowed teams to provision their own machines from their own budget instead of going through all the red tape with their IT departments, creating… a bunch of shadow IT VMs! Everything old is new again, except it works on an accelerated ten-year cycle in the IT industry. | |
| ▲ | darkwater an hour ago | parent [-] | | Indeed. And it stems from the illusion that what works for solo devs, small teams, and scrappy startups works the same when you are bigger, and that a developer can take on all the work that's corollary to the actual product development. And yes, a dev that's able to do that properly (stress on properly) is indeed a signal of a better overall developer, but they are a minority, and anyway, as orgs scale up, there is just too much "side salad" and it becomes a separate dish. |
|
|
|
|
| |
| ▲ | tete 7 hours ago | parent | prev [-] | | > the average devops knows k8s If you knew Kubernetes, you'd know not to use it. I say that as someone who used to do consulting for it. The reality is that yet again "making money" completely collides with efficient, quality, sane, productive work. For me one of the main reasons to leave that space was that I couldn't really deal with the fact that my work collided with a client's success. That said, I have helped companies get off that stuff and other things they thought they needed that just wasted time and money. It just feels odd going into a company that hired you to consult on a topic only to end up telling them "The best approach for you is not doing that at all". Often never. Like, some people thought "Well, what if we have hundreds of thousands or even millions of users?", and the reality was that even in these scenarios, if you moved away from that abstract thought and discussed a hypothetical based on their actual product, they realized they'd still be better off without it. Besides, that hypothetical was often far enough in the future that they'd admit they'd likely have a completely different setup by then, so preparing for it didn't even make sense. I think a big thing related to that was/is the microservice craze, where people move to a complex architecture for not many good reasons and then increase complexity way faster than what they actually deliver in terms of the product, because it somehow feels good. I know it does, I've been there. In reality the outcome is often just a complex mess of what could have been a relatively simple monolith. And these monoliths do work. And in the vast majority of cases they are easy to scale, because your problem switches from "how do we best allocate this huge number of very different services across our infrastructure" to (for the most part) "how do we spin up our monolith on one more server", which tends to be a way easier problem to tackle. And nothing stops you from still using everything else if you want. Just because it's a monolith doesn't mean you need to skip any of the cloud offerings, etc. For some reason there seems to be this idea that if you write a monolith you are somehow barred from using modern tooling, infrastructure, services, etc. Not sure where that comes from. | |
| ▲ | r_lee 2 hours ago | parent [-] | | I think one big problem is that using a microservice architecture doesn't mean that literally everything has to be a "microservice". If you don't truly need granular scaling (i.e. your "app" doesn't get a bunch of asymmetric load across different paths), then you can just have more monolithic "microservices" until they need to be split up. Imo this should achieve a nice balance? |
|
|
|
| ▲ | tjarjoura 5 hours ago | parent | prev | next [-] |
| In some sense, Kubernetes is just a portable platform for running Linux services, even on a single node using something like K3s. I almost see it as being an extension of the Linux OS layer. |
| |
| ▲ | acedTrex 5 hours ago | parent | next [-] | | This is what I do for small stuff: a Debian VM with k3s on it, for a nicer HTTP-based deployment API. | |
| ▲ | throwaway894345 2 hours ago | parent | prev | next [-] | | Yep, this is the way. K8s is just a platform for running services on one or more computers without needing to know about those computers individually, and even if your scale is 1, it's often easier to install k3s and manage your services with it rather than memorizing a bunch of disparate tools with their own configuration languages, filepath conventions, etc. It's just a lot easier to use k3s than it is to cobble together stuff with traditional Linux tools. It's a standard, scalable pane of glass, and as much as I may dislike kubectl, it's worlds better than systemctl and journalctl and the like. | |
| ▲ | sgt 5 hours ago | parent | prev [-] | | Then why can't we put a wrapper on systemd and make that into a lightweight k8s? | |
| ▲ | tjarjoura 3 hours ago | parent | next [-] | | This may be familiarity bias, but I often find `kubectl` and related tools like `k9s` more ergonomic than `systemctl`/`journalctl`, even for managing simple single-replica processes that are bound to the host network. | |
| ▲ | marcosdumay 4 hours ago | parent | prev | next [-] | | Systemd is on the wrong layer here. You need something that can set your machine up, like docker. | | |
| ▲ | jasonjayr 2 hours ago | parent | next [-] | | Systemd seems to be moving in that direction; the features are coming together to actually enable this. Though imagining the unholy existence of an init system whose only job is to spin up containers, which can contain other inits, OS images, or whatever ..... turtles all the way down. | |
| ▲ | sgt 2 hours ago | parent | prev [-] | | Okay, it sets the machine up, but not the underlying host machine. |
| |
| ▲ | enos_feedler 5 hours ago | parent | prev | next [-] | | Remember fleet? | |
| ▲ | jcgl 2 hours ago | parent | prev [-] | | See Podman quadlets. | | |
|
|
|
| ▲ | geodel 3 hours ago | parent | prev | next [-] |
| Doing Kubernetes, like doing Agile, is mandatory nowadays. I've been asked to package a 20-line bash script as a docker image so it can be delivered via a CI/CD pipeline to Kubernetes pods in the cloud. The value is not that I got the job done at a day's notice; it is a black mark that I couldn't package it as per industry best practices. Not doing so would mean being out of a job. Whether it is happening correctly is not something decision makers care about, as long as it is getting done somehow. |
| |
| ▲ | johnmaguire 3 hours ago | parent [-] | | There are many organizations which still ship software without Kubernetes. Perhaps even the vast majority. | | |
| ▲ | geodel 2 hours ago | parent [-] | | Of course. I used to think I was working for one such organization, for a long time. Until leadership decided "modernization" was a top priority for IT teams, as we were lagging far behind. | |
|
|
|
| ▲ | firesteelrain 23 minutes ago | parent | prev | next [-] |
| We have a hobby web-based app that consists of multiple containers. It runs in docker compose. Serves 1000 users right now (runs 24/7). Single VM. No Kubernetes whatsoever. I agree with you. |
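Roughly (services simplified and renamed, not our real file), the whole stack is one docker-compose.yml along these lines:

    services:
      web:
        image: ghcr.io/example/webapp:latest   # hypothetical image
        ports:
          - "443:8443"
        depends_on:
          - db
        restart: unless-stopped
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: ${DB_PASSWORD}    # kept in an .env file
        volumes:
          - dbdata:/var/lib/postgresql/data
        restart: unless-stopped
    volumes:
      dbdata:

docker compose up -d on the VM and it serves; at this scale the orchestration problem k8s solves just doesn't exist.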
|
| ▲ | Thanemate 9 hours ago | parent | prev | next [-] |
| I know that "resume-driven development" exists, where the tradeoffs between approaches aren't about the technical fit of the solution but about career trajectory. I've seen people write plain workstation-preparation scripts in Rust, only to have something to flex about in interviews. I'm not surprised in the slightest that DevOps workers will slap k8s on everything to show "real industry experience" in a job market where resumes are matched against tools. |
| |
| ▲ | capitol_ 3 hours ago | parent | next [-] | | Your first example sounds very sensible to me? Using new technology in something small and unimportant like a setup script is a perfect way to experiment and learn. It would be irresponsible to build something important as the first thing you do in a new language. | |
| ▲ | bananamogul 24 minutes ago | parent | next [-] | | For your own use, yes. But if you're working with others, you should default to using standard industry tools (absent a compelling reason not to) because your work will be handed off to others and passed on to new team members. It's unreasonable to expect that a new Windows or Linux sysadmin or desktop support tech must learn Rust to maintain a workstation setup workflow. | |
| ▲ | r_lee 2 hours ago | parent | prev [-] | | agreed. I think if we all went with this HN mindset of "html4 and PHP work just fine" we wouldn't have gone anywhere with regards to all the technical advancements we enjoy today in the software space |
| |
| ▲ | JALTU 2 hours ago | parent | prev | next [-] | | We are building a religion, we are building it bigger
We are widening the corridors and adding more lanes
We are building a religion, a limited edition
We are now accepting coders linking new AI brains (Apologies to Cake. And coders.) | |
| ▲ | ororoo 8 hours ago | parent | prev [-] | | There are also people with a devops title who don't know anything other than the hammer, and then everything is a hammer problem. I mean, I worked with people who were surprised that you can run more applications inside an ec2 vm than just 1 app. | |
| ▲ | tete 7 hours ago | parent [-] | | > There are also people with a devops title who don't know anything other than the hammer, and then everything is a hammer problem. To be fair though, that's true for every profession or skill. > I mean, I worked with people who were surprised that you can run more applications inside an ec2 vm than just 1 app. I've seen something similar, where people were surprised that you can use object storage (so, effectively, "make HTTP requests") from every server. |
|
|
|
| ▲ | hombre_fatal 4 hours ago | parent | prev | next [-] |
| k8s is useful when you have services that must spin up and down together, and you want to swap out services and deploy all/some/one of them. And then also package this up so that you and other developers can get the infrastructure running locally or on other machines. |
|
| ▲ | throwaway894345 2 hours ago | parent | prev | next [-] |
| Even if using just one VM, I'll probably slap k3s on it and manage my application using manifests. It's just so much easier than dealing with puppet or chef or vanilla cloud-init. Docker compose works too, but at that point it's just easier to stick with k3s and then I can have nice things like background jobs, a straightforward path to HA, access to an ecosystem of existing software, and a nicer CLI. |
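As one example, the "background jobs" bit is a single small manifest (names are made up) instead of a hand-managed crontab, and kubectl logs replaces digging through the host:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-backup                # hypothetical job name
    spec:
      schedule: "0 3 * * *"               # every night at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure    # rerun the container if it fails
              containers:
                - name: backup
                  image: ghcr.io/example/backup:latest   # hypothetical image
                  args: ["--target", "s3://example-bucket/backups"]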
| |
| ▲ | tayo42 an hour ago | parent [-] | | That's what I don't get when people bring up this idea that k8s is complicated. All of those other tools are complicated and fragile too. |
|
|
| ▲ | rvz 9 hours ago | parent | prev | next [-] |
| They use it to inflate their resume for career progression rather than actually evaluating whether they need it in the first place. This is why you get many folks over-thinking the solution, picking the most hyped technologies, and using them to solve the wrong problems without thinking about what they are selling. You don't need K8s + AWS EC2 + S3 just to host a web app. That tells me they like lighting money on fire, bankrupting the company, and moving on to the next one. |
| |
| ▲ | p_l 4 hours ago | parent [-] | | Often the alternatives presented to me in discussions as cheaper are actually burning money. But given how often I see "you don't need k8s because you're not going to scale that fast", I feel like even professional k8s operators have missed its fundamental design goal :/ (maximizing utilization of finite compute) |
|
|
| ▲ | altmanaltman 8 hours ago | parent | prev | next [-] |
| yeah, it's like wanting to drive to the mall in the Space Shuttle and then complaining that it's too complicated |
|
| ▲ | littlestymaar 7 hours ago | parent | prev [-] |
| I have no doubt that there are legit use cases for something like k8s at Google or other multi-billion-dollar companies. But if its use were confined to those cases, pretty much nobody would be using it (except as a customer of such an organization's infra) and barely anyone would be talking about it (much as there isn't much talk about Borg). The reason k8s is a thing in the first place is that it's being used by way too many people for their own good. (Most people who have worked in startups have met too many architecture astronauts in their lives.) If I had to bet, I'd wager that 99% of k8s users are in the “spin a few containers to run your web app” category (for the simple reason that for every one billion-dollar tech business using it for legit reasons, there are many thousands of early startups that do not). |
| |
| ▲ | rantanplan 7 hours ago | parent [-] | | The only legit use case for companies like Google/Amazon etc. is to sell it to customers. None of these companies use K8s internally for real critical workloads. | |
| ▲ | bitexploder 5 hours ago | parent | next [-] | | Ehm, that is simply not true. Google built it for themselves first. It is essentially the open source version of the internal architecture. It gets used. | | |
| ▲ | zaphar 5 hours ago | parent | next [-] | | I worked at google. k8s does not really look at all like what they used internally when I was there, aside from sharing some similar looking building blocks. | | |
| ▲ | oblio 5 hours ago | parent [-] | | Yeah, but is the internal tool simpler? I'd be surprised. | | |
| |
| ▲ | akdev1l 5 hours ago | parent | prev [-] | | Also Amazon definitely uses k8s for stuff. Teams are free to use EKS internally. |
| |
| ▲ | oblio 5 hours ago | parent | prev | next [-] | | Google uses Kubernetes' grandpa, called Borg, for everything. But to quote someone: "you are not Google". | |
| ▲ | littlestymaar 5 hours ago | parent | prev [-] | | I said “something like k8s” above, and Google for sure uses something like k8s called Borg. | | |
|
|