|
| ▲ | wavemode 5 hours ago | parent | next [-] |
| Uptime is much, much easier at low scale than at high scale. The reason for buying centralized cloud solutions is not uptime, it's to save the headache of developing and maintaining the thing. |
| |
| ▲ | manquer an hour ago | parent | next [-] | | It is easier until things go down. The cloud may go down more frequently than small-scale self-hosted deployments, but downtimes are, on average, much shorter on the cloud. A lot of money is at stake for cloud providers, so GitHub et al. have resources to throw at a problem that you or I don't have when self-hosting. When a self-hosted system goes down, it is far more difficult or expensive to have on-call engineers who can actually restore service quickly. The pool of people who can understand and fix such problems is limited, so semi-skilled engineers take longer, even though the failure modes are simpler (but not simple). The skill gap between setting up something that works locally and something that works reliably is vast, and the talent for the latter is scarce to find or retain. | |
| ▲ | tyre 5 hours ago | parent | prev [-] | | My reason for centralized cloud solutions is also uptime. Multi-AZ RDS gives far higher availability than anything I'd manage myself. | | |
| ▲ | wavemode 5 hours ago | parent [-] | | Well, just a few weeks ago we weren't able to connect to RDS for several hours. That's way more downtime than we ever had at the company I worked for 10 years ago, where the DB was just running on a computer in the basement. Anecdotal, but ¯\_(ツ)_/¯ | | |
| ▲ | sshine 3 hours ago | parent [-] | | An anecdote that repeats. Most software doesn’t need to be distributed. But under the growth paradigm we build everything on principles that can scale to world-wide, low-latency accessibility. A UNIX pipe gets replaced with a $1200/mo. maximum-IOPS RDS channel, bandwidth not included in the price. Vendor lock-in guaranteed. |
|
| ▲ | jakewins 5 hours ago | parent | prev | next [-] |
| “Your own solution” should be that CI isn’t doing anything you can’t do on developer machines. CI is a convenience that runs your Make or Bazel or Just builds (or whatever you prefer), and your production systems should work fine without it. I’ve seen that approach keep critical stuff deployable through several CI outages first hand, and it has the added upside of making “CI issues” trivial to debug, since it’s trivial to run the same target locally. |
| |
| ▲ | CGamesPlay 2 hours ago | parent [-] | | Yes, this, but it’s a little more nuanced because of secrets. Giving every employee access to the production deploy key isn’t exactly great OpSec. |
|
|
| ▲ | tcoff91 6 hours ago | parent | prev | next [-] |
| Compared to 2025 GitHub, yeah, I do think most self-hosted CI systems would be more available. GitHub has been going down weekly lately. |
| |
| ▲ | Aperocky 5 hours ago | parent [-] | | Aren't they halting all work to migrate to Azure? That does not sound like an easy thing to do, and it seems quite likely to cause unexpected problems. | | |
| ▲ | macintux 2 hours ago | parent [-] | | I recall the Hotmail acquisition and the failed attempts to migrate the service to Windows servers. | | |
| ▲ | drykjdryj an hour ago | parent [-] | | Yes, this is not the first time GitHub has tried to migrate to Azure. It's something like the fourth attempt. |
|
| ▲ | deathanatos 4 hours ago | parent | prev | next [-] |
| Yes. I've quite literally run a self-hosted CI/CD solution, and in terms of total availability, I believe we outperformed GHA. We moved to GHA b/c nobody ever got fired ^W^W^W^W leadership thought eng running CI was not a good use of eng time. (Without much question into how much time was actually spent on it… which was pretty close to none. Self-hosted stuff has a high initial setup cost… and then just kinda runs.) Ironically, one of our self-hosted CI outages was caused by Azure: we have to get VMs from somewhere, and Azure… simply ran out. We had to swap to a different AZ merely to get compute. The big upside of a self-hosted solution is that when stuff breaks, you can hold someone's feet to the fire. (Above, that would be me, unfortunately.) With GitHub? Nobody really cares unless it is so big, and so severe, that they're more or less forced to, and even then the response is usually lackluster. |
|
| ▲ | prescriptivist 5 hours ago | parent | prev | next [-] |
| It's fairly straightforward to build resilient, affordable, and scalable pipelines with DAG orchestrators like Tekton running in Kubernetes. Tekton in particular has the benefit of being low-level enough that it can be plugged into whatever CI tool sits above it (Jenkins, Argo, GitHub Actions, whatever), and is relatively portable. |
|
| ▲ | davidsainez 6 hours ago | parent | prev | next [-] |
| Doesn’t have to be an in-house system; just basic redundancy is fine, e.g. a simple hook that pushes to both GitHub and GitLab. |
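| One way to sketch that redundancy is git's support for multiple push URLs on a single remote (the repository paths below are placeholders):

```shell
# Give "origin" two push URLs so a single push mirrors the repo
# to both hosts. The repository paths are placeholders.
git remote set-url --add --push origin git@github.com:example/repo.git
git remote set-url --add --push origin git@gitlab.com:example/repo.git

# From now on, `git push origin main` updates both remotes,
# so either host can serve as the source of truth during an outage.
```

Note that the first explicit push URL replaces the implicit one (the fetch URL), so both lines are needed even if one host is already the fetch remote. |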
|
| ▲ | nightski 6 hours ago | parent | prev [-] |
| I mean, yes. We've hosted internal apps with four-nines reliability for over a decade without much trouble. It depends on your scale of course, but for a small team it's pretty easy. I'd argue it is easier than it has ever been, because now you have open-source software that is containerized and trivial to spin up and maintain. The downtime we do have each year is also typically on our terms, not in the middle of a work day or at a critical moment. |