| ▲ | mholt 8 hours ago |
| Even better IMO is this status page: https://mrshu.github.io/github-statuses/ "The Missing GitHub Status Page" with overall aggregate percentages. Currently at 90.84% over the last 90 days. It was at 90.00% a couple days ago. |
|
| ▲ | montroser 8 hours ago | parent | next [-] |
| It has been pretty rough. Their own numbers report just a single `9` for Actions in Feb 2026 with 98% uptime. But that said -- I don't get the 90% number. Anecdotally, it seems believable that Actions barfed about 1 in 50 times (2%) in Feb. That's not great, but it wasn't failing 1 in 10 times (10%). |
| |
| ▲ | verdverm 8 hours ago | parent [-] | | It looks like the aggregate stats are more of a Venn diagram than an average: if any 1 of N services is down, the aggregate is considered down. I don't think this is an accurate way to calculate it. It should be weighted, or should in some way show partial outages. This belief is derived from the Google SRE book, in particular chapters 3 (Embracing Risk) and 4 (Service Level Objectives) https://sre.google/sre-book/embracing-risk/ https://sre.google/sre-book/service-level-objectives/ | | |
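The two aggregation styles being debated here can be sketched in a few lines of Python. This is a toy illustration, not the status page's actual method; the service names, up/down flags, and weights are all made up:

```python
# "Venn" aggregation (a day is down if ANY service was down) vs. a
# weighted average of per-service uptimes. Data below is illustrative.

def aggregate_uptime(days):
    """A day counts as up only if every service was up that day."""
    return sum(all(d.values()) for d in days) / len(days)

def weighted_uptime(days, weights):
    """Per-service uptimes, averaged with weights for how much each matters."""
    total = sum(weights.values())
    per_service = {s: sum(d[s] for d in days) / len(days) for s in weights}
    return sum(per_service[s] * w for s, w in weights.items()) / total

days = [
    {"git": True, "actions": True,  "pages": True},
    {"git": True, "actions": False, "pages": True},   # Actions outage
    {"git": True, "actions": True,  "pages": False},  # Pages outage
    {"git": True, "actions": True,  "pages": True},
]

print(aggregate_uptime(days))  # 0.5 -- half the days had *some* outage
print(weighted_uptime(days, {"git": 3, "actions": 2, "pages": 1}))  # 0.875
```

With the same underlying data, the "any service down" view reports 50% uptime while a weighted view reports 87.5%, which is the gap the thread is arguing about.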
| ▲ | ablob 7 hours ago | parent | next [-] | | If you're using all services, then any partial outage is essentially a full outage.
Of course, you can massage the numbers to make it look nicer in the way you described but the conservative approach is better for the customers.
If you insist, one could create this metric for selected services only to "better reflect users". That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9. | | |
| ▲ | verdverm 7 hours ago | parent [-] | | > That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9. It's definitely bad no matter how you slice the pie. If GH Pages is not serving content, my work is not blocked. (I don't use GH Pages for anything personally) |
| |
| ▲ | marcosdumay 7 hours ago | parent | prev | next [-] | | That's how you count uptime. Your system is not up if it keeps failing when the user does something. The problem here is the specification of what the system is. It's a bit unfair to call GH a single service, but that's how Microsoft sells it. | | |
| ▲ | verdverm 7 hours ago | parent [-] | | > That's how you count uptime. It's not how I and many others calculate uptime. There is no uniformity, especially when you look at contracts. |
| |
| ▲ | bandrami 2 hours ago | parent | prev | next [-] | | Thinking back to when I was hosting, I think telling a customer "your web server was running fine it's just that the database was down" would not have been received well. | |
| ▲ | mort96 7 hours ago | parent | prev | next [-] | | I mean I think it's useful. It answers the question, "what percentage of the time can I rely on every part of GitHub to work correctly?". The answer seems to be roughly 90% of the time. | | |
| ▲ | verdverm 7 hours ago | parent | next [-] | | I don't use half of the services, so the answer is not straightforward https://mrshu.github.io/github-statuses/ | |
| ▲ | naniwaduni 7 hours ago | parent | prev [-] | | Nobody cares about every part of GitHub working correctly. I mean, ok, their SREs are supposed to, but tabling the question of whether that's true: if tomorrow they announced a distributed no-op service with 100% downtime, you should not have the intuition that the overall availability of the platform is now worse. |
| |
| ▲ | formerly_proven 7 hours ago | parent | prev [-] | | In a nutshell, why would the consumer (for the SLO) care about how the vendor sliced the solution into microservices? | | |
| ▲ | verdverm 7 hours ago | parent [-] | | It will depend on the contract. When I was at IBM, they didn't meet their SLOs for Watson and customers got a refund for that portion of their spend |
|
|
|
|
| ▲ | fontain 8 hours ago | parent | prev | next [-] |
| An aggregate number like that doesn’t seem to be a reasonable measure. Should OpenAI models being unavailable in CoPilot because OpenAI has an outage be considered GitHub “downtime”? |
| |
| ▲ | mort96 7 hours ago | parent | next [-] | | As long as they brand it as a part of GitHub by calling it "GitHub Copilot" and integrate it into the GitHub UI, I think it's fair game. | | |
| ▲ | mememememememo 7 hours ago | parent [-] | | What is Google's uptime (including every single little thing with Google in the name)? | | |
| ▲ | mort96 7 hours ago | parent | next [-] | | I don't think that's a fair comparison. Google Maps, Google Calendar, Google Drive, Google Search, Google Chrome, Google Ads, etc. are all clearly completely different products which have very little to do with each other; they're just made by the same company called Google. GitHub is a different situation. There's one "thing" users interact with, github.com, and it does a bunch of related things. Git operations, web hooks, the GitHub API (and thus their CLI tool), issues, pull requests, Actions; it's all part of the one product users think of as "GitHub", even if they happen to be implemented as different services which can fail separately. EDIT: To illustrate the analogy: Google Code, Google Search and Google Drive are to Google what Microsoft GitHub, Microsoft Bing and Microsoft SharePoint are to Microsoft. | | |
| ▲ | Kaliboy 7 hours ago | parent [-] | | Completely agree; it actually makes it worse, as GitHub's secondary functions, so to speak, are things we implicitly rely on. When I merge to master I expect a deploy to follow. This goes through git, webhooks and Actions. Especially the latter two can fail silently if you haven't invested time in observability tooling. If Maps is down I notice it and can immediately pivot. No such option with GitHub. |
| |
| ▲ | dogma1138 6 hours ago | parent | prev [-] | | It depends. For example, I would consider Google Drive uptime part of, say, Google Docs' overall uptime: if I can't access my stored documents, or save a document I've been working on for the past 3 hours, because Drive is down, I would be very pissed and wouldn't care whether it's Drive or Docs that is the problem underneath. I still can't use Google Docs as a service at that point. |
|
| |
| ▲ | fwip 8 hours ago | parent | prev [-] | | I think reasonable people can disagree on this. From the point of view of an individual developer, it may be "fraction of tasks affected by downtime" - which would lie between the average and the aggregate, as many tasks use multiple (but not all) features. But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time. | | |
| ▲ | remus 7 hours ago | parent | next [-] | | > But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time. Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard dependency on every user-facing GH feature. Or to put it another way, only a tiny fraction of users will have actually experienced something like the 90% uptime reported by the site. Most people in practice are probably experiencing something like 97-98%. | | |
| ▲ | fwip 7 hours ago | parent [-] | | Sorry, by 'customer' I meant to say something like a large corporate customer - you're buying the whole package, and across your org, you're likely to be a little affected by even minor outages of niche services. But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range. | | |
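fwip's point above, that observed reliability depends on how many services a given user touches, can be illustrated with a toy simulation. The service list and the 2% per-service outage rate are made-up assumptions, chosen only to show the shape of the effect:

```python
import random

random.seed(0)

SERVICES = ["git", "api", "actions", "pages", "webhooks", "copilot"]
P_DOWN = 0.02  # assume each service is independently down 2% of days

def simulate(n_days, used):
    """Fraction of simulated days on which every service in `used` was up."""
    ok = 0
    for _ in range(n_days):
        if all(random.random() > P_DOWN for _ in used):
            ok += 1
    return ok / n_days

# A user depending on all six services sees noticeably worse uptime
# than one who only touches git and the API.
print(simulate(100_000, SERVICES))        # near 0.98 ** 6, about 0.89
print(simulate(100_000, ["git", "api"]))  # near 0.98 ** 2, about 0.96
```

Under independence this is just `0.98 ** k` for a user of `k` services, which is why heavy users of the whole platform land near the pessimistic aggregate number while light users sit near the per-service figure.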
| |
| ▲ | mememememememo 7 hours ago | parent | prev | next [-] | | Or if your kettle is not working, is the house considered not working? | | |
| ▲ | Polizeiposaune 6 hours ago | parent [-] | | I've been on a flight that was late leaving the gate because the coffeemaker wasn't working. |
| |
| ▲ | wang_li 7 hours ago | parent | prev [-] | | A better analogy is if one bulb in the right rear brake light group is burnt out. Technically the car is broken. But realistically you will be able to do all the things you want to do unless the thing you want to do is measure that all the bulbs in your brake lights are working. | | |
| ▲ | Dylan16807 6 hours ago | parent [-] | | That's an awful analogy because "realistically you will be able to do all the things you want to do". If a random GitHub service goes down there's a significant chance it breaks your workflow. It's not always but it's far from zero. One bulb in the cluster going out is like a single server at GitHub going down, not a whole service. |
|
|
|
|
| ▲ | skipants 8 hours ago | parent | prev | next [-] |
| These are two pages telling two different things, albeit with the same stats. The information is presented by OP in a way meant to show the results of the Microsoft acquisition. |
|
| ▲ | goodmythical 2 hours ago | parent | prev [-] |
| holy shit that's nearly five weeks of downtime. Well, I mean, I guess that's fair really. How long has github been around? Surely it's got five weeks of paid time off by now... |