| ▲ | jedberg 4 days ago |
| It's not great. I just talked to a Hubber last week. They said everyone inside feels pretty dejected right now, and these posts don't help. I feel for them -- with AI coders submitting 25 PRs within an hour of an issue being filed, GitHub bears the brunt of that along with the maintainers. Each of those PRs triggers a lot of work. But they need to make some changes quickly. |
|
| ▲ | zipy124 4 days ago | parent | next [-] |
| But the amount of compute needed to serve it is not very high. It's all text. Serving Netflix or YouTube takes far more bandwidth and compute, and they manage just fine. |
| |
| ▲ | jedberg 4 days ago | parent | next [-] | | Netflix and YouTube both built custom CDNs. Netflix uses AWS for control plane only. Also, respectfully, you have no idea what you're talking about. "Just text" doesn't make it easy to solve. GitHub Actions aren't just text and take a lot of compute. | | |
| ▲ | zipy124 2 days ago | parent [-] | | You're right, GitHub Actions do indeed take a lot of compute, but the status incidents do not seem to be limited to just Actions. I never said "just text" makes it easy to solve, just that I felt Netflix and YouTube solved harder problems (in terms of serving the load), as demonstrated by their custom CDNs and other engineering feats. YouTube gets a similar number of videos uploaded per day as GitHub gets commits (20 vs. 39 million, from the 275 million a week number listed elsewhere in this thread), and I can't believe those are equivalently hard to serve in terms of compute and bandwidth. I agree that it is not an easy problem when load scales the way it has for them, and I feel for the technical folks there, but I don't disagree with the level of dissatisfaction directed their way when customers who pay GitHub large sums of money don't receive adequate service. |
| |
| ▲ | mghackerlady 4 days ago | parent | prev | next [-] | | They also aren't using Azure. IDK what YouTube is on, but Netflix has actually faced its problems and found solutions (FreeBSD, mostly) | |
| ▲ | the_sleaze_ 4 days ago | parent | prev [-] | | They should migrate to AWS. It's webscale |
|
|
| ▲ | giwook 4 days ago | parent | prev | next [-] |
| I wouldn't feel too bad for them with their top-of-market comp and valuable RSU packages. |
| |
| ▲ | jedberg 4 days ago | parent | next [-] | | I don't believe they pay top of market, but even if they did, it's possible to make a lot of money and still feel bad when you have a sense of ownership and responsibility to the users of your service. | | |
| ▲ | giwook 4 days ago | parent [-] | | You missed my point. | | |
| ▲ | jedberg 4 days ago | parent [-] | | Apparently so did everyone else. What was your point? | | |
| ▲ | giwook 4 days ago | parent [-] | | The comment I responded to said they felt bad for GH employees. I was saying I don't feel all that bad given they are well-compensated white collar workers (like many of us here). Life is pretty good if one's biggest concern is work stuff and you're not personally in danger or actively being harmed. That's all I'm saying. | | |
|
|
| |
| ▲ | batshit_beaver 4 days ago | parent | prev [-] | | GitHub doesn't pay top of market. | | |
| ▲ | giwook 4 days ago | parent [-] | | You're right. That being said, 300k TC for E4 is still pretty good. Plus the RSUs have gone up like 60% in the last several years so that 300k package from a few years ago is maybe 350k or more by now. My point is that they are compensated well. They should be feeling pressure to get this stuff right when their product is core infrastructure for a majority of the digital products that exist today. |
|
|
|
| ▲ | JamesSwift 4 days ago | parent | prev | next [-] |
| I just don't really buy the explanation, though. It seems so solvable to hack a throttle or something into place, especially for non-paid plans. The cracks were also showing before AI hit the scene. I'm not saying this is the end-game solution, but they absolutely could have put temporary safeguards in place while they "figure it out" if it _really_ is just AI-driven slop setting their computers on fire. |
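The kind of throttle being suggested here is a standard rate-limiting pattern. A minimal sketch, assuming a simple per-user token bucket (all names and thresholds are invented for illustration, not anything GitHub actually uses):

```python
import time

class TokenBucket:
    """Per-user limiter: allow bursts up to `capacity`, refilled at `refill_per_sec`."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Top up tokens for the time elapsed since the last check, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. unpaid accounts: bursts of up to 5 PRs, refilling one slot every 10 minutes
bucket = TokenBucket(capacity=5, refill_per_sec=1 / 600)
```

The 26th PR in an hour would simply get a 429 instead of a CI run; paid plans could get a bigger bucket.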
|
| ▲ | jcgrillo 4 days ago | parent | prev | next [-] |
| The whole "anyone can submit a PR" thing has been a UX issue from day one. That probably needs to go away, and I doubt anyone would really miss it. Where Github could help is by providing a means to build trust that doesn't involve random unknown people slinging code at projects. |
| |
| ▲ | jedberg 4 days ago | parent [-] | | Any sort of trust requirement would break the entire model and cause some serious inequality. How would a random kid in a 3rd world country ever get noticed enough to enter a trust circle, for example? | | |
| ▲ | RoddaWallPro a day ago | parent | next [-] | | Mitchell Hashimoto started a tool for this, https://github.com/mitchellh/vouch. | |
| ▲ | jcgrillo 4 days ago | parent | prev | next [-] | | That's a hard problem! I don't know. But when we select colleagues, we build trust before we let them in the building: we interview them, look at their work, check their references. So maybe there's some analogous process, one that isn't just "here's a big PR, look at it," that would be useful? If there were such a process, maybe that kid could go through it and become trusted. EDIT: from GitHub's selfish perspective, this would gatekeep their CI load. I assume (I have no idea, it's just a guess) that serving source code and handling commits is not the primary scale problem; instead (again, just guessing) the vast majority of the compute load from PRs is running all the CI checks. Nontrivial projects can spawn a hell of a lot of compute per PR, and again on every subsequent commit pushed while the PR is open. | |
| ▲ | roadbuster 4 days ago | parent | prev [-] | | > would break the entire model The "model" - GH effectively allowing an overload of their infra - is already broken > How would a random kid in a 3rd world country ever get noticed enough to enter a trust circle By submitting a quality change with a clear description, preferably with unit tests? Is that no longer considered an acceptable hurdle? | | |
| ▲ | jedberg 4 days ago | parent [-] | | > By submitting a quality change with a clear description, preferably with unit tests? Is that no longer considered an acceptable hurdle? But the proposal is to specifically disallow that unless the person is already known. That is the model today, the one that people want to get rid of. | | |
| ▲ | roadbuster 4 days ago | parent [-] | | I think you are taking an excessive interpretation of what was suggested. Let's level-set on the issue: of late, GH has suffered a continuous stream of noteworthy outages. It is hypothesized that the underlying cause of the instability has been the dramatic rise in submissions from coding agents ("AI"). The open question is how (or whether) GH can get load down to a manageable level, with the proposal being, 'don't immediately allocate build/compute resources against any and all submissions.' I don't see why that is equivalent to rampant disenfranchisement in the open source community. I believe what people have in mind is closer to, "don't immediately trigger an expensive build process as soon as someone submits a pull request." | |
| ▲ | jcgrillo 4 days ago | parent [-] | | > "don't immediately trigger an expensive build process as soon as someone submits a pull request." Yes, and I'd add to that "don't immediately trigger an expensive review process". There's no good reason maintainers should have to be on the hook for screening submissions from the entire general public (including all the various OpenClaws or whatever)... It's an absolutely unreasonable thing to ask of anyone. So Github has the opportunity to both protect their own uptime and do a decent thing for the community by solving this problem. |
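The gating idea in this subthread can be sketched as a small policy function. This is a hypothetical illustration, not GitHub's actual logic; the field names and thresholds are invented, though GitHub's real "require approval for first-time contributors" setting is similar in spirit:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    author: str
    author_merged_prs: int   # prior merged PRs in this repo
    account_age_days: int

def ci_decision(pr: PullRequest) -> str:
    """Decide whether a PR should run CI immediately, wait for approval, or be batched."""
    if pr.author_merged_prs >= 1:
        return "run"    # known contributor: build right away
    if pr.account_age_days < 7:
        return "hold"   # brand-new account: require a maintainer's approval first
    return "queue"      # unknown but plausible: low-priority, batched CI
```

A policy like this protects both GitHub's compute (no instant Actions run for every drive-by PR) and the maintainer's attention, while still leaving a path in for the unknown contributor.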
|
|
|
|
|
|
| ▲ | Scubabear68 4 days ago | parent | prev [-] |
| "AI coders submitting 25 PRs within an hour of an issue being filed, GitHub bears the brunt of that....". What "brunt"? These are not large numbers. |
| |
| ▲ | jedberg 4 days ago | parent [-] | | Before AI coding, a GitHub issue might get one or two PRs after six months. AI coding has made this orders of magnitude bigger. The individual numbers are small, but they add up quickly. | | |
| ▲ | Scubabear68 4 days ago | parent [-] | | Maybe I am really dense, but a single issue getting 2 vs 25 PRs seems to be no practical difference. | | |
| ▲ | jedberg 4 days ago | parent [-] | | Well two in six months vs 25 in one hour. So that's a 54,000x increase. But also, each PR kicks off a bunch of CI work, often in GitHub Actions. |
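The 54,000x figure follows directly from the rates quoted above (taking six months as roughly 180 days):

```python
# 2 PRs per six months vs. 25 PRs per hour
hours_in_six_months = 6 * 30 * 24      # 4,320 hours
old_rate = 2 / hours_in_six_months     # PRs per hour, pre-AI
new_rate = 25                          # PRs per hour, post-AI
print(new_rate / old_rate)             # roughly 54,000
```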
|
|
|