AlexB138 7 hours ago

Github has published some incredible usage rate increase numbers, which they ascribe to the rise of agentic coding. At some point, they are going to have to change rate limits, cut free-tier usage, or find some other path to reducing load. It's clear that their infrastructure can't keep up with this significant increase, and it's unlikely that they're going to just absorb the increased costs themselves.

Very curious to see what the future holds for Github.

eddyg 7 hours ago | parent | next [-]

From the GitHub COO on April 3rd:

    Platform activity is surging. There were 1 billion commits in 2025.
    Now, it's 275 million per week, on pace for 14 billion this year if
    growth remains linear (spoiler: it won't.)

    GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week
    in 2025, and now 2.1B minutes so far this week.

    So we're pushing incredibly hard on more CPUs, scaling services, and
    strengthening GitHub’s core features.
https://x.com/kdaigle/status/2040164759836778878
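As a quick sanity check, the weekly figure does line up with the quoted annual pace (back-of-the-envelope arithmetic only, not from the post):

    # Illustrative only: linear extrapolation of the quoted weekly rate.
    commits_per_week = 275_000_000
    weeks_per_year = 52

    annual_pace = commits_per_week * weeks_per_year
    print(f"{annual_pace / 1e9:.1f}B commits/year at a constant rate")  # ~14.3B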

They also had a recent blog post about availability: https://github.blog/news-insights/company-news/an-update-on-...

I don't envy the scaling issues the GitHub engineers are facing! #HugOps

munk-a 6 hours ago | parent | next [-]

After the Microsoft acquisition, GH marketing and pricing put an immense amount of effort[1] into trying to kill secondary platforms that integrated with GitHub and into moving more corporate accounts fully on-platform. We recently dropped Travis for GitHub Actions and dropped Reviewable for GitHub PRs (which are terrible).

There's a portion of this that is agentic driven and there's a portion of this that's just github making their own bed.

1. Arguably anticompetitive pricing, of the kind MSFT is used to employing with the Office suite.

foolswisdom 6 hours ago | parent | next [-]

In other words, the set of github core services has expanded because you don't use third party tooling for some of those services anymore.

munk-a 6 hours ago | parent [-]

For us, yes - and likely for a lot of other users. I'm not sure who else has dealt with the headache of being migrated off their legacy pricing plan, but the new plans push those in-house offerings a lot harder than the old approach did. If those conversions are succeeding, GitHub is likely seeing significantly more load from mature codebases with expensive CI/CD pipelines.

blks 3 hours ago | parent | prev [-]

That sounds like their classic EEE (embrace, extend, extinguish).

skylerwiernik 7 hours ago | parent | prev | next [-]

It's extremely interesting how fast this happened. Either AI use surged massively in the last quarter, or this is a very sneaky move by Anthropic. Looking at my own stats, I don't think I'm using Claude Code much more than I used to, but my commits have gone way up. I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

crystal_revenge 6 hours ago | parent | next [-]

> Either AI use surged massively in the last quarter

December 2025 is considered by many people to be a major step function in agentic coding (both due to improvements in harnesses and LLMs themselves). I know my coding has forever changed since then.

Before I was basically always hands on the keyboard while working with AI. Now I'm running experiments with multiple agents over the weekend, only periodically checking in if they have any questions or need further instruction.

The last quarter is where I personally first started to see how this was all going to change things (despite having worked on both the research and product side of AI for the last few years).

> I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

Agents certainly are committing more often, but I know, at least for these projects, there really is work being done. An example: I had an agent auto-researching a forecast I was working on. This is something I've done manually for over a decade now. The iteration process is tedious and time consuming, and would often take weeks of setting up and ultimately poorly documenting many, many experiments to see what works. Now I can "set it and forget it", and get the same results I would have in hours (with much more surface area covered and much better documentation). Each experiment is a branch (or work-tree) so yes there are a lot of commits happening, but the results are measurably real.

I often think the big divide in success with agents is whether or not the quality of one's work can be objectively measured. For those of us doing work that can be measured, the impact of agents is still hard to comprehend.

overfeed 4 hours ago | parent | next [-]

> Each experiment is a branch (or work-tree) so yes there are a lot of commits happening, but the results are measurably real.

If you are correct, and GitHub is scaling its compute mostly as a reaction to this externality (agents churning through code that will mostly be discarded), then you can look forward to getting billed for your usage. After all, it is hard to build a scalable system without back-pressure.
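(A token bucket is the textbook way to add that kind of back-pressure - a minimal sketch for illustration, not anything GitHub is known to use:)

    import time

    class TokenBucket:
        """Refuse work once a per-client budget is spent, so load sheds instead of piling up."""
        def __init__(self, rate_per_sec: float, capacity: float):
            self.rate = rate_per_sec          # tokens added back per second
            self.capacity = capacity          # maximum burst size
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False  # caller should back off, e.g. respond with HTTP 429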

crystal_revenge 3 hours ago | parent [-]

I've already started moving my personal projects off GitHub and onto Forgejo running on my homelab. I know a lot of people doing the same. With a hermes-agent as a sysadmin I can debug problems from my phone, so I wouldn't be surprised if I end up with more "9s" than GH.

But if it ends up costing extra for GH, especially for work usage, then it's just a simple calculation of "is this worth it?" which I suspect for most cases will be 'yes'.

overfeed 3 hours ago | parent [-]

> [...]it's just a simple calculation of "is this worth it?" which I suspect for most cases will be 'yes'

Once the landgrab-stage flat-pricing goes away, it will become a case-by-case calculation because unsupervised agents can (and will) run up your billing with zero understanding of the business value of what they're instructed to solve.

crystal_revenge an hour ago | parent [-]

> with zero understanding of the business value

What kind of products/services are you building where you aren't able to tie your eval suite to business value? If you can't, then why are you building whatever it is you're building in the first place?

By far one of the biggest changes I think we'll see in things being built by agents is a shrinking of the gap between code and value. The first stage is to make it possible to measure quality (evals), and the second stage is to more closely align that measurable quality with value. The business value of the tokens spent on my team was discussed on my first day.

> Once the landgrab-stage flat-pricing goes away

Aside from the above point, I'm already running local LLMs on my homelab that, while not quite what I want for truly production work, have been able to iterate on and solve real, non-trivial research tasks for effectively zero cost (energy cost was roughly on par with running an old light bulb).
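(For scale, with assumed numbers - say a ~60 W draw at $0.15/kWh over a weekend of unattended runs - the electricity bill really is negligible:)

    # Illustrative only: wattage and price are assumptions, not measured figures.
    watts = 60            # roughly an old incandescent bulb
    price_per_kwh = 0.15  # assumed USD per kWh
    hours = 48            # a weekend of unattended runs

    kwh = watts / 1000 * hours
    print(f"~{kwh:.1f} kWh, about ${kwh * price_per_kwh:.2f}")  # ~2.9 kWh, ~$0.43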

The way open, local models have been developing, there will be many cases where, if proprietary providers overcharge, it won't be a deal breaker to just switch to local models. Not to mention that there are plenty of open but non-local models that are already 5x cheaper and roughly on par with the mainstream model providers.

iknowstuff 4 hours ago | parent | prev [-]

What's your setup? How are your agents not running out of context and becoming dumb as a rock after ~100k tokens? Do you have some heartbeat process that spawns more agents each time?

crystal_revenge 4 hours ago | parent | next [-]

The most important thing for any agentic task is to build up and continue to record context as a project develops.

The start of basically any project involves building up and documenting context around the project itself (and, for a new company, the organization itself). This is kept at multiple levels of granularity (cross-project, project-specific, task-specific, and human-readable documentation). All experiments are planned out and documented as they go.

This becomes extremely important because, after a weekend of running experiments, stakeholders (and I) often have questions. With everything in memory or some other stored context, it's trivial to get answers to all sorts of them.
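(As a rough sketch of that "record context as you go" idea - the file path and helper below are hypothetical, not any particular harness's memory format:)

    import json, time
    from pathlib import Path

    MEMORY = Path(".agent-memory/experiments.jsonl")  # hypothetical location

    def log_experiment(branch: str, hypothesis: str, result: str) -> None:
        """Append one experiment record so later questions are answered from disk,
        not from whatever happens to still be in an agent's context window."""
        MEMORY.parent.mkdir(parents=True, exist_ok=True)
        record = {"ts": time.time(), "branch": branch,
                  "hypothesis": hypothesis, "result": result}
        with MEMORY.open("a") as f:
            f.write(json.dumps(record) + "\n")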

Maybe it's because of this, but in both Claude Code and Codex I haven't run into any issues with models getting "dumb as a rock"; even after compaction (or occasional full terminal crashes) they seem to have no trouble marching on.

martinald 4 hours ago | parent | prev [-]

Opus has 1M context now. In my experience it starts getting increasingly dumb after about 700k, but below that it is very usable. I don't think I've ever run out of the context window since they brought that out.

martinald 6 hours ago | parent | prev | next [-]

Many things at once I suspect:

1. Models have gotten way better, which means you are far more likely to get something working. I used to have little 'tool'/'weekend project' ideas all the time that never got off the starting blocks; now it often takes only a few minutes to build them, and once I've built them I tend to want them saved on GitHub. Quite how useful they turn out to be is another question though...

2. Related: because the models are a lot better, I can generate far more code per unit time. On Sonnet last year I'd have to babysit the model and constantly 'steer' it, which meant a lot of the CC time was actually me reviewing it. Now with Opus 4.7 it can often just churn away for 10-30 minutes and get something reasonable.

3. Most importantly, just the volume of new users of coding agents - loads of new developers shipping far more, far more frequently.

4. Many users who were not on GitHub are now signing up and pushing code to it: "vibe coders", basically, who don't have SWE experience and whose agent tells them git would be a good idea.

Each of these alone would be a big increase in scale, but combined it is very, very high.

tossandthrow 6 hours ago | parent | prev [-]

I don't think commits per se put pressure on the infrastructure.

More likely pulls and pushes, and, naturally, the CI minutes they identify as the main issue.

NewJazz 6 hours ago | parent [-]

But CI only increased by a factor of 2 since last year. Did they really not foresee that happening? And how does that affect git and API operations?

munk-a 6 hours ago | parent [-]

It really shouldn't. The technical summary they released[1] is a very interesting read from a software engineering perspective. It reads as if they were blindsided by the increased traffic, and it gives stats about commits/PRs (which should be relatively cheap for GitHub to process) without any insight into their web traffic or details on how much Actions is costing them. If they were truly transparent they'd release information about their request response times and the resourcing needed to meet them.

Their current path to resolution is to migrate their codebase to a new language[2], continue to drop their in-house ops for Azure resources, and get off MySQL. Maybe one or two of those steps are legitimately a good idea - I don't have the inside scoop - but technology migrations are always fraught with issues. It's quite possible these changes are just a result of them vibe-coding a mature codebase into a new language.

1. https://github.blog/news-insights/company-news/an-update-on-...

2. I'll grant that Ruby isn't the best language to use at scale, but I think we're all old enough to realize that language choice is far less impactful on performance than code quality.

hosh 5 hours ago | parent | next [-]

Azure's core hypervisor orchestrator was half-baked at launch and it has never been fixed. This long-read blog series explains a lot for me, for example why the FedRAMP certification program was never able to get a straight answer from Azure about how they handled secrets.

https://isolveproblems.substack.com/p/how-microsoft-vaporize...

https://www.kunalganglani.com/blog/microsoft-fedramp-failure...

evanelias 5 hours ago | parent | prev | next [-]

> migrate their codebase to a new language[2], continue to drop their inhouse ops for Azure resources and get off MySQL

The recent blog post you're linking to mentioned moving data only for webhooks off MySQL, not all relational data used by the entire site; and moving "performance or scale sensitive code out of Ruby", again not the entire codebase.

Do you have an official source suggesting these migrations are more comprehensive than that?

munk-a 5 hours ago | parent [-]

I do not know - this is the only source I'm aware of, and the wording is vague enough that the above is just my interpretation of it. It could be highly targeted, but the way it's worded smells to me like a much larger migration.

evanelias 5 hours ago | parent [-]

What part of the wording gives you that impression? On these topics, the post literally just says the following:

"bottlenecks that appeared faster than expected from moving webhooks to a different backend (out of MySQL)"

"Similarly, we accelerated parts of migrating performance or scale sensitive code out of Ruby monolith into Go" (in a paragraph specifically about "critical services like git and GitHub Actions")

Both of those sound highly targeted to me!

munk-a 5 hours ago | parent [-]

> While we were already in progress of migrating out of our smaller custom data centers into public cloud, we started working on path to multi cloud. This longer-term measure is necessary to achieve the level of resilience, low latency, and flexibility that will be needed in the future.

That paragraph read, to me at least, as saying that the initial targeted changes were just the tip of the iceberg and that much heavier lifting than initially budgeted was now in scope.

evanelias 4 hours ago | parent [-]

"smaller custom data centers into public cloud" is talking about their Azure migration, so "multi cloud" would almost certainly mean extending a presence into AWS and/or GCP (or maybe others like OCI).

I'm sorry but I really don't see how you're drawing conclusions about this meaning a move off of Ruby and MySQL entirely. That's a huuuge logical leap away from what is written in this post, and you originally stated it in a way that indicated this was a fact.

spockz 6 hours ago | parent | prev [-]

Re 2, I would generally agree, and there is a lot that can be done with caching. However, having written services in Rust and Golang, there is a whole other tier of speed available there. Architecture matters, code quality also matters, but Golang and Rust help a lot in making very fast services.

munk-a 5 hours ago | parent [-]

Yeah, I don't disagree. To clarify: Rust, Golang, etc. give you a very noticeable advantage when it comes to writing good, performant software, assuming you're putting in the effort on the design side. But poorly written Rust is likely going to be indistinguishable from poorly written Ruby.

siva7 6 hours ago | parent | prev | next [-]

It's the end of the free lunch era. Subsidizing groups like students or new users to gain market share worked as long as there weren't billions of them at the same time eating all the compute away from paying customers. It isn't working anymore for AI products.

po1nt an hour ago | parent [-]

Not a free lunch, data gold mine

wolfi1 7 hours ago | parent | prev | next [-]

I wonder how many of those actions are really necessary

PhilipRoman 6 hours ago | parent | next [-]

And how many of those actions do uncached downloads instead of building self-contained offline images... Speaking of which, I wonder if GitHub has implemented any HTTP interception for common mirror sites, like those used by apt, etc.

everfrustrated 5 hours ago | parent | next [-]

The GitHub and WarpBuild caches are so slow that it is often faster to re-download hundreds of MB each run than to cache them properly.

I so wish this wasn't the case.

spockz 6 hours ago | parent | prev [-]

Many downloads now go over HTTPS. Intercepting them would require having a certificate for those domains. IIRC the standard images on the clouds do have a sources list that points to mirrors on the cloud's network; I would presume GitHub Actions runners have the same.

Not sure if something similar exists for NPM which is big for all things JS.

munk-a 6 hours ago | parent [-]

Other CI/CD platforms usually push you towards using self-hosted mirrors for downloading large chunks of data (often aggressively so), but GitHub is pretty hands-off when it comes to Actions. It is interesting to consider whether managing that traffic might be overwhelming them and whether this can be traced back to a lack of forethought when building out those tools.

bravetraveler 7 hours ago | parent | prev [-]

Or how many pushes those commits are spread across; oh, neat, big number.

sgt 5 hours ago | parent | prev | next [-]

They can easily spin this as a massive success. Uptime will only matter for a small number of users. Probably not true, but not far from the truth either. I'm a heavy GitHub user and I can't really say it's THAT bad. If something doesn't work, you can always fill your time with something else.

hansmayer 7 hours ago | parent | prev [-]

Wow, nice to see the relentless push for more AI slop finally paying some dividends back to the issuer.

amluto 7 hours ago | parent | prev | next [-]

For literally decades, I’ve observed that there are systems that make each operation cheap and systems that work hard to scale out. The former frequently seems to wildly outperform the latter.

GitHub, for example, seems to implement the main repository /pulls page as a search query, which is hinted at by the prefilled search bar and was mostly confirmed last week when the search backend failed and pull requests didn’t load. But it could have been implemented as a plain API call that just loads open pull requests, and that API exists and did not go down.

If GitHub focused a bit on identifying their top 95% of high level operations (page loads including resulting API calls, for example) and making them efficient, I bet they could get a 5x or better reduction in backend load by simplifying them.

(Don’t even get me started on the diff viewer. I realize that much of its awfulness is the horribly inefficient front end, which does not directly load the back end, but I expect there is plenty of room for improvement. The plain git command line features are very fast.)
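(For reference, the plain list call in question is just a direct REST endpoint, roughly like the sketch below; the token handling is illustrative.)

    import requests

    def open_pulls(owner: str, repo: str, token: str | None = None) -> list[dict]:
        """List open PRs via the plain REST endpoint, no search index involved."""
        headers = {"Accept": "application/vnd.github+json"}
        if token:
            headers["Authorization"] = f"Bearer {token}"
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "open", "per_page": 100},
            headers=headers,
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()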

mnky9800n 7 hours ago | parent | next [-]

Are you telling me you don’t want a chat interface to greet you when you log in to GitHub?

amluto 6 hours ago | parent [-]

That’s sort of orthogonal. But if GitHub actually invoked an LLM on initial page load, that would be about par for the course, and it would be amusing for GitHub to then complain that they’ve grown so quickly that their systems can’t keep up.

stabbles 6 hours ago | parent | prev | next [-]

I noticed the same (https://news.ycombinator.com/item?id=47940213). My working hypothesis is that, given that a filter was always required (PRs and issues are likely rows in the same database with a bool column to distinguish them), someone thought it'd be good to use the search API uniformly. But search operates on an index derived from the underlying data, in contrast to the specific APIs for listing issues and PRs.

munk-a 6 hours ago | parent [-]

Working in an organization without a mono-repository, I've actually found it extremely difficult to keep tabs on PRs and issues across multiple repositories. For a problem that should be solved by a "For me" page that just lists all your active incoming and outgoing PRs, their multi-page solution involving search filters that often need to be reset feels extremely weak. I've worked on large multi-tenant solutions before, and a page where you can "SELECT * FROM everything LIMIT 10" is the absolute last thing you want to give to users.

It is bizarre to me that so much of their tooling defaults to acting across the whole of GitHub's data without guiding the user towards (or, as far as I can tell, even making available) a way to easily scope requests down outside of a complex search filter.

davideg 5 hours ago | parent [-]

Do you mean like https://github.com/pulls and https://github.com/issues ?

These are in the top left hamburger menu from the Home dashboard (edit: actually on all pages).

munk-a 5 hours ago | parent [-]

Hey, that's awesome - never mind me. I just got tripped up by their UI.

There's probably a fair argument about how discoverable these are (especially given their labeling as "All Issues" and "All Pull Requests") but that tip is quite helpful to me personally. Thanks for sharing it, I really appreciate it!

amluto 2 hours ago | parent [-]

And yet these are still (apparently) implemented as search queries instead of direct database queries.

munk-a an hour ago | parent [-]

There may be some magic they do to better optimize within-user searching. It's something they could hide in implementation details, so we can't be sure unless they spill the beans, but it's feasible - especially with the default search parameters they're using.

I'd still love something a bit more obvious and intuitive but if it's just a UX failure that makes me feel a lot better.

wavemode 7 hours ago | parent | prev | next [-]

Git itself is kind of a fundamentally computationally inefficient way to store and retrieve information. If the problem to solve were simply "store and version this text", 14 billion commits in a year would not even be considered a lot.

In other words, a centralized version control system built from the ground up to operate at scale would do far more for scalability than anything GitHub could possibly do to optimize their Git operations. Every major tech company (Amazon, Meta, Google, etc) is already doing something like this internally.

Though this would require people to start using a github-specific client rather than the traditional git+ssh. (Though the github client could still maintain a git repo locally, for compat.)

munk-a 6 hours ago | parent | next [-]

I can guarantee you one thing - github's problem isn't coming from git.

Considering all the CI/CD pipelines, PR & issue discussions, social-media-style tracking, rich data, and everything else that GitHub hosts, I would be gobsmacked if their true issue were the actual meat and potatoes of running git.

stabbles 6 hours ago | parent | prev [-]

What are you referring to when you say it's "fundamentally computationally inefficient"? It's pretty efficient because it's content-addressed, plus optimizations to reduce storage and data transfer with packfiles.
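(Content-addressed here means an object's ID is just a hash of its bytes; for a blob, the classic SHA-1 scheme is the sketch below, which matches what `git hash-object` prints for the same file.)

    import hashlib

    def git_blob_id(content: bytes) -> str:
        """Object ID git assigns a blob: SHA-1 over a small header plus the bytes.
        Identical content always hashes to the same ID, so it is stored only once."""
        header = f"blob {len(content)}\0".encode()
        return hashlib.sha1(header + content).hexdigest()

    print(git_blob_id(b"hello world\n"))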

galangalalgol 6 hours ago | parent [-]

I suspect they were referring to some of the things git allows for non-centralized version control. There are simplifications available if you just want a centralized system, like CVS was.

the_sleaze_ 7 hours ago | parent | prev [-]

I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

Microsoft bought GitHub and migrated it to Azure, which explains the findings. The query performance was fine before they started serving from Azure.

I mean, honestly, as though there isn't one single person competent enough to read some logs and horizontally scale a few read-only DBs to meet demand? That's not it.

AlexB138 7 hours ago | parent | next [-]

> I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

This is the opposite of my recollection, actually. I distinctly remember having conversations about Github struggling to scale well before MS was involved, and people claiming that MS had somehow saved Github because it had stabilized and begun adding features again.

> The query performance was fine before they started serving from Azure.

This may be correct though. The Azure migration seems more aligned with the timeline of struggling to scale.

the_sleaze_ 5 hours ago | parent [-]

> I distinctly remember having conversations about Github struggling to scale well before MS was involved

Do you have any sources to back your claim up? At what point did Github fail to scale their search endpoints?

> This may be correct

It is.

nvme0n1p1 6 hours ago | parent | prev | next [-]

I don't know why this is downvoted. The data backs you up: https://damrnelson.github.io/github-historical-uptime/

evanelias 5 hours ago | parent [-]

I'm skeptical about that page's accuracy. For example, if you go to the breakdown tab, it shows Actions having 100% availability when the graph starts (Apr 2016), yet Actions didn't even exist until late 2018, and wasn't GA until a full year after that. So if the math behind the "average" tab is treating NULLs as 100% uptime, this just isn't a correct measurement.

The page also notes it obtains its data from the official status page, but big tech companies have been known to under-report outages. My general sense is they've gotten better about this in recent years; if so, that means historical data will give an erroneously rosy picture of uptime.

the_sleaze_ 5 hours ago | parent [-]

I think we can agree the data is correct enough to ascribe a trend with strong statistical significance, no? Enough to draw a conclusion.

evanelias 5 hours ago | parent [-]

We can clearly draw a conclusion that their availability is getting worse, but that's not what your original comment claimed.

You said "I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories", but my memories differ (as do other commenters), and the accuracy of the supporting data seems questionable.

the_sleaze_ 4 hours ago | parent [-]

ok.

philistine 7 hours ago | parent | prev [-]

I mean, are any of the other forges, which I presume are also seeing an exponential increase in commits, failing as hard as GitHub?

the_sleaze_ 5 hours ago | parent [-]

I totally agree; you should expect a similar increase and degradation in GitLab, which we do not see.

graypegg 7 hours ago | parent | prev | next [-]

IMO, they're reaching the point of no return. I don't think they can horizontally-scale their way out of the hole they dug themselves unless they separate their free and paid infra maybe... which doesn't seem likely considering how their other infra changes are going.

In the same way you need to be 10x better for someone to consider switching to your product, if you get 10x worse your competitors get a free 10x by just standing still.

AlexB138 7 hours ago | parent | next [-]

I think there's a very good chance you're right. Their reputation is obviously severely harmed, and high-profile projects like Ghostty leaving may be a canary in the coal mine.

Something creative like separating their free and paid tiers may help them. I suspect the fact that all of this is happening to them along with their migration to Azure is probably complicating their ability to adapt their infrastructure.

bastardoperator 4 hours ago | parent [-]

What if I told you most enterprise customers don't even use the cloud offering and aren't impacted by any of this? Companies like Apple use GHES, and honestly that's where most of their revenue comes from, not the free offering.

dylan604 6 hours ago | parent | prev | next [-]

I wonder if AWS resurrecting CodeCommit might be related. "For all of our warts, we still have a higher rep score than GitHub" would not be an extraordinary thought at this point. There has been some brief chat about moving to GitHub, and I'm so glad we never did. A previous company did migrate to GitHub with no real answer on what the benefit was, other than that investors ask whether your code is in GitHub by name versus some other repo host.

fastball 6 hours ago | parent | prev | next [-]

How can they not? Surely at GitHub scale there isn't a single component where they were relying on vertical scaling?

graypegg 6 hours ago | parent [-]

For all of its history (up to and including now, possibly?) GitHub has been a big Ruby on Rails monolith. [0] Obviously some things run in their own service, but I'm seeing the core GitHub features fall apart, and those should be the features packed into the big monolith. If load is this much of a problem, not being able to vertically scale only the processes that need the extra headroom is a big problem. Scaling horizontally by just throwing more machines at it, or at least cordoning off some machines as "the ones that people actually pay for", is all I can think of for an application I can only describe as "accidentally working". Urgency is most definitely high, and that pushes decision making towards permanently-temporary patches instead of actual infra/architecture improvements.

[0] https://github.blog/engineering/architecture-optimization/bu...

jcgrillo 6 hours ago | parent | prev [-]

IIRC back in the day they used to have an on-prem Enterprise product? I've never heard of anyone who actually used it though. IMO that would make a lot of sense for a medium-large organization--you still get the familiar Github product but you can take responsibility for your own uptime--like with Jira, Jenkins (nee Hudson), PyPI/Maven/etc.

kqp 6 hours ago | parent | prev | next [-]

A week ago GitHub published a blog post saying this; a day later GitHub execs were in HN comments repeating it; and just like that it's common knowledge that GitHub's steady reliability decline from 2019 onward was caused not by the 2019 Microsoft integration but by something that did not exist until 2023. PR works, y'all. Turns out the reason GitHub doesn't work is that it's just so good!

sh3rl0ck 7 hours ago | parent | prev | next [-]

I've been a strong proponent of reallocating all LinkedIn server capacity to GitHub.

dijksterhuis 7 hours ago | parent | next [-]

this is an idea that i’d happily get behind.

cdrnsf 7 hours ago | parent | prev | next [-]

They can't really cite the situation as a problem given their hand in creating and continuing it.

nine_k 7 hours ago | parent | next [-]

It's hard to talk about "them" as a singular entity. I bet that the "Copilot all the things!!11" faction mostly does not consist of GitHub SREs.

Hamuko 7 hours ago | parent [-]

The GitHub SREs are working for the Copilot company.

cdud3 7 hours ago | parent [-]

Satya Nadella at the LlamaCon event in April 2025: "I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software."

In particular, GitHub, with its copilot-next initiative, probably has so much AI-generated code inside today that fixing all these new performance problems will need lots of human developer brains.

PunchyHamster 6 hours ago | parent [-]

It literally had problems from the moment MS bought it, way before the AI gold rush.

petcat 7 hours ago | parent | prev [-]

The sysadmins didn't make any of those decisions.

cdrnsf 7 hours ago | parent [-]

I suppose the idiocy of their parent company is their job security.

munk-a 6 hours ago | parent | prev | next [-]

Have they published incredible usage rate numbers somewhere? I saw their recent blog post about the outages[1] and it has a graph without axis labels and without any context around usage before 2019 to indicate just how much this agentic acceleration has actually increased usage growth.

1. https://github.blog/news-insights/company-news/an-update-on-...

crote 7 hours ago | parent | prev | next [-]

It's a bit hard to blindly trust their numbers when they are trying very hard to sell Copilot to everyone.

Sure, AI will undoubtedly have increased their workload, but how much of the shown figures is real, and how much is the PR department trying to make it look like Copilot & friends is a massive success?

bdashdash 7 hours ago | parent | prev | next [-]

Isn't the data that flows through Github so valuable that they (Microsoft) are happy to eat the cost?

I don't have a clear idea how that value can be captured, since it's going to be 90% AI generated code that anyone can scrape (public projects) or can't be used (private projects), so perhaps you're right.

Athas 7 hours ago | parent | next [-]

> Isn't the data they capture so valuable that they (Microsoft) are happy to eat the cost?

Even if that is true, unless the value of the data corresponds to near-term revenue, eventually the cost may simply not be possible to meet. Or, for that matter, the capital to manage the increasing load may simply not exist - it does not matter how much valuable data you have if the supply of hardware cannot keep up with your demand.

Also, I suspect that most of the "data" obtained by the incessant hammering on GitHub is not very valuable. Most business code is routine, and getting Copilot to help out with generating enormous amounts of it may not contribute much in return.

petcat 7 hours ago | parent | prev | next [-]

> 90% AI generated code

And it isn't clear yet whether the AI-generated code is even particularly valuable, since it's legally ambiguous whether any human ownership can be attributed to it.

The US Copyright Office has declined copyrightability for genAI artwork; it's only a matter of time before the same question comes up about code.

graemep 7 hours ago | parent [-]

Your claim is incorrect. Something purely AI generated may not be covered by copyright in the US. That would make it more valuable to MS as you can reuse it as you like.

However, works with significant human input are covered by copyright, and most code does have such input. Human review, and correction is very common. There is a lot of AI generated code out there, and there are no cases challenging the copyright on it.

You also need to look beyond US law. Software is a global business and most software businesses do not want to write software they can only sell in certain countries.

sofixa 7 hours ago | parent [-]

> However, works with significant human input are covered by copyright, and most code does have such input. Human review, and correction is very common. There is a lot of AI generated code out there, and there are no cases challenging the copyright on it.

Legislation and court decisions are still pending. There are numerous lawsuits about the copyrightability of output and about the right to use copyrighted work in LLMs, and both could have ramifications for code. I don't see how telling Claude Code to write you a function fetching an entry from a database is materially different from telling ChatGPT to generate you a picture of a unicorn riding a bicycle. Both have the same level of input (a desired end goal), and both might go through review and updates (no, a pink unicorn; no, cache the database connection).

Legal challenges over code copyright are relatively rare nowadays, so I wouldn't take the lack of high-profile lawsuits as proof of legality / copyrightability.

And yes, this will also depend on jurisdiction. Court decisions or laws can change that. Litigation over copyright infringement via training and reproduction is ongoing in multiple jurisdictions, and it wouldn't be shocking to me if at least some decide that it is indeed copyright infringement to pirate content to train LLMs that can reproduce it.

xp84 6 hours ago | parent | next [-]

If I write a program of 1000 lines of code with AI features turned off, then turn the AI features on and use a completion to edit one function, can my program not be copyrighted? (I expect/hope you'll say: "Of course it's still eligible for copyright.")

How about if I write 100 lines myself, turn the AI features on, vibe code 100 lines, and repeat this for five cycles? Half the functions are AI coded and half the functions I wrote myself. How about if I just tell Claude to write the program?

And what if I tell Claude to write the program, and then spend six months tweaking most of the lines of code?

I struggle to see a specific and obvious point where a line should be drawn. It seems intuitive to me that if I spend at least a few days' worth of effort on a code base (whether tweaking, correcting, or directing AI to do targeted refactors), that is meaningful human authorship even if it contains thousands of lines of generated code.

I can, however, acknowledge the fairness that something which is simply one-shot output probably shouldn’t merit protection. But really, in any of these cases, it’s going to be pretty hard to prove after the fact exactly what the proportion of generated code to human authorship is, so idk how a court will really tell whether a repo with 20,000 LOC is one-shot or actually had a person spend a few weeks tweaking it.

elevation 4 hours ago | parent [-]

> And what if I tell Claude to write the program

Why should this be any different than when telling/paying a human to write the program?

You're free to enter an agreement assigning all rights to the employer or the worker, or to license the work revocably or irrevocably, transferably or not. There is no need to wait for a court decision to understand what the results will be.

graemep 6 hours ago | parent | prev [-]

If that function is all you ask it to write, as a one-off, maybe. However, if that function is part of a larger system that is human-designed, it is very different. If you review and correct the code in the system, it is very different.

Pages 27 and 28 of this are relevant to this: https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

gpugreg 7 hours ago | parent | prev | next [-]

> I don't have a clear idea how that value can be captured, since it's going to be 90% AI generated code that anyone can scrape (public projects) or can't be used (private projects), so perhaps you're right.

The value is probably in knowing which AI-generated code ends up being pushed or discarded, which can't be derived from public projects. This information can then be used to finetune the next big model so it only generates the "good" code.

graemep 7 hours ago | parent | prev | next [-]

It's easier for them to scrape than it is for anyone else. They also have a lot more metadata about the code, which may be useful.

Do GitHub's terms entirely prevent them from making use of data in private projects?

desdenova 7 hours ago | parent | prev [-]

> or can't be used (private projects)

As if they cared about that

mohsen1 7 hours ago | parent | prev | next [-]

The same company operates the Xbox network: more daily active users and more events per second.

incognito124 7 hours ago | parent | next [-]

The Xbox network was _designed_ for such concurrency; GitHub is Ruby on Rails + Vitess (MySQL).

brian-armstrong 6 hours ago | parent | prev | next [-]

Not comparable at all. Xbox would be mostly transient traffic. It's probably not much more than packet forwarding for a lot of traffic.

Github is a giant complicated stateful mess with a lot of reads and writes. It also has a lot of features at this point. Hard to scale and hard to optimize.

pathartl 6 hours ago | parent [-]

I think this is minimizing the Xbox platform. They are also a massive digital distribution platform where almost every game is a digital download now.

That being said, you are correct. It is absolutely no surprise to me that Actions has the worst uptime.

steve1977 7 hours ago | parent | prev [-]

Do they run Xbox network on Azure or is it a separate thing?

amarant 7 hours ago | parent | prev | next [-]

Huh, so vibe coding really is the reason GitHub has been down so much lately!

cedws 6 hours ago | parent | prev | next [-]

The disappointing thing is that if you do some digging, you'll find the majority of it is slop and just outright spam. There's a page on GitHub where you can see recently updated repositories, and it's very rare that I see anything of quality on there.

GitHub has become a dumping ground for broken code and it has more bots than ever. As much as I hate ID verification, it might be a necessary evil at this point, because clearly their anti-bot measures aren't working.

elAhmo 7 hours ago | parent | prev | next [-]

Can you share where they published that?

AlexB138 7 hours ago | parent [-]

Their COO has talked about it extensively on X. A sibling comment in this thread posted a link here: https://news.ycombinator.com/item?id=48011075

shevy-java 6 hours ago | parent | prev | next [-]

It means that Skynet is winning.

What you described above will piss off and alienate even more people. Eventually a critical threshold will be crossed. Microslop will be the first victim of Skynet 11.0 (I lost track of its current version, but you can see how much damage is caused by AI in general now - this was the beginning of Skynet. Except that it sucks).

pier25 6 hours ago | parent | prev | next [-]

Amazing that Microsoft didn't see this coming after aggressively pushing AI everywhere for years.

pydry 7 hours ago | parent | prev [-]

Github naturally scales horizontally.

Usage numbers are the PR reason. Vibe-coding insanity inside Microsoft is the more plausible actual culprit.

add-sub-mul-div 7 hours ago | parent | next [-]

So maybe it's AI that's responsible for both ends: the increased traffic and the lessened product.

sofixa 7 hours ago | parent | prev [-]

> Github naturally scales horizontally

Not necessarily; a few years ago they had some crucial information stuck in a single MySQL cluster (so write-constrained) and were working on sharding it, but struggling: https://github.blog/engineering/infrastructure/partitioning-...
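(Sharding in that sense just means deterministically routing each row to one of N write primaries by a stable key - a minimal illustration of the idea, not GitHub's actual scheme:)

    import zlib

    SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]  # hypothetical clusters

    def shard_for(repo_id: int) -> str:
        """Pick a shard deterministically so reads and writes for one repo always
        land on the same cluster, spreading write load across all of them."""
        return SHARDS[zlib.crc32(str(repo_id).encode()) % len(SHARDS)]

    print(shard_for(42), shard_for(43))  # different repos may land on different shards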