nicoburns 3 hours ago

They're claiming a huge increase in traffic due to vibe coded projects. It might just be an excuse, but it certainly seems plausible to me.

motbus3 3 hours ago | parent | next [-]

Could be. But 99% of the repos are static garbage with no PRs or Actions.

They mentioned they have some Elasticsearch reindexing going on; I would guess they needed to reshard or move things and something didn't work well. But if I understood right, they pointed to the PRs ES index without sharing proof that it grew in line with the number of repos.

It might be anything. It seems they lost huge chunks of their teams to layoffs and structural changes, and MS has the reverse Midas touch.

This is just pure speculation, but there is also no longer any reason for MS to keep GH working. They absorbed all the code they wanted; now they can let it burn. It would even be better for them if that happened.

jonfw 3 hours ago | parent | next [-]

> Could be. But 99% of the repos are static garbage with no PRs or Actions.

But the 1% of repos that do have PRs and Actions are likely seeing enormous increases in volume.

I have been part of two very large companies with self-hosted Git, and I've seen enough to be confident that this is an incredibly hard thing to manage.

fourseventy an hour ago | parent [-]

Ya, but they are owned by freaking Microsoft and have billions of dollars and employees to throw at the problem. The outages shouldn't be happening, period.

jonfw an hour ago | parent [-]

Easy to say! Some problems are legitimately hard to solve, though. GitHub is likely seeing usage patterns that have never been seen before, and I bet some of these failure modes are novel.

If you are at the limits of your architecture you may need to rewrite things, and if you are rewriting things you cannot arbitrarily speed that up by throwing dollars at it.

motbus3 12 minutes ago | parent [-]

It's not as if MS is all-in on AI and says it can build anything in minutes with AI, too.

giancarlostoro 2 hours ago | parent | prev | next [-]

At that point, why not make the indexing lazy? Who cares that I can't find a repo that was made 10 seconds ago, or even 15 minutes ago? No, seriously, who cares? Search at that level of freshness is not mission-critical; I don't care what anyone says, you'll live if you wait another 15 minutes or even an hour. Either way, their search has been terrible since the last major overhaul.
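
By "lazy" I just mean: take indexing off the write path and batch it. A rough sketch of the idea (all names hypothetical; assuming a simple queue sits between repo creation and the Elasticsearch index):

    import queue
    import threading
    import time

    index_queue: queue.Queue = queue.Queue()

    def save_to_primary_store(name: str) -> None:
        # placeholder for the authoritative write (database, git storage, ...)
        print(f"stored {name}")

    def bulk_index(batch: list) -> None:
        # placeholder for a single Elasticsearch _bulk request for the batch
        print(f"indexed {len(batch)} repos")

    def create_repo(name: str) -> None:
        save_to_primary_store(name)  # user-facing write path stays fast
        index_queue.put(name)        # search indexing is deferred, not inline

    def index_worker() -> None:
        while True:
            batch = [index_queue.get()]          # block until there is work
            while not index_queue.empty() and len(batch) < 500:
                batch.append(index_queue.get())  # drain into one batch
            bulk_index(batch)                    # repos become searchable here
            time.sleep(15)                       # search lags writes by ~15s

    threading.Thread(target=index_worker, daemon=True).start()

Nobody's workflow breaks if a new repo shows up in search a batch or two later.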

parthdesai 3 hours ago | parent | prev [-]

Serious question, have you been part of an org that had to scale orders of magnitude very quickly?

Anyone who has been part of that journey knows how painful it really is. A lot of the time the systems fail at all levels, and you have to redesign them from first principles.

dijit 2 hours ago | parent | next [-]

> Serious question, have you been part of an org that had to scale orders of magnitude very quickly?

I have, but it depends what you mean.

Scenario 1: e-commerce SaaS (think: Amazon, but white-label, and before CPUs even had AES instructions); Christmas was "fun".

Scenario 2: Video Games. The first day is the worst day when it comes to scale. Everything has to be flawless from day 0 and you get no warning as to what can go wrong.

Yet, somehow, I managed to make highly reliable systems.

In scenario 1, I had an existing system that had to scale up and down with load. This was before the cloud existed and hardware had a 3-4 month lead time, so most of the effort went into optimising existing code, increasing job timeouts, and "quenching" sources that were expensive. We also used to do some 'magic' when it came to serving requests that carried a session token or shopping-cart cookie.
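
The 'magic' was the usual trick: anything without a session token or cart cookie is anonymous traffic, so it can be served straight from a page cache and never touches the app servers. A simplified sketch, not the actual code (request, cache, and app_servers are stand-in objects here):

    def handle_request(request, cache, app_servers):
        anonymous = ("session" not in request.cookies
                     and "cart" not in request.cookies)
        if anonymous:
            page = cache.get(request.path)
            if page is not None:
                return page                        # cache hit, zero app work
            page = app_servers.render(request)
            cache.set(request.path, page, ttl=60)  # short TTL keeps pages fresh
            return page
        # Session or cart present: the response is personalised,
        # so it always goes to the application servers.
        return app_servers.render(request)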

In scenario 2, we have a clean-room implementation and no legacy, which is a blessing but also a curse: there's no possibility of sampling real usage, but you also don't need to worry about making breaking changes that are for the better. With legacy you have to figure out how to migrate to the new behaviour gradually.

So, pros and cons... but it's not like handling huge load hasn't been done before. Computers are faster than they have ever been, and while my personal opinion is that operational knowledge is dying (due to a general disdain for the people who actually used to run systems at scale, rather than writing hopeful "eventually consistent" YAML and calling it deterministic), the systems that exist today hold your hand much better than they did for me 20 years ago.

And I ran 1% of web traffic with an ops team of 5 back then. So, idk what's going on here.

EDIT: Likely people are flagging me because I sound arrogant (or I hurt their feelings by talking bad about YAML-ops), but all I am doing is answering the question presented based on my experience.

Dwedit 2 hours ago | parent [-]

I think you meant "greenfield" and not "clean room"? Clean room refers to reverse engineering an existing program to create specifications, then having another team implement those specifications without legal risk from involving the original.

dijit an hour ago | parent [-]

Yes I did, sorry! You are right. :)

HWR_14 3 hours ago | parent | prev | next [-]

Is GitHub scaling by orders of magnitude though? That would be an insane increase at this stage of their lifecycle.

jodrellblank 2 hours ago | parent | next [-]

They say it is at least one order of magnitude[1]; "our plan to increase GitHub’s capacity by 10X in October 2025 .. By February 2026, it was clear that we needed to design for a future that requires 30X today’s scale."

[1] https://github.blog/news-insights/company-news/an-update-on-...

ori_b 2 hours ago | parent [-]

Note the lack of concrete numbers on how much they have scaled. Somebody may have just asked an LLM for projections.

codechicago277 27 minutes ago | parent | next [-]

https://gitcharts.com shows ~310 million public repos today, vs. 250 million in April 2025 (according to the Wayback Machine).

A large increase (roughly 24%), but nothing existential.

Barbing 2 hours ago | parent | prev [-]

Would Microsoft lawyers OK that?

GitHub would have obligations to MS investors to make accurate projections just like Microsoft itself, right?

HWR_14 40 minutes ago | parent [-]

I don't think it's an issue here. If the investor relations people put it out, it would be. But in this case it is closer to marketing.

nicoburns 2 hours ago | parent | prev | next [-]

I wouldn't be surprised. Have you not noticed the sheer volume of slop being posted everywhere these days? Almost all of it is hosted on GitHub. And some of those repos have insane commit frequencies.

ambicapter 2 hours ago | parent | prev [-]

If they're suffering an onslaught of AI slop, it's possible.

owebmaster 3 hours ago | parent | prev [-]

> you have to redesign it from the first principles

And that starts by laying off your best engineers, I guess.

mitchell_h 2 hours ago | parent | prev | next [-]

They can claim that... but if you've built a public SaaS before, you know the job is not just to host the software, it's to put rails around the people taking it down. They've had since 2008 to build those rails, and they're only now hitting things that take the service down on the regular?
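
The most basic rail is per-client rate limiting, e.g. a token bucket. A generic sketch, not GitHub's actual limits (the numbers are made up):

    import time

    class TokenBucket:
        """Allow `rate` requests/second, with bursts up to `capacity`."""

        def __init__(self, rate: float, capacity: float) -> None:
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.updated = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # refill tokens for the time elapsed since the last check
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # caller should answer 429 Too Many Requests

    buckets: dict = {}

    def allow_request(client_id: str) -> bool:
        # one bucket per client: 10 req/s sustained, bursts of up to 50
        bucket = buckets.setdefault(client_id, TokenBucket(10, 50))
        return bucket.allow()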

empath75 27 minutes ago | parent [-]

The problem is that they are charging per seat and need to start charging for usage.

AznHisoka 3 hours ago | parent | prev | next [-]

Yep, definitely more traffic, and also more new GitHub repos being created, with a pretty huge spike in the last 2 months [1]

[1] https://bloomberry.com/data/github/

reaperducer an hour ago | parent | prev | next [-]

> They're claiming a huge increase in traffic due to vibe coded projects. It might just be an excuse, but it certainly seems plausible to me.

I simply do not care.

Customers pay for a service. If they don't get what they paid for, it's perfectly reasonable and normal to go elsewhere.

Why do people on HN keep apologizing on the behalf of trillion-dollar companies?

pixl97 an hour ago | parent [-]

I mean, what will happen here is that people will go to other services, and those will get overloaded too.

Self-hosting is probably the way to go, but hardware prices are insane at the moment.

u_fucking_dork an hour ago | parent | prev | next [-]

Probably true. GH Enterprise Cloud has been at close to 100% uptime over the past 90 days.

twoodfin 3 hours ago | parent | prev [-]

I’d be shocked if this wasn’t the reason.