bugsMarathon88 a day ago

[flagged]

edent a day ago | parent | next [-]

Gosh! It is a pity Google doesn't hire any smart people who know how to build a throttling system.

Still, they're a tiny and cash-starved company so we can't expect too much of them.
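For reference, per-client throttling is about as solved as problems get. A minimal token-bucket sketch in Python (the rate and burst numbers are made up, and this is an illustration, not a claim about how Google does or should do it):

    import time

    # minimal token-bucket throttle; numbers and design are illustrative
    class TokenBucket:
        def __init__(self, rate: float, burst: float):
            self.rate = rate                  # tokens refilled per second
            self.burst = burst                # bucket capacity
            self.tokens = burst               # start full
            self.last = time.monotonic()

        def allow(self) -> bool:
            # refill proportionally to elapsed time, capped at capacity
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # e.g. one bucket per client IP: 5 lookups/sec sustained, bursts of 20
    bucket = TokenBucket(rate=5, burst=20)
    if not bucket.allow():
        print("429 Too Many Requests")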

acheron a day ago | parent | next [-]

Must not be any questions about that in Leetcode.

lyu07282 a day ago | parent | prev | next [-]

It's almost as if once a company becomes this big, burning it to the ground would be better for society or something. That would be the liberal position on monopolies, if they actually believed in anything.

bugsMarathon88 a day ago | parent | prev [-]

It is a business, not a charity. Adjust your expectations accordingly, or expect disappointment.

quesera a day ago | parent | prev | next [-]

Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?

I don't know if GCP has a free tier like AWS does, but 10k QPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
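For the curious, a hypothetical sketch of what that static redirect map could look like in an nginx config (the short codes and target URLs here are placeholders, not real goo.gl entries):

    # hypothetical sketch: compile the full dataset into a static map
    map $uri $target {
        default  "";
        /abc123  https://example.org/some/long/article;
        /xYz9Q   https://example.net/another/destination;
    }

    server {
        listen 80;
        server_name goo.gl;
        # 301 to the mapped target if one exists, otherwise 404
        if ($target) {
            return 301 $target;
        }
        return 404;
    }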

bbarnett a day ago | parent [-]

You could deprecate the service and archive the links as static HTML. About 200 bytes of text for an HTML redirect (not JS).
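As a rough illustration, a meta-refresh page along these lines comes in at around 200 bytes (example.com standing in for the real destination):

    <!doctype html>
    <title>Redirecting</title>
    <meta http-equiv="refresh" content="0; url=https://example.com/real/target">
    <a href="https://example.com/real/target">Moved here</a>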

You can serve immense volumes of traffic from static HTML. A single hardware server could easily do the job.

Your attack surface is also tiny without a back-end interpreter.

People will chime in with redundancy, but the point is Google could stop maintaining the ingress and still not be douches about existing URLs.

But... you know, it's Google.

quesera a day ago | parent [-]

Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.

Why break this??

Sure, deprecate the service. Add no new entries. This is a good idea anyway; link shorteners are bad for the internet.

But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.

You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.

Google is acting like they are a one-person startup here.

Since they are not a one-person startup, I do wonder if we're missing the real issue. Something like legal exposure, or implication in some kind of activity they don't want to be a part of, where it's safer and simpler to just delete everything than to try to detect and remove all of the exposure-creating entries.

Or maybe that's what they're telling themselves, even if it's not real.

bugsMarathon88 20 hours ago | parent [-]

> Why break this??

We already told you: people are likely brute-forcing URLs.

quesera 19 hours ago | parent [-]

I'm not sure why that is a problem.

nomel a day ago | parent | prev [-]

Those numbers make it seem fairly trivial. You have a dozen bytes referencing a few hundred bytes, for a service that is not latency sensitive.

This sounds like a good project for an intern, with server costs that might even reach a hundred dollars per month!
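To put hedged numbers on it, a quick back-of-envelope in Python (every figure below is a guess except the 10k QPS cited upthread):

    # back-of-envelope; all figures are guesses except the QPS from upthread
    links = 1_000_000_000        # assume ~1 billion short links ever created
    bytes_per_link = 300         # assume ~300 bytes per target URL
    qps = 10_000                 # queries per second, cited upthread

    storage_gb = links * bytes_per_link / 1e9   # ~300 GB of redirect data
    egress_mbs = qps * bytes_per_link / 1e6     # ~3 MB/s of redirect traffic
    print(f"~{storage_gb:.0f} GB stored, ~{egress_mbs:.1f} MB/s served")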