Google's shortened goo.gl links will stop working next month(theverge.com)
227 points by mobilio a day ago | 205 comments
edent a day ago | parent | next [-]

About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

And for what? The cost of keeping a few TB online and a little bit of CPU power?

An absolute act of cultural vandalism.

toomuchtodo a day ago | parent | next [-]

https://wiki.archiveteam.org/index.php/Goo.gl

https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

(edit: I see jaydenmilne commented about this further down the thread, mea culpa)

progbits 16 hours ago | parent | next [-]

They appear to be doing ~37k items per minute; with 1.6B remaining, that's roughly 30 days left. So that's just barely enough to do it in time.

Going to run the warrior over the weekend to help out a bit.
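
A quick sanity check of that arithmetic (the rate is just what the tracker showed at the moment, so treat it as a rough figure):

    # items remaining / (items per minute * minutes per day) ~= days left
    remaining = 1.6e9
    rate_per_minute = 37_000
    print(remaining / (rate_per_minute * 60 * 24))  # ~30 days at that rate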

pentagrama 20 hours ago | parent | prev [-]

Thank you for that information!

I wanted to help and did that using VMware.

For curious people, here is what the UI looks like: you get a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab that shows the project's activity.

Project list: https://imgur.com/a/peTVzyw

Current project: https://imgur.com/a/QVuWWIj

addandsubtract 2 hours ago | parent [-]

Also available as a Dockerfile, for those not running VMs: https://github.com/ArchiveTeam/warrior-dockerfile

jlarocco 19 hours ago | parent | prev | next [-]

IMO it's less Google's fault and more a crappy tech education problem.

It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

And really it's not much different from anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages anymore?

And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.

justin66 18 hours ago | parent | next [-]

> It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.

dingnuts 15 hours ago | parent [-]

Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS, it's this. Universities should be able to host this data in a decentralized way.
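
For illustration, content addressing just means the name of a resource is derived from its bytes, so any copy anywhere can serve it and the reference never depends on one host staying up. A minimal sketch (a plain SHA-256 digest, not IPFS's actual multihash/CID encoding):

    import hashlib

    def content_address(data: bytes) -> str:
        # The identifier is a hash of the content itself, so it stays valid
        # no matter which server or peer ends up hosting the bytes.
        return hashlib.sha256(data).hexdigest()

    cited_document = b"...bytes of the cited PDF..."  # placeholder content
    print(content_address(cited_document))  # same bytes -> same address, anywhere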

nly 14 hours ago | parent [-]

CANs usually have complex hashy URLs, so you still have the compactness problem

gmerc 19 hours ago | parent | prev [-]

Ahh classic free market cop out.

FallCheeta7373 19 hours ago | parent | next [-]

If the smartest among us publishing for academia cannot figure this out, then who will?

hammyhavoc 7 hours ago | parent [-]

Not infrequently, someone who is smart in one field can't solve problems in another.

I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.

kazinator 18 hours ago | parent | prev [-]

Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

The authors just had their heads too far up their academic asses to have heard of this.

epolanski a day ago | parent | prev | next [-]

Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can be changed or disappear).

Even worse if your reference is a shortened link from some other service: you've just added yet another layer of unreliable indirection.

whatevaa a day ago | parent [-]

Citations are citations: if it's a link, you link to it. But using shorteners for that is silly.

ceejayoz 21 hours ago | parent [-]

It's not silly if the link is a couple hundred characters long.

IanCal 20 hours ago | parent | next [-]

Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a PDF.

ceejayoz 20 hours ago | parent [-]

It's not the bytes.

It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.

SR2Z 20 hours ago | parent | next [-]

I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

This kind of luddite behavior sometimes makes using this site exhausting.

jtuple 19 hours ago | parent | next [-]

Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

Reading on paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read 1-5 papers a day tops, which is small enough to just do on a computer screen (and I have less need to annotate, etc). Quite different than the 50-100 papers/week + deep analysis expected in academia.

Incipient 11 hours ago | parent [-]

>Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!

ceejayoz 19 hours ago | parent | prev | next [-]

> I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

This is by no means a universal experience.

People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.

SR2Z 19 hours ago | parent [-]

And how many of those people then proceed to type those links into their web browsers, shortened or not?

Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.

andrepd 19 hours ago | parent | prev | next [-]

I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

SR2Z 19 hours ago | parent [-]

> People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

Anyone who is savvy enough to put a link in a document is well aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore; the internet has accumulated plenty of dead links.

reaperducer 15 hours ago | parent | prev [-]

> This kind of luddite behavior sometimes makes using this site exhausting.

We have many paper documents from over 1,000 years ago.

The vast majority of what was on the internet 25 years ago is gone forever.

eviks 6 hours ago | parent | next [-]

What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?

epolanski 15 hours ago | parent | prev [-]

25?

Try going back 6-7 years on this very website; half the links are dead.

IanCal 2 hours ago | parent | prev | next [-]

That’s an even worse reason to use a temporary redirection service. If you really need to, put in both.

leumon 19 hours ago | parent | prev [-]

Which makes URL shorteners even more attractive for printed media, because you don't have to type as many characters manually.

epolanski 20 hours ago | parent | prev [-]

Fix that at the presentation layer (PDFs, Word files, etc. support links), not the data one.

ceejayoz 20 hours ago | parent [-]

Let me know when you figure out how to make a printed scientific journal clickable.

epolanski 15 hours ago | parent | next [-]

Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the URL is.

Just buy any scientific book and try to navigate to the errata it links in the book. It's always dead.

diatone 20 hours ago | parent | prev [-]

Take a photo on your phone, OS recognises the link in the image, makes it clickable, done. Or, use a QR code instead
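
For what it's worth, generating one is a couple of lines with the third-party qrcode package (package name and API assumed from its docs; the URL is a placeholder):

    # pip install qrcode[pil]
    import qrcode

    img = qrcode.make("https://example.com/full/citation/url")
    img.save("citation-qr.png")  # embed next to the printed reference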

3 hours ago | parent | next [-]
[deleted]
ceejayoz 20 hours ago | parent | prev | next [-]

https://news.ycombinator.com/item?id=9224

jeeyoungk 19 hours ago | parent | prev [-]

This is the answer; it turns out that untransformed links are the most generic data format, with no "compression" - QR codes or a third-party intermediary - needed.

zffr a day ago | parent | prev | next [-]

For people wanting to include URL references in things like books, what’s the right approach to take today?

I’m genuinely asking. It seems like it’s hard to trust that any service will remain running for decades.

toomuchtodo a day ago | parent | next [-]

https://perma.cc/

It is built for the task, and assuming the worst-case scenario of sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)

ruined 21 hours ago | parent | next [-]

perma.cc is an interesting project, thanks for sharing.

Other readers may be specifically interested in their contingency plan:

https://perma.cc/contingency-plan

Hyperlisk a day ago | parent | prev | next [-]

perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/

whoahwio 21 hours ago | parent | prev [-]

While Perma is a solution built specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here.

toomuchtodo 21 hours ago | parent [-]

If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.

whoahwio 21 hours ago | parent [-]

This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.

toomuchtodo 19 hours ago | parent [-]

Point taken.

edent a day ago | parent | prev | next [-]

The full URL to the original page.

You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com?), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.

firefax 21 hours ago | parent | next [-]

>The full URL to the original page.

I thought that was the standard in academia? I've had reviewers chastise me when I did not use the Wayback Machine to archive a citation and link to that, since listing a "date retrieved" doesn't do jack if there's no IA copy.

Short links were usually in addition to full URLs, and more in conference presentations than the papers themselves.

grapesodaaaaa 19 hours ago | parent | prev [-]

I think this is the only real answer. Shorteners might work for things like old Twitter where characters were at a premium, but I would rather see the whole URL.

We’ve learned over the years that they can be unreliable, security risks, etc.

I just don’t see a major use-case for them anymore.

AbstractH24 an hour ago | parent | prev | next [-]

What's the right approach to take for referencing anything that isn't preserved in an institution like the Library of Congress?

Say an interview with a person, a niche publication, a local pamphlet?

Maybe to certify that your article is of a certain level of credibility you need to manually preserve all the cited works yourself in an approved way.

danelski a day ago | parent | prev [-]

Real URL and save the website in the Internet Archive as it was on the date of access?

kazinator a day ago | parent | prev | next [-]

The act of vandalism occurs when someone creates a shortened URL, not when they stop working.

djfivyvusn a day ago | parent | prev | next [-]

The vandalism was relying on Google.

toomuchtodo a day ago | parent | next [-]

You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.

api a day ago | parent | prev [-]

The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

The simplicity of the web is one of its virtues but also leaves a lot on the table.

QuantumGood 16 hours ago | parent | prev | next [-]

When they began offering this, their rep for ending services was already so bad I refused to consider goo.gl. It's amazing how many years now they have introduced and then ended services with large user bases. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.

justinmayer 16 hours ago | parent | prev | next [-]

In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:

Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33

Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...

crossroadsguy 21 hours ago | parent | prev | next [-]

I have always struggled with this. If I buy a book I don’t want an online/URL reference in it. Put the book/author/ISBN/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.

BobaFloutist 21 hours ago | parent [-]

I mean preferably do both, right? The URL is better for however long it works.

SoftTalker 20 hours ago | parent [-]

We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.

eviks 7 hours ago | parent | prev | next [-]

> And for what? The cost of keeping a few TB online and a little bit of CPU power?

For the immeasurable benefits of educating the public.

SirMaster 20 hours ago | parent | prev | next [-]

Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put the list up somewhere that everyone can go look things up if they need to?
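
For a single known link, the lookup itself is trivial: request the short URL without following the redirect and record the Location header. A rough sketch using the third-party requests library (the hard part is enumerating the keyspace and Google's rate limits, not this; the example code is a placeholder):

    import requests

    def resolve(short_url: str) -> str | None:
        # Ask for the redirect but don't follow it; the target is in Location.
        r = requests.get(short_url, allow_redirects=False, timeout=10)
        if r.status_code in (301, 302):
            return r.headers.get("Location")
        return None  # expired, deleted, or never existed

    print(resolve("https://goo.gl/XXXXXX"))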

spixy 3 hours ago | parent [-]

Yes: https://tracker.archiveteam.org/goo-gl

lubujackson 8 hours ago | parent | prev | next [-]

Truly, the most Googly of sunsets.

jeffbee a day ago | parent | prev | next [-]

While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.

asdll 11 hours ago | parent | prev | next [-]

> An absolute act of cultural vandalism.

It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.

nikanj 21 hours ago | parent | prev | next [-]

The cost of dealing with and supporting an old codebase instead of burning it all and releasing a written-from-scratch replacement next year.

garyHL 19 hours ago | parent | prev | next [-]

[dead]

bugsMarathon88 a day ago | parent | prev | next [-]

[flagged]

edent a day ago | parent | next [-]

Gosh! It is a pity Google doesn't hire any smart people who know how to build a throttling system.

Still, they're a tiny and cash-starved company so we can't expect too much of them.

acheron 17 hours ago | parent | next [-]

Must not be any questions about that in Leetcode.

lyu07282 20 hours ago | parent | prev | next [-]

It's almost as if once a company becomes this big, burning it to the ground would be better for society or something. That would be the liberal position on monopolies, if they actually believed in anything.

bugsMarathon88 18 hours ago | parent | prev [-]

It is a business, not a charity. Adjust your expectations accordingly, or expect disappointment.

quesera a day ago | parent | prev | next [-]

Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?

I don't know if GCP has a free tier like AWS does, but 10k QPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
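
To put "static redirect map" in concrete terms, the whole service reduces to a key-value lookup plus a 301. A toy sketch in plain Python (nginx's map module or any webserver would do the same thing far more efficiently; the entries are made up):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    REDIRECTS = {
        "/abc123": "https://example.com/some/long/destination",  # made-up entries
        "/xyz789": "https://example.org/another/page",
    }

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REDIRECTS.get(self.path)
            if target:
                self.send_response(301)              # permanent redirect
                self.send_header("Location", target)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Redirector).serve_forever()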

bbarnett 20 hours ago | parent [-]

You could deprecate the service, and archive the links as static HTML. 200 bytes of text for an HTML redirect (not JS).

You can serve immense volumes of traffic from static HTML. One hardware server alone could so easily do the job.

Your attack surface is also tiny without a back end interpreter.

People will chime in with redundancy, but the point is Google could stop maintaining the ingress, and still not be douches about existing URLs.

But... you know, it's Google.

quesera 17 hours ago | parent [-]

Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.

Why break this??

Sure, deprecate the service. Add no new entries. This is a good idea anyway, link shorteners are bad for the internet.

But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.

You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.

Google is acting like they are a one-person startup here.

Since they are not a one-person startup, I do wonder if we're missing the real issue. Like legal exposure, or implication in some kind of activity that they don't want to be a part of, and it's safer/simpler to just delete everything instead of trying to detect and remove all of the exposure-creating entries.

Or maybe that's what they're telling themselves, even if it's not real.

bugsMarathon88 11 hours ago | parent [-]

> Why break this??

We already told you: people are likely brute-forcing URLs.

quesera 10 hours ago | parent [-]

I'm not sure why that is a problem.

nomel 21 hours ago | parent | prev [-]

Those numbers make it seem fairly trivial. You have a dozen bytes referencing a few hundred bytes, for a service that is not latency sensitive.

This sounds like a good project for an intern, with server costs that might be able to exceed a hundred dollars per month!

oyveybro a day ago | parent | prev [-]

[flagged]

mrcslws a day ago | parent | prev | next [-]

From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...

This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".
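
Even taking the figure at face value, "99% inactive" is not the same as "worthless". If the number of existing links is anywhere near the ~1.66B work items on the ArchiveTeam tracker upthread (an assumption; work items are candidate codes, not confirmed links), the remaining 1% is still tens of millions of resolutions a month:

    # Rough numbers from this thread: ~1.66B links (assumed), Google's "99% inactive"
    total_links = 1.66e9
    active_share = 0.01
    print(int(total_links * active_share))  # ~16,600,000 links used in a single month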

bayindirh a day ago | parent | next [-]

> The right question is "how much total value do all of the links provide", not "what percent are used".

Yes, but it doesn't bring home the sweet promotion, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).

This bean counting really makes me sad.

quesera a day ago | parent | next [-]

Configuring a static set of redirects would take a couple hours to set up, and literally zero maintenance forever.

Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.

bayindirh 21 hours ago | parent [-]

This is what I mean, actually.

If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.

socalgal2 20 hours ago | parent | prev | next [-]

If they wanted the sweet promotion they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.

ahstilde a day ago | parent | prev | next [-]

> just for fun (but, of course, pay them for their work).

Doing things for fun isn't in Google's remit

kevindamm a day ago | parent | next [-]

Alas, it was, once upon a time.

morkalork a day ago | parent | prev | next [-]

Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google, in all its 2-ton ADHD gorilla glory, will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down, leaving behind a desolate crater of ruined businesses and angry, abandoned users.

jsperson 19 hours ago | parent [-]

I’m still sore about Reader. The gap has never been filled for me.

ceejayoz a day ago | parent | prev [-]

It used to be. AdSense came from 20% time!

21 hours ago | parent | prev | next [-]
[deleted]
kmeisthax a day ago | parent | prev [-]

[dead]

HPsquared a day ago | parent | prev | next [-]

Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.

14 hours ago | parent [-]
[deleted]
sltkr 21 hours ago | parent | prev | next [-]

I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.

chneu 3 hours ago | parent [-]

Where'd all my porn go!?

fizx a day ago | parent | prev | next [-]

Don't be confused! That's not how they made the decision; it's how they're selling it.

esafak 21 hours ago | parent [-]

So how did they decide?

chneu 3 hours ago | parent | next [-]

New person got hired after old person left. New person says "we can save x% by shutting down these links, 99% aren't used" and the new boss that's only been there for 6 months says "yeah, sure".

Why does Google kill any project? The people who made it moved on, and the new people don't care because it doesn't make their resume look any better.

Basically nobody wants to own this service, and it requires upkeep to maintain it alongside other Google services.

Google's history shows a clear choice to reward new projects, not old ones.

https://killedbygoogle.com/

nemomarx 21 hours ago | parent | prev [-]

I expect it showed up as a cost on a budget sheet, and then an analysis was done about the impact of shutting it down.

sltkr 21 hours ago | parent [-]

You can't get promoted at Google for not changing anything.

firefax 21 hours ago | parent | prev | next [-]

> "more than 99% of them had no activity in the last month"

Better to have a short URL and not need it, than need a short URL and not have it IMO.

SoftTalker 20 hours ago | parent | prev | next [-]

From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.

esafak 21 hours ago | parent | prev | next [-]

What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!

nomel 21 hours ago | parent [-]

YouTube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.

Many videos I uploaded in 4K are now only available in 480p, after about a decade.

handsclean a day ago | parent | prev | next [-]

I don’t think they’re actually that dumb. I think the dirty secret behind “data driven decision making” is that managers don’t want data to tell them what to do; they want “data” to make even the idea of disagreeing with them look objectively wrong and stupid.

HPsquared a day ago | parent [-]

It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).

It's less "data-driven decisions", more "how to lie with statistics".

FredPret 21 hours ago | parent | prev [-]

"Data-driven decision making"

JimDabell a day ago | parent | prev | next [-]

Cloudflare offered to keep it running and were turned away:

https://x.com/elithrar/status/1948451254780526609

Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.

fourseventy 21 hours ago | parent | next [-]

Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.

nomel 21 hours ago | parent [-]

I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is to actually be a honeypot of some sort, maybe for spam call detection.

joshstrange 21 hours ago | parent | next [-]

Because IIRC it’s essentially completely run by another company (I want to say Bandwidth?) and, again my memory might be fuzzy, originally came from an acquisition of a company called Grand Central.

My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.

14 hours ago | parent [-]
[deleted]
hnfong 20 hours ago | parent | prev | next [-]

Another shocking story to share.

I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.

It's still running. I have no idea why.

coryrc 20 hours ago | parent [-]

It's the most enterprise-y and legacy thing Google sells.

throwyawayyyy 21 hours ago | parent | prev | next [-]

Pretty sure you can thank the FCC for that :)

mrj 21 hours ago | parent | prev | next [-]

Shhh don't remind them

kevin_thibedeau 21 hours ago | parent | prev [-]

Mass surveillance pipeline to the successor of room 641A.

thebruce87m 19 hours ago | parent | prev [-]

> Remember this next time you are thinking of depending upon a Google service.

Next time? I guess there’s a wave of new people that haven’t learned that lesson yet.

jaydenmilne a day ago | parent | prev | next [-]

ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.

https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

pimlottc a day ago | parent | next [-]

Looks like they have saved 8000+ volumes of data to the Internet Archive so far [0]. The project page for this effort is here [1].

0: https://archive.org/details/archiveteam_googl

1: https://wiki.archiveteam.org/index.php/Goo.gl

localtoast a day ago | parent | prev | next [-]

Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.

chneu 3 hours ago | parent | next [-]

I'm about to go set up my spare N100 just for this project. If all it uses is a little bandwidth then that's perfect for my 10 Gbps fiber and N100.

addandsubtract 2 hours ago | parent [-]

Doing the same, even though I'm worried Google will throw even more captchas at me now than before.

wobfan 21 hours ago | parent | prev [-]

Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?

localtoast 21 hours ago | parent | next [-]

It's not only goo.gl links they are actively archiving. Take a look at their current tasks.

https://tracker.archiveteam.org/

fragmede 20 hours ago | parent | prev [-]

save it, forever*.

* as long as humanly possible, as is archive.org's mission.

ojo-rojo a day ago | parent | prev | next [-]

Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period look like a digital dark age to archaeologists studying history a few thousand years from now.

Us preserving digital archives is a good step. I guess making hard copies would be the next step.

hadrien01 16 hours ago | parent | prev | next [-]

After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication of what I should do on the ArchiveTeam wiki.

AstroBen 21 hours ago | parent | prev [-]

Just started, super easy to set up

cpeterso a day ago | parent | prev | next [-]

Google’s own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google’s original announcement doesn’t say so explicitly, but it is carefully worded to specify that short URLs of the “https://goo.gl/* format” will be shut down.

Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.

growthwtf 20 hours ago | parent [-]

This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.

jedberg a day ago | parent | prev | next [-]

I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
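
For scale, that whole mapping would fit comfortably in a single SQLite file. A sketch of what a published dump could look like, using only the standard library (the rows are made up):

    import sqlite3

    conn = sqlite3.connect("goo_gl_map.db")
    conn.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, target TEXT)")
    conn.executemany(
        "INSERT OR REPLACE INTO links VALUES (?, ?)",
        [("abc123", "https://example.com/some/long/destination")],  # made-up row
    )
    conn.commit()

    # Anyone with the file could then resolve a code locally, no Google involved:
    row = conn.execute("SELECT target FROM links WHERE code = ?", ("abc123",)).fetchone()
    print(row[0])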

DominikPeters a day ago | parent | next [-]

It will include many URLs that are semi-private, like Google Docs that are shared via link.

chneu 3 hours ago | parent | next [-]

That's not any better than what ArchiveTeam is doing. They're brute forcing the URLs to capture all of them. So privacy won't really matter here.

ryandrake a day ago | parent | prev | next [-]

If some URL is accessible via the open web, without authentication, then it is not really private.

bo1024 21 hours ago | parent [-]

What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.

prophesi 21 hours ago | parent | next [-]

Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than a 64-byte alphanumeric code.
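
Back-of-the-envelope, treating a goo.gl code as roughly six case-sensitive alphanumeric characters and the parent's "64-byte code" as 64 such characters (both assumptions, just to show the scale gap):

    # 62 possible characters (a-z, A-Z, 0-9) per position
    goo_gl_space = 62 ** 6    # ~5.7e10 -- enumerable, as the brute-force effort shows
    long_random = 62 ** 64    # ~5e114 -- not remotely enumerable
    print(f"{goo_gl_space:.1e} vs {long_random:.1e}")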

hombre_fatal 21 hours ago | parent | next [-]

Sure, but you have to make executive decisions on behalf of people who aren't experts.

Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.

People also upload unlisted YouTube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.

bo1024 21 hours ago | parent | prev | next [-]

I'm not seeing why there's a clear line where GET cannot be authentication but POST can.

prophesi 21 hours ago | parent [-]

Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.

21 hours ago | parent | prev | next [-]
[deleted]
wobfan 16 hours ago | parent | prev [-]

I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64 byte code is even more secure than this, IF it's handled correctly.

21 hours ago | parent | prev [-]
[deleted]
charcircuit 21 hours ago | parent | prev | next [-]

Then use something like argon2 on the keys, so you have to spend a long time to brute force them all, similar to how it is today.
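
A sketch of that idea using scrypt from the standard library (argon2 would need a third-party package; the principle is the same): publish {slow_hash(code): target} instead of the raw codes, so anyone who already knows a code can still resolve it, but enumerating the whole keyspace costs real compute.

    import hashlib

    def slow_key(code: str) -> str:
        # Deliberately expensive hash; these parameters are illustrative, not tuned.
        return hashlib.scrypt(code.encode(), salt=b"goo.gl-dump",
                              n=2**14, r=8, p=1).hex()

    published = {slow_key("abc123"): "https://example.com/destination"}  # made-up entry

    # Someone who knows the code pays the hashing cost once per lookup:
    print(published.get(slow_key("abc123")))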

high_na_euv a day ago | parent | prev [-]

So exclude them

ceejayoz a day ago | parent [-]

How?

How will they know a short link to a random PDF on S3 is potentially sensitive info?

Nifty3929 a day ago | parent | prev | next [-]

I'd rather see it as a searchable database, which I would think is super cheap and no maintenance for Google, and avoids these privacy issues. You can input a known goo.gl and get its real URL, but can't just list everything out.

growt a day ago | parent [-]

And then output the search results as a 302 redirect and it would just be continuing the service.

20 hours ago | parent | prev | next [-]
[deleted]
devrandoom a day ago | parent | prev [-]

Are they all public? Where can I see them?

jedberg a day ago | parent | next [-]

You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.

Alifatisk a day ago | parent | prev [-]

I don't think so, but you can find the indexed URLs here: https://www.google.com/search?q=site%3A"goo.gl" It's about 9.6 million links. And those are just what got indexed; there should be way more out there.

chneu 3 hours ago | parent | next [-]

ArchiveTeam has the list at over 2 billion URLs, with over a billion left to archive.

sltkr 21 hours ago | parent | prev [-]

I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.

ElijahLynn a day ago | parent | prev | next [-]

OMFG - Google should keep these up forever. What a hit to trust. Trust with Google was already bad for everything they killed, this is another dagger.

phyzix5761 21 hours ago | parent [-]

People still trust Google?

spankalee 19 hours ago | parent | prev | next [-]

As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.

No one wants to own this product.

- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.

- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs so they manage the service themselves.

So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.

This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).

This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.

I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.

Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.

gsnedders 11 hours ago | parent | next [-]

To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.

While clearly maintenance and ownership are still a major problem, one could easily imagine that deploying something similar — especially read-only — using GCP's Cloud Run and Bigtable products could be less work to maintain, as you're not chasing anywhere near such a moving target.

rs186 18 hours ago | parent | prev [-]

Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'll be stuck at this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?

spankalee 12 hours ago | parent [-]

Definitely a valid question!

Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.

But some people are motivated to work on internet infrastructure, and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could of course quit): you're supposed to be with a team a minimum of 18 months, and after that, transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating on to the archive team, and/or it could be part-time.

I think the harder thing is getting management buy-in, even from the front-line managers.

romaniv 19 hours ago | parent | prev | next [-]

URL shorteners were always a bad idea. At the rate things are going, I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning, or even client-side replication means that everything you see on the Web right now has an overwhelming probability of permanently disappearing in the near future. This is an astounding engineering oversight for something that's basically the most popular communication system and medium in the world and in history.

Also, it's quite conspicuous that 30+ years into this thing, browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".

davidczech a day ago | parent | prev | next [-]

I don't really get it; it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.

nikanj 21 hours ago | parent [-]

There are two things that are real torture to Google dev teams: 1) being told a product is complete and needs no new features or changes, and 2) being made to work on legacy code.

hinkley 21 hours ago | parent | prev | next [-]

What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.

theandrewbailey 21 hours ago | parent [-]

297

https://killedbygoogle.com/

hinkley 21 hours ago | parent [-]

Oh look it’s been months since they killed a project!

codyogden 16 hours ago | parent [-]

Because there's not much left to kill.

cyp0633 a day ago | parent | prev | next [-]

The runner of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:

Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)

https://news.ycombinator.com/item?id=44117722

krunck a day ago | parent | prev | next [-]

Stop MITMing your content. Don't use shorteners. And use reasonable URL patterns on your sites.

Cyan488 21 hours ago | parent [-]

I have been using a shortening service with my own domain name - it's really handy, and I figure that if they go down I could always manually configure my own DNS or spin up some self-hosted solution.

musicale a day ago | parent | prev | next [-]

How surprising.

https://killedbygoogle.com

hinkley 21 hours ago | parent [-]

That needs a chart.

pentestercrab 21 hours ago | parent | prev | next [-]

There seems to have been a recent uptick in phishers using goo.gl URLs. Yes, even without new URLs being accepted - by registering expired domains that old short links reference.

bunbun69 5 hours ago | parent | prev | next [-]

Isn’t this a good thing? It forces people to think now before making decisions

ccgreg 14 hours ago | parent | prev | next [-]

Common Crawl's count of unique goo.gl links is approximately 10 million. That's in our permanent archive, so you'll be able to consult them in the future.

No search engine or crawler person will ever recommend using a shortener for any reason.

pluc a day ago | parent | prev | next [-]

Someone should tell Google Maps

david422 20 hours ago | parent | prev | next [-]

Somewhat related - I wanted to add short URLs to a project of mine. I was looking around at a bunch of URL shorteners and then realized it would be pretty simple to create my own. It's my content pointed to by my own service, so I don't have to worry about third-party content or other services going down.
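
The core really is tiny: mint a random code, store the mapping, serve a redirect. A minimal sketch of the minting/storage part, with an in-memory dict standing in for a real datastore and a hypothetical domain:

    import secrets

    links: dict[str, str] = {}

    def shorten(target: str) -> str:
        code = secrets.token_urlsafe(4)        # ~6 URL-safe characters
        links[code] = target                   # a real service would also check collisions
        return f"https://short.example/{code}"

    print(shorten("https://example.com/a/very/long/path?with=params"))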

Brajeshwar a day ago | parent | prev | next [-]

What will it really cost for Google (each year) to host whatever was created, as static files, for as long as possible?

malfist a day ago | parent | next [-]

It'd probably cost a couple tens of dollars, and Google is simply too poor to afford that these days. They've spent all their money on AI and have nothing left

chneu 3 hours ago | parent | prev [-]

It's not the cost of hosting/sharing it. It's the cost of employing people to maintain this alongside other Google products.

So, at minimum, assuming there are 2 people maintaining this at Google, that probably means it would cost them $250k/yr in just payroll to keep this going. That's probably a very lowball estimate on the people involved, but it still shows how expensive these old products can be.

rsync 17 hours ago | parent | prev | next [-]

A reminder that the "Oh By"[1] everything-shortener not only exists but can be used as a plain old URL shortener[2].

Unlike the Google URL shortener, you can count on "Oh By" existing in 20 years.

[1] https://0x.co

[2] https://0x.co/hnfaq.html

xutopia 21 hours ago | parent | prev | next [-]

Google is making it harder and harder to depend on their software.

christophilus 20 hours ago | parent [-]

That’s a good thing from my perspective. I wish they’d crush YouTube next. That’s the only Google IP I haven’t been able to avoid.

chneu 3 hours ago | parent [-]

The alternatives just aren't there, either. Nebula is okay but not great. Floatplane is too exclusive. Vimeo... okay.

But maybe a YouTube disruption would be good for video on the internet. Or it might be bad. I don't know.

andrii9 21 hours ago | parent | prev | next [-]

Ugh, I used to use https://fuck.it for short links too. Still a legendary domain, though.

pkilgore 21 hours ago | parent | prev | next [-]

Google probably spends more money a month on coffee creamer for a single conference room than what it would take to preserve this service.

throwaway81523 13 hours ago | parent | prev | next [-]

Cartoon villains. That's what they are.

gedy a day ago | parent | prev | next [-]

At least they didn't release 2 new competing shorteners (d.uo or re.ad, etc.) and expect you to migrate.

micromacrofoot a day ago | parent | prev | next [-]

This is just being a poor citizen of the web, no excuses. Google is a 2 trillion dollar company, keeping these links working indefinitely would probably cost less than what they spend on homepage doodles.

charlesabarnes 21 hours ago | parent | prev | next [-]

Now I'm wondering why Chrome changed its behavior to use share.google links if this is the inevitable outcome.

21 hours ago | parent | prev | next [-]
[deleted]
ChrisArchitect 21 hours ago | parent | prev | next [-]

Discussion on the source from 2024: https://news.ycombinator.com/item?id=40998549

mymacbook 15 hours ago | parent | prev | next [-]

Why is everyone jumping on the blame-the-victims bandwagon?! This is not the fault of users, whether they were scientists publishing papers or members of the general public sharing links. This is absolutely 100% on Alphabet/Google.

When you blame your customer, you have failed.

eviks 6 hours ago | parent [-]

They weren't customers, since they didn't buy anything, and yes, as sweet as "free" is, it is the users' fault for expecting free to last forever.

ChrisArchitect 21 hours ago | parent | prev | next [-]

Noticed recently on some google properties where there are Share buttons that it's generating share.google links now instead of goo.gl.

Is that the same shortening platform running it?

ourmandave a day ago | parent | prev | next [-]

A comment said they stopped making new links and announced back in 2018 it would be going away.

I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.

goku12 21 hours ago | parent | next [-]

For one, not enough people seem to be aware of it. They don't seem to have given that announcement the importance and effort it deserved. Secondly, I can't say that they have a good migration plan when shutting down their services. People scrambling like this to back up the data is rather common these days. And finally, this isn't a service that can be so easily replaced. Even if people knew that it was going away, there would be short-links that they don't remember, but are important nevertheless. Somebody gave an example above - citations in research papers. There isn't much thought given to the consequences when decisions like this are taken.

Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.

chneu 3 hours ago | parent | prev [-]

I just went through the old thread and its comments. It appears Google didn't specifically state they were going to end the service. They hinted that links would continue working, but new ones would not be able to be created. It was left a bit open-ended, and that likely made people think the links would work indefinitely.

This seems to be echoed by ArchiveTeam scrambling to get this archived. I figure they would have backed these up years ago if it were more well known.

pfdietz a day ago | parent | prev | next [-]

Once again we are informed that Google cannot be trusted with data in the long term.

fnord77 21 hours ago | parent | prev | next [-]

They attempted this in 2018:

https://9to5google.com/2018/03/30/google-url-shortener-shut-...

quesera 10 hours ago | parent [-]

From the 2018 announcement:

> URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links

Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).

chneu 3 hours ago | parent [-]

Thanks for pointing this out. That's hilarious.

insane_dreamer a day ago | parent | prev | next [-]

The lesson? Never trust industry.

Bluestein 21 hours ago | parent | prev | next [-]

Another one for the Google [G]raveyard.-

lrvick 21 hours ago | parent | prev [-]

Yet another reminder to never trust corpotech to be around long term.