Google is killing the open web, part 2 (wok.oblomov.eu)
302 points by akagusu 7 hours ago | 249 comments
nwellnhof 6 hours ago | parent | next [-]

Removing XSLT from browsers was long overdue and I'm saying that as ex-maintainer of libxslt who probably triggered (not caused) this removal. What's more interesting is that Chromium plans to switch to a Rust-based XML parser. Currently, they seem to favor xml-rs which only implements a subset of XML. So apparently, Google is willing to remove standards-compliant XML support as well. This is a lot more concerning.

xmcp123 6 hours ago | parent | next [-]

It’s interesting to see the casual slide of Google towards almost Internet Explorer 5.1-style behavior, where standards can just be ignored “because market share”.

Having flashbacks of “<!--[if IE 6]> <script src="fix-ie6.js"></script> <![endif]-->”

granzymes 5 hours ago | parent | next [-]

The standards body is deprecating XSLT with support from Mozilla and Safari (Mozilla first proposed the removal).

Not sure how you got from that to “Google is ignoring standards”.

_heimdall 4 hours ago | parent | next [-]

There's a lot of history behind WhatWG that revolves around XML.

WhatWG is focused on maintaining specs that browsers intend to implement and maintain. When Chrome, Firefox, and Safari all agree to remove XSLT, that effectively decides that WhatWG will remove it from the spec.

I wouldn't put too much weight behind who originally proposed the removal. It's a pretty small world when it comes to web specifications, the discussions likely started between vendors before one decided to propose it.

NewsaHackO 4 hours ago | parent [-]

The issue is that you can’t tell people to put little weight on who originally proposed the removal when the other poster is putting all the weight on Google, who didn’t even propose it in the first place.

_heimdall 3 hours ago | parent [-]

I wouldn't put weight on the initial proposer either way. As best I've been able to keep up with the topic, google has been the party leading the charge arguing for the removal. I thought they were also the first to announce their decision, though maybe my timing is off there.

akerl_ 2 hours ago | parent [-]

It doesn't seem like much of a charge to be led. The decision appears to have been pretty unanimous.

_heimdall 2 hours ago | parent [-]

By browser vendors, you mean? Yes it seems like they were in agreement and many here seem to think that was largely driven by google though that's speculation.

Users and web developers seemed much less on board though[1][2], enough that Google referenced that in their announcement.

[1] https://github.com/whatwg/html/issues/11578 [2] https://github.com/whatwg/html/issues/11523

akerl_ 2 hours ago | parent [-]

Yes, that's what I mean. In this comment tree, you've said:

> google has been the party leading the charge arguing for the removal.

and

> many here seem to think that was largely driven by google though that's speculation

I'm saying that I don't see any evidence that this was "driven by google". All the evidence I see is that Google, Mozilla, and Apple were all pretty immediately in agreement that removing XSLT was the move they all wanted to make.

You're telling us that we shouldn't think too hard about the fact that a Mozilla staffer opened the request for removal, and that we should notice that Google "led the charge". It would be interesting if somebody could back that up with something besides vibes, because I don't even see how there was a charge to lead. Among the groups that agreed, that agreement appears to have been quick and unanimous.

mtillman 2 hours ago | parent | prev | next [-]

I think the person you’re replying to was referring to the partial support of XML rather than the XSLT part.

andrewl-hn 5 hours ago | parent | prev | next [-]

If Mozilla hadn't pushed for it initially, XSLT would probably have stayed around for another decade or longer.

Their board siphons what little money is left out of their "foundation + corporation" combo, and they keep cutting people from the Firefox dev team every year. Of course they don't want to maintain pieces of web standards if it means an extra million for their board members.

echelon 4 hours ago | parent [-]

Mozilla's board are basically Google yes-people.

I'm convinced Mozilla is purposefully engineered to be rudderless: the C-suite draws down huge salaries and approves dumb, mission-orthogonal objectives, in order to keep Mozilla itself impotent and never a threat to Google.

Mozilla is Google's antitrust litigation sponge. But it's also kept dumb and obedient. Google would never want Mozilla to actually be a threat.

If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust. It would have synergized with their core web mission. Those people have since been let go.

cxr 2 hours ago | parent | next [-]

> If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust[…] Those people have since been let go.

The first sentence isn't wrong, but the last sentence is confused in the same way that people who assume that Wikimedia employees are largely responsible for the content on Wikipedia are confused about how stuff actually makes it into Wikipedia. In reality, WMF's biggest contribution is covering infrastructure costs and paying engineers to develop the MediaWiki platform that Wikipedia uses.

Likewise, a bunch of the people who built up MDN weren't and never could be "let go", because they were never employed by Mozilla to work on MDN to begin with.

(There's another problem, too, which is that in addition to selling short a lot of people who are responsible for making MDN as useful as it is but never got paid for it, it presupposes that those who were being paid to work on MDN shouldn't have been let go.)

akerl_ 2 hours ago | parent | prev | next [-]

So the idea is that some group has been perpetuating a decade or so's worth of ongoing conspiracy to ensure that Mozilla continues to exist but makes decisions that "keep Mozilla itself impotent"?

That seems to fail Occam's razor pretty hard, given that the competing hypotheses for each of their decisions include "Mozilla staff think they're doing a smart thing but they're wrong" and "Mozilla staff are doing a smart thing, it's just not what you would have done".

cxr an hour ago | parent [-]

You're not wrong.

And where philosophical razors are concerned, the appropriate diagnosis for Mozilla's decay comes down to the last word in Hanlon's razor.

glenstein 4 hours ago | parent | prev [-]

Can you say more about the teams let go who worked on MDN and Rust? Wondering if I can read anything on it to stay up to speed.

jacquesm 4 hours ago | parent [-]

https://news.ycombinator.com/item?id=24143819

echelon 4 hours ago | parent | prev [-]

The standards body is Google and a bunch of companies consuming Google engine code.

dewey 4 hours ago | parent [-]

I guess you mean except Mozilla and Safari... which are the two other competing browser engines? It's not like it's a room full of Chromium-based browsers.

themafia an hour ago | parent | next [-]

Do Mozilla and Safari _not_ take money from Google?

BolexNOLA 4 hours ago | parent | prev [-]

Safari yes

Mozilla…are they actually competing? Like really and truly.

bigyabai 3 hours ago | parent [-]

Mozilla has proven they can exist in a free market; really and truly, they do compete.

Safari is what I'm concerned about. Without Apple's monopoly control, Safari is guaranteed to be a dead engine. WebKit isn't well-enough supported on Linux and Windows to compete against Blink and Gecko, which suggests that Safari is the most expendable engine of the three.

noosphr 3 hours ago | parent | next [-]

If your main competitor is giving you 90% of your revenue they aren't a competitor.

nerdponx 3 hours ago | parent | prev | next [-]

https://news.ycombinator.com/item?id=45955979 this sibling comment says it best

meindnoch 3 hours ago | parent | prev [-]

>Mozilla has proven they can exist in a free market; really and truly, they do compete.

This gave me a superb belly laugh.

Aurornis 5 hours ago | parent | prev | next [-]

I don’t get the comparison. The XSLT deprecation has support beyond Google.

amarant 4 hours ago | parent [-]

It's just ill-informed ideological thinking. People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

HN has historically been relatively free of such dogma, but it seems times are changing, even here

hn_throwaway_99 4 hours ago | parent | next [-]

Completely agree. You see this all the time in online discourse. I call it the "two things can be true at the same time" problem, where a lot of people seem unable to believe that 2 things can simultaneously be true, in this case:

1. Google has engaged in a lot of anticompetitive behavior to maintain and extend their web monopoly.

2. Removing XSLT support from browsers is a good idea that is widely supported by all major browser vendors.

cxr an hour ago | parent | prev | next [-]

> It's just ill-informed ideological thinking.

> People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

Sure, but a person also needs to be conscious of the role that this plays in securing a premature dismissal of anyone venturing to criticize.

(In quoting your comment above, I've deliberately separated the first sentence from the second. Notice how easily the observation of the phenomenon described in the second sentence can be used to undergird the first claim, even though it doesn't actually follow (i.e. as a natural consequence) from the second.)

pmontra 4 hours ago | parent | prev [-]

Maybe free of the "evil Google" dogma, but not free from dogma. The few who dared to express one tenth of the disapproval that we usually express about Apple nowadays were downvoted to transparent ink in a matter of minutes. Microsoft had its honeymoon period with HN after their pro-open-source campaign, WSL, VSCode, etc. People who prudently remembered the Microsoft of the 90s and the 2000s got their fair share of downvotes. Then Windows 11 happened. Surprise. Actually I thought there had been a consensus about Google being evil for at least ten years, but I might be wrong.

amarant 3 hours ago | parent [-]

"relatively" is meant to be doing a lot of work in my previous comment. Allow me to clarify: Obviously some amount was always there, but it used to be so much less than it is now, and, more importantly, the difference between HN and other social media, such as Reddit, used to be bigger, in terms of amount of dogma.

HN still has less dogma than Reddit, but it's closer than it used to be in my estimation. Reddit is still getting more dogma each day, but HN is slowly catching up.

I don't know where to turn to for online discourse that is at least mostly free from dogma these days. This used to be it.

otabdeveloper4 5 hours ago | parent | prev [-]

So-called "standards" on the Google (c) Internet (c) network are but a formality.

zetafunction 5 hours ago | parent | prev | next [-]

https://issues.chromium.org/issues/451401343 tracks work needed in the upstream xml-rs repository, so it seems like the team is working on addressing issues that would affect standards compliance.

Disclaimer: I work on Chrome and have occasionally dabbled in libxml2/libxslt in the past, but I'm not directly involved in any of the current work.

inejge 4 hours ago | parent | next [-]

I hope they will also work on speeding it up a bit. I needed to go through 25-30 MB SAML metadata dumps, and an xml-rs pull parser took 3x more time than the equivalent in Python (using libxml2 internally, I think.) I rewrote it all with quick-xml and got a 7-8x speedup over Python, i.e., at least 20x over xml-rs.

nwellnhof 2 hours ago | parent [-]

Python ElementTree uses Expat, only lxml uses libxml2. Right now, I'm working on SIMD acceleration in my not yet released, GPL-licensed fork of libxml2. If you have lots of character data or large attribute values like in SVG, you will see tremendous speed improvements (gigabytes per second). Unfortunately, this is unlikely to make it into web browsers.

Ygg2 5 hours ago | parent | prev [-]

Wait. They are going along with an XML parser that supports DOCTYPEs? I get that XSLT is ancient and full of exploits, but so is DOCTYPE. It's literally the poster boy for the billion laughs attack (among other vectors).

mananaysiempre 5 hours ago | parent | next [-]

You don't need an external DTD for that: you can put an ENTITY declaration straight in your source file (the "internal subset"), and the XML spec says it needs to be processed. (I seem to recall someone saying that Adobe tools are fond of putting those in their exported SVG files.)
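For reference, the classic shape of the attack fits in a few lines. A cut-down sketch (three levels of entities instead of the usual ten; names illustrative):

    <?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
    ]>
    <lolz>&lol3;</lolz>

Each level multiplies the expansion tenfold, so the usual ten-level version inflates a kilobyte of input into a billion "lol"s in memory.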

Mikhail_Edoshin 4 hours ago | parent | prev | next [-]

The billion laughs bug was fixed in libxml2 in 2008. (As far as I understand, in .NET this bug was fixed in 2014 with .NET 4.5.2. In 2019 a bug similar to billion laughs was found in the Go YAML parser, although that case was explicitly mentioned and forbidden by the YAML spec. Among other products it affected Kubernetes.)

"Other vectors" probably means a single vector: external entities, where a) you process untrusted XML on a server and b) allow the processor to read external entities. This is not a bug, but early versions of XML processors may lack an option to disallow access to external entities. This has also been fixed.

XSLT has no exploits at all, that is, no features that can be misused.

fabrice_d 4 hours ago | parent | prev [-]

The billion laughs attack has well-known solutions (basically, don't recurse too deep). It's not a reason not to implement DOCTYPE support.

jillesvangurp 5 hours ago | parent | prev | next [-]

> This is a lot more concerning.

I'm not so sure that's problematic. Browsers probably just aren't a great platform for doing a lot of XML processing at this point.

Preserving the half-implemented frozen state of the early 2000s really doesn't serve anyone except those maintaining legacy applications from that era. I can see why they are pulling out the complex C++ code related to all this.

It's the natural conclusion of XHTML being sidelined in favor of HTML5 about 15-20 years ago. The whole web services bubble, bloated namespace processing, and all the other complexity that came with it left behind a lot of gnarly libraries. The world has kind of moved on since then.

From a security point of view it's probably a good idea to reduce the attack surface a bit by moving to a Rust-based implementation. What use cases remain for XML parsing in a browser if XSLT support is removed? I guess some parsing from JavaScript. In which case you could argue that the usual solution in the JS world of using polyfills and e.g. wasm libraries might provide a valid/good enough alternative or migration path.

svieira 5 hours ago | parent | prev | next [-]

> Removing XSLT from browsers was long overdue

> Google is willing to remove standards-compliant XML support as well.

> They're the same picture.

To spell it out, "if it's inconvenient, it goes" is something that the _owner_ does. The culture of the web was "the owners are those who run the web sites; the servants are the software that provides an entry point to the web (read or publish or both)". This kind of "well, it's dashed inconvenient to maintain a WASM layer for a dependency that is not safe to vendor any more as a C dependency" is not the kind of servant-oriented mentality that made the web great, not just as a platform to build on, but as a platform to emulate.

akerl_ 5 hours ago | parent | next [-]

Can you cite where this "servant-oriented" mentality is from? I don't recall a part of the web where browser developers were viewed as not having agency about what code they ship in their software.

svieira an hour ago | parent | next [-]

A nice recent example is "smooshgate", wherein it was determined that breaking websites with an older version of Mootools installed was not an acceptable way to move the web forward, so we got `Array.prototype.flat` instead of `Array.prototype.flatten`: https://news.ycombinator.com/item?id=17141024

> I don't recall a part of the web where browser developers were viewed as not having agency

Being a servant isn't "not having agency", it's "who do I exercise my agency on behalf of". Tools don't have agency, servants do.

akerl_ an hour ago | parent [-]

I think you're reading way too much into that. For one thing, that's a proposal for Javascript, whose controlling body is TC39. For another, this was a bog standard example of a draft proposal where a bug was discovered, and rollout was adjusted. If that's having a "servant-oriented mindset", so do 99% of software projects.

hluska 2 hours ago | parent | prev | next [-]

I’ve never heard of servant oriented, but I understand the point. Browsers process and render whatever the server returns. Whether they’re advertisements that download malware or a long rambling page on whatever I’m interested in now, browsers really don’t have much control over what they run.

akerl_ 2 hours ago | parent [-]

I'm not sure what you're talking about.

1. As we're seeing here, browser developers determine what content the browser will parse and process. This happens in both directions: tons of what is now common JS/CSS shipped first as browser-specific behavior that was then standardized, and also browsers have dropped support for gopher, for SSLv2, and Flash, among other things.

2. Browsers often explicitly provide a transformation point where users can modify content. Ad blockers work specifically because the browser is not a "servant" of whatever the server returns.

3. Plenty of content can be hosted on servers but not understood or rendered by browsers. I joked about Opera elsewhere on the thread, which notably included a torrent client, but Chrome/Firefox/Safari did not: torrent files served by the server weren't run in those browsers.

dpark 5 hours ago | parent | prev | next [-]

It’s utter nonsense. Development of the web has always been advanced by the browser side, as it necessarily must. It’s meaningless for a server/web app to ship a feature that no browser supports.

etchalon 5 hours ago | parent | prev [-]

I cannot imagine a time when browsers were "servant-oriented".

Every browser I can think of was/is subservient to some big-big-company's big-big-strategy.

trinsic2 an hour ago | parent | next [-]

I don't remember it this way. It was my understanding that browsers were designed to browse servers, and that servers, or websites, designed themselves around web standards that grew out of specs the browsers created as part of the browsing experience.

akerl_ 5 hours ago | parent | prev [-]

There have been plenty of browsers that were not part of a big company, either for part or all of their history. They don't tend to have massive market share, in part because browsers are amazingly complex and when they break, users get pissed because their browsing is affected.

Even the browsers created by individuals or small groups don't have, as far as I've ever seen, a "servant-oriented mindset": like all software projects, they are ultimately developed and supported at the discretion of their developer(s).

This is how you get interesting quirks like Opera including torrent support natively, or Brave bundling its own advertising/cryptocurrency thing.

etchalon 4 hours ago | parent [-]

Both of those are strategies aimed at capturing a niche market segment in hopes of attracting them away from the big browsers.

akerl_ 4 hours ago | parent [-]

I guess? I don't get the sense that when the Opera devs added torrents a couple decades ago, they were necessarily doing it to steal users so much as because the developers thought it was a useful feature.

But it doesn't really make a difference to my broader point that browser devs have never had a "servant-mindset".

etchalon 3 hours ago | parent [-]

I agree. They've never had that mindset.

Aurornis 4 hours ago | parent | prev [-]

> The culture of the web was "the owners are those who run the web sites, the servants are the software that provides an entry point to the web (read or publish or both)".

This is an attempt to rewrite history.

Early browser like NCSA Mosaic were never even released as Open Source Software.

Netscape Navigator made headlines by offering a free version for academic or non-profit use, but they wanted to charge as much as $99 (in 1995 dollars!) for the browser.

Microsoft got in trouble for bundling a web browser with their operating system.

The current world where we have true open source browser options like Chromium is probably closer to a true open web than what some people have retconned the early days of the web as being.

glenstein 4 hours ago | parent | next [-]

Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

It's also 32 million lines of code, which makes maintaining it borderline prohibitive if you're planning any importantly different browser architecture without a business plan or significant funding.

There are lots of things that are perfectly forkable and maintainable, and the world is better for them (shoutout to Nextcloud and the various Syncthing forks). But insofar as Chromium is a test of the health and openness of the software ecosystem, I think it is not much of a positive signal, on account of what it would realistically require to fork and maintain for any non-trivial repurposing.

dpark 4 hours ago | parent [-]

> Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

By these criteria no software is open source.

glenstein 2 hours ago | parent [-]

I would disagree: corporate open source involves corporate dominance over governance to fit internal priorities. It meets the legal definition rather than the cultural model, which is community-driven and often multi-stakeholder. I would put Debian, VLC, and LibreOffice in the latter camp.

akerl_ 2 hours ago | parent [-]

Is it often multi-stakeholder? Debian has bureaucracy and a set group of people with commit permissions. VLC likewise has the VideoLAN organization. LibreOffice has The Document Foundation.

It seems like most open source projects either have:

1. A singular developer, who controls what contributions are accepted and sets the direction of the project, or

2. An in-group / foundation / organization / etc. that does the same.

Do you have an example of an open source project whose roadmap is community-driven, any more than Google or Mozilla accepting bug reports, feature requests, and patches and then deciding whether to merge them?

glenstein an hour ago | parent [-]

A lot of the governance structures with "foundation" in their name, e.g. the Apache Foundation, Linux Foundation, and Rust Foundation, involve some combination of corporate parties, maintainers, and independent contributors, without any singular corporate heavy hand responsible for their momentum.

I don't know that road maps are any more or less "community driven" than anything else, given the nature of their structures, but one can draw a distinction between them and projects with a high degree of corporate alignment, like React (Facebook) or Swift (Apple).

I'm agreeable enough to your characterization of open source projects. It's broad but, I think, charitably interpreted, true enough. But I think you can look at the range of projects and distinguish the multi-stakeholder ones from those with consolidated control, and by their degree of alignment with specific corporate missions.

When Google tries to, or is able to, muscle through Manifest V3, or FLoC, or AMP, it's not modeling a benevolent actor standing on open source principles.

akerl_ an hour ago | parent [-]

My argument is that "open source principles" do not suggest anything about how the maintainers have to handle input from users.

Open source principles have to do with the source being available and users being able to access/use/modify the source. Chrome is an open source project.

To try to expand "open source principles" to suggest that if the guiding entity is a corporation and they have a heavy hand in how they steer their own project, they're not meeting those principles, is just incorrect.

The average open source project is run by a person or group with a set of goals/intentions for the project, and they make decisions about the project based on those goals. That includes sometimes taking input from users and sometimes ignoring it.

croes 4 hours ago | parent | prev [-]

The web wasn’t the browser, it was the protocols.

dpark 4 hours ago | parent | next [-]

That’s not an accurate statement. The web was not just the protocols. It was the protocols and the servers that served them and the browsers that supported them and the web sites that were built with them. There is no web without browsers just like there is no web without websites.

hluska 2 hours ago | parent [-]

I can’t understand why you’re splitting hairs to this extent. The web is protocols; some are implemented on the server side whereas others are implemented on the browser side. They’re all still protocols, with a big dollop of marketing.

That statement was accurate enough if you’re willing to read actively and provide people with the most minimal benefit of the doubt.

dpark 2 hours ago | parent [-]

My response is in a chain discussing browsers in response to someone who literally said “The web wasn’t the browser it was the protocols.”

I responded essentially “it was indeed also the browser”, which it seems you agree with so I don’t know what you’re even trying to argue about.

> willing to read actively and provide people with the most minimal benefit of the doubt.

Indeed

akerl_ 4 hours ago | parent | prev [-]

Most of the protocol specs were written retroactively to match functionality that browsers were already using in the wild.

dietr1ch 4 hours ago | parent | prev | next [-]

> Currently, they seem to favor xml-rs which only implements a subset of XML.

Which seems to be a sane decision, given that XML allows for data blow-ups[^0]. I'm not sure what specific subset of XML `xml-rs` implements, but to me it seems insane to fully implement XML because of this.

[^0]: https://en.wikipedia.org/wiki/Billion_laughs_attack

zzo38computer 3 hours ago | parent | prev | next [-]

I think it might make more sense to use WebAssembly and make them as extensions which are included by default (many other things possibly should also be made as extensions rather than built-in functions). The same can be done for picture formats, etc. This would improve security while also improving the versatility (since you can replace parts of things), if the extension mechanism would have these capabilities.

(However, I also think that generally you should not require too many features, if it can be avoided, whether those features are JavaScripts, TLS, WebAssembly, CSS, and XSLT. However, they can be useful in many circumstances despite that.)

_heimdall 4 hours ago | parent | prev | next [-]

Given that you have experience working on libxslt, why do you think they should have removed the spec entirely rather than improving the current implementation or moving towards modern XSLT 3?

gnatolf 3 hours ago | parent | prev | next [-]

I was somewhat confused and irritated by the lack of a clear frontrunner crate for XML support in Rust. I get that XML isn't sexy, but still.

cptskippy 2 hours ago | parent | prev | next [-]

> Currently, they seem to favor xml-rs which only implements a subset of XML.

What in particular do you find objectionable about this implementation? It's only claiming to be an XML parser, it isn't claiming to validate against a DTD or Schema.

The XML standard is very complex and broad; I would be surprised if anyone has implemented it in its entirety beyond a company like Microsoft or Oracle. Even then I would question it.

At the end of the day, much of XML is hard if not impossible to use or maintain. A lot of it was defined without much thought given to practicality, and most developers will never have to deal with most of its eccentricities.

James_K 5 hours ago | parent | prev [-]

What's long overdue is them updating to a modern version of XSLT.

dfabulich 5 hours ago | parent | prev | next [-]

In part 1 of this article, the author wrote, "XSLT is an essential companion to RSS, as it allows the feed itself to be perused in the browser"

Actually, you can make an RSS feed user-browsable by using JavaScript instead. You can even run XSLT in JavaScript, which is what Google's polyfill does.
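For instance, a minimal sketch of the JS route using the standard XSLTProcessor API, which is the same interface the polyfill reimplements (file names hypothetical; assumes a module context for top-level await):

    const parse = async (url) =>
      new DOMParser().parseFromString(await (await fetch(url)).text(), "text/xml");
    const proc = new XSLTProcessor();
    proc.importStylesheet(await parse("feed.xsl"));  // load the transform
    document.body.replaceChildren(
      proc.transformToFragment(await parse("feed.xml"), document));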

I've written thousands of lines of XSLT. JavaScript is better than XSLT in every way, which is why JavaScript has thrived and XSLT has dwindled.

This is why XSLT has got to go: https://www.offensivecon.org/speakers/2025/ivan-fratric.html

ndriscoll 5 hours ago | parent | next [-]

> JavaScript is better than XSLT in every way

Obviously not in every way. XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages. It gives a very low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements. You link a template just like you link a CSS file to reuse styles. Obvious.
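For the unfamiliar, the hookup really is one line, directly analogous to linking a CSS file (file names hypothetical):

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <page>
      <title>Hello</title>
    </page>

The browser fetches site.xsl and renders the transformed output in place of the raw XML.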

The equivalent Javascript functionality's documentation[0] starts going on about classes and callbacks and shadow DOM, which is by contrast not at all approachable for someone who just wants to make a web page. Obviously Javascript is necessary if you want to make a web application, but those are incredibly rare, and it's expected that you'll need a programmer if you need to make an application.

Part of the death of the open web is that the companies that control the web's direction don't care about empowering individuals to do simple things in a simple way without their involvement. Since there's no simple, open way to make your own page that people can subscribe to (RSS support having been removed from browsers instead of expanded upon for e.g. a live home page), everyone needs to be on e.g. Facebook.

It's the same with how they make it a pain to just copy your music onto your phone or backup your photos off of it, but instead you can pay them monthly for streaming and cloud storage.

[0] https://developer.mozilla.org/en-US/docs/Web/API/Web_compone...

munificent 5 hours ago | parent | next [-]

> XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages.

Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

I'd be willing to bet good money that the Venn diagram of users that fit the intersection of "authoring content for the web", "care about separating content from HTML", "comfortable with HTML", "not comfortable with JavaScript", and "able to ramp up on XSLT" is pretty small.

At some point, we have to just decide "sorry, this use case is too marginal for every browser to maintain this complexity forever".

basscomm an hour ago | parent | next [-]

> Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

Hi! I'm a non-programmer who picked up XSLT of my own volition and spent the last five-ish years using it to write a website. I even put up all the code on github: https://github.com/zmodemorg/wyrm.org

I spent a few weeks converting the site to use a static site generator, and there were a lot of things I could do in XSLT that I can't really do in the generator, which sucks. I'd revert the entire website in a heartbeat if I knew that XSLT support would actually stick around (in fact, that's one of the reasons I started with XSLT in the first place: I didn't think that support would go away any time soon, but here we are).

ndriscoll an hour ago | parent [-]

For what it's worth, you can still run an XSL processor as a static generator. You of course lose some power, like using document() to include information for a logged-in user, but if it's a static site then that's fine.

basscomm 44 minutes ago | parent [-]

Users don't log in to my site.

I eventually started using server-side XSL processing (https://nginx.org/en/docs/http/ngx_http_xslt_module.html) because I wanted my site to be viewable in text-based browsers, too, but it uses the same XSLT library that the browsers use and I don't know how long it's going to be around.

a456463 3 hours ago | parent | prev | next [-]

I did. Just because the herd says it's dead doesn't mean XSLT is dead or "bad"

matwood 4 hours ago | parent | prev | next [-]

> Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

Admittedly this was 20ish years ago, but I used to teach the business analysts XSLT so they could create/edit/format their own reports.

At the time Crystal Reports had become crazy expensive, so I developed a system that would send the data to the browser as XML along with an XSLT to format the report. It provided basic interactivity and could be edited by people other than me. Also, if I remember correctly, at the time it only worked in IE because it was the only browser with the transform function.

ndriscoll 5 hours ago | parent | prev | next [-]

I was such a non-programmer as a child, yes. At the time that XSLT was new, if you read a book on HTML and making web pages from the library, it would tell you about things like separating content from styles and layout, yes. Things that blew my mind were that you could install Apache on your own computer and your desktop could be a website, or (as I learned many years later) that you could make a server application (or these days now Javascript code) that calls a function based on a requested path instead of paths being 1:1 with files. By contrast, like I said XSLT was just a natural extension of HTML for something that everyone who's written a couple web pages wants to do.

The fact that the web's new owners have decided that making web pages is too marginal a use-case for the Web Platform is my point.

ErroneousBosh 4 hours ago | parent [-]

> it would tell you about things like separating content from styles and layout, yes.

That's what CSS does.

antod 2 hours ago | parent | next [-]

XSLT is really separating (XML) data from markup in the case of the web. More generally it's transforming between different XML formats.

But in the case of docs (e.g. XSL-FO for DocBook, DITA, etc.) XSLT does actually separate content from styling.

ndriscoll 4 hours ago | parent | prev [-]

Yes, that's why XSLT is such a natural fit when you learn about HTML+CSS. It's the same idea, but applied to HTML templates, which is something you immediately want when you hand-write HTML (e.g. navbars, headers, and footers that you can include on every page).

ErroneousBosh 4 hours ago | parent [-]

Your problem here is that you're hand-writing HTML including all the templates. This wasn't a good way to do it 30 years ago and it's not a good way to do it now.

See all these "static site generators" everyone's into these days? We used those in the mid-90s. They were called "Makefiles".

ndriscoll 4 hours ago | parent [-]

Yeah because I was 11 and didn't know what a Makefile was. That's my point. I wanted to make web pages, and didn't know any programming. HTML is designed to be hand-written. You just write text, and when you want it to look different, you wrap it in a thing. When doing this, you'll quickly want to re-use snippets/invent your own tags. XSLT gives a solution to this without saying "okay let's back up and go learn how to use a command line now, and probably use an entirely different document format" (SSGs) or "okay let's back up and learn about functions, variables, classes, and callbacks, and maybe a compiler" (Javascript). It just says "when you want to make your own tags, extract them into a 'template' tag, then include your templates just like you include a CSS file for styles".
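As a concrete sketch of that "make your own tags" workflow (element and file names invented), the linked stylesheet copies everything through unchanged and expands the custom tag:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- identity: pass everything through unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
      <!-- except <navbar/>, which expands to real HTML -->
      <xsl:template match="navbar">
        <ul class="nav">
          <li><a href="/">Home</a></li>
          <li><a href="/posts/">Posts</a></li>
        </ul>
      </xsl:template>
    </xsl:stylesheet>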

jeffbee 4 hours ago | parent | prev | next [-]

Funnily enough, XSLT is one of those things that I don't know very well but LLMs do. I find that I can ask Gemini to blurt out an XSLT implementation of my requirements given a snippet of example doc, and I have used this to good effect in some web scrapers/robots.

rendaw 5 hours ago | parent | prev | next [-]

I've seen non-programmers learn SQL, and SQL is far more inconsistent, complex, non-orthogonal, fragmented, footgunny, and user hostile than most programming languages.

I'm not sure what I mean by this, WRT XSLT vs Javascript.

righthand 3 hours ago | parent | prev [-]

I did after reading about it. I immediately moved my personal site to it and got rid of the crap JS site I had.

dfabulich 5 hours ago | parent | prev | next [-]

XSL is a Turing-complete functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.

Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, and you keep your hands off of xsl:for-each, XSL feels declarative and doesn't feel that bad.

The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.

XSL's "escape hatch" is to allow arbitrary Turing-complete transformations, with <xsl:variable>, <xsl:for-each>, and <xsl:if>. This makes easy transformations easy and hard transformations possible.

XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.

> It gives a low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements.

JavaScript is a much better low-effort high-power on-ramp to templated web pages with no build steps or server software. JavaScript is the natural fit for adding custom HTML elements (web components).

Seriously, XSLT is worse than JavaScript in every way, even at the stuff that XSLT is best at. Performance/bloat? Worse. Security? MUCH worse. Learnability / language design? Unimaginably worse.

EDIT: You edited your post, but the Custom Element API is for interactive client-side components. If you just want to transform some HTML on the page into other HTML as the page loads, you can use querySelectorAll, the jQuery way.
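e.g., a minimal sketch of that pattern (tag name invented):

    // expand every <x-note> into plain styled HTML as the page loads
    document.querySelectorAll("x-note").forEach((el) => {
      const div = document.createElement("div");
      div.className = "note";
      div.append(...el.childNodes);  // move the original children across
      el.replaceWith(div);
    });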

Mikhail_Edoshin 2 hours ago | parent | next [-]

Come on. With XSLT you write a rule and then write a fragment of the resulting document.

    <xsl:template match="abc">
      <def ghi="jkl"/>
    </xsl:template>
This is one of the simplest ways to do things. With JavaScript you what? Call methods?

    document.createElement("def").setAttribute("ghi", "jkl")
There is a ton of "template engines" (all strictly worse than XSLT); why do people keep writing them? Why did people invent JSX, with all its complicated machinery, if plain JavaScript is better?
James_K 5 hours ago | parent | prev [-]

> Security? MUCH worse.

This is patently false. It is much better for security if you use one of the many memory-safe implementations of it. This is like saying “SSL is insecure because I use an implementation with bugs”. No, the technology is fine. It's your buggy implementation that's the problem.

ndriscoll 5 hours ago | parent [-]

XSLT used as a pre-processor is obviously also a fundamentally better model for security because... it's used as a preprocessor. It cannot spy on you and exfiltrate information after page load because it's not running anymore (so you can't do voyeuristic stuff like capture user mouse movements or watch where they scroll on the page). It also doesn't really have the massive surface Javascript does for extracting information from the user's computer. It wasn't designed for that; it was designed to transform documents.

spankalee 4 hours ago | parent | prev | next [-]

I'm a web components guy myself, but that's not the equivalent JavaScript functionality at all, as XSLT doesn't even have components.

XSLT is a functional transform language. The equivalent JavaScript would be something like a registry of pure Node -> Node functions with associated selectors, and a TreeWalker that walks the XML document, invokes matching functions, and emits the result into a new document.

Or you could consume the XML as data into a set of React functions.

ErroneousBosh 4 hours ago | parent | prev | next [-]

> not at all approachable for someone who just wants to make a web page

If someone wants to make a web page they need to learn HTML and CSS.

Why would adding a fragile and little-used technology like XSLT help?

basscomm an hour ago | parent | next [-]

> Why would adding a fragile and little-used technology like XSLT help?

A few years ago I bought a bunch of Skylanders for practically nothing when the toys-to-life fad faded away. To keep track of everything I made a quick and dirty XSLT script that sorted and organized the list of figures and formatted each one based on their 'element'. That would have been murderous to do in plain HTML and CSS: https://wyrm.org/inventory/skylanders.xml
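(Not the actual code from that site, but the core of such a page is roughly an xsl:sort plus a per-figure template:)

    <xsl:template match="figures">
      <xsl:apply-templates select="figure">
        <xsl:sort select="@element"/>
        <xsl:sort select="name"/>
      </xsl:apply-templates>
    </xsl:template>
    <xsl:template match="figure">
      <!-- attribute value template: one CSS class per 'element' -->
      <div class="{@element}"><xsl:value-of select="name"/></div>
    </xsl:template>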

Mikhail_Edoshin 2 hours ago | parent | prev [-]

Because you do not want to create web pages, but to render some information in the form of web pages. And as you write that information you make distinctions unique to a) this information and b) your approach to it. One of the best ways to do this is to come up with a custom set of XML tags. You write about chess? Fine: invent tags to describe games, positions, and moves. Or maybe a tutorial on Esperanto? Fine: invent a notation to highlight the lexical structure and the grammar. You can be as detailed as you want, and at the same time you can ignore anything you do not care about.

And then you want to merely render this semantically rich document into HTML. This is where XSLT comes in.

dist-epoch 2 hours ago | parent | prev [-]

Nobody learned web programming by putting XSLT on top of XML.

This is a fantasy world that does not exist.

People used PHP, or a tool which created HTML (Dreamweaver), or a website, or maybe an LLM today.

Pet_Ant 5 hours ago | parent | prev | next [-]

JavaScript is ever-evolving, which means you need to stick to one of the two browser engines (WebKit or Firefox) and keep upgrading. XSLT hasn't changed in years. It's an actual standard instead of an evolving one.

I know that other independent browsers that I used to use back in the day just gave up because the pace of divergence pushed by the major implementations meant that it wasn't feasible to keep up independently.

I still miss Konqueror.

pitaj 4 hours ago | parent [-]

JavaScript is backwards compatible. You can use an older standard supported by everything if you wish.

Pet_Ant 3 hours ago | parent [-]

Really? Because I have an old iPad (4th gen?) that no longer works on many sites. If it was backwards compatible they'd still function.

O4epegb 2 hours ago | parent | next [-]

You are confusing backwards and forwards compatibility. Those sites may have added features that your iPad does not support, which is why they broke; if they had not added those features, the sites might still work.

However, JS is not 100% backwards compatible either. It is largely backwards compatible, but there are rare cases of bug fixes or deprecated APIs being removed that break old code; and that is not even JS itself, it's more the web/engine standards.

demurgos 2 hours ago | parent | prev [-]

You are talking about forward compatibility.

JS is backwards compatible: new engines support code using old features.

JS is not forward compatible: old engines don't support code using new features.

Regarding your iPad woes, the problem is not the engine but websites breaking compat with it.

The distinction matters, as it means that once a website is published it will keep working; usually the only way to break an existing website is to publish a new version. The XSLT situation is noteworthy as an exception to this rule.

skobes 4 hours ago | parent | prev | next [-]

Your link is just the abstract, I had to hunt for the full talk:

https://www.youtube.com/watch?v=U1kc7fcF5Ao

But it is quite interesting, and learning about the security problems of the document() function in particular (described @ 19:40-25:38) made me more convinced that removing XSLT is a good decision.

kuschku 3 hours ago | parent | prev | next [-]

> Actually, you can make an RSS feed user-browsable by using JavaScript instead

Say I have an XML document that uses XSLT, how do I modify it to apply your suggestion?

I've previously suggested the XML stylesheet tag should allow

    <?xml-stylesheet type="application/javascript" href="https://example.org/script.js"?>
which would then allow the script to use the service-worker APIs to intercept and transform the request.

But with the implementation available today, I see no way to provide a first-class XSLT-like experience with JS.

LtWorf 3 hours ago | parent | prev | next [-]

No you can't, since opening an RSS feed won't run JavaScript.

throw_m239339 36 minutes ago | parent | prev | next [-]

> by using JavaScript instead

I think you're entirely missing the point of RSS by saying that. RSS doesn't and shouldn't require JavaScript.

Now, feeds could somehow be written in some bastard HTML5 directly, but please don't bring JavaScript into that debate.

XSLT allows transforming an XML document into an HTML presentation without the need for JavaScript; that's its purpose.

ErroneousBosh 4 hours ago | parent | prev [-]

> In part 1 of this article, the author wrote, "XSLT is an essential companion to RSS, as it allows the feed itself to be perused in the browser"

Wow. I can see the proposed scrapping of XSLT being a huge problem for all of the seven people who do this.

thayne 5 hours ago | parent | prev | next [-]

I don't disagree that Google is killing the open web. But XSLT is a pretty weak argument for showing that. It is an extremely complicated feature that is very seldom used. I am very doubtful dropping support is some evil political decision. It is much more likely they just don't want to sink resources into maintaining something that is almost never used.

For the specific use case of showing RSS and Atom feeds in the browser, it seems like a better solution would be to have built-in support in the browser, rather than relying on the use of XSLT.

AlotOfReading 4 hours ago | parent | next [-]

The sites that will be broken are disproportionately important, though: Congress.gov/govinfo.gov, weather.gov, europa.eu, plus dozens of sites for libraries and universities.

Looking only at how many sites use a feature gives you an incomplete view. If a feature were only used by Wikipedia, it'd still be inappropriate to deprecate it with a breaking change and a short (1yr) migration window. You work with the important users to retire it and then start pulling the plug publicly to notify everyone you might have missed.

Fileformat 4 hours ago | parent | prev [-]

Of course built-in support for RSS would be better. But what are the chances of that happening?

homebrewer 2 hours ago | parent | next [-]

We already had it, both Firefox and the old Opera supported viewing (and subscribing to) RSS feeds.

thayne 4 hours ago | parent | prev [-]

Probably better than browser makers committing to maintaining an XSLT library.

righthand 2 hours ago | parent [-]

They didn’t have to maintain it. There was a simpler solution: switch to a library that wasn’t broken.

dpark 4 hours ago | parent | prev | next [-]

This has nothing to do with the “open web”. I don’t know if the people saying this just don’t have a meaningful definition of what open means or what. “Open” doesn’t mean “supports everything anyone has ever shipped in a browser”. (Chrome should support Gopher, really? Gopher was literally never part of the World Wide Web.)

What’s happening is that Google (along with Mozilla and Safari) are changing the html spec to drop support for xslt. If you want to argue that this is bad because it “breaks the web”, that’s fine, but it has nothing at all to do with whether the web is “open”. The open web means anyone can run a web server. Anyone can write a web site. Anyone can build their own compatible browser (hypothetically; this has become prohibitively expensive). It means anyone can use the tech, not that the tech includes everything possible.

If you want to complain about Google harming the open web, there are some real examples out there. Google Reader deprecation probably hurt RSS more than anything else. AMP was/is an attempt to give Google tighter control over more web traffic. Chrome extension changes were pushed through seemingly to give Google tighter control over ad blockers. Gemini in the search results is an attempt to keep Google users from ever actually clicking through to web sites for information.

XSLT in the browser has been dead for years. The reality is that no browser developer has cared about XSLT since 1.0. Don’t blame Google for the death of XSLT when XSLT 2.0 was standardized before Chrome was even released and no one else cared enough to implement it. The removal of XSLT doesn’t change the openness of the web, and the reality is that it breaks very little while eliminating a source of real security errors.

shadowgovt 3 hours ago | parent [-]

> Google Reader deprecation probably hurt RSS more than anything else

And, indeed, if the protocol was one killer app deprecation and removal away from being obsolete, the problem was the use case, not the protocol.

(Personally, I don't think RSS is dead; it's very much alive in podcasting. What's dead is people consuming content from specific sites as a subscription model instead of getting most of their input slop-melanged in through their social media feeds; they don't care about the source of the info, they just want the info. I don't think that's something we fix with improved RSS support; it's a behavior issue looking for a better experience than Facebook, not for everyone to wake up one day and decide to install their own feed reader and stop browsing Facebook or Twitter or even Mastodon for links all day).

ndriscoll 3 hours ago | parent [-]

It wasn't just one killer app deprecation/removal away. RSS was also integrated into browsers at one point, and then removed. You wouldn't need a social media feed if your browser home page already gave you your timeline, and if it were trivial for any web page to add a "subscribe" button. But instead of known, proven use-cases that have clear demand, we get Javascript APIs for niche stuff like flashing firmware onto USB devices.

Aurornis 6 hours ago | parent | prev | next [-]

I have yet to read an article complaining about XSLT deprecation from someone who can explain why they actually used it and why it’s important to them.

> I will keep using XSLT, and in fact will look for new opportunities to rely on it.

This is the closest I’ve seen, but it’s not an explanation of why it was important before the deprecation. It’s a declaration that they’re using it as an act of rebellion.

ndiddy 5 hours ago | parent | next [-]

My guess is that a lot of the controversy is simply because this is one of the first times that a major web feature has been removed from the web standards. For the past 20+ years, people have grown to expect that any page they make will remain viewable indefinitely. It doesn't matter that most people don't like XSLT, or that barely any sites use it. Removing XSLT does break some websites and that violates their expectation, so they get mad at it reflexively.

As someone who's interested in sustainable open source development, I also find the circumstances around the deprecation interesting and worth talking about. The XSLT implementation used by all the browsers is a 25-year-old C library whose maintainer recently resigned after having to constantly deal with security bugs reported by large companies that don't provide any financial contribution or meaningful assistance to the project. It seems like the browser vendors were fine with the status quo of having XSLT support as long as they didn't have to contribute any resources to it. As soon as that free maintenance went away and they were faced with either paying someone to continue maintenance or writing a new XSLT library in a safer language, they weren't willing to pay the market value of what it would cost and decided to drop the feature instead.

jerf 5 hours ago | parent | prev | next [-]

What a horrible technology to wrap around your neck for rebellion's sake. XSLT didn't succeed, because it's fundamentally terrible and was a bad idea from the very beginning.

But I suppose forcing one's self to use XSLT just to spite Google would constitute its own punishment.

crazygringo 6 hours ago | parent | prev | next [-]

Yeah, the idea that it's some kind of foundation of the "open web" is quite silly.

I've used XSLT plenty for transforming XML data for enterprises but that's all backend stuff.

Until this whole kerfuffle I never knew there was support for it in the browser in the first place. Nor, it seems, did most people.

If there's some enterprise software that uses it to transform some XML that an API produces into something else client-side, relying on a polyfill seems perfectly reasonable. Or just move that data transformation to the back-end.

zekica 5 hours ago | parent | prev | next [-]

I used it. It's an (ugly) functional programming language that can transform one XML document into another - think of it as Lisp for XML processing, but even less readable.

It can work great when you have XML you want to present nicely in a browser by transforming it into XHTML while still serving the browser the original XML. One use I had was to show the contents of RSS/Atom feeds as a nice page in a browser.

rwmj 4 hours ago | parent | next [-]

I would just do this on the server side. You can even do it statically when generating the XML. In fact until all the stuff about XSLT in browsers appeared recently, I didn't even know that browsers could do it.

wizzwizz4 an hour ago | parent [-]

Converting the contents of an Atom feed into (X)HTML means it's no longer a valid Atom feed. The same is true for many other document formats, such as flattened ODF.

fuzzzerd 5 hours ago | parent | prev [-]

I have done the same thing with sitemap.xml.

Fileformat 4 hours ago | parent | prev | next [-]

Making RSS/Atom feeds friendly to new users is key for their adoption, and for the open web. XSLT is the best way to do that.

I made a website to promote using XSLT for RSS/Atom feeds. Look at the before/after screenshots: which one will scare off a non-techie user?

https://www.rss.style/
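(Mechanically it's a single processing instruction at the top of the feed; stylesheet name hypothetical:)

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
        ...
      </channel>
    </rss>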

cpill an hour ago | parent | next [-]

Yes, but why??? You're on the website, and you have a link to the syndicated feed for the website you're on, and you want to make the feed look good in the browser... so they can click the link to the website _you are already on_??? The argument that you should be looking at the feed XML in the browser instead of the website is bonkers. Feeds are not meant to replace the website, because if they were, why have the website?!

kstrauser 4 minutes ago | parent [-]

I just checked and I’ve had 3 hits for my blog’s RSS feed from a legit-looking browser user agent string this year. Almost literally no one reads my site via RSS in the browser. Quite a few people fetch the feed from separate clients.

I wouldn’t spend 5 minutes making that feed look pretty for browser users because no one will ever see it. I don’t know who these mythical visitors are who 1) know what RSS is and 2) want to look at it in Chrome or Safari or Firefox.

shadowgovt 4 hours ago | parent | prev [-]

RSS and Atom feeds are at this point a solution looking for a problem.

I use RSS all the time... To keep up-to-date on podcasts. But for keeping up to date on news, people use social media. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Facebook.

basscomm an hour ago | parent [-]

> But for keeping up to date on news, people use social media. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Facebook.

I guess if you don't use social media or facebook you're out of luck?

shadowgovt an hour ago | parent [-]

I don't see why. You can always subscribe to a newspaper. Or just use RSS and a subscription tool since it didn't just go away.

What I'm saying, though, is if you don't use social media at this point you're already an outlier. (I am, it should be noted, using the term broadly: you are using social media. Right now. Hacker News is in the same category as Facebook, Twitter, Mastodon, et al. in this context: it's a place you go to get information instead of using a collection of RSS feeds, and I think the reason people do this instead of that may be instructive as to the ultimate fate of RSS for that use case.)

basscomm 33 minutes ago | parent [-]

> You can always subscribe to a newspaper.

The circulation for my local newspaper is so small that they now get printed at a press a hundred miles away and are shipped in every morning to the handful of subscribers who are left. I don't even know the last time I saw a physical newspaper in person.

> Hacker News... it's a place you go to get information instead of using a collection of RSS feeds

No, it's a place I go to _in addition_ to RSS feeds. An anonymous news aggregator with web forum attached isn't really social media. Maybe some people hang out here to socialize, but that's not a use case for me

shadowgovt 29 minutes ago | parent [-]

The relevant use case is you come here to see links people share and comment on them. That's sufficiently "social" in this context.

Contrast that with the other use case you dabble in (the one that makes you an outlier): pulling content from specific sources (I'm going to assume sources generating original content, not themselves link aggregators, otherwise this topic is moot) via RSS. Most people see that as redundant if they have access to something like HN, or Fark, or Reddit, or Facebook. RSS readers alone, in general, don't let you share your thoughts with other people reading the article, so they're not as popular a tool.

basscomm an hour ago | parent | prev | next [-]

> I have yet to read an article complaining about XSLT deprecation from someone who can explain why they actually used it and why it’s important to them.

I used it to develop a website because I'm not a programmer, but I still want to have some basic templates on my webpage without having to set up a dev environment or a static site generator. XML and XSLT extend HTML _just enough_ to let me do some fun things without me having to become a full-on programmer.

roywashere 5 hours ago | parent | prev | next [-]

All browsers ever implemented was XSLT 1.0, from 1999. XSLT 2.0 and 3.0 exist, and there is an open-source Java-based implementation of them (Saxon), but they never made it into libxslt or browsers!

danwilsonthomas 3 hours ago | parent | prev | next [-]

Imagine you have users who want to view an XML document as a report of some kind. You can easily do this right now by having them upload a document and attaching a stylesheet to it. I do this to let people view after-game reports for a video game (Nebulous: Fleet Command). They come in as XML and I transform them to HTML. Now I do this all client-side, using the browser support for XSLT and about 10 lines of JavaScript, because I don't want to pay for and run a server for file uploads. But if I did, the XSLT support in the browser would still make it truly trivial to do.

Now this obviously isn't critical infrastructure, but it sucks getting stepped on and I'm getting stepped on by the removal of XSLT.
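
For reference, those ~10 lines are roughly this shape (a sketch; the element id and stylesheet path are invented, not the actual site's code):

    // A sketch, not the site's actual code: load a local XML file,
    // run it through an XSLT stylesheet, and show the result inline.
    async function renderReport(file) {
      const parser = new DOMParser();
      const xml = parser.parseFromString(await file.text(), 'application/xml');
      const xslSrc = await (await fetch('/report.xsl')).text();
      const xsl = parser.parseFromString(xslSrc, 'application/xml');
      const proc = new XSLTProcessor();
      proc.importStylesheet(xsl);                      // compile the stylesheet
      const frag = proc.transformToFragment(xml, document);
      document.getElementById('report').replaceChildren(frag);
    }
    // Wired to a file input, so nothing is ever uploaded:
    // fileInput.addEventListener('change', () => renderReport(fileInput.files[0]));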

James_K 5 hours ago | parent | prev | next [-]

I use XSLT because I want my website to work for users with JavaScript disabled and I want to present my Atom feed link as an HTML document on a statically hosted site without breaking standards compliance. Hope this helps.

matthews3 5 hours ago | parent | next [-]

Could you run XSLT as part of your build process, and serve the generated HTML?
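
With libxslt's command-line tool, that's a one-liner at build time (file names hypothetical):

    xsltproc -o feed.html feed.xsl feed.xml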

kuschku 3 hours ago | parent | next [-]

I have arduinos with sensors providing their measurements as XML, with an external XSLT stylesheet to make them user-friendly. The arduinos have 2KB RAM and 16 MIPS.

Which build process are you talking about? Which XSLT library would you recommend for running on microcontrollers?

matthews3 2 hours ago | parent [-]

> Which build process are you talking about?

The one in the comment I replied to.

kuschku 2 hours ago | parent [-]

Fair, but that shows the issue at hand, doesn't it? XSLT is a general solution, while most alternatives are relatively specific solutions.

(Though I've written repeatedly about my preferred alternative to XSLT)

bilog 4 hours ago | parent | prev | next [-]

XML source + XSLT can be considerably more compact than the resulting transformation, saving on hosting and bandwidth.

zetanor 4 hours ago | parent [-]

The Internet saves a lot more on storage and bandwidth costs by not shipping an XSLT implementation with every browser than it does by allowing Joe's Blog to present XML as an index.

LtWorf 3 hours ago | parent [-]

You redownload your browser every request‽

James_K 4 hours ago | parent | prev [-]

No because then it would not be an Atom feed. Atom is a syndication format, the successor to RSS. I must provide users with a link to a valid Atom XML document, and I want them to see a web page when this link is clicked.

This is why so many people find this objectionable. If you want to have a basic blog, you need some HTML documents and an RSS/Atom feed. The technologies required to do this are HTML for the documents and XSLT to format the feed. Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.

ErroneousBosh 4 hours ago | parent | next [-]

> Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.

How so? You're just generating static pages. Generate ones that work.

James_K 4 hours ago | parent [-]

You cannot generate a valid RSS/Atom document that also renders as HTML.

shadowgovt 4 hours ago | parent [-]

So put them on separate pages because they are separate protocols (HTML for the browser and XML for a feed reader), with a link on the HTML page to be copied and pasted into a feed reader.

It really feels like the developer has over-constrained the problem to work with browsers as they are right now in this context.

kuschku 3 hours ago | parent [-]

> So put them on separate pages because they are separate protocols

Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?

It's the same content, just supplied in a different format. It should be the same URL.

ErroneousBosh 28 minutes ago | parent | next [-]

> Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?

The difference between HTTP/2 and HTTP/1.1 is exactly like the difference between plugging your PC in with a green cable or a red cable. The client neither knows nor cares.

> It's the same content, just supplied in a different format. It should be the same URL.

So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format.

kuschku 10 minutes ago | parent [-]

> The difference between HTTP/2 and HTTP/1.1 is exactly like the difference between plugging your PC in with a green cable or a red cable. The client neither knows nor cares.

Just like protocol negotiation, HTTP has format negotiation and XML postprocessing for exactly the same reason.

> So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format

Whatever you want? If I access example.org/example.png, most websites will return a webp or avif instead if my browser supports it.

Similarly, it makes sense to return an XML with XSLT for most browsers and a degraded experience with just a simple text file for legacy browsers such as NCSA Mosaic or 2027's Google Chrome.
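
A rough sketch of that negotiation on the server side (a hypothetical Node handler; real feed readers send wildly varying Accept headers, so the sniffing here is deliberately naive):

    // One URL, two representations, negotiated on the Accept header.
    import { createServer } from 'node:http';
    import { readFile } from 'node:fs/promises';

    createServer(async (req, res) => {
      if (req.url !== '/feed') { res.writeHead(404); res.end(); return; }
      const accept = req.headers.accept || '';
      if (accept.includes('text/html')) {
        // Browsers advertise text/html: give them the XML that carries
        // an xml-stylesheet processing instruction (or prebuilt HTML).
        res.writeHead(200, { 'Content-Type': 'application/xml' });
        res.end(await readFile('feed-styled.xml'));
      } else {
        // Feed readers and everything else get the bare Atom document.
        res.writeHead(200, { 'Content-Type': 'application/atom+xml' });
        res.end(await readFile('feed.xml'));
      }
    }).listen(8080);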

zzo38computer 3 hours ago | parent | prev [-]

There are separate URLs for "https:" vs "http:", although they usually point to the same content when both are available (I have seen some where it isn't the same), while the compression (and some other stuff) is decided by headers. However, it might make sense to optionally include some of these things within the URL (within the authority section and/or scheme section somehow): compression, version of the internet, version of the protocol, certificate pinning, etc., in a way that is easily delimited so that a program that understands this convention can ignore them. However, that might make a mess.

I had also defined a "hashed:" scheme for specifying the hash of the file referenced by the URL; this is a scheme that includes another URL. (The "jar:" scheme is another one that includes another URL, and is used for referencing files within a ZIP archive.)

gldrk 4 hours ago | parent | prev [-]

>I must provide users with a link to a valid Atom XML document, and I want them to see a web page when this link is clicked.

Do RSS readers and browsers send the same Accept header?

cpill an hour ago | parent | prev [-]

Yeah, but WHY? If they are on the website, why would they want to look at the feed for the website, on the website, in the browser, instead of just looking at the website? If the feed is so amazing, why have the website in the first place? Oh yeah, you need something to make the feed out of :D

6510 3 hours ago | parent | prev [-]

If you have a lot of XML data and need a UI that does complex operations that scream XPath, it would be rather spectacular if that could be done without much of a back end, in the browser, without JS.

I'm not good enough with XSLT to know if it is worth creating the problem that fits the solution.

andsoitis 6 hours ago | parent | prev | next [-]

I don’t know. The author makes some arguments I could entertain and get behind, but they also enumerate the immense complexity that they want web browsers to support (incl. Gopher).

Whether or not Google deprecating XSLT is a “political” decision (in the author's words), I don't know for sure, but I can imagine running the Chrome project and steering for more simplicity.

coldpie 6 hours ago | parent | next [-]

The drama around the XSLT stuff is ridiculous. It's a dead format that no one uses[1], no one will miss, no one wants to maintain, and that provides significant complexity and attack surface. It's unambiguously the right thing to do to remove it. No one who actually works in the web space disagrees.

Yes, it's a problem that Chrome has too much market share, but XSLT's removal isn't a good demonstration of that.

[1] Yes, I already know about your one European law example that you only found out exists because of this drama.

lunar_mycroft 5 hours ago | parent | next [-]

The fact that people didn't realize that a site used XSLT before the recent drama is meaningless. Even as a developer, I don't know how most of the sites I visit work under the hood. Unless I have a reason to go poking around, I would probably never know whether a site used react, solid, svelte, or jquery.

But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility. If you make a web site which only relies on web standards (i.e. not vendor specific features or 3rd party plugins), you can/could expect it to keep working forever. Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.

Oh, and as the linked article points out, the attack surface concerns are obviously bad faith. The polyfill means browser makers could choose to sandbox it in a way that would be no less robust than their existing JS runtime.

coldpie 5 hours ago | parent | next [-]

> Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.

No, this is wrong.

Maintaining XSLT support has a cost, both in providing an attack surface and in employee-hours just to keep it around. Suppose it is not used at all, then removing it would be unquestionably good, as cost & attack surface would go down with no downside. Obviously it's not the case that it has zero usage, so it comes down to a cost-benefit question, which is where popularity comes in.

lunar_mycroft 4 hours ago | parent [-]

I want to start out by noting that despite both the linked article and the very comment you're replying to pointing out that the security excuse is transparently bad faith, you still trotted it out, again.

And no, it really isn't a cost-benefit question. Or if you'd prefer, the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost. As it stood, as a web developer you only needed to make sure that your code followed standards and it would continue to work. If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular (or rather, whether browser makers will continue to _think_ they're popular, which is very much not the same thing).

coldpie 4 hours ago | parent [-]

> security excuse is transparently bad faith, you still trotted it out

I don't see any evidence supporting your assertion of them acting in bad faith, so I didn't reply to the point. Sandboxes are not perfect, they don't transform insecure code into perfectly secure code. And as I've said, it's not only a security risk, it's also a maintenance cost: maintaining the integration, building the software, and testing it, is not free either.

It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question. I don't see anyone at Chrome acting in bad faith with regards to XSLT removal. The drama here is really overblown.

> the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost ... If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular.

This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage. No one actually cares about this feature, they just like screaming at Google. (To be fair, so do I! But you gotta pick your battles, and this particular argument is a dud.)

lunar_mycroft 3 hours ago | parent [-]

> It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question

If browser makers had simply said that maintaining all the web standards was too much work and they were opting to deprecate parts of it, I'd likely still object, but I wouldn't be calling it bad faith. As it stands, however, they and their defenders continue to cite alleged security problems as one of, if not the, primary reason to remove XSLT. This alleged security justification is a lie. We know it's a lie because there exists a trivial way to virtually eliminate the security burden XSLT places on browser maintainers without deprecating it, and the Chrome team is well aware of this option. There is no significant difference in security between "shipping an existing polyfill which implements XSLT inside the browser's sandbox instead of outside it" and "removing all support for XSLT", so security isn't the reason they're very deliberately choosing the latter over the former.

> This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage

This isn't a counterargument; you've just repeated your point that XSLT (allegedly) isn't sufficiently well used to justify maintaining it, ignoring the fact that said tradeoff being made by browser maintainers in the first place is a problem.

gspencley 5 hours ago | parent | prev [-]

> But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility.

The fact that you put "contract" in quotes suggests that you know there really is no such thing.

Backwards compatibility is a feature. One that needs to be actively valued, developed and maintained. It requires resources. There really is no "the web platform." We have web browsers, servers, client devices, telecommunications infrastructure - including routers and data centres, protocols... all produced and maintained by individual parties that are trying to achieve various degrees of interoperability between each other and all of which have their own priorities, values and interests.

The fact that the Internet has been able to become what it is, despite being built on foundational technologies none of which anticipated the usage requirements placed on their current versions, really ought to be labelled one of the wonders of the world.

I learned to program in the early to mid 1990s. Back then, there was no "cloud", we didn't call anything a "web application" but I cut my teeth doing the 1990s equivalent of building online tools and "web apps." Because everything was self-hosted, the companies I worked for valued portability because there was customer demand. Standardization was sought as a way to streamline business efficiency. As a young developer, I came to value standardization for the benefits that it offered me as a developer.

But back then, as well as today, if you looked at the very recent history of computing; you had big endian vs little endian CPUs to support, you had a dozen flavours of proprietary UNIX operating systems - each with their own vendor-lock-in features; while SQL was standard, every single RDBMS vendor had their own proprietary features that they were all too happy for you to use in order to try and lock consumers into their systems.

It can be argued that part of what has made Microsoft Windows so popular throughout the ages is the tremendous amount of effort that Microsoft goes through to support backwards compatibility. But even despite that effort, backwards compatibility with applications built for earlier version of Windows can still be hit or miss.

For better or worse, breaking changes are just part and parcel of computing. To try and impose some concept of a "contract" on the Internet to support backwards compatibility, even if you mean it purely figuratively, is a bit silly. The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers. If only an extreme minority of "customers" require native xslt support in the web browser, to use today's example, it makes zero business sense to pour resources into maintaining it.

lunar_mycroft 4 hours ago | parent [-]

> The fact that you put "contract" in quotes suggests that you know there really is no such thing.

It's in quotes because people seem keen to remind everyone that there's no legal obligation on the part of the browser makers not to break backwards compatibility. The reasoning seems to be that if we can't sue google for a given action, that action must be fine and the people objecting to it must be wrong. I take a rather dim view of this line of reasoning.

> The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers.

As you yourself pointed out, the web is a giant pile of cobbled-together technologies that all seemed like a good idea at the time. If breaking changes were an option, there is a _long_ list of potential deprecations to pick from which would greatly simplify development of both browsers and websites/apps. Further, new features/standards could be added with much less care, since if problems were found in those standards they could be removed/reworked. Despite those huge benefits, no such changes are/should be made, because the costs of breaking backwards compatibility are just that high. Maintaining the implied promise that software written for the web will continue to work is a business requirement, because it's crucial for the long-term health of the ecosystem.

basscomm an hour ago | parent | prev | next [-]

I've been running a small hobby site using XML and XSLT for the last five or so years, but Google refused to index it because Googlebot doesn't execute XSLT. I can't be the only one, but good luck Googling it

bryanrasmussen 6 hours ago | parent | prev | next [-]

>Yes, I already know about your one European law example

What example is that?

coldpie 5 hours ago | parent [-]

This page is styled via an XSLT transform: https://www.europarl.europa.eu/politicalparties/index_en.xml The drama mongers like to bring it up as an example of something that will be harmed by XSLT's removal, but it already has an HTML version, which is the one people actually use.

Analemma_ 6 hours ago | parent | prev | next [-]

Another bit of ridiculousness is pinning the removal on Google. Removing XSLT was proposed by Mozilla and unanimously supported with no objections by the rest of the WHATWG. Go blame Mozilla if you want somebody to get mad at, or least blame all the browser vendors equally. This has nothing to do with Chrome’s market share.

basscomm an hour ago | parent | next [-]

Shouldn't the users of the Web also get a say? There's been a lot of blowback on this decision, so this isn't as cut and dried as it's being made out to be

troupo 6 hours ago | parent | prev [-]

Google are the ones immediately springing into action. They only started collecting feedback on which sites might break after they had already pushed an "Intent to Remove" and prepared a PR to remove it from Chromium.

hn_throwaway_99 6 hours ago | parent [-]

> Google are the ones immediately springing into action.

You say that like it's a bad thing. The proposal was already accepted. The most useful way to get feedback about which sites would break is to actually make a build without XSLT support and see what breaks.

troupo 6 hours ago | parent | prev [-]

> It's a dead format that no one uses[1],

This has to be proven by Google (and other browser vendors), not by people coming up with examples. The guy pushing the "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue, when people started posting examples: https://github.com/whatwg/html/issues/11523#issuecomment-315...

Meanwhile Google's own document says that's not how you approach deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

Also, "no one uses it" is rich considering that XSLT's usage is 10x the usage of features Google has no trouble shoving into the browser and maintaining. Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firerox) https://chromestatus.com/metrics/feature/timeline/popularity....

XSLT deprecation is a symptom of how browser vendors, and especially Google, couldn't give two shits about the stated purposes of the web.

To quote Rich Harris from the time when Google rushed to remove alert/confirm: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones" https://dev.to/richharris/stay-alert-d

Aurornis 6 hours ago | parent | next [-]

> Also, "no one uses it" is rich considering that XSLT's usage is 10x the usage of features Google has no trouble shoving into the browser and maintaining. Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with …

Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages setting up a device. I wouldn’t expect it to show up on a large percentage of page loads.

XSLT was supposed to be a broad standard with applications beyond single setup pages. The fact that those two features are used similarly despite one supposedly being a broad standard and the other being a niche feature that only gets used in unique cases (device setup or debugging) is only supportive of deprecating XSLT, IMO

kstrauser 5 hours ago | parent | next [-]

Furthermore, you can’t polyfill USB support. It’s something that the browser itself must support if it’s going to be used at all, as by definition it can’t run entirely inside the browser.

That’s not true for XSLT, except in the super-niche case of formatting RSS prettily via linking to XSLT like a stylesheet, and the intersection of “people who consume RSS” and “people who regularly consume it directly through the browser” has to be vanishingly small.

troupo 5 hours ago | parent | prev [-]

> Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages

So, if XSLT sees 10x the usage of USB, we can consider it a "niche technology that is 10x as useful as USB"

> The fact that those two features are used similarly

You mean USB is used on 10x fewer pages than XSLT, despite HN telling me every time that it's an absolutely essential technology for PWAs or something.

coldpie 5 hours ago | parent | prev [-]

> This has to be proven by Google (and other browser vendors), not by people coming up with examples

What, to you, would constitute sufficient proof? Is it feasible to gather the evidence your suggestion would require?

PaulHoule 6 hours ago | parent | prev | next [-]

The case for JPEG XL is much better than that for XSLT. On the other hand, people who program in C will always be a little terrified of XML and everything around it since the parsing code will be complex and vulnerable.

pcleague 6 hours ago | parent [-]

Having a background in C/C++, that was the problem I ran into when I had to learn XSLT at a translation company that used it to style documents across multiple formats. The upside of using XML was that you could store semantically rich info in the tags for the translators and designers. The downside, of course, with all the metadata, was that the files could be really large, and the XSLT was usually programmed specifically for a particular document and very verbose, so an XSLT template might only be used a couple of times.

PaulHoule 5 hours ago | parent [-]

XSLT is really strange in that it's not really what people think it is. It's really a pattern-matching and production rules system right out of the golden age of AI but people think it is just an overcomplicated Jinja2 or handlebars.
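
For instance, a stylesheet in that style is just a set of independent rewrite rules (hypothetical vocabulary here) that the processor fires as it recursively walks the tree; there is no main loop the author writes:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Rule: any <para>, wherever it appears, becomes a <p> -->
      <xsl:template match="para">
        <p><xsl:apply-templates/></p>
      </xsl:template>
      <!-- Rule: <emphasis> inside anything becomes <em> -->
      <xsl:template match="emphasis">
        <em><xsl:apply-templates/></em>
      </xsl:template>
      <!-- The engine picks whichever rule matches each node -->
    </xsl:stylesheet>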

zzo38computer an hour ago | parent | prev | next [-]

If you really want to improve the simplicity, there are better ways to do so than excluding Gopher.

(Also, they could make XSLT (and many other built-in things) into an extension instead, therefore making the core system simpler.)

ForHackernews 5 hours ago | parent | prev | next [-]

The company that invented "Web Bluetooth" doesn't have a leg to stand on whining about "immense complexity" in having to maintain old stable features in their browser implementation.

ablob 6 hours ago | parent | prev [-]

"Steering for more simplicity" would be a political decision. Keeping it is also a political decision.

Removing a feature that is used, while possibly making Chrome more "simple", also forces all the users of that feature to react to it, lest their efforts be lost to incompatibility. There is no way this cannot be a political decision, given that either way one side will have to cope with the downsides of whatever is (or isn't) done.

PS: I don't know how much the feature is actually used, but my rationale should apply to any X where X is a feature considered to be pruned.

crazygringo 6 hours ago | parent | next [-]

No, the idea is that "political decision" is used in opposition to a decision based on rational tradeoffs.

If there isn't enough usage of a feature to justify prioritizing engineering hours to it instead of other features, so it's removed, that's just a regular business-as-usual decision. Nothing "political" about it. It's straightforward cost-benefit.

However, if the decision is based on factors beyond simple cost-benefit -- maintaining or removing a feature because it makes some influential group happy, because it's part of a larger strategic plan to help or harm something else, then we call that a political decision.

That's how the term "political decision" is used in this kind of context; that's what it means.

troupo 5 hours ago | parent [-]

> If there isn't enough usage of a feature to justify prioritizing engineering hours to it instead of other features, so it's removed, that's just a regular business-as-usual decision. Nothing "political" about it. It's straightforward cost-benefit.

Then why is Google actively shoving multiple hardware APIs into the browser (against the objections of other vendors) if their usage is 10x less than that of XSLT?

They have no trouble finding the resources to develop and maintain those.

crazygringo 5 hours ago | parent | next [-]

You have to keep developing new things to see what proves useful in the long-run.

When you have something that's been around for a long time and still shows virtually no usage, it's fine to pull the plug. It's a kind of evolution. You can kill things that are proven to be unpopular, while building things and giving them the time to see if they become popular.

That's what product feature iteration is.

Attrecomet 4 hours ago | parent | prev [-]

WebSerial and WebUSB are the best thing to happen to browsers since sliced bread. Just because you can't see why it's amazing that users no longer need to give some random, badly supported driver SYSTEM/root privileges to run their specialized hardware (encompassing hobbyist, educational and professional uses) doesn't mean it's not obviously useful, and Mozilla's stance on keeping it out of Firefox will just harm their market share in these areas, education probably being the most painful.

From what I gather here, XSLT's functionality OTOH is easily replaced, and unlike the useful hardware support you're raging against, is a behemoth to support.

tracker1 5 hours ago | parent | prev [-]

I would argue that FTP and Gopher were far more broadly used in browsers than XSLT ever was... but they still got removed. They also likely didn't present nearly the support burden that XSLT does.

charcircuit 5 hours ago | parent | prev | next [-]

>Mozilla bent over to Google's pressure to kill off RSS by removing the “Live Bookmarks” features from the browser

They both were just responding to similar market demands because end users didn't want to use RSS. Users want to use social media instead.

>This is a trillion-dollar ad company who has been actively destroying the open web for over a decade

Google has both done more for and invested more into progressing the open web than anyone else.

>The WHATWG aim is to turn the Web into an application delivery platform

This is what web developers want, and browsers are reacting to the natural demands of developers, who are reacting to the demands of users. It was an evolutionary process that got it to that state.

>but with their dependency on the Blink rendering engine, controlled by Google, they won't be able to do anything but cave

Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.

Fileformat 4 hours ago | parent | next [-]

I think that "market demands" is a bit of a misnomer. RSS was (and remains) too tech-y for the mainstream.

If browser vendors had made it easy for mainstream users, would there have been as much "market demand"?

Between killing off Google Reader and failing to support RSS/Atom, Google handed social media to Facebook et al.

glenstein 4 hours ago | parent [-]

Exactly. Those changes, I believe, were made at the time to create space for Google Plus (which, in an alternative reality with some different choices and different execution, could very well have been a relevant entrant into the social media space).

It involved driving a stake through the heart of Google Reader, perhaps the most widely used RSS reader on the planet, with ripple effects that led to the de-emphasis of RSS across the internet. Starting the historical timeline after those choices and summarizing it as an absence of market demand overlooks the fact that intentional choices were made on this front to roll RSS back rather than to emphasize it and make it accessible.

charcircuit 4 hours ago | parent [-]

The writing was already on the wall by the time Google Reader shut down.

>usage of Google Reader has declined

https://googlereader.blogspot.com/2013/03/powering-down-goog...

glenstein 3 hours ago | parent [-]

I would respectfully disagree in the following sense: I think the choice to shut down Google Reader and deprioritize RSS across the Google ecosystem (including the browser) did more to impact the trajectory of RSS than whatever was already in motion prior to the Reader shutdown.

And the same is true in the other direction, I want RSS to be a success but that would hinge on affirmative choices by major actors in the space choosing to sustain it.

glenstein 4 hours ago | parent | prev | next [-]

>Google has both done more for and invested more into progressing the open web than anyone else.

One could also make that case about Microsoft with Microsoft office in the '90s. Embrace extend extinguish always involves being a contributor in the beginning.

>Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.

Yeah and winning Asia Physical 100 is easier than winning a World's Strongest Man competition, and standing in a frying pan is preferable to jumping in a fire.

I'm baffled by appeals to the open source nature of Blink and Chromium to suggest that they're positive indicators of an open web that any random Joe could jump in and participate in. That's only the case if you're capable of the monumental weightlifting that comes with the task.

gbalduzzi 5 hours ago | parent | prev | next [-]

I agree with everything, but just to be clear:

> This is what web developers want

I don't think it is what web developers want, it is what customers expect.

Of course there are plenty of situation where the page is totally bloated and could be much leaner, but the overall trend to build web applications instead of web pages is dictated by user expectations and, as a consequence, requirements.

LtWorf 3 hours ago | parent [-]

Users say "the page shall not load in less than 15 seconds and shall not use less than 5% of my monthly dataplan"?

Odd… are these people with us?

carlosjobim 4 hours ago | parent | prev [-]

> They both were just responding to similar market demands because end users didn't want to use RSS. Users want to use social media instead.

How does that become a market demand to remove RSS? There are tons of features within browsers which most users don't use. But they do no harm staying there.

wryoak 6 hours ago | parent | prev | next [-]

I think imma convert my blog to XML/XSLT. Nobody reads it anyway, but now I’ll be able to blame my lack of audience on chrome.

et1337 5 hours ago | parent | prev | next [-]

I’m no Google fan, but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people. It would be one less thing for independent browsers like Ladybird to worry about, thus actually weakening Google’s chokehold on the browser market.

basscomm an hour ago | parent [-]

> but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people

There's a lot of back and forth on every discussion about XSLT removal. I don't know if I would categorize that as 'without upsetting too many people'

gwbas1c 4 hours ago | parent | prev | next [-]

For the past 10-15 years, every time I look at web standards, it always feels like someone is trying to make browsers support their specific niche use case.

Seems like removing XSLT (and offering a polyfill replacement) is just a move in the direction of stopping applications from pushing their complexity into the browser.

pmdr 3 hours ago | parent | prev | next [-]

Google is just one of the companies killing the open web. None of them will say it outright, but they'll just scrounge up enough "security" reasons for their decisions to seem palatable, even to the HN crowd.

They're just turning up the heat, even more so since AI became a thing.

dang 4 hours ago | parent | prev | next [-]

Prequel:

Google is killing the open web - https://news.ycombinator.com/item?id=44949857 - Aug 2025 (181 comments)

Also related. Others?

XSLT RIP - https://news.ycombinator.com/item?id=45873434 - Nov 2025 (459 comments)

Removing XSLT for a more secure browser - https://news.ycombinator.com/item?id=45823059 - Nov 2025 (337 comments)

Intent to Deprecate and Remove XSLT - https://news.ycombinator.com/item?id=45779261 - Nov 2025 (149 comments)

XSLT removal will break multiple government and regulatory sites - https://news.ycombinator.com/item?id=44987346 - Aug 2025 (146 comments)

Google did not unilaterally decide to kill XSLT - https://news.ycombinator.com/item?id=44987239 - Aug 2025 (128 comments)

"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (535 comments)

Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)

Evidlo an hour ago | parent | prev | next [-]

Why can't the polyfill be enabled by default? It would fix the security issues and we wouldn't have to worry about breaking websites.

The JS polyfill also makes supporting modern XSLT feasible.

basscomm an hour ago | parent [-]

I tried the JS polyfill on some of the basic XSLT that I wrote, and it only kinda worked. I can only imagine how it would fail on anything with any complexity.

jamesbelchamber 6 hours ago | parent | prev | next [-]

Do the up-and-coming new browsers/engines (Servo, Ladybird.. others?) plan to support XSLT? If they do already, do they want to remove it?

righthand 3 hours ago | parent [-]

Yes they are going to support it because there are modern libraries that do.

pjmlp 6 hours ago | parent | prev | next [-]

It is Chrome OS Platform nowadays, powered by Chrome market share, and helped by everyone shipping Electron garbage.

yegle 5 hours ago | parent | prev | next [-]

Isn't the decision made by all the browser vendors (including Apple and Mozilla)?

etchalon 5 hours ago | parent [-]

They're obviously in on it. /s

apeters 5 hours ago | parent | prev | next [-]

The day will come when DRM is used to protect the whole http body.

silon42 5 hours ago | parent [-]

Cutting us Linux users off the Web.

doublerabbit 5 hours ago | parent [-]

Probably a good thing. Allows us to use it as an opportunity to make a new "web" without the mess of HTTP.

spankalee 5 hours ago | parent | prev | next [-]

This page makes some wild claims, like that Google wants to deprecate MathML, even though it basically just landed. Yes, the Chrome team wasn't prioritizing the work and it came through Igalia, but the best time for Chrome to kill MathML would have been before it was actually usable on the web.

The post also fails to mention that all browsers want to remove XSLT. The topic was brought up in several meetings by Firefox reps. It's not a Google conspiracy.

I also see that the site is written in XHTML and think the author must just really love XML, and doesn't realize that most browser maintainers think that XHTML was a mistake and a failure. Being strict on input and failing to render anything on an error is antithetical to the "user agent" philosophy that says the browser should try to render something useful to the user anyway. Forgiving HTML is just better suited for the messy web. I bet this fuels some of their anger here.

zzo38computer an hour ago | parent | next [-]

XHTML does have some advantages compared with ordinary HTML, such as more consistent parsing, since the file specifies where literal text is used and which commands are or are not blocks expected to contain other things.

(It could still try to render in case of an error, but display the error message as well, perhaps.)

kstrauser 5 hours ago | parent | prev [-]

I was all in on the concept of XHTML back in the day because it seemed obviously superior to chaotic, messy HTML. Nothing got me off that bandwagon as effectively as me converting a web app to emit pristine, validated XHTML and learning that no 2 browsers could process it the same way. Forget pixel-perfect layout and all that jazz. I couldn’t even get them to display the whole page reliably.

koakuma-chan 5 hours ago | parent | prev | next [-]

I didn't know XSLT existed before this drama.

righthand 5 hours ago | parent | next [-]

That’s because they didn’t want you to know about it. Hence letting it languish for 20 years, two major versions behind. The players doing this have been intentionally doing it for a few decades.

canvas12 4 hours ago | parent | prev [-]

me too

overgard 3 hours ago | parent | prev | next [-]

This guy seems pretty focused on XML-based standards, but I think the reason XML-based standards are dying is that people don't like working with XML.

altmind 6 hours ago | parent | prev | next [-]

Do you remember that Chrome lost FTP support recently? The protocol was widely used and simple enough.

ErroneousBosh 4 hours ago | parent | next [-]

"Was" is the key here. FTP has been obsolete for 20 years.

chb 6 hours ago | parent | prev [-]

Widely used? By whom? Devs who don't understand rsync or scp? Give me a practical scenario where a box is running FTP but not SSH.

Edit: then account for the fact that this rare breed of content uploader doesn't use an FTP client... there's absolutely no reason to have FTP client code in a browser. It's an attack surface that is utterly unnecessary.

Demiurge 5 hours ago | parent | next [-]

Also, the protocol is pretty much a holdover from the earliest days, before encryption or complicated NATs. I remember using it with just telnet a few times. It's pretty cool, but absolutely nobody should be using FTP these days. I remember saying this back in 2005, and here we are 20 years later, with someone still lamenting dropping FTP support from a browser? I think we're decades overdue.

tracker1 5 hours ago | parent | next [-]

I'm not lamenting its removal... but I will say that it was probably a huge multiple more popular and widely used than XSLT in the browser ever was.

Demiurge 5 hours ago | parent [-]

I'm genuinely curious about that. But this says a lot about how different these standards are. FTP really needed a good successor, which it never got. So there is a strong use case, but a technical deficiency in the protocol. FTP was overcome by a myriad of web forms and web-drive sites as a way to fill the gap. Still, resumable chunked uploads are really hard to implement from scratch, even now.

Dropping XSLT is about something different. It's not bad in an obvious way. It's things like code complexity vs. applicability. It's definitely not as clear an argument to me, and I haven't touched XSLT in the past 20 years of web development, so I am not sure about the trade-offs.

grumbel 4 hours ago | parent | prev | next [-]

The problem wasn't that FTP got deprecated, but that we never got a proper successor. With FTP you could browse a directory tree like it was a real file system. With HTTP you can't; it has no concept of a directory. rsync is the closest thing to a real successor, but no web browser supports that either.

Demiurge 4 hours ago | parent [-]

I agree that we should get a successor, but had FTP been deprecated way back, I think we would have been more likely to get one. For just downloads, I have used Apache and nginx directory and file listing functionality with ease.

koakuma-chan 5 hours ago | parent | prev [-]

I worked for a company where I had to take screenshots every minute and upload them via FTP for review to get paid. If there were multiple screenshots with the same thing on the screen, there would be questions.

ErroneousBosh 4 hours ago | parent [-]

Did you do any work besides taking screenshots and trying to figure out why FTP was broken this time?

Your old job's broken workflow is not a good reason for keeping a fundamentally broken protocol that relies on allowing Remote Code Execution as a privileged user around.

koakuma-chan 28 minutes ago | parent [-]

I wrote a tool that took screenshots automatically and used FileZilla to upload them :) And my comment is in support of removing FTP, because it was lame.

tracker1 5 hours ago | parent | prev [-]

Linking to an FTP file from a web page.

tiffanyh 5 hours ago | parent | prev | next [-]

Isn't Google one of the few (if not the only) major tech companies that would want to keep the open web alive, given their business model?

bilog 4 hours ago | parent [-]

Their business model is selling ads. They don't give a rat's ass about the open web.

shadowgovt 4 hours ago | parent | prev | next [-]

Okay, I was entertaining the author's position to a point, but I have to get off the train where they sing the praises of NPAPI.

Hey fam. I remember NPAPI. I wrote a very large NPAPI plugin.

The problem with NPAPI is that it lets people run arbitrary code as your browser. It was barely sandboxed. At best, it let any plugin do its level best to crash your browser session. At worst, it's a third-party binary blob you can't inspect, running in the same thing you use to control your bank account.

NPAPI died for a good reason, and it has little to do with someone wanting to control your experience and everything to do with protecting you, the user, from bad actors. I think the author tips their hand a little too far here; the world they're envisioning is one where the elite hackers among us get to keep using the web and everyone else just gets owned by mechanisms they can't understand, and that's fine because it lets us be "wild" and "free" like we were in the nineties and early aughts again. Coupled with the author's downplaying of the security concerns in the XSLT lib, the author seems comfortable with the notion that security is less important than features, and I think there's a good reason that the major browser creators and maintainers disagree.

The author's dream, at the bottom, "a mesh of building blocks," is asking dozens upon dozens upon dozens of independent operators to put binary blobs in your browser outside the security sandbox. We stopped doing that for very, very good reasons.

zzo38computer an hour ago | parent [-]

> put binary blobs in your browser outside the security sandbox

There are reasons to do this sometimes, but usually it would be better to put them inside of the security sandbox (if the security sandbox can be designed in a good way).

The user (or system administrator) could manually install and configure any native code extensions (without needing to recompile the entire browser), but sandboxed VM codes would also be available and would be used for most stuff, rather than the native code.

shadowgovt an hour ago | parent [-]

We already have two infrastructures to do that: the JavaScript engine and wasm.

And, indeed, part of the XSLT deprecation proposal involves, in essence, moving XSLT processing from the browser-native layer to wasm, as a polyfill that a site author can opt into.

zzo38computer an hour ago | parent [-]

Yes, what I meant (one way to handle what the author proposed; possibly not exactly what they meant) is that many of these "building blocks" can be made from wasm (although I have some criticism of that too; nevertheless, it will do). Many would be included by default, and others would be set up by the user if desired. Native code extensions (e.g. .so files) would also be possible but are not needed for most things, and if you set things up from the app store, or from stuff specified by the document or server, then only sandboxed VM codes would be possible and native codes would not be allowed in those circumstances.

kellengreen 5 hours ago | parent | prev | next [-]

Today I Learned: There's a built-in class called XSLTProcessor.

zzo38computer 2 hours ago | parent | prev | next [-]

I like the idea they mentioned of "a browser made up of composable components, protocol handlers separate from primary document renderers separate from attachment handlers", and I had the same idea. (Not all browsers will have to be implemented in this way, and they are not necessarily all the same, but this can be helpful when you want this.)

There can be two kind of extensions, sandboxed VM codes (e.g. WebAssembly) and native codes; the app store will only allow sandboxed VM codes, and any native codes that you might want must be installed and configured manually.

There is also the issue of such things as: identification of file formats (such as MIME), character sets, proxies, etc.

I had made up the Scorpion protocol and file format, which is intended to sit between Gemini and "WWW as it should be if it were designed better". It uses ULFI rather than MIME (to avoid some of the issues of MIME), supports the TRON character code, and the Scorpion conversion file can be used to specify a way to handle unknown file formats (there are several ways this can be specified, including by a uxn code).

So, an implementation can be versatile to support things that can be useful beyond only MIME and Unicode etc.

Adding some additional optional specifications to WWW might also help, e.g. a way to specify that certain parts of the document are supposed to be overridden by the user's specifications in the client when they are available (although in some cases the client could guess: e.g. if the CSS only selects by HTML commands and media queries and not by anything else (or there is no CSS at all), then it should be considered unnecessary and the user's CSS can be used instead when specified). Something like the Scorpion conversion file would be another possibility, perhaps via a response header.

The previous "Google is killing the open web" article also mentions some similar things, but also a few others:

> in 2015, the WHATWG introduces the Fetch API, purportedly intended as the modern replacement for the old XMLHttpRequest; prominently missing from the new specification is any mention or methods to manage XML documents, in favor of JSON

Handling XML or JSON should probably be a separate function from the function for downloading files, though. (Also, DER is better for many things)

> in 2024, Google discontinues the possibility to submit RSS feeds for review to be included in Google News

This is not an issue having to do with web browsers, although it is related to the issues that do have to do with web browsers (not) handling RSS.

> in 2025, Google announces a change in their Chrome Root Program Policy that within 2026 they will stop supporting certificate with an Extended Key Usage that includes any usage other than server [...]; this effectively kills certificates commonly used for mutual authentication

While I think they should not have stopped supporting such certificates (whoever the certificate is issued to probably should better make their own decision), it is usually helpful to use different certificates for client authentication anyways, so this is not quite as bad as they say, although it is still bad.

(X.509 client authentication would also have many other benefits, which I had described several times in the past.)

> in 2021, Google tried to remove [alert(), prompt(), and confirm()], again citing “security” as reason, despite the proposed changes being much more extensive than the purported security threat, and better solutions being proposed

Another issue is blocking events and JavaScript execution (which can sometimes be desirable; in the case of frames it would be better to only block one frame, though), and modal dialog boxes potentially blocking other functions in the browser (which is undesirable). For the former case, there are other things that can be done, such as a JavaScript object that controls the execution of another JavaScript context, which could then be suspended like a generator function (without needing to be a generator function).

shadowgovt 4 hours ago | parent | prev | next [-]

I don't think I'm plugged into the side of the Internet that considers XML "the backbone of an independent web."

I think XML has some good features, but in general, infatuation with it as either a key representation or a key transmission protocol has waned over the years. Everything I see on the wire these days is JSON or some flavor of binary RPC like protobuf; I hardly ever see XML on the wire anymore.

zzo38computer an hour ago | parent [-]

XML is not so good for most of the things it was used for, and JSON has some problems too (I prefer DER), but Google is doing many bad things with WWW and not only things relating to XML, whether or not XML is good.

jll29 6 hours ago | parent | prev | next [-]

Let's all move to Ladybird next August.

recursive 5 hours ago | parent | next [-]

Have to get everyone off Windows first. If you can do that, switching to Ladybird should be easy.

GalaxyNova 5 hours ago | parent | prev | next [-]

the article doesn't say kind things about it..

pessimizer 5 hours ago | parent | prev [-]

Just in time for Apple to buy it.

ChrisArchitect 5 hours ago | parent | prev | next [-]

Related large discussion:

XSLT RIP

https://news.ycombinator.com/item?id=45873434

1vuio0pswjnm7 4 hours ago | parent | prev | next [-]

"The WHATWG aim is to turn the Web into an application delivery platform, a profit-making machine for corporations where the computer (and the browser through it) are a means for them to make money off you rather than for you to gain access to services you may be interested in."

"Such vision is in direct contrast with that of the Web as a repository of knowledge, a vast vault of interconnected documents whose value emerges from organic connections, personalization, variety, curation and user control. But who in the WHATWG today would defend such vision?"

"Maybe what we need is a new browser war. Not one of corporation versus corporation -doubly more so when all currently involved parties are allied in their efforts to enclose the Web than in fostering an open and independent one- but one of users versus corporations, a war to take back control of the Web and its tools."

It should be up to the www user, not the web developer, to determine how they prefer documents to appear on their screen

Contrast this with one or a few software programs, i.e, essentially a predetermined selection (no choice), that purport to offer all possible preferences to all www users, i.e., the so-called "modern" browser. These programs are distributed by companies that sell ad services and their business partners (Mozilla)

Documents can be published in a "neutral" format, JSON or whatever, and users can choose to convert this, if desired, to whatever format they prefer. This is more or less the direction the web has taken; however, at present the conversion is generally performed by web developers using (frequently obfuscated) JavaScript, intended to be outside the control of the user

Although from a technical standpoint, there is nothing that requires (a) document retrieval and (b) document display to be performed by the same program, commercial interests have tried to force users toward using one program for everything (a "do everything program")^1

When users run "do everything programs" from companies selling ad services and their business partners to perform both (a) and (b), they end up receiving "documents" they never requested (ads) and getting tracked

If users want such "do everything" corporate browsers, if they prefer "do everything programs", then they are free to choose them, but there should be other choices and it should be illegal to discriminate against other software as long as rules of "netiquette" are followed. A requirement to use some "do everything program" is not a valid rule

"There's more to the Internet than the World Wide Web built around the HTTP protocol and the HTML file format. There used to be a lot of the Internet beyond the Web, and while much of it still remains as little more than a shadow of the past, largely eclipsed by the Web and what has been built on top of it (not all of it good) outside of some modest revivals, there's also new parts of it that have tried to learn from the past, and build towards something different."

Internet subscribers pay a relatively high price for access in many countries

According to one RFC author, the www became "the new waist"

But to use expensive internet access only for "the web", especially a 100% commercial, obsessively surveilled one filled with ads, is also a "waste", IMHO

1. Perhaps the opposite of "do one thing well". America's top trillionaire wants to create another of these "do everything programs", one to rule them all. These "do everything programs" will always exist but they should never be the only viable options. They should never be "required"

rendall 3 hours ago | parent | prev | next [-]

> ...just in case the questionable “no politics” policies —which consistently prove to be weasel words for “we're right-wingers but too chicken to come out as such”— weren't enough to stay away from it.

I am sympathetic to the stance of the article, but this line really turned me off and made me wonder if I was giving the writer too much credit. This kind of "if you're not with me, then you suck" outlook is childish and off-putting.

I know it's hard for some terminally political people to understand, but some of us really, really think it's a strength to work with teammates who hold different opinions than our own.

jeffbee 4 hours ago | parent | prev | next [-]

"Nobody wants my nerd bullshit, part 42"

pessimizer 5 hours ago | parent | prev [-]

What you actually want is a web that isn't decided by the whims of massive monopolies, not XSLT. XSLT is not good. Google will not care that you don't comply and that you don't install their polyfill; it's vote-with-your-wallet, middle-class-style consumer activism. It's an illusion of control. If you don't eat the bugs, you'll starve, and then everyone is eating the bugs.

Try having an opposition party that isn't appointing judges like Amit Mehta. Or pardoning torturers, and people who engineered the financial crash, and people who illegally spied on everyone, etc., etc. But good luck with that, we can't even break up a frozen potato monopoly.