| ▲ | stmw 4 days ago |
| It's nice to see this. Things used to be simple! (XSLT itself should've been simpler, of course.) BTW, as I commented on earlier HN threads re: removal of XSLT support from the HTML spec and browsers, IBM owns a high-performance XSLT implementation that they may want to consider contributing to one or more browsers. (It is a JIT compiler that generates machine code directly from XSLT and several other data transformation and policy languages, and then executes it.) |
|
| ▲ | mananaysiempre 4 days ago | parent | next [-] |
| > XSLT itself should've been simpler of course I don’t think XSLT 1.0 is all that complicated. It’s a declarative language for transforming trees into trees or strings in an arbitrary manner, and you could argue that problem statement is too wide, but if you’re OK with it then I don’t think there’s any inappropriate complexity in the solution. The surface syntax is bad, true, but that’s because of its verbosity, not its complexity. Once you put a bit of an Algol veneer on it[1], it becomes perfectly palatable. [1] https://juniper.github.io/libslax/slax-manual.html |
|
| ▲ | bawolff 4 days ago | parent | prev [-] |
| I think it would be very unlikely that browsers would use a JIT engine for XSLT. They are removing it because they are afraid of the security footprint, and a JIT engine would make that footprint much worse. |
| |
| ▲ | dpassens 4 days ago | parent | next [-] | | Not necessarily. The security issues are with the libxml implementation, a different one might be more secure even with JIT. That's part of what makes the whole situation so ridiculous. | | |
| ▲ | whizzter 4 days ago | parent | next [-] | | Still, from a security perspective, considering the low number of sites that use it, I think a better solution would be to implement it with a JS shim like PDF.js. JS is already required to have an XML DOM parser, so a universal XSLT engine in JS should be a low-effort way to continue supporting XSLT. As for performance, the transform could probably be eval'ed and cached to JS snippets so that they in turn become JIT-compiled. https://developer.mozilla.org/en-US/docs/Web/API/DOMParser | |
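The "eval'ed and cached" idea whizzter describes could look roughly like this (a toy sketch in plain JavaScript, not a real XSLT engine; every name here is made up, and the "transform" is a trivial stand-in for JS that a real shim would emit from a parsed stylesheet):

```javascript
// Sketch: compile a transform's source to a JS function once, cache it by
// source text, and reuse the same function object on later calls so the JS
// engine's own JIT can optimize the hot compiled code.
const compiledCache = new Map();

function getCompiledTransform(transformSource) {
  let fn = compiledCache.get(transformSource);
  if (fn === undefined) {
    // A real shim would generate this JS from a parsed XSLT stylesheet;
    // here the cached source IS the generated code, to stay self-contained.
    fn = new Function("input", transformSource);
    compiledCache.set(transformSource, fn);
  }
  return fn;
}

// Toy "generated transform": uppercase every value in a flat object.
const toySource = `
  const out = {};
  for (const [k, v] of Object.entries(input)) out[k] = String(v).toUpperCase();
  return out;
`;

const t1 = getCompiledTransform(toySource);
const t2 = getCompiledTransform(toySource); // cache hit: same function object
console.log(t1 === t2);                     // true
console.log(t1({ title: "hello" }).title);  // "HELLO"
```

The point of the cache is that repeated applications of the same stylesheet hit one long-lived function, which is exactly the shape JS JITs reward.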
| ▲ | bawolff 4 days ago | parent | prev | next [-] | | Whether or not it is actually secure, as a factual matter, has nothing to do with its security footprint. | |
| ▲ | afavour 4 days ago | parent | prev [-] | | Emphasis on might be. Finding out whether it actually is secure is not a trivial process. | | |
| ▲ | troupo 4 days ago | parent [-] | | There have been multiple CVEs over the past few years in the Chrome-only non-standards that Chrome spits out by the hundreds. They have no issues releasing those, supporting them, and fixing them. Somehow they have an issue with supporting, fixing (and updating to the latest version) this particular one. Possibly because it doesn't result in promotions. | | |
| ▲ | afavour 4 days ago | parent [-] | | This often ends up as an anti-Google argument, but all the major browser makers agree with this deprecation proposal. |
|
|
| |
| ▲ | stmw 4 days ago | parent | prev [-] | | I don't think that follows, esp. when we're talking about a mature, actively commercially maintained JIT engine. | | |
| ▲ | bawolff 3 days ago | parent [-] | | Why not? JIT engines are inherently risky. They are great for performance but terrible for security. | | |
| ▲ | stmw 3 days ago | parent [-] | | Briefly, because overall risk = new inherent risk / (architecture * security reputation * ongoing maintenance investment). And that's even without arguing over whether JIT engines are inherently risky, or whether they add much risk given that the modern computing environment is full of them, from graphics to JavaScript. |
|
|
|