osigurdson 2 days ago

Sometimes I think we should go back to basics: pixels, data, latency / bandwidth. The web is an optimization problem in the sense that we want to render correct pixels at perceptual speed given latency and bandwidth constraints.

It should be more like: what pixels is the user about to see? What data is needed to set those pixels? What data is likely needed next, so it can be optimistically pre-fetched - something like that.
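That idea can be sketched concretely. The following is a hypothetical illustration in Rust (the function and struct names, and the fixed-height-row layout, are my own assumptions, not anything from the thread): given a scroll position and viewport size, compute exactly which data rows are needed to paint the current pixels, plus an optimistic prefetch window for what the user is likely to see next.

```rust
// Hypothetical sketch: map a viewport onto the data rows it needs,
// plus a prefetch window for what the user is likely to scroll into.

#[derive(Debug, PartialEq)]
struct FetchPlan {
    visible: std::ops::Range<usize>,  // rows needed to paint current pixels
    prefetch: std::ops::Range<usize>, // rows likely needed next
}

fn plan_fetch(scroll_px: usize, viewport_px: usize, row_px: usize, total_rows: usize) -> FetchPlan {
    let first = scroll_px / row_px;
    // Round up so a partially visible last row is still fetched.
    let last = ((scroll_px + viewport_px + row_px - 1) / row_px).min(total_rows);
    // Optimistically prefetch one viewport's worth of rows past the visible range.
    let prefetch_end = (last + viewport_px / row_px).min(total_rows);
    FetchPlan { visible: first..last, prefetch: last..prefetch_end }
}

fn main() {
    // 600px viewport, 20px rows, scrolled 100px into a 10,000-row list:
    // fetch rows 5..35 now, prefetch 35..65.
    let plan = plan_fetch(100, 600, 20, 10_000);
    println!("{:?}", plan);
}
```

Nothing outside the two ranges is requested - no over-fetch, no under-fetch.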

timschmidt 2 days ago | parent | next [-]

In https://github.com/timschmidt/alumina-ui I've been building for WASM using the egui toolkit which just accepts an HTML canvas the size of the browser and starts shouting WebGL at it.

I get to forget about HTML, JavaScript, CSS, and most of the complication of the browser and web, and just write an application in my favorite language which will run fast and deliver GL-accelerated graphics to its users.

I am really happy with WASM / WebGL for being abstractions which allow for this.

sneak 2 days ago | parent | next [-]

I was wondering how long it would take for the web to get to this point. It seemed inevitable once canvas became a thing. You don’t even technically need gl or wasm to rewrite an entire rendering engine in js that can just blast pixels at the full-size canvas.
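For illustration, the "blast pixels at the canvas" approach needs nothing but a byte buffer. Here is a minimal, self-contained Rust sketch (my own example, not code from alumina-ui) of a pure-software renderer filling an RGBA framebuffer - the kind of buffer a WASM module can hand to a full-size canvas via `putImageData`, no GL required:

```rust
// Illustrative sketch: software rendering into an RGBA framebuffer.
// In a WASM build this buffer would be copied to a <canvas> each frame.

fn clear(fb: &mut [u8], w: usize, h: usize, rgba: [u8; 4]) {
    // Each pixel is 4 bytes (R, G, B, A).
    for px in fb.chunks_exact_mut(4).take(w * h) {
        px.copy_from_slice(&rgba);
    }
}

fn fill_rect(fb: &mut [u8], w: usize, x0: usize, y0: usize, x1: usize, y1: usize, rgba: [u8; 4]) {
    for y in y0..y1 {
        for x in x0..x1 {
            let i = (y * w + x) * 4;
            fb[i..i + 4].copy_from_slice(&rgba);
        }
    }
}

fn main() {
    let (w, h) = (320, 240);
    let mut fb = vec![0u8; w * h * 4];
    clear(&mut fb, w, h, [16, 16, 16, 255]);          // dark background
    fill_rect(&mut fb, w, 10, 10, 50, 30, [255, 0, 0, 255]); // red rectangle
    // A real app would now hand `fb` to the canvas and loop.
}
```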

Cool that you’re creating an actual desktop-style gl app with it.

timschmidt 2 days ago | parent [-]

I'm looking forward to the day when Javascript can be just another WASM polyfill loaded with the page with a well-defined, portable, and fast API to the DOM. Also can't wait for WebGPU to be adopted in more places. Right now I rely on WebGL2 because WebGPU isn't available in Firefox/Linux stable by default.

Another minor annoyance is that 'cargo bloat' and similar tools don't yet have backends for WASM, so I need to fix up the native build to make use of that sort of analysis. I'd like that, because I serve the whole application from microcontroller flash, where I only have 4-16 MB to hold application and firmware, including the HTTP server and network stack.

sneak 2 days ago | parent [-]

I don't. I don't want the web to become just a runtime for opaque binary applications. Right now we still maintain some semblance of user control over the DOM and the application; when everything is WASM and WebGPU using its own custom renderer, the ability of people to casually inspect, modify, hack, tinker, and tweak will be completely over.

I learned how to build for the web by looking at webpages and seeing how they worked. What you're describing is the switch from circuit boards of discrete components to everything in a single microscopic IC under a blob of epoxy.

timschmidt 2 days ago | parent | next [-]

I recognize it as a more or less universal optimization process. Everything important ends up in silicon, if it remains important long enough. And there are several stages of that.

Like you, a part of me yearns for SGML and the Semantic Web, and where it makes sense in Alumina I am using RESTish APIs.

But I do not pine for Javascript. Language or frameworks. 20 years has been enough. And I do not believe that some of the 4mb minified obfuscated js downloads from a CDN I've seen are in any way more accessible than a WASM binary of similar size. At least there is advanced tooling for the WASM representation. Folks who want you to read their code will still make it easy, and folks who don't won't.

To your point about learning, WASM makes some really incredible systems like https://lovr.org/ available in the browser, which I would have loved to have as a kid, but which also aren't Javascript. It's related to Turing completeness somehow: once you let a little software into your structured document, it all wants in. Or maybe it's "if you give a mouse a cookie"?

I understand what you're saying about discoverability. But the developer in me really likes having a relatively simple binary interpreter to target for network-delivered cross-platform binary applications. My hope is that offering such a juicy target to the folks who want to develop applications (including myself) gets them out of all the semantic document features, allowing both sets of code to get simpler and more focused. Because I think they're two different, both entirely valid, tasks folks use the web for.

osigurdson 2 days ago | parent | prev [-]

If you look at a typical next.js type app with all of the build steps, minification, post-css, tree-shaking, client side / server side, etc., it isn't exactly optimized for reverse engineering. Maybe that ship has already sailed.

osigurdson 2 days ago | parent | prev [-]

This! Love it!

timschmidt 2 days ago | parent [-]

Thank you. There is still a lot of work left to get it controlling its first machine. I am currently figuring out how best to build for all the MCU and board combinations I expect to support, and then wiring up motion control to the UI. Just got the node-graph interface for csgrs working, and there's still a lot left to flesh it out fully. Advancing slowly :D

Please consider joining the Discord: https://discord.com/invite/cCHRjpkPhQ

I'm the only one in there at the moment. Bring friends! lol

singinwhale 2 days ago | parent | prev | next [-]

Thinking about the user seeing pixels is seeing just part of the picture. As in all software projects, you don't optimize just for immediate user experience but also for development time. Time to first draw rarely coincides with development time.

osigurdson 2 days ago | parent | next [-]

Dealing with over-fetch / under-fetch most certainly does chew up development time. Creating endless API endpoints that are only used by the web UI is also very time consuming (the industry is finally recognizing how dumb this is and moving back to older SSR approaches). How about: "the user is allowed to access this information on the server - system, you figure out what / when to fetch portions of it for viewing"?
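One way to read that proposal is a single generic handler replacing many bespoke endpoints: the client declares which slice of which resource it needs, and the server checks access and returns exactly that slice. A hypothetical Rust sketch (all names and the in-memory "database" are my own assumptions for illustration):

```rust
// Hypothetical sketch: one declarative query handler instead of an
// endpoint per widget. The server enforces access; the client only
// states what slice of which resource the viewport needs.

use std::collections::HashMap;

struct Query<'a> {
    resource: &'a str,
    range: std::ops::Range<usize>, // the slice the viewport actually needs
}

fn fetch_slice<'a>(
    db: &'a HashMap<&str, Vec<String>>, // stand-in for the server's data store
    allowed: &[&str],                   // resources this user may read
    q: &Query,
) -> Result<&'a [String], &'static str> {
    if !allowed.contains(&q.resource) {
        return Err("forbidden");
    }
    let rows = db.get(q.resource).ok_or("not found")?;
    let end = q.range.end.min(rows.len());
    Ok(&rows[q.range.start.min(end)..end])
}

fn main() {
    let mut db = HashMap::new();
    db.insert("orders", (0..100).map(|i| format!("order-{i}")).collect::<Vec<_>>());
    let q = Query { resource: "orders", range: 5..8 };
    let slice = fetch_slice(&db, &["orders"], &q).unwrap();
    println!("{:?}", slice);
}
```

The access rule lives in one place on the server, and the what/when of fetching is driven by the range the UI asks for, rather than by hand-written per-view endpoints.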

redwall_hp 2 days ago | parent | prev [-]

And I don't care about pixels; I want to see structured documents. How I choose to view those is my business.

osigurdson 2 days ago | parent [-]

Got it, you don't care about how the sausage is made. That's totally fine.

networked 2 days ago | parent [-]

It doesn't sound like this is what redwall_hp meant; it sounds like what they meant was that pixels are the wrong level of abstraction. If so, I agree: the web is about marked-up text. When you break it down to the very basics, what you get is a teletype, not raster.

osigurdson a day ago | parent [-]

I'd say that is more like a terminal than a browser. Regardless, what I really mean is some data is needed for the pixels (or whatever abstraction you want above that). You don't need data for anything other than the current viewport and whatever is likely needed next. No less, no more. No over or under fetching. This would be much better imo.
