schobi a day ago

I really like the aesthetics, even if they're physically wrong at the edges. Thanks for sharing the details.

As an embedded developer, I feel this is kind of wasteful. Every client computes an "expensive" blur filter, over and over again? Just to blend in a blurred version of the background image?

I know - this is using the GPU, this is optimized. In the end, it shouldn't cost much. (Is it really?)

<rant> I feel the general trend in current web development is too much bloat. Simple sites take 5 seconds to load? Heavy lifting on the client? </rant>... but that's not the author's fault

pavlov a day ago | parent | next [-]

I guess everybody has their own preconceptions of what's wasteful.

I grew up in the era of 14.4k modems, so I'm used to thinking that network bandwidth is many, many orders of magnitude more scarce and valuable than CPU time.

To me, it's wasteful to download an entire image over the Internet if you can easily compute it on the client.

Think about all the systems you're activating along the way to download that image: routers, servers, even a disk somewhere far away (if it's not cached on the server)... All that just to avoid one pass of processing on data you already had in RAM on the client.

gary_0 a day ago | parent | next [-]

"Mips – processing cycles, computer power – had always been cheaper than bandwidth. The computers got cheaper by the week and the phone bills stayed high by the month." - The Star Fraction, 1995

gfody a day ago | parent [-]

Each visitor brings their own CPU to do this work, whereas the server's bandwidth is finite.

cj a day ago | parent [-]

I'm confused though.

If the goal is to optimize for server bandwidth, wouldn't you still want to send the already-blurred photo? Surely that will be smaller than the original full-res photo (while also reducing client-side CPU/OS requirements).

pitched a day ago | parent [-]

We don’t know the aspect ratio of the client window beforehand, and on the web there are a lot of possibilities! So if a pre-blurred image is meant to peek out around the edges, those edge widths are dynamic. Otherwise, a low-res blurred image plus high-res non-blurred edges might use less bandwidth, if the overhead is low enough.

ttfkam a day ago | parent | prev | next [-]

I have the same perspective regarding bandwidth, but I also consider any client to be running on a computer at least ten years old and at least three OS revisions behind.

I like to consider myself a guest on a client's CPU, GPU, and RAM. I should not eat all their food, leave an unflushed turd in their toilet, or hog the remote control. Be the kind of thoughtful guest who gets invited back.

Load fast, even when cell coverage is marginal. Use little memory, so the system doesn't grind to a halt from swapping. Animate judiciously, because it's polite. Use good algorithms, because everyone notices when their cursor becomes jerky.

pdimitar a day ago | parent | prev [-]

Okay, but how do you compute an image? How would your browser -- or any other client software -- know what the hero image of a blog you've never visited before looks like, for example?

I feel like I am missing something important in your comment.

highwind a day ago | parent [-]

The article describes a computational method of rendering a frosted-glass effect. You could achieve the same thing by rendering the effect once (then uploading it to a server) and having the client download the rendered image. Or you can compute the frosted-glass effect on the client. Which is better? That's the argument.
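
For concreteness, here is a minimal sketch of the client-side route, assuming the effect is built on CSS backdrop-filter (the article's exact recipe may differ); the selector and blur radius are made up:

    // Hypothetical example: frost a panel on the client instead of
    // shipping a pre-blurred copy of the background image.
    const panel = document.querySelector<HTMLElement>(".hero-panel");
    if (panel) {
      // The browser blurs whatever is painted behind the element,
      // so only the original (non-blurred) image goes over the wire.
      panel.style.setProperty("backdrop-filter", "blur(12px)");
      panel.style.setProperty("-webkit-backdrop-filter", "blur(12px)"); // older Safari
      panel.style.setProperty("background", "rgba(255, 255, 255, 0.3)");
    }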

thoughtpalette 21 hours ago | parent | next [-]

It's like people forgot what graceful degradation and progressive enhancement are.

pdimitar a day ago | parent | prev [-]

Ah, sorry, I didn't make it that far into the article.

IMO it really depends on the numbers. I'd be OK with my client downloading 50 KB of extra data for the already-rendered image, but I'll also agree that from 100 KB and up it is kind of wasteful and should be computed instead.

With the modern computing devices we all have -- including in third-world countries, where a cheap Android phone can still do a lot -- I'd say we should default to computation.

ericmcer an hour ago | parent | prev | next [-]

This is wasteful and can actually cause perf issues if used really heavily.

I worked on a large application where the design team decided everything should be frosted. Text over images, buttons, icons: everything had a heavy background blur, and mobile devices would just die when scrolling at any speed.

vasco a day ago | parent | prev | next [-]

Most of those websites that are technically "wasteful" in some ways are way more "wasteful" when you realize what we use them for. Mostly it's pure entertainment.

So either entertainment is wasteful, or if it's not, spending more compute to make the entertainment better is OK.

klabb3 a day ago | parent [-]

I would say most websites are wasteful wrt the customer, which is usually advertisers. There are websites where the user is the customer, but they’re rare these days.

krsdcbl a day ago | parent | prev | next [-]

I would argue that while it _feels_ wasteful to us humans, as we perceive it as a "big recomputation of the rendered graphics", technically it's not.

The redrawing of anything that changes in your UI requires GPU computation anyway, and a simple blur is quite efficient to add. It's likely less expensive than any kind of animation of DOM objects that aren't optimized as GPU layers.

Additionally, seeing how even the simplest sites nowadays tend to load 1+ MB of JS and trackers galore, all eating at your CPU resources, I'd put that bit of blur for aesthetics very far down the "wasteful" list.

refulgentis 21 hours ago | parent [-]

I generally agree - the caveat is that it holds for some values of "some simple blur", and the one described in the article is not one of them, in my book.

For reference, for every pixel in the input we need to average roughly 3x^2 pixels, where the 3 is really pi and x is the blur radius.

This blows up quite quickly. Not enough that my $5K MacBook really breaks a sweat with this example, but GPU performance is one of the most insidious things a dev can accidentally forget isn't so great on other people's devices.
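
A back-of-the-envelope sketch of that blow-up (the frame size and radii below are illustrative, not measurements):

    // Back-of-the-envelope cost of a naive (non-separable) blur:
    // each output pixel averages roughly pi * r^2 input pixels.
    const frame = 1920 * 1080; // pixels in a full-HD backdrop
    for (const r of [4, 16, 64]) {
      const samplesPerPixel = Math.PI * r * r;
      const totalSamples = frame * samplesPerPixel;
      console.log(
        `radius ${r}: ~${Math.round(samplesPerPixel)} samples/pixel, ` +
        `~${(totalSamples / 1e9).toFixed(1)} billion samples per frame`,
      );
    }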

ktpsns a day ago | parent | prev | next [-]

Isn't sending both the blurred and the non-blurred picture over the network the way we've done it for two decades in web dev? With (many!) high-resolution pictures this is definitely less performant than a local computation, given that real networks have finite bandwidth, in particular for mobile clients in spots with bad wireless coverage. It is astonishing what can be done with CSS/WebGL alone these days. We needed a lot of hacks and workarounds for that in the past.

djmips a day ago | parent | next [-]

A blurred image shouldn't add very much on top of the high-resolution image, considering its information content is much smaller.

pdimitar a day ago | parent | prev [-]

I don't have much data myself, but when I was doing scraping some time ago I had thousands of examples where, for example, the full-res image was something like 1.7 MB and the blurred image was in the range of 70 KB - 200 KB, so more or less 7% - 11% of the original. And I might be misremembering (it's been a while), but I believe at least 80% of the blurred images were 80 KB or less.

Technically, yes, you could make some savings, but since the images were transferred over an HTTP/1.1 keep-alive connection, I don't feel it was such a waste.

Would love to get more data if you have it; it's just that from the limited work I did in the area, it did not feel worth it to download only the high-res image and do the blur yourself... especially in scenarios where you need just the blurred image + dimensions first, in order to prevent the constant annoying visual reflow as images are downloaded -- something _many_ websites suffer from even today.

mcdeltat 10 hours ago | parent | prev | next [-]

IMO it is time to seriously realise that most of this "ooh looks cool, surely I/we need that" tech has no place in this world. Whether or not the act itself is wasteful (although it generally is in tech...), the thought process itself indicates a bigger problem with society. Why do we need this thing? Why do we consider being without the thing to be bad? Like seriously, at the scale of issues in society today, who cares if your UI panel is blurred or not?

RicoElectrico a day ago | parent | prev | next [-]

As per the central limit theorem, one can approximate a Gaussian with a repeated convolution with almost any kernel, a box blur being the most obvious candidate here. And a box blur can be computed quickly with a summed area table.
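
A minimal sketch of the idea in one dimension, using a running sum rather than a full summed area table; the radius and sample data are arbitrary:

    // Sliding-window box blur over one row: O(n) work regardless of radius.
    // Repeating it ~3 times approximates a Gaussian (central limit theorem).
    function boxBlurRow(row: Float32Array, radius: number): Float32Array {
      const n = row.length;
      const out = new Float32Array(n);
      const windowSize = 2 * radius + 1;
      let sum = 0;
      // Prime the running sum with the first window (edges clamped).
      for (let i = -radius; i <= radius; i++) {
        sum += row[Math.min(n - 1, Math.max(0, i))];
      }
      for (let i = 0; i < n; i++) {
        out[i] = sum / windowSize;
        // Slide the window: add the entering sample, drop the leaving one.
        const entering = Math.min(n - 1, i + radius + 1);
        const leaving = Math.max(0, i - radius);
        sum += row[entering] - row[leaving];
      }
      return out;
    }

    // Three passes per row (and likewise per column) approximate a Gaussian blur.
    const someRow = new Float32Array([0, 10, 20, 255, 40, 30, 20, 10]);
    const blurred = boxBlurRow(boxBlurRow(boxBlurRow(someRow, 2), 2), 2);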

jcelerier a day ago | parent [-]

> a repeated convolution

I really wonder what the frame of reference for "quickly" is there. To me, convolution is one of the last-resort techniques in signal processing given how expensive it is (O(size of input data * size of convolution kernel)). It's of course still much faster than a true Gaussian blur, which is non-trivial to manage at even a barely decent 120 fps on huge Nvidia GPUs, but still.

pitched a day ago | parent [-]

How are we supposed to think about SIMD in Big-O? Because this is still linear time if the kernel width is less than the max SIMD width (which is 16 I think on x64?)

promiseofbeans a day ago | parent | prev | next [-]

I guess ultimately it's a trade-off between doing the heavy lifting yourself, paying a little more for compute and bandwidth, or offloading it to clients, wasting more energy but at lower cost to the developer. There are environmental arguments in both directions (more energy spent computing things on the client vs. more energy spent sending pre-computed assets over the network). I'm not sure which is better - I suppose it varies case by case.

dcuthbertson a day ago | parent | next [-]

First, I really like the effect the author has achieved. It's very pretty.

Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words use far less bandwidth. What if we went full-tilt down the energy-saving path and replaced some images with prose describing them? What would articles and blog posts look like then?

I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like the idea of imagining what that kind of web might look like.

K0balt a day ago | parent [-]

With a standardized diffusion model on the receiving end, and a starting-point image (maybe 16x16 pixels) with a fixed seed, we could send images with tiny amounts of data, with the client deciding the resolution (i.e. how much compute to dedicate) as well as whatever local flavor they wanted (display all images in the style of Monet…). Bandwidth could be minimized and the user experience deeply customized.

We’d just be sending prompts, lol. Styling, CSS, etc. could all receive similar treatment, using a standardized code-generating model and the prompt/seed that generates the desired code.

Just need to figure out how to feed code into a model and have it spit out the prompt and seed that would generate that code in its forward generation counterpart.

pitched a day ago | parent [-]

To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries.

K0balt 18 hours ago | parent [-]

I mean, yeah, but here we’re talking about a knowledge-based compression standard, so I would assume that a specific model would be chosen.

The interesting thing here is that the model wouldn’t have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.

That end result could then act as the input to the user’s custom model, which would add the user-specific adjustments, but presumably the input image would be a strong enough influence to guide the end product to be equivalent in meaning, if not in style.

Effectively, this could be lossless compression, but only for data that could be produced by a model given a specific prompt and seed, or lossy compression for other data.

It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly on several orders of magnitude less power (and physical size) than is currently required.

But that will require NAND-scale, room-temperature thermodynamic wells or die-scale micro-cryogenic coolers. Both are a bit of a stretch, but they're only engineering problems rather than out of bounds with known physics.

The real question is whether or not thermodynamic wells will be able to scale, and especially whether we can get them working at room temperature.

pavlov a day ago | parent | prev [-]

I’m pretty sure the radio on a mobile device consumes more energy than the GPU doing a 2D operation on a single image.

If you want to save energy, send less data.

smusamashah a day ago | parent | prev | next [-]

I recently had a shower thought that the bigger you go, the more energy you need to do computation. As in, you could make a computer out of moving planets. On the other hand, you could go small and make a computer out of a tiny particle. Both scales achieve the same result, but at very different costs.

mock-possum a day ago | parent [-]

There is a sci-fi series that I am absolutely blanking on that features that concept - I remember a few characters each having access to a somewhat godlike ability to manipulate physics and using it to restructure the universe to create computers that augment their own capabilities - definitely some planetary stuff and some quantum/atomic-level stuff... hmm, maybe GPT can help.

heatmiser 20 hours ago | parent [-]

Would it happen to be the "Zones of Thought" series by Vernor Vinge?

mock-possum 12 hours ago | parent [-]

Ooh no it is not, but I am coincidentally working my way through the third book in that series!

fidotron a day ago | parent | prev | next [-]

Tbh I think people radically underestimate how fast, and how efficient, GPUs are. The Apple Watch has physically based rendering in its UI. It would be interesting to compare the actual cost of that versus using a microcontroller to update a framebuffer pushed to a display via SPI.

I did some WebGL nonsense like https://luduxia.com/showdown/ and https://luduxia.com/whichwayround/ . These run an experimental custom renderer with DoF, subsurface scattering, and lots of other oddities. You are not killed by calculation but by memory access, though how to reduce that in blur operations is well understood.

What there is not is semi-transparent objects occluding each other, because that becomes a sorting nightmare and you would end up having to resolve a whole lot of dependencies dynamically. (Unless you restrict the blending modes.) Implementing that in the context of widgets that move on a 2D plane with z-index sorting is enormously easier than in a 3D scene, though.
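
For what it's worth, a common way to cut that memory traffic is to blur a downsampled copy and stretch it back up. A rough sketch, assuming the 2D canvas filter property is available (support varies) and using an illustrative quarter-resolution factor:

    // Hypothetical sketch: blur at quarter resolution, then scale back up.
    // A blur's output is low-frequency anyway, so the quality loss is small
    // while the number of pixels read and written drops by roughly 16x.
    function cheapBlur(source: HTMLCanvasElement | HTMLImageElement,
                       radius: number): OffscreenCanvas {
      const w = Math.max(1, Math.floor(source.width / 4));
      const h = Math.max(1, Math.floor(source.height / 4));

      // Downsample first: only 1/16th of the pixels need to be blurred.
      const small = new OffscreenCanvas(w, h);
      small.getContext("2d")!.drawImage(source, 0, 0, w, h);

      // Blur the small copy; the radius shrinks with the resolution.
      const out = new OffscreenCanvas(w, h);
      const ctx = out.getContext("2d")!;
      ctx.filter = `blur(${radius / 4}px)`;
      ctx.drawImage(small, 0, 0);

      // Draw `out` back at full size, e.g. drawImage(out, 0, 0, fullW, fullH).
      return out;
    }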

tyleo 18 hours ago | parent | prev [-]

Did my site take > 5 seconds to load?

I put a lot of effort into minimizing content. The images are orders of magnitude larger than the page content, but they should load async. Other assets barely break 20 kB in total, aside from the font (100 kB), which should also load async.