promiseofbeans a day ago

I guess it ultimately comes down to a trade-off: do the heavy lifting yourself and pay a little more for compute and bandwidth, or offload it to clients, which wastes more energy but costs the developer less. I think there are environmental arguments in both directions (more energy spent computing things on the client vs more energy spent sending pre-computed assets over the network). I'm not sure which is better ultimately - I suppose it varies case by case.

dcuthbertson a day ago | parent | next [-]

First, I really like the effect the author has achieved. It's very pretty.

Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words use far less bandwidth. What if we went full-tilt down the energy-saving path and replaced some images with prose that describes them? What would articles and blog posts look like then?

I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like the idea of imagining what that kind of web might look like.

K0balt a day ago | parent [-]

With a standardized diffusion model on the receiving end, plus a starting-point image (maybe 16x16 pixels) and a fixed seed, we could send images with tiny amounts of data. The client would decide the resolution (i.e. how much compute to dedicate) as well as whatever local flavor they wanted (display all images in the style of Monet…). Bandwidth could be minimized and the user experience deeply customized.
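
Roughly, the receiving end could look something like this - a minimal sketch assuming the diffusers img2img pipeline and an arbitrary example model, and hand-waving away the fact that exact determinism across different hardware is its own problem:

    # Hypothetical client-side "decoder": the payload is just a prompt, a seed,
    # and a tiny 16x16 starting image. The model ID below is an arbitrary
    # example, not an agreed-upon standard.
    import io
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def decode_image(prompt, seed, tiny_png, resolution=512, style_suffix=""):
        # Upscale the 16x16 starting point to whatever resolution the client
        # is willing to spend compute on.
        base = Image.open(io.BytesIO(tiny_png)).convert("RGB")
        base = base.resize((resolution, resolution))
        # Fixed seed, so every client running the same model gets the same output.
        generator = torch.Generator(device="cuda").manual_seed(seed)
        # Local flavor ("..., in the style of Monet") is appended client-side.
        result = pipe(prompt=prompt + style_suffix, image=base,
                      strength=0.75, generator=generator)
        return result.images[0]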

We’d just be sending prompts lol. Styling, CSS, etc. could all receive similar treatment, using a standardized code-generating model and the prompt/seed that generates the desired code.

We'd just need to figure out how to feed code into a model and have it spit out the prompt and seed that would regenerate that code in its forward-generation counterpart.
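
As a toy illustration of how hard that inversion is, the naive approach is just a search over seeds against the decoder sketched above - purely illustrative, since real prompt/seed inversion is an open problem and decode_image is the hypothetical decoder from the sketch above:

    # Naive "encoder" for the inversion problem: brute-force a seed whose
    # output best matches the target. Illustrative only - a real scheme would
    # need far more than a seed search.
    import numpy as np

    def encode_image(target, prompt, tiny_png, tries=64):
        best_seed, best_err = 0, float("inf")
        for seed in range(tries):
            candidate = decode_image(prompt, seed, tiny_png)
            candidate = candidate.resize(target.size)
            err = np.mean((np.asarray(candidate, dtype=float) -
                           np.asarray(target, dtype=float)) ** 2)
            if err < best_err:
                best_seed, best_err = seed, err
        return {"prompt": prompt, "seed": best_seed, "base": tiny_png}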

pitched a day ago | parent [-]

To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries.

K0balt 18 hours ago | parent [-]

I mean, yeah, but here we’re talking about a knowledge-based compression standard, so I would assume that a specific model would be chosen.

The interesting thing here is that the model wouldn’t have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.

That end result could then act as the input to the user's custom model, which would add the user-specific adjustments, but presumably the input image would be a strong enough influence to guide the end product to be equivalent in meaning, if not in style.

Effectively, this could be lossless compression for data that a model can reproduce exactly from a specific prompt and seed, and lossy compression for everything else.
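
Concretely, the payload might look something like this - a sketch with invented field names, where decode_image is the hypothetical decoder from my earlier comment; prompt and seed alone when the model reproduces the data, plus a stored residual when it doesn't and you still want the original bits back:

    # Sketch of a "knowledge-based compression" payload. All names are
    # illustrative, not a real format.
    from dataclasses import dataclass
    from typing import Optional
    import numpy as np
    from PIL import Image

    @dataclass
    class ModelCompressedAsset:
        model_id: str                     # the agreed-upon standard model
        prompt: str
        seed: int
        base_image: bytes                 # the tiny 16x16 starting point
        residual: Optional[bytes] = None  # per-pixel diff vs. the original

    def decompress(asset):
        generated = decode_image(asset.prompt, asset.seed, asset.base_image)
        if asset.residual is None:
            return generated              # lossy: accept whatever the model made
        # Lossless: add back the stored difference (mod 256) between the
        # original pixels and the model's output.
        gen = np.asarray(generated, dtype=np.uint8)
        diff = np.frombuffer(asset.residual, dtype=np.uint8).reshape(gen.shape)
        return Image.fromarray(gen + diff)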

It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly, using several orders of magnitude less power (and physical size) than is currently required.

But that will require NAND-scale, room-temperature thermodynamic wells or die-scale micro-cryogenic coolers. Both are a bit of a stretch, but they're engineering problems rather than anything out of bounds with known physics.

The real question is whether or not thermodynamic wells will be able to scale, and especially whether we can get them working at room temperature.

pavlov a day ago | parent | prev [-]

I’m pretty sure the radio on a mobile device consumes more energy than the GPU doing a 2D operation on a single image.

If you want to save energy, send less data.