dcuthbertson a day ago
First, I really like the effect the author has achieved. It's very pretty. Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words uses far less bandwidth. What if we went full-tilt down the energy-saving path and replaced some images with prose describing them? What would articles and blog posts look like then? I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like imagining what that kind of web might look like.
K0balt a day ago | parent
With a standardized diffusion model on the receiving end, a starting-point image (maybe 16x16 pixels), and a fixed seed, we could send images with tiny amounts of data. The client would decide the resolution (i.e., how much compute to dedicate) and apply whatever local flavor it wanted (display all images in the style of Monet…). Bandwidth could be minimized and the user experience deeply customized. We'd just be sending prompts lol. Styling, CSS, etc. could all get similar treatment, using a standardized code-generating model and the prompt/seed that produces the desired code. You'd just need to figure out how to feed code into a model and have it spit out the prompt and seed that would regenerate that code in its forward-generation counterpart.
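For fun, here's roughly what the "decoder" side could look like with today's tooling (a minimal sketch assuming the HuggingFace diffusers library; the payload format, model choice, and field names are all made up for illustration):

    import torch
    from diffusers import StableDiffusionPipeline

    # Hypothetical wire format: the server sends only this, not pixels.
    payload = {
        "prompt": "a lighthouse at dusk, oil painting",
        "seed": 42,  # fixed seed so every client starts from the same noise
    }

    # Client-side knobs: local style flavor and compute budget.
    local_style = "in the style of Monet"
    resolution = 512          # client picks how many pixels to spend
    steps = 20                # fewer steps = less energy, lower fidelity

    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"  # stand-in for a "standardized" model
    ).to(device)

    generator = torch.Generator(device).manual_seed(payload["seed"])
    image = pipe(
        f'{payload["prompt"]}, {local_style}',
        generator=generator,
        height=resolution, width=resolution,
        num_inference_steps=steps,
    ).images[0]
    image.save("decoded.png")

The catch is determinism: the same prompt and seed aren't guaranteed to render identical pixels across GPUs, drivers, and library versions, so a real protocol would need a pinned reference model and implementation.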