K0balt a day ago:
With a standardized diffusion model on the receiving end, plus a starting-point image (maybe 16x16 pixels) and a fixed seed, we could send images with tiny amounts of data. The client would decide the resolution (i.e. how much compute to dedicate) as well as whatever local flavor they wanted (display all images in the style of Monet…). Bandwidth could be minimized and the user experience deeply customized. We'd just be sending prompts lol. Styling, CSS, etc. could all receive similar treatment, using a standardized code-generating model and the prompt/seed that produces the desired code. We'd just need to figure out how to feed code into a model and have it spit out the prompt and seed that would regenerate that code in its forward-generation counterpart.
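A minimal sketch of the image side of this idea, assuming the Hugging Face `diffusers` library and a Stable Diffusion checkpoint standing in for the hypothetical "standardized" model. The wire payload is just a prompt and a seed; resolution and style are client-side choices, so the compute budget and the "Monet" flavor never touch the network:

```python
import torch
from diffusers import StableDiffusionPipeline

# What actually travels over the wire: a prompt and a fixed seed.
payload = {
    "prompt": "a red bicycle leaning against a brick wall",
    "seed": 42,
}

# Client-side preferences: how much compute to spend and what style to apply.
client_prefs = {
    "style": "in the style of Monet",
    "height": 512,
    "width": 512,
}

# The "standardized" model every client is assumed to share (assumption: any
# widely available checkpoint would do for the sketch).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fixed seed so every client starts from the same initial noise.
generator = torch.Generator("cuda").manual_seed(payload["seed"])

image = pipe(
    f'{payload["prompt"]}, {client_prefs["style"]}',
    height=client_prefs["height"],
    width=client_prefs["width"],
    generator=generator,
).images[0]
image.save("reconstructed.png")
```

Caveat: even with a fixed seed, outputs can drift across library versions, samplers, and hardware, which is part of why the standardization problem below is the hard bit.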
pitched a day ago (reply):
To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries. | ||||||||