tgv | 7 days ago
What good can it be used for? I haven't seen any upside that makes AI image faking good enough to ignore the negatives. The article also seems to take the relativist stance: nothing new to see here, move along. Why? For the clicks? Just to be contrarian?
djoldman | 7 days ago
Many manifestations of generative AI allow people to put concepts onto screens faster. It generally serves as a more efficient translator of "I want a contract like this one but more tailored to [new client]" or "I want to make a strategy for my [new business]." In information economy jobs, translating thoughts and ideas into better formal communications more efficiently is valuable. Be it pictures or text. | ||||||||
godelski | 4 days ago
The same generation process is also used for... well... generating anything. These models are compression functions: you learn an intractable data distribution (one you can't write down an equation for) and turn it into something you have a bit more control over. Images were, and are, a great test platform for this, since we humans can visually inspect the outputs and verify that a good generating function has been learned. But the process can be applied to any data, and truthfully, variants of it have been used throughout science for decades (arguably at least a century, but statistics really benefited from computers).

Even within image generation there are still a lot of useful applications. Want to do upscaling? These methods help there, since you're learning a more complex transform than something like bicubic interpolation (yes, there are more advanced classical algorithms; that's just an example). The same is actually true for downsampling. We can even talk about rotating images, a classic problem in old video games. There's also typical photo editing, which is done widely, most notably by Hollywood. Even if your AI only gets you 70% of the way there, it can still be helpful (if that first 70% isn't trivial).

It is also used directly in compression algorithms. It is much cheaper to share an encoder and decoder, which can be run locally, and then transmit a smaller signal. The transmission is not only typically the more expensive part, it's usually also the bottleneck, with the largest chance of data corruption.

Yeah, I agree, most people are using the tech in weird ways, and there's a lot of weird hype around malformed images that are obviously malformed if you look at them with more than a passing glance (or not through rose-colored glasses). But there are a lot of useful applications for this stuff.
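To make the upscaling point concrete: here is the kind of fixed classical interpolation mentioned above, sketched as a bilinear (rather than bicubic, for brevity) upscaler in plain Python. Purely illustrative — a learned super-resolution model replaces this hand-written formula with a trained, more complex transform:

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image (list of lists of floats) using
    classical bilinear interpolation -- the kind of fixed transform
    that learned super-resolution aims to improve on."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # map output coordinates back into source coordinates
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # weighted average of the four neighboring pixels
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0.0, 1.0],
         [1.0, 0.0]]
big = bilinear_upscale(small, 2)  # 2x2 -> 4x4, new pixels are blends
```

The formula is the same no matter what the image contains; a trained model can instead exploit statistics of natural images to hallucinate plausible detail.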
These are applications that could benefit the world far more, and personally I'm left wondering: why isn't even a small fraction of the investment pouring into status-quo image generators and LLMs going into these other domains? I'm guessing because image generators and LLMs are easier to understand? But it is a shame.
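The shared encoder/decoder compression idea above can be illustrated with a deliberately trivial toy codec — block-averaging stands in for the learned networks, and all names here are illustrative, not any real codec's API:

```python
def encode(signal, block=4):
    """Toy 'encoder': summarize each block of samples by its mean.
    Both ends already share this code; only the short latent
    representation needs to cross the expensive channel."""
    return [sum(signal[i:i + block]) / block
            for i in range(0, len(signal), block)]

def decode(latent, block=4):
    """Matching 'decoder': expand each coefficient back into a block.
    Reconstruction is generally lossy, as with learned neural codecs."""
    out = []
    for v in latent:
        out.extend([v] * block)
    return out

signal = [1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 4.0]
latent = encode(signal)    # only 2 numbers cross the wire, not 8
restored = decode(latent)  # the receiver rebuilds 8 samples locally
```

A learned codec works the same way structurally: the heavy encoder/decoder models are distributed once, and each message is just a small latent vector, which is why the expensive, error-prone transmission step shrinks.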