vunderba | a day ago
Nice job. I repurposed an old laptop with an RTX 2070 in it for a similar purpose. I have a Samsung Frame in my bedroom, so the laptop spins up for about 15-20 minutes every day and generates new imagery: a small 7B LLM writes a Stable Diffusion prompt, the image gets generated, and it's then fed through moderation filters, various upscaling and img2img passes, and finally piped out to the Samsung Frame via a Python WebSocket API. Works pretty well, although I've woken up to some pretty bizarre-looking stuff every once in a while. This is a mirror of what gets generated for it.

FYI, I got around the cable issue by aligning the TV directly above my piano, which helps hide it.

EDIT: Just a heads up, if the image's S3 URL is identical (meaning you're upserting), you could make a HEAD request to determine whether the image has changed versus the one currently being shown. Alternatively, you could set up an S3 trigger that fires whenever new files are uploaded. Personally, I'd recommend having two S3 folders, "archived" and "new", and assigning a random GUID to each uploaded image. That way you support queueing: if several people upload a few images, each one gets scheduled for display. S3-list the "new" files, display one of them, and once its time is up, the Raspberry Pi moves it to the "archived" folder. A rough sketch of that loop is below.
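To make the two-folder idea concrete, here is a minimal sketch, not the actual script: it assumes boto3, a placeholder bucket name, `new/` and `archived/` prefixes, and a hypothetical `show_on_frame()` standing in for whatever pushes the image to the Frame over its WebSocket API.

```python
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-frame-bucket"      # assumption: your bucket name
NEW_PREFIX = "new/"
ARCHIVE_PREFIX = "archived/"


def upload_submission(local_path: str) -> str:
    """Upload a submitted image under a random GUID so uploads queue instead of overwriting."""
    key = f"{NEW_PREFIX}{uuid.uuid4()}.jpg"
    s3.upload_file(local_path, BUCKET, key)
    return key


def next_queued_key() -> str | None:
    """Return the oldest key under new/, or None if the queue is empty."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=NEW_PREFIX)
    objects = resp.get("Contents", [])
    if not objects:
        return None
    objects.sort(key=lambda o: o["LastModified"])
    return objects[0]["Key"]


def archive(key: str) -> None:
    """Move a displayed image from new/ to archived/ (S3 has no rename, so copy then delete)."""
    dest = ARCHIVE_PREFIX + key[len(NEW_PREFIX):]
    s3.copy_object(Bucket=BUCKET, CopySource={"Bucket": BUCKET, "Key": key}, Key=dest)
    s3.delete_object(Bucket=BUCKET, Key=key)


def image_changed(key: str, last_etag: str | None) -> tuple[bool, str]:
    """For the upsert case: HEAD the object and compare ETags to see if the image changed."""
    etag = s3.head_object(Bucket=BUCKET, Key=key)["ETag"]
    return etag != last_etag, etag


# On the Raspberry Pi, the display loop would look roughly like:
#   key = next_queued_key()
#   if key:
#       show_on_frame(key)   # hypothetical: push to the Frame over its WebSocket API
#       ... wait for the display interval ...
#       archive(key)
```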
tylerjrichards | a day ago | parent
This is awesome. Do you mind me asking what your prompt is? There is a really sweet photo on your site right now.