makeitdouble 13 hours ago

AI has no idea what you're trying to convey through a picture. Imagine you're shitposting a "this is fine" meme with some bad collage on it; asking an LLM to properly convey your take is a fool's errand.

CSSer 12 hours ago | parent | next [-]

Moreover, some images SHOULD NOT have alt text, or at least shouldn't have their alt text exposed in every context. I put that in all caps because the idea that every image needs alt text is a pervasive myth. Granted, images on Bluesky probably should all have alt text, since an image attached to a post is ostensibly content. But when an image is purely decorative and not meaningfully relevant to the page, a blind or low-vision person doesn't need to, and doesn't want to, hear your weird interpretation of some abstract art. If you disagree, take it up with the W3C. This isn't just my opinion.

https://www.w3.org/WAI/tutorials/images/decision-tree/
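To make the W3C guidance concrete: a purely decorative image gets an explicitly empty alt attribute so screen readers skip it, rather than announcing a filename or a guessed description. A minimal sketch in TypeScript/DOM terms (the asset name and element are illustrative, not from the thread):

    // Decorative flourish: alt is set to the empty string on purpose,
    // which tells assistive technology to ignore the image entirely.
    // Omitting alt altogether would instead make many screen readers
    // fall back to the file name or URL, which is worse.
    const divider: HTMLImageElement = document.createElement("img");
    divider.src = "/img/decorative-divider.png"; // hypothetical asset
    divider.alt = "";                            // empty, not missing
    divider.setAttribute("role", "presentation");
    document.body.appendChild(divider);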

bobbiechen 12 hours ago | parent | prev | next [-]

(creator here) I thought it would at least be easy to transcribe screenshots of plain text, which were a common part of the dataset. Those are harder to misinterpret, so I figured automated alt text would be a win there.

But I found that even that was not easy to do with "traditional" OCR, notes here: https://digitalseams.com/blog/image-transcription-humbled-me
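For reference, the "traditional" OCR route usually looks something like the sketch below. This is an illustration using tesseract.js, not the author's actual pipeline; it does fine on clean text but, as the linked post describes, real screenshots (UI chrome, mixed fonts, low resolution) routinely come out garbled:

    import Tesseract from "tesseract.js";

    // Naive transcription of a text screenshot into candidate alt text.
    // Works on clean, high-contrast text; messy real-world screenshots
    // are where this approach starts to fall apart.
    async function transcribeScreenshot(path: string): Promise<string> {
      const { data } = await Tesseract.recognize(path, "eng");
      return data.text.trim();
    }

    transcribeScreenshot("screenshot.png").then((text) => {
      console.log(text); // still needs human review before use as alt text
    });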

NedF 9 hours ago | parent | prev [-]

> asking an LLM to properly convey

Alt text is not there to explain the joke; that would imply blind users are stupid.

AI is fine for this. If you want to bet against the trillions of dollars behind it, come back with proof.

LLMs fail, sure, and examples are easy to find. But I'll bet human-written alt text on random internet images fails at over 10 times that rate; most of it is simply empty.