astrange 10 hours ago
It's fine if you want to, but I think they should consider that basically nobody is reading it. If it were important for society, photo apps would prompt you to embed it in the image, like EXIF. Computer vision is getting good enough to generate it; it has to be, because real-world objects don't have alt text.
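For what it's worth, the EXIF route already has a standard slot: tag 0x010E (ImageDescription) is where a caption would conventionally go. A minimal sketch of embedding and reading one back, assuming Pillow is installed (the file name and description text here are just illustrative):

```python
import io
from PIL import Image

# Build a tiny in-memory image to stand in for a real photo
img = Image.new("RGB", (4, 4), "red")

# EXIF tag 0x010E is ImageDescription, the conventional place for a caption
exif = Image.Exif()
exif[0x010E] = "A solid red square (example alt text)"

# Save to JPEG with the EXIF block attached
buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif)

# Read it back to confirm the description survived the round trip
buf.seek(0)
desc = Image.open(buf).getexif()[0x010E]
print(desc)
```

Screen readers and photo apps mostly ignore this tag today, which is rather the point being argued above.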
simonw 9 hours ago
I actually use Claude to generate the first draft of most of my alt text, but I still do a manual review of it, because LLMs usually don't have enough context to fully understand the message I'm trying to convey with an image: https://simonwillison.net/2025/Mar/2/accessibility-and-gen-a...
lxgr 8 hours ago
Why would photo apps do what's "important for society"? Annotating photos takes time and effort, and I could see photo apps resisting prompting their users for it: some would undoubtedly find it annoying, and many more would find it confusing. But it doesn't follow that annotations aren't helpful or important to vision-impaired users (at least until very recently, i.e. before high-quality automatic image annotation became widely available). In other words, the primary user base of photo editors isn't the set of people who would most benefit from annotations, which is probably why "alt text nudging" first appeared on social media, where both producer and consumer are in mind (at least more than in photo editors).