PebblesHD 3 days ago
Rather than improving testing for fallible accessibility assists, why not leverage AI to eliminate the need for them? An agent on your device can interpret the same page a sighted or otherwise unimpaired person would see, giving you, as a disabled user, the same experience they would have. Why would that not be preferable? It also puts you in control of how you want that agent to interpret pages.
simonw 3 days ago
I'm optimistic that modern AI will lead to future improvements in accessibility tech, but for the moment I want to meet existing screen reader users where they are and ensure the products I build are as widely accessible as possible.
K0nserv 3 days ago
It adds loads of latency, for one. If you watch a competent screen reader user, you'll notice they have the speech rate set very high; to you it would be hard to understand anything. Adding an LLM in the middle of this will add at least hundreds of milliseconds of latency to every interaction.
eru 3 days ago
What you are describing is something the end user can do. What simonw was describing is something the author can do, and the end user can benefit whether they use AI or not.
8organicbits 3 days ago
The golden rule of LLMs is that they can make mistakes and you need to check their work. You're describing a situation where the intended user cannot check the LLM output for mistakes. That violates a safety constraint and is not a good use case for LLMs.
devinprater 3 days ago
I, myself, as a singular blind person, would absolutely love this. But we ain't there yet. On-device AI isn't fine-tuned for this, and neither Apple nor Google has shown any indication of working on it in release software, so I'm sure we're a good 3 years away from the first version of this.