jmward01 · 12 hours ago
I imagine this is actually where LLMs could really help. LLMs natively browse the web now, so I suspect LLM-generated descriptions of sites, or even having them re-render a site in a more usable form, is becoming much more feasible.
UqWBcuFx6NV4r · 12 hours ago
Visually impaired people are more than willing to tell everyone what they need in terms of accessible UIs, myself included. Barely anyone listens. Let's not let LLMs be the next thing that people who don't understand the actual problem try shoving in as a solution.

Wheelchair users and people with mobility impairments rightly scoff at the myriad "concepts" you see now and again: mech suits, wheelchairs that can climb stairs, and so on. "Just give us a ramp! This is a solved problem. Your alternative is just sci-fi fetishism!" Still, it keeps happening. LLMs are increasingly becoming the same thing for people with visual impairments.
shakna · 12 hours ago
I don't need the text of the page. That's easy, and I already have it. But information has a hierarchy, usually conveyed visually, and that hierarchy needs to be reflected. LLMs are famously bad at structure, especially any tree with significant depth. RAG is not enough: hallucinations become common at depth.

My response to you, right now, sits in a semi-structured node graph. I know a reply has happened because of the dangling children. I know who made it, and what they said, from cell attributes in the spans surrounding it.

Don't worry, AI is being shoved down accessibility's throat here like everywhere else: FSCompanion for JAWS, NVDA has an OpenAI plugin, and VoiceOver has it built in.

Why do I hate it? Because when it _doesn't work_, you can't tell. You don't know whether it is hallucinating data, and you cannot verify the response. If it is the mode of communication, it is all you have, which makes every failure a catastrophic failure.
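To make the hierarchy point concrete, here is a minimal sketch, not anything shakna or the screen readers above actually run. It assumes an HN-style page where a comment's nesting depth is exposed only as a visual indent value (the names `Comment`, `build_tree`, and the sample depths are all made up for illustration), and it rebuilds the reply tree from that. The point is that the structure has to be reconstructed deterministically; an LLM paraphrase of the page gives you no way to verify it got the tree right.

```python
# Sketch only: assumes hierarchy is conveyed purely by a visual indent/depth
# value per comment, with no semantic nesting in the markup itself.
from dataclasses import dataclass, field


@dataclass
class Comment:
    author: str
    text: str
    depth: int                                   # visual indent level only
    children: list["Comment"] = field(default_factory=list)


def build_tree(flat: list[Comment]) -> list[Comment]:
    """Rebuild the reply tree from a flat, visually indented comment list."""
    roots: list[Comment] = []
    stack: list[Comment] = []                    # ancestors of the current comment
    for c in flat:
        # Pop until the top of the stack is this comment's parent.
        while stack and stack[-1].depth >= c.depth:
            stack.pop()
        (stack[-1].children if stack else roots).append(c)
        stack.append(c)
    return roots


# Example: the depth values are the only thing linking a reply to its parent.
flat = [
    Comment("jmward01", "I imagine this is where LLMs could really help...", 0),
    Comment("UqWBcuFx6NV4r", "Visually impaired people are more than willing...", 1),
    Comment("shakna", "I don't need the text of the page...", 1),
]
for root in build_tree(flat):
    print(root.author, "->", [child.author for child in root.children])
```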