w10-1 · 2 hours ago
I'm disappointed that they are taking the long way around, with screenshots and visual recognition. Apple GUIs have underlying accessibility annotations that, if surfaced, would make UI manipulation easy for LLMs. "Back in the day" - the 1990s - Apple had Virtual User, basically a Lisp derivative that reported UI state as S-expressions (like a web DOM) and allowed scripts to manipulate settings and perform UI actions. With such a curated DOM/model and selective UI inputs, they could manage privacy and safety, opening up LLM control to users who would otherwise never trust a machine. I hope they're working on that approach and training models for it. It's one way they could distinguish the Apple platform as more controllable, with safety and permissions built into the subsystems instead of giving the LLM full control over UI input.
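To make the Virtual User idea concrete, here's a minimal sketch (plain Python, all names hypothetical, not any real Apple API) of a curated UI tree serialized as S-expressions that an agent could query and act on instead of matching pixels:

```python
# Hypothetical sketch: a curated UI "DOM" exposed as S-expressions,
# in the spirit of Virtual User. None of this is a real Apple API.

from dataclasses import dataclass, field

@dataclass
class UINode:
    role: str                       # e.g. "window", "checkbox", "button"
    name: str                       # accessibility label
    value: str = ""                 # current state, if any
    children: list = field(default_factory=list)

    def to_sexpr(self) -> str:
        """Serialize the subtree as an S-expression, the way Virtual User reported UI state."""
        inner = " ".join(c.to_sexpr() for c in self.children)
        val = f' :value "{self.value}"' if self.value else ""
        kids = f" ({inner})" if inner else ""
        return f'({self.role} "{self.name}"{val}{kids})'

    def find(self, role: str, name: str):
        """Depth-first lookup an agent could use instead of visual recognition."""
        if self.role == role and self.name == name:
            return self
        for c in self.children:
            hit = c.find(role, name)
            if hit:
                return hit
        return None

# A toy Settings window exposing only a privacy-curated subset of controls.
window = UINode("window", "Settings", children=[
    UINode("checkbox", "Wi-Fi", value="on"),
    UINode("button", "Done"),
])

print(window.to_sexpr())
# The agent toggles a control through the curated model, not raw UI input:
window.find("checkbox", "Wi-Fi").value = "off"
```

Because the platform decides which nodes appear in the tree, privacy and safety constraints live in the serializer rather than in the model's judgment.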
rishabhaiover · 35 minutes ago
I'd be very interested to learn about output quality vs. token utilization for both of these approaches.
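A rough back-of-envelope version of that comparison (all figures are illustrative assumptions, not measurements from any particular model):

```python
# Back-of-envelope sketch: tokens to describe a screen as structured text
# versus as a screenshot. Both figures below are illustrative assumptions,
# not measurements from any real model or tokenizer.

ui_tree = '(window "Settings" ((checkbox "Wi-Fi" :value "on") (button "Done")))'

# Crude text-token estimate: ~4 characters per token is a common rule of thumb.
text_tokens = len(ui_tree) // 4

# Vision models typically spend hundreds to thousands of tokens per image;
# 1500 here is an assumed placeholder for a full-screen capture.
image_tokens = 1500

print(f"structured tree: ~{text_tokens} tokens, screenshot: ~{image_tokens} tokens")
```

The structured representation is smaller by orders of magnitude for simple screens, though real costs depend heavily on tree depth, the model's tokenizer, and image resolution.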
CharlesW · an hour ago
> I'm disappointed that they are taking the long way around, with screen shots and visual recognition.

This strikes me as a universal fallback rather than Apple choosing vision instead of a structured control plane. It nicely complements the layers Apple has been building for years: App Intents, Shortcuts, Spotlight/Siri surfaces, etc. Those are essentially curated action graphs with explicit parameters, validation, and user consent, which is much closer to your "DOM with safety rails" ideal. All iOS app developers should now be building "App Intents first". Vision-based awareness is a nice safety net for users of apps whose devs haven't yet realized where this is all obviously going.
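The "curated action graph with explicit parameters, validation, and user consent" can be sketched like this (plain Python with hypothetical names; the real App Intents framework is a declarative Swift API, so this only illustrates the shape of the idea):

```python
# Hypothetical sketch of an intent registry with declared parameters,
# validation, and a consent gate, in the spirit of App Intents.
# All names and structure are illustrative, not Apple's actual API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Intent:
    name: str
    params: dict                    # parameter name -> set of allowed values
    requires_consent: bool
    handler: Callable[..., str]

REGISTRY: dict[str, Intent] = {}

def register(intent: Intent) -> None:
    REGISTRY[intent.name] = intent

def invoke(name: str, consent_granted: bool = False, **kwargs) -> str:
    intent = REGISTRY[name]
    # Validation: reject any parameter outside the declared schema.
    for k, v in kwargs.items():
        if k not in intent.params or v not in intent.params[k]:
            raise ValueError(f"invalid parameter {k}={v!r}")
    # Consent gate: the LLM never drives raw UI input, only vetted actions.
    if intent.requires_consent and not consent_granted:
        raise PermissionError(f"{name} requires user consent")
    return intent.handler(**kwargs)

register(Intent(
    name="SetWiFi",
    params={"state": {"on", "off"}},
    requires_consent=True,
    handler=lambda state: f"Wi-Fi turned {state}",
))

print(invoke("SetWiFi", consent_granted=True, state="off"))
```

The safety property falls out of the structure: an LLM planning over this registry can only ever emit actions the app explicitly declared, with parameters the schema allows, behind a consent check the model cannot bypass.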