basch 6 hours ago
I’ll disagree with you a little. The reason I don’t use voice is context switching. With a mouse and keyboard I can switch windows. With my voice, the computer can’t yet automatically determine whether I am dictating a transcription or giving editing commands. What I really need is for the interpreter listening to me to intuitively know whether I am in the equivalent of vi command mode or insert mode.

That is the roadblock to not needing a screen at all. Right now I want to visualize whether it understood me correctly, because if it didn’t switch from insert to command automatically, all my commands end up written into my paragraph. I also don’t want to listen to the computer talk back to confirm it heard me. I want to just keep going, to keep narrating my thoughts and trust it’s doing the right things, without having to check. Having it slowly chime in to repeat that it listened derails my flow and train of thought.

TL;DR: The future of voice is headless vi.
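A minimal sketch of what that modal interpreter might look like, under the (big) assumption that mode switches are explicit spoken phrases rather than inferred intent — the names (`ModalVoiceBuffer`, the wake phrases, the single command) are all hypothetical; the hard part the comment describes is doing this without explicit phrases:

```python
from enum import Enum

class Mode(Enum):
    INSERT = "insert"   # dictation: utterances are appended as text
    COMMAND = "command"  # editing: utterances are interpreted as commands

class ModalVoiceBuffer:
    """Toy model of 'headless vi' for voice: each utterance is either
    dictated text or an editing command, depending on the current mode.
    Explicit phrases switch modes; a real system would need to infer
    the switch, which is exactly the unsolved part."""

    def __init__(self):
        self.mode = Mode.INSERT
        self.text = []  # list of dictated sentences

    def hear(self, utterance):
        u = utterance.strip().lower()
        # Hypothetical wake phrases standing in for intent detection.
        if u == "command mode":
            self.mode = Mode.COMMAND
        elif u == "insert mode":
            self.mode = Mode.INSERT
        elif self.mode == Mode.INSERT:
            self.text.append(utterance)
        else:
            self.run_command(u)

    def run_command(self, cmd):
        # One illustrative editing command; vi has hundreds.
        if cmd == "delete last sentence" and self.text:
            self.text.pop()

    def paragraph(self):
        return " ".join(self.text)
```

Note that the failure mode described above falls straight out of this model: if "command mode" is misheard as dictation, every subsequent command lands in `self.text` as prose.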
skeledrew 5 hours ago
The problem I see here is that you're trying to shoehorn a voice interface onto something that's highly optimized for keyboard input. The apps need to be redesigned to accommodate the interface; otherwise it's just never-ending papercuts.