AlotOfReading | 3 hours ago
Just a suggestion from someone who's worked on industrial robots and autonomous vehicles, but I think you're underselling a lot of the difficulties here. Skilled humans tend to fully engage all of their senses during a task. For example, human drivers use their entire field of vision at night even though headlights only illuminate tiny portions of their FoV. I've never operated an excavator, but I would be very surprised if skilled operators only used the portion of their vision immediately around the bucket, rather than the rest of it for situational awareness.

That said, UI design is a tradeoff. There's a paper with a nice list of teleoperation design principles [0], which does mention single windows as a positive. On the other hand, a common principle in older aviation HCI literature is that nothing about the system UI should surprise the human. It's hard to maintain a good mental model of the system state when windows resize automatically.

The hardest part is going to be building really good autonomous safety layers. That's the most difficult part of building a fully autonomous system anyway; the main advantage of teleop is that you can [supposedly] sidestep having one.
jashmota | 3 hours ago | parent
I definitely agree with you: recreating the scene in teleop is challenging. In excavators, however, teleop can actually improve visibility. An excavator has huge blind spots to the right (due to the arm), to the rear, and sometimes near the bucket, which is why workers (banksmen, spotters, signalmen) are hired to stand around and signal to the operator. It's like driving a Ford F-150 without a backup camera: you'd want the rear view displayed up front on a screen, not shown at the back window. It's definitely challenging and we're far from something that's perfect, but we're iterating toward something better every day.