jashmota | 2 hours ago
The way I think about this: we shouldn't have multiple screens. Human central vision covers about 60 degrees, and the binocular field is about 120 degrees. The excavator's bucket occupies far less than that, which means the actual task doesn't require wide vision. So if we can build really good autonomous safety layers to ensure safe movements, and dynamically resize the remote teleop windows, we'd make the operator more efficient. So while we stream a 360-degree view, we get creative in how we show it.

That's the vision side. We also stream engine audio and provide haptic feedback.

Takeuchi are interesting! One of the rare makers to put blades even on the bigger sizes - is that why you got one?
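To illustrate the dynamic-resize idea above, here's a minimal sketch; every name and constant is hypothetical, not a description of our actual stack. It crops the wide frame to a window around the bucket and widens the window as confidence about the surroundings drops:

    # Hypothetical sketch: crop a wide teleop frame to a viewport
    # centered on the bucket; widen it when the safety layer is less
    # confident about clearance around the machine.
    def viewport(bucket_px, clearance_confidence, frame_w=3840, frame_h=1080):
        min_w, max_w = 640, frame_w
        w = int(min_w + (1.0 - clearance_confidence) * (max_w - min_w))
        h = int(w * frame_h / frame_w)
        # Clamp the crop rectangle so it stays inside the full frame.
        x = min(max(bucket_px[0] - w // 2, 0), frame_w - w)
        y = min(max(bucket_px[1] - h // 2, 0), frame_h - h)
        return x, y, w, h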
AlotOfReading | an hour ago | parent
Just a suggestion from someone who's worked on industrial robots and autonomous vehicles, but I think you're underselling a lot of the difficulties here. Skilled humans tend to fully engage all of their senses during a task. For example, human drivers use their entire field of vision at night even though headlights only illuminate a tiny portion of it. I've never operated an excavator, but I would be very surprised if skilled operators only used the portion of their vision immediately around the bucket, and not the rest of it for situational awareness.

That said, UI design is a tradeoff. There's a paper with a nice list of teleoperation design principles [0], which does count single windows as a positive. On the other hand, a common principle in the older aviation HCI literature is that nothing about the system UI should surprise the human. It's hard to maintain a good model of the system state when the UI resizes windows on its own.

The hardest thing is going to be making really good autonomous safety layers. That's the most difficult part of building a fully autonomous system anyway; the main advantage of teleop is that you can [supposedly] sidestep having one.
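To make that last point concrete, the interlock logic itself is the trivially easy part. Here's a hedged sketch (all names and constants invented for illustration) that scales a commanded swing velocity by distance to the nearest keep-out zone; the genuinely hard part is producing a distance estimate you can trust from perception, plus redundant sensing, watchdogs, and fail-safe stops:

    # Hypothetical safety-layer primitive: ramp commanded swing
    # velocity down to zero as the machine approaches a keep-out zone.
    def limit_swing(cmd_vel, dist_m, stop_m=1.0, slow_m=5.0):
        if dist_m <= stop_m:
            return 0.0          # hard stop inside the buffer
        if dist_m >= slow_m:
            return cmd_vel      # full authority when well clear
        scale = (dist_m - stop_m) / (slow_m - stop_m)
        return cmd_vel * scale  # linear ramp in between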