▲ | a2128 4 days ago
Everyone shares the music and art they make, but nobody ever shares videos and motion capture of themselves doing laundry and vacuuming their house. Maybe we need to start sharing that instead.
▲ | fragmede 4 days ago
The UMI gripper project is working on this. They have a handheld gripper device full of sensors that they use to record tasks in the field, like picking up items at a Starbucks, which they then use as training data. https://umi-gripper.github.io/

The other thing to note is that part of the ALOHA project isn't just to record people folding laundry and loading the dishwasher, but to take that data, plug it into a simulator with a physics engine, and use a digital twin to generate 10x the amount of training data they would get from real-world recordings alone. So yes, we need that data, but not as much of it as we would otherwise. https://mobile-aloha.github.io/ https://github.com/tonyzhaozh/aloha
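To make the digital-twin point concrete, here is a rough sketch of the idea (my own illustration, not the ALOHA code): take one recorded teleop demonstration and replay it in a simulator with jittered starting states and action noise to produce many more trajectories. `StubSimEnv`, `augment_demo`, and all the parameters here are hypothetical placeholders.

    # Illustrative sketch only: multiplying a real demo ~10x via simulated replays.
    import numpy as np

    class StubSimEnv:
        """Trivial stand-in for a physics-engine digital twin (hypothetical)."""
        def reset_to(self, state):
            self.state = np.asarray(state, dtype=float)
        def step(self, action):
            # A real simulator would integrate robot and object dynamics here.
            self.state = self.state + np.asarray(action, dtype=float)
            return self.state.copy()

    def augment_demo(demo_states, demo_actions, sim_env, n_variants=10, noise=0.01):
        """Replay one recorded demo under jittered conditions to get extra trajectories."""
        augmented = []
        for _ in range(n_variants):
            # Perturb the recorded starting state so each rollout differs slightly.
            start = demo_states[0] + np.random.normal(0.0, noise, demo_states[0].shape)
            sim_env.reset_to(start)
            states, actions = [start], []
            for a in demo_actions:
                a_noisy = a + np.random.normal(0.0, noise, a.shape)  # crude domain randomization
                states.append(sim_env.step(a_noisy))
                actions.append(a_noisy)
            augmented.append((np.stack(states), np.stack(actions)))
        return augmented

    # Example: one 50-step, 14-DoF demo becomes ten simulated variants.
    demo_states = np.zeros((50, 14))
    demo_actions = np.zeros((49, 14))
    extra = augment_demo(demo_states, demo_actions, StubSimEnv())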
▲ | my_username_is_ 4 days ago
Check out the Epic Kitchens project; there are labeled video datasets of cooking, doing dishes, etc.
▲ | verdverm 3 days ago
Meta has several first-person POV datasets available.
▲ | numpad0 4 days ago
Even teleops are janky as hell; robots need bodies.