rgovostes | 4 days ago
I was working for a marine biologist applying computer vision to the task of counting migrating river herring. The data set was lousy: background movement, poor foreground/background separation, inconsistent lighting, and slow exposures that left blurry streaks. Worse, the researchers had posted the frames to a citizen science platform and asked volunteers to tag each fish with a single point—sufficient for counting by hand, but basically useless for training an object detector. In desperation I turned to Mechanical Turk to have the labels redone as bounding boxes, requiring some amount of agreement between labelers. Even then, results were so-so, with many workers producing essentially random labels. I had to take another pass myself, flipping rapidly through thousands of low-confidence frames, which gave me nausea not unlike seasickness.
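The "agreement between labelers" step can be sketched roughly like this: cluster boxes from independent workers by intersection-over-union, keep clusters backed by enough workers, and average the survivors. This is a minimal illustration, not the commenter's actual pipeline; the function names (`iou`, `consensus_boxes`) and thresholds are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def consensus_boxes(worker_boxes, min_votes=2, iou_thresh=0.5):
    """Greedily cluster boxes drawn by different workers.

    worker_boxes: dict mapping worker id -> list of (x1, y1, x2, y2) boxes.
    Clusters supported by fewer than min_votes distinct workers are dropped
    (this is what filters the essentially random labels); surviving clusters
    are merged by coordinate-wise averaging. Thresholds are illustrative.
    """
    flat = [(w, b) for w, anns in worker_boxes.items() for b in anns]
    used = [False] * len(flat)
    merged = []
    for i, (wi, bi) in enumerate(flat):
        if used[i]:
            continue
        cluster = [(wi, bi)]
        used[i] = True
        for j in range(i + 1, len(flat)):
            wj, bj = flat[j]
            # One vote per worker per cluster; match by IoU overlap.
            if (not used[j] and wj not in {w for w, _ in cluster}
                    and iou(bi, bj) >= iou_thresh):
                cluster.append((wj, bj))
                used[j] = True
        if len(cluster) >= min_votes:
            bs = [b for _, b in cluster]
            merged.append(tuple(sum(c) / len(bs) for c in zip(*bs)))
    return merged
```

With two workers agreeing on roughly the same fish and a third placing a stray box, only the agreed-upon region survives; the cluster size also gives a crude confidence score for deciding which frames to re-review by hand.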