KaiserPro · 5 hours ago
Niantic is one of a number of people doing this. It's not that clear from the article, but Niantic Spatial uses the images captured by users to build a 3D model of "THE WORLD", or at least of wherever people play Pokémon Go. They feed that imagery into a more modern version of COLMAP (https://github.com/colmap/colmap) to create a point cloud, then do the engineering to make sure that point cloud is aligned to the world accurately and automatically.

Once you have an aligned point cloud, all you need is another image with some overlapping features. Using simple trigonometry you can work out where the camera is from one picture. This is largely trivial to do for a few hundred square metres; the hard part is doing it fast at city scale. Extracting a few thousand features from an image and then matching them against more than a billion other points is hard to do quickly without some optimisations.

The thing that is not mentioned here is that data freshness is actually more important. Buildings change (advertising hoardings, paint jobs, logo changes, remodelling, etc.), so the data goes stale. It's actually not that expensive anymore to just send your own people to scan areas (a number of pre-2020 startups did this; Mapillary provides a platform for it, although it's now owned by Facebook). The robots will be feeding that data back into the map. The special sauce is updating the map without infringing patents, and doing it efficiently.
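The "simple trigonometry" step is classically called camera resectioning: given matches between 2D image features and 3D points in the aligned cloud, you solve for the camera's pose. A minimal sketch using the Direct Linear Transform in plain numpy, not whatever Niantic actually runs (production localizers use robust PnP solvers inside RANSAC to survive bad matches):

```python
import numpy as np

def solve_projection_dlt(points_3d, points_2d):
    """Direct Linear Transform: recover the 3x4 projection matrix P
    from >= 6 known 3D points and their 2D image projections.
    Each correspondence contributes two linear constraints on P."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        # u = (p1.X)/(p3.X)  ->  p1.X - u * p3.X = 0
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        # v = (p2.X)/(p3.X)  ->  p2.X - v * p3.X = 0
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # Least-squares solution: right singular vector of the smallest
    # singular value, reshaped into the 3x4 projection matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def camera_center(P):
    """The camera position in world coordinates is the null space of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]
```

With exact synthetic correspondences this recovers the camera position to numerical precision; with real, noisy feature matches you would wrap it in RANSAC and refine with a nonlinear solver.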
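The matching step itself is easy to sketch; the hard part the comment points at is scale. A brute-force nearest-neighbour matcher with Lowe's ratio test (a standard filter, assumed here for illustration) looks like this, and its O(m·n) cost is exactly why city-scale systems need approximate-nearest-neighbour indexes or vocabularies instead:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Brute-force descriptor matching with Lowe's ratio test.
    query: (m, d) descriptors from the new image.
    database: (n, d) descriptors attached to point-cloud points.
    Returns (query_index, database_index) pairs. O(m * n) per image,
    which is fine for a demo and hopeless against billions of points."""
    matches = []
    for i, q in enumerate(query):
        d2 = np.sum((database - q) ** 2, axis=1)
        j1, j2 = np.argsort(d2)[:2]
        # Accept only if the best match is clearly better than the
        # runner-up; this rejects ambiguous, repetitive features.
        if d2[j1] < (ratio ** 2) * d2[j2]:
            matches.append((i, int(j1)))
    return matches
```

The accepted 2D-to-3D matches are what feed the pose solver in the previous step.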
5- · 5 hours ago | parent
See this 2007 talk: https://www.ted.com/talks/blaise_aguera_y_arcas_how_photosyn... I remember this being available in Google Maps in 2008 or so. Fun technology!