meindnoch 6 hours ago
1. Create a point cloud from a scene (either via lidar, or via photogrammetry from multiple images).
2. Replace each point of the point cloud with a fuzzy ellipsoid that has a bunch of parameters for its position + size + orientation + view-dependent color (via spherical harmonics up to some low order).
3. If you render these ellipsoids using a differentiable renderer, then you can subtract the resulting image from the ground truth (i.e. your original photos), and calculate the partial derivatives of the error with respect to each of the millions of ellipsoid parameters that you fed into the renderer.
4. Now you can run gradient descent using the differentiable renderer, which makes your fuzzy ellipsoids converge to something closely reproducing the ground truth images (from multiple angles). A toy sketch of steps 3 and 4 follows below.
5. Since the ellipsoids started at the 3D point cloud's positions, the 3D structure of the scene will likely be preserved during gradient descent, thus the resulting scene will support novel camera angles with plausible-looking results.
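A toy 2D sketch of steps 3 and 4, just to make the "differentiable renderer + gradient descent" part concrete. This is not the real 3DGS pipeline (no cameras, no depth sorting or alpha compositing, no spherical harmonics, no densification); `render_splats` and all the parameter names here are made up for illustration, and plain PyTorch stands in for the actual CUDA rasterizer:

```python
# Toy sketch: fit 2D "splats" to a single target image by gradient descent.
# Every splat parameter below is a learnable tensor; the renderer is written
# with ordinary differentiable tensor ops, so autograd gives the partial
# derivatives of the image error w.r.t. every parameter.
import torch

H, W, N = 64, 64, 200                                   # image size, number of splats

# Learnable per-splat parameters (3DGS would initialise these from a point cloud)
pos     = torch.nn.Parameter(torch.rand(N, 2))          # centre in [0,1]^2
log_s   = torch.nn.Parameter(torch.full((N, 2), -3.0))  # log std-dev per axis (size)
theta   = torch.nn.Parameter(torch.zeros(N))            # rotation angle (orientation)
color   = torch.nn.Parameter(torch.rand(N, 3))          # RGB (pre-sigmoid)
logit_a = torch.nn.Parameter(torch.zeros(N))            # opacity logit

ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
pix = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (H*W, 2) pixel coordinates

def render_splats():
    """Differentiable render: weighted sum of anisotropic 2D Gaussians."""
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)  # (N,2,2)
    d = pix[None, :, :] - pos[:, None, :]                # (N, H*W, 2) pixel offsets
    d_local = torch.einsum("nij,npj->npi", R, d)         # rotate into each splat's frame
    inv_var = torch.exp(-2 * log_s)[:, None, :]          # 1 / sigma^2 per axis
    g = torch.exp(-0.5 * (d_local ** 2 * inv_var).sum(-1))          # (N, H*W) densities
    w = g * torch.sigmoid(logit_a)[:, None]              # weight by opacity
    img = (w[:, :, None] * torch.sigmoid(color)[:, None, :]).sum(0)  # (H*W, 3)
    return img.reshape(H, W, 3).clamp(0, 1)

# Stand-in ground truth; in practice this would be a real photo of the scene.
target = torch.zeros(H, W, 3)
target[16:48, 16:48] = torch.tensor([0.8, 0.2, 0.2])

opt = torch.optim.Adam([pos, log_s, theta, color, logit_a], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = ((render_splats() - target) ** 2).mean()      # step 3: image-space error
    loss.backward()                                      # derivatives w.r.t. every splat parameter
    opt.step()                                           # step 4: gradient descent
```

In the real method the splats live in 3D, get projected through each training camera, depth-sorted and alpha-composited, and the optimizer also periodically clones, splits and prunes Gaussians while fitting many photos at once.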
klondike_klive 6 hours ago
You... you must have been quite some 5-year-old.
renewiltord 3 hours ago
Great explanation/simplification. Top quality contribution.
chrisjj 3 hours ago
Or: Matrix bullet time with more viewpoints and lower quality.