yieldcrv 8 hours ago
So basically, despite the higher resource requirements (around 10 TB of data for 30 minutes of footage), the compositing is so much faster and more flexible, and those resources can be deleted or moved to long-term cloud storage very quickly so the project can move on. Fascinating. I wouldn't normally have read this and watched the video, but my Claude sessions were already executing a plan.

The tl;dr is that all the actors were scanned into a 3D point cloud system and then "NeRF"'d, which means to extrapolate any missing data about their transposed 3D model. This was then more easily placed into the video than trying to composite and place 2D actors layer by layer.
darhodester 7 hours ago
Gaussian splatting is not NeRF (neural radiance field), but it is a type of radiance field, and it supports novel view synthesis. The difference is that Gaussian splatting uses an explicit point-cloud representation, whereas in a NeRF the scene is implicit and must be inferred by querying a neural network.
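To make the distinction concrete, here's a minimal sketch (hypothetical, not any real library's API): a splat scene is just an explicit list of Gaussian primitives you can index, edit, or delete directly, while a NeRF scene only exists as the output of a network query.

```python
import math
from dataclasses import dataclass

@dataclass
class Splat:
    # One primitive of an explicit Gaussian-splat scene (axis-aligned
    # covariance for simplicity; real splats carry a full 3D covariance
    # and spherical-harmonic color).
    position: tuple  # (x, y, z) center of the Gaussian
    scale: tuple     # per-axis standard deviations
    color: tuple     # RGB
    opacity: float

def splat_density(s: Splat, point: tuple) -> float:
    """Unnormalized Gaussian falloff of one splat at a 3D point."""
    q = sum(((p - c) / sd) ** 2
            for p, c, sd in zip(point, s.position, s.scale))
    return s.opacity * math.exp(-0.5 * q)

# A NeRF, by contrast, is an opaque learned function: density and color
# at a point exist only as the output of a network forward pass, so the
# scene lives in the weights, not in an editable list of primitives.
def nerf_query(point, view_dir, weights):
    raise NotImplementedError("scene is implicit in the network weights")

# An explicit scene: a plain list you can inspect and manipulate.
scene = [Splat((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0), 0.9)]
print(splat_density(scene[0], (0.0, 0.0, 0.0)))  # peak density at the center
```

This explicitness is why splats composite well: each actor's scan is a bag of primitives that can be transformed and dropped into a shot directly, rather than re-rendered through a network.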
andybak 8 hours ago
> and then "NeRF"'d, which means to extrapolate any missing data about their transposed 3D model

Not sure if it's you or the original article, but that's a slightly misleading summary of NeRFs.