| ▲ | darhodester 7 hours ago |
| Hi, I'm David Rhodes, Co-founder of CG Nomads, developer of GSOPs (Gaussian Splatting Operators) for SideFX Houdini. GSOPs was used in combination with OTOY OctaneRender to produce this music video. If you're interested in the technology and its capabilities, learn more at https://www.cgnomads.com/ or AMA. Try GSOPs yourself: https://github.com/cgnomads/GSOPs (example content included). |
|
| ▲ | henjodottech 4 hours ago | parent | next [-] |
| I’m fascinated by the aesthetic of this technique. I remember early versions that were completely glitched out and presented 3D clouds of noise and fragments to traverse through. I’m curious if you have any thoughts about creatively ‘abusing’ this tech? Perhaps misaligning things somehow or using some wrong inputs. |
| |
| ▲ | darhodester 2 hours ago | parent | next [-] | | There are a ton of fun tricks you can perform with Gaussian splatting! You're right that you can intentionally under-reconstruct your scenes, which can create a dream-like effect. It's also possible to stylize your Gaussian splats to produce NPR (non-photorealistic rendering) effects. Check out David Lisser's amazing work: https://davidlisser.co.uk/Surface-Tension. Additionally, you can intentionally introduce view-dependent ghosting artifacts. In other words, if the images taken from certain angles contain an object that has been removed from the other views, training can produce a lenticular/holographic effect. | |
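For illustration, the view dependence behind that lenticular trick comes from each Gaussian storing spherical-harmonic color coefficients that are evaluated against the camera direction at render time. A minimal degree-1 sketch, assuming the constants and 0.5 offset used by the reference 3DGS implementation:

```python
import numpy as np

# Degree-1 spherical-harmonic color for one Gaussian. The constants and
# the 0.5 offset follow the reference 3DGS convention (an assumption here).
C0, C1 = 0.28209479177387814, 0.4886025119029199

def sh_to_rgb(sh, view_dir):
    """sh: (4, 3) SH coefficients per RGB channel; view_dir: unit vector."""
    x, y, z = view_dir
    rgb = C0 * sh[0] - C1 * y * sh[1] + C1 * z * sh[2] - C1 * x * sh[3]
    return np.clip(rgb + 0.5, 0.0, 1.0)

# The same Gaussian can return different colors from different angles, so
# per-view edits (e.g. an object removed from only some input photos) get
# baked into the splats as a ghost that appears and disappears with the view.
sh = np.random.randn(4, 3) * 0.2
print(sh_to_rgb(sh, np.array([0.0, 0.0, 1.0])))  # seen from +Z
print(sh_to_rgb(sh, np.array([1.0, 0.0, 0.0])))  # seen from +X
```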
| ▲ | darhodester 2 hours ago | parent | prev [-] | | The ghost effect is pretty cool, too! https://www.youtube.com/watch?v=DQGtimwfpIo |
|
|
| ▲ | c-fe 3 hours ago | parent | prev | next [-] |
| Hi David, have you looked into alternatives to 3DGS like https://meshsplatting.github.io/ that promise better results and faster training? |
| |
| ▲ | darhodester 2 hours ago | parent [-] | | I have. Personally, I'm a big fan of hybrid representations like this. An underlying mesh helps with relighting, deformation, and effective editing operations (a mesh is a sparse node graph for an otherwise unstructured set of data). However, surface-based constraints can prevent thin surfaces (hair/fur) from reconstructing as well as they do in vanilla 3DGS, and they may also keep certain reflections and transparencies from being reconstructed as accurately. |
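To make the hybrid idea concrete, here is a hedged sketch of the common binding trick (not MeshSplatting's exact formulation): each Gaussian stores a triangle index and barycentric coordinates, so deforming the mesh drags the splats along with it.

```python
import numpy as np

# Mesh-bound splats: positions are barycentric interpolations of triangle
# corners, so any mesh deformation moves the Gaussians for free.
def splat_centers(vertices, faces, tri_idx, bary):
    """vertices: (V,3) float, faces: (F,3) int, tri_idx: (N,) int, bary: (N,3)."""
    tris = vertices[faces[tri_idx]]             # (N, 3, 3) triangle corners
    return np.einsum('ni,nij->nj', bary, tris)  # barycentric interpolation

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [1, 3, 2]])
tri_idx = np.array([0, 1])                      # one Gaussian per triangle here
bary = np.array([[1/3, 1/3, 1/3], [0.2, 0.5, 0.3]])

print(splat_centers(vertices, faces, tri_idx, bary))
vertices[:, 2] += 0.5                           # deform the mesh...
print(splat_centers(vertices, faces, tri_idx, bary))  # ...and the splats follow
```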
|
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | dostick 5 hours ago | parent | prev | next [-] |
| Could such a plugin be made for DaVinci Resolve, to merge scenes captured from two iPhones with spatial data into a single 3D scene? With the M4 that shouldn’t be a problem? |
| |
|
| ▲ | jeffgreco 5 hours ago | parent | prev | next [-] |
| Great work! I’d love to see a proper BTS or case study. |
| |
|
| ▲ | moralestapia 6 hours ago | parent | prev | next [-] |
| Random question, since I see your username is green. How did you find out this was posted here? Also, great work! |
| |
| ▲ | darhodester 6 hours ago | parent [-] | | My friend and colleague shared a link with me. Pretty cool to see this trending here. I'm very passionate about Gaussian splatting and developing tools for creatives. And thank you! |
|
|
| ▲ | sbierwagen 7 hours ago | parent | prev | next [-] |
| From the article: >Evercoast deployed a 56 camera RGB-D array
Do you know which depth cameras they used? |
| |
| ▲ | bininunez 5 hours ago | parent | next [-] | | We (Evercoast) used 56 RealSense D455s. Our software can run with any camera input, from depth cameras to machine vision to cinema REDs. But for this, RealSense did the job. The higher-end the camera, the more expensive and time-consuming everything is. We have a cloud platform to scale rendering, but it’s still overall more costly (time and money) to use high res. We’ve worked hard to make even low res data look awesome. And if you look at the aesthetic of the video (90s MTV), we didn’t need 4K/6K/8K renders. | | |
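For anyone curious what a frame grab from one of those cameras looks like, here is a single-camera sketch using the pyrealsense2 SDK. The resolution and frame rate are illustrative, and the hardware sync and per-camera calibration a 56-camera rig needs are omitted entirely.

```python
import numpy as np
import pyrealsense2 as rs

# Grab one aligned RGB-D frame from a RealSense D455.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # register depth into the color frame
try:
    frames = align.process(pipeline.wait_for_frames())
    # Depth is uint16; multiply by the device depth scale (typically 0.001) for meters.
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())
    print(depth.shape, color.shape)
finally:
    pipeline.stop()
```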
| ▲ | bredren 2 hours ago | parent [-] | | You may have explained this elsewhere, but if not: what kind of post-processing did you do to upscale or refine the RealSense video? Can you add any interesting details on the benchmarking done against the RED camera rig? |
| |
| ▲ | darhodester 7 hours ago | parent | prev | next [-] | | Aha: https://www.red.com/stories/evercoast-komodo-rig So likely RealSense D455. | |
| ▲ | darhodester 7 hours ago | parent | prev | next [-] | | I was not involved in the capture process with Evercoast, but I may have heard somewhere they used RealSense cameras. I recommend asking https://www.linkedin.com/in/benschwartzxr/ for accuracy. | |
| ▲ | secretsatan 7 hours ago | parent | prev | next [-] | | Couldn’t you just use iPhone Pros for this?
I developed an app specifically for photogrammetry capture using AR and the depth sensor, as it seemed like a cheap alternative. EDIT:
I realize a phone is not on the same level as a RED camera, but I just saw iPhones as a massively cheaper option to the alternatives in the field I worked in. | | |
| ▲ | F7F7F7 7 hours ago | parent | next [-] | | ASAP Rocky has a fervent fanbase that has been anticipating this album. So I'm assuming that whatever record label he's signed to gave him the budget. And when I think back to another iconic hip hop video (iconic for that genre) where they used practical effects and military helicopters chasing speedboats in the waters off of Santa Monica... I bet they had change to spare. | | | |
| ▲ | numpad0 5 hours ago | parent | prev | next [-] | | A single camera only captures the side of the object facing the camera. Knowing how far away the camera-facing side of a Rubik's Cube is helps if you're making educated guesses (novel view synthesis), but it won't solve the problem of actually photographing the backside. There are usually six sides on a cube, which means you need a minimum of six iPhones around an object to capture all sides of it and then be able to freely move around it. You might as well seek open-source alternatives rather than relying on Apple surprise boxes for that. In cases where your subject is static, such as a building, you can of course wave a single iPhone around for the same effect, with results comparable to more expensive rigs. | |
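The coverage argument is easy to check numerically. With a simple back-facing test (a toy model that ignores occlusion by other objects), any single viewpoint sees at most three of a cube's six faces:

```python
import numpy as np

# Face normals of an axis-aligned unit cube centered at the origin, and
# the face centers (each face center lies half a unit along its normal).
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
centers = 0.5 * normals

def visible_faces(cam_pos):
    """Count faces whose outward normal points toward the camera."""
    to_cam = cam_pos - centers
    return int(np.sum(np.einsum('ij,ij->i', normals, to_cam) > 0))

print(visible_faces(np.array([3.0, 0.0, 0.0])))  # 1 face, seen head-on
print(visible_faces(np.array([3.0, 3.0, 3.0])))  # 3 faces, the best any single view gets
```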
| ▲ | darhodester 6 hours ago | parent | prev | next [-] | | I think it's because they already had proven capture hardware and harvesting/processing workflows. But yes, you can easily use iPhones for this now. | |
| ▲ | secretsatan 5 hours ago | parent [-] | | Looks great, by the way. I was wondering if there's a file format for volumetric video captures? | |
| ▲ | itishappy 2 hours ago | parent | next [-] | | https://developer.apple.com/av-foundation/ https://developer.apple.com/documentation/spatial/ Edit: As I dig in, this seems to be focused on stereoscopic video as opposed to actual point clouds. It appears applications like Cinematic mode use a monocular depth map, while the LiDAR outputs raw point cloud data. | |
| ▲ | numpad0 20 minutes ago | parent [-] | | A LiDAR point cloud from a single point of view is a monocular depth map. Unless the LiDAR in question is, like, using supernova-level gamma rays or neutrino generators for the laser part to get density and albedo volumetric data across its whole distance range. You just can't see the back of a thing by knowing the shape of the front side with current technologies. |
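That equivalence is easy to see in code: a single-viewpoint depth map back-projects to a point cloud that only contains camera-facing surfaces. A sketch with illustrative pinhole intrinsics (fx, fy, cx, cy are made-up values):

```python
import numpy as np

def depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """depth: (H, W) array of metric depths; returns an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

pts = depth_to_points(np.full((480, 640), 2.0))     # a flat wall 2 m away
print(pts.shape)  # every recovered point lies on the surface facing the camera
```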
| |
| ▲ | darhodester 2 hours ago | parent | prev | next [-] | | Some companies have proprietary file formats for compressed 4D Gaussian splatting, for example https://www.gracia.ai and https://www.4dv.ai. Also check this project: https://zju3dv.github.io/freetimegs/ Unfortunately, these formats are currently closed behind cloud processing, so adoption is rather low. Before Gaussian splatting, textured mesh caches (e.g. Alembic geometry) were used for volumetric video. | |
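Absent an open 4D standard, the lowest-common-denominator interchange is a per-frame geometry cache. A naive sketch that writes one binary PLY of point positions per frame (real 4DGS codecs instead exploit temporal coherence, and splats carry more attributes than positions):

```python
import struct

def write_ply(path, points):
    """Write a minimal binary PLY of (x, y, z) float positions."""
    pts = list(points)
    header = (b"ply\nformat binary_little_endian 1.0\n"
              + b"element vertex %d\n" % len(pts)
              + b"property float x\nproperty float y\nproperty float z\n"
              + b"end_header\n")
    with open(path, "wb") as f:
        f.write(header)
        for p in pts:
            f.write(struct.pack("<3f", *p))

# One file per time sample: the simplest possible "volumetric video" cache.
for frame, t in enumerate([0.0, 0.1, 0.2]):
    write_ply(f"cache.{frame:04d}.ply", [(t, 0.0, 0.0), (t, 1.0, 0.0)])
```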
| ▲ | secretsatan 3 hours ago | parent | prev [-] | | I guess I mean recording point clouds over time. I'm not going to pretend to understand video compression, but could the movement-following aspect be done in 3D the same way as in 2D? |
|
| |
| ▲ | fastasucan 7 hours ago | parent | prev [-] | | Why would they go for the cheapest option? | | |
| ▲ | secretsatan 6 hours ago | parent [-] | | It was more the point that the technology is much cheaper. The company I worked for had completely missed it while trying to develop in-house solutions. |
|
| |
| ▲ | brcmthrowaway 7 hours ago | parent | prev [-] | | Kinect Azure |
|
|
| ▲ | huflungdung 4 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | darig 4 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | chrisjj 3 hours ago | parent | prev [-] |
| >high-quality 3D content
Would have been nice to see some in the video. |
| |