jerkstate 4 hours ago

With AI, VR is even more promising. I have been working on a Gaussian splat renderer for the Quest 3, and by having Claude and ChatGPT read state-of-the-art papers, I have built a training and rendering pipeline that gets >50 fps for large indoor scenes on the Quest 3. I started with an (AI-driven) port of a desktop renderer, which got less than 1 fps, but after integrating both training and rendering improvements from the research and adding a number of quality and performance fixes, it's actually usable. Applying research papers to a novel product used to take weeks or months of a person's time; it can now be measured in minutes and hours (and tokens).
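(To make the above concrete for readers unfamiliar with splatting: this is not the commenter's actual pipeline, just a generic sketch of one core step any Gaussian splat rasterizer performs — projecting each 3D Gaussian's covariance into a 2D screen-space covariance via the EWA splatting approximation. It assumes the mean and covariance are already in camera space and a simple pinhole camera with a single focal length.)

```python
import numpy as np

def project_covariance(cov3d, mean_cam, focal):
    """Approximate a 3D Gaussian's screen-space footprint.

    Linearizes the perspective projection at the Gaussian's mean
    (EWA splatting) and maps the 3x3 camera-space covariance to a
    2x2 screen-space covariance: cov2d = J @ cov3d @ J.T
    """
    x, y, z = mean_cam
    # Jacobian of the pinhole projection (u, v) = (focal*x/z, focal*y/z)
    # evaluated at the Gaussian's camera-space mean.
    J = np.array([
        [focal / z, 0.0,       -focal * x / z**2],
        [0.0,       focal / z, -focal * y / z**2],
    ])
    return J @ cov3d @ J.T
```

An isotropic Gaussian at depth z shrinks quadratically in projected area as z grows, which is why far-away splats rasterize to only a few pixels.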

guyomes 3 hours ago | parent | next [-]

You might be interested in a new experimental 3D scene learning and rendering approach called Radiant Foam [1], which is supposed to be better suited for GPUs that don't have hardware ray tracing acceleration.

[1]: https://radfoam.github.io/

paldepind2 3 hours ago | parent | prev | next [-]

Sorry if this is a basic question, but what's your workflow for feeding the papers into the LLM and getting the implementation done? The coding agents that I've used are not able to read PDFs, so I've been wondering how to do it.
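(One common workaround — my assumption, not something stated in the thread: convert the PDF to plain text first, e.g. with Poppler's `pdftotext` CLI or the PyMuPDF library, then feed the text to the agent, splitting it into overlapping chunks if it exceeds the context window. A minimal stdlib-only sketch of the chunking step, assuming the text has already been extracted:)

```python
def chunk_text(text, max_chars=12000, overlap=500):
    """Split extracted paper text into overlapping chunks.

    The overlap keeps sentences that straddle a chunk boundary
    visible in both chunks, so the model doesn't lose context
    mid-derivation.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back to create the overlap
    return chunks
```

Each chunk can then be pasted into the agent's context (or written to a scratch file the agent is told to read) with a short instruction like "summarize the method section before implementing it."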

echelon 4 hours ago | parent | prev [-]

What's your take on WorldLabs and Apple's splat models? Are there other open source alternatives?

How would editing work?

Do you think these will win over video world models like Genie?

Have you played with DiamondWM and other open source video world models?