MillionOClock 7 hours ago

A hybrid approach could maybe work: keep a more or less standard game engine for coherence, and use this kind of generative AI as a short-term rendering and physics-sim engine.

elfly 6 hours ago

I've thought about the same idea, but it probably gets very complicated.

Let's say you simulate a long museum hallway with some vases in it. Who holds what state? The game engine has the geometry, but once the player pushes a vase and moves it, the AI needs to inform the engine of the change. Then, to draw the next frame, it has to read from the engine first, update the position in the video feed, and feed the result back to the engine.
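To make the handshake concrete, here's a minimal sketch of that read/update/write-back loop. All names (Engine, GenerativeRenderer, vase_1) are hypothetical stand-ins, not any real engine or model API:

```python
class Engine:
    """Authoritative world state: entity id -> position."""
    def __init__(self):
        self.positions = {"vase_1": (10.0, 0.0)}

    def read(self, entity):
        return self.positions[entity]

    def write(self, entity, pos):
        self.positions[entity] = pos


class GenerativeRenderer:
    """Stand-in for the video model: 'renders' a frame and may move things."""
    def render_frame(self, pos, push=(0.0, 0.0)):
        # The model decides the vase slid a bit when pushed.
        return (pos[0] + push[0], pos[1] + push[1])


engine = Engine()
renderer = GenerativeRenderer()

# One frame: read from the engine, let the model update the visual state,
# then write the result back so both sides agree again.
pos = engine.read("vase_1")
new_pos = renderer.render_frame(pos, push=(0.5, 0.0))
engine.write("vase_1", new_pos)
print(engine.read("vase_1"))  # (10.5, 0.0)
```

Even in this toy version you can see the problem: the write-back step is where the two representations can silently disagree.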

What happens if the state diverges? Who wins? If the AI wins, then... why have the engine at all?

It's possible, but then who controls physics: the engine or the AI? The AI could have a different understanding of the details of the vase. What happens if the vase has water inside? Who simulates that? What happens if the AI decides to break the vase? Who simulates that?

I don't doubt that some sort of scratchpad to keep track of in-game state would be useful, but I suspect the researchers expect the AI to keep track of everything in its own "head", because that's the most flexible solution.
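The scratchpad idea could be as simple as a store of facts the model has already committed to, which later frames get conditioned on for consistency. A hypothetical sketch (names and structure are made up for illustration):

```python
class Scratchpad:
    """Store of facts the model has committed to, keyed by entity."""
    def __init__(self):
        self.facts = {}

    def note(self, entity, key, value):
        # Record a fact about an entity, e.g. "the vase is broken".
        self.facts.setdefault(entity, {})[key] = value

    def recall(self, entity):
        # Everything previously committed about this entity, for the
        # next frame's conditioning context.
        return self.facts.get(entity, {})


pad = Scratchpad()
pad.note("vase_1", "state", "broken")          # the model broke the vase
pad.note("vase_1", "contents", "water spilled")

# Before generating the next frame, include this in the model's context:
print(pad.recall("vase_1"))
# {'state': 'broken', 'contents': 'water spilled'}
```

This only records facts, though; it doesn't resolve who decides them, which is the divergence problem again.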

MillionOClock 4 hours ago

Then maybe the engine should be less about truly simulating the 3D world and more about preserving consistency: providing memory and saving context rather than simulating much beyond higher-level concerns (at which point we might wonder whether it couldn't be part of the model directly somehow). But writing these lines, I realize there would probably still be many edge cases exactly like the ones you're describing...