| ▲ | in-silico 6 hours ago |
| Everyone here seems too caught up in the idea that Genie is the product, and that its purpose is to be a video game, movie, or VR environment. That is not the goal. The purpose of world models like Genie is to be the "imagination" of next-generation AI and robotics systems: a way for them to simulate the outcomes of potential actions in order to inform decisions. |
|
| ▲ | benlivengood 5 hours ago | parent | next [-] |
| Agreed; everyone complained that LLMs have no world model, so here we go. The next logical step is to backfill the weights with encoded video from the real world at some reasonable frame rate to ground the imagination, then branch the inference on possible interventions (actions) in the near future of the simulation, throw the results into a goal evaluator, and send the winning action-predictions to motors. Getting the timing right will probably require a bit more work than literally gluing them together, but probably not much more. |
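Roughly, the loop would look something like this toy, runnable sketch; every function and matrix here is a made-up stand-in, not Genie's or any real robot stack's API:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(16, 16)) * 0.1   # stand-in latent dynamics
    B = rng.normal(size=(16, 4)) * 0.1    # stand-in action effect
    goal = rng.normal(size=16)            # stand-in goal direction

    def encode_frames(frames):            # stand-in for a learned video encoder
        return frames.mean(axis=0)

    def imagine_step(latent, action):     # stand-in for the world model's step
        return np.tanh(A @ latent + B @ action)

    def goal_value(latent):               # stand-in goal evaluator
        return float(goal @ latent)

    def pick_action(frames, candidate_actions, horizon=8):
        state = encode_frames(frames)     # ground on (encoded) real video
        scores = []
        for action in candidate_actions:  # branch on possible interventions
            sim = state
            for _ in range(horizon):      # imagine the near future
                sim = imagine_step(sim, action)
            scores.append(goal_value(sim))
        return candidate_actions[int(np.argmax(scores))]  # winner goes to motors

    frames = rng.normal(size=(4, 16))     # pretend these are encoded camera frames
    actions = [rng.normal(size=4) for _ in range(5)]
    print(pick_action(frames, actions))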
| |
|
| ▲ | avaer 5 hours ago | parent | prev | next [-] |
| Soft disagree; if you wanted imagination you don't need to make a video model. You probably don't need to decode the latents at all. That seems pretty far from information-theoretic optimality, the kind that you want in a good+fast AI model making decisions. The whole reason for LLMs inferencing human-processable text, and "world models" inferencing human-interactive video, is precisely so that humans can connect in and debug the thing. I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs. I do agree that the entertainment implications are kind of the research exhaust of the end goal. |
| |
| ▲ | NitpickLawyer 5 hours ago | parent | next [-] | | > I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs. Yeah, I think this is what the person above was saying as well. This is what people at google have said already (a few podcasts on gdm's channel, hosted by Hannah Fry). They have their "agents" play in genie-powered environments. So one system "creates" the environment for the task. Say "place the ball in the basket". Genie creates an env with a ball and a basket, and the other agent learns to wasd its way around, pick up the ball and wasd to the basket, and so on. Pretty powerful combo if you have enough compute to throw at it. | |
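For flavor, the setup reduces to something like this runnable toy, where make_env stands in for the environment generator and a tabular Q-learning agent learns the pick-and-place task inside it; everything here is invented for illustration and is nothing like the real system:

    import random
    from collections import defaultdict

    SIZE, ACTIONS = 5, ["left", "right", "grab", "drop"]

    def make_env():
        # generator stand-in: random ball/basket positions on a 1-D line
        return {"agent": 0, "ball": random.randrange(SIZE),
                "basket": random.randrange(SIZE), "carrying": False}

    def step(env, action):
        if action == "left":
            env["agent"] = max(0, env["agent"] - 1)
        elif action == "right":
            env["agent"] = min(SIZE - 1, env["agent"] + 1)
        elif action == "grab" and env["agent"] == env["ball"]:
            env["carrying"] = True
        elif action == "drop" and env["carrying"]:
            env["carrying"] = False
            env["ball"] = env["agent"]
            if env["agent"] == env["basket"]:
                return env, 1.0, True          # ball landed in the basket
        return env, 0.0, False

    def obs(env):
        return (env["agent"], env["ball"], env["basket"], env["carrying"])

    Q = defaultdict(float)
    for episode in range(5000):                # agent practices in generated envs
        env = make_env()
        for _ in range(40):                    # step budget per episode
            s = obs(env)
            a = (random.choice(ACTIONS) if random.random() < 0.1
                 else max(ACTIONS, key=lambda act: Q[s, act]))
            env, r, done = step(env, a)
            s2 = obs(env)
            target = r if done else r + 0.9 * max(Q[s2, b] for b in ACTIONS)
            Q[s, a] += 0.1 * (target - Q[s, a])
            if done:
                break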
| ▲ | in-silico 5 hours ago | parent | prev | next [-] | | Sufficiently informative latents can be decoded into video, so when you simulate a stream of them you can render the rollout as video. If you were trying to make an impressive demo for the public, you probably would, even if the real applications don't require it. Converting the latents to pixel space also makes them compatible with existing image/video models and multimodal LLMs, which (without specialized training) can't interpret the latents directly. | |
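As a trivial sketch of that interoperability point, decoded frames are just arrays any existing vision stack can consume; the decoder and downstream "vision model" below are stubs made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64 * 64 * 3, 32)) * 0.05    # stub "decoder" weights

    def decode(latent):                              # latent -> HxWxC pixels in [0, 1]
        return (np.tanh(W @ latent).reshape(64, 64, 3) + 1) / 2

    def vision_model(frames):                        # stub multimodal consumer
        return f"saw {len(frames)} frames, mean brightness {np.mean(frames):.2f}"

    latents = [rng.normal(size=32) for _ in range(8)]    # simulated latent stream
    frames = [decode(z) for z in latents]                # pixel space: model-agnostic
    print(vision_model(frames))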
| ▲ | SequoiaHope 5 hours ago | parent | prev | next [-] | | Didn’t the original world models paper do some training in latent space? (Edit: yes[1]) I think robots imagining the next step (in latent space) will be useful. It’s useful for people. A great way to validate that a robot is properly imagining the future is to make that latent space renderable in pixels. [1] “By using features extracted
from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.” https://arxiv.org/abs/1803.10122 | |
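A toy version of "train entirely inside the dream" looks something like the sketch below; the latent dynamics, the reward, and the random-search "training" are all invented stand-ins, not the paper's actual setup:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8)) * 0.2            # stand-in learned latent dynamics
    B = rng.normal(size=(8, 2)) * 0.2
    target = rng.normal(size=8)                  # "task" defined in latent space

    def dream_rollout(policy_w, steps=20):
        z = rng.normal(size=8)                   # imagined start state
        total = 0.0
        for _ in range(steps):
            a = np.tanh(policy_w @ z)            # very compact linear policy
            z = np.tanh(A @ z + B @ a)           # step the dream, never the real env
            total -= np.linalg.norm(z - target)  # reward: get close to the target latent
        return total

    best_w, best_r = None, -np.inf
    for _ in range(300):                         # crude random-search "training"
        w = rng.normal(size=(2, 8))
        r = np.mean([dream_rollout(w) for _ in range(3)])
        if r > best_r:
            best_w, best_r = w, r
    # best_w would then be transferred back to the actual environment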
| ▲ | sailingparrot 3 hours ago | parent | prev | next [-] | | > you don't need to make a video model. You probably don't need to decode the latents at all. If you don't decode, how do you judge quality in a world where generative metrics are famously hard and imprecise?
And how do you integrate RLHF/RLAIF into your pipeline, which you can no longer skip if you want SotA, if you don't decode? Just look at the companies that are explicitly aiming for robotics/simulation: they *are* doing video models. | |
| ▲ | abraxas 3 hours ago | parent | prev | next [-] | | > if you wanted imagination you don't need to make a video model. You probably don't need to decode the latents at all. Soft disagree. What is the purpose of that imagination if not to map it to actual real-world outcomes? To compare those outcomes to the real world, and possibly backpropagate through them, you'll need video frames. | |
| ▲ | ACCount37 5 hours ago | parent | prev | next [-] | | If you train a video model, you by necessity train a world model for 3D worlds. Which can then be reused in robotics, potentially. I do wonder if I can frankenstein together a passable VLA using pretrained LTX-2 as a base. | |
| ▲ | thegabriele 5 hours ago | parent | prev | next [-] | | Sure, but at some point you want humans in the loop, I guess? | |
| ▲ | koolala 5 hours ago | parent | prev | next [-] | | What model do you need, then, if you want real-time 3D understanding of how realities work? Or are you focusing on "imagination" in a different, more abstract way? | |
| ▲ | empath75 4 hours ago | parent | prev [-] | | I am not sure we are at the "efficiency" phase of this. Even if you just wire this output (or probably multiples running different counterfactuals) into a multimodal LLM that interprets the video and uses it to make decisions, you have something new. |
|
|
| ▲ | rzmmm an hour ago | parent | prev | next [-] |
| I feel that this is too costly for that kind of usage. Probably a quite different architecture is needed for robotics. |
|
| ▲ | oceanplexian 3 hours ago | parent | prev | next [-] |
| Yeah and the goal of Instagram was to share quirky pictures you took with your friends. Now it’s a platform for influencers and brainrot; arguably it has done more damage than drugs to younger generations. As soon as this thing is hooked up to VR and reaches a tipping point with the general public we all know exactly what is going to happen. The creation of the most profitable, addictive and ultimately dystopian technology Big Tech has ever come up with. |
| |
| ▲ | ceejayoz 2 hours ago | parent [-] | | The good news is we’ll finally have an answer for the Fermi Paradox. | | |
|
|
| ▲ | pizzafeelsright 5 hours ago | parent | prev | next [-] |
| Environment mapping to AI-generated alternative outcomes is the holodeck. I prefer real danger, as living in the simulation is derivative. |
|
| ▲ | whytaka 4 hours ago | parent | prev | next [-] |
| I think this is the key component of developing subjective experience. |
|
| ▲ | reactordev 5 hours ago | parent | prev | next [-] |
| Still cool though… |
| |
|
| ▲ | echelon 5 hours ago | parent | prev | next [-] |
| Whoa, whoa, whoa. That's just one angle. Please don't bin that as the only use case for "world models"! First of all, there are a variety of different types of world models: simulation, video, static asset, etc. It's a loaded term, just as the use cases are widespread.
There are world models you can play in your browser, inferred entirely by your CPU:
https://madebyoll.in/posts/game_emulation_via_dnn/ (my favorite, from 2022!)
https://madebyoll.in/posts/world_emulation_via_dnn/ (updated, in 3D)
There are static-asset-generating world models, like WorldLabs' Marble. These are useful for video games, previz, and filmmaking.
https://marble.worldlabs.ai/
I wrote open source software to leverage Marble for filmmaking (I'm a filmmaker, and this tech is extremely useful for scene consistency):
https://www.youtube.com/watch?v=wJCJYdGdpHg
https://github.com/storytold/artcraft
There are playable video-oriented models, many of which are open source and will run on your 3080 and above:
https://diamond-wm.github.io/
https://github.com/Robbyant/lingbot-world
There are things termed "world models" that really shouldn't be:
https://github.com/Tencent-Hunyuan/HunyuanWorld-1.0
There are robotics-training-oriented world models:
https://github.com/leggedrobotics/robotic_world_model
Genie is not strictly robotics-oriented. |
| |
| ▲ | in-silico 5 hours ago | parent [-] | | The entertainment industry, as big as it is, just doesn't have as much profit potential as robots and AI agents that can replace human labor. Just look at how Nvidia has pivoted from gaming and rendering to AI. The other examples you've given are neat, but for players like Google they are mostly an afterthought. | | |
| ▲ | echelon 5 hours ago | parent [-] | |
Robotics: $88B TAM
Gaming: $350B TAM
All media and entertainment: $3T TAM
Manufacturing: $5T TAM
Roughly the same story. This tech is going to revolutionize "films" and gaming. The entire entertainment industry is going to transform around it. When people aren't buying physical things, they're distracting themselves with media; humans spend more time and money on that than anything else, machines or otherwise. AI impact on manufacturing will be huge. AI impact on media and entertainment will be huge. And these world models can be developed in a way that builds exposure and competency in both domains. edit: You can argue that manufacturing will boom when we have robotics that generalize. But you can also argue that entertainment will boom when we have holodecks people can step into. | | |
| ▲ | in-silico 5 hours ago | parent | next [-] | | The current robotics industry is $88B. You have to take into account the potential future industry of general-purpose robots that replace a big chunk of blue-collar work. Robots are also just one example: a hypothetically powerful AI agent (which might also use a world model) that controls a mouse and keyboard could replace a big chunk of white-collar work too. Those are worth tens of trillions of dollars. You can argue about whether they are actually possible, but the people backing this tech think they are. | |
| ▲ | dingnuts 4 hours ago | parent | prev [-] | | [dead] |
|
|
|
|
| ▲ | dyauspitr 5 hours ago | parent | prev | next [-] |
| That’s part of it but if you could actually pull out 3D models from these worlds, it would massively speed up game development. |
| |
| ▲ | avaer 5 hours ago | parent [-] | | You already can, check out Marble/World Labs, Meshy, and others. It's not really as much of a boon as you'd think though, since throwing together a 3D model is not the bottleneck to making a sellable video game. You've had model marketplaces for a long time now. | | |
| ▲ | echelon 5 hours ago | parent [-] | | > It's not really as much of a boon as you'd think though It is for filmmaking! They're perfect for constructing consistent sets and blocking out how your actors and props are positioned. You can freely position the camera, control the depth of field, and then storyboard your entire scene I2V. Example of doing this with Marble: https://www.youtube.com/watch?v=wJCJYdGdpHg | | |
| ▲ | avaer 3 hours ago | parent [-] | | This I definitely agree with; before, you had to massage the I2I, and now you can just drag the camera. Marble definitely changes the game if the game is "move the camera", though most people would not consider that a game (but hey, there's probably a good game idea in there!). |
|
|
|
|
| ▲ | cyanydeez 4 hours ago | parent | prev | next [-] |
| Like LLMs, though: do you really think a simulation will cover all the corner cases robots/AI need to know about, or will it be largely the same problem? They'll be just good enough to fool the engineers and make the business ops drool, they'll be put into production, and within a year or two we'll see stories about robots crushing people's hands, stepping in drains and falling over, or falling off roofs because of some bizarre mismatch between training and reality. So, like, it's very important to understand the lineage of the training and not just accept the "this is it". |
|
| ▲ | slashdave 5 hours ago | parent | prev [-] |
| This is a video model, not a world model. Start learning on this, and cascading errors will inevitably creep into all downstream products. You cannot invent data. |
| |
| ▲ | kingstnap 3 hours ago | parent | next [-] | | Related: https://arxiv.org/abs/2601.03220 This is a paper that recently got popular-ish and discusses the counter to your viewpoint. > Paradox 1: Information cannot be increased by deterministic processes. For both Shannon entropy and Kolmogorov complexity, deterministic transformations cannot meaningfully increase the information content of an object. And yet, we use pseudorandom number generators to produce randomness, synthetic data improves model capabilities, mathematicians can derive new knowledge by reasoning from axioms without external information, dynamical systems produce emergent phenomena, and self-play loops like AlphaZero learn sophisticated strategies from games. In theory, yes: something like the rules of chess should be enough for the mythical perfect reasoners that show up in math riddles to deduce everything that *can* be known about the game, and similarly a math textbook is no more interesting than a book with the words true and false and a bunch of true => true statements in it. But I don't think this is the case in practice. There is something about rolling things out and leveraging the results you see that seems to have useful information in it, even if the rollout is fully characterizable. | |
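The PRNG case is easy to make concrete: a fully deterministic generator, pinned down by a tiny seed, still yields a practically useful Monte Carlo estimate (toy example, nothing to do with the paper's setup):

    import random

    random.seed(42)                      # everything below is now deterministic
    inside, n = 0, 1_000_000
    for _ in range(n):
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
    print(4 * inside / n)                # ~3.14, squeezed out of a handful of seed bits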
| ▲ | slashdave 2 hours ago | parent [-] | | Interesting paper, thanks! But the authors escape the three paradoxes they present by introducing training limits (compute, factorization, distribution), which is kind of a different problem here. What I object to are the "scaling maximalists" who believe that if enough training data were available, complicated concepts like a world model would just spontaneously emerge during training. Piling on synthetic data from a general-purpose generative model as a solution to the lack of training data is even more untenable. |
| |
| ▲ | whytaka 4 hours ago | parent | prev | next [-] | | They have a feature where you can take a photo and create a world from that. If instead of a photo you have a video feed, this is one step closer to implementing subjective experience. | |
| ▲ | 2bitencryption 5 hours ago | parent | prev [-] | | Given that the video is fully interactive and lets you move around (in a “world” if you will) I don’t think it’s a stretch to call it a world model. It must have at least some notion of physics, cause and effect, etc etc in order to achieve what it does. | | |
|