reactordev 4 days ago
I know that; I was making a statement about how you can. Not exactly sure what your point is. If an LLM can take an idea and spit out words, it can spit out instructions (just like we can with code) to generate meshes, or boids, or point clouds, or whatever. Secondary stages would refine that into something usable, and the artist would come in to refine, texture, bake, possibly animate, and export. In fact, this paper is exactly that: words as input, code to use with Blender as output. We really just need a headless Blender to spit it out as a glTF and it's good to go to the second stage. A rough sketch of that last step is below.
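To be concrete, here is a minimal sketch of the headless-Blender export step, assuming Blender 2.8+ with its built-in glTF exporter. The placeholder cube stands in for whatever geometry the LLM-generated code would actually build; the file path is made up for illustration. Run it with `blender --background --python export_gltf.py`:

    # export_gltf.py -- sketch of the "headless Blender -> glTF" hand-off.
    import bpy

    # Stand-in for the LLM-generated scene construction code.
    bpy.ops.mesh.primitive_cube_add(size=2.0)

    # Export the whole scene as a binary glTF, ready for the second stage
    # (artist refinement, texturing, baking, animation).
    bpy.ops.export_scene.gltf(
        filepath="/tmp/out.glb",
        export_format="GLB",
    )

Because `--background` runs without a UI, this drops straight into a build pipeline: the LLM emits the scene-construction code, a script like this tacks on the export call, and the artist picks up the resulting .glb.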
therouwboat 4 days ago | parent
If you have an artist, can't you just talk to her about what you want, and then she makes the model and all the rest of it? I don't really understand what you gain if you pay for an LLM, make a model with it, and then hand it to an artist.