jappgar 4 days ago

LLMs are language models, and meshes aren't language. Yes, this can generate Python that builds simple objects, but that's not how anyone actually creates beautiful 3D art, just as no one handwrites SVG files to create vector art. LLMs alone will never make visual art. They can provide you an interface to other models, but that's not what this is.
rozab 4 days ago

This is of course true, but have you ever seen Inigo Quilez's SDF renderings? It's certainly not scalable, but it sure is interesting.
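For anyone who hasn't come across that style of work: the entire scene is a signed-distance function, and a raymarcher steps each ray forward by the current distance until it lands on the implicit surface. A toy sketch in plain Python (sphere SDF plus sphere tracing; the names and camera placement are my own illustration, not Quilez's code):

    import math

    def sdf_sphere(p, radius=1.0):
        # Signed distance from point p to a sphere of given radius at the origin.
        return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

    def raymarch(origin, direction, max_steps=64, eps=1e-3):
        # Sphere tracing: advance along the ray by the current distance to the surface.
        t = 0.0
        for _ in range(max_steps):
            p = [origin[i] + t * direction[i] for i in range(3)]
            d = sdf_sphere(p)
            if d < eps:
                return t   # hit the surface at ray parameter t
            t += d
        return None        # ray missed the scene

    # Camera 3 units back on -z looking along +z: the ray hits the sphere at t = 2.0.
    print(raymarch([0.0, 0.0, -3.0], [0.0, 0.0, 1.0]))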
margalabargala 4 days ago

That's fine. I'm happy to define "visual art" as things LLMs can't do, and use LLMs only for the 3D modelling tasks that are not "visual art". Such tasks may not be "making visual art", but that doesn't mean they aren't useful.
reactordev 4 days ago

I know that; I was making a statement about how you can. Not exactly sure what your point is. If an LLM can take an idea and spit out words, it can spit out instructions (just as we can with code) to generate meshes, or boids, or point clouds, or whatever. Secondary stages would refine that into something usable, and the artist would come in to refine, texture, bake, possibly animate, and export. In fact, this paper is exactly that: words as input, code to use with Blender as output. We really just need a headless Blender to spit it out as glTF and it's good to go to the second stage.
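A minimal sketch of that headless step, assuming Blender's stock Python API and its bundled glTF exporter; the torus is just a stand-in for whatever geometry code the model actually emits, and the output path is arbitrary. Run it as blender --background --python make_gltf.py:

    # Headless Blender: build a placeholder mesh and export it as glTF.
    import bpy

    # Start from an empty scene so only the generated object ends up in the file.
    bpy.ops.wm.read_factory_settings(use_empty=True)

    # Whatever mesh-building code the model emitted would go here.
    bpy.ops.mesh.primitive_torus_add(major_radius=1.0, minor_radius=0.3)

    # Export as binary glTF (.glb), ready for the second stage of the pipeline.
    bpy.ops.export_scene.gltf(filepath="/tmp/out.glb")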