| ▲ | cyberrock 8 hours ago | |
> If a client asks, "Can you make the handle slightly longer?", on the human model, I can select a loop of polygons and pull. The edit is done in 10 seconds.

> On the AI model, I cannot. There are no loops. I would have to sculpt it like clay, destroying the texture in the process. It is actually faster to rebuild the entire model from scratch than to try and fix the AI's topology.

To play devil's advocate for a second, it seems like you didn't give the AI any requirement on how the handle should be constructed, then got frustrated that the result doesn't conform to unspoken norms. If I made you this model by just starting with a sphere and sculpting it in ZBrush, you'd hit the same problem.

On the other hand, I would expect the AI could perform the task if you just elongated the handle in the reference image. The same procedure would probably work if the client wanted to add cat ears to the top to make a Mario Tennis clone game, while that might be a whole new commission for human modelers.

Now, would the material mapping still be poor, and would it be a questionable use of electricity? Guilty on both counts, but it's exciting to anyone who just wants to make 3D printed items or low-fidelity video games/mods.
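To make the "select a loop and pull" point concrete, here is a toy sketch (all names hypothetical, not any real tool's API) of why loop-based topology makes that edit cheap: a hand-modeled handle is effectively a stack of ordered vertex rings, so elongating it is just translating every ring past a cut point. Tools like Blender do the equivalent interactively with edge-loop select.

```python
def elongate_handle(rings, from_ring, dz):
    """Shift every ring at index >= from_ring by dz along z.

    `rings` is a list of edge loops, each a list of (x, y, z) vertices.
    """
    return [
        [(x, y, z + dz) if i >= from_ring else (x, y, z)
         for (x, y, z) in ring]
        for i, ring in enumerate(rings)
    ]

# Two square rings stacked along z: a stub of a handle.
rings = [
    [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
]

# Pull the top loop up by 0.5: a local, 10-second style edit.
longer = elongate_handle(rings, from_ring=1, dz=0.5)
```

The bottom ring, and any texture seams tied to vertex order, are untouched. On the unstructured triangle soup an AI generator emits, there is no "ring" to select, so no such local edit exists.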
| ▲ | rcxdude 2 hours ago | parent [-] | |
It would also likely run into the non-determinism issue that a lot of generative AI has: you can edit the input data, but you won't get the same output with only the change you wanted; you get a completely new output. For images the tools are getting better, with dedicated editing models, but they're still difficult to control precisely, and there's no analogue for 3D model generation at the moment.
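A toy sketch of that failure mode (purely illustrative; real diffusion and 3D models differ in detail but share the "global reseed" behavior): if the generator derives everything from the whole prompt, a one-word edit changes the effective seed, and every output value changes, not just the part you edited.

```python
import hashlib
import random

def generate(prompt, n=5):
    """Fake 'generator': output is fully determined by a hash of the prompt."""
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 3) for _ in range(n)]

a = generate("a mug with a short handle")
b = generate("a mug with a long handle")   # one-word edit
c = generate("a mug with a short handle")  # exact repeat
```

Repeating the same prompt is reproducible (`a == c`), but the small edit reseeded everything, so `b` shares nothing with `a`. Editing models try to constrain the output toward the original; nothing equivalent exists yet for 3D generation.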