OgsyedIE (10 hours ago):
CAD and machining are different fields, true, but I see a lot of the same flaws that Adam Karvonen highlighted in his essay on LLM-aided machining a few months ago: https://adamkarvonen.github.io/machine_learning/2025/04/13/l... Does anyone familiar with what's under the hood know whether the latent space produced by most transformer architectures can only natively simulate 1-D reasoning, and has to kludge together a process for reasoning about geometry with more degrees of freedom?
CamperBob2 (4 hours ago), in reply:
Well, they couldn't generate 2D artwork if they weren't capable of working with multiple output dimensions. An interesting thing about transformers is that they are world-class at compressing 2D image data even when trained on nothing but text ( https://arxiv.org/abs/2309.10668 ). Whether that observation carries over to 3D content is two or three figures above my pay grade, though.
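For anyone who hasn't read that paper: the setup is essentially "prediction is compression". A model's summed negative log-likelihood over a byte stream is, up to rounding, the length an arithmetic coder driven by that model would produce, so a language model that predicts image bytes well compresses them well. The sketch below is just my own illustration of that accounting, not code from the paper; `ideal_code_length_bits` and the toy `adaptive_frequency_model` predictor (a stand-in for the LLM's next-token distribution) are names I made up so the example runs on its own.

    # Sketch of compression-as-prediction (cf. arXiv:2309.10668).
    # An LLM's predictive distribution would replace the toy predictor here.
    import math
    from collections import Counter
    from typing import Callable, Sequence

    def ideal_code_length_bits(
        data: bytes,
        predict_probs: Callable[[bytes], Sequence[float]],
    ) -> float:
        """Sum of -log2 P(next byte | prefix): the arithmetic-coding bound."""
        total = 0.0
        for i, byte in enumerate(data):
            probs = predict_probs(data[:i])  # distribution over 256 byte values
            total += -math.log2(probs[byte])
        return total

    def adaptive_frequency_model(prefix: bytes) -> list[float]:
        """Toy predictor: Laplace-smoothed byte frequencies of the prefix.
        A strong LLM supplies a much sharper distribution, which is exactly
        why it compresses image bytes so well."""
        counts = Counter(prefix)
        denom = len(prefix) + 256
        return [(counts.get(b, 0) + 1) / denom for b in range(256)]

    if __name__ == "__main__":
        sample = bytes([128] * 900 + list(range(100)))  # stand-in for image rows
        bits = ideal_code_length_bits(sample, adaptive_frequency_model)
        print(f"{len(sample)} raw bytes -> ~{bits / 8:.1f} bytes at the model's NLL")

The point is that nothing in that accounting cares whether the bytes are text, a 2D image, or a 3D mesh; what matters is how well the model predicts the next symbol given everything before it.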