guhidalg 5 hours ago:
But our brains do map high-dimensional input to dimensions low enough to be describable with text. You can represent a dog as a specific multi-dimensional array (raster image), but the word "dog" represents many kinds of images.
numpad0 3 hours ago:
Yeah, so, that's a lossy/ambiguous process. That represent_in_text(raster_image) -> "dog" doesn't contain a meaningful amount of the original data. The idea of LLM-aided CAD sounds to me like claiming a sufficiently long hash should contain the data it represents. That doesn't make a lot of sense to me.
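The lossy-mapping point can be sketched in a few lines of Python. The `represent_in_text` function here is a hypothetical stand-in (not a real API): many distinct rasters collapse to one label, so the label cannot reproduce any particular raster, just as a hash digest cannot be inverted to recover its input.

```python
import hashlib

def represent_in_text(raster_image: bytes) -> str:
    # Hypothetical, deliberately lossy: every dog raster maps to one word.
    return "dog"

img_a = bytes([0, 1, 2, 3])   # two different "images"
img_b = bytes([9, 8, 7, 6])

# Many-to-one: both inputs produce the same label, so the label alone
# cannot distinguish them, let alone reconstruct either one.
assert represent_in_text(img_a) == represent_in_text(img_b) == "dog"

# A hash is similar: a fixed-size digest derived from the input, not a
# container of it. Distinct inputs give distinct digests here, but no
# digest can be run backwards to recover its input.
h_a = hashlib.sha256(img_a).hexdigest()
h_b = hashlib.sha256(img_b).hexdigest()
assert h_a != h_b
```

The analogy is approximate (a cryptographic hash is collision-resistant while a text label is intentionally many-to-one), but both illustrate that a short derived description is not a lossless encoding of the original data.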