| ▲ | divbzero 9 hours ago |
| The most avid members of the Cartographers Guilds had even proposed a Map of the Empire several times larger than the Empire itself to depict microscopic details that would otherwise be invisible. Such proposals were considered the peak of academic excess after the Study of Cartography fell out of favor. |
|
| ▲ | bobson381 8 hours ago | parent | next [-] |
| I do sometimes wonder if we will get "detailed enough" vector embeddings in LLMs to bring the grain of resolution down below human perception - like having enough bits to fully capture what's on tape in the audio world. Maybe this is never possible, and (I hope) some details are unresolvable, but it will be interesting to see. |
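A rough sketch of the audio side of that analogy (standard uniform-quantization math, not anything from the thread): the theoretical SNR of N-bit quantization is about 6.02*N + 1.76 dB, so 16 bits already clears ~98 dB, beyond what most listening conditions can resolve.

    # Theoretical SNR of N-bit uniform quantization: ~6.02*N + 1.76 dB.
    # Illustrates how a finite bit depth can still sit below human perception.
    for bits in (8, 16, 24):
        snr_db = 6.02 * bits + 1.76
        print(f"{bits:2d}-bit quantization ~ {snr_db:.1f} dB theoretical SNR")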
| ▲ | storystarling 2 hours ago | parent | next [-] |
| I suspect the curse of dimensionality makes this an optimization dead end. You hit prohibitive latency limits on retrieval long before the resolution approaches human perception. Even with current dimensions, the trade-off between index size and query speed is already the main constraint for production systems. |
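A minimal sketch of the distance-concentration effect behind that intuition (synthetic Gaussian data and numpy only, not any particular production index): as dimensionality grows, the gap between the nearest and farthest neighbor shrinks relative to the mean distance, which is what makes high-dimensional retrieval progressively harder.

    # Relative contrast (max - min) / mean of distances from a random query
    # to random points. It shrinks as dimension grows, one face of the
    # curse of dimensionality that strains ANN indexes.
    import numpy as np

    rng = np.random.default_rng(0)
    n_points = 2000

    for dim in (2, 16, 128, 1024):
        data = rng.normal(size=(n_points, dim))
        query = rng.normal(size=dim)
        dists = np.linalg.norm(data - query, axis=1)
        contrast = (dists.max() - dists.min()) / dists.mean()
        print(f"dim={dim:5d}  relative contrast={contrast:.3f}")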
| ▲ | pixl97 8 hours ago | parent | prev [-] |
| LLMs are already used in signal processing, so the idea has been explored. Simply put, anything that can be encoded is a language, so you just need sensors to capture and classify the incoming data and build that into a model. The real question is post-training the model to behave correctly, since these domains are far less explored than things at the human scale. RLHF may be a poor choice because the models may pick up on real behaviors that humans can't perceive, and humans will discount them as incorrect. |
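A minimal sketch of the "anything encodable is a language" idea (the helper name and uniform quantization are illustrative choices, not a described system): map a raw sensor stream onto a small discrete vocabulary so a standard next-token model could be trained on it.

    # Quantize a 1-D sensor signal into integer token ids.
    import numpy as np

    def signal_to_tokens(signal, n_bins=256):
        # Uniform quantization: normalize to [0, 1], then bucket into ids.
        lo, hi = signal.min(), signal.max()
        scaled = (signal - lo) / (hi - lo + 1e-12)
        return np.clip((scaled * n_bins).astype(int), 0, n_bins - 1)

    # Fake "sensor" data: a noisy sine wave standing in for any captured channel.
    t = np.linspace(0, 1, 1000)
    reading = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    tokens = signal_to_tokens(reading)
    print(tokens[:20])   # token ids ready to feed a sequence model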
|
|
| ▲ | anthk 5 hours ago | parent | prev [-] |
| That will be doable someday with computers :) |