| ▲ | ValdikSS 5 hours ago |
| That's why LLMs will eventually be used only for the initial interaction with the user in their language, to prepare the data for a specialized model. Imagine face recognition working like a text chat, where the PC grabs a frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...". |
|
| ▲ | FeepingCreature an hour ago | parent | next [-] |
| That's actually how vision language models already work, pretty much. |
| ▲ | stingraycharles an hour ago | parent [-] | | Huh? The images are tokenized the same way language is, and everything is fed into one single model, not multiple smaller expert models. The image gets split into small patches (e.g. 4x4 pixels), each patch is mapped to a token, similar to how text is broken up into tokens, and the whole sequence goes into a single model. |
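The patch-tokenization idea above can be sketched in a few lines. This is an illustrative toy, not any real model's pipeline: the 4x4 patch size only mirrors the comment's example (real vision transformers typically use larger patches), and the projection matrix stands in for learned weights.

```python
import numpy as np

def patchify(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Split an (H, W, C) image into flattened (patch*patch*C) vectors,
    one per patch -- the image-side analogue of text tokenization."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))      # toy 16x16 RGB image
tokens = patchify(img)             # 16 patch tokens, 48 dims each
proj = rng.random((48, 64))        # stand-in for a learned projection
embeddings = tokens @ proj         # (16, 64): same width as text embeddings
print(embeddings.shape)
```

After this projection, the patch embeddings sit in the same sequence as the text-token embeddings and are processed by one transformer, which is the sense in which it is "one single model".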
|
|
| ▲ | stingraycharles an hour ago | parent | prev [-] |
| Do you know that MoE is a thing? |
| ▲ | jampekka an hour ago | parent [-] | | The experts in MoEs aren't specialized in any meaningful task sense. At the level of what we would think of as tasks, experts in MoEs are selected essentially arbitrarily, per token and per block. |
| ▲ | stingraycharles 41 minutes ago | parent [-] | | It’s unsupervised, yes, but “aren’t specialized in any meaningful task sense” is incorrect; specialization is the whole point. It’s just not specialization in the sense of “this is a legal expert, this is a software developer”. |
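The per-token routing both commenters describe can be sketched as below. This is a minimal illustration, not any specific model's implementation: the router and experts are random stand-ins for learned weights, and top-1 routing is one common choice among several.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 6

tokens = rng.random((n_tokens, d))
router = rng.random((d, n_experts))       # stand-in for learned routing weights
experts = rng.random((n_experts, d, d))   # one weight matrix per expert

scores = tokens @ router                  # (n_tokens, n_experts) routing scores
choice = scores.argmax(axis=1)            # top-1 expert, chosen per token

# Each token is processed only by its chosen expert; the decision is made
# independently at every token (and, in a full model, at every MoE block).
out = np.stack([tokens[i] @ experts[choice[i]] for i in range(n_tokens)])
print(choice, out.shape)
```

Nothing in the routing refers to a human-level "task": whatever specialization the experts acquire emerges from training the router and experts jointly, which is why it does not line up with categories like "legal expert".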
|
|