uniqueuid 10 hours ago

There are many more weird and complex architectures in models for video understanding.

For example, beyond video -> text -> LLM and video -> embeddings fed directly into the LLM, you can also have an LLM controlling/guiding a separate video extractor.
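Very roughly, the three shapes look like this (toy Python; every function below is an invented placeholder for illustration, not any real library's API):

    # All stubs below are hypothetical placeholders, not real APIs.
    def caption_video(frames): return "a person opens a door"   # video -> text
    def embed_video(frames): return [0.1] * 4096                # video -> embedding
    def llm_generate(prompt, embeddings=None): return "answer"  # the LLM itself

    frames = ["frame0", "frame1"]

    # 1. video -> text -> LLM: the LLM only ever sees a caption/transcript.
    a1 = llm_generate("What happens? Context: " + caption_video(frames))

    # 2. video -> embedding in LLM: visual tokens enter the LLM's input sequence.
    a2 = llm_generate("What happens?", embeddings=embed_video(frames))

    # 3. LLM as controller: the model decides what the extractor looks at next.
    plan = llm_generate("Which segment should be inspected next?")
    evidence = embed_video(frames)  # extractor run on the segment the LLM picked
    a3 = llm_generate("What happens?", embeddings=evidence)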

See this paper for a pretty thorough overview.

Tang, Y., Bi, J., Xu, S., Song, L., Liang, S., Wang, T., Zhang, D., An, J., Lin, J., Zhu, R., Vosoughi, A., Huang, C., Zhang, Z., Liu, P., Feng, M., Zheng, F., Zhang, J., Luo, P., Luo, J., & Xu, C. (2025). Video Understanding with Large Language Models: A Survey (No. arXiv:2312.17432). arXiv. https://doi.org/10.48550/arXiv.2312.17432

adastra22 10 hours ago | parent

Sure, but all of these find some way of mapping inputs (whatever the medium) into state-space concepts. That's the core of the transformer architecture.
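In toy numpy terms (all shapes and projections invented for illustration), that shared mapping looks like:

    import numpy as np

    d_model = 8  # dimensionality of the shared state space (toy value)

    def encode_text(token_ids):
        # token ids -> vectors in the shared space (random toy embedding table)
        table = np.random.randn(1000, d_model)
        return table[token_ids]

    def encode_frames(frame_features):
        # per-frame features -> vectors in the *same* space via a projection
        proj = np.random.randn(frame_features.shape[-1], d_model)
        return frame_features @ proj

    text_states = encode_text(np.array([5, 42, 7]))        # (3, d_model)
    video_states = encode_frames(np.random.randn(4, 16))   # (4, d_model)

    # Once everything lives in state space, the transformer blocks don't care
    # which medium a token came from:
    sequence = np.concatenate([text_states, video_states]) # (7, d_model)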

ludwigschubert 9 hours ago | parent

The user you originally replied to specifically mentioned:

> without going to text first

adastra22 9 hours ago | parent

Yeah, and that's my understanding. Nothing goes video -> text, or audio -> text, or even text -> text without first going through state space. That's where the core of the transformer architecture is.
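Toy illustration of that last point (numpy, random weights, a trivial stand-in for the attention/MLP stack): even plain text generation is really text -> state space -> text.

    import numpy as np

    vocab_size, d_model = 100, 8
    W_embed = np.random.randn(vocab_size, d_model)    # tokens -> state space
    W_unembed = np.random.randn(d_model, vocab_size)  # state space -> logits

    tokens = np.array([3, 17, 42])
    states = W_embed[tokens]               # from here on, we're in state space
    states = states + states.mean(axis=0)  # stand-in for attention/MLP layers
    logits = states @ W_unembed            # leave state space only at the end
    next_token = int(logits[-1].argmax())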