janalsncm 2 hours ago
I noticed that this model is multilingual and understands 14 languages. For many use cases we probably only need a single language, and the other 13 are just adding latency. I suspect the coming years will bring a trend of trimming the fat off of these jack-of-all-trades models.
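To make "trimming" concrete, here's a minimal sketch of one version of it: dropping embedding rows for token IDs that never show up in a single-language corpus. The names and sizes below are placeholders, not this model's actual API, and a real pruning pass would also have to remap IDs inside the tokenizer:

    import torch
    import torch.nn as nn

    def prune_embedding(emb: nn.Embedding, keep_ids: list[int]):
        """Keep only the rows for token IDs seen in the target language."""
        keep = torch.tensor(sorted(set(keep_ids)), dtype=torch.long)
        pruned = nn.Embedding(len(keep), emb.embedding_dim)
        with torch.no_grad():
            pruned.weight.copy_(emb.weight[keep])
        # Old-ID -> new-ID map the tokenizer must apply before lookup.
        remap = {old.item(): new for new, old in enumerate(keep)}
        return pruned, remap

    # Illustrative numbers: a 50k multilingual vocab shrunk to the ~12k
    # token IDs observed in an English-only corpus cuts the embedding
    # (and a tied output head) ~4x. This saves parameters/memory; the
    # transformer layers themselves are untouched.
    vocab = nn.Embedding(50_000, 768)
    english_ids = list(range(12_000))  # stand-in for observed English IDs
    small_vocab, remap = prune_embedding(vocab, english_ids)
    print(small_vocab.weight.shape)    # torch.Size([12000, 768])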
decide1000 2 hours ago
I think this model proves a multilingual model can still be very efficient and accurate.
popalchemist an hour ago
It doesn't make sense to have a language-restricted transcription model because of code-switching. People aren't machines; we don't stick to our native languages without fail. Even monolingual people move in and out of their native language when using "borrowed" words and phrases. A single-language model will often fail to deal with that.
keeganpoppen 37 minutes ago
uhhh i doubt that multi-language support itself affects latency. model size, maybe, but what's the mechanism for making latency worse? i think of model latency as O(log(model size))… but i'm open to being wrong / that being a bad mental model / just an educated guess.
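For reference, the common back-of-envelope model for autoregressive decoding is that it's memory-bandwidth-bound: each generated token reads (roughly) all of the weights once, so per-token latency scales about linearly with model size rather than logarithmically. A rough sketch, with purely illustrative hardware numbers:

    # Rough per-token decode latency for a memory-bandwidth-bound model.
    # Every number here is an illustrative assumption, not a measurement.

    def per_token_latency_ms(params_billions: float,
                             bytes_per_param: float = 2.0,      # fp16/bf16
                             bandwidth_gb_per_s: float = 1000.0) -> float:
        weight_bytes = params_billions * 1e9 * bytes_per_param
        return weight_bytes / (bandwidth_gb_per_s * 1e9) * 1e3

    for b in (1, 4, 8):
        print(f"{b}B params -> ~{per_token_latency_ms(b):.0f} ms/token")
    # 1B -> ~2 ms, 4B -> ~8 ms, 8B -> ~16 ms: latency tracks parameter
    # count roughly linearly.

Under that assumption the mechanism is simply that unused capacity still has to be read from memory on every decode step, so parameters spent on 13 unused languages do cost time, not just disk space.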