m1el, 2 hours ago:
I've been using Nemotron ASR with my own ported inference, and I'm happy with it: https://huggingface.co/nvidia/nemotron-speech-streaming-en-0... https://github.com/m1el/nemotron-asr.cpp https://huggingface.co/m1el/nemotron-speech-streaming-0.6B-g...
Multicomp, an hour ago:
I'm amazed to find out just how close we are to the Star Trek voice computer. I used to use Dragon Dictation to draft my first novel, and I had to learn a 'language' to tell that rudimentary engine how to recognize my speech. Then I discovered [1] and have been using it for some basic speech recognition, amazed at what a local model can do. But it can't transcribe any text until I finish recording a file, and only then does it start working, so the feedback loop is slow, batch by batch.

And now you've posted this cool solution that streams audio chunks to the model in a steady flow of small pieces. Amazing, just amazing. Now if only I can figure out how to contribute that kind of streaming speech-to-text to Handy or something similar, local STT will be a solved problem for me.
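As far as I understand it, the streaming loop itself is conceptually simple. A rough Python sketch, assuming a 16 kHz mono WAV as input; StreamingASR here is a made-up placeholder standing in for whatever real model binding does the work (e.g. something wrapping nemotron-asr.cpp), not any library's actual API:

    import wave

    CHUNK_MS = 160                    # feed the model small fixed-size windows
    SAMPLE_RATE = 16_000
    CHUNK_FRAMES = SAMPLE_RATE * CHUNK_MS // 1000

    class StreamingASR:
        """Hypothetical placeholder for a real streaming-model binding."""
        def accept_audio(self, pcm_bytes: bytes) -> str:
            # A real implementation would run the model on this chunk,
            # keep its decoder state, and return any newly decoded text.
            return ""

    def transcribe_streaming(path: str) -> None:
        asr = StreamingASR()
        with wave.open(path, "rb") as wav:
            assert wav.getframerate() == SAMPLE_RATE and wav.getnchannels() == 1
            while True:
                chunk = wav.readframes(CHUNK_FRAMES)
                if not chunk:
                    break
                partial = asr.accept_audio(chunk)  # text arrives as audio arrives,
                if partial:                        # not after the whole recording
                    print(partial, end="", flush=True)

    if __name__ == "__main__":
        transcribe_streaming("recording.wav")

The point is that the model keeps its own state between chunks, so you get partial text back with roughly chunk-sized latency instead of waiting for the whole file to finish, which is exactly the feedback-loop difference over the batch tools.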