arcologies1985 9 hours ago
Could you make it use Parakeet? That's an offline model that runs very quickly even without a GPU, so you could get much lower latency than using an API. | ||||||||
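For context, a minimal sketch of running Parakeet offline via NVIDIA's NeMo toolkit (the checkpoint name and API below follow NeMo's published usage, not anything stated in this thread):

    # Minimal offline transcription with Parakeet via NVIDIA NeMo.
    import nemo.collections.asr as nemo_asr

    # Downloads the checkpoint on first run; runs fully offline after that.
    asr_model = nemo_asr.models.ASRModel.from_pretrained(
        model_name="nvidia/parakeet-tdt-0.6b-v2"
    )

    # Transcribe a 16 kHz mono WAV file; returns a list of hypotheses.
    hypotheses = asr_model.transcribe(["recording.wav"])
    print(hypotheses[0].text)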
zachlatta 9 hours ago
I love this idea, and originally planned to build it using local models, but to do post-processing (that's what gives you correctly spelled names when replying to emails, etc.), you need a local LLM too. If you do that, the total pipeline takes too long for the UX to be good (5-10 seconds per transcription instead of <1s). I also had concerns around battery life. Some day!
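A rough sketch of the two-stage pipeline described above, assuming Parakeet via NeMo for transcription and a local LLM served through Ollama's HTTP API for post-processing (the model names and prompt are illustrative, and the second stage is where the extra seconds come from):

    import requests
    import nemo.collections.asr as nemo_asr

    # Stage 1 model: fast local ASR.
    asr_model = nemo_asr.models.ASRModel.from_pretrained(
        model_name="nvidia/parakeet-tdt-0.6b-v2"
    )

    def transcribe_and_clean(wav_path: str, context: str) -> str:
        # Stage 1: local transcription, typically well under a second.
        raw_text = asr_model.transcribe([wav_path])[0].text

        # Stage 2: local LLM cleanup of names and punctuation. On CPU
        # this step can add several seconds, hence the latency concern.
        resp = requests.post(
            "http://localhost:11434/api/generate",  # Ollama's local API
            json={
                "model": "llama3.2",  # illustrative; any local model works
                "prompt": (
                    "Fix the spelling of names and the punctuation in "
                    f"this transcript. Context:\n{context}\n\n"
                    f"Transcript: {raw_text}\n\nCorrected transcript:"
                ),
                "stream": False,
            },
            timeout=60,
        )
        return resp.json()["response"].strip()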
s0l 9 hours ago
https://github.com/cjpais/Handy

It's free and offline.
| ||||||||