mythz 9 hours ago

Big fan of Salvatore's voxtral.c and flux2.c projects - hope they continue to get optimized, as it'd be great to have lean options without external deps. Unfortunately it's currently too slow for real-world use (AMD 7800X3D / BLAS), which I ran into when adding Voice Input support to llms-py [1].

In the end, Omarchy's new support for voxtype.io provided the nicest UX, followed by Whisper.cpp; and, despite being slower, OpenAI's Whisper is still a solid local transcription option.

Also very impressed with both the performance and price of Mistral's new Voxtral Transcription API [2]: really fast (near-instant) and really cheap ($0.003/min). IMO it's the best option in CPU/disk-constrained environments.
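For anyone curious, the call is just a multipart upload. A minimal sketch, assuming Mistral exposes an OpenAI-style /v1/audio/transcriptions endpoint; the model id below is illustrative only, check [2] for the exact name:

$ curl https://api.mistral.ai/v1/audio/transcriptions \
    -H "Authorization: Bearer $MISTRAL_API_KEY" \
    -F model="voxtral-mini-latest" \
    -F file="@audio.wav"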

[1] https://llmspy.org/docs/features/voice-input

[2] https://docs.mistral.ai/models/voxtral-mini-transcribe-26-02

antirez 6 hours ago | parent | next [-]

Hi! This model is great, but it is too big for local inference. Whisper medium (the "base" IMHO is not usable for most things, and "large" is too large) is a better deal for many environments, even if the transcription quality is noticeably lower (and even if it does not have a real online mode).

But... it's time for me to check the new Qwen 0.6 transcription model. If it works as well as their benchmarks claim, that could be the target for very serious optimizations and a no-deps inference chain conceived from the start for CPU execution, not just for MPS, since many times you want to install such transcription systems on servers rented online via Hetzner and similar vendors. So I'm going to handle it next, and if it delivers, it's really time for big optimizations covering specifically the Intel, AMD and ARM instruction sets, potentially also thinking about 8-bit quants if the performance remains good.

dust42 6 hours ago | parent [-]

Same experience here with Whisper: medium is often not good enough. The large-turbo model, however, is pretty decent, and on Apple silicon it's fast enough for real-time conversations. The prompt parameter can also help with transcription quality, especially when using domain-specific vocabulary. In general, Whisper.cpp is better at transcribing full phrases than at streaming.
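For example, whisper.cpp's CLI lets you pass an initial prompt to bias the decoder toward your vocabulary. A rough sketch, assuming the large-v3-turbo GGML model and a recent build (the binary is whisper-cli in newer versions, main in older ones; flag names may vary):

$ ./whisper-cli -m models/ggml-large-v3-turbo.bin -f call.wav \
    --prompt "Omarchy, Voxtral, Hetzner, llms-py, Waybar"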

And not to forget: for many use cases, more than just English is needed. Unfortunately, most STT/ASR and TTS systems right now focus on English plus 0-10 other languages. Being able to add more languages or domain-specific vocabulary with reasonable effort would thus be a huge plus for any STT or TTS.

grigio 3 hours ago | parent | prev | next [-]

+1 for voxtype with the Whisper base model; it is quite fast and accurate.

mijoharas 9 hours ago | parent | prev [-]

One thing I keep looking for is transcribing while I'm talking. I feel like I need that visual feedback. Does voxtype support that?

(I wasn't able to find anything at a glance.)

Handy claims to have an overlay, but it seems to not work on my system.

mythz 9 hours ago | parent | next [-]

Not sure how it works in other OSes, but in Omarchy [1] you hold down `Super + Ctrl + X` to start recording and release it to stop. While it's recording you'll see a red voice-recording icon in the top bar, so it's clear when it's recording.

Although, as llms-py is a local web app, I had to build my own visual indicator [2], which also displays a red microphone next to the prompt when it's recording. It also supports both tap on/off and hold-down recording modes. When using voxtype I'm just using the tool for transcription (i.e. not Omarchy's OS-wide dictation feature), like:

$ voxtype transcribe /path/to/audio.wav

If you're interested, the Python source code supporting multiple voice transcription backends is at [3].

[1] https://learn.omacom.io/2/the-omarchy-manual/107/ai

[2] https://llmspy.org/docs/features/voice-input

[3] https://github.com/ServiceStack/llms/blob/main/llms/extensio...

mijoharas 6 hours ago | parent [-]

Ah, the thing I really want is to see the words I'm speaking being transcribed as I speak (i.e. in real time). For some reason I rarely see that feature.

bmn__ 4 hours ago | parent [-]

The more things change…

https://news.ycombinator.com/item?id=21711755

mijoharas 4 hours ago | parent [-]

Hahaha! Plus ça change indeed.

(I keep coming back to this, so I've got half a dozen comments on HN asking for the exact same thing!)

It's a shame: Whisper is so prevalent and everyone uses it, but it's not great at actual streaming.

I'm hoping one of these might become the de facto realtime standard so we can actually get a realtime streaming API (and yep, I'd be perfectly happy with something that just writes to stdout, but all the tools end up batching because it's simpler!).
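For what it's worth, whisper.cpp does ship a stream example that prints partial transcriptions to stdout as you talk. A rough invocation, assuming a recent build (the binary name and flags vary by version, and it needs SDL2 for mic capture):

$ ./whisper-stream -m models/ggml-base.en.bin -t 8 --step 500 --length 5000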

Doman 7 hours ago | parent | prev [-]

I am using a window manager with Waybar. Voxtype can display a status icon in Waybar [1]; that is enough for me to know what is going on.

[1] https://github.com/peteonrails/voxtype/blob/main/docs/WAYBAR...