Aachen 6 days ago
Wait, I'm confused. The text here says all data remains on device and emphasises how much you can trust that, that you're obsessed with local-first software, etc. Clicking on the demo video, step one is... configuring access tokens for external services? Are the services shown at 0:21 (Groq, OpenAI, Anthropic, Google, ElevenLabs) doing the actual transcription, listening to everything I say, and is only the resulting text that they give us subject to "it all stays on your device"? Because that's not at all what I expected after reading this description.
braden-w 6 days ago
Great catch, Aachen; I should have clarified this better. The app supports both external APIs (Groq, OpenAI, etc.) and, more recently, local transcription (via whisper.cpp, OWhisper, Speaches, etc.), which never leaves your device. Like Leftium said, the local-first whisper.cpp implementation was just posted a few hours ago.
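To make the local path concrete, it looks roughly like this (a sketch, not the actual Whispering code; the port, endpoint path, and model id are placeholders for whatever your local server exposes, e.g. a Speaches or OWhisper instance with an OpenAI-compatible API):

    // Rough sketch only: post a recording to a local OpenAI-compatible
    // transcription server. localhost:8000, the path, and the model id
    // are assumptions, not Whispering's real defaults.
    async function transcribeLocally(audio: Blob): Promise<string> {
      const form = new FormData();
      form.append("file", audio, "recording.wav");
      form.append("model", "whisper-1");

      const res = await fetch("http://localhost:8000/v1/audio/transcriptions", {
        method: "POST",
        body: form,
      });
      if (!res.ok) throw new Error(`transcription failed: ${res.status}`);

      const { text } = await res.json();
      return text; // audio and transcript never leave this machine
    }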
Leftium 6 days ago
The local transcription feature via whisper.cpp was just released 2 hours ago: https://github.com/epicenter-so/epicenter/releases/tag/v7.3....
IanCal 6 days ago
> All your data is stored locally on your device, and your audio goes directly from your machine to your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.) or local provider (Speaches, owhisper, etc.)

Their point is that they aren't a middleman here; you can use your preferred supplier or run something locally.
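For what it's worth, the cloud path is the same kind of request, just sent straight to the provider you picked with your own API key, so nothing passes through the app authors' servers. A hedged sketch (the Groq base URL and model name are my understanding of their OpenAI-compatible API; check their docs before relying on them):

    // Sketch of the "no middleman" point: the request goes directly from
    // your machine to the provider you configured. Base URL and model id
    // below are assumptions about Groq's OpenAI-compatible API.
    async function transcribeViaGroq(audio: Blob, apiKey: string): Promise<string> {
      const form = new FormData();
      form.append("file", audio, "recording.wav");
      form.append("model", "whisper-large-v3");

      const res = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
        method: "POST",
        headers: { Authorization: `Bearer ${apiKey}` },
        body: form,
      });
      if (!res.ok) throw new Error(`transcription failed: ${res.status}`);

      const { text } = await res.json();
      return text; // the audio went straight to the chosen provider, not through an intermediary
    }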
dang 5 days ago
We've edited the top text to make this clearer now. Thanks for pointing this out!
6 days ago
[deleted]