chrsw a day ago
The only reason to run local models is privacy — never cost, or even latency.
websiteapi a day ago | parent
Indeed - my main use case is those kinds of "record everything" setups. I'm not even super privacy-conscious per se, but it just feels too weird to send literally everything I'm saying, all of the time, to the cloud. Luckily, for now Whisper doesn't require too much compute, but the kind of interesting analysis I'd want would require at least a 1B parameter model, maybe 100B or 1T.
andy99 a day ago | parent
Autonomy generally, not just privacy. You never know what the future will bring: AI will be enshittified, and so will hubs like Hugging Face. It's useful to have an off-grid solution that isn't subject to VCs wanting to see their capital returned.
| |||||||||||||||||||||||||||||||||||||||||