grena1re 2 days ago
We will for sure, but the issue is that without local LLMs, there's no way to offer a truly local version. And the local LLMs are dumb. So basically, you would still need to trust the LLM providers. Totally understand that this is a deal breaker for some people, but for many users the theoretical risk is worth it. We do regular security audits and pen tests, and we encrypt data in transit and at rest.
throwaway-blaze 2 days ago
Um, dismissing the tech as "the local LLMs are dumb" seems shortsighted. I can run some pretty impressive models on my local Mac, though it has >64 GB of RAM and an M3 Max. Given the privacy benefit, I wouldn't dismiss them so fast. I'd suggest picking one or two models that your prompts work well with and framing it as "we let you run with local models too, if you have a computer capable of that." This will (a) quiet the people who complain about everything and (b) get more people to try the cloud model knowing they could move to a local model for real usage.
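For what it's worth, the client-side plumbing is tiny if you target an OpenAI-compatible endpoint. Rough sketch below, assuming Ollama's local server as the OpenAI-compatible endpoint; the env var and model names are placeholders, not anything your product actually uses:

    # Same client code, pointed at either a hosted provider or a local
    # OpenAI-compatible server (e.g. Ollama). Model names are placeholders.
    import os
    from openai import OpenAI

    USE_LOCAL = os.getenv("USE_LOCAL_LLM") == "1"

    client = OpenAI(
        # Ollama serves an OpenAI-compatible API at localhost:11434/v1;
        # the api_key is ignored locally but the client requires some value.
        base_url="http://localhost:11434/v1" if USE_LOCAL else None,
        api_key="ollama" if USE_LOCAL else os.environ["OPENAI_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="llama3.1" if USE_LOCAL else "gpt-4o-mini",  # placeholder names
        messages=[{"role": "user", "content": "Summarize my notes from today."}],
    )
    print(resp.choices[0].message.content)

The hard part is tuning prompts per model, not the plumbing, which is exactly why I'd pick one or two local models and officially support only those.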