jdasdf 15 hours ago
I've been using v4 pro for the past few days, and in terms of quality it honestly seems more or less on par with OpenAI's 5.4 or Opus 4.6 (I haven't tried 4.7). To be clear, I'm not doing state-of-the-art stuff. I mostly used it for frontend development, since I'm not great at that and just need a decent-looking prototype. But for my purposes it's a perfectly good model, and the price is decent. I can't wait for an open model small enough for me to run locally to come out, though. I hate having to rely on someone else's machines (and getting all my data exfiltrated that way).
FrasiertheLion 3 hours ago
You can use Tinfoil for inference, which lets you run the model in the cloud while getting privacy similar to running it locally: https://tinfoil.sh/inference. Disclaimer: I'm the cofounder.

This works by running the model inside a secure enclave (using NVIDIA confidential computing) and verifying that the open source code running inside the enclave matches the runtime attestation. The docs walk you through the verification process: https://docs.tinfoil.sh/verification/verification-in-tinfoil
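To give a rough sense of what that check involves, here's a minimal sketch of the general pattern, not Tinfoil's actual client or API (the endpoint, response fields, and measurement value below are all hypothetical): fetch the enclave's attestation, compare its reported code measurement against the measurement of the open source build you audited, and only send data if they match.

    import json
    import urllib.request

    # Hypothetical endpoint and response fields, for illustration only;
    # this is not Tinfoil's real API. The docs linked above describe
    # the actual verification flow.
    ATTESTATION_URL = "https://enclave.example.com/attestation"

    # Measurement (hash) of the audited open source build you expect
    # the enclave to be running. In practice this comes from a
    # reproducible build of the published source, not a constant.
    EXPECTED_MEASUREMENT = "ab12cd34..."  # placeholder hex digest

    def verify_enclave() -> bool:
        """Compare the enclave's reported code measurement to the one
        computed from the open source release we audited."""
        with urllib.request.urlopen(ATTESTATION_URL) as resp:
            attestation = json.load(resp)
        # A real verifier would first validate the hardware vendor's
        # signature chain over this document before trusting any field.
        return attestation["measurement"] == EXPECTED_MEASUREMENT

    if __name__ == "__main__":
        if verify_enclave():
            print("Enclave matches the audited build; OK to send prompts.")
        else:
            print("Measurement mismatch; do not send data.")

The point of the design is that trust reduces to a hash comparison plus the hardware vendor's signature over the attestation, rather than trusting the operator's word.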
| ||||||||||||||
enochthered 13 hours ago
Thanks for sharing your experience; I'm looking to try it out. Which provider are you using for inference? Opencode or the DeepSeek API?