jdasdf 15 hours ago

I've been using v4 pro for the past few days, and honestly, in terms of quality it seems more or less on par with OpenAI's 5.4 or Opus 4.6 (I haven't tried 4.7).

To be clear, I'm not doing state-of-the-art stuff. I mostly used it for frontend development, since I'm not great at that and just need a decent-looking prototype.

But for my purposes it's a perfectly good model, and the price is decent.

I can't wait for an open model small enough for me to run locally to come out, though. I hate having to rely on someone else's machines (and getting all my data exfiltrated that way).

FrasiertheLion 3 hours ago | parent | next [-]

You can use Tinfoil for inference, which lets you use the model in the cloud while getting privacy similar to running locally: https://tinfoil.sh/inference.

Disclaimer: I'm the cofounder. This works by running the model inside a secure enclave (using NVIDIA confidential computing) and verifying that the open source code running inside the enclave matches the runtime attestation. The docs walk you through the verification process: https://docs.tinfoil.sh/verification/verification-in-tinfoil
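The core idea described above is remote attestation: the client pins a measurement (hash) of the audited open-source enclave image, and only trusts the enclave if its signed attestation report carries the same measurement. A minimal sketch, assuming illustrative field names (this is not Tinfoil's actual API, and a real client must also validate the report's signature chain back to the hardware vendor):

```python
import hashlib

def measurement_of(image_bytes: bytes) -> str:
    # Measurement of the reproducibly-built enclave image the client audited.
    return hashlib.sha256(image_bytes).hexdigest()

def verify_attestation(report: dict, expected_measurement: str) -> bool:
    # Real verification first checks the report's signature chain
    # (e.g. NVIDIA / CPU vendor certificates); here we only compare
    # the reported measurement against the pinned expected value.
    return report.get("measurement") == expected_measurement

# Client side: pin the measurement of the code you reviewed...
expected = measurement_of(b"audited open-source enclave image")

# ...and refuse to send data unless the enclave attests to exactly that code.
attestation_report = {"measurement": expected}
ok = verify_attestation(attestation_report, expected)
```

If the provider swapped in different code, its measurement would change and the check would fail, which is what makes the "verify before you send data" flow meaningful.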

100ms an hour ago | parent | next [-]

Tinfoil looks super interesting! Do you have load balancers in front of the trusted compute stack? I looked at a design like this in a different space, and the options for ensuring privacy in a traditional "best practice" architecture seemed very limited.

7777332215 2 hours ago | parent | prev [-]

Hi there, I use your service. It's great, but I have a few requests: please support crypto payments? Also, you're missing some open source models (qwen 30b 3a, Deepseek 4 flash).

enochthered 13 hours ago | parent | prev [-]

Thanks for sharing your experience, I’m looking to try it out.

Which provider are you using for inference? Opencode or the DeepSeek API?