FrasiertheLion 8 hours ago
Another option is verifiably private inference: open-source models running inside secure enclaves in the cloud (using NVIDIA confidential computing). The enclave code is open source and is verified via remote attestation when you connect, cryptographically proving that the inference provider cannot see any data. Tinfoil (https://tinfoil.sh/) is a good example of this (disclaimer: I'm the cofounder). You can read more about how this works here: https://docs.tinfoil.sh/verification/verification-in-tinfoil

> that open models are in the ballpark of the best commercial models

This is basically true for certain tasks. Chat interfaces, for example, are not well poised to take advantage of model intelligence beyond what the best open-source models already provide. But coding harnesses still benefit from greater model intelligence, and even more so, the reinforcement learning that tightly couples a provider's coding harness (claude-code, codex) with its model's tool-calling interface is another reason for the gap in effectiveness, even when controlling for model intelligence. The opencode founder (opencode is an open-source coding harness that supports different model providers) was recently complaining about the challenges of making the harness work well across providers: https://x.com/thdxr/status/2053290393727324313
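To make the attestation idea above concrete, here's a toy sketch (not Tinfoil's actual protocol; the measurement value and function names are made up for illustration): the client pins the expected measurement (hash) of the open-source enclave build, and only sends data if the measurement reported in the enclave's attestation quote matches it.

```python
import hashlib

# Hypothetical pinned measurement: in a real system this is the hash of the
# reproducibly built, open-source enclave image, published out of band.
EXPECTED_MEASUREMENT = hashlib.sha256(b"open-source enclave build v1.0").hexdigest()

def verify_attestation(quote_measurement: str,
                       expected: str = EXPECTED_MEASUREMENT) -> bool:
    """Accept the connection only if the enclave reports the pinned measurement.

    A real verifier would also check the hardware vendor's signature chain over
    the quote; this sketch only shows the measurement comparison.
    """
    return quote_measurement == expected

# An enclave running the published code reports the matching measurement...
assert verify_attestation(EXPECTED_MEASUREMENT)

# ...while tampered enclave code produces a different hash and is rejected.
tampered = hashlib.sha256(b"modified enclave build").hexdigest()
assert not verify_attestation(tampered)
```

The point is that the trust decision is made client-side against code anyone can audit, rather than by taking the provider's word for it.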