rvz 11 hours ago
No one is arguing that it isn't useful. The problem is this:

> I'm saying it doesn't matter it's probabilistic, everything is

Maybe it doesn't matter for you, but it generally does matter. The risk of a technology failing is far higher when its behavior is random and unexplainable than when it is expected, verified, and explainable; the former rules out many serious use cases. This is why your CPU or GPU works. LLMs are not deterministic, have no formal verification, and are fundamentally black boxes. That is why so many vibe-coders reported "AI deleted my entire home folder" incidents even when they only asked it to move a file or folder to another location. If it did not matter, why do you need sandboxes for the agents in the first place?
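One concrete form such a sandbox can take is a path-confinement guard around file operations: the agent's move request only executes if both paths resolve inside an allowed root. A minimal sketch; the function name and layout are hypothetical, not from any particular agent framework:

```python
import os
import shutil

def safe_move(src: str, dst: str, allowed_root: str) -> None:
    """Move src to dst, refusing any path that resolves outside allowed_root.

    Hypothetical guard for illustration; real agent sandboxes typically add
    OS-level isolation (containers, seccomp, chroot) on top of checks like this.
    """
    root = os.path.realpath(allowed_root)
    for path in (src, dst):
        # realpath collapses ".." segments and symlinks *before* the
        # containment check, so "allowed/../../home" cannot slip through
        resolved = os.path.realpath(path)
        if os.path.commonpath([root, resolved]) != root:
            raise PermissionError(f"{path!r} resolves outside sandbox {root!r}")
    shutil.move(src, dst)
```

With a guard like this, the "move a folder" request that deletes a home directory becomes a `PermissionError` instead of a destructive action.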
dkdcio 10 hours ago
I think we agree, then? The tech is useful; you need systems around it (like sandboxes, and commit hooks that prevent leaking secrets) to use it effectively, along with learned skills.

Very little software (or hardware) used in production is formally verified. Tons of non-deterministic software (including neural networks) is operating in production just fine, including in heavily regulated sectors (banking, health care).
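A commit hook of the kind mentioned here can be as simple as a regex scan over staged content. A minimal sketch; the two patterns are illustrative stand-ins for the much larger rule sets that real tools such as gitleaks ship:

```python
import re

# Hypothetical patterns for illustration only -- production hooks use
# dozens of rules plus entropy checks to catch generic tokens.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def scan_for_secrets(text: str) -> list[str]:
    """Return every secret-like string found in text; empty list means clean."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Wired into a `pre-commit` hook that exits nonzero when the list is non-empty, this is exactly the kind of deterministic system-around-the-model the comment describes: the LLM stays probabilistic, but the guard that blocks the commit is not.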