ffsm8 5 days ago

I'm not sure how to take that seriously given the current reality, where almost all security findings from LLM tools are false positives.

While I suspect it will work well enough on synthetic examples to trick naive and uninformed people into trusting it... at the very least, current LLMs can't provide the stability needed for this to be useful.

It might become viable with future models, but there's little value in discussing this approach right now. At least not until someone actually builds a PoC that works at least somewhat as designed, without a 50-100% false positive rate.

You can have some false positives, but the rate has to be low enough that people still listen to the tool, which currently isn't the case.
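To put rough numbers on why the false-positive rate matters so much: even a tool that looks accurate per-finding drowns real issues in noise when genuine vulnerabilities are rare. This is just the standard base-rate calculation; all the figures below are made-up assumptions for illustration, not measurements of any real scanner.

```python
def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """Fraction of flagged findings that are real vulnerabilities."""
    true_alerts = tpr * prevalence          # real vulns the tool catches
    false_alerts = fpr * (1 - prevalence)   # clean code it falsely flags
    return true_alerts / (true_alerts + false_alerts)

# Assume 1% of scanned locations are truly vulnerable, the tool catches
# 90% of them, and it falsely flags 20% of the clean locations:
p = precision(tpr=0.9, fpr=0.2, prevalence=0.01)
print(f"{p:.1%}")  # → 4.3% of alerts are real
```

At that point roughly 23 out of 24 alerts are noise, which is exactly the regime where people stop listening to the tool.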