▲ | nothrabannosir 5 days ago
Throwing more LLMs at a prompt escaper is like throwing more regexes at an HTML parser. If the first LLM wasn't enough, the second won't be either. You're in the wrong layer.
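For a concrete version of the layering argument, compare SQL injection: no amount of string escaping is as sound as parameterization, because parameterization keeps data out of the code channel entirely. A minimal sketch using Python's sqlite3 (an illustration of the analogy, not a prompt-injection fix):

    import sqlite3

    def unsafe_query(conn: sqlite3.Connection, user_input: str):
        # Wrong layer: sanitize-and-concatenate. However clever the
        # escaper, data and query text travel in the same channel, so
        # crafted input can smuggle structure back in.
        return conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    def safe_query(conn: sqlite3.Connection, user_input: str):
        # Right layer: parameterization. The driver transmits the input
        # as data, so it is never parsed as SQL, whatever it contains.
        return conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))

Prompt injection has no equivalent of the parameterized query yet; instructions and untrusted text share one channel, which is exactly why stacking more LLM "escapers" doesn't close the hole.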
▲ | scroogey 5 days ago | parent | next [-]
Here's an alternative perspective: https://x.com/rauchg/status/1949197451900158444
I'm not a professional developer (though Guillermo certainly is), so take this with a huge grain of salt, but I like the idea of an AI "trained" on security vulnerabilities as a second, third, and fourth set of eyes!
▲ | mathgeek 4 days ago | parent | prev [-]
While I agree with the idea of vetting things, I too get a chuckle when folks jump straight from "we can't trust this unknown code" to "let's trust AI to vet it for us". I've done it myself.