| ▲ | vimda 6 hours ago | |
Time and time again, be it "hallucination", prompt injection, or just plain randomness, LLMs have proven themselves woefully insufficient at best when presented with and asked to work with untrusted documents. This simply changes the attack vector rather than solving a real problem | ||
| ▲ | TeMPOraL 6 hours ago | parent [-] | |
In a computing system, LLMs aren't substituting for code, they're substituting for humans. Treat them accordingly. | ||