bc569a80a344f9c | 5 days ago
He didn't say that - he said they can be _more_ useful. The argument is that LLMs are unreliable, so using an LLM anywhere in your workflow introduces an unreliable contributor. It is then better to have that unreliable contributor on the red team than on the blue team: an unreliable contributor on defense introduces weaknesses and vulnerabilities, while an unreliable contributor on offense at worst produces a non-viable or trivial attack.