intended · 2 hours ago
> Impossible for an LLM to be configured to do the same? Because that's what I am seeing emerge from the various efforts to build LLM safety tools.

> Do you think a human is capable of providing assistance with defense but not offense, over a textual communication channel with another human?

An LLM is not a human. They don't even use the same reasoning process.