| ▲ | tuwtuwtuwtuw 4 hours ago | |
Couldn't that be solved by whitelisting specific commands? | ||
| ▲ | g947o 2 hours ago | parent | next [-] | |
Give it a try, and challenge yourself (or ChatGPT) to break it. You'll quickly realize that this is not feasible. | ||
| ▲ | wolttam 4 hours ago | parent | prev [-] | |
Such a mechanism would need to be implemented at `execve`, because it would be too easy for the model to stuff the command inside a script or other executable. | ||