philipallstar 4 hours ago
> The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation. It's astonishing that AI companies don't know about SQL injection attacks, and that a prompt requires the same safeguards.
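For the workflow-template half of this, a known mitigation does exist: pass untrusted event data through an environment variable instead of interpolating `${{ }}` expressions into the script. A minimal sketch (step name and CLI are hypothetical):

```yaml
- name: Run Claude on the issue
  env:
    # Expanded by the runner into the process environment,
    # never spliced into the script text itself.
    ISSUE_TITLE: ${{ github.event.issue.title }}
  run: |
    # Hypothetical CLI; the title reaches it as data, not as script.
    claude-prompt --title "$ISSUE_TITLE"
```

Note this only stops shell/template injection in the workflow; as the replies below point out, the LLM itself still receives the title inside its context, where no such boundary exists.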
WickyNilliams an hour ago
No such mitigation exists for LLMs, because they do not and (as far as anybody knows) cannot distinguish instructions from data. It's all one big blob of context.
arjvik 3 hours ago
There's a known fix for SQL injection (parameterised queries) and no such known fix for prompt injection.
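The SQL fix works because the driver carries query text and values over separate channels, so the database never parses the data as code. A minimal sketch using the Python stdlib's sqlite3 (the prompt half is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (title TEXT)")

title = "'); DROP TABLE issues; --"

# SQL: the ? placeholder binds the title as a parameter.
# The injection payload is stored as inert text.
conn.execute("INSERT INTO issues (title) VALUES (?)", (title,))
assert conn.execute("SELECT count(*) FROM issues").fetchone()[0] == 1

# LLMs: there is no equivalent binding mechanism. Any "template"
# still collapses into one token stream the model reads as a whole,
# so the title lands inside the model's instructions regardless.
prompt = f"Summarise this issue:\n{title}"
assert "DROP TABLE" in prompt
```

The contrast is the whole point: parameterisation is a property of the SQL wire protocol, and no analogous data channel exists in an LLM's input.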
rawling 3 hours ago
But you can't, can you? Everything just goes into the context...