Haven't GitHub-triggered LLMs already been the source of multiple prompt injection attacks? This seems bad.