| ▲ | warkdarrior 5 days ago |
| So you have some hierarchy of LLMs. The first LLM that sees the prompt is vulnerable to prompt injection. |
|
| ▲ | giancarlostoro 5 days ago | parent [-] |
| The first LLM only knows to delegate and cannot respond. |
| |
| ▲ | maxfurman 5 days ago | parent | next [-] | | But it can be tricked into delegating incorrectly - for example, to the "allowed to use confidential information" agent instead of the "general purpose" agent | |
| ▲ | rafabulsing 5 days ago | parent | prev [-] | | It can still be injected to delegate in a different way than the user would expect/want it to. |
|