▲ | jngiam1 5 days ago | |
I do think there's more infra coming that will help with these challenges - for example, the MCP gateway we're building at MintMCP [1] gives you full control over the tool names/descriptions and informs you if those ever update. We also recently rolled out STDIO server support, so instead of running it locally, you can run it in the gateway instead [2]. Still not perfect yet - tool outputs could be risky, and we're still working on ways to help defend there. But, one way to safeguard around that is to only enable trusted tools and have the AI Ops/DevEx teams do that in the gateway, rather than having end users decide what to use. [1] https://mintmcp.com [2] https://www.youtube.com/watch?v=8j9CA5pCr5c | ||
▲ | lelanthran 4 days ago | parent [-] | |
I dont understand how any of what you said helps or even mitigates the problem with an LLM getting prompt injected. I mean, only enabling trusted tools does not help defend against prompt injection, does it? The vector isn't the tool, after all, it's the LLM itself. |