▲ | 8note 3 days ago | |
couldnt the configuring LLM be poisoned by tool descriptions to grant the lethal trifecta to the run time LLM? | ||
▲ | 76SlashDolphin 3 days ago | parent | next [-] | |
It is possible thay a malicious MCP could poison the LLM's ability to classify it's tools but then your threat model includes adding malicious MCPs which would be a problem for any MCP client. We are considering adding a repository of vetted MCPs (or possibly use one of the existing ones) but, as it is, we rely on the user to make sure that their MCPs are legitimate. | ||
▲ | datadrivenangel 2 days ago | parent | prev [-] | |
Malicious servers are a separate threat I think. If the server is lying about what the tools do, an LLM can't catch that without seeing server source code, thus defeating the purpose of MCP. |