Remix.run Logo
Yeroc 5 days ago

Imagine running an MCP server inside your network that grants you access to some internal databases. You might expect this to be safe but once you connect that internal MCP server to an AI agent all bets are off. It could be something as simple as the AI agent offering to search the Internet but being convinced to embed information provided from your internal MCP server into the search query for a public (or adversarial service). That's just the tip of the iceberg here...

Aunche 5 days ago | parent [-]

I see. It's wild to me that people would be that trusting of LLMs.

LinXitoW 4 days ago | parent | next [-]

This seems like the obvious outcome, considering all the hype. The more powerful the AI, the more power it has to break stuff. And there is literally ZERO possibility to remove that risk. So, whos going to tell your gungho CEO that the fancy features he wants are straight up impossible, without a giant security risk?

withinboredom 5 days ago | parent | prev | next [-]

They weren’t kidding about hooking mcp servers to internal databases. You see people all the time connecting LLMs to production servers and losing everything — on reddit.

Its honestly a bit terrifying.

Aeolun 5 days ago | parent | next [-]

Claude has a habit of running ‘npm prisma reset —force’, then being super apologetic when I tell it that clears my dev database.

gniting 5 days ago | parent [-]

The Prisma team has done work that is part of the recent releases that specifically addresses this issue: https://prisma.io/changelog#log2025-08-27

koakuma-chan 5 days ago | parent | prev [-]

> on reddit

Explains everything

structural 4 days ago | parent | prev [-]

LLMs are approximately your employees on their first day of work, if they didn't care about being fired and there were no penalties for anything they did. Some percentage of humans would just pull the nearest fire alarm for fun, or worse.