consumer451 a day ago

> Over time the limitations of MCP have started to emerge. The most significant is in terms of token usage: GitHub’s official MCP on its own famously consumes tens of thousands of tokens of context, and once you’ve added a few more to that there’s precious little space left for the LLM to actually do useful work.

Supabase MCP really devours your context window. IIRC, it uses 8k tokens for the search_docs tool definition alone, just on load. If you actually use search_docs, it can return >30k tokens in a single reply.

Workaround: I just noticed yesterday that Supabase MCP now allows you to choose which tools are available. You can turn off the docs, and other tools. [0]
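
For example, with the local server the client config ends up looking something like this (from memory, so treat the flag names as approximate; [0] has the exact feature groups). Leaving docs out of --features should turn off search_docs entirely:

    {
      "mcpServers": {
        "supabase": {
          "command": "npx",
          "args": [
            "-y",
            "@supabase/mcp-server-supabase@latest",
            "--read-only",
            "--project-ref=<your-project-ref>",
            "--features=database,development"
          ],
          "env": {
            "SUPABASE_ACCESS_TOKEN": "<your-personal-access-token>"
          }
        }
      }
    }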

If you are wondering why you should care, all models get dumber as the context length increases. This happens much faster than I had expected. [1]

[0] https://supabase.com/docs/guides/getting-started/mcp

[1] https://github.com/adobe-research/NoLiMa

causal 20 hours ago | parent

It's also not clear to me why using "skills" would consume less context once invoked.

It's just instructions with RAG. The more I read about this, the more convinced I am that this is just marketing.

habitue 17 hours ago | parent

Skills won't use less context once invoked; the point is that MCP in particular frontloads the entire API surface area into your context, every tool schema up front. So even if the model never invokes the MCP, it's costing you.

That's why it's common advice to turn off MCPs for tools you don't think are relevant to the task at hand.

The idea behind skills is that they're progressively unlocked: each one only takes up a short description in the context, relying on the agent to expand the full instructions if it decides they're relevant.
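
Concretely, a skill is roughly just a SKILL.md like this (made-up example, not any particular shipped skill); only the name/description frontmatter sits in the context up front, and the body is only read if the model decides the skill applies:

    ---
    name: pdf-form-filler
    description: Fill in PDF forms. Use when the user supplies a PDF form to complete.
    ---

    # PDF form filling

    Full instructions, helper scripts, and reference files live here
    (and in sibling files). None of it costs tokens until the agent
    actually opens the skill.

So the standing cost is a sentence or two per skill, versus a full schema per MCP tool.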

consumer451 14 hours ago | parent

Your reply unlocked some serious, yet simple understanding for me. Thank you.