consumer451 | a day ago
> Over time the limitations of MCP have started to emerge. The most significant is in terms of token usage: GitHub’s official MCP on its own famously consumes tens of thousands of tokens of context, and once you’ve added a few more to that there’s precious little space left for the LLM to actually do useful work.

Supabase MCP really devours your context window. IIRC, it uses 8k tokens for its search_docs tool alone, just on load. If you actually use search_docs, it can return >30k tokens in a single reply.

Workaround: I just noticed yesterday that Supabase MCP now allows you to choose which tools are available. You can turn off the docs tool, among others. [0]

If you are wondering why you should care: all models get dumber as the context length increases, and this happens much faster than I had expected. [1]
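The tool-filtering workaround described above would typically live in an MCP client config. A minimal sketch, assuming a Claude Desktop-style `mcpServers` JSON entry; the `--features` flag and the group names (`database`, `development`) are assumptions here, so check the Supabase MCP README for the exact syntax:

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--read-only",
        "--features=database,development"
      ]
    }
  }
}
```

The idea is that omitting the docs group keeps search_docs out of the advertised tool list entirely, so its multi-thousand-token schema never enters the context window in the first place.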
causal | 20 hours ago | parent
It's also not clear to me why using "skills" would consume less context once invoked. It's just instructions with RAG. The more I read about this, the more convinced I am that this is just marketing.