SilverElfin 15 hours ago

This came up in recent discussions about the Google apps CLI that was recently released. Google initially included an MCP server but then removed it silently - and some people believe this is because of how many different things the Google Workspace CLI exposes, which would flood the context. And it seemed like in social media, suddenly a lot of people were talking about how MCP is dead.

But fundamentally that doesn’t make sense. If an AI needs to be fed instructions or schemas (context) to understand how to use something via MCP, wouldn’t it need the same things via CLI? How could it not? This article points that out, to be clear. But what I’m calling out is how simple it is to determine for yourself that this isn’t an MCP versus CLI battle. However, most people seem to be falling for this narrative just because it’s the new hot thing to claim (“MCP is dead, Long Live CLI”).

As for Google - they previously said they are going to support MCP. And they’ve rolled out that support even recently (example from a quick search: https://cloud.google.com/blog/products/ai-machine-learning/a...). But now with the Google Workspace CLI and the existence of “Gemini CLI Extensions” (https://geminicli.com/extensions/about/), it seems like they may be trying to diminish MCP and push their own CLI-centric extension strategy. The fact that Gemini CLI Extensions can also reference MCP feels a lot like Microsoft’s Embrace, Extend, Extinguish play.

jswny 15 hours ago | parent [-]

MCP loads all tools immediately. A CLI does not, because it isn't auto-exposed to the agent; you get more control over the context describing which tools exist, and over how that context is delivered.

algis-hn 6 hours ago | parent | next [-]

Accurate for naive MCP client implementations, but a proxy layer with inference-time routing solves exactly this control problem. BM25 lexical matching on each incoming query exposes only 3-5 relevant tool schemas to the agent rather than loading everything upfront - the 44K token cold-start cost that the article cites mostly disappears because the routing layer is doing selection work. MCPProxy (https://github.com/smart-mcp-proxy/mcpproxy-go) implements this pattern: structured schemas stay for validation and security quarantine, but the agent only sees what's relevant per query rather than the full catalog. The tradeoff isn't MCP vs CLI - it's routing-aware MCP vs naive MCP, and the former competes with CLI on token efficiency while retaining the organizational benefits the article argues for.
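A minimal sketch of that routing idea, assuming a hypothetical tool catalog and a stdlib-only BM25 scorer (this is illustrative, not MCPProxy's actual implementation):

```python
import math
from collections import Counter

# Hypothetical tool catalog: name -> short description (illustrative only).
TOOLS = {
    "calendar.create_event": "create a calendar event with title, time, attendees",
    "drive.search_files": "search files in drive by name or content",
    "mail.send": "send an email message to recipients",
    "sheets.append_row": "append a row of values to a spreadsheet",
}

def tokenize(text):
    return text.lower().replace(".", " ").split()

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tool description against the query with a minimal BM25."""
    tokenized = {name: tokenize(desc) for name, desc in docs.items()}
    avgdl = sum(len(t) for t in tokenized.values()) / len(tokenized)
    n = len(tokenized)
    df = Counter()  # document frequency per term
    for toks in tokenized.values():
        df.update(set(toks))
    scores = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores[name] = score
    return scores

def route(query, top_k=2):
    """Return only the top_k matching tool names; only their schemas
    would be injected into the agent's context, not the full catalog."""
    scores = bm25_scores(query, TOOLS)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked[:top_k] if scores[name] > 0]
```

The agent's context then carries two small schemas per turn instead of the whole catalog, which is where the cold-start token savings come from.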

jabber86 4 hours ago | parent | prev | next [-]

It does not have to load all tools. Just as you can hide details behind a CLI, you can implement the same lazy discovery in an MCP server and client.

Just follow the widely accepted pattern (all you need is 3 tools up front):

- listTools - list/search available tools
- getToolDetails - get the input arguments for a given tool name
- execTool - execute a given tool name with input arguments

HasMCP, a remote MCP framework, follows this pattern.
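The three-tool facade is simple to sketch. Everything below is illustrative - a hypothetical registry and plain Python functions standing in for real MCP handlers:

```python
# Hypothetical registry backing the three facade tools (names illustrative).
# Only listTools results reach the agent by default; full schemas are
# fetched lazily via getToolDetails.
REGISTRY = {
    "create_ticket": {
        "description": "open a support ticket",
        "schema": {"type": "object",
                   "properties": {"title": {"type": "string"}},
                   "required": ["title"]},
        "fn": lambda args: f"ticket created: {args['title']}",
    },
    "close_ticket": {
        "description": "close an existing ticket",
        "schema": {"type": "object",
                   "properties": {"id": {"type": "integer"}},
                   "required": ["id"]},
        "fn": lambda args: f"ticket {args['id']} closed",
    },
}

def list_tools(query=""):
    """listTools: return name + one-line description, never full schemas."""
    return [{"name": n, "description": t["description"]}
            for n, t in REGISTRY.items()
            if query.lower() in (n + " " + t["description"]).lower()]

def get_tool_details(name):
    """getToolDetails: fetch the input schema for one tool, on demand."""
    return REGISTRY[name]["schema"]

def exec_tool(name, arguments):
    """execTool: check required keys against the schema, then dispatch."""
    schema = REGISTRY[name]["schema"]
    missing = [k for k in schema.get("required", []) if k not in arguments]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return REGISTRY[name]["fn"](arguments)
```

The context cost is then proportional to the tools the agent actually asks about, not the size of the registry.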

CharlieDigital 15 hours ago | parent | prev | next [-]

You can solve the same problem by giving subsets of MCP tools to subagents so each subagent is responsible for only a subset of tools.

Or...just don't slam 100 tools into your agent in the first place.
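A rough sketch of the subagent idea, with made-up subagent names and a naive keyword router (a real setup would route with the LLM itself, but the partitioning is the point):

```python
# Hypothetical sketch: each subagent owns a small tool subset, so no single
# context window ever sees the full catalog (names are illustrative).
SUBAGENTS = {
    "calendar": {"tools": ["create_event", "list_events"],
                 "keywords": {"meeting", "event", "schedule"}},
    "mail": {"tools": ["send_mail", "search_mail"],
             "keywords": {"email", "mail", "inbox"}},
}

def pick_subagent(query):
    """Route a query to the subagent whose keywords overlap it most;
    only that subagent's tool schemas get loaded into its context."""
    words = set(query.lower().split())
    best = max(SUBAGENTS, key=lambda n: len(SUBAGENTS[n]["keywords"] & words))
    return best, SUBAGENTS[best]["tools"]
```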

simianwords 15 hours ago | parent [-]

>Or...just don't slam 100 tools into your agent in the first place.

But I can have that many with a CLI, so isn't that a negative for MCP?

CharlieDigital 15 hours ago | parent [-]

You've missed the point and hyperfocused on the story around context rather than on why an org would want centralized servers exposing MCP endpoints instead of CLIs.

simianwords 15 hours ago | parent [-]

I would want to know what point I missed. I can have 100 CLIs but not 100 MCP tools.

100 MCP tools will bloat the context whereas 100 CLIs won't. Which part do you disagree with?

CharlieDigital 14 hours ago | parent [-]

1. The part where you are providing 100 tools instead of a few really flexible tools

2. The part where you think your agent is going to know how to use 100 CLI tools that are not already in its training dataset without using extra turns walking the help content to dump out command names and schemas

3. The part where, without a schema defining the inputs, the LLM wastes iterations trying to correct the input format.

4. The part where, not having the full picture of the tools, your odds of it picking the same tools or the right tools is completely gambling that it outputs the right keywords to trigger the tool to be used.

5. The part where you forgot to mention that for your agent to know that your 100 CLI tools exist, you had to either provide it in context directly, provide it in context in a README.md, or have it output the directory listing and send that off to the LLM to evaluate before picking the tool and then possibly expanding the man pages for several tools and sub commands using several turns.

Don't get me wrong, CLIs are great if it's already in the LLM's training set (`git`, for example). Not so great if it's not, because it will need to walk the man pages anyway.

simianwords 14 hours ago | parent [-]

> The part where you are providing 100 tools instead of a few really flexible tools

I'm not sure how that solves the issue. The shape of each individual tool will be different enough that you will need a different schema - something you will be passing each time in MCP and something you can avoid in a CLI. Also, CLIs can be flexible too.

> The part where you think your agent is going to know how to use 100 CLI tools that are not already in its training dataset without using extra turns walking the help content to dump out command names and schemas

By CLIs we mean CLIs documented in a SKILLS.md, so this hop isn't required.

> The part where, without a schema defining the inputs, the LLM wastes iterations trying to correct the input format.

What do we lose from one extra iteration? We lose a lot by passing all the tool schemas on every turn.

> The part where, not having the full picture of the tools, your odds of it picking the same tools or the right tools is completely gambling that it outputs the right keywords to trigger the tool to be used.

we will use skills

> The part where you forgot to mention that for your agent to know that your 100 CLI tools exist, you had to either provide it in context directly, provide it in context in a README.md, or have it output the directory listing and send that off to the LLM to evaluate before picking the tool and then possibly expanding the man pages for several tools and sub commands using several turns.

skills

SilverElfin 6 hours ago | parent | prev | next [-]

I’m not a technical person but I’ve seen people share various tips and tricks to get around the MCP context issues. There’s also this from Anthropic:

https://www.anthropic.com/engineering/code-execution-with-mc

estetlinus 3 hours ago | parent | prev | next [-]

“to know what tools you have access to, read the dockerfile”?

climike 15 hours ago | parent | prev [-]

See also https://cliwatch.com/blog/designing-a-cli-skills-protocol