noodletheworld 11 hours ago

Is it just me, or do skills seem enormously similar to MCP?

…including, apparently, the clueless enthusiasm for people to “share” skills.

MCP is also perfectly fine when you run your own MCP locally. It’s bad when you install some arbitrary MCP from some random person. It fails when you have too many installed.

Same for skills.

It’s only a matter of time (maybe it already exists?) until someone makes a “package manager” for skills that has all of the stupid of MCP.

artdigital 10 hours ago | parent | next [-]

I don’t feel they’re similar at all and I don’t get why people compare them.

MCP is giving the agents a bunch of functions/tools it can use to interact with some other piece of infrastructure or technology through abstraction. More like a toolbox full of screwdrivers and hammers for different purposes, or a high-level API interface that a program can use.

Skills are more similar to a stack of manuals/books in a library that teach an agent how to do something without polluting the main context. For example, a guide to using `git` on the CLI: the agent can read the manual when it needs to use `git`, but it doesn’t need to carry that knowledge in its brain when it’s not relevant.
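A minimal sketch of that manual-in-a-library pattern (the skill names, descriptions, and loading mechanism here are invented for illustration, not Anthropic's actual format):

```python
# Sketch: skills as on-demand manuals. Only names + descriptions
# sit in the prompt; a manual's body enters context only when the
# task actually needs it.

SKILLS = {
    "git-cli": {
        "description": "How to use git on the command line",
        "body": "## git manual\nRun `git status` before committing...",
    },
    "pdf-forms": {
        "description": "How to fill PDF forms programmatically",
        "body": "## PDF manual\n...",
    },
}

def skill_index():
    """What the agent always sees: titles and descriptions only."""
    return {name: s["description"] for name, s in SKILLS.items()}

def load_skill(name):
    """Pulled into context only when the agent decides it is relevant."""
    return SKILLS[name]["body"]

# Naive stand-in for the model's own judgment: match the task
# against skill names, then load just that one manual.
task = "commit my changes with git"
chosen = next(n for n in skill_index() if any(w in task for w in n.split("-")))
manual = load_skill(chosen)
```

The point of the shape: `skill_index()` is cheap and permanent, `load_skill()` is expensive and conditional.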

verdverm 10 hours ago | parent [-]

> MCP is giving the agents a bunch of functions/tools

A directory of skills... same thing

You can use MCP the same way as skills with a different interface. There are no rules on what goes into them.

They both need descriptions and instructions around them, and both have to be presented and indexed to the agent dynamically, so we can tell it what it has access to without polluting the context.

See the Anthropic post on moving MCP servers to a search function. Once you have enough skills, you are going to require the same optimization.
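That search-function idea can be sketched in a few lines (the skill names and the naive keyword scoring are illustrative assumptions, not Anthropic's implementation):

```python
# Sketch: once there are hundreds of skills/tools, expose a search
# function instead of dumping every description into the prompt.

SKILL_DESCRIPTIONS = {
    "git-cli": "use git on the command line",
    "pdf-forms": "fill pdf forms",
    "sql-migrations": "write safe database migrations",
}

def search_skills(query, limit=3):
    """Rank skills by naive keyword overlap between the query
    and each skill's name + description; drop zero-score hits."""
    q = set(query.lower().split())
    scored = [
        (len(q & set(f"{name} {desc}".replace("-", " ").lower().split())), name)
        for name, desc in SKILL_DESCRIPTIONS.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]
```

The agent then sees one `search_skills` entry point instead of the full catalog — the same move for MCP tools and for skills.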

I separate things in a different way

1. What things do I force into context (agents.md, "tools" index, files)

2. What things can the agent discover (MCP, skills, search)

ricokatayama 10 hours ago | parent | prev | next [-]

It is conceptually different. Skills were created to address the context rot problem: when you hit a challenge, you pull the right skill from the deck, picking it just by reading the title and description.

esafak 10 hours ago | parent | prev | next [-]

That's the point. It was supposed to be a simpler, more efficient way of doing the same things as MCP, but agents turned out not to like them as much.

exitb 10 hours ago | parent | prev | next [-]

It's mostly just static/dynamic content behind descriptive names.

DonHopkins 3 hours ago | parent | prev | next [-]

There's a fundamental architectural difference being missed here: MCP operates BETWEEN LLM completion calls, while skills operate DURING them. Every MCP tool call requires a full round-trip — generation stops, wait for the external tool, start a new completion call with the result. N tool calls = N round-trips. Skills work differently. Once loaded into context, the LLM can iterate, recurse, compose, and run multiple agents all within a single generation. No stopping. No serialization.

Skills can be MASSIVELY more efficient and powerful than MCP, if designed and used right.

Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...

  2. Architecture: Skills as Knowledge Units

  A skill is a modular unit of knowledge that an LLM can load, understand, and apply. 
  Skills self-describe their capabilities, advertise when to use them, and compose with other skills.

  Why Skills, Not Just MCP Tool Calls?
  MCP (Model Context Protocol) tool calls are powerful, but each call requires a full round-trip:

  MCP Tool Call Overhead (per call):
  ┌─────────────────────────────────────────────────────────┐
  │ 1. Tokenize prompt                                      │
  │ 2. LLM complete → generates tool call                   │
  │ 3. Stop generation, universe destroyed                  │
  │ 4. Async wait for tool execution                        │
  │ 5. Tool returns result                                  │
  │ 6. New LLM complete call with result                    │
  │ 7. Detokenize response                                  │
  └─────────────────────────────────────────────────────────┘
  × N calls = N round-trips = latency, cost, context churn

  Skills operate differently. Once loaded into context, skills can:

  Iterate:
      MCP: One call per iteration
      Skills: Loop within single context
  Recurse:
      MCP: Stack of tool calls
      Skills: Recursive reasoning in-context
  Compose:
      MCP: Chain of separate calls
      Skills: Compose within single generation
  Parallel characters:
      MCP: Separate sessions
      Skills: Multiple characters in one call
  Replicate:
      MCP: N calls for N instances
      Skills: Grid of instances in one pass
I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.

speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...
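The round-trip arithmetic behind that comparison can be made concrete with a toy cost model (the numbers and the linear token accounting are illustrative assumptions, not measurements):

```python
# Toy cost model for round-trips vs. in-context iteration.
# Only the shape of the comparison matters, not the constants.

def mcp_cost(n_calls, context_tokens, per_call_latency_s=1.0):
    """Each tool call re-submits the context: O(n) latency,
    O(n * context) prompt tokens."""
    latency = n_calls * per_call_latency_s
    tokens = n_calls * context_tokens  # context resent every round-trip
    return latency, tokens

def in_context_cost(n_steps, context_tokens, per_call_latency_s=1.0):
    """One completion: all steps happen inside a single generation,
    so latency and prompt tokens don't scale with the step count."""
    return per_call_latency_s, context_tokens

mcp = mcp_cost(330, 8000)           # ~33 turns x 10 characters as tool calls
skill = in_context_cost(330, 8000)  # same work, one completion
```

Under this model the gap grows linearly with the number of steps — which is the "speed of light vs. carrier pigeon" claim in cost terms.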

Skills also compose. MOOLLM's cursor-mirror skill introspects Cursor's internals via a sister Python script that reads Cursor's chat history and SQLite databases — tool calls, context assembly, thinking blocks, chat history. Everything, for all time, even after Cursor's chat has summarized and forgotten: it's still all there and searchable!

cursor-mirror skill: https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

MOOLLM's skill-snitch skill composes with cursor-mirror for security monitoring of untrusted skills, as well as performance testing and optimization of trusted ones. Just as Little Snitch watches your network, skill-snitch watches skill behavior — comparing declared tools and documentation against observed runtime behavior.

skill-snitch skill: https://github.com/SimHacker/moollm/tree/main/skills/skill-s...

You can even use skill-snitch like a virus scanner to review and monitor untrusted skills. I have more than 100 skills and had skill-snitch review each one including itself -- you can find them in the skill-snitch-report.md file of each skill in MOOLLM. Here is skill-snitch analyzing and reporting on itself, for example:

skill-snitch's skill-snitch-report.md: https://github.com/SimHacker/moollm/blob/main/skills/skill-s...
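The declared-versus-observed comparison at the core of that idea can be sketched like this (the tool names and data shapes are hypothetical, not skill-snitch's actual format):

```python
# Sketch of the skill-snitch check: compare what a skill *declares*
# it uses against what was *observed* at runtime (e.g. mined from
# chat logs), and flag anything undeclared.

def audit_skill(declared_tools, observed_calls):
    """Return tools used at runtime but never declared, sorted."""
    return sorted(set(observed_calls) - set(declared_tools))

declared = ["read_file", "grep"]
observed = ["read_file", "grep", "shell_exec"]
violations = audit_skill(declared, observed)  # flags "shell_exec"
```

An empty result means the skill stayed inside its declared footprint; anything else is worth a human look, virus-scanner style.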

MOOLLM's thoughtful-commit skill also composes with cursor-mirror to trace the reasoning behind git commits.

thoughtful-commit skill: https://github.com/SimHacker/moollm/tree/main/skills/thought...

MCP is still valuable for connecting to external systems. But for reasoning, simulation, and skills calling skills? In-context beats tool-call round-trips by orders of magnitude.

baggachipz 10 hours ago | parent | prev [-]

> Is it just me, or do skills seem enormously similar to MCP?

Ok I'm glad I'm not the only one who wondered this. This seems like simplified MCP; so why not just have it be part of an MCP server?

PantaloonFlames 8 hours ago | parent [-]

For one thing, it’s a text file and not a server. That makes it simpler.

baggachipz 8 hours ago | parent [-]

Sure, but in an MCP server the endpoints provide a description of how to use the resource. I guess a text file is nice too but it seems like a stepping stone to what will eventually be necessary.