makestuff a day ago

Is a skill essentially a reusable prompt that is inserted at the start of any query? The marketing of Agents/MCP/skills/etc is very confusing to me.

cshimmin a day ago | parent | next [-]

It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. The benefit of making it a "standard" is that future generations of LLMs will be trained on this pattern specifically, and will get quite good at it.
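To make "lazy-load" concrete: a rough sketch of what a skill folder looks like (the `name`, `description`, paths, and contents here are invented for illustration; the frontmatter fields follow Anthropic's published skill format):

```markdown
---
name: pdf-report
description: Extract tables from PDF files and summarise them. Use when the user asks about PDF contents.
---

# PDF report skill

1. Run `scripts/extract.py <file.pdf>` to dump the tables as CSV.
2. Summarise each CSV before answering.
```

Only the short frontmatter description is loaded into context up front; the body of the file, and any scripts sitting next to it, are read only when the agent decides the skill applies.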

csomar 3 hours ago | parent | next [-]

> It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context.

So basically a reusable prompt, like the previous commenter asked?

bnchrch 9 minutes ago | parent | next [-]

Ah, not exactly.

The way the OP phrased it

> Is a skill essentially a reusable prompt that is inserted at the start of any query?

is actually a more apt description of a different Claude Code feature called slash commands, where I can create a preset "prompt" and call it with /name-of-my-prompt $ARGS. That feature is the one that essentially prefixes a prompt.

The lazy-loading description is more accurate for skills. There, I can tell my Claude Code system "hey, if you need to run our dev server, see my-dev-server-skill",

and the agent will determine when to pull that skill in, if it needs it.
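For contrast, a slash command really is just a saved prompt. A minimal sketch (the filename and body are made up), assuming Claude Code's convention of markdown files under `.claude/commands/`:

```markdown
<!-- .claude/commands/dev-server.md — invoked as /dev-server <args> -->
Start the dev server as described in docs/dev.md, then run the smoke
tests against it. Extra instructions from the user: $ARGUMENTS
```

The whole file is pasted into the conversation when you type the command; nothing is loaded conditionally.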

ActionHank 2 hours ago | parent | prev [-]

Yes, but with more sales magic sprinkled on top.

prodigycorp a day ago | parent | prev [-]

Does it persist the loaded information for the remainder of the conversation or does it intelligently cull the context when it's not needed?

dcre 13 hours ago | parent | next [-]

This question doesn’t have anything to do with skills per se, this is just about how different agents handle context. I think right now the main way they cull context is by culling noisy tool call output. Skills are basically saved prompts and shouldn’t be that long, so they would probably not be near the top of the list of things to cull.

terminalkeys 20 hours ago | parent | prev | next [-]

Claude Code subagents keep their context windows separate from the main agent, sending back only the most relevant context based on the main agent's request.

brabel 21 hours ago | parent | prev [-]

Each agent will do that differently, but Gemini CLI, for example, lets you save any session with a name so you can continue it later.

stavros a day ago | parent | prev | next [-]

It's the description that gets inserted into the context, and then if that sounds useful, the agent can opt to use the skill. I believe (but I'm not sure) that the agent chooses what context to pass into the subagent, which gets that context along with the skill's context (the stuff in the Markdown file and the rest of the files in the FS).

This may all be very wrong, though, as it's mostly conjecture from the little I've worked with skills.

subpixel an hour ago | parent [-]

Claude also has custom slash commands, so you can force skill usage as you see fit.

This lets you trigger a skill with '/foo' in a way that resembles using the command line.

Claude Code is very good at using well-defined skills without a command, though; but in scenarios where there is some nuance between similar skills, commands are useful.

dcre 13 hours ago | parent | prev | next [-]

“inserted at the start of any query” feels like a bit of a misunderstanding to me. It plops the skill text into the context when it needs it or when you tell it to. It’s basically like pasting in text or telling it to read a file, except for the bit where it can decide on its own to do it. I’m not sure start, middle, or end of query is meaningful here.

danielbln a day ago | parent | prev | next [-]

It's part of managing the context: a bit of prepared context that can be lazy-loaded in as the need arises.

Inversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in.

So yes, it's just turtles, sorry, prompts all the way down.

theshrike79 21 hours ago | parent | prev | next [-]

Skills can be just instructions on how to do things.

BUT what makes them powerful is that you can include code with the skill package.

Like I have a skill that uses a Go program to traverse the AST of a Go project to find different issues in it.

You COULD just prompt it, but then the LLM would have to dig around using find and grep. Now it runs a single executable which outputs an LLM-optimised clump of text for processing.
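A toy sketch of that idea (my own minimal example, not theshrike79's actual program), using Go's standard go/ast and go/parser packages to walk a file's AST and emit one compact, grep-free line per function for the LLM to consume:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// summarizeFuncs parses Go source and returns one compact line per
// function declaration — a dense summary an LLM can read instead of
// digging through the project with find and grep.
func summarizeFuncs(filename, src string) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, filename, src, 0)
	if err != nil {
		return nil, err
	}
	var out []string
	ast.Inspect(f, func(n ast.Node) bool {
		fn, ok := n.(*ast.FuncDecl)
		if !ok {
			return true // keep walking into other node kinds
		}
		pos := fset.Position(fn.Pos())
		out = append(out, fmt.Sprintf("func %s line=%d exported=%v",
			fn.Name.Name, pos.Line, fn.Name.IsExported()))
		return true
	})
	return out, nil
}

func main() {
	src := `package demo

func LoadConfig() {}

func fetchData() {}
`
	lines, err := summarizeFuncs("demo.go", src)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(lines, "\n"))
}
```

A real skill would ship a compiled version of something like this plus a SKILL.md telling the agent when to run it; the "issues" it looks for would be whatever checks you care about.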

langitbiru a day ago | parent | prev [-]

A skill can also include (Python/Ruby/bash) scripts, which Claude Code can execute.