| ▲ | Skills for organizations, partners, the ecosystem (claude.com) |
| 274 points by adocomplete 21 hours ago | 152 comments |
| |
|
| ▲ | irrationalfab 16 hours ago | parent | next [-] |
| There's a pattern I keep seeing: LLMs used to replace things we already know how to do deterministically. Parsing a known HTML structure, transforming a table, running a financial simulation. It works, but it's like using a helicopter to cross the street: expensive, slow, and not guaranteed to land exactly where you intended. The real opportunity with Agent Skills isn't just packaging prompts. It's providing a mechanism that enables a clean split: LLM as the control plane (planning, choosing tools, handling ambiguous steps) and code or sub-agents as the data/execution plane (fetching, parsing, transforming, simulating, or executing NL steps in a separate context). This requires well-defined input/output contracts and a composition model. I opened a discussion on whether Agent Skills should support this kind of composability: https://github.com/agentskills/agentskills/issues/11 |
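A minimal sketch of that split, assuming a generic `llm` callable that returns a JSON tool choice; every name here (TableParser, extract_table, run) is illustrative, not part of any real Skills API:

```python
import json
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Execution plane: parse a known <table> structure deterministically,
    instead of asking the model to re-emit it token by token."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell, self._in_cell = [], [], [], False

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._in_cell, self._cell = True, []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
            self._row.append("".join(self._cell).strip())
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = []

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

def extract_table(html: str) -> list[list[str]]:
    parser = TableParser()
    parser.feed(html)
    return parser.rows

def run(llm, task: str, tools: dict):
    """Control plane: the model only picks a tool and its arguments against
    a declared input/output contract; deterministic code does the work."""
    decision = json.loads(llm(
        f"Task: {task}\nTools: {list(tools)}\n"
        'Reply as JSON: {"tool": "...", "args": {...}}'))
    return tools[decision["tool"]](**decision["args"])
```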
| |
▲ | basch 16 hours ago | parent | next [-] | | The same applies to context vs a database. If a reasoning model makes a decision about something, it should be put off to the side and stored as a value/variable/entry somewhere. Instead of using pages and pages of context, it makes sense for some tasks to "press" decisions that become more permanent to the conversation. You can somewhat accomplish that with NotebookLM, by turning results into notes into sources, but NotebookLM is insular and doesn't have the research and imaging features of Gemini. Also, in writing, going from top to bottom has its disadvantages. It makes sense to emulate the human writing process and work in passes, as you flesh out and, conversely, summarize writing. Current LLMs can brute-force these things through emulation/observation/mimicry, but they aren't as good as doing it the right way. Not only would I like to see "skills" but also "processes" where you create a well-defined order in which tasks are accomplished in sequence. Repeatable templates. This would essentially include variables in the templates, set for replacement. | |
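A rough sketch of that "pressing" idea, with sqlite standing in for the store; the table layout and function names are assumptions for illustration, not an existing feature of any product:

```python
import sqlite3

db = sqlite3.connect("session.db")
db.execute("""CREATE TABLE IF NOT EXISTS decisions (
    key TEXT PRIMARY KEY, value TEXT, rationale TEXT)""")

def press_decision(key: str, value: str, rationale: str) -> None:
    """Record a settled decision once, instead of re-deriving it from
    pages of prior conversation on every turn."""
    db.execute("INSERT OR REPLACE INTO decisions VALUES (?, ?, ?)",
               (key, value, rationale))
    db.commit()

def decisions_preamble() -> str:
    """Rebuild a compact, stable preamble for the next prompt."""
    rows = db.execute("SELECT key, value FROM decisions").fetchall()
    return "\n".join(f"- {k}: {v}" for k, v in rows)

press_decision("db_engine", "sqlite", "single-user app, no ops budget")
print(decisions_preamble())  # -> "- db_engine: sqlite"
```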
▲ | gradus_ad 13 hours ago | parent | prev | next [-] | | I've recently been doing some work with Autodesk. It would be great for an LLM to be as comfortable with the "vocabulary" of these applications as it is with code. Maybe part of this involves creating a language for CAD design in the first place. But the principle that we need to build out vocabularies and subsequently generate and expose "sentences" (workflows) for LLMs to train on seems like a promising direction. Of course this requires substantial buy-in from application owners - create the vocabulary - and users - agree to expose and share the sentences they generate - but the results would be worth it. | | |
▲ | baq 3 hours ago | parent [-] | | Mildly amusing since I remember AutoCAD having a Lisp interpreter ~30 years ago…? | | |
| |
▲ | ugh123 16 hours ago | parent | prev | next [-] | | 100% Additionally, I can't even get Claude or Codex to reliably use the prompt and simple rules (use this command to compile) in an agents.md or whatever the required markdown file is. Why would I assume they will reliably handle skills prompts spread about a codebase? I've even seen tool usage deteriorate while it's thinking and self-commanding through its output to, say, read code from a file. Sometimes it uses tail, while other times it gets confused by the output and then writes a basic Python program to parse lines and strings from the same file, effectively producing the same output as before. How bizarre! | |
▲ | _the_inflator 4 hours ago | parent | prev | next [-] | | I agree partly. Skills essentially boil down to distributed parts of a main prompt. If you consider a state model you can see this pattern: the task is the state, and combining the task's specific skills defines the current prompt augmentation. When the task changes, another prompt emerges. In the end, it is the clear guidance of the agent that is the deciding factor. | |
▲ | itissid 8 hours ago | parent | prev | next [-] | | Isn't at least part of that GH issue something that this https://docs.boundaryml.com/guide/introduction/what-is-baml is also trying to solve? LLM calls should be functions with defined inputs and outputs. That was their starting point. IIUC their most recent arc focuses on prompt optimization[0], where you can optimize — using DSPy and an optimization algorithm, GEPA [1] — with relative weights on different things like errors, token usage, and complexity. [0] https://docs.boundaryml.com/guide/baml-advanced/prompt-optim...
[1] https://github.com/gepa-ai/gepa?tab=readme-ov-file | |
▲ | hintymad 16 hours ago | parent | prev [-] | | > Parsing a known HTML structure, transforming a table, running a financial simulation. Transforming an arbitrary table is still hard, especially a table on a webpage or in a document. Sometimes I even struggle to find the right library. The effort does not seem worth it for a one-off transformation either. An LLM can be a great tool for such tasks. |
|
|
| ▲ | reedf1 21 hours ago | parent | prev | next [-] |
| How likely are we to look back on Agent/MCP/Skills as some early Netscape peculiarity? I would dive into adoption if I didn't think some new thing would beat the paradigm in a fortnight. |
| |
▲ | vessenes 18 hours ago | parent | next [-] | | I've built a number of MCP servers, including an MCP wrapper. I'd generally recommend you skip it unless you know you need it. Conversely, I'd generally recommend you write up a couple of skills ASAP to get a feel for them. It will take you 20 minutes to write and test some. MCP does three things conceptually: it lets you build a bridge between an agent and <something else>, it specifies a UI+API layer between the bridge and the LLM, and it formalizes the description of that bridge in a tool-calling format. It's that UI+API layer that's the biggest pain in the ass, in my opinion. Sometimes you need it; for instance, if you wanted an agent to access your emails, a high-quality MCP server that can't destroy your life through enthusiastic tool calling makes sense. If, however, you have, say, a CLI tool or simple API that's reasonably self-documenting and you're willing to have it run, and/or if you need specific behavior with a different context setting, then a skill can just be a markdown file that explains what, how, why. | | |
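For a concrete sense of how small such a file can be, here is a hypothetical skill in the SKILL.md shape the spec describes (name and description in frontmatter, instructions in the body); the changelog workflow itself is invented for illustration:

```markdown
---
name: changelog-writer
description: Drafts a changelog entry from recent git history. Use when the user asks to summarize changes for a release.
---

# Changelog writer

1. Run `git log --oneline <last-tag>..HEAD` to list commits since the last release.
2. Group the commits under Added / Changed / Fixed headings.
3. Append the entry to CHANGELOG.md, newest release first.
```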
| ▲ | throwup238 17 hours ago | parent [-] | | Agreed. I use only one MCP server regularly and it’s a custom one integrated into my QT desktop app. It has tools for inspecting the widget tree, using selectors to click/type/etc, and take screenshots. Functionality that would otherwise be hard or impossible to reliably implement using CLI calls but gives Claude a closed feedback loop. All public MCP server I’ve seen have been a disaster with too many tools and tokens polluting the context. It’s really most useful when you need tight integration with some other environment and can write a little custom wrapper to provide it. |
| |
| ▲ | irrationalfab 16 hours ago | parent | prev | next [-] | | Agent/MCP/Skills might be "Netscape-y" in the sense that today's formats will evolve fast. But Netscape still mattered: it lost the market, not the ideas. The patterns survived (JavaScript, cookies, SSL/TLS, progressive rendering) and became best practices we take for granted. The durable pattern here isn't a specific file format. It's on-demand capability discovery: a small index with concise metadata so the model can find what's available, then pull details only when needed. That's a real improvement over tool calling and MCP's "preload all tools up front" approach, and it mirrors how humans work. Even as models bake more know-how into their weights, novel capabilities will always be created faster than retraining cycles. And even if context becomes unlimited, preloading everything up front remains wasteful when most of it is irrelevant to the task at hand. So even if "Skills" gets replaced, discoverability and progressive disclosure likely survive. | |
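A sketch of that discovery pattern, assuming skills live in directories whose SKILL.md frontmatter carries a name and description (as in the current spec); the loader itself is illustrative:

```python
from pathlib import Path

def _frontmatter(md: Path) -> dict:
    """Pull key: value pairs out of the ----delimited frontmatter."""
    block = md.read_text().split("---")[1]
    return dict(line.split(":", 1) for line in block.strip().splitlines()
                if ":" in line)

def skill_index(root: str) -> str:
    """Concise metadata only: cheap enough to preload on every request."""
    return "\n".join(
        f"{m['name'].strip()}: {m['description'].strip()}"
        for m in (_frontmatter(p) for p in Path(root).glob("*/SKILL.md")))

def load_skill(root: str, name: str) -> str:
    """Full instructions, pulled into context only when the model asks."""
    return (Path(root) / name / "SKILL.md").read_text()
```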
▲ | verelo 18 hours ago | parent | prev | next [-] | | Yes, this 100%. Every person I speak with who is excited about MCP is some LinkedIn guru or product expert. I've yet to encounter a seriously technical person excited by any of this. | | |
▲ | hnlmorg 18 hours ago | parent | next [-] | | MCP, as a concept, is a great idea. The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new JavaScript web framework: there’s nothing wrong with frameworks, but when everyone and their son are writing a new framework and half those frameworks barely work, you end up with a buggy, fragmented ecosystem. I get why this happens. Startups want VC money, established companies then want to appear relevant, and then software engineers and students feel pressured to prove they’re hireable. And you end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party. | |
| ▲ | anthuswilliams 18 hours ago | parent | prev | next [-] | | I have found MCPs to be very useful (albeit with some severe and problematic limitations in the protocol's design). You can bundle them and configure them with a desktop LLM client and distribute them to an organization via something like Jamf. In the context I work in (biotech) I've found it a pretty high-ROI way to give lots of different types of researchers access to a variety of tools and data very cheaply. | | |
| ▲ | verelo 17 hours ago | parent [-] | | I believe you, but can you elaborate? What exactly does MCP give you in this context? How do you use it? I always get high level answers and I'm yet to be convinced, but i would love this to be one of those experiences where i walk away being wrong and learning something new. | | |
▲ | anthuswilliams 16 hours ago | parent [-] | | Sure, absolutely. Before I do, let me just say, this tooling took a lot of work and problem solving to establish in the enterprise, and it's still far from perfect. MCPs are extremely useful IMO, but there are a lot of bad MCP servers out there and even good ones are NOT easy to integrate into a corporate context. So I'm certainly not surprised when I hear about frustrations. I'm far from an LLM hype man myself. Anyway: a lot of earlier stages of drug discovery involve pulling in lots of public datasets, scouring scientific literature for information related to a molecule, a protein, a disease, etc. You join that with your own data and laboratory capabilities and commercial strategy in order to spot opportunities for new drugs that you could maybe, one day, take into the clinic. This is traditionally an extremely time-consuming and bias-prone activity, and whole startups have grown up around trying to make it easier. A lot of the public datasets have MCPs someone has put together around someone's REST API. (For example, a while ago Anthropic released "Claude for Life Sciences" which was just a collection of MCPs they had developed over some popular public resources like PubMed). For those datasets that don't have open source MCPs, and for our proprietary datasets, we stand up our own MCPs which function as gateways for e.g. running SQL queries or Spark jobs against those datasets. We also include MCPs for writing and running Python scripts using popular bioinformatics libraries, etc. We bundle them with `mcpb` so they can be made into a fully configured one-click installer you can load into desktop LLM clients like Claude Desktop or LibreChat. Then our IT team can provision these fully configured tools for everyone in our organization using MDM tools like Jamf. We manage the underlying data with classical data engineering patterns, ETL jobs, data definition catalogs, etc, and give MCP-enabled tools to our researchers as front end concierge type tools. And once they find something they like, we also have MCPs which can help transform those queries into new views, ETL scripts, etc and serve them using our non-LLM infra, or save tables, protein renderings, graphs, etc and upload them into docs or spreadsheets to be shared with their peers. Part of the reason we have set it up this way is to work through the limitations of MCPs (e.g. all responses have to go through the context window, so you can't pass large files around or trust that it's not mangling the responses). But also we do this so as to end up with repeatable/predictable data assets instead of LLM-only workflows. After the exploration is done, the idea is you use the artifact, not the LLM, to interact with it (though of course you can interact with the artifact in an LLM-assisted workflow as you iterate once again in developing yet another derivative artifact). Some of why this works for us is perhaps unique to the research context where the process of deciding what to do and evaluating what has already been done is a big part of daily work. But I also think there are opportunities in other areas, e.g. SRE workflows pulling logs from Kubernetes pods and comparing to Grafana metrics, saving the result as a new dashboard, and so on. What these workflows all have in common, IMO, is that there are humans using the LLM as an aid to drive understanding, and then translating that understanding into more traditional, reliable tools. 
For this reason, I tend to think that the concept of autonomous "agents" is stupid, outside of a few very narrow contexts. That is to say, once you know what you want, you are generally better off with a reliable, predictable, LLM-free application, but LLMs are very useful in the process of figuring out what you want. And MCPs are helpful there. |
|
| |
| ▲ | james2doyle 17 hours ago | parent | prev | next [-] | | I have found MCPs helpful. Recently, I used one to migrate a site from WordPress to Sanity. I pasted in the markdown from the original site and told it to create documents that matched my schemas. This was much quicker and more flexible than whipping up a singular migration tool. The Sanity MCP uses oAuth so I also didn’t need to do anything in order to connect to my protected dataset. Just log in. I’ll definitely be using this method in the future for different migrations. | |
| ▲ | danmaz74 17 hours ago | parent | prev [-] | | I use only one MCP, but I use it a lot: it's chrome devtools. I get Claude Code to test in the browser, which makes a huge difference when I want it to fix a bug I found in the browser - or if I just want it to do a real world test on something it just built. |
| |
| ▲ | xnx 20 hours ago | parent | prev | next [-] | | Don't forget A2A: https://developers.googleblog.com/en/a2a-a-new-era-of-agent-... We'll see how many of these are around in a few years. | | |
▲ | SamDc73 18 hours ago | parent [-] | | I've yet to come across applications implementing A2A in real life. | | |
▲ | dcreater 15 hours ago | parent [-] | | I've yet to come across applications implementing ANY AI framework in real life/production grade projects... |
|
| |
| ▲ | veunes 4 hours ago | parent | prev | next [-] | | The space is moving fast enough that everything feels provisional | |
▲ | isodev 19 hours ago | parent | prev | next [-] | | How likely is it that we'll even remember “the AI stuff” 2-3 years from now? What we’re trying to do with LLMs today is extremely unsustainable. Nvidia/OpenAI will run out of silly investors eventually… | |
| ▲ | wuliwong 20 hours ago | parent | prev | next [-] | | So like any early phase, there's risk in picking a technology to use. | |
| ▲ | adw 17 hours ago | parent | prev | next [-] | | Skills are just prompt conventions; the exact form may change but the substance is reasonable. MCP, eh, it’s pretty bad, I can see it vanishing. The agent loop architectural pattern (and that’s the relevant bit) is going to continue to matter. There will be new patterns for sure, but tool calling plus while loop (which is all an “agent” is) is powerful and highly general. | |
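That skeleton fits in a few lines; a sketch assuming an `llm` callable that returns either tool calls or a final answer (the message shapes mimic common chat APIs without being any specific one):

```python
def agent(llm, messages: list, tools: dict, max_steps: int = 20):
    for _ in range(max_steps):
        reply = llm(messages)               # model decides: act or answer
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]         # final answer ends the loop
        for call in reply["tool_calls"]:    # run each requested tool
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    raise RuntimeError("agent did not finish within max_steps")
```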
| ▲ | DenisM 20 hours ago | parent | prev | next [-] | | Why do you think they will fade out? | | |
▲ | observationist 20 hours ago | parent | next [-] | | Frontier models will eventually eat all the tedious tailored add-ons as just part of something they can do. Right now models have roughly all of the written knowledge available to mankind, minus some obscure held-out private archives and so on. They have excellent skills and general abilities to construct plausible sequences of actions to accomplish work, but we need to hold their hands to really get decent performance across a wide range of activities. Skills and agent frameworks and MCP carve out different domains of that problem, and successful solutions provide training data for future models: either those patterns get generalized, or a vast mountain of synthetic data following successful patterns makes the next generation of models incredibly useful for a huge number of tasks by default. It might also be possible that by studying the problem and identifying where mode collapse and training issues prevent the right sort of generalization, they could tweak the architecture, solve the deficiency through normal training runs, and thereby discard the need for all the bespoke artisanal agent specifications. | |
| ▲ | jonahbenton 19 hours ago | parent | next [-] | | To my eyes skills disappear, MCP and agent definitions do not. You can have the most capable human available to you, a supreme executive assistant. You still have to convey your intent and needs to them, your preferences, etc, with as high a degree of specificity as necessary. And you need to provide them with access and mechanisms to do things on your behalf. Agentic definitions are the former, and they will evolve and grow. I like the metaphor of deal terms in financial contracts- benchmarkers document billions of these now. The "deal terms" governing the work any given entity does for you will be rich and bespoke and specific, like any valuable relationship. Even if the agent is learning about you, your governance is still needed. MCP is the latter. It is the protocol by which a thing does things for you. It will get extensions. Skill-like directives and instructions will get delivered over it. Skills themselves are near term scaffold that will soon disappear. | | |
| ▲ | verdverm 19 hours ago | parent [-] | | Skills are specific, contextual, and persistent (stateful) whereas LLMs are not | | |
| ▲ | jonahbenton 17 hours ago | parent [-] | | It isn't between llm and skill, it's between agent and skill. Orgs that invest in skills will duplicate what they could do once in an agent. Orgs that "buy" skills from a provider will need to endlessly tweak them. Multiskill workflows will have semantic layer mismatches. Skill is a great sleight of hand for Anthropic to get people to think Claude Code is a platform. There is no there there. Orgs will figure this out. Cheers. |
|
| |
| ▲ | DenisM 20 hours ago | parent | prev | next [-] | | I hear you - model development might overcome the shortcomings one day. However the "waiting out" strategy needs a timeout. It might happen that agentic crutches around LLMs will bear fruit much sooner than high-quality LLMs arrive. If you don't have a timeout or a decent exit criteria you may end up waiting indefinitely, or at least until reality of things becomes too painful to ignore. The "ski rental problem" comes to mind here, but maybe there is another "wait it out" exit strategy? | |
▲ | airstrike 16 hours ago | parent | prev | next [-] | | > Frontier models will eventually eat all the tedious tailored add-ons as just part of something they can do. I don't think this makes any sense, as MCP is already part of something they can do | |
▲ | mbesto 19 hours ago | parent | prev [-] | | > Right now models have roughly all of the written knowledge available to mankind, minus some obscure held out private archives and so on. Sorry for the nit, but this is a gross oversimplification. Most private archives are not obscure but obfuscated, and they are largely way more valuable training data than the publicly available ones. Want to know how the DOD may technically track your phone? Private. Want to know how to make Coca Cola at scale? Private. Want to know what the schematic is for a Google TPU? Private. etc etc. |
| |
▲ | amitport 20 hours ago | parent | prev | next [-] | | His point, I believe, was that it is early in the innovation cycle and they may very well be replaced quickly by different solutions/paradigms. | |
| ▲ | DenisM 20 hours ago | parent | next [-] | | Well, some things fade out and some do not. How do we decide which one it is? The reason I ask is that the pace of new things arriving is overwhelming, hence I was tempted to just ignore it. Not because things had signs of transience, but because I was drowning and didn't know where to start. That is not the same thing as actually observing signs of things being too foamy. | |
| ▲ | wuliwong 20 hours ago | parent | prev [-] | | Agreed. I think if this is overly concerning, developing early in the innovation cycle just might not be the ideal place to be. :) |
| |
| ▲ | orliesaurus 20 hours ago | parent | prev [-] | | Adoption on most of these has been weak, except MCP (and whatever flavor of markdown file you like to add to your agent context) | | |
▲ | zingababba 20 hours ago | parent [-] | | Microsoft seems to be pushing MCP pretty hard in the Azure ecosystem. My cynical take is that they are very aware of the context bloat and see it as extra inference $$. | |
▲ | bonesss 18 hours ago | parent [-] | | Pure speculation, but I feel the inference money is tiny compared to the speed and permanence of Office integrations MCP enables through the consultancy swarm. MCP lets you glue random-assed parts of services to mega-ultra-high-critical business initiatives with no go-between. Delivered through a personalized chat interface that will tell you how sexy you are and how you deserved to win at golf yesterday… from salesman to auto interface to forever contract in minutes. MS sells to the insecurities of incompetent management and facilitates territory marking at the expense of governments and societies around the world for mega bucks. MCP, obvious as it is technically, also lets them plug a library into existing services for a quick upgrade, then an atomized upsell directly to the chat interfaces of upper management. Microsoft’s CEO has talked about his agent swarm. Much like RPA, this woo appeals strongly to the barely technical. |
|
|
| |
| ▲ | smrtinsert 20 hours ago | parent | prev [-] | | Extremely likely but that doesn't mean it lacks value today |
|
|
| ▲ | itissid 8 hours ago | parent | prev | next [-] |
One thing that is interesting to think about: given a skill, which is just "pre-context", how can it be _evolved_ to create prompts given _my_ context? E.g. here is their web-artifacts-builder skill from the desktop app:

```
web-artifacts-builder Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts.
```

Say I want to build a landing page with some relatively static content. I don't know it yet, but it's just going to be Bootstrap CSS, no SPA/React(ish); it'll be fine as a templated server-side thing. But I don't know how to express this in words. Could the skill _evolve_ based on what my preferences are and what is possible for a relative novice to grok and construct? This is a simple example, but it could extend to, say, using sqlite+litestream instead of Postgres, or gradient-boosted trees instead of an expensive transformer-based classifier. |
|
| ▲ | makestuff 20 hours ago | parent | prev | next [-] |
| Is a skill essentially a reusable prompt that is inserted at the start of any query? The marketing of Agents/MCP/skills/etc is very confusing to me. |
| |
| ▲ | cshimmin 20 hours ago | parent | next [-] | | It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. The benefit of making it a "standard" is that future generations of LLMs will be trained on this pattern specifically, and will get quite good at it. | | |
▲ | csomar an hour ago | parent | next [-] | | > It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. So basically a reusable prompt, like the previous commenter asked? | | |
| ▲ | prodigycorp 20 hours ago | parent | prev [-] | | Does it persist the loaded information for the remainder of the conversation or does it intelligently cull the context when it's not needed? | | |
| ▲ | dcre 11 hours ago | parent | next [-] | | This question doesn’t have anything to do with skills per se, this is just about how different agents handle context. I think right now the main way they cull context is by culling noisy tool call output. Skills are basically saved prompts and shouldn’t be that long, so they would probably not be near the top of the list of things to cull. | |
| ▲ | terminalkeys 19 hours ago | parent | prev | next [-] | | Claude Code subagents keep their context windows separate from the main agent, sending back only the most relevant context based on the main agent's request. | |
| ▲ | brabel 20 hours ago | parent | prev [-] | | Each agent will do that differently, but Gemini CLI, for example, lets you save any session with a name so you can continue it later. |
|
| |
| ▲ | stavros 20 hours ago | parent | prev | next [-] | | It's the description that gets inserted into the context, and then if that sounds useful, the agent can opt to use the skill. I believe (but I'm not sure) that the agent chooses what context to pass into the subagent, which gets that context along with the skill's context (the stuff in the Markdown file and the rest of the files in the FS). This may all be very wrong, though, as it's mostly conjecture from the little I've worked with skills. | |
| ▲ | dcre 11 hours ago | parent | prev | next [-] | | “inserted at the start of any query” feels like a bit of a misunderstanding to me. It plops the skill text into the context when it needs it or when you tell it to. It’s basically like pasting in text or telling it to read a file, except for the bit where it can decide on its own to do it. I’m not sure start, middle, or end of query is meaningful here. | |
▲ | danielbln 20 hours ago | parent | prev | next [-] | | It's part of managing the context. It's a bit of prepared context that can be lazy-loaded in as the need arises. Inversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in. So yes, it's just turtles, sorry, prompts all the way down. | |
▲ | theshrike79 20 hours ago | parent | prev | next [-] | | Skills can be just instructions on how to do things. BUT what makes them powerful is that you can include code with the skill package. Like I have a skill that uses a Go program to traverse the AST of a Go project to find different issues in it. You COULD just prompt it, but then the LLM would have to dig around using find and grep. Now it runs a single executable which outputs an LLM-optimised clump of text for processing. | |
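The parent's program is Go-specific, but a rough Python analogue of the same idea, using the stdlib ast module to condense a file into that one clump of text, looks like this:

```python
#!/usr/bin/env python3
import ast
import sys

def summarize(path: str) -> str:
    """Emit a compact outline of a Python file so the agent gets one
    ready-made summary instead of digging around with find and grep."""
    tree = ast.parse(open(path).read(), filename=path)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})  # line {node.lineno}")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}  # line {node.lineno}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize(sys.argv[1]))
```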
| ▲ | langitbiru 20 hours ago | parent | prev [-] | | It also has (Python/Ruby/bash) scripts which Claude Code can execute. |
|
|
| ▲ | mrbonner 20 hours ago | parent | prev | next [-] |
| The agentic development scene has slowly turned into a full-blown JavaScript circus—bright lights, loud chatter, and endless acts that all look suspiciously familiar. We keep wrapping the same old problems in shiny new packages, parading them around as if they’re groundbreaking innovations. How long before the crowd grows tired of yet another round of “RFC” performances? |
| |
| ▲ | isoprophlex 20 hours ago | parent | next [-] | | MCP: we're uber, but for stdout | | | |
| ▲ | hugs 20 hours ago | parent | prev | next [-] | | the tech industry is forever in denial that it is also actually a fashion industry. | | |
| ▲ | recursive 19 hours ago | parent | next [-] | | That's only true for companies that make most of their money from investment instead of customers. Those exist too. | | |
| ▲ | falcor84 18 hours ago | parent [-] | | What do you mean? Are you saying that customers don't follow fashions? |
| |
| ▲ | pixl97 19 hours ago | parent | prev [-] | | Beyond assembly everything is window dressing. |
| |
▲ | beoberha 19 hours ago | parent | prev | next [-] | | It’s a fast-moving field. People aren’t coming up with new ideas to be performative. They see issues with the state of the art and make something that may or may not move things forward. MCP is huge for getting agents to do things in the “real world”. However, it’s costly! Skills are a cheap way to fill that gap for many cases. People are finding immediate value in both of these. Try not to be so pessimistic. | |
▲ | verdverm 18 hours ago | parent [-] | | It's not pessimism, but actual compatibility issues, like the deno vs npm package ecosystems that didn't work together for many years. There are multiple intermixed and inconsistent concepts all out in the wild: AGENTS vs CLAUDE vs .github/instructions; skills vs commands; ... When I work on a project, do all the files align? If I work in an org where developers have agent choice, how many of these instruction and skill "distros" do I need to put in (pollute?) my repo with? | |
▲ | detkin 15 hours ago | parent | next [-] | | Skills have been really helpful in my team as we've been encoding tribal knowledge into something that other developers can easily take advantage of. For example, our backend architecture has these hidden patterns that, once encoded in a skill, can be followed by full-stack devs doing work there, saving a ton of time in coding and PR review. We then hit the problem of how to best share these and keep them up to date, especially with multiple repositories. It led us to build sx - https://github.com/sleuth-io/sx, a package manager for AI tools. |
▲ | ffsm8 18 hours ago | parent | prev [-] | | Depending on your workflow, none. While I do agentic development in personal projects a lot at this point, at work it's super rare beyond quick lookups for things I should already know but can't be arsed to remember exactly (like writing a one-off SQL script that does batched mutations and similar) |
|
| |
| ▲ | veunes 4 hours ago | parent | prev | next [-] | | There's definitely a performative vibe to a lot of it right now | |
| ▲ | toomuchtodo 20 hours ago | parent | prev | next [-] | | When the AI investment dollars run out. "As long as the music is playing, you've got to get up and dance." (Chuck Prince, Citigroup) | |
| ▲ | rvz 20 hours ago | parent | prev | next [-] | | Well, these agentic / AI companies don't even know what an RFC is, let alone how to write one. The last time they attempted to create a "standard" (MCP) it was not only premature, but it was a complete security mess. Apart from Google Inc., I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard. [0] "MCP" was one of the worst so-called "standards" ever built since the JWT was proposed. So I do not take Anthropic seriously when they create so-called "open standards" especially when the reference implementation is in Javascript or TypeScript. [0] https://www.rfc-editor.org/standards | | |
| ▲ | lxgr 19 hours ago | parent [-] | | To be fair, security wasn’t even a consideration until RFCs were well into triple digits. We’re still very early, as they say. > I have not seen a single "AI company" propose an RFC that was reviewed by the IETF and became a proper internet standard. Why would the IETF have anything to do with LLM/agent standards? This seems like a category error. They also don’t ratify web standards, for example. | | |
| |
| ▲ | wiseowise 16 hours ago | parent | prev [-] | | > full-blown JavaScript circus It is not healthy when you have an obsession this bad, seriously. Seek help. |
|
|
| ▲ | quacky_batak 21 hours ago | parent | prev | next [-] |
I like how Anthropic has positioned themselves as the true AI research company, donating “standards” like that. Although Skills are just md files, it’s good to see them “donate” it. Their goal seems to be simple: focus on coding and improving it. They’ve found a great niche and, hopefully, a revenue-generating business there. OpenAI on the other hand doesn’t give me the same vibes; they don’t seem very focused. They’re playing catch-up with both Google’s models and Anthropic |
| |
▲ | plufz 20 hours ago | parent | next [-] | | I have no idea why I’m about to defend OpenAI here. BUT OpenAI have released some open-weight models like gpt-oss and Whisper. But sure, open weight, not open source. And yeah, I really don’t like OpenAI as a company, to be clear. | |
▲ | dismantlethesun 20 hours ago | parent [-] | | They have, but it does feel like they are developing a closed platform, a la Apple. Apple has Shortcuts, but they haven’t propped it up as a standard that other people can use. By contrast, this is something you can use even if you have nothing to do with Claude, and the tools you create will be compatible with the wider ecosystem. |
| |
| ▲ | theshrike79 20 hours ago | parent | prev [-] | | A skill can also contain runnable code. Many many MCPs could and should just be a skill instead. |
|
|
| ▲ | unbelievably 19 hours ago | parent | prev | next [-] |
Why does this need to be a standard in the first place? This isn't DDR5 lol, it's literally just politely asking the model to remember some short descriptions and read a corresponding file when it thinks appropriate. I feel like these abstractions are supposed to make Claude sound more sophisticated because WOW now we can give the guy new skills! But really they're just obfuscating the "data as code" aspect of LLMs, which is their true power (and vulnerability ofc). |
|
| ▲ | an0malous 20 hours ago | parent | prev | next [-] |
I feel inspired and would like to donate my standard for Agent Personas to the community. A persona can be defined by a markdown file with the following frontmatter:

---
persona: hacker
description: logical, talks about computers a lot, enjoys coffee, somewhat snarky and arrogant
---
<more details here>
|
| |
| ▲ | lxgr 19 hours ago | parent | next [-] | | This isn’t just a standard—this is a templating system that could offer us a straight shot to AGI! | |
| ▲ | allisdust 20 hours ago | parent | prev | next [-] | | Please consider donating this to the Linux Foundation so they can drive this inspiring innovation forward. | |
| ▲ | falcor84 18 hours ago | parent | prev | next [-] | | I have a few qualms about this standard: 1. For an experienced Claude Code user, you can already build such an agent persona quite trivially by using the /agents settings. 2. It doesn't actually replace agents. Most people I know use pre-defined agents for some tasks, but they still want the ability to create ad-hoc agents for specific needs. Your standard, by requiring them to write markdown files does not solve this ad-hoc issue. 3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the standard, is it reasonable to expect to make money off of this? | |
| ▲ | acedTrex 20 hours ago | parent | prev | next [-] | | Have you considered publishing this with a few charts about vague levels of "correctness"? | | |
| ▲ | rvz 20 hours ago | parent [-] | | What is "correctness?"... wait hang on let me think... "you're absolutely right!" |
| |
| ▲ | brap 20 hours ago | parent | prev | next [-] | | Give this man a Turing Award | |
| ▲ | InitialLastName 20 hours ago | parent | prev | next [-] | | Luckily you get the "extremely confident, even when wrong" attribute for free. | | |
| ▲ | sshine 19 hours ago | parent [-] | | But always willing to admit the opposite is true and go with that on a whim. |
| |
| ▲ | baobun 15 hours ago | parent | prev | next [-] | | As groundbreaking as this is, it will never get traction without a LICENSE.md. | |
| ▲ | weitendorf 20 hours ago | parent | prev | next [-] | | announcing md2ai spec | |
| ▲ | zikani_03 18 hours ago | parent | prev | next [-] | | absolutely revolutionary! ;) | |
| ▲ | oblio 20 hours ago | parent | prev [-] | | > logical Please tell us how REALLY feel about JavaScript. |
|
|
| ▲ | apf6 19 hours ago | parent | prev | next [-] |
| It was just a few months ago that the MCP spec added a concept called "prompts" which are really similar to skills. And of course Claude Code has custom slash commands which are also very similar. Getting a lot of whiplash from all these specifications that are hastily put together and then quickly forgotten. |
| |
▲ | vedmakk 17 hours ago | parent | next [-] | | My understanding is that MCP prompts and slash commands are "user triggered" whereas Skills (and MCP tools) are "model triggered". Other than that, it appears MCP prompts end up as slash commands provided by an MCP server (instead of client-side command definitions). But the actual knowledge that is encoded in skills/commands/MCP prompts is very similar. | |
| ▲ | verdverm 18 hours ago | parent | prev [-] | | It's a "standard" though! /s |
|
|
| ▲ | layer8 20 hours ago | parent | prev | next [-] |
They published a specification; that doesn’t yet make it a standard. |
|
| ▲ | veunes 4 hours ago | parent | prev | next [-] |
| If skills really can move across platforms, that's a meaningful push against lock-in, at least in theory |
|
| ▲ | vladsh 20 hours ago | parent | prev | next [-] |
Skills are a pretty awkward abstraction. They emerged to patch a real problem: generic models require fine-tuning via context, which quickly leads to bloated context files and context dilution (i.e. more hallucinations). But skills don't really solve the problem. Turning that workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here |
| |
▲ | ako 19 hours ago | parent | next [-] | | Skills don’t solve the problem if you think an LLM should know everything. But if you see LLMs mostly as text plan-do-check-act machines that can process input text, generate output text, and create plans to gather more knowledge and validate the output, without knowing everything upfront, skills are a perfectly fine solution. The value of standardizing skills is that the skills you define work with any agentic tool. It doesn’t matter how simple they are; if they don’t work easily, they have no use. You need a practical and efficient way to give the LLM your context. Just like every organization has its own standards, best practices, and architectures that should be documented, since new developers do not know them upfront, LLMs also need your context. An LLM is not an all-knowing brain; it’s a plan-do-check-act text-processing machine. | |
| ▲ | brabel 19 hours ago | parent | prev | next [-] | | How would you solve the same problem? Skills seem to be just a pattern (before this spec) that lets the LLMs choose what information they need to "load". It's not that different from a person looking up the literature before they do a certain job, rather than just reading every book every time in case it comes in handy one day. Whatever you do you will end up with the same kind of solution, there's no way to just add all useful context to the LLM beforehand. | |
| ▲ | root_axis 19 hours ago | parent | prev | next [-] | | > it’s unclear what is their endgame here Marketing. That defines pretty much everything Anthropic does beyond frontier model training. They're the same people producing sensationalized research headlines about LLMs trying to blackmail folks in order to prevent being deleted. | |
| ▲ | verdverm 18 hours ago | parent | prev | next [-] | | > Standardizing a patch isn’t something I’d expect from Anthropic This is not the first time, perhaps expectation adjustment is in order. This is also the same company that has an exec telling people in his Discord (15m of fame recently) Claude has emotions | |
| ▲ | wuliwong 20 hours ago | parent | prev | next [-] | | >But skills dont really solve the problem. I think that they often do solve the problem, just maybe they have some other side effects/trade offs. | |
| ▲ | theshrike79 20 hours ago | parent | prev [-] | | They’re not a perfect solution, but they are a good one. The best one we have thought of so far. |
|
|
| ▲ | terminalkeys 18 hours ago | parent | prev | next [-] |
| All the talk about "open" standards from AI companies feels like VC-backed public LLM experiments. Even if these standards fade, they help researchers create and validate new tools. I see this especially with local models. The rise of CLI-based LLM coding tools lets me use models like GPT OSS 20B to build apps locally and offline. |
|
| ▲ | officialchicken 3 hours ago | parent | prev | next [-] |
| XML Config? Can someone explain that decision? |
|
| ▲ | htrp 19 hours ago | parent | prev | next [-] |
I wish agentic skills were something other than a system prompt or a series of step-by-step instructions. Feels like Anthropic had an opportunity here to do something truly groundbreaking but ended up with prompt engineering. |
|
| ▲ | runtimepanic 20 hours ago | parent | prev | next [-] |
| Interesting move. One thing I’m curious about is how opinionated the standard is supposed to be.
In practice, agent “skills” tend to blur the line between capabilities, tools, and workflows, especially once statefulness and retries enter the picture.
Is the goal here mostly interoperability between agent frameworks, or do you see this evolving into something closer to a contract for agent behavior over time?
I can imagine standardization helping a lot, but only if it stays flexible enough to avoid freezing today’s agent design assumptions. |
|
| ▲ | liampulles 18 hours ago | parent | prev | next [-] |
| I'm curious about the `license` field in the specification: https://agentskills.io/specification. Could one make a copyleft type license such that the generated code must be licensed free and open and under the same license? How enforceable are licenses on these skills anyway, if one can take in the whole skill with an agent and generate a legally distinct but functionally close variant? |
|
| ▲ | fudged71 15 hours ago | parent | prev | next [-] |
| I'd love to see way more interest, rigor, tooling, etc in the industry regarding Skills, I really think they have solved the biggest problems that killed Expert Systems back in the day. I'd love to see the same enthusiasm as MCPs for these, I think in the long term they will be much more important than MCPs (still complementary). |
|
| ▲ | mkagenius 18 hours ago | parent | prev | next [-] |
If anyone wants to use Skills in Gemini CLI or any other LLM tool - check out something I have created, open-skills: https://github.com/BandarLabs/open-skills It runs code in an Apple container if your Skill requires any code execution. It also proves the point that Skills are basically repackaged MCPs (if you look into my code). |
| |
| ▲ | theturtletalks 18 hours ago | parent [-] | | Will Skills and Code Execution replace MCPs eventually? | | |
| ▲ | mkagenius 18 hours ago | parent [-] | | I doubt that. MCPs are broader. You can serve a Skill via a MCP but the reverse may not be always true. For example, you can't have a directory named "Stripe-Skills" which will give you a breakdown of last week's revenue (unless you write in the skills how to connect to stripe and get that information). So, most of the remote, existing services are better used as MCPs (essentially APIs). |
|
|
|
| ▲ | good-idea 19 hours ago | parent | prev | next [-] |
I have been switching between OpenCode and Claude - one thing I like about OpenCode is the ability to define custom agents. These can be ones tailored to specific workflows like PR reviews or writing changelogs. I haven't yet attempted the equivalent of this with skills in Claude. These two solutions look, feel, and smell like the same thing. Are they the same thing? Any OpenCode users out there have any hot or nuanced takes? |
| |
| ▲ | terminalkeys 18 hours ago | parent | next [-] | | Claude Code has subagents as well. I created a workflow with multiple agents to build iOS apps, including agents for orchestration, design, build, and QA. | |
| ▲ | 0x008 18 hours ago | parent | prev | next [-] | | The skills can be specific to a repository but the agents are global, right? | |
| ▲ | abatilo 19 hours ago | parent | prev [-] | | Claude code simply supports agents also |
|
|
| ▲ | albingroen 21 hours ago | parent | prev | next [-] |
| They really do love standards |
|
| ▲ | robertheadley 19 hours ago | parent | prev | next [-] |
I had developed a tool for Roo Code that basically gives Playwright the ability to develop and test user scripts in an automated fashion, and I have moved over to anti-gravity with no problem. It is functionally a skill. I suppose once anti-gravity supports skills, I will make it one officially. |
|
| ▲ | gaigalas 20 hours ago | parent | prev | next [-] |
| Finally I can share this beauty with a wider world: https://github.com/alganet/skills/blob/main/skills/left-padd... |
| |
| ▲ | debugnik 20 hours ago | parent | next [-] | | Amazing. It's just missing an order for the chatbot to say "I know left-pad" before actually doing any work. | |
| ▲ | xd1936 20 hours ago | parent | prev | next [-] | | This is hilarious | |
| ▲ | josteink 20 hours ago | parent | prev [-] | | Is that intentionally designed to completely occupy the full context window of the earlier GPT models? Either way, that’s hilarious. Well done. | | |
| ▲ | gaigalas 19 hours ago | parent [-] | | I asked a model to write for me following the style and tone of other skills! <conspiracy_mode> maybe all of them were designed to occupy the full context window of earlier GPT models </conspiracy_mode> |
|
|
|
| ▲ | someguy101010 20 hours ago | parent | prev | next [-] |
Is it possible to provide an LLM a skill through the MCP resource feature? |
| |
▲ | uhgrippa 19 hours ago | parent | next [-] | | In a way, yes; you can reduce context usage by a non-negligible amount approaching it this way. I'm investigating this on my skill validation/analysis/bidirectional MCP server project and hope to have it as a released feature soon: https://github.com/athola/skrills | |
| ▲ | theshrike79 20 hours ago | parent | prev [-] | | It’s also possible to implement an MCP as a skill |
|
|
| ▲ | Seattle3503 21 hours ago | parent | prev | next [-] |
| My company has a plugin marketplace in a git repo where we host our shared skills. It would be nice if we could plug that into the web interface. |
| |
▲ | verdverm 18 hours ago | parent [-] | | Or if we wrote these things in a language with real imports and modules? I'm authoring the equivalent in CUE, and assimilating "standard" provider ones into CUE on the fly so my agent can work with all the shenanigans out there.
|
|
| ▲ | ada1981 20 hours ago | parent | prev | next [-] |
Our lab has been experimenting with “meta skills” that allow new skills to be created after a particular workflow and used later. Paper & applications published here:
https://earthpilot.ai/metaskills/ |
| |
| ▲ | uhgrippa 19 hours ago | parent | next [-] | | I noticed a similar optimization path with skills, where I now have subagents to analyze the performance of a previous skill/command/hook execution, triggered by a command. I've pushed this to my plugin marketplace https://github.com/athola/claude-night-market | |
| ▲ | babyshake 20 hours ago | parent | prev [-] | | I have been experimenting with these same type of factory pattern skills. Thanks for sharing. | | |
| ▲ | danielbln 20 hours ago | parent [-] | | After a session with Claude Code I just tell it "turn this into a skill, incorporate what we've learned in this session". |
|
|
|
| ▲ | kristo 17 hours ago | parent | prev | next [-] |
| Still can’t symlink skills from Claude code to codex tho :/ |
|
| ▲ | exasperaited 20 hours ago | parent | prev | next [-] |
| Argh word creep. It has been published as an open specification. Whether it is a standard isn't for them to declare. |
|
| ▲ | pplonski86 18 hours ago | parent | prev | next [-] |
| Is codex working well with python notebooks? |
|
| ▲ | skillcreator 16 hours ago | parent | prev | next [-] |
Love seeing this become an open standard.
We just shipped the first universal skill installer built on it: `npx ai-agent-skills install frontend-design`. It covers 20 of the most starred Claude skills ever, now open across Claude Code, Cursor, Amp, VS Code: anywhere that supports the spec. Would love some feedback on it: github.com/skillcreatorai/Ai-Agent-Skills |
|
| ▲ | delayedrelease 18 hours ago | parent | prev | next [-] |
| Tired of having to learn the Next New Thing (tm) that'll be replaced in a month. |
|
| ▲ | almosthere 20 hours ago | parent | prev | next [-] |
| how are skills and mcp different? |
| |
|
| ▲ | jameslk 18 hours ago | parent | prev | next [-] |
| “Agent skills” seems more like a pattern than something that needs a standard. It’s like announcing you’ve created a standard for “singletons” or “monads” |
|
| ▲ | user3939382 16 hours ago | parent | prev | next [-] |
| This is the right direction but this implementation is playdough and this needs to be a stone mansion. I’m working on a non-LLM AI model that will blow this out of the water. |
|
| ▲ | foobarqux 20 hours ago | parent | prev | next [-] |
| What is the difference between 3rd party skills and connectors? How do you access/install 3rd party skills in claude code? |
|
| ▲ | alexgotoi 21 hours ago | parent | prev | next [-] |
| Claude's skills thing just leveled up from personal toy to full enterprise push - org admins shoving Notion/Figma/Atlassian workflows straight into the model? That's basically turning Claude into your company's AI front door. The open standard bit is smart though, means every partner skill keeps funneling tokens back their way. But good luck when every PM wants their custom agent snowflake and your infra bill triples overnight. Might add this to the next https://hackernewsai.com/ newsletter. |
|
| ▲ | asadm 20 hours ago | parent | prev [-] |
| Good riddance MCP. |
| |
▲ | observationist 20 hours ago | parent [-] | | That's not what this is. MCP is still around and useful; skills are tailored prompt frameworks for specific tasks or contexts. They're useful for specialization and, in conjunction with post-training once some good data is acquired, will allow the next generation of models to be a lot better at whatever jobs produce good data for training. | |
▲ | adam_arthur 20 hours ago | parent | next [-] | | Local tools/skills/function definitions can already invoke any API. There's no real benefit to the MCP protocol over a regular API with a published "client" a local LLM can invoke. The only downside is you'd have to pull this client first. I'm using local "skill" to mean an executable function, not specifically Claude Skills. If the LLM/agent executes tools via code in a sandbox (which is what things are moving towards), all LLM tools can simply be defined as regular functions that have the flexibility to do anything. I seriously doubt MCP will exist in any form a few years from now. | |
| ▲ | asadm 20 hours ago | parent | prev | next [-] | | I have seen ~10 IQ points drop with each MCP I added. I have replaced them all with either skill-like instructions or curl calls in AGENTS.md with much better "tool-calling" rate. | | | |
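In that spirit, a hypothetical AGENTS.md entry that replaces a GitHub MCP with one documented curl call (the endpoint is GitHub's real REST API; the wording around it is illustrative):

```markdown
## Reading GitHub issues (no MCP)

To fetch an issue, run:

    curl -s https://api.github.com/repos/OWNER/REPO/issues/NUMBER

The response is JSON; the fields you usually need are `title`, `body`,
`state`, and `comments`.
```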
| ▲ | AndyNemmity 20 hours ago | parent | prev [-] | | It isn't particularly useful. It uses a lot of context without a lot of value. Claude has written a blog post saying as much. Skills keep the context out unless it's needed. It's a much better system in my experience. | | |
| ▲ | verdverm 18 hours ago | parent [-] | | Claude did not say don't use MCP because it pollutes the context What they said was don't pollute your context with lots of tool defs, from MCP or not. You'll see this same problem if you have 100s of skills, with their names and descriptions chewing up tokens Their solution is to let the agent search and discover as needed, it's a general concept around tools (mcp, func, code use, skills) https://www.anthropic.com/engineering/advanced-tool-use |
|
|
|