Agent Skills (addyosmani.com)
185 points by BOOSTERHIDROGEN 8 hours ago | 75 comments
wg0 an hour ago | parent | next [-]

Snake oil. Good to read for sure. Seems all plausible too. But snake oil nevertheless.

Here's why: the slot machine can drop any hard requirement that you specify in your AGENTS.md, memory.md, or your dozens of skill markdowns. Pretty much guaranteed.

These harness approaches pretend that LLMs are strict, perfect rule followers and that the only problem is not being able to specify enough rules clearly enough. That's a fundamental misunderstanding of how LLMs operate.

That leaves only one option, not reliable but more reliable nevertheless: human review and oversight. Possibly two rounds of it, one after the other.

Everything else is snake oil. But at that point you also realize that the promised productivity gains are snake oil too, because reading code and building a mental model is way harder than having a mental model and writing it into code.

cortesoft 18 minutes ago | parent [-]

Everything you say is possible, and in theory I agree with you.

However, I have been using spec-kit (which is basically this style of AI usage) for the last few months and it has been AMAZING in practice. I am building really great things and have not run into any of the issues you are talking about as hypotheticals. Could they eventually happen? Sure, maybe. I am still cautious.

But at some point, once you have personally used it in practice for long enough, you can't just dismiss it as snake oil. I have been a computer programmer for over 30 years, and I feel like I have a good read on what works and what doesn't in practice.

wg0 10 minutes ago | parent [-]

We can build all the scaffolding we want, but I assure you the fundamental problem remains: LLMs aren't perfect rule-following machines, and that won't change.

Give it a few more months and I'm sure you'll see some of what I see if not all.

I'm saying all of the above having tried and tested all sorts of systems with AI; that experience is what leads me to say what I said.

ai_fry_ur_brain 3 hours ago | parent | prev | next [-]

Can't wait for everyone to realize they've wasted a year-plus messing with agents and experiencing a feeling of pseudo-productivity.

cortesoft 4 minutes ago | parent | next [-]

I can understand skepticism to a degree, and even fundamentally believing that AI is bad for all sorts of reasons, but I am becoming more and more perplexed at the certainty behind statements like this one. How are you so certain that AI development is doomed? It just hasn't matched my experience at all, and I wonder what experience has driven you to this level of certainty about the doom of AI coding?

Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things and feel confident that you have explored the space enough to come to such a strong conclusion?

I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments that have changed the way I do what I do numerous times. The more experience and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested with real-world production workloads.

You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.

tokioyoyo 31 minutes ago | parent | prev | next [-]

I’m a bit curious about these takes. Arguing in good faith: is the general assumption that people who use AI/agents/harnesses don’t ship features? We’ve been all in on Claude Code since ~Septemberish, and have been able to successfully track the boost, i.e. the features we ship that get used in production. Both on the infrastructure side and in business logic implementations, frontend and backend.

I don’t think people are wasting too much time. Although I do agree most of these posts are just BS, including this one. But AI development has been a thing across a lot of companies in the world.

bot403 25 minutes ago | parent | next [-]

Ignore the people who haven't found out how to use ai yet or don't want to.

AI is a powerful tool. Depending on what I need I use chatgpt, in-ide agents, or a platform like Devin.ai.

I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark and I scale back and have it do a specific piece and I'll do the rest.

Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.

I've solved customer issues in seconds and minutes with it vs hours.

I worked on a banking app with deeply domain-specific data issues. AI was not very helpful on that team. My current work on consumer web apps means my problems are more mundane, and AI is a big accelerant.

Being an engineer means solving problems with the right tools and the right tradeoffs. It's why I use an IDE vs Notepad, why I use ChatGPT for one-off scripts and "chat", and why I use agentic workflows for big, repetitive, or "boring" low-stakes tasks.

swyx 15 minutes ago | parent | prev [-]

> have been able to successfully track the boost.

let's get nitty-gritty on this - can you say how you did this? Because a lot of people think this is an unsolved problem.

_sharp 2 hours ago | parent | prev | next [-]

Right, just like all the productivity lost when people stopped using paper ledgers to mess around with these so-called 'databases'

c0rruptbytes 2 hours ago | parent | prev | next [-]

i treat it like Minecraft automation - it's just for funsies and to pass the time haha

I don't think agentic workflows are there yet, but implementing skills to manually call and use while working side by side with an AI is definitely nice - our company is focused a lot on sandboxing right now and having safe skills

I don't think we've gotten feature development down yet, but the review skills + Grafana skills they wrote have been pretty solid.

wg0 4 minutes ago | parent | prev | next [-]

This will be another Microservices moment in our industry.

0000000000100 2 hours ago | parent | prev | next [-]

The trick is to not burn too much time worrying about the perfect skills and this and that. I see a lot of people filling skills with LLM junk, or overdoing rules that start confusing the LLM. Just try it vanilla; see something you don't like? Then make a skill and funnel the LLM to use it for the style of task it's working on. E.g. database work is a mixed bag with LLMs; they tend to do the work in totally different styles if you leave them unconstrained.
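To make that concrete, a narrow "funnel" skill for the database case might be nothing more than a short markdown file. This is a hypothetical sketch (the name, make target, and paths are all made up; real conventions vary by project and harness):

```markdown
---
name: db-migrations
description: Use whenever creating or modifying database migrations or schema.
---

# Database work conventions

- Generate migrations with `make migration name=<slug>`; never hand-write the version prefix.
- Use the query builder in `app/db/`; raw SQL only for documented performance cases.
- Every schema change ships with a down-migration and a test touching the new column.
```

A few hard constraints like this tend to pin down the "style of task" without the bloat of an essay-length skill.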

Agents are unbelievably useful at helping take over and refactor messy codebases, though. I just started taking over this monstrous nightmare of a codebase, truly ancient code, the bulk of it written over 10+ years ago in PHP. With the use of Claude / Codex I was able to port over the vast majority of the existing legacy storefront and lay the groundwork for centralizing the 10-20k LOC mega-controller logic into reusable repo/service patterns.

Just shit that would've taken years previously is achievable in under a month.

BOOSTERHIDROGEN 42 minutes ago | parent [-]

This.

Everything needs an element of human touch, so I mostly only run things vanilla. But if, let’s say, I’m creating backup scripts, I meticulously outline the plan.

pantheragmb 2 hours ago | parent | prev | next [-]

I couldn't agree more, just because I know I already wasted months and pulled the plug :D

slopinthebag an hour ago | parent | prev | next [-]

They will lie to themselves and deny it.

wahnfrieden 3 hours ago | parent | prev | next [-]

You haven’t made money from their use yet?

nothinkjustai 3 hours ago | parent | prev [-]

You’ll get downvoted for this hearsay!

footy 2 hours ago | parent [-]

I think you mean heresy. But maybe I don't get the reference you're making when you say hearsay

bot403 16 minutes ago | parent | next [-]

I'm wondering if there are anti-ai bots trolling the boards. Look at all the usernames of the negative AI posts.

Or maybe the only people left opposing AI are so hardcore against it they form their identity (username) around it

IncRnd an hour ago | parent | prev [-]

Hearsay is a rumor or something that can't be verified.

dmix 2 hours ago | parent | prev | next [-]

I've tried these larger agent skillsets in the past and felt it was a waste of time because it was just doing too much. Just like vim it's often better to pick and choose from the community instead of installing skills like they are an IDE. Skills are way too personal because every dev and dev team is different. So better to treat these as a reference for your own config rather than bulk install someone else's config.

cortesoft 16 minutes ago | parent | prev | next [-]

What makes this better/different than spec-kit? It seems to have a very similar philosophy. I wonder if they could work together? Or would they just be duplicative?

https://github.com/github/spec-kit

thatmf 2 hours ago | parent | prev | next [-]

Why are people so excited to put themselves out of a job?

Not that these or any "skills" will do that, but just- in principle. This is like alienation from labor at scale.

hibikir 2 hours ago | parent | next [-]

Because we've been automating large parts of our former jobs for decades. Otherwise we'd all be trying to build things in the least efficient way possible to maximize how long the job takes, which IMO isn't a great idea.

Humans have been minimizing how much work is needed to get a certain level of output for as long as we can track. It's civilization. Should we go back to farming by hand with hoes, to maximize labor used? Go back to streetlights that are individually lit? The society that falls behind on automation becomes poorer, and eventually just dies, as even the people born there tend to choose to leave to higher productivity places. It happened to eastern europe, it happens to the Amish. To any poor society which gets emigration. Doing more with less has always been exciting.

dewey an hour ago | parent | prev | next [-]

Because usually the people who lose their jobs are people who do not adapt to the market.

Right now it's not clear in which direction everything is evolving, and that's why people experiment with handing all their data to random agents, figuring out how to store and access context, re-using prompts, and other attempts to harness this tech. Most of these will maybe be useless in a year as they might be deeply integrated into the next wave of models, but staying on top of the development has always been part of the fun of working in this field.

kiba an hour ago | parent [-]

People are building bots to do the most legible thing possible, which is shipping a feature in X amount of time. But that doesn't matter if the bottleneck is the human thinking time required to produce quality code rather than the amount of code written.

clapthewind 2 hours ago | parent | prev | next [-]

Some people are playing the global optimization game; a world where anyone can have any (production grade) software they want.

yieldcrv 34 minutes ago | parent | prev | next [-]

Month 30 of software engineers not existing in 6 months

cuteboy19 43 minutes ago | parent | prev [-]

people are now being encouraged to use ai notetaking features under the guise of productivity.

a worker is just the sum total of all work related context. to collate, verify and organize this context is just asking to be replaced.

CharlesW 6 hours ago | parent | prev | next [-]

From an SEO/LLMO perspective, the discoverability of these skills will be difficult without a rename: https://agentskills.io/

If Addy reads this, how do you pitch this vs. Superpowers? https://github.com/obra/superpowers

consumer451 5 hours ago | parent | next [-]

I would love to know how many people are actually using superpowers.

I showed up on the agentic dev scene prior to superpowers, and I am getting concerned that >50% of my self-rolled processes are now covered by superpowers.

I no longer trust gh stars, can anyone chime in? Is superpowers now truly adopted?

If it is truly valuable, why hasn't Boris integrated the concepts yet?

supermdguy 2 hours ago | parent | next [-]

I've used it off and on over the last month or so. For more complicated tasks (30+ minutes) it works well, and seems to replace a lot of prompting that I'd normally need to do (e.g. asking questions about requirements, creating specs and implementation plans, staying on task). For simple tasks, it tries to do too much and gets in the way.

marcus_holmes 4 hours ago | parent | prev | next [-]

I adopted superpowers, but then adapted it. I've changed some things, added some things. I suspect that my set of agent skills is probably overlapping with OP's by quite a lot now.

I also found that I have different skills for different tasks; at work security is a huge concern and I over-emphasise security in the skills. At play I'm less bothered about security and so the skills I've written to help me build stupid one-shot exploratory websites are less about security and more about refactoring and exploring concepts.

RideOnTime22 3 hours ago | parent | prev | next [-]

It's just the new thing.

People were hyping up Oh My Opencode. When they realized it didn't lead to any significant gains in performance they hopped on the next thing.

And when the same thing happens to Superpowers, it'll be something else they cling to, because "this time it's different".

nullstyle 5 hours ago | parent | prev [-]

I just removed superpowers from my own setup. In my opinion, given the quality of the planning modes in both claude code and codex, superpowers was really just slowing things down and burning more tokens than vanilla.

ramoz an hour ago | parent | next [-]

It never worked well for me. The only thing I really needed outside of the harnesses was a better plan review surface. https://github.com/backnotprop/plannotator

consumer451 5 hours ago | parent | prev | next [-]

Thank you for the data point.

To give back as much as I can, I use the two built-in CC review processes when appropriate. But, those only do "is this PR good code?"

Far too late did I finally roll my own custom review skill that tests: "does this PR accomplish what the specs required?"

If I could ask for one more vanilla CC skill, it might be that. However, maybe rolling your own repo-aware skill via prompt is better?

horsawlarway 3 hours ago | parent | prev [-]

anecdata, but I ended up in the same spot.

I used superpowers - but it burns waay more tokens for basically the same outcome as a single line that states

"Please do planning and ask any required questions before implementing.

[my prompt]"

On the latest models and with a decent harness, the planning modes are quite good, and the single sentence telling it to ask you questions lets the model pick the right thing to ask about, instead of wasting a bunch of time/tokens on predefined skills that try to force basically the same result.

It does introduce a second set of required interactions, but you can have another agent be your "questions answerer" if you need it (result quality goes down a bit vs answering myself, but still quite good, especially if you spend a bit of time on the answerer prompt)

Basically - things are moving fast enough I'm not convinced buying into superpowers/agentskills/[daily prompt magic beans]/etc tooling really makes sense.

I'd stick to the defaults in the harness for most cases, and then work on being clear with the ask.

ssgodderidge 2 hours ago | parent | prev | next [-]

This is like creating a React framework called ReactJS to compete with NextJS

esafak 5 hours ago | parent | prev | next [-]

Looks like a bunch of canned skills served through a plugin?

ricardobeat 5 hours ago | parent | prev [-]

Does superpowers actually work? The main skill file doesn't inspire much confidence:

    "If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill."
CharlesW 4 hours ago | parent [-]

This kind of "overprompting" is one technique that even the best skills/agents use to compensate for under-invocation, which happens because more demure advisory language tends to be rationalized away by LLMs.

It shouldn't be your default, but should absolutely be tried when your skill/agent test suite displays evidence that it's not being reliably invoked without it.

turlockmike 5 hours ago | parent | prev | next [-]

The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.

If the LLM fails, either you didn't describe your outcome sufficiently, or it misinterpreted what you said, or it couldn't do it (rare).

Common errors should be encoded as context for future similar tasks, don't bloat skills with stuff that isn't shown to be necessary.

stingraycharles 4 hours ago | parent | next [-]

> The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.

This is not true for anything complex. They’re instruction followers, of which task completion is just one facet.

They’re also extremely eager to complete tasks without enough information, and do it wrongly. In the case of just describing task completion, despite your best efforts, there are always some oversights or things you didn’t even realize were underspecified.

So it helps a lot to add some process around it, eg “look up relevant project conventions and information. think through how to complete the task. ask me clarifying questions to resolve ambiguities. blah blah”. This type of prompt will also help with the new Opus 4.7 adaptive thinking to ensure it thinks through the task properly.

stult 4 hours ago | parent [-]

Agreed, and further, I'd argue the OP's division of LLM instructions into either process or outcome specification is a false dichotomy. My agentic process specification is about automatically specifying the outcomes that I would otherwise repeatedly have to tell the LLM to consider, like making sure test coverage is maintained, or that decisions are documented on the original Github issue. Or it's about correcting common failure modes, like when the agent spends an enormous amount of time running repo-wide tests while debugging a focused change, because the agent doesn't consistently optimize around the time-to-implement as an outcome. Arguably part of addressing those failure modes boils down to pure process in the sense that I specify a logical order for achieving the outcomes, e.g. creating a plan before implementing. But that is mostly to organize approval gates for my convenience, rather than structuring the agent's work per se.

tecoholic 4 hours ago | parent | prev | next [-]

If there is anything we have learned in decades of software engineering, it's that "a clear outcome" is not easy to describe. In many cases it's impossible unless people from 4 different domains collaborate. That's why process matters. It allows software to be built in a "semi-standardized" way that lets iterations get us closer to the expected outcome, which might only emerge over time.

Yes, not everything I use LLMs for is going to have the same level of ambiguity or complex requirements. Optimizing by choosing to skip over parts of the process is exactly what Addy is talking about in this article.

alexjurkiewicz 4 hours ago | parent | prev | next [-]

I agree that many skills are overblown and unnecessary. But there's a lot of value in giving AI the right process. See how much more effective Claude can be for moderate or large changes when using the superpowers skill.

tmaly 4 hours ago | parent | prev | next [-]

Sometimes people don't know what they want.

I prefer the start small and iterate approach to arrive at a result.

Then I ask it to summarize. Sometimes after that I ask it to generalize.

peab 4 hours ago | parent | prev | next [-]

A skill is just reusable/shareable context. It's just text, really. It's useful for things like documentation on how to use an API (this works better than MCP, in my opinion), or a non-consensus way of doing something. For example, you can use Remotion to generate video. There are useful Remotion skills that let you reliably generate specific types of videos: captions of a certain style, for example.

markbao 4 hours ago | parent | prev [-]

That seems a bit reductive. Even with humans, there’s a range of interpretations and ways that something can be built or a task completed. Engineers remember stuff so you don’t have to keep repeating yourself. Skills are a way to describe your outcome without similar repetition.

koliber an hour ago | parent | prev | next [-]

Lately I keep hearing the same thing over and over: the things that are good for managing a team of devs are good for LLMs.

Good test cases.

Clear and concise documentation.

CI/CD.

Best practices and onboarding docs.

Managing LLMs is becoming more and more similar to managing teams of people.

tempoponet 23 minutes ago | parent [-]

Similarly, the agentic coding success stories are from orgs that had all of these things out of the gate.

theahura an hour ago | parent | prev | next [-]

I really wish he wouldn't use AI to write his posts. It would be faster to just post the prompt he used to write the article

petesergeant an hour ago | parent [-]

I wish this fucking meme of "post the prompt" would die. Very little work is one-shotted, very little has a singular "the prompt", most is iterated until it's close to the vision of what the author actually set out to write.

SudheerTammini 2 hours ago | parent | prev | next [-]

Recently I got (enterprise) access to the latest ChatGPT model with the ability to write skills to automate repeatable tasks. Without any prior knowledge I just started tinkering, and now, after creating and testing multiple skills in a real business environment, I can confidently say that writing a good skill is a skill itself. As the author mentioned, it's not an essay but a specific set of instructions, organised in steps and in a concise manner.

konaraddi 2 hours ago | parent | prev | next [-]

There are so many ways, many of them redundant, to set up agents for software development that, beyond personal/team/org needs and tastes, I need to look into setting up some benchmarks to evaluate which setup is optimal, or whether the differences are even worth it.

zmmmmm 5 hours ago | parent | prev | next [-]

I was surprised how long some of these skills are. They are pages and pages long with tables and checkbox lists and code examples, etc.

Curious how normal that is - it would only take a couple of these to really fill the context a lot.

gwerbin 3 hours ago | parent | next [-]

I quickly skimmed and it looks like at least a few of them are intended to be more like system prompts for a tightly scoped sub-agent than a skill as such. I agree, I wouldn't want to use a lot of these in a longer-running work session.

I have been successful with short and focused skills so far. I treat them as a reusable snippet of context, but small ones. For example a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually provide the agent instructions, they merely contain useful contextual information that the agent can choose to pull in if needed.

Even having too many skills can be an issue because the list of skill names and their descriptions all end up in the context at some point.

tecoholic 4 hours ago | parent | prev | next [-]

I have written zero skills, so I'm not sure how normal it is. I counted the words in a couple of them and they seem to be in the 2k range, so 5 skills would be around 10k words. Even at a small LLM context of 128k, that's still only around 10%. And for a 1M context window like the big ones, it barely registers.
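Treating those word counts as rough token estimates, the arithmetic can be sketched in a few lines (the 1.3 tokens-per-word figure is a common rule of thumb for English prose, not a measurement):

```python
# Back-of-envelope: how much context do several full skill files consume?
WORDS_PER_SKILL = 2_000   # from counting a couple of published skills
TOKENS_PER_WORD = 1.3     # rough English words-to-tokens conversion
N_SKILLS = 5

skill_tokens = int(N_SKILLS * WORDS_PER_SKILL * TOKENS_PER_WORD)  # ~13,000 tokens

for window in (128_000, 1_000_000):
    # → roughly 10.2% of a 128k window, 1.3% of a 1M window
    print(f"{window:>9,}-token window: {skill_tokens / window:.1%} used by skills")
```

So the ~10% figure holds up for small-context models, and shrinks to noise at 1M.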

umeshunni an hour ago | parent | prev | next [-]

> it would only take a couple of these to really fill the context alot.

Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without 1000s of skills.
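For reference, that front-matter is just the YAML header at the top of a SKILL.md; only these few lines sit in context until the skill actually fires. A sketch (the field names follow the common convention, but the example values are made up):

```yaml
---
name: ui-testing
description: Run and debug Playwright UI tests. Use when a change
  touches components under src/ui/ or a UI test fails.
---
# The body below the front-matter is only read once the skill is invoked.
```

So a 500-line skill body costs nothing until the model decides, from the description alone, that it's relevant.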

sergiotapia 4 hours ago | parent | prev [-]

I reviewed the line counts of my own project skill files, and the top 3 I have are:

    805 lines
    660 lines
    511 lines
Maybe I am _too_ conservative here. Lots to explore.
mohamedkoubaa 3 hours ago | parent [-]

No, you aren't.

codemog 3 hours ago | parent | prev | next [-]

Everyone who writes this kind of stuff skips the boring parts: science and engineering.

Yep, benchmarks, comparisons of with/without, samples of generated code with/without. This kind of stuff matters, and you may be making your agent stupider or getting worse results without real analysis.

Also this prose reads like the author has drunk the Google kool-aid and not much else.

ElijahLynn 6 hours ago | parent | prev | next [-]

I've been using Agent Skills on a new side project and I'm really impressed so far! It really holds my hand a lot of the way and really lets me focus on developing a product instead of figuring out how to build it. I get to focus much more energy on high level architecture and product design.

Very grateful for this repository and everyone who contributed to it!

senko 5 hours ago | parent | prev | next [-]

> This isn’t a coincidence. It’s the same SDLC every functioning engineering organisation runs, just in different vocabulary. [...] Amazon calls it the working-backwards memo and the bar raiser. Every healthy team has some version of this loop.

This (sdlc == working backwards & bar raiser) is so horribly wrong, that I hope this was an LLM hallucination.

In general, I'm starting to see these agent scaffolding systems as an anti-pattern: people obsess over systems for guiding agents and construct elaborate rube-goldberg machines and then others cargo-cult them wholesale, in an effort to optimize and control a random process and minimize human involvement.

yks 5 hours ago | parent | next [-]

The problem is it’s so rarely A/B tested, definitely not at scale. An engineer, who writes all these my-workflow-but-for-agents skills, proceeds to get the good outcome, while also seeing affirmations that the agent did follow the prescribed processes - that is considered a victory. In reality the outcome could’ve been just as good if they fed Claude a spec + acceptance criteria, or even a basic prompt for the simpler tasks.

AndyNemmity 4 hours ago | parent [-]

Yeah, I Blind A/B test everything, and a lot.

But I don't expect anyone to ever use my stuff. It's complicated as hell. But it's for me, and it works without me having to remotely think about the complexity.

I love that.

BOOSTERHIDROGEN 5 hours ago | parent | prev [-]

This is similar to how we collectively approached Taylorism, isn't it? But the world favors capitalism, for which Taylorism becomes handy scaffolding.

gavmor 5 hours ago | parent | prev | next [-]

Naming things is such a hard problem that many devs don't even bother trying.

That being said, this post is full of reasonable assertions, so I'm looking forward to experimenting with this... whatever it is.

fragmede 4 hours ago | parent [-]

Wait, shit, are people using LLMs to name things now? I'm definitely out of a job then!

y-curious 6 hours ago | parent | prev | next [-]

Thanks for this, going to steal a lot of this. I would install your plugin, but I worry about being able to delete it later. I also think that each one of these is better served customized to a developer. That said, I'm still going to grab some of these, thanks!

bvirkler 4 hours ago | parent [-]

A plugin is just a set of files, right? Why wouldn't you be able to delete it later?

gosukiwi 5 hours ago | parent | prev | next [-]

I wonder how this compares to superpowers.

AndyNemmity 4 hours ago | parent | prev | next [-]

This is why I created the /do router, to route to all skills. I also have anti rationalization, progressive context discovery etc.

I only make it for me, so it's a bit complex and targeted towards me, and what I do, but it's pretty easy to adjust things.

https://github.com/notque/vexjoy-agent

I'm working on reading through Agent Skills; it seems we've converged on a lot of the same points. I'd never seen it before, so I'm trying to get an understanding of it.

Edit 1: I don't like all the commands. I just rely on a single router to automatically decide what I want, and that feels like the most reasonable way to me to communicate with it.

I don't want to remember things. And that's the way for me to scale the number of skills and activities. I don't have to think about them.

Edit 2: We have very different routers.

https://github.com/addyosmani/agent-skills/blob/f504276d8e07...

vs

https://github.com/notque/vexjoy-agent/blob/main/skills/do/S...

I personally wouldn't call theirs an intelligent router. They are dancing between a few different skills. We have extremely different setups there.

But of course, I'm using way more context to get it done. I'm even sending it out to Haiku to build the route choices.

I choose to use tokens to make things better for myself, not everyone would make the same choice, so I certainly see why they are using a few skills, and composing them.

Edit 3: This is much easier for a user to wrap their head around because there's much less.

I am only focused on the best improvements I can make that show value for my use cases. This is straightforward to reason about.

This seems like a nice way to get the best concepts for people trying to understand them. I commend them for a clean, simple approach.

Edit 4: Yeah, I think there are some things I can learn from them which is always good.

I especially like simple decisions like collapsing the install details for each harness in the readme.

I'm going to read over the entire thing and look for opportunities to improve my stuff.

We are all working together, learning, testing, building, trying to find the best way to implement things.

encoderer 7 hours ago | parent | prev [-]

I adopted a couple of these, the api design and ui testing ones have been particularly helpful.