vladsh · a day ago:
Skills are a pretty awkward abstraction. They emerged to patch a real problem: generic models require tuning via context, which quickly leads to bloated context files and context dilution (i.e. more hallucinations). But skills don't really solve that problem. Turning the workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here.
ako · 21 hours ago:
Skills don't solve the problem if you think an LLM should know everything. But if you see an LLM mostly as a plan-do-check-act text machine, one that processes input text, generates output text, and can plan how to gather more knowledge and validate its output without knowing everything upfront, then skills are a perfectly fine solution. The value of standardizing skills is that the skills you define work with any agentic tool. It doesn't matter how simple they are; if they don't work easily, they have no use. You need a practical and efficient way to give the LLM your context. Just as every organization has its own standards, best practices, and architectures that need documenting because new developers don't know them upfront, LLMs also need your context. An LLM is not an all-knowing brain; it's a plan-do-check-act text-processing machine.
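For concreteness, a skill is just a packaged piece of that organizational context. A rough sketch of what one might look like, with frontmatter fields in the spirit of Anthropic's published SKILL.md format (the skill content itself is invented for illustration):

```markdown
---
name: release-notes
description: Draft release notes from merged pull requests, grouped by label.
---

# Release notes

1. Collect the PRs merged since the last release tag.
2. Group them by label; list breaking changes first.
3. Keep each entry to one line and link the PR.
```

Only the name and description need to sit in the agent's base context; the body is read when the skill is actually used.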
brabel · 21 hours ago:
How would you solve the same problem? Skills seem to be just a pattern (one that existed before this spec) that lets the LLM choose what information it needs to "load". It's not that different from a person looking up the relevant literature before doing a particular job, rather than rereading every book every time in case it comes in handy one day. Whatever you do, you will end up with the same kind of solution; there's no way to just hand the LLM all potentially useful context beforehand.
root_axis · 21 hours ago:
> it's unclear what is their endgame here

Marketing. That defines pretty much everything Anthropic does beyond frontier model training. These are the same people producing sensationalized research headlines about LLMs trying to blackmail people in order to avoid being deleted.
verdverm · 20 hours ago:
> Standardizing a patch isn't something I'd expect from Anthropic

This is not the first time; perhaps an expectation adjustment is in order. This is also the same company with an exec telling people on his Discord (15 minutes of fame recently) that Claude has emotions.
wuliwong · a day ago:
> But skills don't really solve the problem.

I think they often do solve the problem; they just come with some other side effects and trade-offs.
theshrike79 · 21 hours ago:
They're not a perfect solution, but they are a good one: the best we have come up with so far.