rahimnathwani 10 hours ago

This is unsurprising and irrelevant.

When you create a skill for a particular model, you don't typically ask the model to create the skill based solely on its own latent knowledge. Otherwise, you'd expect the effect to be similar to telling the model 'make a plan before acting, make no mistakes'.

But that's what the paper's authors did!

When they say 'self-generated' they don't allow the model any tool access at all, not even web search.

It would be much more interesting if they had tested skills that were created in one of these ways:

A) The model interviews a human and then creates the skill, or

B) The model executes one or more deep research tasks in order to gather information, or

C) Some combo of the above.
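
For what it's worth, option A is straightforward to sketch: have the model run a short interview loop to gather context first, and only then ask it to write the skill. A rough Python sketch is below; call_model is a hypothetical stand-in for whatever completion API you actually use, and SKILL.md is just an illustrative output format, not something the paper or any particular tool prescribes.

    def call_model(prompt: str) -> str:
        # Hypothetical stand-in for your chat-completion API of choice.
        raise NotImplementedError

    def author_skill_via_interview(task: str, max_questions: int = 5) -> str:
        # Option A: the model interviews a human expert first, so the skill
        # is grounded in context it does not have in its latent knowledge.
        notes = []
        for _ in range(max_questions):
            question = call_model(
                "You are preparing to write a reusable skill for this task:\n"
                + task
                + "\n\nNotes gathered so far:\n"
                + ("\n".join(notes) if notes else "(none)")
                + "\n\nAsk the single most useful clarifying question, or reply DONE."
            )
            if question.strip().upper() == "DONE":
                break
            answer = input(question + "\n> ")  # the human expert answers
            notes.append("Q: " + question + "\nA: " + answer)

        # Only now is the model asked to write the skill, with the gathered
        # context in the prompt rather than latent knowledge alone.
        return call_model(
            "Using the interview notes below, write a SKILL.md for the task.\n"
            "Task: " + task + "\n\nInterview notes:\n" + "\n".join(notes)
        )

The structure is the point, not the specifics: the context-gathering step (interview, web search, deep research) happens before the skill is generated, which is exactly what the paper's 'self-generated, no tools' condition rules out.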

cheema33 8 hours ago

> This is unsurprising and irrelevant. When you create a skill for a particular model, you don't typically ask the model to create the skill based solely on its own latent knowledge.

This!

The only surprising part about the paper is that somebody wrote a paper on skills without a good understanding of the topic.

therealdrag0 5 hours ago

Modern science encourages publishing non-surprising results.

Also, I've seen my manager LARP as an engineer by asking a model to generate a best-practices doc for a service repo without supplying any additional context. So this sort of paper helps discourage that behavior.

stitched2gethr 6 hours ago

I had to scroll too far to find this take. 100%.

This is like saying the CLAUDE.md or AGENTS.md is irrelevant because the LLM generated it.

zahlman 9 hours ago

> Otherwise, you'd expect the effect to be similar to telling the model 'make a plan before acting, make no mistakes'.

Have there not been previous iterations of these tools where such techniques were actually effective?

gwern 7 hours ago

But that's a reason you should expect it to stop working soon, just like all the older tricks such as "my grandmother will die". If you have a universal 'blind' prompt which can increase performance a little bit... the AI labs can just toss that into the training loop to teach the model to do it automatically, whatever 'it' was, like 'trying harder' or 'writing down a useful idea'. And then the prompt stops working, because the next generations do it by default.

(This also suggests that you should expect them to be generally bad at judging novel self-generated prompts/skills - if they could judge those, they would already be using them! There is a generator-verifier gap, but it's already exploited heavily during post-training, and there isn't much low-hanging fruit left there.)

zahlman 4 hours ago

> But that's a reason you should expect it to stop working soon

I agree. (And it seems like it has already stopped working, if I understood others here correctly.)

But again, if I understood others here correctly, an academic paper like this would necessarily be studying models that are well behind the leading edge at the time of publication. My argument is that the study's authors shouldn't be faulted for investigating something that currently seems unlikely to work, because at the time of the investigation it would have seemed much more likely to work.

rahimnathwani 9 hours ago

Yes, but this paper studied recent models.