FuckButtons 5 hours ago

That’s because it’s superstition.

Unless someone can come up with rigorous statistics on what the effect of this kind of priming is, it seems no better than claiming that sacrificing your firstborn will please the sun god into giving us a bountiful harvest next year.

Sure, maybe this supposed deity really is this insecure and needs a jolly good pep talk every time he wakes up. Or maybe you're just suffering from the magical thinking that your incantations had any effect on the random-variable word machine.

The thing is, you could actually prove it. It's an optimization problem: you have a model, you can generate the statistics. But as far as I can tell, no one has been terribly forthcoming with that, either because those who have tried decided to keep their magic spells secret, or because it doesn't really work.
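Generating those statistics is not exotic: run two prompt variants over the same task set, record pass/fail, and check whether the gap is larger than noise. A minimal sketch, assuming a simple two-proportion z-test on boolean outcomes (the helper name and the toy pass rates are illustrative, not from any real eval):

```python
import math

def compare_prompt_variants(results_a, results_b):
    """Two-proportion z-test on pass/fail outcomes of two prompt variants.

    results_a, results_b: lists of booleans, True = the model's output
    passed whatever check you use (tests pass, answer correct, etc.).
    Returns (rate_a, rate_b, z); |z| > ~1.96 suggests a real difference
    at the 5% level.
    """
    n_a, n_b = len(results_a), len(results_b)
    p_a = sum(results_a) / n_a
    p_b = sum(results_b) / n_b
    # Pooled proportion and standard error under the null hypothesis
    # that both prompts perform the same.
    p = (sum(results_a) + sum(results_b)) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se if se else 0.0
    return p_a, p_b, z

# Toy data standing in for real eval runs: plain prompt vs. "pep talk" prompt.
plain = [True] * 55 + [False] * 45   # 55% pass rate
pep   = [True] * 60 + [False] * 40   # 60% pass rate
rate_a, rate_b, z = compare_prompt_variants(plain, pep)
```

Note that with only 100 samples per variant, even a 5-point gap like this one does not clear |z| > 1.96, which is exactly why anecdotes about magic phrasings are so unreliable.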

If it did work, well, the oldest trick in computer science is writing compilers; I suppose we will just have to write an English-to-pedantry compiler.

majormajor 4 hours ago | parent | next [-]

> If it did work, well, the oldest trick in computer science is writing compilers, i suppose we will just have to write an English to pedantry compiler.

"Add tests to this function" for GPT-3.5-era models was much less effective than "you are a senior engineer. add tests for this function. as a good engineer, you should follow the patterns used in these other three function+test examples, using this framework and mocking lib." In today's tools, "add tests to this function" results in a bunch of initial steps to look in common places to see if that additional context already exists, and then pull it in based on what it finds. You can see it in the output the tools spit out while "thinking."

So I'm 90% sure this is already happening on some level.

stingraycharles 2 hours ago | parent | prev | next [-]

I actually have a prompt optimizer skill that does exactly this.

https://github.com/solatis/claude-config

It's based entirely on academic research, and a LOT of research has been done in this area.

One of the papers you may be interested in is on "emotion prompting": e.g., "it is super important for me that you do X" actually works.

“Large Language Models Understand and Can be Enhanced by Emotional Stimuli”

https://arxiv.org/abs/2307.11760

onion2k 2 hours ago | parent | prev | next [-]

> i suppose we will just have to write an English to pedantry compiler.

A common technique is to prompt your chosen AI to write a longer prompt that gets it to do what you want. It's used a lot in image generation. This is called 'prompt enhancing'.
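The two-stage shape of prompt enhancing is simple: one call expands the terse prompt, a second call runs the expansion. A minimal sketch, with a stub standing in for any real LLM call (the stub and function names are hypothetical):

```python
def enhance_prompt(short_prompt, complete):
    """Two-stage 'prompt enhancing': ask the model to expand a terse
    prompt into a detailed one, then send the expansion as the real
    request. `complete` is a stand-in for any LLM completion call."""
    meta = (
        "Rewrite the following request as a detailed prompt, adding "
        "subject, style, and composition details:\n" + short_prompt
    )
    detailed = complete(meta)   # stage 1: expand the prompt
    return complete(detailed)   # stage 2: run the expanded prompt

# Stub completion function so the sketch runs without an API key.
def fake_complete(prompt):
    return f"[completion of: {prompt[:40]}...]"

result = enhance_prompt("a cat on a roof", fake_complete)
```

In image generation the second call would go to the image model rather than back to the text model, but the pipeline is the same.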

imiric 2 hours ago | parent | prev | next [-]

> That’s because it’s superstition.

This field is full of it. Practices are promoted by those who tie their personal or commercial brand to them for increased exposure, and adopted by those who are easily influenced and don't bother verifying whether they actually work.

This is why we see a new Markdown format every week, "skills", "benchmarks", and other useless ideas, practices, and measurements. Consider just how many "how I use AI" articles are created and promoted. Most of the field runs on anecdata.

It's not until someone actually takes the time to evaluate some of these memes that they find little to no practical value in them.[1]

[1]: https://news.ycombinator.com/item?id=47034087

rzmmm 3 hours ago | parent | prev [-]

I think "understand this directory deeply" just gives more focus for the instruction. So it's like "burn more tokens for this phase than you normally would".