avaer 3 hours ago
Thanks for the research! Though I feel like industry veterans (especially those working with LLMs) came to this conclusion without having to write a single prompt. Even ignoring the technical merits of these kinds of hacks, if you think you've outwitted billions of dollars of statistics with a prompt, you're probably wrong at this point. What I find most interesting is the popularity of this snake oil, especially the plugins that are easy to install and that nobody ever checks. The tech moves so fast, and the research is so scarce and poor-quality, that the bullshit asymmetry principle wins and people buy into these cargo cults. Maybe we need a plugin to check whether a new plugin/prompting technique/LLM lifehack is BS.
max-t-dev 2 hours ago
I think there is some benefit to plugins, though it's hard to say how much. I find the superpowers plugin quite good, mostly for its structured approach to a conversation. Generally, though, they do feel pretty overhyped.
0xbadcafebee 3 hours ago
The thing is, they're not BS when they're released. Prompt engineering was a real thing that got real results, but then the models were retrained and now prompt engineering isn't needed on large models. Techniques are gonna vary over time.
oezi 3 hours ago
Maybe we need a term like "prompt homeopathy" to call out prompt engineering that comes without any empirical proof.