imiric 19 hours ago
> Look for frantic efforts by companies to offload responsibility for LLM mistakes onto consumers.

Not just by companies. We see this from enthusiastic consumers as well, on this very forum. Or it might just be astroturfing, it's hard to tell.

The mantra is that in order to extract value from LLMs, the user must have a certain level of knowledge and skill in how to use them. "Prompt engineering", now reframed as "context engineering", has become the practice that separates those who feel these tools waste more of their time than they save from those who feel the tools make them many times more productive. The tools themselves are never the issue. Clearly it's the user who lacks skill. This narrative permeates blog posts and discussion forums. It was recently reinforced by a misinterpretation of a METR study.

To be clear: using any tool to its full potential does require a certain skill level. What I'm objecting to is the blanket statement that people who don't find LLMs to be a net benefit to their workflow lack the skills to get one. This is insulting to smart and capable engineers with many years of experience working with software. LLMs are not some alien technology that requires a degree to use correctly. Understanding how they work, feeding them the right context, and being familiar with the related tools and concepts does not require an engineering specialization. Anyone claiming it does is trying to sell you something: either LLMs themselves, or the idea that they're more capable than those criticizing this technology.
rightbyte 16 hours ago
> Or it might just be astroturfing, it's hard to tell.

Compare the hype for commercial SaaS models to, say, Deepseek. I think there is an insane amount of astroturfing.
dmbche 7 hours ago
A simple thought I had reading this: I used a tool to do a task today. I used a suction sandblasting machine to remove corrosion from a part.

Without the machine, had I wanted to remove the corrosion, I would've spent all day (if not more) scraping it with sandpaper (is that a tool too? With the skin of my hands, then?), tediously, millimeter by millimeter. With the machine, it took me about 3 minutes, and it took only 4-5 minutes of training to attain that level of expertise. The worth of this machine is undeniable.

How is it that LLMs are not so undeniably efficient? I keep hearing people tell me how they will take everyone's job, but so far it seems like the first faceplant from all the big tech companies. (Maybe the second, after Meta's VR stuff.)
rgoulter 18 hours ago
A couple of typical comments about LLMs would be: "This LLM is able to capably output useful snippets of Python code. That's useful." and "I tried to get an LLM to perform a niche task in a niche language, and it performed terribly."

I think the right synthesis is that there are some tasks LLMs are useful for and some they're not; practically, it's valuable to know which is which. Or, if we trust that LLMs are useful for all tasks, then it's practically useful to know what they're not good at.
mumbisChungo 19 hours ago
The more I learn about prompt engineering, the more complex it seems to be. But perhaps I'm an idiot.
| |||||||||||||||||||||||||||||||||||||||||||||||
cheevly 17 hours ago
Unless you have automated fine-tuning pipelines that self-optimize models for your tasks and domains, you are not even close to utilizing LLMs to their potential. But stating that you don't need extensive, specialized skills is enough of a signal for most of us to know that offering you feedback would be fruitless. If you don't have the capacity by now to recognize the barrier to entry, experts are not going to take the time to share their solutions with someone unwilling to understand.
ygritte 18 hours ago
The sad thing is that it seems to work. Lots of people are falling for the "you're holding it wrong" narrative.
AnimalMuppet 12 hours ago
It's probably not astroturfing, or at least not all astroturfing. At least some software engineers tend to do this. We've seen it before, with Lisp, and then with Haskell: "It doesn't work for you? You just haven't tried it for long enough to become enlightened!" Enthusiastic supporters assume that if it was highly useful for them, it must be for everyone in all circumstances, and that anyone who disagrees just hasn't been enlightened yet.