balder1991 | 5 days ago
Yeah, if you go to a subreddit like ClaudeAI, you can convince yourself there's something you don't know, because they keep telling people that if the LLM isn't turning them into billionaires, it's their prompting that's at fault. But read more of the comments and you see it's really just different interpretations from different people. Some "prompt maximalists" believe that perfect prompting is the key to unlocking the model's full potential, and that any failure is a user error. They tend to be the most vocal, and they create the sense that there's a hidden secret or "magic formula" you're missing.
Jensson | 5 days ago | parent
It's basically stone soup. People won't believe it can be done, but then you put a stone in water and boil it, and tell people that if they aren't getting a nice soup they aren't doing it right: just add all these other ingredients that aren't required but really help, and you get this awesome soup! Then someone says that isn't stone soup, they just did all the work without the stone. But that's just a stone hater, how can you not see this awesome soup made by the stone?