coffeefirst 5 days ago
Agreed. I keep trying stuff because I feel like I’m missing whatever magic people are talking about. So far, I’ve found nothing of value besides natural language search.
balder1991 5 days ago | parent
Yeah, if you go to a subreddit like ClaudeAI, you convince yourself there’s something you don’t know, because people there keep insisting that if the LLM isn’t turning you into a billionaire, the fault lies with your prompts. But read more of the comments and you see it’s really just different people with different interpretations. Some “prompt maximalists” believe perfect prompting is the key to unlocking the model’s full potential, and that any failure is user error. They tend to be the most vocal, which creates the sense that there’s a hidden secret or “magic formula” you’re missing.