jrimbault · 11 hours ago
The issue is probably that the first sentence of the prompt statistically looks like fantasy (as in the literary genre), which primes the LLM to answer in the same genre. You're giving it an /r/WritingPrompts/ prompt, and it answers the way it learned to from there.
beklein · 10 hours ago
I just want to second this. Your prompt asks for a description, so you get a description. If you instead ask something like "Do you or don't you know about the unspoken etiquette ...", you'll get an answer about whether that specific thing exists. https://chatgpt.com/share/680b32bc-5854-8000-a1c7-cdf388eeb0...

It's easy to blame the models, but often the issue lies in how we write our prompts. No personal criticism here, I fall short in this way too. A good tip is to send the model a follow-up containing the original prompt, its reply, and the reply you expected, and ask it why the prompt didn't work. We'll all get better over time (humans and models).
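To make that follow-up tip concrete, here's a minimal sketch of the loop, assuming the OpenAI Python SDK; the model name and the critique wording are illustrative, not anything from the thread:

    # Sketch of the "ask the model why the prompt failed" loop.
    # Assumes the OpenAI Python SDK and an API key in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical original prompt, phrased as a direct question.
    prompt = "Do you or don't you know about the unspoken etiquette of this practice?"
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Feed prompt + reply + expected reply back and ask for a critique.
    critique = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"I asked: {prompt}\n"
                f"You answered: {reply}\n"
                "I expected a direct yes/no about whether this practice exists. "
                "Why didn't my prompt produce that, and how should I rephrase it?"
            ),
        }],
    ).choices[0].message.content
    print(critique)

The second call is the whole trick: the model critiques the prompt with the full exchange in context, which tends to surface exactly the genre-priming issue described above.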
alissa_v · 11 hours ago
Good catch! That makes a lot of sense. The fantasy-like phrasing probably directed the AI's response. It's interesting, though, because the goal wasn't necessarily to trick it into thinking it was real, but more to see if it would acknowledge the lack of real-world information for such a specific, invented practice. |