amelius 3 days ago:
I think you cannot really change the personality of an LLM by prompting. If you take the statistical-parrot view, then your prompt isn't going to win against the huge number of inputs the model was trained on in a different personality. The model's personality is in its DNA, so to speak. It has such an urge to parrot what it knows that a single prompt isn't going to change it. But maybe I'm psittacomorphizing a bit too much now.
joquarky 2 days ago:
I liked the completion models because they have no chatter that needs to follow human conversational protocol, which inherently introduces "personality". The only difference from conversational chat was that you had to be creative about how to set up a "document" with the right context that would lead to the answer you were looking for. It was actually kind of fun.
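A minimal sketch of that document-framing trick, in case it's unclear: instead of asking a question conversationally, you write the opening of a document whose most likely continuation is the answer. The FAQ framing and function name here are just one illustrative choice; nothing below calls a real API.

```python
def completion_prompt(question: str) -> str:
    """Frame a question as an FAQ entry so a completion model's most
    likely continuation is a direct answer, not conversational filler."""
    return (
        "Frequently Asked Questions\n"
        "\n"
        f"Q: {question}\n"
        "A:"
    )

prompt = completion_prompt("Why is the sky blue?")
print(prompt)
```

The prompt deliberately ends mid-entry (after "A:"), which is the whole trick: the model isn't being addressed, it's just continuing a document.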
brookst 3 days ago:
Yeah, different system prompts make a huge difference on the same base model. There's so much diversity in the training set, and it's such a large set, that it essentially evens out and the system prompt has huge leverage. Fine-tuning also applies here.
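To make the mechanism concrete, here's a hedged sketch: the same user turn paired with two different system prompts, in the message shape common chat APIs use. The role strings follow that convention; the prompt contents are made up and no model is actually called.

```python
def with_system_prompt(system: str, user: str) -> list[dict]:
    """Pair a user turn with a steering system prompt, chat-API style."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Identical user content, different steering context — the only thing
# that changes between these two requests is the system message.
pirate = with_system_prompt("You are a terse pirate.", "Explain DNS.")
formal = with_system_prompt("You are a formal lecturer.", "Explain DNS.")
print(pirate[0]["content"], "|", formal[0]["content"])
```

The point of the parent comment is that this one small message gets outsized leverage precisely because the pretraining mixture is too diverse to impose a single default persona.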