CivBase 2 hours ago

This seems pretty obvious, no?

It's pattern matching on training material. There is almost certainly an overlap between positivity and success in the training material. Positive prompts bias the pattern matching toward positivity and therefore toward more successful material.

lamasery 2 hours ago

The training or system prompts have shoved the probabilities toward a space that tends to select “halt” sooner. You need to drag the probability weights around until they are less likely to reach “halt” so soon.
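As a toy illustration of what that "dragging" means mechanically (made-up logits, not from any real model): subtract a bias from the "halt" token's logit and its softmax probability drops, so sampling is less likely to stop there.

    # Toy sketch with made-up numbers: how a logit shift changes the chance
    # that the next sampled token is the "halt" / end-of-sequence token.
    import math

    def softmax(logits):
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical next-token logits; index 0 is the "halt" token.
    logits = [2.0, 1.5, 1.0, 0.5]
    print("p(halt) before bias:", round(softmax(logits)[0], 3))  # ~0.455

    # "Dragging the probability weights around": push the halt logit down.
    logits[0] -= 2.0
    print("p(halt) after bias: ", round(softmax(logits)[0], 3))  # ~0.102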

Nice language often sorta does this for whatever model(s) they looked at, and is also something people are likely to try. Probably lots and lots of nonsense token combos would work even better, but who’s gonna try sticking “gerontocratic green giant giraffes” on the end of their prompts to see if it helps?

Positive or negative language is also generic enough that it probably doesn't drag the probabilities away from the correct topic. A nonsense suffix like the one above might only be ultra-effective when the topic is catalytic converters, for some reason, and otherwise push the output toward tokens about giraffes. How would you ever discover the dozens or thousands of more-effective but only-sometimes-effective nonsense token combos? You'd need automation and a lot of brute force, or some better way to analyze the model's weights.
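If you did want to brute-force it, the search loop itself is trivial; the hard part is scoring. Something like the sketch below, where generate() and score_completion() are hypothetical stand-ins for a real model call and a real benchmark, not actual APIs:

    # Rough sketch of brute-forcing nonsense suffixes; generate() and
    # score_completion() are hypothetical placeholders, not real APIs.
    import random

    WORDS = ["gerontocratic", "green", "giant", "giraffes", "please", "carefully"]

    def generate(prompt: str) -> str:
        """Placeholder: call whatever model you're probing."""
        raise NotImplementedError

    def score_completion(completion: str) -> float:
        """Placeholder: score against whatever success metric you care about."""
        raise NotImplementedError

    def search_suffixes(base_prompt: str, trials: int = 1000):
        # Try random word combos appended to the prompt, keep the best scorer.
        best_score, best_suffix = float("-inf"), ""
        for _ in range(trials):
            suffix = " ".join(random.sample(WORDS, k=3))
            score = score_completion(generate(f"{base_prompt} {suffix}"))
            if score > best_score:
                best_score, best_suffix = score, suffix
        return best_score, best_suffix

And even then, as above, a suffix that wins on one batch of prompts may only be winning because of the topics in that batch, so the scoring set would have to cover a wide spread of topics.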