fpgaminer 2 hours ago

I wish they would keep 4.1 around for a bit longer. One of the downsides of the current reasoning-based training regimens is a significant decrease in creativity. And chat-trained AIs were already quite "meh" at creative writing to begin with. 4.1 was the last of its breed.

So we'll have to wait until "creativity" is solved.

Side note: I've been wondering lately about a way to bring creativity back to these thinking models. For creative writing tasks you could add the original, pretrained model as a tool call. So the thinking model could ask for its completions and/or query it and get back N variations. The pretrained model's completions will be much more creative and wild, though often incoherent (think back to the GPT-3 days). The thinking model can then review these and use them to synthesize a coherent, useful result. Essentially giving us the best of both worlds. All the benefits of a thinking model, while still giving it access to "contained" creativity.
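The loop described above can be sketched roughly as follows. Everything here is hypothetical: the model calls are stand-in stubs (there is no such API), and the function and tool names are made up for illustration.

```python
import random

def base_model_completions(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for the raw pretrained model: high-temperature sampling,
    creative but often incoherent (think GPT-3-era completions).
    A real system would hit the pretrained checkpoint directly."""
    return [f"[wild continuation #{i} of: {prompt!r}]" for i in range(n)]

def thinking_model_synthesize(prompt: str, drafts: list[str]) -> str:
    """Stand-in for the reasoning model: reviews the raw drafts and
    stitches them into one coherent result. A real implementation would
    prompt the thinking model with the drafts and ask it to keep the
    creative ideas while fixing coherence."""
    chosen = random.choice(drafts)
    return f"Coherent rewrite of {chosen}"

# The pretrained model is registered as a tool, so the thinking model can
# request fresh variations whenever it wants raw material mid-reasoning.
TOOLS = {
    "sample_base_model": base_model_completions,
}

def creative_write(prompt: str, n_variations: int = 4) -> str:
    drafts = TOOLS["sample_base_model"](prompt, n=n_variations)
    return thinking_model_synthesize(prompt, drafts)
```

The key design point is the division of labor: the base model supplies entropy, the thinking model supplies coherence.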

greatgib 29 minutes ago

I also deeply regret the retirement of 4.1. In my own usage, for code and everyday tasks, I noticed a clear drop in quality going from 4.1 to 5.1/5.2.

4.1 was the best so far: its answers were straight to the point and usually correct, especially for code-related questions. 5.1/5.2, by contrast, are far more likely to hallucinate nonsense responses or code snippets that are nothing like what was asked for.

MillionOClock 2 hours ago

My theory, based on what I saw with non-thinking models, is that as soon as you detail something too much (i.e., not just "speak in the style of X" but "speak in the style of X with [a list of adjectives describing X's style]"), they lose creativity and no longer fit the style very well. I don't know how things have evolved with newer training techniques, but I suspected that overthinking a task, by spelling out too much of what the model has to do, can lower quality on creative tasks for some models.

perardi 2 hours ago

Have you tried the relatively recent Personalities feature? I wonder if that makes a difference.

(I have no idea. LLMs are infinite code monkeys on infinite typewriters for me, with the occasional "how do I evolve this Pokémon" utility. But worth a shot.)