SignalStackDev 3 hours ago

Something I noticed building multi-agent pipelines: the ablation compounds. Had a 4-step pipeline - summarize, expand, review, refine - and by step 3 everything had the same rhythm and vocabulary. Anchoring the original source text explicitly at each step helped, but only partially.
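Roughly what the anchoring step looked like, if it helps anyone. call_model and the prompt framing here are placeholders for whatever client you use, not a real API:

    # Re-inject the original source at every stage so each step sees the
    # real voice, not just the previous step's homogenized draft.
    STEPS = ["summarize", "expand", "review", "refine"]

    def run_pipeline(source: str, call_model) -> str:
        draft = source
        for step in STEPS:
            prompt = (
                f"Task: {step}. Preserve the source's voice.\n\n"
                f"ORIGINAL SOURCE:\n{source}\n\n"
                f"CURRENT DRAFT:\n{draft}"
            )
            draft = call_model(prompt)  # call_model: str -> str, any LLM client
        return draft

Even with the source pinned in every prompt, each call still samples from the tuned distribution, which is why it only partially helped.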

The more interesting cause, I think, is that RLHF is the primary driver, not just the architecture. The preference-tuning stage trains on human ratings where "clear," "safe," and "inoffensive" consistently win pairwise comparisons. That creates a training signal that literally penalizes distinctiveness: a model that says something surprising loses to one that says something expected. Successful RLHF concentrates probability mass toward the median preferred output, basically by definition.
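You can see the mechanism with a toy simulation. This is a crude Bradley-Terry-style update, nothing like real RLHF, and the 60/40 preference rate is a number I made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5                    # five candidate phrasings for the same content
    logits = np.zeros(n)     # start uniform
    MEDIAN = 2               # the "clear, safe, inoffensive" phrasing
    lr = 0.05

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    for _ in range(5000):
        a, b = rng.choice(n, size=2, replace=False)
        if MEDIAN in (a, b):
            other = b if a == MEDIAN else a
            # raters pick the median phrasing just 60% of the time
            winner, loser = (MEDIAN, other) if rng.random() < 0.6 else (other, MEDIAN)
        else:
            winner, loser = (a, b) if rng.random() < 0.5 else (b, a)
        logits[winner] += lr   # reward the preferred option
        logits[loser] -= lr    # penalize the rejected one

    print(softmax(logits))     # nearly all mass ends up on index 2

A 60/40 edge per comparison is all it takes: the median option's logit drifts up every round it appears in, and after enough rounds the distribution is effectively a point mass.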

Base models - before fine-tuning - are genuinely weirder: more likely to use unusual phrasing, make unexpected associative leaps, or break register mid-paragraph. Semantic ablation isn't a side effect of the training process; it's the intended outcome of the objective.

Which makes the fix hard: you can't really prompt your way out of it once a model is heavily tuned. Temperature helps a little, but the distribution is already skewed. Where we've gotten better results is routing "preserve the voice" tasks to less-tuned models and saving the heavily RLHF'd models for structured extraction and classification, where blandness is actually what you want.
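The routing itself is nothing fancy. A sketch, with placeholder model names (what you'd actually slot in depends on what you have access to):

    # Hypothetical routing table: voice-sensitive work goes to a less-tuned
    # model at higher temperature; mechanical work goes to the tuned one.
    ROUTES = {
        "preserve_voice": {"model": "lightly-tuned-or-base", "temperature": 0.9},
        "extraction":     {"model": "heavily-rlhf-tuned",    "temperature": 0.0},
        "classification": {"model": "heavily-rlhf-tuned",    "temperature": 0.0},
    }

    def route(task_type: str) -> dict:
        # default to the tuned model for anything unrecognized
        return ROUTES.get(task_type, ROUTES["extraction"])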

writeslowly 21 minutes ago | parent | next [-]

I wonder if you could use lower-quality models (or some other non-LLM process) to inject more "noise" into the text between stages. Of course it wouldn't help retain uniqueness from the original source text, just add new variation in between.
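Something like this, maybe - a purely mechanical pass as a sketch of the non-LLM version (the swap probability is arbitrary):

    import random, re

    def jitter(text: str, p: float = 0.15, seed: int = 0) -> str:
        # Randomly swap adjacent sentences to break up the rhythm
        # the previous stage imposed.
        rnd = random.Random(seed)
        sents = re.split(r"(?<=[.!?])\s+", text)
        for i in range(len(sents) - 1):
            if rnd.random() < p:
                sents[i], sents[i + 1] = sents[i + 1], sents[i]
        return " ".join(sents)

A cheaper model doing a loose paraphrase between stages would be the LLM version of the same idea.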

causal 40 minutes ago | parent | prev [-]

I’m not convinced removing RLHF would really make the underlying probability model give us distributions that diverge from the mean while remaining useful.

In other words, this might not be a problem that can be overcome in LLMs alone.