sb057 4 hours ago

Well yeah, LLMs generate resumes (and other text) that they judge as superior to alternative plausible texts. Why would that judgement change just because a different instance hasn't seen it before? To anthropomorphize it, it's like having a hiring manager write a resume, get amnesia, and then have to judge it among other resumes.

Ekaros 4 hours ago | parent | next [-]

Seems like an obvious thing. If an LLM has weights that encode what makes a good resume to write, there is very likely a correlation with what it would rate as a good resume. And this is probably even a good thing, at least from a model-quality perspective: a model should rate highly whatever it produces. There should be a correlation between its output and its review of that same output.

bendergarcia 4 hours ago | parent | prev [-]

I wouldn’t put it past these tech companies to prefer AI outputs in order to encourage AI inputs.