johndhi 4 hours ago:
Another way to phrase this might be that LLMs make better resumes, no?
budoso 4 hours ago:
If that were the case, they would select resumes generated by other models at a similar rate to the ones they generated themselves.
delecti 4 hours ago:
You'd have to define "better". All this shows is that LLMs generate resumes that fit the heuristics LLMs use to judge resumes. And that makes sense, but isn't necessarily a given.
mrktf 4 hours ago:
Or in other words: the LLM is optimizing a function generated by the same LLM. Suppose you have a random variable y produced by a generator sin(x + r), and your optimizer tries to fit the function sin(x + unknown1) + unknown2 (with unknown1 and unknown2 as free parameters). Obviously it will find an excellent fit, because the model family matches the generator.
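The sine analogy can be made concrete with a toy fit. This is a minimal sketch: the crude grid-search "optimizer", the phase r = 0.7, and all grid sizes are illustrative choices, not from the comment.

```python
import math

# Toy version of the analogy above: data comes from a generator
# sin(x + r), and an "optimizer" (a crude grid search here, standing
# in for any real fitting procedure) fits sin(x + unknown1) + unknown2.
r = 0.7  # the generator's hidden phase (illustrative value)
xs = [i * 0.1 for i in range(50)]
ys = [math.sin(x + r) for x in xs]

def sse(phase, offset):
    """Sum of squared errors of sin(x + phase) + offset against the data."""
    return sum((math.sin(x + phase) + offset - y) ** 2 for x, y in zip(xs, ys))

# Grid search over the two unknowns: phase in [0, 3.14], offset in [-1, 1].
best = min(
    ((p / 100, o / 20) for p in range(315) for o in range(-20, 21)),
    key=lambda po: sse(*po),
)
# Because the model family matches the generator exactly, the fit is
# essentially perfect: phase lands at r and offset at 0.
print(best)
```

The point of the analogy: the "perfect" fit says nothing about the model being good in general, only that the judge and the generator share the same functional form.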
rectang 4 hours ago:
By one metric, yes! If you are a candidate who wants to be hired, and your target employers use LLMs to filter resumes, then an LLM-generated resume that the employer's LLM-powered resume filters favor is "better", as in "more likely to get you the job".
jezzamon 4 hours ago:
In text generation, LLM language is full of very emphatic phrases. At a surface level it might sound stronger, but to a human reader it's not necessarily better.
mathgeek 4 hours ago:
*for getting past ATS reviews.
Emanation 4 hours ago:
Where I work, my boss decided to build an application that uses AI to score long text-field entries to ensure required information is present. The AI can't extract nuance or implicit information, so entries end up long-winded and repetitive: each requirement it's looking for must be explicitly expressed. It's quite unnatural, and almost feels like solving a puzzle, to which the obvious solution is to write a comment, then feed that comment and the scorer's feedback on it back to an AI, so it can generate the structure the rubric AI is looking for.

LLMs are statistically driven, and I can only imagine that having the AI rewrite the comment produces a result that's more statistically fitting to the model than if any given human were to write it. So, it might mean, yeah, LLMs are better at writing resumes that an LLM can successfully classify. Are they better for a human to consume? Who knows.
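The write-then-revise loop the comment describes can be caricatured in a few lines. Everything here is hypothetical: the requirement names, the keyword-matching `score_entry` standing in for the real LLM scorer, and the mechanical revise step.

```python
# Toy stand-in for the rubric-scoring AI described above. The real system
# uses an LLM; this keyword check only mimics its "must be explicitly
# expressed" behavior. All requirement names are made up.
REQUIREMENTS = ["root cause", "customer impact", "remediation"]

def score_entry(text: str) -> list:
    """Return the requirements not explicitly expressed in the text."""
    lowered = text.lower()
    return [req for req in REQUIREMENTS if req not in lowered]

def revise_until_passing(draft: str) -> str:
    """Mimic the feedback loop: add one explicit sentence per failure."""
    for req in score_entry(draft):
        # The "fix" is to state the requirement verbatim -- which is why
        # the entries come out long-winded and repetitive.
        draft += f" The {req} is documented here explicitly."
    return draft

entry = "We fixed the bug after tracing the root cause."
final = revise_until_passing(entry)  # now passes the rubric check
```

The design mirrors the comment's complaint: because the scorer rewards verbatim, explicit mentions, the revision step optimizes for the scorer rather than for a human reader.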