bonoboTP | 3 hours ago
> Number of released papers/number of citations is a target

Only in universities with poor leadership is that truly what gets you hired or promoted. It's simply not true in general; junior researchers believe it more strongly than the facts actually support. Yes, you have to have a solid number of publications, but churning out a ridiculous amount of low-impact, salami-sliced work, or getting your name on a ton of papers where you did no real work, is not going to win you a job. People who evaluate applications also live in this world and know that these metrics are being gamed. It's a cat-and-mouse game, but the cats are paying attention too. You can only play this against really dumb government bureaucracies that mechanically award points for publications, apply hard numerical criteria, etc. Good institutions don't do that.

Good evaluators actually read the papers themselves. Of course you can't read the papers of every single applicant if there are many, but once an applicant makes it onto a somewhat filtered-down shortlist, reading the paper(s), interviewing them about the work, or having them give a talk is much more informative than the number of papers. Still not perfect, because some people can't communicate well; but communicating is part of the job, so maybe that's not super bad, just somewhat bad. Evaluators will also use other evidence, such as recommendation letters (informally weighing the reputation of the recommender), previous fellowships or grants obtained, etc. None of these are foolproof in themselves, but someone with very few publications relative to their career stage will need some other piece of evidence in their favor.

In machine learning and AI, peer reviews are known to be quite random. If you have a good arXiv-only paper that makes sense, and you can give a good talk on it and answer questions, that will get you further than a rubber stamp on some paper that's "meh, so what".

Some players in this game (which includes funding agencies, journals, university administrations, hiring committees, conference organizers, students, etc.) are more ossified and slow-moving than others. And it's also true that double-blind peer review and the rubber stamp of a top-tier conference were mostly beneficial to small, not-well-connected research groups, since they put a paper on an equal footing with the big labs'. The more this system erodes, the more we fall back on the reputation and branding of big labs and famous researchers. Again, because there is no infinite time and no infinite wisdom available for picking among applicants, and there never will be. There are only tradeoffs.