▲ | kaladin-jasnah | 3 days ago
Things like citation brokers (paid to cite papers), abuse of power, paper mills, and blackmail (pg. 10) are appalling to me. I have to question how we ended up here. Academia seems very focused on results and output, and this is used as a metric to measure a researcher's worth or value. Has this always been an issue in academia, or is this an increasing or new phenomenon? It seems as if there is a widespread need to take shortcuts and boost your h-index. Is there a better way to determine the impact of research and to encourage researchers not to feel so pressed to output and boost their citations? Why is it like this today? Academic mathematics, from what I've seen, seems incredibly competitive and stressful (to be fair, so does competition math from a young age), perhaps because the only career for many mathematicians (outside of topics with applications, such as but not limited to number theory, probability, and combinatorics) is academia. Does this play into what this article talks about?
▲ | cycomanic | 3 days ago
In my time in academia (~20 years) I have seen the demands and competition increase quite significantly. However, talking to older researchers, this really started in the 90s, when the demands to demonstrate measurable outcomes increased dramatically and funding moved to being primarily through competitive grants (compared to significant base funding for researchers previously). The issue is that while previously it was common for academics to have funding for 1-2 PhD students to look into new research areas, now many researchers are required to bring in competitive grants even to cover part of their salary. What that means is that researchers become much more risk averse and stay in their research area even if they believe it is not the most interesting/impactful. You just can't afford to not publish for several years, to e.g. investigate a novel research direction, because without the publications it becomes much, much harder to secure funding in the future.
▲ | aoki | 3 days ago
The issue in all fields became significantly worse as developing countries decided their universities needed to become world class and demanded more international publications for promotion. Look at the universities in the table in the paper and you can see which countries are clearly gaming the system. If your local bureaucrats can't tell which journals are good and which are fake, the fake journals become the most efficient strategy. Even worse, publishers figured out that if you can attract a few high-citation papers, your impact factor will go way up (it's an arithmetic mean) and your fake journal becomes "high quality" according to the published citation metrics! Math is particularly susceptible to this because there are few legitimate publications and citation counts are low. If you are a medical researcher, you can publish fake medical papers, but you can more easily become "high impact" on leaderboards (which are scaled by subject) by adding math topics to your subjects/keywords.
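To make the arithmetic-mean point concrete, here is a minimal sketch with made-up citation counts and a deliberately simplified "impact factor" (mean citations per article, rather than the official two-year calculation). A couple of planted high-citation papers drag the whole journal up:

    # Simplified sketch, not the official impact-factor formula: treat the
    # impact factor as the arithmetic mean of citations per article.
    citations_typical = [0, 1, 0, 2, 1, 0, 1, 0, 2, 1]   # hypothetical low-quality output
    citations_planted = [250, 300]                         # a few attracted high-citation papers

    def impact_factor(citation_counts):
        # Mean citations per article -- the quantity being gamed above.
        return sum(citation_counts) / len(citation_counts)

    print(impact_factor(citations_typical))                      # 0.8
    print(impact_factor(citations_typical + citations_planted))  # 46.5

Because the mean has no robustness to outliers, two papers out of twelve are enough to move the journal from "negligible" to "high impact" on paper.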
▲ | guyomes | 3 days ago
> Has this always been an issue in academia, or is this an increasing or new phenomenon?

The introduction of this article [1] gives some insight into the metrics used in the Middle Ages. Essentially, to keep his position at a university, a researcher could win public debates by solving problems nobody else could solve. This led researchers to keep their work secret. Some researchers even got angry about having their work published, even with proper credit.
▲ | gus_massa | 3 days ago
> Is there a better way to determine the impact of research and to encourage researchers to not feel so pressed to output and boost their citations? Why is it like this today?

It's hard, especially if you have to compare people from different areas (like algebra vs calculus) that have different thresholds for what counts as a paper-worthy result, and where each community has a different size and a different length of review time.

Solution 1) Just count the papers! Each one is 1 point. You can finish before lunch.

Solution 2) Add some metrics like citations (which favor big areas and areas that like to add many citations). Add an impact index (which has the same problem). How do you count self-citations and citation rings? (See the sketch after this list.)

Solution 3) Cherry-pick some good journals, but ensure the classification committee is not just making a list of the journals they publish in. Filter the citations, or add some weight according to the classification.

Solution 4) Give the chair of the department a golden crown and pretend s/he is the queen/king and can do whatever they like. It may work, but there are BDFLs and nepotist idiots. Now try scaling it to a country.

Solution 5) RTFA. Nah. It's too hard. Assume you have 5 candidates and each has 5 papers from the last 5 years (or some other arbitrary threshold). You need something like two weeks to read a paper, more if it's not in your area. Perhaps you can skim it in 1 or 2 days, but it's not easy to get an accurate understanding of how interesting the result is and how much impact it has in the community. (How do you evaluate whether it's an interesting new result or just a hard, stupid calculation?) You can distribute the process of reading the papers, but now you have the problem of merging the opinions of different people. (Are your 3/5 stars the same as my 3/5 stars?)
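To make Solution 2's bookkeeping concrete, here is a toy sketch (hypothetical papers and authors, not a real bibliometric tool) that counts citations per author and shows how the number shifts once self-citations are filtered out:

    # Toy citation graph: paper_id -> (set of authors, list of cited paper_ids).
    papers = {
        "p1": ({"alice"}, []),
        "p2": ({"alice"}, ["p1"]),            # alice citing her own earlier paper
        "p3": ({"bob"},   ["p1"]),
        "p4": ({"bob", "carol"}, ["p2", "p3"]),
        "p5": ({"dave"},  ["p1", "p4"]),
    }

    def citation_count(author, exclude_self=True):
        count = 0
        for citing_authors, refs in papers.values():
            for ref in refs:
                cited_authors = papers[ref][0]
                if author in cited_authors:
                    if exclude_self and author in citing_authors:
                        continue  # skip self-citations
                    count += 1
        return count

    for a in ["alice", "bob"]:
        print(a, citation_count(a, exclude_self=False), citation_count(a, exclude_self=True))
    # alice: 4 raw citations, 3 after dropping her self-citation
    # bob:   2 raw citations, 1 after dropping the citation from his own co-authored paper

Even in this tiny example the ranking depends on a filtering decision, and nothing here detects citation rings between "independent" authors, which is part of why the metric is so easy to game.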
▲ | non_aligned | 3 days ago
I've seen similar stuff in a couple of other places, including IT back in the 1990s (back when it wasn't nearly as glamorous as it is today). I think some of this has to do with... resentment? You're this incredibly smart person, you worked really hard, and no one values you. No one wants to pay you big bucks; no one outside a tiny group knows your name even if you make important contributions to the field. Meanwhile, all the dumb people are getting ahead. It's easy to get depressed, and equally easy to decide that if life is unfair, it's OK to cheat to win. Add to this the academic culture where, frankly, there are fewer incentives to address misbehavior and where many jobs are for life... and the nature of the field, which makes cheating easy (as outlined in the article)... and you have an explosive mix.
▲ | SilverElfin | 3 days ago
Abuse of power is definitely not new. Professors have historically overworked their grad students and withheld support for their progress toward a PhD or a paper unless they get something out of it. For women it's extra bad, because professors can use their power in other ways.