mike_hearn an hour ago
I explored this question a bit a few years ago when GPT-3 was brand new. It's tempting to look for technological solutions to social problems. It was COVID, so public health papers were the focus. The idea failed a simple sanity check: just going to Google Scholar, doing a generic search, and reading randomly selected papers from the past 15 years or so. It turned out most of them were bogus in some obvious way.

A lot of ideas for science reform take it as axiomatic that the bad stuff is rare and just needs to be filtered out. Once you engage with a field's literature in a systematic way, it becomes clear that it's more like searching for diamonds in the rough than filtering out occasional corruption. But at that point you wonder: why bother? There is no alchemical algorithm that can convert intellectual lead into gold. If a field is 90% bogus, then it just shouldn't be engaged with at all.
MarkusQ 5 minutes ago
There is in fact a method, and it got us quite far until we abandoned it for the peer-review-plus-publish-or-perish death spiral in the mid-1900s. It's quite simple:

1) Anyone publishes anything they want, whenever they want, as much or as little as they want. Publishing does not say anything about your quality as a researcher, since anyone can do it.

2) Being published doesn't mean it's right, or even credible. No one is filtering the stream, so there's no cachet to being published.

We then let memetic evolution run its course. This is the system that got us Newton, Einstein, Darwin, Mendeleev, Euler, etc. It works, but it's slow, sometimes ugly to watch, and hard to game, so some people would much rather use the "Approved by A Council of Peers" nonsense we're presently mired in.