An Email to Andrew Gelman About Evaluating Journals
I sent Andrew Gelman the email transcribed below today. He often publishes my emails on his blog with a six-month delay, so I figured I’d post it here just in case. If he blogs about it, I’ll link to it here.
i noticed you disparage a number of journals quite frequently on your blog. i wonder what metric you are implicitly using to make such evaluations. is it the number of articles they publish that end up being bogus? the fraction of articles they publish that end up being bogus? the fraction of the bogus articles sent to them that get through their review process? or the number of bogus articles they publish that enough people read, and care about, to identify the problems?
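here is a toy illustration, with invented numbers, of why the choice of metric matters: the two journals below rank differently depending on whether you count bogus papers or take their fraction.

```python
# hypothetical counts; neither journal is real
journals = {
    # name: (papers published, bogus papers among them)
    "BigJournal":   (1000, 50),
    "SmallJournal": (100, 10),
}

for name, (published, bogus) in journals.items():
    print(f"{name}: {bogus} bogus papers, {bogus / published:.1%} of output")

# BigJournal publishes more bogus papers in total (50 vs 10),
# but SmallJournal looks worse by the fraction metric (10% vs 5%).
```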
my guess (without actually having any data) is that Nature, Science, and PNAS are the best journals when scored on the fraction of bogus submissions that pass through their review process. in other words, i bet all the other journals publish a larger fraction of the false claims sent to them than Nature, Science, or PNAS do.
the only data i know of are described [here](https://www.nature.com/articles/s41562-018-0399-z). according to the article, 62% of social-science articles published in Science and Nature from 2010 to 2015 replicated; an earlier paper from the same group found that 61% of papers from specialty journals published between 2011 and 2014 replicated.
i’d suspect that the fraction of submitted social-science articles that passes review at Science and Nature is much smaller than at the specialty journals. if the replication rates of what they publish are about equal, that would imply that the fraction of bogus submissions that gets through peer review at Science and Nature is much lower than at the specialty journals.
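to make that concrete, here is a toy bayes-rule sketch. the replication rates come from the article above (and i’m equating “didn’t replicate” with “bogus”, which is itself a stretch); the acceptance rates and the bogus fraction among submissions are pure guesses on my part.

```python
# toy calculation: what fraction of bogus submissions gets accepted?
# P(accept | bogus) = P(bogus | accept) * P(accept) / P(bogus)

def bogus_pass_rate(replication_rate, acceptance_rate, bogus_submission_rate):
    # replication_rate:       fraction of *published* papers that replicate
    # acceptance_rate:        fraction of all submissions accepted (guessed)
    # bogus_submission_rate:  fraction of submissions that are bogus (guessed)
    return (1 - replication_rate) * acceptance_rate / bogus_submission_rate

b = 0.40  # guessed, assumed equal for both venues
print("Science/Nature:", bogus_pass_rate(0.62, 0.05, b))  # ~0.05
print("specialty:     ", bogus_pass_rate(0.61, 0.30, b))  # ~0.29
```

with roughly equal replication rates among published papers, the venue with the much lower acceptance rate lets through a much smaller share of the bogus papers sent to it.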
curious to hear your thoughts…