Throw Peer Review Under the Bus?

From The Independent

Richard Smith, who edited the British Medical Journal for more than a decade, said there was no evidence that peer review was a good method of detecting errors and claimed that “most of what is published in journals is just plain wrong or nonsense”.

…Speaking at a Royal Society event earlier this week, he said an experiment conducted during his time at the BMJ, in which eight deliberate errors were included in a short paper sent to 300 reviewers, had exposed how easily the peer review process could fail.

Pointer from Jason Collins.

What might be better? Off the top of my head, I propose that:

1. No individual study should receive more than a page or two in a journal. Just explain the findings, interpret them, and put all of the methodological details and literature review on the author’s web page. Results from all such papers should be treated as “preliminary and unconfirmed.” Accept any study for publication, including studies with findings of “no significant effect.”

2. Longer articles should be survey articles that focus on studies that have been replicated and confirmed. The survey articles should also report on studies where attempted replication failed or the method was otherwise shown to be invalid.

3. Do not assign high status to researchers just because they get studies published. Instead, assign high status to researchers who attempt to replicate or otherwise confirm other studies and also to researchers whose work is cited favorably in survey articles.

10 thoughts on “Throw Peer Review Under the Bus?”

  1. Amgen took another approach and got similar results:

    During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

    Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

    http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

  2. Currently, peer review in the social sciences is asked to determine if a paper is correct, innovative, and important. All three are required for publication. Some alternatives would be:
    – The Physics model: Only assess whether the paper is free of errors, and then let the ideas marketplace determine whether the paper is important, through citations. Hence to physicists citations are more important than publications.
    – The open review model (e-conomics): If the paper is not complete junk (the Editor decides), the paper is accepted and open reviews follow. Reviews are published alongside the paper for full transparency.
    – The required-replication model (unused, to my knowledge): The Editor and reviewers accept the paper if it is innovative and important, and mandatory replications are then commissioned and published alongside it. But we would lose double-blind review. And who would do the replicating? I see this as a required pro bono duty, but will the incentive be sufficient to get quality replications? Maybe advanced PhD students could be tasked with it, with an incentive to impress through good work.

    Peer review should not be considered a magical elixir. Peer review *will* weed out a number of problems, but not all, and moreover introduces a “positive finding” bias which has been discussed at length elsewhere.

  3. “Assigning status” is the same as “planning the economy”. It doesn’t work like that. There is an invisible hand that assigns status in human communities, and this invisible hand, unlike the market’s, often does a very bad job.

  4. One would assume peer review evolved out of necessity and spontaneous order. Start by undoing any ossifying duplications of requirements for peer review that have been added on over the years (akin to how banks liked AAA securities, then the government mandated AAA securities, then the market created a lot of really bad AAA securities, then all AAA securities became suspect). Let the invisible hand do the rest.

      • Some journals do this, kind of. The Journal of the American Statistical Association and the Journal of Business & Economic Statistics each publish special invited papers together with discussions. But that’s not the same. The trouble with publishing all reviews is that without blind review, things get personal, and ugly. Just look at the Hoxby–Rothstein, Acemoglu–Albouy, or Reinhart & Rogoff–UMass Amherst disputes.

    • One more:

      Require that authors submit their analysis plan before data collection, to avoid p-value fishing.
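      A minimal sketch of why pre-registered analysis plans matter: if a researcher is free to test 20 outcome variables at the 5% level, the chance that at least one null comparison comes out “significant” is about 1 − 0.95²⁰ ≈ 0.64. The simulation below (illustrative only; the variable names and the use of a permutation test are my own choices, not anything from the post) generates data with no real effect and fishes through 20 comparisons.

      ```python
      import random

      random.seed(0)

      def null_experiment(n=30, trials=2000):
          """Two-sample permutation test on data with NO real effect.
          Returns a p-value for the observed difference in means."""
          a = [random.gauss(0, 1) for _ in range(n)]
          b = [random.gauss(0, 1) for _ in range(n)]
          observed = abs(sum(a) / n - sum(b) / n)
          pooled = a + b
          hits = 0
          for _ in range(trials):
              random.shuffle(pooled)
              diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
              if diff >= observed:
                  hits += 1
          return hits / trials

      # "Fish" through 20 outcome variables, none of which has a real effect.
      pvals = [null_experiment() for _ in range(20)]
      significant = [p for p in pvals if p < 0.05]
      print(f"{len(significant)} of 20 null comparisons came out 'significant'")
      ```

      A pre-submitted plan commits the author to one primary comparison before the data exist, so stray “hits” like these cannot be quietly promoted to headline findings.
      
      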

  5. Pay the reviewers.

    In computer software there was for a time a claim that “open source” code would be better because there would be “lots of eyes on it.”

    As a number of security flaws have shown, the “lots of eyes on it” theory is just wrong. Two really hard constraints make it not apply (and they likely apply to many science papers as well):

    1. The set of people who can look at something and really say whether it’s radical news or utter rubbish is quite small. Having large numbers of people review a paper often won’t help, because there may only be, say, 300 people in the first world who can really grok it. The other billions can’t evaluate it at all.

    2. The people in set #1 all have much better things to do than review somebody’s paper, especially since they all know that most papers are wrong and are, in general, written by their competitors.

    So you need to incentivize the right people to review papers.

    And you will immediately hit the issue of “who decides which papers the paid reviewers will be asked to review?”, since the pool of papers will almost always swamp the budget for reviewers.

    • Some finance and economics journals pay their reviewers, but I don’t think it makes a huge difference. The main impact is to accelerate reviews. This is especially true for finance journals. No more waiting for 15 months to get a decision on your submission (happened to me, and to many others for sure).
