
Why scientific peer review matters

“Peer review isn’t all it’s cracked up to be.” That phrase, in its many variants, is something you hear a lot these days. It’s a complaint made most often with reference to climate science, and most often by people who understand neither the science behind climate disruption nor the purpose of peer review. So, as someone who has undergone peer review repeatedly, both in academia and as an electrical engineer, I’d like to explain why peer review matters.

First, a little explanation of what peer review is.

Peer review is a formal process instituted by an organization to ensure that a quality product is produced. In the case of scientific peer review, the product is scientific papers that meet a presumably high standard of scientific accuracy and professionalism. Scientific peer review involves distributing review copies of submitted papers to experts in the field in advance of publication. These usually anonymous experts read the papers and offer comments that suggest improvements to the style, data, or thought processes described in each paper. The comments are then given to the paper’s authors, who incorporate the suggestions and resubmit the paper to the journal for another round of review. If no further improvements are needed, the paper is then published.

In my own profession of electrical engineering, the peer reviews tend to be more in-depth than what I just described. Here’s the usual list of reviews. I’ll pick up after the specification phase, when engineers are handed a specification and told “design this.” Any one of the reviews described below may be done repeatedly if the first review finds a significant number of problems.

  1. Project internal design review: When the design is largely complete, the entire design team sits down and tries to ensure that nothing major was missed. Problems are identified and corrected before the next review.
  2. Independent design review: Other engineers in the company, often senior engineers with decades of experience, are brought in to review the design with the engineer in charge and other members of the team. Problems are identified and corrected before the next review.
  3. Customer design review (optional): If you’re doing contract engineering work, then the customer often wants to be involved in the design review process. This review may include any outside experts that the customer hires to review the work as well. This review always happens after the independent review.
  4. Design document review: In the process of writing up the documentation in enough detail that another engineer could replicate the design if the engineer in charge were struck by lightning, small errors can be discovered. These errors are corrected, and the document review (usually by independent reviewers again) often turns up further small errors missed in earlier reviews.
  5. Preliminary layout review: Just because you can design an electronic circuit doesn’t mean it can be built. Design problems that cause manufacturing difficulties are first identified here.
  6. Final layout review: Due to (usually) minor changes that are made during layout, the schematic is reviewed yet again at the same time that the layout of the printed circuit board is.
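The sequence of reviews above amounts to a simple loop: run each stage in order, and repeat a stage until its reviewers stop finding problems. As a rough illustration only (the stage names come from the list above, but `find_problems` and `fix` are hypothetical stand-ins for human reviewers, not any real tooling), the flow might be sketched like this:

```python
# Hypothetical sketch of the review sequence described above: each stage
# repeats until its reviewers find no further problems, mirroring the note
# that "any one of the reviews ... may be done repeatedly."

REVIEW_STAGES = [
    "project internal design review",
    "independent design review",
    "customer design review",       # optional, for contract work
    "design document review",
    "preliminary layout review",
    "final layout review",
]

def run_reviews(find_problems, fix):
    """Run every stage in order; repeat a stage while it still finds problems.

    find_problems(stage) -> list of problems found at that stage this pass
    fix(problem)         -> correct one problem before the next pass
    Returns a dict mapping each stage to the number of passes it took.
    """
    passes = {}
    for stage in REVIEW_STAGES:
        rounds = 0
        while True:
            rounds += 1
            problems = find_problems(stage)
            if not problems:
                break  # a clean pass ends this stage
            for p in problems:
                fix(p)
        passes[stage] = rounds
    return passes

# Toy usage: the internal review finds two problems on its first pass,
# so that stage takes two passes (one to find, one to confirm the fixes).
outstanding = {
    "project internal design review": ["missing decoupling cap", "unterminated trace"],
}
log = run_reviews(
    find_problems=lambda stage: outstanding.pop(stage, []),
    fix=lambda problem: None,  # stand-in for actually correcting the design
)
```

Note that even in this idealized sketch, a clean pass only means the reviewers found nothing on that pass, which is exactly the point made below about reviews not guaranteeing an error-free product.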

One major misconception about all varieties of peer review is that review guarantees an error-free final product. Not even the five or more engineering design reviews described above can ensure that there aren’t any errors in the design. In fact, one of the differences between a designer with experience and one fresh out of college is that the experienced designer plans for the inevitable errors that are discovered in testing, while the inexperienced designer thinks his or her design won’t have errors because it was reviewed.

While catching errors is an important purpose of peer review, what kind of errors are caught and corrected depends greatly on who the expert reviewers are. If a climatology paper that relies on detailed knowledge of statistics has an error in the statistics, sending the paper to only expert climatology reviewers might not turn up the statistical error. Similarly, if there’s a power problem in an electronics design, it might only be detected if a power supply engineer was one of the reviewers.

Ultimately, though, this isn’t a problem – not with peer review in general and not with scientific publications in particular. Just as product testing provides a check on an engineering design review, publication provides the final check on a peer-reviewed paper. Once a scientific paper is published, it’s reasonably likely that someone reading the paper will discover any errors. Discovered errors are corrected either by a retraction, if the error is significant, or by the submission of another paper describing the errors, which also goes through the peer review and publication cycle.

So why does scientific peer review matter? Because the peer review process described above is a key component in a process that continually improves the overall quality of scientific research. And because participating in that process is the price of entry to be taken seriously as a scientist.

It takes confidence in your skills and knowledge in order to put yourself through the wringer that is publication. After all, your discoveries and data are then publicly available for all to critique, agree with, or even mock. The process alone provides a level of confidence in the scientific accuracy and veracity of the papers that are published.

After all, anyone can publish a blog filled with so many numbers that it looks legitimate, but only a scientist would subject himself or herself to peer review.

As a final aside, there is no such thing as a “peer reviewed book.” Books are published not based on the quality of the science as determined by anonymous expert reviewers, but rather on a publisher’s and editor’s determination of how well the book will sell. If the science happens to be interesting and accurate, great, but at best that’s a secondary consideration over profit. Scientific journals publish papers based on the quality of the science first and foremost, and then charge whatever the market will bear. Given that a single journal can cost over $1,000 per year for a subscription, the market is willing to bear a great deal.


  1. Thanks Brian. Great background on how engineers do it.

    One thing I’ve heard about both peer review and publication from scientists is that it’s not only rigorous, but can get rancorous (if collegially so). I’ve heard it described as more of an acid test than a stress test, especially if the paper is at all controversial.

    I think what John Q. Public often doesn’t understand is that science is a kind of jungle of ideas in which only the best ideas survive.

    At least, I’ve heard it explained to me that way.

  2. This is how it is ideally supposed to work, sure. But I guarantee you that politics gets involved in the review process. Not only in publishing manuscripts, but also in grant reviews. Stuff like this happens more often than you might think:

    I’m not saying that there’s a better way out there. I’ve published 30+ manuscripts through this process and I can attest to the amount of pain and frustration it causes. But hell if I can figure out a better way to do it.

    • I’m not trying to pretend the process is perfect. Politics certainly plays a part, but because the peer review system is relatively rough and tumble, it’s also self-correcting over the long run. Like you, though, I don’t know of any better way to do it at this point.

  3. To paraphrase Churchill: Peer review is the worst method of accumulating scientific knowledge except for all others that have been tried.

    Thanks for the link, Ubertramp. I considered doing something similar in my anthropology days but never had access. I might have to pick that book up.

    And thanks for the discussion Brian. When I was looking for a job a few years ago, I just looked for something in academic research, period. I’ve been fascinated by the methodology ever since I was first introduced to it and never looked back.

  4. As a scientist, I’d agree with the comments above regarding some contention in the review process, but there are two big differences in science, including the social sciences.

    First, there is a very wide range of rigor brought to bear in the review process, and how much depends on the specific journal and the professional society involved. Even the superficial procedures differ widely. For example, Science magazine, of the AAAS, used to have only two reviewers, wherein the disapproval of either could scotch publication without appeal. Others have 3 or more reviewers with feedback to the authors similar to that described by Brian. Within those superficial differences are varying standards of rigor with regard to relevance, originality, etc.

    Second, and this is almost uniformly misunderstood by the public, successful peer review and publication in science is only the beginning of the “evaluation” process – not certification that other scientists in the field, or even a majority of them, agree with the theory or models proffered or with the authors’ conclusions, given the methods and procedures used. Review and publication only mean that a study meets minimum standards of design, execution, interpretation, and reporting for the society involved. If the reviewers overlooked uncontrolled variables that other theorists and “bench” or “field” researchers feel are relevant, this will only come out in subsequent letters or other publications. If the theory and hypothesis tested fall short, in the opinions of others, those disagreements will be brought out only in subsequent publications. There are fundamental differences in the way engineers and scientists approach research, discovery, and acceptance, with those differences fully reflected in their respective review, publication, and evaluation processes.

    Unfortunately, the second factor leads to endless “false alarms” by “science writers” and journalists (for which “science” is often blamed) who take the contents and conclusions of each peer-reviewed and published paper as “the truth” at face value, without understanding the context of all other historical and current research in the field. Typically, this is evidenced by successive contradictory announcements, each touted as a new breakthrough but contradicting the previous one, when, in fact, it is only their – and sometimes the authors’ – selective view of the field with respect to their favored theory (their chosen device for organizing the known facts). In the worst case, the “finding” is simply ignored by all others. While this may appear untidy, indeterminate, and frustrating to some, it is how science “stays on track” and, eventually, comes up with sustainable explanations for nature that are widely accepted.

  5. johns, I agree. One paper isn’t very strong by itself. As scientists, we kind of intuitively recognize that it’s the network of related pubs that makes the stronger argument. Hell, that’s half the reason we HAVE a bibliography section at the end of every paper. However, I’ve also noticed that some journals don’t like to publish data if it looks too much like a repeat of other work. To me, it always made sense to have at least a few different labs try to run the same experiment.

    For this very reason, I’m also a bit concerned with the way the recent NIH RC1 Stimulus package/Challenge grants are being written. Not only are we supposed to skip the intro, background, and preliminary data sections, but we are limited to exactly 1 page of references for a 12 page proposal. I’m not sure how that’s going to work out. Either the reviewers have to be experts in every aspect of every grant proposal, or a lot of stuff is going to slip through the cracks that shouldn’t. I understand that this is supposed to help the reviewers get through everything they have to get through in the review process, but it seems like a dangerous way to go about it.

  6. Ubertramp, I think the favor by journals of exact replication varies all over the map with the theoretical importance of the finding, the developmental stage of the science (maturity of the theory), and the cost of replication. “Big Science” projects occur in well developed fields where theory is mature (or, at least, successfully represented as such – I’m getting old waiting for the oft-promised grand unified theory of everything) and, together with cost, make the suggestion of replication almost foolish. In the biological, and the least mature social sciences, exact replication is usually treated as a “note” or “communication” (To replicate is “no big deal” and to fail brings suspicion on your methods, all of which can make killing the result of shabby research very difficult.). Alternatively, my preference and easily published, the attempted replication may be embedded with other experimental variables or controls of a larger experiment that aims, if the replication succeeds, to add to or limit the scope of the original conclusion and, if the replication fails, guarantee an unambiguous alternative explanation. Not always easy, but that’s the challenge of it.

    Re NIH. Some years back, when last I dealt with it, the competition for grants had reached an insane level wherein many really wonderful proposals were being routinely rejected for lack of funds. The differences between the accepted and rejected were imperceptible. You had to be quite exceptional to even enter what was, in effect, a lottery. I’m not trying to discourage you, only show my admiration for your ambition and talent – obvious, by your even being in that race.

    Your concern with the “summary proposal” method is likely well founded. One can only hope for competent reviewers who will pool their talents when in doubt and not, as you fear, hand out moneys for outlandish promises or sequester too much of it in their own NIH labs – which has been a problem in the past. However, with a pot of added money, the odds are certainly improved and, hopefully, a large number of worthy researchers will be funded. Good luck to you.

  7. Competition is still pretty insane and it’s still very much a crapshoot. Haha! But I keep plugging away. I don’t think they’ll fund anything “outlandish” (which isn’t necessarily a good thing), but I do think that this method of reviewing grant proposals will limit creativity. My guess is that the grant reviewers will give high marks to ideas they are familiar with, and trash anything that might be on the edge. But we’ll see.