The scholarly work we present in these pages is the culmination of a process that reflects criticism and judgment. After investigators submit their work to our journal, our editors forward submissions they find of suitable quality to up to 3 external reviewers who make independent judgments about the form and substance of the work. These judgments—of the editors and peer reviewers—by aiming to improve the quality and suitability of the work for our readers, become the process by which we introduce new knowledge into these pages, knowledge that we hope will improve the science of curing, comfort, and care.1
Toward a Philosophy of Scientific Peer Review
From where does this judgment come? Francis Bacon in his Novum Organum (published in 1620) articulated 4 key processes that we think allow for skillful criticism and judgment of scholarly work.2 First, judgment emerges from what Bacon called a “simple sensuous perception.” Such perception begins with a thorough examination of the scholarly work, a kind of snuggling-up with the work that comes from a deep affection for the science that the scholarly work will join. Such close readings can only be enhanced by using tools or signposts that help reviewers see and articulate whether a particular study, as reported, meets the standards that have emerged from the scientific community. Initiatives like the Equator Network Resource Center3 and the Committee on Publication Ethics (COPE),4 by publishing international consensus statements and guidelines about the reporting standards for research of all types, provide a central location for editors and peer reviewers to find tools to help them in their close reading of a work. The reliance on these tools to improve one’s perception of a particular work should not be interpreted to mean that scientific judgment is as simple as deciding whether a submission has met all of a prescribed list of standards. We would argue, as did John Dewey, that criticism that takes a more judicial stance by focusing on acquittal or condemnation rather than on careful perception5 is likely to mischaracterize innovative work, particularly when such work is presented in new ways or with structural flaws.
A second key process skill that we think cultivates good judgment is a posture of skepticism. Bacon asked his reviewers to “make some little trial . . . of the way” the new knowledge was presented.2 Such a posture of skepticism pushes us away from the more consumerist social-media type of engagement in which a thumbs-up or a thumbs-down passes for judgment. Judgment, imagined in this way, though it may begin with a series of impressions, pushes beyond such impressions to analyze the strengths and weaknesses of the various sections of the work. This posture assumes that asking questions of the work—debating the work—does not have to be driven toward its destruction, that a healthy recognition of a work’s potential strengths and limitations is key to ensuring a high standard of both scientific practice and scientific communication.6
Journals like ours [must] find innovative ways to increase the pool of potential reviewers.
We think it is important to acknowledge that scientific judgment emerges from lived experience and expertise. Bacon asked of reviewers that they “familiarize [their] thoughts with that subtlety of nature to which experience bears witness.”2 Such lived experience or expertise is what editors depend on when deciding which peer reviewers to choose for a submission. A scholarly work testing a novel clinical intervention via a randomized clinical trial, for example, might be sent out to an expert in the intervention, a methodological expert in clinical trials, and a clinician who would potentially be implementing the intervention. Even if these 3 experts were to set about to use the same checklist in their systematic close reading of the work, to acknowledge the importance of their different lived experience is to hope that the individual judgments that emerge will be colored by the differences in their expertise.
We also believe that the best scientific judgment acknowledges that human decision-making is vulnerable to systematic bias. Bacon asked reviewers, “through seasonable patience and due delay,” to correct the “deep-rooted habits” of their mind. If we were to imagine each scholarly work as having a true value that is measured through the review process, biases are cognitive processes that can make judgment of the work’s value deviate far from the work’s actual value. The potential biases that can influence reviewers’ judgment are too many to name here but could include anything from confirmation bias (in which a reviewer is biased against manuscripts that describe results that are inconsistent with their theoretical perspective) to gender, geographic, language, or author-prestige biases that can distort a reviewer’s perception of a submitted work if the identity of the authors is known to the reviewer.7 Bacon appears to have had a very compassionate understanding of the human tendency to distort the nature of things by these decision-making shortcuts, as he suggested that the work of debiasing is a slow process—requiring patient reflection and deliberation.2
What Are the Challenges Associated With Peer Review as We Do It?
Despite the importance of the peer review process to the quality of the work we present to our readers, there remain significant challenges in the process as we practice it currently. The first and most pressing challenge is how to incentivize high-quality peer reviews. Many reviewers are academics who see peer reviewing as an integral part of their duty to the scientific community. For many, to be asked to review a manuscript is a recognition of their expertise in an area. For some reviewers, peer reviewing may help with academic or clinical promotion or provide them with an important way to keep abreast of the scientific literature. And yet, research suggests that the peer review system is strained: more than 2 million scientific articles are published each year, and the majority of peer reviews are being conducted by a minority of academic investigators.8 As the number of submitted scholarly works increases, it will remain important that journals like ours find innovative ways to increase the pool of potential reviewers.
Even in the admittedly idealistic description of the principles of scientific judgment that we coalesced from Francis Bacon, the tension between scientific innovation (which values creativity, originality, and rapid dissemination) and quality control (which values accuracy, validity, and slow, consensus-driven deliberation) remains inherent in our current peer review process.1 No doubt, there are many examples of innovative and important work with great potential to improve patient care that was suppressed or stifled by the peer review process. In many of these examples, when we examine the process closely, we find that it missed the mark on many of our key principles, most often because of a lack of modesty or because of bias on the part of the peer reviewers. Potential bias, as a human element in the review process, cannot be completely eradicated. When such bias leads to corruption, the peer review process can too easily lead to the inappropriate delay or suppression of innovative work. Still, in an increasingly fast-paced world, we think it important to emphasize 2 things. First, it is rare indeed that we encounter a scholarly work so innovative and so urgent as to be hampered by a peer review process that takes a few months to complete. Second, high-quality peer review takes time and should take time. So, for journal communities like ours, even as we continue to find innovative ways to shorten the time between submission and publication, we must also ensure that peer review remains an avenue for improving both the form and the substance of the knowledge we present.
An AJCC Junior Peer Reviewer Program … will aim to bring new voices into our peer reviewing community and will leverage adult learning principles to improve learners’ skill in scientific criticism.
A third challenge is to determine how transparent the peer review process should be to the readers, the authors, and other stakeholders. Investigators who submit their work to AJCC do not know who is reviewing their submission, and we take steps to ensure that peer reviewers are not able to easily ascertain who the authors of the submission are. In fact, our author guidelines request that the author’s name or institution not be included in the running head or anywhere in the manuscript after the title page or in the file names of the manuscript components. Manuscripts that do not meet this requirement are not reviewed by AJCC. This so-called double-blinded review process requires considerable effort and is done in the hope of providing ample space for frank analysis from peer reviewers who can focus more clearly on judging the work that has been submitted rather than the social identity and status of the people who have submitted it. Many other scientific journals aim for a single-blind review, in which the reviewers are anonymous to the authors but the authors are known to the reviewers. This single-blind review has the advantage of allowing reviewers to view the full context of the authors’ work, which can be an important ingredient in analyzing the quality of the submission. Many of the recent innovations in the peer review process have focused on increasing its transparency by allowing peer reviewers and authors to be known to one another, with some journals even moving toward publishing the peer review along with the submitted work (so-called open peer review).
The limited research into the effectiveness of blinding reviewers for improving the quality of reviews suggests that revealing the identity of the peer reviewer may have little impact on the quality of the review but may lower the likelihood that someone will accept an invitation to review a scholarly submission.8,9 In light of the limited training that is available in conducting peer reviews and the decreasing incentives for peer reviewers to volunteer to do this important work, we worry that clinicians outside of traditional academic settings will be more hesitant to conduct peer reviews in an open system that requires them to publish their reviews. We also are concerned that early-career investigators may be more reluctant to sign a negative review of a scholarly work from a prominent research group in an open system that requires peer reviewers to be identifiable to the authors.
The Future of Peer Review for AJCC
What does the future hold for the peer review system at AJCC? As a learning community, we must be willing to fight against inertia and the status quo by making commitments to improve our peer review process. First, we will strengthen our procedures to more systematically evaluate the effectiveness of our peer review system by putting our own peer review process through the kind of rigorous evaluation we profess to afford our manuscript submissions. A more thorough qualitative self-assessment of our peer review process could provide important avenues for improvement and has the potential to further improve the value of the work that we publish in these pages. The second commitment we will make is to increase the pool of well-qualified reviewers available to us by starting an AJCC Junior Peer Reviewer Program. The program will aim to bring new voices into our peer reviewing community and will leverage adult learning principles to improve learners’ skill in scientific criticism. We expect that this selective program would be appropriate for early-stage academic faculty or clinicians with an interest in acute critical care or nursing research. Third, we will look to find innovative ways (besides this editorial) to increase the transparency of our peer review process. Ours is a community with a passion for learning and a passion for improving care. Our willingness to continue improving our ability to judge new science will remain an important means through which we can improve care.
The statements and opinions contained in this editorial are solely those of the coeditors in chief.
To purchase electronic or print reprints, contact American Association of Critical-Care Nurses, 101 Columbia, Aliso Viejo, CA 92656. Phone, (800) 899-1712 or (949) 362-2050 (ext 532); fax, (949) 362-2049; email, email@example.com.