By Damian Dalle Nogare, PhD
“...Science must break the tyranny of the luxury journals. The result will be better research that better serves science and society.” So concludes Dr. Randy Schekman, joint winner of the 2013 Nobel Prize in Physiology or Medicine, in a December op-ed in the British newspaper The Guardian. High-profile journals like Nature, Science, and Cell, Schekman argues, undermine the scientific process by emphasizing “the flashiest work, not the best,” creating a distorted incentive structure. They do this, in part, by aggressively curating their brands, limiting the number of papers they publish, and instituting a strict policy whereby professional editors accept only the papers most likely to make a splash. The subtext is clear: scientific journals select publications that further their own interests, which are not necessarily aligned with science’s broader goal of serving the public, who ultimately pay the bill.
All due credit to Schekman, who has put his money where his mouth is: he refuses to publish future work in such high-profile journals and instead publishes in open-access journals, including eLife, which he edits. A cynic might note that Schekman himself, Nobel Prize in hand, has no further need of high-profile papers, and might wonder how his postdocs, facing their own job searches, feel about this policy. Setting that aside, for postdocs and other early-career scientists, this debate is hardly academic. We will all face the decision of where to publish what we consider our best work. Do we send our papers to Nature or Science, knowing that we are perpetuating a problem that in the long run may be detrimental to our careers, or do we, as Schekman suggests, send them to journals such as eLife, knowing that they will not carry as much cachet with job search committees?
In game theory, a Nash equilibrium is a set of strategies in which no player can benefit by changing their own strategy unilaterally, so long as the other players hold theirs fixed. That is where we find ourselves today. By not publishing in the highest-profile journals, in essence by changing our strategy unilaterally, we punish ourselves, unless, of course, we can convince everyone else that it is in their interest to do the same. Scientists have many admirable traits, but I’m not sure that martyrdom is among them. Ultimately, it will be the role of more established scientists to break this cycle, both by refusing to chase impact factors and, perhaps more importantly, by changing the conversation during grant review and job search panels.
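The dilemma can be made concrete with a toy payoff matrix. The numbers below are purely illustrative assumptions, not data: two scientists each choose whether to target glamour journals or open-access ones, and a short script checks which strategy pairs are Nash equilibria.

```python
# A minimal sketch of the publishing dilemma as a two-player game.
# Payoff values are hypothetical, chosen only to illustrate the structure.
# Each entry maps a strategy pair to (row player's payoff, column player's payoff).
payoffs = {
    ("glamour", "glamour"): (2, 2),  # the status quo: everyone chases impact
    ("glamour", "open"):    (4, 0),  # the lone defector pays the price
    ("open",    "glamour"): (0, 4),
    ("open",    "open"):    (3, 3),  # collectively better, but unstable
}

def is_nash(profile):
    """True if no single player gains by deviating unilaterally."""
    for player in (0, 1):
        current = payoffs[profile][player]
        for alternative in ("glamour", "open"):
            deviation = list(profile)
            deviation[player] = alternative
            if payoffs[tuple(deviation)][player] > current:
                return False
    return True

print(is_nash(("glamour", "glamour")))  # True: no one gains by defecting alone
print(is_nash(("open", "open")))        # False: each is tempted back to glamour
```

Under these assumed payoffs, everyone chasing glamour journals is the only Nash equilibrium, even though everyone publishing open access would leave both players better off, which is precisely why unilateral defection feels like martyrdom.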
When asked recently on the Internet forum Reddit how to accurately assess publication quality, Schekman responded, “I believe the only way to judge the scholarly impact of published work is to read the paper!” This is an admirable sentiment, but the tyranny of the impact factor, which high-profile journals exploit and perpetuate, is a very real response to a very real problem: how do we assess the worth of a scientific publication? Within our own fields, we have a grasp of who is doing good work, through close reading of their publications, attendance at conferences, and informal discussions. But to expect members of a grant review panel or job search committee to carefully read the dozens of papers that might constitute the body of work of each of potentially hundreds of applicants is unrealistic. Thus, we have to a large extent outsourced the development of our primary metric, publication quality, to journals and journal editors, thereby allowing our goals to be subsumed by theirs.
We are scientists. Part of our job is to measure things that are extraordinarily difficult to measure. That we continue—out of convenience or laziness—to rely on such poor metrics is a condemnation of ourselves. Before laying too much blame at the feet of journal editors, scientists might wish to take a good long look in the mirror. Perhaps Shakespeare would agree: the fault may not be in our stars, but in ourselves.
Opinions in this article are solely those of the author and do not represent the views of the NIH, federal government, NICHD labs, or anyone else.