Publishing your research: Impact factors - alternative views

San Francisco Declaration on Research Assessment (DORA)

The San Francisco Declaration on Research Assessment (DORA) was established in 2012 to highlight:

“a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment”

The declaration has since been signed by individual researchers and institutions across a range of disciplines worldwide.

“The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include:

A. citation distributions within journals are highly skewed; 
B. the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews; 
C. Journal Impact Factors can be manipulated (or "gamed") by editorial policy; and 
D. data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.”
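Point A above is easy to demonstrate numerically. The sketch below uses synthetic, hypothetical citation counts (not data from the declaration) to show how a JIF-style mean can be pulled far above what a typical article in the journal actually receives when the distribution is skewed:

```python
# Illustrative only: synthetic citation counts for a hypothetical journal.
# Most articles receive few citations; two are heavily cited, which skews
# the distribution and inflates the mean (the quantity a JIF resembles).
from statistics import mean, median

citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3,
             3, 4, 4, 5, 5, 6, 7, 8, 60, 95]

jif_like_mean = mean(citations)      # dominated by the two outliers
typical_article = median(citations)  # what a typical article receives

print(f"mean (JIF-like): {jif_like_mean:.1f}")        # 10.6
print(f"median (typical article): {typical_article}")  # 3.0
```

Here the journal-level average is more than three times the median, so judging any individual article by the journal's mean citation rate would be misleading.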

See also this PDF of the declaration, which includes a bibliography on research assessment and the impact factor.

Nobel Laureates speak out against the role of impact factors in research.

Recent research on citation distributions and the impact factor

“Journal-level metrics, the Journal Impact Factor (JIF) being chief among them, do not appropriately reflect the impact or influence of individual articles—a truism perennially repeated by bibliometricians, journal editors and research administrators alike. Yet, many researchers and research assessment panels continue to rely on this erroneous proxy of research – and researcher – quality to inform funding, hiring and promotion decisions.

In strong support for the shedding of this misguided habit, seven journal representatives and two independent researchers – including the three authors of this post – came together to add voice to the rising opposition to journal-level metrics as a measure of an individual’s scientific worth. The result is a collaborative article from Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science, posted on bioRxiv this week. Using a diverse selection of our own journals, we provide data illustrating why no article can be judged on the basis of the Impact Factor of the journal in which it is published”

From: http://blogs.plos.org/plos/2016/07/impact-factors-do-not-reflect-citation-rates/

See the full article:

A simple proposal for the publication of journal citation distributions

Vincent Larivière, Véronique Kiermer, Catriona J MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry

bioRxiv 062109; doi: 10.1101/062109; available at http://www.biorxiv.org/content/early/2016/09/11/062109