Let's protect science, from science


Giovanna Cottin | Research Associate, SAPHIR Millennium Institute

The current national public debate surrounding scientific work raises challenges that invite deep and critical reflection on how we value and measure the generation of knowledge. For most of us who work in science, the process of generating knowledge culminates in a coveted scientific publication. Getting an article published in a prestigious international journal means developing new ideas (which takes months or even years of work) and then having them evaluated and tested through a rigorous peer review process, in which experts in the field assess the originality and quality of the article. What constitutes a prestigious journal, and who authors the articles published in it, depends on and varies by research field.

A researcher in particle physics publishes in different journals than a researcher in art history. Furthermore, each subfield of research operates differently. In theoretical particle physics, for example, it is common to produce papers with few authors—anywhere from one to perhaps ten people. These papers may propose new theoretical models that predict new elementary particles and suggest methods for detecting them. In experimental particle physics, by contrast, the large collaborations at CERN publish a high volume of articles with thousands of authors. This is because the search for the proposed new particles involves the development and construction of software and sophisticated instruments, their constant monitoring and calibration, and an intensive subsequent data analysis for each model—all of which depend on the transfer of specialized knowledge regarding the efficient operation of a massive particle accelerator. This process requires the work of many people from various nations and institutions around the world, who are listed as co-authors on these articles as part of a global scientific collaboration.

This illustrates the nature and diversity of publications within each specific field of knowledge. Understanding what determines the number of publications and their co-authorship in each specialized field is both a necessity and a responsibility for researchers and institutions, in order to protect scientific work from abuses of financial incentives. The hyper-specialization of knowledge, where each field of study becomes increasingly defined and in-depth, also contributes to the debate on how we measure scientific productivity and quality—and how these are rewarded.

How is scientific productivity measured? The outputs associated with scientific work and their most common metrics include the number of publications and the number of citations each publication receives. For each researcher, a productivity index called the h-index is defined. If a researcher has, for example, 20 articles and each of those articles has at least 20 citations, their h-index is 20. This is an easy number to calculate for defining individual productivity. However, it has the limitation that it does not allow for a fair comparison between authors across disciplines, because not all disciplines publish in the same way, nor with the same volume or frequency. And when the number of authors is large, it is not easy to identify individual contributions from the outside. The calculation of individual scientific productivity is therefore often supplemented by the number of talks in which the researcher has presented their work at international conferences. Perhaps the appropriate evaluation of institutional seminars, where each speaker specifically presents their own contributions to an article, would contribute to the design of new, fairer productivity metrics.
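The definition above is simple enough to sketch in a few lines of code. This is a minimal illustration (not any official bibliometric implementation): given a researcher's citation counts, the h-index is the largest h such that at least h of their papers each have at least h citations.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each.

    citations: list of citation counts, one per paper.
    """
    # Sort citation counts from highest to lowest, then walk down the
    # ranking: the h-index is the last rank i at which the i-th paper
    # still has at least i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


# The example from the text: 20 articles, each with at least 20 citations.
print(h_index([20] * 20))       # -> 20
# A more uneven record: only 4 papers have 4 or more citations.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note how the second example makes the limitation discussed above concrete: two researchers with very different publication volumes and citation patterns can still land on similar h-index values, which is why the metric is a poor basis for cross-discipline comparison.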

How do we measure the quality of scientific output? This is not easy, as it depends on each specific field, and standard metrics also have their limitations. The number of citations an article receives reflects its usefulness or impact on the scientific community. The “impact factor” of scientific journals is calculated from citations of the articles published in those journals and is used as a metric of a journal’s quality. While it does not measure the quality or impact of an individual article (but rather the average of the articles published in that journal over a certain period of time), it is a metric that offers some protection against, for example, predatory journals, which also threaten science precisely because they “prey” on, or take advantage of, institutional loopholes. Every week, my institutional email sends dozens of messages from these journals to the spam folder. These emails often invite scientists to pay to publish their articles—articles that were already published open access!

Given the limitations of metrics and the threats we face, the following question arises: How can we, through our own scientific work, safeguard its quality? Assessing the quality of a scientist and their research based solely on the number of publications is an incomplete metric. Identifying the quality of an article, a research study, or an idea becomes very difficult—especially in the absence of reliable metrics—if we do not have the time necessary to assess the value of both the outputs and the processes through which research generates knowledge.

In terms of outputs, one common way of weighing productivity is to discount the impact of an article according to the number of authors who sign it. But this discounting does not always account for the specific processes of each field, and it can even penalize fields unfairly when they are not comparable. It is therefore prudent to have distinct impact metrics even for each subfield. Perhaps, across all fields, publications co-authored with students could be given greater value.

As for the processes themselves, I find them harder to evaluate, since they often depend on the ethics and integrity of us as researchers and of our institutions. Ethical lapses and a lack of transparency and rigor in these processes undermine the quality of science, and it is also our responsibility to address them. In my own experience, minimizing these issues can involve regularly, responsibly, and rigorously discussing ideas, methodologies, results, and how to make them public with our research groups.

In my view, protocols for using open science platforms such as arXiv help safeguard transparency, as they give the community free access to research and allow authors to document their processes, enabling us to revise our manuscripts through multiple versions before they are accepted for publication in journals. Normalizing the publication of errata for our own articles when necessary also contributes to rigor and transparency in research.

If we add to all of the above the implementation of sound institutional policies, I believe we will be able to identify and neutralize the threats facing science. And in doing so, we can move from a scientific culture marked by the shadow of “publish or perish” to one where the creation of thriving knowledge shines and takes precedence. In my view, investing time in assertively defining, recognizing, and valuing scientific productivity—its quality and its processes—will allow a publication to cease being a bargaining chip and become a cherished space where a small grain of knowledge is immortalized.
