Impact Factor and H Index to evaluate publications and authors

Impact Factor and H Index alone may not be sufficient to evaluate journals, articles and authors


There are adequate indices for classifying scientific journals, but these indices were not created to evaluate the authors of scientific articles or researchers, and should not be used for that purpose

The evaluation of scientific production across different areas, institutions and countries has become a necessary tool for mapping science and supporting scientific decision-making. In this scenario, bibliometric indicators play an important role in the evaluation of scientific production. Among the instruments used to evaluate scientific journals and researchers, the Impact Factor (IF) and the H Index stand out. However, an article published as the Editorial of this year's first issue of the Clinical Case Report, entitled "Importance of Journals' Impact Factor on Authors' Valuation", issues an important warning.

The author, Dr. Andy Petroianu, Professor of Surgery at the Department of Surgery, School of Medicine, Federal University of Minas Gerais, notes that the Impact Factor and most other indices were created to classify scientific journals. "It is not correct to use them to evaluate the authors of published articles, much less to qualify researchers or their work. You cannot reach a correct conclusion using an inappropriate evaluation method", he points out.

Professor Carlos Henrique Nery Costa, coordinator of the Leishmaniasis Laboratory at the Federal University of Piauí (UFPI) and head of media relations for the Brazilian Society of Tropical Medicine (BSTM), recalls that in the last 30 years of his life, from the age of 54, Albert Einstein published fewer than two papers a year. "He would not receive a research grant in Brazil, nor would he pass a public examination for a post at the most sought-after Brazilian universities. And why is that?", he asks. For him, it is basically due to a research evaluation system based on bibliographic metrics. Also according to the professor, research evaluation metrics serve the ideology of those who build them and the interests of those who are able to influence their developers.

With the growing demand for research funding, it became necessary to create mechanisms for evaluating academic-scientific quality as a way of rewarding institutions and individuals capable of producing cutting-edge research, thus guaranteeing a return on invested resources. According to Dr. Petroianu, funding sectors, which also evaluate research and higher education institutions, currently require scientific work to be published in journals with a high IF. These sectors essentially consider works published in journals with an impact greater than 1, and in Brazil, of the almost 400 existing journals in the health field, fewer than five have reached an IF of 1, he explains. Still according to the researcher, these sectors are unfortunately not interested in the real value of scientific works or in the conceptual changes they generated, even when published in journals with little impact. Another deleterious effect falls on national journals, which stop receiving good articles and, therefore, do not improve their IF.

For Dr. Costa, there is something wrong. Attributing value to science and scientists based on journal-evaluation metrics prepared by publishers whose primary interest is necessarily profit exposes the perverse, degrading and predatory relationship of scientific publishers with the State and with scientists, he points out. Randy Schekman, winner of the 2013 Nobel Prize in Physiology or Medicine, decided to block submissions of scientific articles from his laboratory to major journals such as Nature, Science and Cell in order to break the tyranny of the big publishers. In fact, there is something rotten in the realm of scientific assessment, the professor concludes.

The need to rethink the IF as the predominant criterion in the evaluation of scientific publications

The IF is used to assess the impact of journals and periodicals, to inform research management and policy, information retrieval and resource allocation, and it also plays an important role in assessing the scientific production of research groups, universities, institutes and countries. Nowadays, every author weighs the IF when choosing the journal that can give greater visibility to their work. Asked about the influence of the IF in the editorial, academic and scientific environment, Dr. Petroianu clarifies that this situation was valid until about 20 years ago, but with the introduction of electronic databases at the end of the 20th century, scientific dissemination entered a new era. These databases accept, indiscriminately and on equal terms, journals with highly variable impact factors, from the highest to the lowest. "In this new reality, the search for knowledge ceased to be based on the value of the journals and began to be based on the content of scientific articles. Citation came to reflect only the quality of the scientific work and its product, without considering the names of the authors or the journals in which the article was published. Therefore, the journal's IF is no longer important", he highlights.

As the IF established itself as a criterion for evaluating publications in the most diverse settings, the number of studies investigating it in various ways, and invariably criticizing it as an indicator of the influence of publications, grew. Critics argue that this index should not be used as a research evaluation tool; even so, it remains the most widely used instrument for evaluating scientific journals and intellectual productivity. "The IF is a good method for evaluating journals, but it is not the only one. There are the Citation Index (CI), created at the Institute for Scientific Information (ISI), the Medline database, PubMed, CiteSeer, Web of Science, Scopus, Google Scholar, Microsoft Academic Search, CiteScore, etc. However, none of these indexes assesses the quality of published articles, much less the value of the works inserted in them or the quality of the authors", points out Dr. Petroianu. Indeed, among the many conflicting opinions about the IF there is a consensus that no index perfectly measures the quality of an article, let alone the scientific production of researchers and professors in higher education.

According to Dr. Petroianu, the IF in no way measures the quality of an article, and the most serious mistake is to qualify scientific production by it. To better understand this impropriety, consider the following analogy: the fame of museums (in this case, the scientific journals) comes from the large number of good objects (scientific articles) they contain, some known by their authors and others anonymous. In these museums, most of the works have little artistic value and are included for various reasons, sometimes without any relationship to art. Obviously, having a work inside a famous museum is an honor for any artist, even if it is never seen or mentioned. In reality, the great work confers value on the museum, and not the other way around. As an example, we can mention the Museo Reina Sofía, in Madrid, which is visited by an immense number of people interested solely in seeing the extraordinary painting Guernica, by Pablo Picasso. Other works can be seen, but almost none of them are ever mentioned.

The original purpose of the IF, which was to support the evaluation of journals and guide researchers in choosing vehicles for publishing their work, was distorted. The indicator came to be used for decision-making, and this had side effects, such as manipulations to inflate journals' indexes through self-citations or cross-citations, which have become almost obligatory for an article to be accepted and published. There are adequate indices to classify scientific journals, but these indices were not created to evaluate the authors of scientific articles or researchers, and should not be used for that purpose, lest serious errors and injustices result. The task of judging, whether of the scientific reputation of a researcher or of the eligibility of an institution for financial resources, must strive for impartiality and precision, thus avoiding irreparable mistakes.

The head of the BSTM's media relations admits that the IF is cruel to areas of knowledge that are not at the forefront of high-profile findings. Among others, the areas of fundamental biology, biotechnology and diseases with high commercial value, such as obesity, cancer, high blood pressure and diabetes mellitus, have a high bibliometric impact. "While the discoveries of fundamental science are linked to the impact they generate on all knowledge, economic value is linked to the prevalence of diseases, the economic power of the affected populations and the potential to generate commercial products. Rare diseases, diseases of neglected populations and tropical diseases are not in this scope", laments Dr. Costa, arguing that they have little economic and political impact. Still according to him, the impact factor of these diseases, with the exception of one of the most important tropical diseases, AIDS, is very low. It is worth remembering that only two Nobel Prizes have been awarded for work related to tropical diseases: to Sir Ronald Ross, in 1902, for the discovery of Plasmodium in mosquitoes, and to Dr. Luc Montagnier together with Dr. Françoise Barré-Sinoussi, in 2008, for the discovery of HIV. In the latter case, the recognition was linked to the spread of HIV throughout the world and, mainly, to developed countries.

Professor Costa is categorical in saying that researchers and research on tropical diseases are invariably undervalued by bibliographic metrics, not because of any lesser contribution or importance of their investigations and publications, but because of the little economic impact they represent. According to him, it is necessary to change the state of science evaluation in a tropical country like Brazil, so that the investigations of its scientists are not devalued by criteria designed to serve the enormous economic returns of the great scientific publishers of non-tropical countries.

Even with the prestige and benefits of the IF, it cannot be denied that it also presents problems

Among the gaps in the IF, commented on even by its creator, Eugene Garfield, among other authors, is that the journals with the highest IF tend to be those that publish review articles, as these receive more citations. Therefore, a journal that values original works, which need time for further analysis, can be harmed by such a metric, as these articles will have less impact within the two-year window evaluated after their publication. "Then there is the H index, which assesses the impact of publications by their citations in other articles. This index brings together, with equal value and judged only by the number of citations obtained, excellent articles and terrible articles that were cited merely to note that they should be discarded from the medical literature and forgotten", adds Dr. Petroianu. Another problem is that in developing countries such as Brazil, where the institutionalization of universities, research and scientific journals came late, journals have less international visibility and low IF. In Dr. Petroianu's opinion, the promotion and evaluation sectors, almost all belonging to the federal and state governments, bear the main responsibility for the destruction of these vehicles of scientific information. Also according to him, the IF was created to evaluate scientific journals and its importance should be restricted to this function.
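The two-year window discussed above is what the classic IF formula measures: citations received in year Y to items published in Y−1 and Y−2, divided by the number of citable items published in those two years. The sketch below illustrates the arithmetic with hypothetical numbers, not real journal data:

```python
def two_year_impact_factor(citations_to_window: int, citable_items_in_window: int) -> float:
    """Classic two-year IF for year Y.

    citations_to_window: citations received in Y to items published in Y-1 and Y-2.
    citable_items_in_window: citable items (articles, reviews) published in Y-1 and Y-2.
    """
    if citable_items_in_window == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_window / citable_items_in_window

# Hypothetical journal: 150 citations in 2023 to the 120 articles
# it published in 2021 and 2022.
print(two_year_impact_factor(150, 120))  # 1.25
```

The example also makes the review-article bias easy to see: anything that raises the citation numerator within the short window (such as heavily cited reviews) lifts the index, while slow-burning original work does not.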

Multidimensional approaches in evaluation processes

Knowing some of the bibliometric indexes has become of paramount importance for researchers who depend on resources for their research and are often evaluated with these instruments. But is it necessary to look for other metrics to complement the overview of a journal and its mission, rather than changing editorial policy just to chase high IF values? The H index was created by Jorge Eduardo Hirsch specifically to evaluate researchers through citations of their scientific products, articles and patents. However, even this index is not adequate, as it puts all authors of an article on equal terms, regardless of their number or position in the author list, acknowledges Dr. Petroianu, recalling that the evaluator is not the key to the success of a journal, since he only uses the instruments that already exist and are accepted worldwide. The success of any journal depends on the quality and dedication of its Editor-in-Chief.
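Hirsch's definition can be stated in one line: a researcher has index h if h of their papers have at least h citations each. The illustrative implementation below (with made-up citation counts) also shows the limitation Dr. Petroianu raises: the computation sees only citation totals per paper, with no notion of how many co-authors a paper has or where each one appears in the author list.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # still true that the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Hypothetical researcher with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with at least 4 citations each)
print(h_index([25, 8, 5, 3, 3]))  # 3 (a single blockbuster paper barely moves h)
```

The second example illustrates why the index is insensitive to outliers: one heavily cited paper cannot raise h on its own.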

And is it possible to change this state of affairs? In Costa's opinion, two elements feed this process, in addition to the economic interests of the highly profitable publishers. "I imagine that the alternative to bibliometric evaluation would be the direct evaluation of the scientific contribution. This could be done with a summary of the impact of the contribution on the field of knowledge, written by the scientist or intellectual himself, as is done in large research institutions. This summary would then be verified and assessed in open, honest and recorded evaluators' meetings", he details. However, the problems with this type of evaluation are the time it takes and its subjectivity. While the first is simpler to solve, the second is much more complex, as it requires the agreement of the Justice system, which is already very suspicious of university examinations, and it also requires the irreproachable integrity of the evaluation boards. "Anonymity is difficult to obtain, and the vices of rewarding groups of friends and entrenched eminences demand an enormous amount of education, as well as scrutiny of declarations of absence of conflicts of interest. It is difficult in the oligarchic, corporatist and elitist Brazilian culture, but it is not impossible", concludes the professor.

Reversing this situation, in which a publicity indicator has been placed at the center of the valuation of science and scientists, is not an easy task. To that end, Dr. Petroianu maintains that it is necessary to ask the leaders of the promotion and evaluation sectors across the national territory to reflect on the harm they are doing to Brazilian science by demanding that their evaluators use the IF as the main tool to qualify scientific works, researchers, and higher education and research institutes. It would also be fitting to know the real scientific, cultural and humanistic value of several of these leaders with great political and financial power, how they reached the positions they occupy, and what sustains them, concludes the researcher.