Impact Factor: Who are you bullshitting?

At the lunch table, I was mulling over an experiment when my attention turned to a colleague whose paper had recently been rejected by a medium-caliber (read: impact factor) journal, and whose supervisor had dissuaded him from addressing the reviewers' mean questions.  Instead, he was gently cajoled into submitting his paper to a new open-access online journal.  Despite the old adage that good things in nature are free, he was unconvinced of the value of publishing in an open-access journal.  That only shows how accustomed we are to scientific journals' policy of charging authors to 'defray the cost of publication'.  In any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the 'behind the curtain' reviewers, was unimpressed, unconvinced, and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest to collect impact factor points.  Every scientist, at least in biomedical research, worries about the impact factor of the papers he publishes.  Many have worked out complex algorithms for which impact factor zone they should reside in to keep their research labs afloat. The impact factor frenzy has generated a class system in science in which publication in the journal with the glossiest cover has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: 'if you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world'.

Ever wondered why the movie The Devil Wears Prada seemed eerily familiar to postdocs?  The only difference is that the Devil's minion gets to wear glitzy clothes and gives away a fabulous Bang & Olufsen phone;  most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists; it has also affected the morale of the major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals:  first, journals published by scientific societies, where most of the scientific work of soliciting, reviewing, and editing is done by real working scientists;  second, journals run by publishing powerhouses, which pluck away energetic hotshot postdocs as editors to their ritzy offices to run the business of scientific publishing.

The impact factor is determined by a commercial arm of a major publishing conglomerate whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller Press journals.  These journals were assigned low impact factors despite being darlings of a cross-section of the research community.  Probably, the failure to attract good papers and the loss of revenue led them to publish a syndicated editorial challenging and ridiculing the impact factor system.  Their arguments were cogent and the language was bold and challenging.  It is not clear how, but their impact factor did improve. However, once they gained the impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.
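
For reference, the standard two-year impact factor this commercial ranking assigns is just a ratio: citations received in year Y to what the journal published in years Y-1 and Y-2, divided by the number of 'citable items' it published in those two years. A minimal sketch, with invented numbers:

```python
# Sketch of the standard two-year impact factor calculation.
# The counts below are invented for illustration.

def two_year_impact_factor(cites_in_year, citable_items):
    """cites_in_year: citations received this year to items the journal
    published in the previous two years.
    citable_items: number of 'citable items' published in those two years."""
    return cites_in_year / citable_items

# e.g. 210 citations in 2012 to the journal's 2010-2011 papers,
# against 70 citable items published in 2010-2011:
print(two_year_impact_factor(210, 70))  # → 3.0
```

Much of the dispute is about the denominator: what counts as a 'citable item' (research articles? reviews? front matter?) can swing the ratio substantially, which is exactly the kind of non-scientific bookkeeping the Rockefeller editors objected to.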

The impact factor is a crutch most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotion, and fiscal matters. Notice that I showered the adjectives on the committees, not on the members of the committees, who are generally intelligent people (including me).  Overworked, unappreciated, and sometimes lazy and indifferent, the members of a committee do not want to be held responsible for making a decision.  Therefore, they rely on the impact factor to show their 'objectivity'.  If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of his publications, which led to his recruitment.  Had they selected him on the basis of their 'judgement', they would be scoffed at by their peers and colleagues.

So, once you begin to equate the impact factor with an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that will take over the part of the system that traditionally relied on competing interests.  Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing unfair.  They can veto the impact factor by invoking their experience and judgement.  Essentially, the reviewers are manipulating the system in their favor.

One may argue that, eventually, the system will be 'normalized' so that no one is clearly at an undue advantage.  The truth is that it is the same old bullshit, now wearing the added objectivity armor of the impact factor.

In case you wondered how some journals achieve a high impact factor, it is quite revealing that the Annual Reviews series has some of the highest impact factors.  Wow!!  You would have thought that real research papers would be the winners.  Apparently not!  And therein lies the trick.  Most high-impact journals are highly cited not because of their published research papers but because of their review articles.  It is not altruism that makes the glitzy journals happy to let you download artistic slides for your PowerPoint presentations.

Although it is a great business plan to target lazy scientists who do not want to do their own legwork of literature review, there is another reason for using review articles to boost the impact factor. Many shrewd scientists like to cite reviews published in high impact factor journals upfront in their grant proposals and research papers.  This way, a lazy reviewer can be convinced that because the topic was reviewed in a high-impact journal, it must be of great importance.

When I was a new postdoc, I learnt a valuable lesson in assessing the scientific caliber of a scientist.  My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from his curriculum vitae.

1.  Throw out all the reviews he (or she) has listed.
2.  Take away all papers where the authorship is beyond the second author (or the senior author).
3.  Trash all conferences and posters presented.
4.  Look at how regularly papers have been published and how good they are.  Yes, use your judgement.  A good paper does not need any assistance, you will know when you see it (at least in the area of research close to you).
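
The first three steps above can be sketched as a toy filter; the record format and the example entries are invented for illustration, and step 4, the actual judgement, stays human:

```python
# Toy sketch of the CV-screening heuristic above.
# The record format and the example data are invented for illustration.

def screen_cv(entries):
    """Keep only first/second-author or senior-author research papers."""
    papers = [
        e for e in entries
        if e["type"] == "paper"                          # rules 1 & 3: drop reviews,
                                                         # conference talks, posters
        and (e["author_position"] <= 2                   # rule 2: first or second author...
             or e["author_position"] == e["n_authors"])  # ...or senior (last) author
    ]
    # Rule 4 -- judging regularity and quality -- stays human.
    return sorted(papers, key=lambda e: e["year"])

cv = [
    {"type": "review", "year": 2009, "author_position": 1, "n_authors": 2},
    {"type": "paper",  "year": 2010, "author_position": 1, "n_authors": 4},
    {"type": "poster", "year": 2010, "author_position": 1, "n_authors": 3},
    {"type": "paper",  "year": 2011, "author_position": 3, "n_authors": 5},
    {"type": "paper",  "year": 2012, "author_position": 5, "n_authors": 5},
]

kept = screen_cv(cv)
print([e["year"] for e in kept])  # → [2010, 2012]
```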

I think I agree with his style of assessment rather than the bullshit of impact factor.  Won’t you agree?

43 thoughts on “Impact Factor: Who are you bullshitting?”

  1. Great ideas! Generally I would agree, with these suggestions:

    Why reject papers beyond the second author? This might make sense if the research team is hierarchical in nature, but this approach would disadvantage the non-hierarchical, collaborative research team (my preferred approach)

    What about more open and new forms of sharing? Why aren’t we writing literature reviews collaboratively wiki-style, and keeping them up to date? What about release of data and methodology early on in Open Lab Notebook style?

    • Sometimes it may happen that the person who has done all the analysis and written the whole paper is the 3rd or 4th author. Ask the postdocs and contractual researchers.

      • Authorship
        The not very happy experience for authors is truer in non-research Institutions or organisations, where the researcher has to live (survive) a smooth life if he brings into his fold of confidence as many people as possible. At the end it may appear to be a hierarchical testament. When it is a management prescription it is, however, a good idea to have that kind of authorship. People who matter get encouraged and also implement its advantages. Nevertheless, I have also noticed that although I have ‘created many coauthors’ during the course of my 36years of research, with the hope that output of published information will continue from my coauthors and seniors in the administrative (does not apply to my students or assistants) hierarchy, they have seldom used their opportunities and produced a technical publication even in their next posting. I have a notion that publication potential is everywhere and in every posting of a person. My coauthors have certainly asked for a list of publications for the preceding year as the ‘publication list’ may have helped them when they write annual self-assessments or may be when they apply for a position requiring publications. Behind the entire above scene, I always expected sometime somewhere somebody will show his modesty to bring my name to the first. It has never happened in an administrative set up. It had not happen when I was a student and produced five research papers out of my MSc project work. That was my training time, so I have taken it in that spirit. Here however, I must pay my tribute to one person– Dr H. R. Bustard, the FAO Consultant in India for Crocodile Conservation programme that started during 1970s, and where I started my professional career. Dr Bustard was the field guide and non-official PhD guide for me. He had issued a circular very early about authorship of technical papers. 
The underlying principle for first authorship was thus: if it was equal contribution starting from an idea to experimentation or data collection and writing, the authorship was alphabetical, though Dr Bustard had the alphabetical advantage over me. For other instances, depending on contributions the first authorship was shared among us. That was a very healthy practice. Other coresearchers were also happy with it. It got flouted when I had to write with (for) others. Yet, I always expected, sometime somewhere somebody should have shown his modesty to bring my name to the first, or while giving a talk or powerpoint presentation at least used a ‘plural term’ to indicate that the analysis was done with others in the administrative set up. It didn’t happen. That sometimes hurts even today!

  2. Heather:
    Thank you for bringing up both points. You are right: in the modern scientific environment, research has become quite complex, and the order of authors on a paper does not represent their relative contributions. Obviously, in these cases, personal interviews are going to be a more accurate representation.

    As for the Wiki-style collaborative reviews, I believe that reviews do serve a purpose in the scientific literature. They should not be just collections of published results, but should attempt to unravel some underlying phenomenon or interpret the published results in a different light.

    About the Open Notebooks, I am totally in agreement with you. I would say that research publication should accompany a complete release of all the raw data and notebooks on the institution’s server that is accessible to everyone. Such a possibility is not a dream. Many large projects now do release their data for alternative interpretations.

  3. Pingback: Scientific journals | ZullfiKar

  4. I gather from my friends in the hard sciences that when papers have lots of authors, the order alone does not tell you the contribution: there is a complicated “formula” that everyone within each field understands and can decode (although, actually, I don’t suppose automated algorithms can). Just to add an extra element of complexity, there is considerable uncertainty in the humanities and social sciences about how books will be counted. There are no commercial impact factors for books yet, and it is not clear how a book will be balanced against journal articles by those who wish to measure the importance of departments’ work.

  5. Pingback: Ein paar Wahrheiten über den Impact Factor - Infobib

  6. Pingback: Linktipps der Woche: älteste Bibliothek Sachsens online, ePub und Ende vom Open Access in Brasilien? | Wissenschaft und neue Medien

  7. Pingback: Nevada Analytical Services Blog Site 775.284.3970 | Blog | Can we count on journal metrics?

  8. Pingback: Medical Sciences – Research and Impact Analysis « Doug Newman: Global Strategic Analyst

  9. Pingback: On the future of scientific publishing « Doing PhD

  10. Everything that exists has a reason. The impact factor itself shows the average quality of the papers published in a journal, based on citations. It does not guarantee that every paper has equal quality. That is why people care less and less about a journal’s impact factor. Instead, people have started to emphasize the H-index of their own publications. You don’t have to argue that this number or that number does not make any sense to you. If the majority of the scientific community thinks it has some meaning, that’s it: you are just left out.
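
    (For readers unfamiliar with the metric mentioned above: a researcher has index h if h of their papers have at least h citations each. A minimal sketch, with invented citation counts:)

```python
# Sketch of the h-index: a researcher has index h if h of their
# papers have at least h citations each.  Counts are invented.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break     # sorted descending, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4
print(h_index([25, 8, 5, 3, 3]))  # → 3
```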

  11. I agree with frances. Impact factors are really taking science for a ride.
    In addition, I would like to add one more point, although I am not sure how relevant it is on this platform. As a PhD student, I submitted a research article to a journal with an IF of 3.5. The editor and reviewer accepted the importance of the work, but due to poor language (you must have got an idea by now; I referred to the dictionary 9 times while going through this article) they asked us to revise. Meanwhile, in 2009, the IF of the journal jumped to 4.5, and after two revisions the editor was enlightened that the research was not worthy of publication in a high-quality journal with an IF of 4.5, and it was rejected after 10 months.
    I hope I am not sounding disgruntled.

  12. Pingback: 2010 ISI Impact Factors out now (with some surprises) « ConservationBytes.com

  13. Let me put things in the right perspective about impact factors. One person mutates a human gene by site-directed mutagenesis and studies its impact in vitro. Another person tries to identify the same mutation and other neighbouring mutations (variations) in the gene in human populations and reports that there is no possibility of such a mutation being present in any organism, let alone humans.
    You can see for yourself which work is a scientific burden and which is beneficial. Unfortunately, the scientific community credits the first person because his work is published in a high-impact journal, while the second person’s work is discredited and gets published in a low-impact journal. Why does one need to work on a mutation that does not appear in nature?

  14. As someone who has reviewed for over 20 different journals, from Nature to the truly obscure, I can say that IF is not entirely meaningless, but it is way overvalued. A paper that gets published in Nature or Science could just as easily have been published in a so-called lower journal had the author sent it there, and often that is what happens. On the other hand, an uninspiring, poorly controlled study has almost zero chance of being published in Nature. The consequence is that top-shelf journals do indeed represent outstanding work, but lower-impact (and especially mid-level) journals often contain work of equal or better quality. On the other hand, when I review for the bottom of the stack, I am far more reluctant to trash a study unless it is horribly flawed. So the chaff that gets published in these lower journals is probably not the best-quality work; there is a reason it was sent there in the first place.

    Ultimately, it would be best if journal impact factor were ignored and individual papers were rated for impact, but even this is a flawed approach. As someone who uses a relatively obscure species for his experiments, I know that the exact same study, published using mouse, would instantly be cited five times more often.

    However there does need to be some way to recognize the editorial and review hurdles that a truly outstanding study has overcome, as this does often differentiate it from the great unwashed…

  15. I mostly agree with francesscientist. I would like to add an example of how publishing research in ‘top’ scientific journals (aka high impact factor) can be, at the least, hilarious. The work was published in PNAS (impact factor around 10) and basically explains how sildenafil (aka Viagra) cures hamster jet lag. OK, that was the headline in Nature News, but could you imagine for a second a hamster travelling from London to New York and taking Viagra to relieve jet lag symptoms? Could you explain to the ethics committee why you want to use Viagra in hamsters to study jet lag? (I could not stop laughing at this point.) Nevertheless, at this point you may have the wrong idea about that work. The study was not as simple as that: they showed that hamsters (with wheel-running activity) entrained faster to the new light/dark cycle once it was advanced 6 hours, and that cGMP levels in the suprachiasmatic nucleus increased 45 min after sildenafil injection. My point of view is that, independently of the effect of Viagra on jet lag, there are many things here that make me think something is really wrong.
    To start, I would like to say that hamsters are not humans. By definition, our work in research is limited by the tools that we have in the lab and can use. Nevertheless, let’s be humble and accept that the questions we are trying to solve in animals (C. elegans, Drosophila melanogaster, Mus musculus, Rattus norvegicus, Syrian hamsters, or nonhuman primates) are not the same as in humans. Also, I think the system for reviewing a paper needs to be changed. Once you submit your work to a journal, nobody knows who the reviewers are; instead, the reviewers know who submitted the work. Let me guess who is going to be published before you. Not to mention what happens if your work comes from France or Spain.
    Things are not going to change, at least not for a while, unfortunately. Check out the upcoming ‘top’ impact journals; Viagra is going to solve more problems.

  16. WHERE SHOULD I PUBLISH MY WORK!
    I am considered one of the prolific authors of research notes, field observations, reports, articles, etc., from 1974 onward in the fields of biodiversity, wildlife biology, and management. Out of over 250 such write-ups, only about a dozen may be in overseas publications; the others are in Indian journals. Being a pioneer in many respects in my field, and the subject being such that anything I observe and want to communicate is new to wildlife science, I have always exercised my own choice of journal for my writing. I have chosen the journal so that it reaches the right audience, the right user of the finding or technique I have to describe. When I am developing a technique that field foresters are to use, I choose Indian Forester, which reaches all Divisional Forest offices of India and many desks overseas. When it is a biological note on an Indian species, I choose the Journal of the Bombay Natural History Society. When it is about, or has implications for, the captive management of animals in India, I choose Zoos’ Print. These are very widely circulated journals, established for years. In those days there was nothing like the internet to browse and search, as there is today. There was no ‘impact factor’, either. I had to build my own library, carrying bulks of paper for the last four decades. I came to know about the impact factor when my daughter started publishing her work on nanomedicines and discussing related issues with me. I searched the net for any impact factor given to my journals, but no: my journals, although widely used by the workers concerned, had nothing like an impact factor. I am not aware what exactly the situation is today. I do not comment on medical science research, which has global implications and is meant for all humankind. And the trend is changing in the research world, where I am getting fossilised.
    My main point is that where field research on natural history, or techniques involving sanctuary management, etc., is involved, one must not worry about where the journal is published or what its impact factor is, but must ask whether my writing and I will reach the field staff and field biologists who will benefit from it, so that they do not spend time rediscovering what I have already discovered. They should instead carry the work ahead from the point where I left it. In this respect the open-access journals are very good: very quick, on the net, and perhaps with some impact factors, if someone is bothered about that. See, I am probably changing, not fully fossilised. Dr Lala A. K. Singh, PhD. Senior Research Officer-Wildlife (Retired), Government of Orissa.
    http://www.geocities.ws/laksingh33/
    http://laksinghindia.blogspot.com/

    • Thank you. I take your words as compliments. I feel a bit disturbed by the present-day trend, even in the wildlife sector. Findings or recommendations that have the potential to be used by field staff in a sanctuary are submitted for publication to journals that do not reach the real user. It does, however, do some good when there is an institutional review for the career progression of the scientist concerned. Then the question is: whom are scientific research and findings meant for, these days?

  17. “who pluck away energetic hotshot postdocs”

    Let’s be real here, you meant to say “who bottom feed off desperate, failed postdocs who don’t know their @#$ from a hole in the ground”, but you(r) iPhone autocorrect got the best of you when you were on the train editing this piece.

    Publish-for-profit journals too often now have people who cannot make an informed scientific judgment call. The editors, who hold the keys to scientific publication and career advancement and who determine trends in science, are too often nowhere close to the best and brightest. Too often they rely on impact factors themselves to judge the “merits” of a publication (what field is it in? is it a hot topic? etc.) and over-rely on reviewers because they lack both the intrinsic knowledge and understanding, and the clout, to “make the call”.

    Science has become a big-money game, where the gatekeepers are not real scientists and manuscripts are often judged more on social factors than on the merit and significance of the work. There is a growing proliferation of increasingly lousy journals publishing increasingly crappy research.

    Cesspool comes to mind.

    3 cents

  18. LOL, actually, I meant what I wrote and it does not translate into what you are accusing me of saying. But I do understand your point which is close to reality. If you read carefully, I don’t think we are in disagreement. Enjoy the train ride!

  19. Pingback: Profitable reviews: Nature Immunology defends reviews. | In the name of science…

  20. The problem is well described in the last sentence of the article: “A good paper does not need any assistance, you will know when you see it (at least in the area of research close to you).” The impact factor was intended to avoid exactly this kind of subjective, or ‘gut’, assessment of papers. However, with time, everyone, including authors, publishers, and readers, began misusing it. I found the following article interesting in this regard.
    http://www.sciencedebate.com/science-blog/journal-impact-factors-2011-released

  21. Another problem with the quality of science and its evaluation is that there are simply too many researchers writing too many papers in too many journals. Scientists are smart, but very dumb too. Because they eschew material gain in favour of a ‘higher purpose’, they are manipulated by people who do care how much they are paid. Hence there is no organisation, no minimum standard, no regulation on who may or may not be a scientist, and so on; just a vague adherence to a code of ‘ethics’ and a blinding naivety.
    Any other profession has all of these things, and while we all dance for a living, I’d say none prostitute themselves more than scientists. We get what we deserve. In some way our inability to get organised comes from our collective inability to agree on anything, because we are trained to deconstruct every suggestion and apply the rules of ‘science’ to everything else.
    Most scientific societies I know are travel clubs for the aged, who award one another meaningless accolades and serve to prevent change.
    I think anyone who does a commercial contract for less than, say, $1k per person per day, who employs trainees at less than nationally fixed rates, who works more than 60 hours per week without getting paid more, who regularly appears on TV claiming miracle breakthroughs, who publishes in journals with an almost zero rejection rate or with more than 50% self-citations, who does not attend continuing education regularly and annually, who accepts idiot students because they can pay, etc., and generally reduces our profession to that of a crack whore should be banned from publication for life. Also, no journals launched without our approval; I used to get beaten up by endless requests for reviews, and now it is to be part of some hapless open-access effort.
    I dream, of course. I know people who have endorsed products in exchange for a reasonable lunch, plenty of for-sale self-citers float to the top regardless, and the funding agencies have us chasing our tails so badly that we never have time to wonder why we do it. Fewer scientists, fewer journals, and higher quality. That means fewer PhD students, tougher standards for qualification, and a reason to bother, namely a decent job at decent pay waiting for them, with sufficient resources to support their labs.

  22. Pingback: Conservation and Ecology Impact Factors 2011 « ConservationBytes.com

  23. The problem with not caring about IFs is that grant agencies still use them to evaluate the likely impact of proposed research projects. So: no good (read: high-IF) paper, no grant. No grant, no research. No research, no good paper… As unfair and wrong as it is, unless a more suitable alternative is proposed, that is unfortunately the political game all scientists have to play. Thank god, there seems to be a trend towards open-access research and publications (https://petitions.whitehouse.gov/petition/require-free-access-over-internet-scientific-journal-articles-arising-taxpayer-funded-research/wDX82FLQ). That’s a start…

    To come back to the article itself: why systematically discard reviews? It takes a *lot* (and I mean it) to produce unbiased benchmarks and reviews of good quality (not bullshit from authors or companies praising their own work), and they can be truly useful to people. Not all labs have the resources to benchmark and evaluate every new system that’s released every month…

    And why discard conference papers? Unlike in the life sciences, most computer science conferences are peer-reviewed, precisely to ensure a minimum level of quality. And proceedings are usually open-access…

    Same for the order of authors’ names: in CS, we normally use alphabetical order. By your rules, with a name starting with a ‘T’, I must be a really bad researcher. Well, maybe I am, who knows…

  24. Thank you for taking the time to comment. You are right about playing the ‘political game’. But that is precisely the point: you should not be playing games with your research. I do not know whether granting agencies look for so-called ‘high-impact publications’; it is the reviewers who might do so. But then again, they are the self-serving, idiotic, power-hungry, non-thinking colleagues of yours and mine. I will write about these reviewers’ mentality soon.

    Several years ago, I came across an excellent essay by Mary H. O’Brien (http://bit.ly/SO3hxI). It is a good read. As a scientist, one has to take a rational stance towards a problem. Somehow, most cower in fear when it comes to airing their feelings about such stupidity as whose research is more important. The political landscape in science is not different from most oligarchic set-ups (read: military juntas). Again, this fear is generated by the ‘you never know who will review your grant or paper’ mentality. I could go on ranting on this topic, but I will keep it for a later post.

    As to your question of why discard review publications: you mentioned some of the reasons already. However, a scientific career is judged by one’s original contributions. Even when categorizing things (Mendeleev or Carl Linnaeus), scientists have made original contributions that opened up entirely new dimensions in research.

    Conference proceedings are abstracts and the peer review is only to reduce the number of the abstracts to the specified space and resources available to the meeting organizers. It is hardly a reflection of quality of research work. Most results presented in such abstracts hardly reach formal publication in original form.

    As the Maddox editorial pointed out about different rules of authorship in different labs, your lab is following what has been working there. It may also suggest that your lab has an autocratic system in place that no one wants to challenge, out of fear of whatever. I do not know how it works, but is it possible that your system is robbing some researchers of their credit? Is the system akin to ‘from each according to his abilities, to each according to his needs’? That would suck.

    • “Conference proceedings are abstracts and the peer review is only to reduce the number of the abstracts to the specified space and resources available to the meeting organizers. It is hardly a reflection of quality of research work. Most results presented in such abstracts hardly reach formal publication in original form.”

      This is mostly true in the life sciences, but not everywhere. In computer science, for instance, most conferences ask for full 6-15 page papers (depending on the conference), which are reviewed by 2 or 3 peers. The acceptance rate is usually 10-30%. Between open-access, quality online conference proceedings and a fee-based journal paper, computer scientists will mostly publish in a conference, as long as it is covered by the standard indexes and googlable. In fact, most CS publications are in conference proceedings, because there are very few printed journals. Which makes sense when you know that journal publication fees are significantly higher than conference registration (even including travel). And as a bonus, you get to travel to nice venues and interact with colleagues working in your field.

      • Thomas: Thank you for clarifying. We learn new things every day. I must say that I had no clue about the peer review process in computer science.

  25. I want to MAKE a reform for all scientists: let us work independently. We don’t need to tolerate underpaid labs. Let us make a new institution where we can work freely without worrying about impact factors or publication, and rather focus on the science: technological advancements and scientific applications for the good of humanity. Our generation needs a shift. Today’s science is not based on curiosity and passion; it is based on commercial value.

  26. I do think that the impact factor now leaves little impact: a few reputed journals maintain it, but the rest are just making money with the impact factor.

  27. Money, money, and money: this is the requirement for everything. If you want to do real science, you need money. If you want to get big funding, you need good science and money. If you want to hire the best talent, you need money. We cannot work independently; we need money for almost everything. So the competition is for funding, and it depends solely on the impact factor. People will show the funding agency what it wants to see. Once funding agencies and committees start rejecting review articles and high-IF (paid-journal) credentials, everything will change eventually.

  28. The IF seems to me to be a sort of black icing on an already fascist approach to “science”. First, the journals rescind our copyright. Then they make us (or our labs) pay lots of money to publish. Finally, they boast their impact factors widely, to increase the sense of competition and anxiety among researchers. I found it particularly disturbing when I was publishing in Journal XXX, in which the bottom line of EVERY EMAIL read “Impact Factor: x.xxx” (not that you didn’t know what it was from the same bull&@#! numbers being plastered all over the journal website). It’s really demoralizing; this is not science at all …

  29. Why reject articles with second or later authorship, be the listing hierarchical or otherwise? In my laboratory, I write a research project, compete and fight to get my hands on a research grant, recruit a PhD candidate, and ask him to work on it. I encourage him to write and publish the results of that research as first author, to boost his morale, give him needed experience, and develop his expertise, and I acknowledge the whole team, including my professor, who plays a supervisory role over me, and his professor, whose overall supervision and guidance pulls the whole team upwards. And at the end of 5 years, when my research project generates 10 publications with me as neither first nor second author (due to hierarchical listing), I get kicked out for not publishing? Isn’t that the lamest thing anyone could ever do?

  30. Very well elaborated, and I agree: the competition for impact factor has compromised the quality of research. Many people are only running to score a high impact factor without having an in-depth understanding of the research objectives. Publishing a quality paper has become a technique, not an understanding. So many real scientists remain behind the scenes when impact factors are counted for promotion, upgrading, and appreciation. This negative trend of impact factor hunting should not be encouraged in the real scientific community.
