ResearchGate: Bogus impact on the ego.

Every week I am bombarded by emails trying to sell me something personal (sometimes very personal), lab equipment, or reagents. There are also incessant chains of seemingly good-natured invitations to attend free webinars. I promptly delete them without opening them to see the contents. However, I stop at one email, sent by ResearchGate, ‘a social networking site for scientists and researchers’. I have a strong urge to delete it without looking at the content, but I am reluctant to do that. I know that the email contains my latest ‘Impact Score’. Instead of deleting the message, I anxiously click on it to view my score, wondering whether I fared well this week or not.

On most occasions my score has remained unchanged. However, there have been days when the score dropped a few decimal points. It was agonizing to watch that happen. The immediate response was to open the link in the browser to check what had happened. Inside my head, I know that the score dropped because there were fewer ‘hits’, or views, of my research papers. But the scientist inside me looks for verification of the phenomenon, and ResearchGate promptly provides me with a graph to support its scoring system. In the absence of any external reference, my graph can shoot through the roof or drop to the baseline (zero) on the strength of a single ‘view’ of my research papers.
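To see why such a graph swings so wildly, here is a minimal sketch. It assumes, and this is only my guess at how the chart works, that the y-axis is scaled to your own maximum with no external reference:

```python
# Toy sketch of a self-normalized views chart (my assumption about how
# ResearchGate scales its graph; the actual method is not documented).
weekly_views = [0, 0, 1]  # one lonely view in the third week

peak = max(weekly_views) or 1  # guard against an all-zero history
chart_heights = [v / peak * 100 for v in weekly_views]
print(chart_heights)  # [0.0, 0.0, 100.0] -- one view shoots through the roof
```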

What is this graph? How is it scored? Who are the viewers? Does the site record all the views of my research papers on the web, or only those on the ResearchGate website? Are my papers curated on the ResearchGate site? Do only views from members count? There are at least three different scores displayed for each researcher on the website. What are they? You get a ‘Total impact’ point, then an RG Score, and an ‘Impact point’. How do you make any sense of it? With all these questions unanswered, it is not clear what they are scoring, or to what end.

As for the impact scores, several lab technicians have much larger scores than some Principal Investigators. These technicians have never published a first-author or senior-author paper. Yet they score big in the ResearchGate scheme. What impact should we consider here? It is not that a lab technician’s contribution to research is unimportant, but if the RG Score is merely a ‘contribution score’, then it is contaminating the scores of ‘impactful researchers’.

The ResearchGate website claims that it was started by scientists to ‘Connect, collaborate and discover scientific publications, jobs and conferences’. Then why score research impact in a manner that makes sense to no one? Are the founders of the ResearchGate network smart enough to have figured out that all humans, whether lay people or trained scientists, share the weakness of vanity and are willing to take an ego trip on bogus scores?

Coming soon: Don’t be an asshole reviewer!

Gather the reviews that you got for your research papers and grant applications. Everyone has one or more of those idiotic reviewers’ comments. Bring them out to have some fun. 🙂

Scientific Research: A Ponzi Scheme.

Recently, a friend and colleague blurted out, “Man, academic scientific research is a Ponzi scheme”. At first I laughed at this, but I soon saw his point. My friend is primarily a clinician. His training, his interest in understanding the bases of disease, and his hope of discovering new therapeutic targets had brought him into laboratory research.

He quickly realized that there was a chasm between his lofty ideals of studying a biological phenomenon and his mentor’s single-minded interest in using his data to fetch money. The enthusiasm and motivation that had been his strength in conquering the daily grind of lab work and failed experiments suddenly gave way to despair. He is a good scientist who carefully designs and plans his experiments and is resourceful and skilled enough to execute them well. Unfortunately, he decided to return to the clinic without completing his research project.

Under ordinary circumstances I would not have thought much about his return to the clinic. Such departures are not uncommon among physician scientists who do not like the long-drawn battle of laboratory scientists against leaking gels, failing western blots, suboptimal reagents, and a long dark tunnel of uncertainty without any glimmer of light at the end. Many do not see how the abstract concepts of basic research could ever be translated into clinically relevant knowledge. But our guy has the smarts.

Like a painful sliver, his analogy of scientific research as a Ponzi scheme stuck in my head. Of course, I am not immune to the widely publicized case of Bernie Madoff’s financial fraud. I googled Ponzi scheme to find that…

In a Ponzi scheme potential investors are wooed with promises of unusually large returns, usually attributed to the investment manager’s savvy, skill or some other secret sauce. (Reference:  The New York Times)

Scientific research indeed is like a Ponzi scheme. A very small number of people (established investigators) entice a very large number of young people (investors) with the dream of a very large profit (Nobel Prize, glory, publications, publicity, creative satisfaction, etc.). To keep the scheme running, they do disclose the fine print: not everyone gets there, and the harder you work, the larger the reward. Cynics call it a ‘rat race’. But I think a Ponzi scheme is a better description.

Of course, once in a while a few from this large pool of investors are selected to receive the big profit that was promised to all. They are given awards, positions, and attention. Usually these are the mediocre lot. The reason for this favor is that the mediocre are either unsure of their abilities or too sure of them. They stay indebted to the generosity of the ‘system’, and to display their loyalty to the system, they propagate the same scheme. This is the pyramid scheme taken to the extreme.

Does this mean that there are no smart people in scientific research? On the contrary, there is a large number of smart people who keep pushing the leading edge further and beyond. They are the pioneers with a true passion for advancing knowledge. They are the ones who are genuinely interested in understanding the nature of things. They are not wheeler-dealers who relentlessly try to fill the round holes of their hypotheses with square pegs of data.

I am not sure whether my friend will ever return to laboratory research, but with one simple remark he gave me a different point of view. We all thrive on such diverse points of view in research, and I think he did shift my paradigm.

Profitable reviews: Nature Immunology defends reviews.

In one of my previous rants (Click here), I wrote about how journals publish reviews to improve their impact factor. Now, in a recent issue of Nature Immunology (Click here for link), the editorial concedes:

“Because they are highly cited (on average, a review article is cited almost twice as often as a research paper), they help boost the impact factor of the journal.”

What the editorial does not mention is the trend among some glossy journals of publishing special issues that consist predominantly of reviews.

It also does not take into account the harm done by ‘expert reviews’, where an expert’s interpretation or speculation is perpetuated in the scientific literature as scientific fact. However, I would agree that scientists, not the journals, are responsible for testing the veracity of these ‘facts’.

Author ranking system: ‘Impact factor’ of the last author.

We all know that there is very little room at the first-author position on any scientific paper. There can be only one name. Even if two researchers contributed equally to the paper, only one name will appear at the front of the author list. Under the current convention, the other equally contributing author cannot put his name first even on his own résumé. That’s a bummer!

Consider another scenario: a young researcher who is the major contributor to a paper is on his way to becoming an independent researcher. He writes the manuscript and has to decide the author list. Whom should he put as first author? And as last author? Although there are collaborating scientists, their contributions are too small to warrant first authorship. In this case, the researcher takes first authorship and also declares himself the corresponding author. Problem solved! Not exactly! This researcher has just lost major points toward being recognized as an expert in his field.

Both these cases illustrate an existing problem of author ranking on a paper. It is a lesser-known fact of scientific publication that funding agencies (including the NIH), journals, and often hiring authorities use software to rank the ‘impact factor’ of the authors of a publication. The NIH uses such software to determine who the experts in a research field are. These ‘experts’ are then invited to study sections to review grant applications. Journals use this software to decide who could be potential reviewers for manuscripts.

On the surface, the idea sounds reasonable. However, there is a serious flaw in this reliance on software to select ‘experts’. These programs are mostly primitive and are not designed to rank contributions in multi-author papers. They are heavily biased towards the ‘senior author’, whom they identify by a single criterion: the last position on the author list. Selecting experts by such a faulty method can have ridiculous consequences.

Recently, a well-established journal asked a newly minted postdoc to review a research manuscript. The postdoc was thrilled by the opportunity and took the challenge. However, we learnt that the scope and content of the manuscript were clearly beyond his expertise. I don’t know what happened to the manuscript, but I would like to think that there are safeguards against such anomalies. I must clarify that I am not against inviting new researchers to participate and contribute to the functioning of the scientific community. However, this should happen by the deliberate choice of program officers and journal editors, not by mistake. Otherwise it will erode confidence in the validity of the process.

In case you are curious, a current ranking system used by the NIH, for example, gives the highest score to the author whose name appears last on a paper. The software considers the last author to be the senior author. The next highest score goes to the first author. Finally, it does not matter where your name falls between the first and the last author: the software assigns every ‘contributing author’ the same low score.
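In code, the scheme described above boils down to something like the following sketch. The actual NIH and journal software is not public, so the weights here are invented for illustration; only the position-based logic comes from the description above:

```python
# Hypothetical sketch of position-based author scoring; the real
# software is not public, so the weights below are made up.
def author_score(author_list, name):
    position = author_list.index(name)
    if position == len(author_list) - 1:
        return 3.0  # last author: assumed 'senior', highest score
    if position == 0:
        return 2.0  # first author: next highest score
    return 0.5      # anyone in between: same low 'contributing' score

authors = ["Postdoc A", "Student B", "Technician C", "PI D"]
print(author_score(authors, "Postdoc A"))     # 2.0
print(author_score(authors, "Technician C"))  # 0.5
print(author_score(authors, "PI D"))          # 3.0
```

Note that a middle author gets the same 0.5 no matter how much work he did; that is the whole problem.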

I see an irony here. Traditionally, the last author is the senior author who directs the project and, in most cases, provides the funding and laboratory space for the work. If you want to find the experts, let common sense prevail: a simple PubMed search should suffice. Why do we need technological voodoo and complex scoring systems to discover what is already known?

Impact Factor: Who are you bullshitting?

At the lunch table, I was thinking of an experiment when my attention turned to a colleague whose paper had recently been rejected by a medium-caliber (read: medium impact factor) journal, and whose supervisor had dissuaded him from addressing the reviewers’ mean questions. Instead, he was gently cajoled into submitting his paper to a new open-access online journal. Despite the old adage that the best things in nature are free, he was unconvinced of the value of publishing in an open-access journal. That only shows how accustomed we are to scientific journals’ policy of charging authors to ‘defray the cost of publication’. In any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the ‘behind the curtain’ reviewers, remained unimpressed, unconvinced, and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest for impact factor points. Every scientist, at least in biomedical research, is worried about the impact factor of the papers he publishes. Many have worked out complex calculations of which impact-factor zone they must reside in to keep their research labs afloat. The impact factor frenzy has generated a class system in science, in which publication in the journal with the glossiest cover page has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: ‘if you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world’.

Ever wondered why the movie The Devil Wears Prada seemed eerily familiar to postdocs? The only difference is that the Devil’s minion gets to wear glitzy clothes and gives away a fabulous Bang & Olufsen phone; most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists; it has also affected the morale of the major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals: first, journals published by scientific societies, where most of the scientific work of soliciting, reviewing, and editing is done by real working scientists; second, journals run by publishing powerhouses, which pluck energetic hotshot postdocs away to their ritzy offices as editors to run the business of scientific publishing.

The impact factor is determined by a commercial arm of a major publishing conglomerate, whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller University Press journals. These journals were assigned low impact factors despite being the darlings of a cross-section of the research community. Probably, the failure to attract good papers and the loss of revenue led them to publish a syndicated editorial challenging and ridiculing the impact factor system (Click here). Their arguments were cogent and their language bold and challenging. It is not clear how, but their impact factor did improve. However, once they gained the impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.

The impact factor is a crutch most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotions, and fiscal matters. Notice that I showered the adjectives on the committees, not on their members, who are generally intelligent people (including me). Overworked, unappreciated, and sometimes lazy and indifferent members of a committee do not want to be held responsible for making a decision. Therefore, they rely on the impact factor to show their ‘objectivity’. If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of his publications, which led to his recruitment. Had they selected him on the basis of their ‘judgement’, they would have been scoffed at by their peers and colleagues.

So, once you begin to treat the impact factor as an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that will take over the part of the system that traditionally relied on competing interests. Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing to be unfair. They can veto the impact factor by invoking their experience and judgement. Essentially, the reviewers are manipulating the system in their favor.

One may argue that eventually the system will be ‘normalized’ so that no one is at a clear and undue advantage. The truth is that it is the same old bullshit wearing the added objectivity armor of the impact factor.

In case you wondered how some journals achieve a high impact factor, it is quite revealing that the Annual Reviews series has some of the highest impact factors. Wow!! You would have thought that real research papers should be the winners. Apparently not! And therein lies the trick. Most high-impact journals are highly cited not because of their published research papers but because of their review articles. It is not altruism that makes glitzy journals happy to let you download artistic slides for your PowerPoint presentations.
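A back-of-the-envelope calculation shows how the trick works. Take the editorial’s own figure that a review is cited roughly twice as often as a research paper; the counts below are invented for illustration:

```python
# Invented numbers; only the 'reviews are cited ~2x as often' ratio
# comes from the Nature Immunology editorial quoted above.
papers, cites_per_paper = 80, 5      # research papers from the 2-year window
reviews, cites_per_review = 20, 10   # reviews added to the same window

if_without_reviews = (papers * cites_per_paper) / papers
if_with_reviews = (papers * cites_per_paper + reviews * cites_per_review) \
                  / (papers + reviews)
print(if_without_reviews, if_with_reviews)  # 5.0 -> 6.0, a 20% boost
```

A modest slice of reviews lifts the journal-wide average, and that average is all the impact factor measures.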

Although it is a great business plan to target lazy scientists who don’t want to do their own legwork of reviewing the literature, there is another reason for using review articles to boost the impact factor. Many shrewd scientists like to cite, up front in their grant proposals and research papers, reviews published in high impact factor journals. This way a lazy reviewer can be convinced that because the topic was reviewed in a high-impact journal, it must be of great importance.

When I was a new postdoc, I learnt a valuable lesson in assessing the scientific caliber of a scientist. My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from his curriculum vitae (I have sketched his filter in code after the list):

1. Throw out all the reviews he (or she) has listed.
2. Take away all papers where the candidate’s authorship falls beyond the second author position (unless he is the senior author).
3. Trash all conferences and posters presented.
4. Look at how regularly papers have been published and how good they are. Yes, use your judgement. A good paper does not need any assistance; you will know it when you see it (at least in an area of research close to yours).
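Here is his filter as code. This is only a toy rendering of the list above; the field names and CV structure are invented, and rule 4 is deliberately left to the human:

```python
# Toy rendering of my advisor's CV filter; field names are invented.
def filter_cv(publications, candidate):
    kept = []
    for pub in publications:
        # Rules 1 and 3: drop reviews, conference talks, and posters.
        if pub["type"] in ("review", "conference", "poster"):
            continue
        # Rule 2: keep only first-, second-, or senior-author papers.
        rank = pub["authors"].index(candidate)
        is_senior = rank == len(pub["authors"]) - 1
        if rank > 1 and not is_senior:
            continue
        kept.append(pub)
    # Rule 4 -- judging regularity and quality -- stays with the human.
    return sorted(kept, key=lambda p: p["year"])
```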

I agree with his style of assessment far more than with the bullshit of the impact factor. Won’t you agree?