ResearchGate: Bogus impact on the ego.

Every week I am bombarded by emails trying to sell me something personal (sometimes very personal), lab equipment, or reagents. There are also incessant chains of seemingly good-natured invitations to attend free webinars. I promptly delete them without opening them. However, I stop at one email, the one sent by ResearchGate, ‘a social networking site for scientists and researchers’. I have a strong urge to delete it without looking at the content, but I am reluctant to do that. I know that the email contains my latest ‘Impact Score’. Instead of deleting the message, I anxiously click on it to view my score, wondering whether I fared well this week or not.

On most occasions my score has remained unchanged. However, there have been days when the score dropped by a fraction of a point. It was agonizing to watch that happen. The immediate response was to open the link in the browser to check what happened. Inside my head, I know that the score dropped because there were fewer ‘hits’, or views, of my research papers. But the scientist inside me looks for verification of the phenomenon, and ResearchGate promptly provides a graph to support its scoring system. In the absence of any external reference, my graph can shoot through the roof or drop to the baseline (zero) on the strength of a single ‘view’ of my research papers.
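A back-of-the-envelope sketch shows why one view can swing the whole picture. Assuming (and this is only my guess about how the plot is drawn) that the graph is scaled to nothing but its own weekly counts, a single extra or missing view redraws the entire curve:

```python
# Hypothetical weekly view counts for a paper with little traffic.
weekly_views = [1, 1, 0, 2, 1]

# If the plot is scaled to its own maximum (an assumption about how
# ResearchGate draws the graph), one extra view sets the new 'roof'
# and one missing view drops the line to the baseline.
peak = max(weekly_views)
normalized = [v / peak for v in weekly_views]
print(normalized)  # [0.5, 0.5, 0.0, 1.0, 0.5]
```

With counts this small, such a graph conveys noise, not impact.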

What is this graph? How is it scored? Who are the viewers? Does the site record all views of my research papers across the web, or only those on the ResearchGate website? Are my papers curated on the ResearchGate site? Do only views from members count? And there are at least three different scores shown for each researcher on the website: a ‘Total impact’ point, an RG Score, and an ‘Impact point’. What are they, and how do you make any sense of them? With all these questions unanswered, I don’t think it is clear what they are scoring, or to what end.

As for the impact scores, several lab technicians have much larger scores than some Principal Investigators. These technicians have never published a first-author or senior-author paper, yet they score big in the ResearchGate scheme. What impact should we read into that? It is not that a lab technician’s research contribution is unimportant, but if the RG Score is merely a ‘contribution score’, then it contaminates the scores of ‘impactful researchers’.

The ResearchGate website claims that it was started by scientists to ‘Connect, collaborate and discover scientific publications, jobs and conferences’. Then why score research impact in a manner that makes sense to no one? Have the founders of the ResearchGate network been smart enough to figure out that all humans, whether lay people or trained scientists, share the weakness of vanity and are willing to take an ego trip on bogus scores?

Author ranking system: ‘Impact factor’ of the last author.

We all know that there is very little room at the first-author position on any scientific paper. There can be only one name. Even if two researchers contributed equally to the paper, only one name will appear at the front of the author list. Under the current convention, the other equally contributing author cannot put his name first even on his own résumé. That’s a bummer!

Consider another scenario: a young researcher, the major contributor to a paper, is on his way to becoming an independent researcher. He writes the manuscript and has to decide on the author list. Whom should he put as first author? And as last author? Although there are collaborating scientists, their contributions are too small to merit first authorship. So the researcher takes the first authorship and also declares himself the corresponding author. Problem solved! Not exactly. By not occupying the last-author slot, this researcher has just lost a major point toward being recognized as an expert in his field.

Both these cases illustrate an existing problem with author ranking on a paper. It is a lesser-known fact of scientific publication that funding agencies (including the NIH), journals, and often hiring authorities use software to rank the ‘impact factor’ of the authors of a publication. The NIH uses such software to determine who the experts in a research field are; these ‘experts’ are then invited to study sections to review grant applications. Journals use the same kind of software to identify potential reviewers for manuscripts.

On the surface, the idea sounds reasonable. However, there is a serious flaw in this reliance on software to select ‘experts’. These tools are mostly primitive and are not designed to rank contributions in multi-author papers. They are heavily biased towards the ‘senior author’, whom they identify by a single criterion: the last position on the author list. Selecting experts by such a faulty method can have ridiculous consequences.

Recently, a well-established journal asked a newly minted postdoc to review a research manuscript. The postdoc was thrilled by the opportunity and took up the challenge. However, we learnt that the scope and content of the manuscript were clearly beyond his expertise. I don’t know what happened to the manuscript, but I would like to think that there are safeguards against such anomalies. I must clarify that I am not against inviting new researchers to participate and contribute to the functioning of the scientific community. However, this should be a deliberate choice by program officers and journal editors, not a mistake; otherwise it will erode confidence in the validity of the process.

In case you are curious, a current ranking system used by the NIH, for example, gives the highest score to the author whose name appears last on a paper; the software treats the last author as the senior author. The next highest score goes to the first author. Finally, it does not matter where your name falls between the first and the last positions: the software assigns every ‘contributing author’ the same low score.
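To make the flaw concrete, here is a minimal sketch of such a position-based scheme. The actual weights used by the NIH software are not public; the numbers below are invented purely to illustrate the last-author bias and the flattening of every middle author:

```python
def author_scores(authors,
                  last_weight=3.0,    # 'senior author' premium (hypothetical)
                  first_weight=2.0,   # first-author score (hypothetical)
                  middle_weight=0.5): # every middle author, flattened
    """Score authors purely by their position in the author list.

    A minimal sketch of the scheme described above; the real weights
    are not published, so these numbers are illustrative only.
    """
    scores = {}
    for i, name in enumerate(authors):
        if i == len(authors) - 1:
            scores[name] = last_weight    # last position wins outright
        elif i == 0:
            scores[name] = first_weight
        else:
            scores[name] = middle_weight  # positions 2..n-1 all look alike
    return scores

# The young researcher from the scenario above, as first and
# corresponding author, is outscored by whoever ended up last:
print(author_scores(["Young A", "Tech B", "Collab C", "Senior D"]))
# {'Young A': 2.0, 'Tech B': 0.5, 'Collab C': 0.5, 'Senior D': 3.0}
```

Notice that nothing in this scheme distinguishes a middle author who ran half the project from one who contributed a single reagent.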

I see an irony here. Traditionally, the last author is the senior author who directs the project and, in most cases, provides the funding and laboratory space for the scientific work. If you want to find the experts, let common sense prevail: a simple PubMed search should suffice. Why do we need technological voodoo and a complex scoring system to discover what is already known?
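To show how little machinery that common-sense check requires, here is a short sketch using NCBI’s public E-utilities API. The author name and topic below are placeholders, and a real screening would of course still need a human to read the results:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_paper_count(author, topic=None):
    """Count PubMed records for an author, optionally within a topic."""
    term = f"{author}[Author]"
    if topic:
        term += f" AND {topic}"
    params = urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

# e.g. how many autophagy papers has a given researcher published?
# ("Smith J" and "autophagy" are placeholder values.)
print(pubmed_paper_count("Smith J", "autophagy"))
```

Anyone with a browser can run the same query on PubMed itself without writing a line of code; the point is that the information is already in the open.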