Gather the reviews you have received for your research papers and grant applications. Everyone has at least one of those idiotic reviewer comments. Bring them out and have some fun. 🙂
The great economic crisis in the Western world has affected academic and research institutions. The NIH, one of the major funding agencies, has seen an effective funding cut that has translated into a reduction in both the number of research grants and the amount of money apportioned to them. The situation has reached a crisis level. Yet there seems to be no effect on the 'higher echelons' of the research community.
Research dollars are disproportionately distributed among researchers. Although we resent the notion that 1% of the US population possesses 90% of the wealth, we do not react the same way to the financial disparity in scientific research. Relatively few scientists have monopolized the major chunk of taxpayers' dollars, while a large number of competent and innovative scientists receive little or nothing. This needs to end!
In these difficult times, everyone is required to sacrifice a little. We ought to ensure that publicly funded scientific research is distributed to all competent scientists, not only to the members of the scientific power-broker cartel. There is no obvious reason why a researcher should hold more than one R01 grant, especially in a tough economy. By adopting One PI = One R01, the NIH can support thousands more new scientists and diversify the scientific research base. In doing so, the NIH will promote innovative research and catalyze scientific growth.
We should also understand that the NIH cannot make law. To achieve One PI = One R01, we have to inform and educate our legislators about the benefits of this formula. Write to your House Representative and Senators, asking them to broaden the productive and innovative scientific base by expanding the participation of new scientists. Ask them to implement the One PI = One R01 formula. Its benefits to the scientific community are numerous: it will improve the educational standards of our universities and bring talent back to our educational institutions. This is the only way to ensure that our dwindling scientific impact is regained.
We all know that there is very little room at the first-author position of any scientific paper. There can be only one name. Even if two researchers contributed equally to the paper, only one name will appear at the front of the author list. Under the current convention, the other equally contributing author cannot list his name first, even on his own resume. That's a bummer!
Consider another scenario: a young researcher who is the major contributor to a paper is on his way to becoming an independent researcher. He writes the manuscript and must decide the author order. Whom should he put as first author? And as last author? Although there are collaborating scientists, their contributions are too small to warrant first authorship. In this case, the researcher takes the first authorship and also declares himself the corresponding author. Problem solved! Not exactly. This researcher has just lost a major point toward being recognized as an expert in his field.
Both these cases illustrate an existing problem of author ranking on a paper. It is a lesser-known fact of scientific publishing that funding agencies (including the NIH), journals, and often hiring authorities use software to rank the 'impact factor' of the authors of a publication. The NIH uses such software to determine who the experts in a research field are. These 'experts' are then invited to study sections to review grant applications. Journals use this software to decide who could be potential reviewers for manuscripts.
On the surface, the idea sounds reasonable. However, there is a serious flaw in this reliance on software to select 'experts'. The software is mostly primitive and is not designed to rank contributions in multi-author papers. It is highly biased toward the 'senior author', whom it identifies by a single criterion: the last position on the author list. Selecting experts by such a faulty method can have ridiculous consequences.
Recently, a well-established journal asked a newly minted postdoc to review a research manuscript. The postdoc was thrilled by the opportunity and took up the challenge. However, we learned that the scope and content of the manuscript were clearly beyond his expertise. I don't know what happened to the manuscript, but I am glad to think that there are safeguards against such anomalies. I must clarify that I am not against inviting new researchers to participate in and contribute to the functioning of the scientific community. However, this should happen as a deliberate choice by program officers and journal editors, not by mistake. Otherwise it will erode confidence in the validity of the process.
In case you are curious: one current ranking system used by the NIH, for example, gives the highest score to the author whose name appears last on a paper. The software considers the last author to be the senior author. The next-highest score goes to the first author. Finally, it does not matter where your name falls between the first and last positions; the software assigns every 'contributing author' the same low score.
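To make the absurdity concrete, here is a minimal sketch of a positional scoring rule like the one described. This is purely hypothetical illustration code (the function name and the score values are my own inventions, not the NIH's actual system): last position scores highest, first position next, and every middle author gets the same low score regardless of contribution.

```python
def author_scores(authors, last_score=3.0, first_score=2.0, middle_score=1.0):
    """Hypothetical positional scoring: last author is treated as 'senior',
    first author scores next, and all middle authors are tied at a low score."""
    scores = {}
    for i, name in enumerate(authors):
        if i == len(authors) - 1:
            scores[name] = last_score    # assumed to be the senior author
        elif i == 0:
            scores[name] = first_score
        else:
            scores[name] = middle_score  # all 'contributing authors' tied
    return scores

print(author_scores(["Alice", "Bob", "Carol", "Dave"]))
# {'Alice': 2.0, 'Bob': 1.0, 'Carol': 1.0, 'Dave': 3.0}
```

Note that under such a rule Bob and Carol are indistinguishable even if one of them did most of the work, which is exactly the flaw described above.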
I see an irony here. Traditionally, the last author is the senior author who directs the project and, in most cases, provides the funding and laboratory space for the scientific work. If you want to identify the experts, let common sense prevail: a simple PubMed search should suffice. Why do we need technological voodoo and a complex scoring system to discover what is already known?