Sabotaging experiments and flat tires.

This week's Science magazine carried a story (http://www.ncbi.nlm.nih.gov/pubmed/24604172) about a postdoctoral fellow at Yale whose experiments were sabotaged by a fellow worker. As usual, there was some drama in the fault-finding and blame-throwing that followed, but one interesting detail surfaced: the event was considered a laboratory prank, not a serious offense. The article goes on:

“The complex case raises a host of questions about how to deal with sabotage, a type of misbehavior that some scientists believe is more common than the few known cases suggest. One key point of debate is whether ruining someone’s experiments should fall under the definition of research misconduct, which is usually restricted to fabricating or falsifying data and plagiarism. Some experts argue that wrecking experiments, while terrible, is more akin to slashing a fellow researcher’s tires than to making up data.”

Seriously? Slashing a tire is just that: a display of displeasure at a person or at something that person did. Even then, its validity as retribution exists only in the mind of the perpetrator. Sabotaging an experiment is much more than that. The saboteur not only wants to demonstrate displeasure but also goes to the length of discrediting the targeted scientist's work. It is far more sinister: eroding the victim's credibility is precisely the goal of sabotaging a scientific experiment. There are now 'legal' questions being attached to it:

“Whether sabotage belongs under ORI’s purview is questionable, Rasmussen says. A long and contentious debate took place in the 1990s over whether the U.S. federal definition of research misconduct should include anything beyond fabrication, falsification, and plagiarism, commonly referred to as FFP. Some argued that other types of bad behavior, such as sexual harassment or vandalism, could constitute research misconduct as well; others said that would open the floodgates to all kind of accusations, and that such misdeeds could be dealt with through other mechanisms.”

It does not matter whether sabotage comes under the ORI's purview. The decency of a mentor and the institution demands that the incident be reported to the authorities. Essentially, the fear among faculty members is that they would be considered lousy managers for letting it happen. They take it too personally, distort the facts, and penalize the victims. In doing so, they undermine their own credibility and promote a dishonest view of the scientific world.

In the era of print news, this ploy of obfuscation might have worked for lab managers and university authorities. But in an era of Facebook, Twitter, and lightning-fast worldwide communication, such an approach may backfire and undermine the credibility of scientific researchers in general. Petty bickering aside, there is no place for egregious acts of sabotaging anyone's experiments, and such people should be removed from the lab quickly before they do bigger damage to the core of scientific values.

Coming soon: Don’t be an asshole reviewer!

Gather the reviews that you got for your research papers and grant applications. Everyone has one or more of those idiotic reviewers’ comments. Bring them out to have some fun. 🙂

Pope’s resignation and scientific research.

You may wonder what Pope Benedict's resignation has to do with scientific research. Well, not much. But the discussions that followed his resignation may be relevant.

A prominent question arose: would there be major changes after this pope is gone? Analysts looked at the roster of the College of Cardinals and observed that the cardinals are relics of the past. They were inducted by either Pope John Paul II or Pope Benedict, and they espouse the old ideas of their 'mentors'. So a consensus emerged that, given the pedigree and age of the cardinals, major reforms cannot be expected. Bummer!

The scientific research establishment is a living fossil. Like the Church, it is recalcitrant to change and resistant to new ideas. The policies, review processes, money allocation, research activities, hiring of scientists, publication of research papers, and decisions on tenure and promotion of faculty are all conducted in the spirit of a medieval feudal system.

A scientist is supposed to push the boundaries of knowledge. Unfortunately, the old duds on review committee rosters are impervious to new ideas. Some are mean and greedy, have outlived their scientific utility, and are unable to grasp the explosion of knowledge in modern science. Many are insecure and bitterly critical of new developments, yet they pretend to be broad-minded. Others are fools who still hold the idea of 'hypothesis-driven research' as sacred. Together, they have stymied the progress of science and the emergence of new ideas more than any politician could ever do. Scientific progress cannot be achieved at its fullest if research is judged by scientists entrenched in their archaic ideas.

The question is not whether the old scientists sitting at the helm should be replaced by young blood, but how soon it should be done. Take action: write to your elected representative. Every voice counts.

Fire the editor.

If a paper is retracted because of falsified data, the authors probably did terribly wrong things in the name of science. But there is another side of the equation: the journal editor.

Commercial journals make money by publishing scientists' work. To keep their circulation and impact factor high, they have to lure in manuscripts that effect a 'paradigm shift'.

Contrary to the common fallacy that high-profile journals are brutally objective in manuscript selection, their editors give authors plenty of unnecessary opportunities to resubmit shoddy work. In principle, they send a letter saying the manuscript is rejected but that they would consider it as a new submission if the reviewers' concerns are addressed. In practice, they routinely override valid criticisms and concerns of the reviewers in order to publish the paper.

If an editor overrides the reviewers' concerns and the paper is later retracted, what should be done? I have yet to find out how a board of editors acts under these circumstances. As far as I know, there are no serious consequences for such a lapse of editorial judgement.

The EMBO Journal has adopted a policy of publishing the review proceedings, should the authors agree to it. Such a policy should be embraced by every decent scientific journal because it affirms that readers are intelligent scientists who will understand the limitations of the research.

As for an editorial veto of the reviewers' concerns that leads to the retraction of a paper, some accountability is expected, not only for the commercial success of the journal but also because taxpayers' money is involved. I would say, 'Fire the editor.'

How to steal scientific ideas.

Science is a business of ideas. By its very definition, researchers are required to generate new ideas. However, ideas do not pop up in a vacuum. Astute researchers have to master the literature, learn where the gaps in the current field of research lie, and then find a feasible way to fill those gaps.

The way current research training is done, the majority of researchers eventually become rigid in their ideas. Their research becomes dull and boring. In the name of 'detailed study' they keep burrowing deeper into descriptive research. Years of battles over paper publication, failed grant applications, the stress of obtaining tenure, and the load of teaching wear them out. Only a few remain as enthusiastic as they were at the beginning of their careers. Of those who do, most are not driven by scientific inquiry but by the social and political thrill of it.

Surviving on the stolen ideas of trainees and postdocs becomes a viable mode of academic life. But it has to be done in a sophisticated way. Here are a few simple ways to do it:

1. 'Encourage' every trainee applicant to write a two-page mock research proposal. This is a shotgun approach whereby anyone showing an interest in your research can be asked to provide ideas about what to do. You then take those ideas and adopt them in your current research.

2. Hold group discussions and brainstorming sessions in the lab. Pretend that you are helping people bring out their best. Make them bust their ass to beat each other's ideas, and then pick all the good ones as your own.

3. Once a trainee presents a great idea with some interesting preliminary data, kill his/her enthusiasm by saying that the idea is useless, not relevant, premature, too complex for the current state of science, etc. Over the next few months, gently incorporate the idea into your casual talks. Finally, present the idea as your own and give the project to someone other than its originator.

4. Make your trainees write a fellowship proposal. Incorporate their questions as an aim in your own grant. Pretend it was all your own to begin with.

There are many more subtle ways you can steal your trainees' ideas and call them your own. Through years of toiling under your own mentor, you have consciously or unconsciously picked up techniques to put down your colleagues and steal intellectual property. Now it is your turn to perpetuate it. Do it with style, do it with authority, and when challenged, you can always say that all data and ideas belong to the NIH or the institution. You only happen to be an agent of theft (read: hired thief).

There are other, better ways as you climb the ladder of your academic career. You can steal from other labs by being a reviewer. Oh, don't give me the shit about ethics and confidentiality. You know what I mean.

If everything else fails, you can always resort to saying that ideas are not novel; it is the ability to materialize them that matters.

Lawyers are universally loathed for their ability to fudge the truth. In reality, scientists can be worse than lawyers. They wear the cloak of honesty and objectivity, but the unscrupulous ones are constantly twisting the truth, presenting half-truths, and backstabbing with the hidden dagger of greed and deception.

One PI = One R01 grant.

The great economic crisis in the Western world has affected academic and research institutions. One of the major funding agencies, the NIH, has seen an effective funding cut that has translated into a reduction in both the number of research grants and the amount of money apportioned to them. The situation has reached a crisis level. Yet there seems to be no effect on the 'higher echelons' of the research community.

Research dollars are disproportionately distributed among researchers. Although we resent the notion that 1% of the US population possesses 90% of the wealth, we do not react the same way to the financial disparity in scientific research. Relatively few scientists have monopolized the major chunk of taxpayers' dollars, while a large number of competent and innovative scientists go without. This needs to end!

In these difficult times, everyone is required to sacrifice a little. We ought to ensure that publicly funded scientific research is distributed to all competent scientists and not only to the members of a scientific power-broker cartel. There is no obvious reason why a researcher should hold more than one R01 grant, especially in a tough economic situation. By adopting One PI = One R01, the NIH could support thousands more new scientists and diversify the scientific research base. In doing so, the NIH would promote innovative research and catalyze scientific growth.
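
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder chosen only for illustration, not a real NIH statistic; the point is simply that each extra R01 held by one PI is an award that could instead fund another investigator.

```python
# Back-of-the-envelope sketch of the One PI = One R01 argument.
# All numbers below are hypothetical placeholders, not real NIH statistics.

pi_grant_counts = {1: 20000, 2: 4000, 3: 800, 4: 200}  # hypothetical: PIs holding 1, 2, 3, 4 R01s

total_grants = sum(n * count for n, count in pi_grant_counts.items())
total_pis = sum(pi_grant_counts.values())

# Under One PI = One R01, each multi-grant PI keeps one award;
# the remaining awards could fund additional investigators.
freed_grants = sum((n - 1) * count for n, count in pi_grant_counts.items())

print(f"Total R01s: {total_grants}, current PIs: {total_pis}")
print(f"Awards freed for new PIs under One PI = One R01: {freed_grants}")
```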

We should also understand that the NIH cannot make law. To achieve One PI = One R01, we have to inform and educate our legislators about the benefits of this formula. Write to your House Representative and Senators asking them to broaden the productive and innovative scientific base by expanding participation by new scientists. Ask them to implement the One PI = One R01 formula. Its benefits to the scientific community are numerous: it will improve the educational standards of universities and bring talent back to our educational institutions. This is the only way to ensure that our dwindling scientific impact is regained.

Profitable reviews: Nature Immunology defends reviews.

In one of my previous rants (Click here), I wrote about how journals publish reviews to improve their impact factor. Now, in a recent issue of Nature Immunology (Click here for link), the editorial concedes the point:

“Because they are highly cited (on average, a review article is cited almost twice as often as a research paper), they help boost the impact factor of the journal.”

What the editorial does not mention is the trend some glossy journals have adopted of publishing special issues that consist predominantly of reviews.

It also does not take into account the harm done by 'expert reviews', in which an interpretation or speculation by an expert is perpetuated in the scientific literature as scientific fact. However, I would agree that scientists, not the journals, are responsible for testing the veracity of these 'facts'.

Author ranking system: ‘Impact factor’ of the last author.

We all know that there is very little room at the first-author position of any scientific paper. There can be only one name. Even if two researchers contributed equally to the paper, only one name will appear at the front of the author list. According to current convention, the other equally contributing author cannot put his name first even on his own résumé. That's a bummer!

Consider another scenario: a young researcher who is the major contributor to a paper is on his way to becoming an independent researcher. He writes the manuscript and has to decide the author list. Whom should he put as first author? And as last author? Although there are collaborating scientists, their contributions are too small to warrant first authorship. In this case, the researcher takes the first authorship and also declares himself the corresponding author. Problem solved! Not exactly. This researcher just lost a major point toward being recognized as an expert in his field.

Both these cases illustrate an existing problem of author ranking on a paper. It is a lesser-known fact of scientific publication that funding agencies (including the NIH), journals, and often hiring authorities use software to rank the 'impact factor' of the authors of a publication. The NIH uses such software to determine who the experts in a research field are. These 'experts' are then invited to study sections to review grant applications. Journals use the software to decide who could serve as potential reviewers for manuscripts.

On the surface, the idea sounds reasonable. However, there is a serious flaw in this reliance on software to select 'experts'. The software is mostly primitive and is not designed to rank contributions in multi-author papers. It is heavily biased towards the 'senior author', whom it identifies by only one criterion: the last position on the author list. Selecting experts by such a faulty method can have ridiculous consequences.

Recently, a well-established journal asked a newly minted postdoc to review a research manuscript. The postdoc was thrilled by the opportunity and took up the challenge. However, we learnt that the scope and content of the manuscript were clearly beyond his expertise. I don't know what happened to the manuscript, but I would like to think that there are safeguards against such anomalies. I must clarify that I am not against inviting new researchers to participate and contribute to the functioning of the scientific community. However, this should be a deliberate choice by program officers and journal editors; it should not happen by mistake. Otherwise it will erode confidence in the validity of the process.

In case you are curious, a current ranking system used by the NIH, for example, gives the highest score to the author whose name appears last on a paper; the software considers the last author to be the senior author. The next-highest score goes to the first author. Finally, it does not matter where your name falls between the first and the last author: the software assigns you the same low score reserved for 'contributing authors'.
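
To see how blunt such position-based scoring is, here is a minimal sketch of the logic described above. The function and the specific weights are hypothetical illustrations of the idea, not the actual NIH software.

```python
# Hypothetical author-position scoring, mimicking the logic described above:
# last author scores highest, first author next, and every middle author
# gets the same low 'contributing author' score.

def author_score(position: int, n_authors: int) -> float:
    if n_authors == 1 or position == n_authors:
        return 1.0      # last author, treated as the 'senior author'
    if position == 1:
        return 0.7      # first author
    return 0.2          # all middle authors, regardless of contribution

# Example: a six-author paper; the second 'equal contributor' scores no better
# than the fifth author.
print([author_score(p, 6) for p in range(1, 7)])   # [0.7, 0.2, 0.2, 0.2, 0.2, 1.0]
```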

I see an irony here. Traditionally, the last author is the senior author who directs the project and, in most cases, provides the funding and laboratory space for the work. If you want to find the experts, let common sense prevail: a simple PubMed search should suffice. Why do we need technological voodoo and a complex scoring system to discover what is already known?

Scientific misconduct debate: The idea is getting traction.

We have all wondered about the debate over scientific misconduct and the utter lack of accountability demanded by the 'system'. Earlier, I wrote on this blog (Click here) that the privilege of using enormous amounts of public funds requires accountability from scientists. Now, in the current issue of EMBO Reports, this idea has been featured by one of their editors (Click here). In addition, the journal commissioned at least three articles addressing different but related aspects of the rampant issues in contemporary scientific research.

Journals should not only concern themselves with the quality and validity of hypotheses, theories, and data; they should also discuss how to improve the socio-economic framework of scientific research. Discussing the 'bread and butter' issues of research is as important as, if not more important than, vague policy matters. At this point I should say that since the days of Frank Gannon as editor of EMBO Reports, the journal has commendably highlighted the concerns of researchers. Through advocacy of good research practices, public trust can be won and funding improved.

Impact Factor: Who are you bullshitting?

At the lunch table, I was thinking about an experiment when my attention turned to a colleague whose paper had recently been rejected by a medium-caliber (read: impact factor) journal; his supervisor had dissuaded him from addressing the reviewers' mean questions. Instead, he was gently cajoled into submitting the paper to a new open-access online journal. Despite the old adage that the good things in nature are free, he was unconvinced of the value of publishing in an open-access journal. That only shows how accustomed we are to scientific journals' policy of charging authors to 'defray the cost of publication'. In any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the 'behind the curtain' reviewers, remained unimpressed, unconvinced, and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest to collect impact factor points. Every scientist, at least in biomedical research, worries about the impact factor of the papers they publish. Many have figured out complex algorithms for which impact factor zone they should occupy to keep their research lab afloat. The impact factor frenzy has generated a class system in science in which publication in the journal with the glossiest cover page has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: 'If you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world.'

Ever wondered why the movie The Devil Wears Prada seemed eerily familiar to postdocs? The only difference is that the Devil's minion gets to wear glitzy clothes and give away a fabulous Bang & Olufsen phone; most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists; it has also affected the morale of the major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals: first, journals published by scientific societies, where most of the scientific work of soliciting, reviewing, and editing is done by real working scientists; and second, journals run by publishing powerhouses, which pluck energetic hotshot postdocs away to their ritzy offices as editors to run the business of scientific publishing.

The impact factor is determined by a commercial arm of a major publishing conglomerate whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller University Press journals. These journals were assigned low impact factors despite being darlings of a cross-section of the research community. Probably the failure to attract good papers, and the resulting loss of revenue, led them to publish a syndicated editorial challenging and ridiculing the impact factor system (Click here). Their arguments were cogent and the language was bold and challenging. It is not clear how, but their impact factors did improve. However, once they gained impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.

The impact factor is a crutch most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotion, and fiscal matters. Notice that I showered the adjectives on the committees, not on their members, who are generally intelligent people (including me). Overworked, unappreciated, and sometimes lazy and indifferent members of a committee do not want to be held responsible for making a decision. Therefore, they rely on the impact factor to show their 'objectivity'. If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of the publications that led to his recruitment. Had they selected him on the basis of their 'judgement', they would have been scoffed at by their peers and colleagues.

So, once you begin to treat the impact factor as an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that will take over the part of the system that traditionally relied on competing interests. Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing unfair. They can veto the impact factor by invoking their experience and judgement. Essentially, the reviewers are manipulating the system in their favor.

One may argue that eventually the system will 'normalize' so that no one is left with a clear, undue advantage. The truth is that it is the same old bullshit, now wearing the added objectivity armor of the impact factor.

In case you wondered how some journals achieve a high impact factor, it is quite revealing that the Annual Reviews series has some of the highest impact factors of all. Wow!! You would have thought that real research papers should be the winners. Apparently not! And therein lies the trick: most high-impact journals are highly cited not because of their published research papers but because of their review articles. It is not altruism that makes glitzy journals happy to let you download artistic slides for your PowerPoint presentations.
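
For the record, the arithmetic behind this trick is simple. The sketch below uses the standard two-year impact factor definition (citations received this year to items published in the previous two years, divided by the number of citable items from those years); the journal sizes and citation counts are made up purely for illustration.

```python
# How a handful of highly cited reviews can lift a journal's impact factor.
# All counts below are hypothetical.

def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received this year to items published
    in the previous two years, divided by the citable items from those years."""
    return citations / citable_items

research_items, research_citations = 200, 200 * 10   # 200 papers, 10 citations each
print(round(impact_factor(research_citations, research_items), 1))    # 10.0

reviews, review_citations = 20, 20 * 30              # add 20 reviews, 30 citations each
print(round(impact_factor(research_citations + review_citations,
                          research_items + reviews), 1))              # 11.8
```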

Although it is a great business plan to target lazy scientists who don't want to do their own legwork of literature review, there is another reason review articles are used to boost impact factor. Many shrewd scientists like to cite reviews published in high impact factor journals up front in their grant proposals and research papers. This way, a lazy reviewer can be convinced that because the topic was reviewed in a high-impact journal, it must be of great importance.

When I was a new postdoc, I learnt a valuable lesson in assessing the scientific caliber of a scientist. My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from the candidate's curriculum vitae:

1.  Throw out all the reviews he (or she) has listed.
2.  Take away all papers where the authorship falls beyond the second author position (unless he is the senior author).
3.  Trash all conference presentations and posters.
4.  Look at how regularly papers have been published and how good they are.  Yes, use your judgement.  A good paper does not need any assistance; you will know one when you see it (at least in a research area close to yours).  A rough code sketch of this filter appears below.
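
Here is that filter written out as a rough Python sketch. The record format and field names are my own invention for illustration; the point is only the filtering logic, not a real bibliometric tool.

```python
# A rough sketch of the CV filter described above (hypothetical record format).

publications = [
    {"title": "Review of pathway X",    "kind": "review",   "position": 1, "n_authors": 3},
    {"title": "Mechanism of process Y", "kind": "research", "position": 1, "n_authors": 6},
    {"title": "Z in a disease model",   "kind": "research", "position": 4, "n_authors": 7},
    {"title": "Preliminary W (poster)", "kind": "poster",   "position": 1, "n_authors": 2},
    {"title": "Regulation of Q",        "kind": "research", "position": 7, "n_authors": 7},
]

def keep(pub):
    if pub["kind"] in ("review", "poster", "conference"):   # rules 1 and 3
        return False
    is_senior = pub["position"] == pub["n_authors"]          # last (senior) author
    return pub["position"] <= 2 or is_senior                 # rule 2

for pub in filter(keep, publications):
    print(pub["title"])   # rule 4: read what is left and use your own judgement
```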

I think I agree with his style of assessment rather than the bullshit of the impact factor. Wouldn't you?