Sabotaging experiments and flat tires.

This week’s Science magazine carried a story (http://www.ncbi.nlm.nih.gov/pubmed/24604172) about a postdoctoral fellow at Yale whose experiments were sabotaged by a fellow worker. As usual, there was some drama in the fault-finding and blame-throwing that followed, but one interesting thing surfaced: the event was considered a laboratory prank, not a serious offense. The article goes:

“The complex case raises a host of questions about how to deal with sabotage, a type of misbehavior that some scientists believe is more common than the few known cases suggest. One key point of debate is whether ruining someone’s experiments should fall under the definition of research misconduct, which is usually restricted to fabricating or falsifying data and plagiarism. Some experts argue that wrecking experiments, while terrible, is more akin to slashing a fellow researcher’s tires than to making up data.”

Seriously? Slashing a tire is just that: a display of displeasure at a person or at his or her acts. Even then, its validity as retribution exists only in the mind of the perpetrator. Sabotaging an experiment is much more than that. The saboteur wants not only to demonstrate displeasure but also goes to the length of discrediting the targeted scientist’s work. It is far more sinister: eroding the victim’s credibility is the very goal of sabotaging a scientific experiment. There are ‘legal’ questions being attached to it now:

“Whether sabotage belongs under ORI’s purview is questionable, Rasmussen says. A long and contentious debate took place in the 1990s over whether the U.S. federal definition of research misconduct should include anything beyond fabrication, falsification, and plagiarism, commonly referred to as FFP. Some argued that other types of bad behavior, such as sexual harassment or vandalism, could constitute research misconduct as well; others said that would open the floodgates to all kind of accusations, and that such misdeeds could be dealt with through other mechanisms.”

It does not matter whether sabotage comes under the ORI’s purview. The decency of a mentor and of the institution demands that the incident be reported to the authorities. Essentially, the fear among faculty members is that they will be seen as lousy managers for letting it happen. They take it too personally, distort the facts, and penalize the victims. In doing so, they undermine their own credibility and promote a dishonest view of the scientific world.

In the era of print news, this ploy of obfuscation may have worked for lab managers and university authorities. But in an era of Facebook, Twitter, and lightning-fast worldwide communication, such an approach may backfire and undermine the credibility of scientific researchers in general. Petty bickering is one thing, but there is no place for egregious acts of sabotaging anyone’s experiments, and such people should be removed from the lab quickly, before they do greater damage to the core values of science.


Coming soon: Don’t be an asshole reviewer!

Gather the reviews you got for your research papers and grant applications. Everyone has received one or more of those idiotic reviewer comments. Bring them out to have some fun. 🙂

Fire the editor.

If a paper is retracted because of falsified data, the authors probably did terribly wrong things in the name of science. But there is another side of the equation: the journal editor.

Commercial journals make money by publishing scientists’ work. To keep their circulation and impact factors high, they have to lure in manuscripts that promise a ‘paradigm shift’.

Contrary to the common belief that high-profile journals are brutally objective in manuscript selection, their editors give authors plenty of unnecessary opportunities to resubmit shoddy work. In principle, they send a letter saying the manuscript is rejected but will be considered as a new submission if the reviewers’ concerns are addressed. In practice, they routinely override valid criticisms and concerns of the reviewers to publish the paper.

If an editor overrides the reviewers’ concerns and the paper is later retracted, what should be done? I have yet to find out how a board of editors acts under these circumstances. As far as I know, there are no serious consequences for such a lapse of editorial judgement.

The EMBO Journal has adopted a policy of publishing the review process, should the authors agree to it. Such a policy should be embraced by every decent scientific journal, because it affirms that readers are intelligent scientists who will understand the limitations of the research work.

As for an editorial veto of the reviewers’ concerns that leads to the retraction of a paper, some accountability is expected, not only for the commercial success of the journal but also because taxpayers’ money is involved. I would say, ‘Fire the editor’.

How to steal scientific ideas.

Science is a business of ideas. By its very definition, research requires generating new ideas. However, ideas do not pop up in a vacuum. Astute researchers have to master the literature, learn where the gaps in their field lie, and then find a feasible way to fill those gaps.

The way current research training works, the majority of researchers eventually become rigid in their ideas. Their research becomes dull and boring. In the name of ‘detailed study’ they keep burrowing deeper into descriptive research. Years of battling over publications, failed grant applications, the stress of obtaining tenure, and the load of teaching wear them out. Only a few remain as enthusiastic as they were at the beginning of their careers. Of those who do, most are driven not by scientific inquiry but by the social and political thrill of it.

Surviving on the stolen ideas of trainees and postdocs becomes a viable mode of academic life. But it has to be done in a sophisticated way. Here are a few simple methods:

1. ‘Encourage’ every trainee applicant to write a two-page mock research proposal. This is a shotgun approach whereby anyone showing an interest in your research can be asked to provide ideas about what to do. You then take those ideas and fold them into your current research.

2. Hold group discussions and brainstorming sessions in the lab. Pretend that you are helping people bring out their best. Make them bust their asses to beat each other’s ideas, then pick all the good ones as your own.

3. Once a trainee presents a great idea with some interesting preliminary data, kill his or her enthusiasm by saying that the idea is useless, not relevant, premature, too complex for the current state of science, etc. Over the next few months, gently incorporate the idea into your casual talks. Finally, hand the project, now your own, to someone other than the originator of the idea.

4. Make your trainees write fellowship proposals. Incorporate their questions as an aim in your own grant. Pretend it was all yours to begin with.

There are many more subtle ways to steal your trainees’ ideas and call them your own. Through years of toiling under your own mentor, you have consciously or unconsciously picked up techniques for putting down colleagues and stealing intellectual property. Now it is your turn to perpetuate them. Do it with style, do it with authority, and when challenged, you can always say that all data and ideas belong to the NIH or the institution. You just happen to be an agent of the theft (read: hired thief).

There are other, better ways as you climb the ladder of your academic career. You can steal from other labs by being a reviewer. Oh, don’t give me that shit about ethics and confidentiality. You know what I mean.

If everything else fails, you can always resort to saying that ideas are never novel; it is the ability to materialize them that matters.

Lawyers are universally loathed for their ability to fudge the truth. In reality, scientists can be worse than lawyers. They wear the cloak of honesty and objectivity, but the unscrupulous ones are constantly twisting the truth, presenting half-truths, and backstabbing with the hidden dagger of greed and deception.

One PI = One R01 grant.

The great economic crisis in the Western world has hit academic and research institutions. The NIH, one of the major funding agencies, has seen effective funding cuts that have translated into a reduction in both the number of research grants and the amount of money apportioned to them. The situation has reached crisis level. Yet there seems to be no effect on the ‘higher echelons’ of the research community.

Research dollars are disproportionately distributed among researchers. Although we resent the notion that 1% of the US population possesses 90% of the wealth, we do not react the same way to the financial disparity in scientific research. Relatively few scientists have monopolized the major chunk of taxpayers’ dollars while a large number of competent and innovative scientists go without. This needs to end!

In these difficult times, everyone is required to sacrifice a little. We ought to ensure that publicly funded scientific research is distributed among all competent scientists and not only among the members of the scientific power-broker cartel. There is no obvious reason why a researcher should have more than one R01 grant, especially in a tough economy. By adopting One PI = One R01, the NIH can support thousands more new scientists and diversify the scientific research base: every second or third R01 a senior PI relinquishes is a grant that can launch a new lab. By doing so, the NIH will promote innovative research and catalyze scientific growth.

We should also understand that the NIH cannot make law. To achieve One PI = One R01, we have to inform and educate our legislators about the benefits of this formula. Write to your House Representative and Senators asking them to consider broadening the productive and innovative scientific base by expanding participation by new scientists. Ask them to implement the One PI = One R01 formula. Its benefits to the scientific community are numerous: it will improve the educational standards of universities and bring talent back to our educational institutions. This is the only way to ensure that our dwindling scientific impact is regained.

Scientific Research: A Ponzi Scheme.

Recently, a friend and colleague blurted out, “Man, academic scientific research is a Ponzi scheme.” At first I laughed, but I soon saw his point of view. My friend is primarily a clinician. His training, his interest in understanding the bases of disease, and his hope of discovering new therapeutic targets had brought him into laboratory research.

He quickly realized that there was a chasm between his lofty ideals of studying a biological phenomenon and his mentor’s single-minded interest in using his data to fetch money. The enthusiasm and motivation that had been his strength in conquering the daily grind of lab work and failed experiments were suddenly overcome by despair. He is a good scientist who carefully designs and plans his experiments and is resourceful and skilled enough to execute them well. Unfortunately, he decided to return to the clinic without completing his research project.

Under ordinary circumstances I would not have thought much about his return to the clinic. Such departures are not uncommon among physician-scientists who do not like the long-drawn battle of laboratory science against leaking gels, failing western blots, suboptimal reagents, and a long dark tunnel of uncertainty without any glimmer of light at the end. Many never see how the abstract concepts of basic research could be translated into clinically relevant knowledge. But our guy has the smarts.

Like a painful sliver, his analogy of scientific research as a Ponzi scheme stuck in my head. Of course, I am not immune to the widely publicized case of Bernie Madoff’s financial bungling. I googled ‘Ponzi scheme’ to find that…

In a Ponzi scheme potential investors are wooed with promises of unusually large returns, usually attributed to the investment manager’s savvy, skill or some other secret sauce. (Reference:  The New York Times)

Scientific research is indeed like a Ponzi scheme. A very small number of people (established investigators) entice a very large number of young people (investors) with the dream of a very large profit (Nobel Prize, glory, publications, publicity, creative satisfaction, etc.). To keep the scheme running, they do disclose the ‘fine print’: not everyone gets there, and the harder you work, the larger the reward. Cynics call it a ‘rat race’, but I think Ponzi scheme is a better description.

Of course, once in a while a few from this large pool of investors are selected to receive the big profit that was promised to all. They are given awards, positions, and attention. Usually these are the mediocre lot. The reason for this favor is that the mediocre are either unsure of their abilities or too sure of them. They stay indebted to the generosity of the ‘system’, and to display their loyalty to it, they propagate the same scheme. It is the pyramid scheme taken to the extreme.

Does this mean that there are no smart people in scientific research? On the contrary, there are a large number of smart people who keep pushing the leading edge further and beyond. They are pioneers with a true passion for advancing knowledge, genuinely interested in understanding the nature of things. They are not the wheeler-dealers who relentlessly try to fill the round holes of their hypotheses with the square pegs of data.

I am not sure whether my friend will ever return to laboratory research, but with a simple remark he gave me a different point of view. We all thrive on such diverse points of view in research, and I think he did shift my paradigm.

Profitable reviews: Nature Immunology defends reviews.

In one of my previous rants (click here), I wrote about how journals publish reviews to improve their impact factors. Now, in a recent issue of Nature Immunology (click here for the link), the editorial concedes:

“Because they are highly cited (on average, a review article is cited almost twice as often as a research paper), they help boost the impact factor of the journal.”
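For readers who have never looked under the hood, the arithmetic makes the incentive obvious. A journal’s two-year impact factor is a simple ratio (the standard definition, sketched here from memory):

$$
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
$$

A review counts as just one citable item in the denominator but, by the editorial’s own numbers, brings in roughly twice the citations of a research paper in the numerator, so every review published nudges the ratio upward.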

What the editorial does not mention is the trend among some glossy journals of publishing special issues that consist predominantly of reviews.

It also does not take into account the harm done by ‘expert reviews’, in which an expert’s interpretation or speculation is perpetuated in the scientific literature as scientific fact. However, I would agree that it is scientists, not the journals, who are responsible for testing the veracity of these ‘facts’.

Impact Factor: Who are you bullshitting?

At the lunch table, I was thinking about an experiment when my attention turned to a colleague whose paper had recently been rejected by a medium-caliber (read: impact factor) journal; his supervisor had dissuaded him from addressing the reviewers’ mean questions and instead gently cajoled him into submitting the paper to a new open-access online journal. Despite the old adage that the good things in nature are free, he was unconvinced of the value of publishing in an open-access journal. That only shows how used we are to scientific journals’ policy of charging authors to ‘defray the cost of publication’; in almost any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the ‘behind the curtain’ reviewers, remained unimpressed and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest to collect impact factor points. Every scientist, at least in biomedical research, worries about the impact factors of the papers he or she publishes. Many have worked out complex algorithms for which impact factor zone they must occupy to keep their research labs afloat. The impact factor frenzy has generated a class system in science in which publication in the journal with the glossiest cover page has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: ‘If you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world.’

Ever wonder why the movie The Devil Wears Prada seemed eerily familiar to postdocs? The only difference is that the Devil’s minion gets to wear glitzy clothes and give away a fabulous Bang & Olufsen phone; most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists; it has also affected the morale of the major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals: first, journals published by scientific societies, where most of the scientific work of soliciting, reviewing, and editing is done by real working scientists; and second, journals run by publishing powerhouses, which pluck energetic hotshot postdocs into their ritzy offices as editors to run the business of scientific publishing.

The impact factor is determined by the commercial arm of a major publishing conglomerate, whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller University Press journals. These journals were assigned low impact factors despite being darlings of a cross-section of the research community. Probably the failure to attract good papers, and the resulting loss of revenue, led them to publish a syndicated editorial challenging and ridiculing the impact factor system (click here). Their arguments were cogent, and the language was bold and challenging. It is not clear how, but their impact factors did improve. However, once they gained impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.

The impact factor is a crutch most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotions, and fiscal matters. Notice that I showered the adjectives on the committees, not on their members, who are generally intelligent people (including me). Overworked, unappreciated, and sometimes lazy and indifferent members of a committee do not want to be held responsible for making a decision. Therefore, they rely on the impact factor to show their ‘objectivity’. If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of the publications that led to his recruitment. Had they selected him on the basis of their ‘judgement’, they would have been scoffed at by their peers and colleagues.

So, once you begin to treat the impact factor as an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that will take over the part of the system that traditionally relied on competing interests. Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing unfair: they can veto the impact factor by invoking their experience and judgement. Essentially, the reviewers manipulate the system in their favor.

One may argue that eventually the system will be ‘normalized’ so that no one has a clear, undue advantage. The truth is that it is the same old bullshit, now wearing the objectivity armor of the impact factor.

In case you wondered how some journals achieve high impact factors, it is quite revealing that the Annual Reviews series has some of the highest impact factors of all. Wow!! You would have thought that real research papers would be the winners. Apparently not! And therein lies the trick: most high-impact journals are highly cited not because of their published research papers but because of their review articles. It is not altruism that makes the glitzy journals happy to let you download artistic slides for your PowerPoint presentations.

Although it is a great business plan to target lazy scientists who don’t want to do their own legwork of reviewing the literature, there is another reason for using review articles to boost impact factor. Many shrewd scientists like to cite reviews published in high impact factor journals up front in their grant proposals and research papers. This way, a lazy reviewer can be convinced that, because the topic was reviewed in a high-impact journal, it must be of great importance.

When I was a new postdoc, I learned a valuable lesson in assessing the scientific caliber of a scientist. My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from his or her curriculum vitae:

1. Throw out all the reviews he (or she) has listed.
2. Take away all papers on which the candidate’s authorship falls beyond second author (unless he or she is the senior author).
3. Trash all conference presentations and posters.
4. Look at how regularly the remaining papers have been published and how good they are. Yes, use your judgement. A good paper does not need any assistance; you will know one when you see it (at least in an area of research close to yours). (Steps 1–3 are mechanical enough to script; see the sketch below.)
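As a minimal sketch of that mechanical part (the record layout and field names here are my own invention, not my advisor’s):

```python
# Toy CV filter implementing steps 1-3 above.
# The entry format (dicts with 'type', 'author_position', 'is_senior_author')
# is hypothetical; map your favorite bibliography export onto it.

def filter_cv(entries):
    kept = []
    for e in entries:
        # Steps 1 and 3: drop reviews, conference presentations, and posters.
        if e["type"] in ("review", "conference", "poster"):
            continue
        # Step 2: keep only first/second-author or senior-author papers.
        if e["author_position"] > 2 and not e["is_senior_author"]:
            continue
        kept.append(e)
    return kept

cv = [
    {"type": "paper",  "author_position": 1, "is_senior_author": False, "year": 2012},
    {"type": "review", "author_position": 1, "is_senior_author": False, "year": 2011},
    {"type": "paper",  "author_position": 5, "is_senior_author": False, "year": 2010},
    {"type": "poster", "author_position": 2, "is_senior_author": False, "year": 2010},
]
print(filter_cv(cv))  # only the 2012 first-author paper survives
```

Step 4, of course, cannot be scripted; quality and regularity still take a human eye.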

I agree with his style of assessment more than with the bullshit of the impact factor. Won’t you?

Should Scientific Misconduct be Criminalized?

It has been a while since the last post.  It was not a ‘mysterious disappearance’.  No, I have not been manhandled or killed.  Not yet.

I noticed an article (click here) reporting that some vigilante group has been sending accusatory notices targeting stem-cell researchers for their alleged wrongdoings. This has rattled researchers and publishers alike.

Well, if you look at it, the business of science has been given a lot of freedom to operate, and an enormous amount of trust has been placed in scientists’ integrity when it comes to their conduct.

Scientists obtain sumptuous chunks of money from the exchequer, and when bad things happen, they simply say, “Oops! We fucked up!” There are practically no consequences for their misdeeds.

Publishing a research paper is an enormous undertaking. It not only takes the time, money, and collaborative effort of the authors involved, but it also affects a huge number of researchers across the globe.

When someone produces and publishes fraudulent data in a major journal, it means that years of work and at least a quarter of a million dollars’ worth of time and reagents go down the toilet.

Who pays for this? The public pays. But all the culprit gets is a slap on the wrist: he or she is told not to participate in any publicly funded program in any manner, and sometimes the institution washes its hands of them. That’s it. In fact, in most cases the culprit returns to science and carries on.

When a junior scientist publishes fraudulent results, it takes a while before the results can be verified by other researchers. That is the ‘window of opportunity’ during which the junior scientist moves on to find a cushy job, and by the time the fraud is exposed, he or she has obtained job security. The senior scientist, on the other hand, has nothing to lose, because he or she can blame the person who has already left the lab. So it is convenient for everyone.

Anywhere else, one would be tried in criminal court for such misappropriation of public funds and, if found guilty, would likely be thrown in prison. Not in science.

Why?  Because it is a ‘noble’ profession.  Scientists walk with an aura around them that rivals that of the angels.  Some even think that they are gods.

So the question is: should such misbehavior by unscrupulous scientists be pardoned, or should it be considered a criminal act? Only the public can decide.

Scamming the Antibody.

Pick and choose.

Has anyone noticed the sudden mushrooming of companies offering antibodies? Anything you can think of: protein, DNA, acetylation, phosphorylation, methylation; nothing is sacred anymore. All of Nature’s secrets can be revealed by these antibodies.

In the antibody, researchers have found a wonderful tool for snubbing that stupid reviewer who doubts their hypothesis that their favorite protein carries a wonderful modification called ‘titillation’. If you isolate your favorite protein under inebriated conditions while looking at the full moon through your laboratory window (only if you have a window in your lab), the protein gets titillated. Hell, no one has any clue what this titillation of the protein does, but all you have to do is buy an anti-titillation antibody from the BigAss Biotech company and smear it on your immunoblot.

So what if more protein bands light up than are catalogued in the proteome database? You are on the front grille of the ‘omic’ discovery train. How can the reviewer ever refute your meticulous selection of the protein band that runs at the right size? For God’s sake (yes, I mean God, the ultimate reviewer), you even stripped (hopefully the blot) and reprobed with another funky antibody to again select the correct-size band. And you also know that if, in your Experimental Methods section, you write ‘anti-titillation antibody was used (BigAss Biotech, Timbuktu) according to the manufacturer’s protocol’, no one, not even God, can question its validity.

With the advent of proteomics, the new surge of antibody-selling companies is not surprising; more research than ever relies on antibodies. But the bigger reason is that selling antibodies is a lucrative business. It is almost a scam. There is a well-known antibody company that has earned its reputation and fortune by selling antibodies that do not work. Yet it has managed to stay in business for almost 20 years!

Others are no better. Another company pretends to support and promote science by sponsoring and organizing scientific conferences and symposia. It also invites free reviews of its products by researchers and end users. Oh, give me a break. If the end user had time, he or she would have written that paper they have been contemplating for months, or at least caught up on the laboratory notebook.

Well, there is another ploy: make antibodies to anything. It is easy. Search the database for a protein, find its cDNA sequence, produce recombinant protein in some kind of biological system, and inject the partially purified protein into your animal of choice. Antibody will be produced in a few weeks. Most of these companies do not do any testing or quality control.

If you ask the company’s technical support, they will offer you something like this: “Buy our antibody and see if it works. If it does not work, we will give you another antibody.”

No, thank you, sir! I am NOT going to test your lousy antibody at my expense! How stupid do I have to be, with my MD, PhD degrees, to buy a product at an inflated price with no guarantee, and to test it so that you can cite my work and my endorsement to make a killing?

Diamond price based on Borsheim.com

Did I say a killing? Yes, I did. Antibodies sell at a profit margin that would make De Beers look like the corner grocery store. Apparently they have knuckleheads working in their planning division; otherwise, they would quit the blood-diamond business and dive into the real blood business of making antibodies.

Antibodies cost more than diamonds! Do your math, baby! An aliquot of 100 micrograms (the usual packaging size) costs on average US$300. A 1-carat diamond (1 carat = 200 mg) costs approximately $10,000. Remember that the inflection point for the price rise in diamonds is at the 1-carat size, so this is a generous comparison. Work it out: the diamond costs about 5 cents per microgram, but the antibody costs about 3 US dollars per microgram, roughly sixty times more! Now that is a killing, considering that a diamond is forever, but an antibody…? Well, go figure!
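For the skeptics, here is the back-of-the-envelope arithmetic as a runnable check (the prices are the ballpark figures quoted above, not a market survey):

```python
# Price-per-microgram comparison using the ballpark figures from the text.
antibody_usd = 300.0      # typical 100-microgram aliquot
antibody_ug = 100.0

diamond_usd = 10_000.0    # approximate price of a 1-carat diamond
diamond_ug = 200_000.0    # 1 carat = 200 mg = 200,000 micrograms

antibody_rate = antibody_usd / antibody_ug  # -> 3.00 USD per microgram
diamond_rate = diamond_usd / diamond_ug     # -> 0.05 USD per microgram

print(f"Antibody: ${antibody_rate:.2f}/ug; diamond: ${diamond_rate:.2f}/ug")
print(f"Per microgram, the antibody costs {antibody_rate / diamond_rate:.0f}x the diamond")
```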

Just because the NIH gave you that money to spend, don’t spend it on crap. Demand the best and insist on quality control. If the companies fail to provide it, make your own antibody. There are numerous vendors who will produce antibodies for you at a fraction of the purchase cost. What are you scared of? In the worst case, your titillation will go undetected.