Coming soon: Don’t be an asshole reviewer!

Gather the reviews that you got for your research papers and grant applications. Everyone has one or more of those idiotic reviewers’ comments. Bring them out to have some fun. 🙂

How to steal scientific ideas.

Science is a business of ideas. By its very definition, researchers are required to generate new ideas. However, ideas do not pop up in a vacuum. Astute researchers have to master the literature, learn where the gaps in the current field of research exist, and then find a feasible way to fill those gaps.

The way current research training is done, the majority of researchers eventually become rigid in their ideas. Their research becomes dull and boring. In the name of ‘detailed study’ they keep burrowing deeper into descriptive research. Years of battles with paper publications, failed grant applications, the stress of obtaining tenure, and the load of teaching wear them out. Only a few remain as enthusiastic as they were at the beginning of their careers. Of those who remain enthusiastic, most are driven not by scientific inquiry but by the social and political thrill of it.

Surviving on the stolen ideas of trainees and postdocs becomes a viable means of sustaining their academic lives. But they have to do it in a sophisticated way. Here are a few simple ways to do it:

1. ‘Encourage’ every trainee applicant to write a two-page mock research proposal. This is a shotgun approach whereby anyone showing an interest in your research can be asked to provide ideas of what to do. You then take those ideas and adopt them into your current research.

2. Hold group discussions/brainstorming sessions in the lab. Pretend that you are helping people bring out their best. Make them bust their asses to beat each other’s ideas and then pick all the good ones as your own.

3. Once a trainee presents a great idea with some interesting preliminary data, kill his/her enthusiasm by saying that the idea is useless, not relevant, premature, too complex for the current state of science, etc. Over the next few months, gently incorporate the idea into your casual talks. Finally, give the project, as your own, to someone other than the originator of the idea.

4. Make your trainees write a fellowship proposal. Incorporate its research questions as an aim in your own grant. Pretend that it was all your own to begin with.

There are many more subtle ways you can steal the ideas of your trainees and call them your own. Through years of toiling under your own mentor, you have consciously or unconsciously picked up techniques for putting down your colleagues and stealing intellectual property. Now it is your turn to perpetuate it. Do it with style, do it with authority, and when challenged, you can always say that all data and ideas belong to the NIH or the institution. You only happen to be an agent of theft (read: hired thief).

There are other, better ways as you climb the ladder of your academic career. You can steal from other labs by serving as a reviewer. Oh, don’t give me that shit about ethics and confidentiality. You know what I mean.

If everything else fails, you can always resort to saying that ideas are not what matters; it is the ability to materialize them that counts.

Lawyers are universally loathed for their ability to fudge the truth. In reality, scientists can be worse than lawyers. They wear the cloak of honesty and objectivity, but the unscrupulous ones are constantly twisting the truth, presenting half-truths, and backstabbing with the hidden dagger of greed and deception.

One PI = One R01 grant.

The great economic crisis in the Western world has affected academic and research institutions. The NIH, one of the major funding agencies, has seen an effective funding cut that has translated into a reduction in both the number of research grants and the amount of money apportioned to them. The situation has reached a crisis level. Yet, there seems to be no effect on the ‘higher echelons’ of the research community.

Research dollars are disproportionately distributed among researchers. Although we resent the notion that 1% of the US population possesses 90% of the wealth, we do not react the same way to the financial disparity in scientific research. Relatively few scientists have monopolized the major chunk of taxpayers’ dollars while a large number of competent and innovative scientists go without. This needs to end!

In these difficult times, everyone is required to sacrifice a little. We ought to ensure that publicly funded scientific research is distributed to all competent scientists and not only to the members of the scientific power-broker cartel. There is no obvious reason why a researcher should have more than one R01 grant, especially in a tough economic situation. By adopting One PI = One R01, the NIH can support thousands more new scientists and diversify the scientific research base. By doing so, the NIH will promote innovative research to catalyze scientific growth.

We should also understand that the NIH cannot make law. To achieve One PI = One R01, we have to inform and educate our legislators about the benefits of this formula. Write to your House Representative and Senator asking them to consider broadening the productive and innovative scientific base by expanding participation by new scientists. Ask them to implement the One PI = One R01 formula. There are numerous benefits of One PI = One R01 to the scientific community. It will improve the educational standards of universities and bring talent back to our educational institutions. This is the only way to ensure that our dwindling scientific impact is regained.

2011 in review

The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 9,200 times in 2011. If it were a concert at Sydney Opera House, it would take about 3 sold-out performances for that many people to see it.

Click here to see the complete report.

Scientific Research: A Ponzi Scheme.

Recently, a friend and colleague blurted out, “Man, academic scientific research is a Ponzi scheme”. At first I laughed at this, but soon I realized his point of view. My friend is primarily a clinician. His training and interest in understanding the bases of disease, and his hope of discovering new therapeutic targets, had brought him into laboratory research.

He quickly realized that there was a chasm between his lofty ideals of studying a biological phenomenon and his mentor’s single-minded interest in using his data to fetch money. The enthusiasm and motivation that had been my friend’s strength in conquering the daily grind of lab work and failed experiments were suddenly overcome by despair. He is a good scientist who carefully designs and plans his experiments and is resourceful and skilled enough to execute them well. Unfortunately, he decided to return to the clinic without completing his research project.

Under ordinary circumstances I would not have thought much about his return to the clinic. Such departures are not uncommon among physician-scientists, who do not like the long-drawn battle of laboratory scientists against leaking gels, failing Western blots, suboptimal reagents, and a long dark tunnel of uncertainty without any glimmer of light at the end. Many do not see how abstract concepts of basic research could ever be translated into clinically relevant knowledge. But our guy has the smarts.

Like a painful sliver, his analogy of scientific research as a Ponzi scheme stuck in my head. Of course, I am not immune to the widely publicized case of Bernie Madoff’s financial fraud. I googled Ponzi scheme to find that…

In a Ponzi scheme potential investors are wooed with promises of unusually large returns, usually attributed to the investment manager’s savvy, skill or some other secret sauce. (Reference:  The New York Times)

Scientific research indeed is like a Ponzi scheme. A very small number of people (established investigators) entice a very large number of young people (investors) with the dream of a very large profit (Nobel Prize, glory, publications, publicity, creative satisfaction, etc.). To keep the scheme running, they do disclose the ‘fine print’: not everyone gets there, and the harder you work, the larger the reward. Cynics call it a ‘rat race’. But I think Ponzi scheme is a better description.

Of course, once in a while a few from this large pool of investors are selected to receive the big profit that was promised to all. They are given awards, positions, and attention. Usually these are the mediocre lot. The reason for this favor is that the mediocre are either unsure of their abilities or too sure of them. They stay indebted to the generosity of the ‘system’, and to display their loyalty to the system, they propagate the same scheme. This is the pyramid scheme taken to the extreme.

Does this mean that there are no smart people in scientific research? On the contrary, there is a large number of smart people who keep pushing the leading edge further and beyond. They are the pioneers with a true passion for advancing knowledge. They are the ones who are genuinely interested in understanding the nature of things. They are not wheeler-dealers who relentlessly try to fill the round holes of their hypotheses with square pegs of data.

I am not sure whether my friend will ever return to laboratory research, but with a simple remark he gave me a different point of view. We all thrive on such diverse points of view in research, and I think that he did shift my paradigm.

Profitable reviews: Nature Immunology defends reviews.

In one of my previous rantings (Click here), I wrote about how journals publish reviews to improve their impact factor. Now, in the recent issue of Nature Immunology (Click here for link), the editorial concedes:

“Because they are highly cited (on average, a review article is cited almost twice as often as a research paper), they help boost the impact factor of the journal.”
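
The arithmetic behind that admission is easy to sketch. Below is a back-of-the-envelope calculation; the journal sizes and citation rates are invented for illustration, and the only figure taken from the editorial is that a review is cited roughly twice as often as a research paper.

```python
# Back-of-the-envelope: how adding review articles lifts a journal's
# impact factor. The counts and citation rates below are invented; the
# only figure taken from the editorial is that a review is cited
# roughly twice as often as a research paper.

def impact_factor(papers, reviews, cites_per_paper=10.0):
    """Citations to the last two years' items divided by the item count."""
    cites_per_review = 2.0 * cites_per_paper   # the editorial's ~2x claim
    total_citations = papers * cites_per_paper + reviews * cites_per_review
    return total_citations / (papers + reviews)

print(impact_factor(papers=200, reviews=0))    # 10.0 -- research papers only
print(impact_factor(papers=200, reviews=50))   # 12.0 -- 20% reviews added
```

Even a modest fraction of review articles lifts the average noticeably, which is exactly the incentive the editorial admits to.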

What the editorial does not mention is the trend some glossy journals have adopted of publishing special issues that predominantly contain reviews.

It also does not take into account the harm done by ‘expert reviews’, where an interpretation or speculation by an expert is perpetuated in the scientific literature as scientific fact. However, I would agree that scientists, not the journals, are responsible for testing the veracity of these ‘facts’.

Author ranking system: ‘Impact factor’ of the last author.

We all know that there is very little room at the first author position on any scientific paper. There can only be one name. Even if two researchers contributed equally to the paper, only one name will appear at the front end of the author list. According to the current convention, the other equally contributing author cannot put his name at the front even on his own resume. That’s a bummer!

Consider another scenario: a young researcher who is the major contributor to a paper is on his way to becoming an independent researcher. He writes the manuscript and has to decide the author list. Whom should he put as first author? And as the last author? Although there are collaborating scientists, their contribution is too small to warrant first authorship. In this case, the researcher takes the first authorship and also declares himself the corresponding author. Problem solved! Not exactly! This researcher just lost a major point toward becoming an expert in his field.

Both these cases illustrate an existing problem of author ranking on a paper. It is a lesser-known fact of scientific publication that funding agencies (including the NIH), journals, and often hiring authorities use software to rank the ‘impact factor’ of the authors on a publication. The NIH uses such software to determine who the experts in a research field are. These ‘experts’ are then invited to study sections to review grant applications. Journals use this software to decide who could be potential reviewers for manuscripts.

On the surface, the idea sounds reasonable. However, there is a serious flaw in this reliance on software to select ‘experts’. This software is mostly primitive and is not designed to rank contributions in multi-author papers. It is highly biased towards the ‘senior author’, whom it identifies by only one criterion: the last position on the author list. Selecting experts based on such a faulty method may have ridiculous consequences.

Recently, a well-established journal requested a newly minted postdoc to review a research manuscript. The postdoc was thrilled by this opportunity and took the challenge. However, we learnt that the scope and content of the manuscript were clearly beyond his expertise. I don’t know what happened to the manuscript, but I am glad to think that there are safeguards against such anomalies. I must clarify that I am not against inviting new researchers to participate and contribute to the functioning of the scientific community. However, this should be done as a deliberate choice by program officers and journal editors. It should not happen by mistake. Otherwise it will erode confidence in the validity of the process.

In case you are curious, a current ranking system used by the NIH, for example, gives the highest score to the author whose name appears last on a paper. The software considers the last author the senior author. The next highest score goes to the first author. Finally, it does not matter where your name falls between the first and the last author; the software assigns you the same low score reserved for ‘contributing authors’.
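
In code, the scheme boils down to something like the sketch below. The actual software and its point values are not public, so the numbers here are placeholders; the only thing they preserve is the ordering described above: last author, then first author, then everyone in between.

```python
# Illustrative sketch of the author-ranking scheme described above.
# The real software and its point values are not public; the numbers
# below are placeholders that only preserve the stated ordering:
# last author > first author > everyone in between.

def author_score(position, n_authors):
    """Score an author by 1-indexed position in the author list."""
    if position == n_authors:   # last author, treated as 'senior author'
        return 3
    if position == 1:           # first author
        return 2
    return 1                    # all middle authors get the same low score

authors = ["Postdoc A", "Student B", "Collaborator C", "PI D"]
for i, name in enumerate(authors, start=1):
    print(name, author_score(i, len(authors)))
# PI D (last) outranks Postdoc A (first); B and C are indistinguishable.
```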

I see an irony here. Traditionally, the last author is the senior author who directs the project and in most cases provides the funding and laboratory space for the scientific work. If you want to find the experts, let common sense prevail: a simple PubMed search should suffice. Why do we need technological voodoo assigning complex scoring systems to discover what is already known?

Scientific misconduct debate: The idea is getting traction.

We have all wondered about the debate over scientific misconduct and the utter lack of accountability demanded by the ‘system’. Earlier, I wrote on this blog (Click here) that the privilege of using enormous amounts of public funds requires accountability from scientists. Now, in the current issue of EMBO Reports, this idea has been featured by one of their editors (Click here). In addition, the journal commissioned at least three articles addressing different but related aspects of the rampant issues in contemporary scientific research.
Journals should not only concern themselves with the quality and validity of hypotheses, theories, and data; they should also discuss how to improve the socio-economic framework of scientific research. Discussing the ‘bread and butter’ issues of research is equally important as, if not more important than, vague policy matters. At this point I should say that since the days of Frank Gannon as the editor of EMBO Reports, the journal has commendably highlighted the concerns of researchers. Through the advocacy of good research practices, public trust can be won and funding improved.

Impact Factor: Who are you bullshitting?

At the lunch table, I was thinking of an experiment when my attention turned to a colleague whose paper was recently rejected by a medium-caliber (read: impact factor) journal, and whose supervisor had dissuaded him from addressing the reviewers’ mean questions. Instead, he was gently cajoled into submitting his paper to a new open-access online journal. Despite the old adage that good things in nature are free, he was unconvinced of the value of publishing in an open-access journal. That only shows how used we are to scientific journals’ policy of charging authors to ‘defray the cost of publication’. In any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the ‘behind the curtain’ reviewers, remained unimpressed, unconvinced, and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest for impact factor points. Every scientist, at least in biomedical research, is worried about the impact factor of the papers he publishes. Many have figured out complex algorithms for which impact factor zone they should reside in to keep their research labs afloat. The impact factor frenzy has generated a class system in science where publication in the journal with the glossiest cover page has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: ‘if you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world’.

Ever wondered why the movie The Devil Wears Prada appeared eerily familiar to postdocs? The only difference was that the Devil’s minion gets to wear glitzy clothes and gives away a fabulous Bang & Olufsen phone; most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists, it has also affected the morale of the major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals: first, journals published by scientific societies, where most of the scientific matters of soliciting, reviewing, and editing are handled by real working scientists; and second, journals run by publishing powerhouses, which pluck away energetic hotshot postdocs as editors to their ritzy offices to run the business of scientific publishing.

The impact factor is determined by a commercial arm of a major publishing conglomerate whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller Press journals. These journals were assigned low impact factors despite being darlings of a cross-section of the research community. Probably, the failure to attract good papers and the loss of revenue led them to publish a syndicated editorial challenging and ridiculing the impact factor system (Click here). Their arguments were cogent and the language was bold and challenging. It is not clear how, but their impact factor did improve. However, after they gained the impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.

Impact factor is a crutch most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotions, and fiscal matters. Notice that I showered the adjectives on committees, not on the members of the committees, who are generally intelligent people (including me). Overworked, unappreciated, and sometimes lazy and indifferent members of a committee do not want to be held responsible for making a decision. Therefore, they rely on impact factor to show their ‘objectivity’. If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of his publications, which led to his recruitment. Had they selected him on the basis of their ‘judgement’, they would have been scoffed at by their peers and colleagues.

So, once you begin to equate impact factor with an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that is going to take over the part of the system that traditionally relied on competing interests. Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing to be unfair. They can veto the impact factor by invoking their experience and judgement. Essentially, the reviewers are manipulating the system in their favor.

One may argue that eventually the system will be ‘normalized’ so that no one will be clearly at an undue advantage. The truth is that it is the same old bullshit with the added armor of impact factor ‘objectivity’.

In case you wondered how some journals achieve a high impact factor, it is quite revealing that the Annual Reviews series has some of the highest impact factors. Wow!! You would have thought that real research papers should be the winners. Apparently not! And therein lies the trick. Most high impact journals are highly cited not because of their published research papers but because of their review articles. It is not altruism that makes glitzy journals happy to let you download artistic slides for your PowerPoint presentations.

Although it is a great business plan to target lazy scientists who don’t want to do their own legwork of literature review, there is another reason for using review articles to boost impact factor. Many shrewd scientists like to cite reviews published in high impact factor journals up front in their grant proposals and research papers. This way a lazy reviewer can be convinced that because the topic was reviewed in a high impact journal, it must be of great importance.

When I was a new postdoc, I learnt a valuable lesson in assessing the scientific caliber of a scientist. My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from his curriculum vitae; it is a simple four-step filter (sketched in code after the list):

1. Throw out all the reviews he (or she) has listed.
2. Take away all papers where his name falls beyond the second author position (unless he is the senior author).
3. Trash all conferences and posters presented.
4. Look at how regularly papers have been published and how good they are. Yes, use your judgement. A good paper does not need any assistance; you will know it when you see it (at least in an area of research close to yours).
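
Here is a minimal sketch of that filter. The Paper fields are invented for illustration, and I take ‘senior author’ to mean the last position on the author list, per the convention discussed above.

```python
# A minimal sketch of the advisor's CV filter described above.
# Assumptions: the Paper fields are invented for illustration, and the
# 'senior author' is taken to be the last name on the author list.

from dataclasses import dataclass

@dataclass
class Paper:
    year: int
    author_position: int         # 1 = first author
    n_authors: int
    is_review: bool = False
    is_conference: bool = False  # conference talks and posters

def keep(paper: Paper) -> bool:
    """Rules 1-3: drop reviews, conference items, and distant middle authorships."""
    if paper.is_review or paper.is_conference:
        return False
    is_senior = paper.author_position == paper.n_authors
    return paper.author_position <= 2 or is_senior

def publication_years(cv: list[Paper]) -> list[int]:
    """Rule 4: see how regularly the surviving papers appear; their quality
    is left to your own judgement, as the advisor insisted."""
    return sorted(p.year for p in cv if keep(p))
```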

I think I agree with his style of assessment rather than the bullshit of impact factor. Won’t you agree?

Technician or Postdoc?

Postdocs are the slaves of the modern ‘Science Plantations’. If you look carefully, some cases of horrible treatment of postdocs may just qualify as cases of human trafficking. Strong, horrifying words? You betcha!

There used to be a time when postdoc-ing was done only to finish the unfinished business of a project or to get highly desirable additional training to conduct independent research. Not anymore!

Postdocs are the workhorses of the modern lab. Given a choice, a scientist with his own lab will hire a postdoc rather than a technician. Why? Read the first line: a postdoc is a virtual slave.

  1. A technician will work 8 to 5; a postdoc will practically live in the lab.
  2. A technician has a life outside the lab; a postdoc has never seen life, neither here nor in his own country.
  3. A technician’s rights are protected by the institution and the government’s labor laws; who gives a fuck about the postdoc?
  4. A technician will do research only if you have the brains to tell him what to do; a postdoc will bust his ass to find a new project even if you are a dud.
  5. A technician observes weekends and holidays; a postdoc will be tormented by the guilt of holidays.
  6. Tell a technician about the virtues of scientific tempo and most likely he will give you the middle finger; you can make a postdoc cry in shame by telling him that he is not up to snuff.
  7. Ask a technician to work harder and you will see the bird flipped again; a postdoc will kowtow to you because you hold the power of writing his reference letter.
  8. You cannot threaten a technician over a pending immigration visa; you can manipulate a postdoc’s entire life by dangling the visa/immigration/green card in front of him.
  9. You have to pay a technician a salary defined by the institution and labor law; you pay a postdoc whatever you think is ‘commensurate with experience’, and if you are a real asshole, you can even make the postdoc work for free in your lab as a volunteer.
  10. A technician will only do what the job description says; you can make a postdoc do any dirty job in the lab or, if you are a scumbag, even your dirty laundry at home.
  11. You cannot easily find a good technician who can do the job right; you can find hundreds of mail-order postdocs simply by placing a ten-dollar advertisement in Science magazine.

So, what do you want?  A technician or a postdoc?