ResearchGate: Bogus impact on the ego.

Every week I am bombarded by emails trying to sell me something personal (sometimes very personal), lab equipment, or reagents. There are also incessant chains of seemingly good-natured invitations to attend free webinars. I promptly delete them without opening them to see the contents. However, I stop at one email, sent by ResearchGate, ‘a social networking site for scientists and researchers’. I have a strong urge to delete it without looking at the content, but I am reluctant to do that. I know that the email contains my latest ‘Impact Score’. Instead of deleting the message, I anxiously click on it to view my score, wondering whether I fared well this week or not.

On most occasions my score has remained unchanged. However, there have been days when the score dropped by a few decimal points. It was agonizing to watch that happen. The immediate response was to open the link in the browser to check what had happened. Inside my head, I know that the score dropped because there were fewer ‘hits’ or views of my research papers. But the scientist inside me looks for verification of the phenomenon, and ResearchGate promptly provides me with a graph to support its scoring system. In the absence of any external reference, my graph can shoot through the roof or drop to the baseline (zero) on the strength of a single ‘view’ of my research papers.
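To see why a single view can swing the picture so wildly, here is a minimal, purely hypothetical sketch: if the weekly curve is scaled only against your own busiest week (ResearchGate does not disclose how its graphs or scores are actually computed), then one view in an otherwise quiet stretch is the difference between the baseline and the top of the chart.

```python
# Purely hypothetical illustration of a self-normalized "views" graph;
# ResearchGate does not publish how its graphs or scores are computed.

weekly_views = [0, 0, 1, 0, 0]   # one lonely view in week 3

peak = max(weekly_views) or 1    # scale against your own best week only
graph_height = [v / peak for v in weekly_views]

print(graph_height)              # [0.0, 0.0, 1.0, 0.0, 0.0]: baseline to 'through the roof'
```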

What is this graph? How is it scored? Who are the viewers? Does the site record all views of my research papers on the web, or only those on the ResearchGate website? Are my papers curated on the ResearchGate site? Do only views from members count? There are at least three different scores for each researcher on the website. What are they? You get a ‘Total impact’ point, then an RG Score, and an ‘Impact point’. How do you make any sense of it? With all these questions unanswered, it is not clear what they are scoring or to what end.

As for the impact scores, several lab technicians have much larger scores than some Principal Investigators. These technicians have never published a first-author or senior-author paper, yet they score big in ResearchGate’s scheme. What impact should we consider here? It is not that a lab technician’s research contribution is unimportant, but if the RG Score is merely a ‘contribution score’, then it is contaminating the scores of ‘impactful researchers’.

The ResearchGate website claims that it was started by scientists to ‘Connect, collaborate and discover scientific publications, jobs and conferences’. Then why score research impact in a manner that makes sense to no one? Are the founders of the ResearchGate network smart enough to have figured out that all humans, whether lay people or trained scientists, share the weakness of vanity and are willing to take an ego trip on bogus scores?

Sabotaging experiments and flat tires.

This week’s Science magazine carried a story (http://www.ncbi.nlm.nih.gov/pubmed/24604172) about a postdoctoral fellow at Yale whose experiments were sabotaged by a fellow worker. As usual, there was some drama associated with the entire process of fault-finding and blame being thrown around, but one interesting thing surfaced: the event was considered a laboratory prank, not a serious offense. The article goes:

“The complex case raises a host of questions about how to deal with sabotage, a type of misbehavior that some scientists believe is more common than the few known cases suggest. One key point of debate is whether ruining someone’s experiments should fall under the definition of research misconduct, which is usually restricted to fabricating or falsifying data and plagiarism. Some experts argue that wrecking experiments, while terrible, is more akin to slashing a fellow researcher’s tires than to making up data.”

Seriously? Slashing a tire is just that: a display of displeasure at a person or at his/her act or series of acts. Even then, its validity as retribution exists only in the mind of the perpetrator. Sabotaging an experiment is much more than that. The saboteur wants not only to demonstrate displeasure but also goes to the length of discrediting the targeted scientist’s work. It is far more sinister. It erodes the credibility of the victim, and that is precisely the goal of sabotaging a scientific experiment. There are ‘legal’ questions being attached to it now:

“Whether sabotage belongs under ORI’s purview is questionable, Rasmussen says. A long and contentious debate took place in the 1990s over whether the U.S. federal definition of research misconduct should include anything beyond fabrication, falsification, and plagiarism, commonly referred to as FFP. Some argued that other types of bad behavior, such as sexual harassment or vandalism, could constitute research misconduct as well; others said that would open the floodgates to all kind of accusations, and that such misdeeds could be dealt with through other mechanisms.”

It does not matter whether sabotage comes under the ORI’s purview. The decency of a mentor and the institution demands that the incident be reported to the authorities. Essentially, the fear among faculty members is that they would be considered lousy managers for letting it happen. They take it too personally, distort the facts, and penalize the victims. In doing so, they undermine their own credibility and promote a dishonest view of the scientific world.

In the era of print news, this ploy of obfuscation could have worked for the lab managers and university authorities. But in an era of Facebook, Twitter, and lightning-fast worldwide communication, such an approach may backfire and undermine the credibility of scientific researchers in general. Beyond petty bickering, there is no place for egregious acts of sabotaging anyone’s experiments, and such people should be quickly removed from the lab before they do bigger damage to the core of scientific values.

Fire the editor.

If a paper is retracted because of falsified data, the authors probably did terribly wrong things in the name of science. But there is another side of the equation: the journal editor.

Commercial journals make money by publishing scientists’ work. To keep their circulation and impact factor high, they have to lure manuscripts that effect a ‘paradigm shift’.

Contrary to the common fallacy that the high-profile journals are brutally objective in manuscript selection, their editors give authors plenty of unnecessary opportunities to resubmit their shoddy work. In principle, they send a letter saying that the manuscript is rejected but that it would be considered as a new submission if the reviewers’ concerns are addressed. In practice, they routinely override valid criticisms and concerns of the reviewers to publish the paper.

If an editor overrides the reviewers’ concerns and the paper is later retracted, what should be done?  I have to find out how the board of editors acts under these circumstances.  As far as I know, there are no serious consequences for the lapse of editorial judgement.

EMBO Journal has adopted a policy of publishing the review proceedings should the authors agree to it. Such a policy should be embraced by every decent scientific journal because it affirms that the readers are intelligent scientists who will understand the limitations of the research work.

As for an editorial veto of the reviewers’ concerns that leads to the retraction of a paper, some accountability is expected, not only because of the commercial success of the journal but also because taxpayers’ money is involved. I would say, ‘Fire the editor’.

How to steal scientific ideas.

Science is a business of ideas. By its very definition, research requires generating new ideas. However, ideas do not pop up in a vacuum. Astute researchers have to master the literature, learn where the gaps in the current field of research exist, and then find a feasible way to fill those gaps.

The way current research training is done, the majority of researchers eventually become rigid in their ideas. Their research becomes dull and boring. In the name of ‘detailed study’ they keep burrowing deeper into descriptive research. Years of battles with paper publications, failed grant applications, the stress of obtaining tenure, and the load of teaching wear them out. Only a few remain as enthusiastic as they were at the beginning of their careers. Of those who remain enthusiastic, most are driven not by scientific inquiry but by the social and political thrill of it.

Surviving on the stolen ideas of trainees and postdocs becomes a viable means of sustaining their academic lives. But they have to do it in a sophisticated way. Here are a few simple ways to do it:

1. ‘Encourage’ every trainee applicant to write a two-page mock research proposal. This is a shotgun approach whereby anyone showing an interest in your research can be asked to provide ideas about what to do. You then take those ideas and adopt them into your current research.

2. Group discussions/brainstorming in the lab. Pretend that you are helping people bring out their best. Make them bust their asses to beat each other’s ideas and then pick all the good ones as your own.

3. Once a trainee presents a great idea with some interesting preliminary data, kill his/her enthusiasm by saying that the idea is useless, not relevant, premature, too complex for the current state of science, etc. Over the next few months, gently incorporate the idea into your casual talks. Finally, present the idea as your own and give the project to someone other than its originator.

4.  Make your trainees write a fellowship proposal. Incorporate its questions as an aim in your own grant. Pretend that it was all your own to begin with.

There are many more subtle ways you can steal the ideas of your trainees and call them your own. Through years of toiling under your own mentor, you have consciously or unconsciously picked up techniques to put down your colleagues and steal intellectual property. Now it is your turn to perpetuate it. Do it with style, do it with authority, and when challenged, you can always say that all data and ideas belong to NIH or the institution. You only happen to be an agent of theft (read: hired thief).

There are other, better ways as you climb the ladder of your academic career. You can steal from other labs by being a reviewer. Oh, don’t give me the shit about ethics and confidentiality. You know what I mean.

If everything else fails, you can always resort to saying that it is not the ideas that are novel; it is the ability to materialize them that matters.

Lawyers are universally loathed for their ability to fudge the truth. In reality, scientists can be worse than lawyers. They wear the cloak of honesty and objectivity, but the unscrupulous ones are constantly twisting the truth, presenting half-truths, and backstabbing with the hidden dagger of greed and deception.

Impact Factor: Who are you bullshitting?

At the lunch table, I was thinking of an experiment when my attention turned to a colleague whose paper had recently been rejected by a medium-caliber (read: impact factor) journal and whose supervisor had dissuaded him from addressing the reviewers’ mean questions. Instead, he was gently cajoled into submitting his paper to a new open-access online journal. Despite the old adage that the good things in nature are free, he was unconvinced of the value of publishing in an open-access journal. That only tells how used we are to scientific journals’ policy of charging authors to ‘defray the cost of publication’. In any other field, authors are paid when they publish. My colleague, probably smarting from the scathing verbiage of the ‘behind the curtain’ reviewers, remained unimpressed, unconvinced, and skeptical about the quality of the open-access online journal.

My colleague is not alone in his quest to collect impact factor points. Every scientist, at least in biomedical research, is worried about the impact factor of the papers they publish. Many have figured out complex algorithms for which impact factor zone they should occupy to keep their research lab afloat. The impact factor frenzy has generated a class system in science where publication in the journal with the glossiest cover page has become the ultimate goal of scientists. It also gives supervisors a carrot to dangle in front of their postdocs: ‘If you perform fifty experiments a day, with a 24/7 attitude, you will get your paper published in the Cosmopolitan or Vogue of the science world.’

Ever wondered why the movie The Devil Wears Prada seemed eerily familiar to postdocs? The only difference is that the Devil’s minion gets to wear glitzy clothes and give away a fabulous Bang & Olufsen phone; most postdocs cannot even spell that name.

The impact factor sickness has not only caught scientists; it has also affected the morale of major hardcore science journals. Just in case you forgot, there are roughly two categories of science journals: first, journals published by scientific societies, where most of the scientific work of soliciting, reviewing, and editing is done by real working scientists; second, journals run by publishing powerhouses that pluck energetic hotshot postdocs into their ritzy offices as editors to run the business of scientific publishing.

The impact factor is determined by a commercial arm of a major publishing conglomerate whose non-scientific methods of assigning impact factors generated a brouhaha among the Rockefeller Press journals. These journals were assigned low impact factors despite being darlings of a cross-section of the research community. Probably, the failure to attract good papers and the loss of revenue led them to publish a syndicated editorial challenging and ridiculing the impact factor system (Click here). Their arguments were cogent and the language was bold and challenging. It is not clear how, but their impact factor did improve. However, after they gained the impact factor, their campaign against impact factor disparity fizzled. Publishers are not the only ones who benefit from impact factor inflation.
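For reference, the standard two-year impact factor is just a ratio: citations received in a given year to the items a journal published in the previous two years, divided by the number of ‘citable items’ it published in those two years. A minimal sketch with made-up numbers shows how a handful of heavily cited review articles can lift a whole journal’s score:

```python
# Toy illustration of the two-year journal impact factor; all numbers are made up.

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Citations in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# 200 research papers drawing 600 citations over the two-year window:
print(impact_factor(600, 200))              # 3.0

# ...now add 10 review articles drawing 400 citations between them:
print(impact_factor(600 + 400, 200 + 10))   # ~4.76
```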

Impact factor is a crutch that is most often used by impotent, unimaginative, and incompetent committees in academic institutions for recruitment, promotions, and fiscal matters. Notice that I showered the adjectives on committees, not on the members of the committees, who are generally intelligent people (including me). Overworked, unappreciated, and sometimes lazy and indifferent members of a committee do not want to be held responsible for making a decision. Therefore, they rely on the impact factor to show their ‘objectivity’. If they hire a new faculty member who later turns out to be a complete jerk in the department, they can easily blame it on the impact factor of his publications, which led to his recruitment. Had they selected him on the basis of their ‘judgement’, they would be scoffed at by their peers and colleagues.

So, once you begin to equate the impact factor with an objective index of productivity, smartness, intelligence, and innovation, you have unleashed a monster that is going to take over the part of the system that traditionally relied on competing interests. Grant reviewers and paper reviewers can now exercise more arbitrary control over decision-making without appearing to be unfair. They can veto the impact factor by invoking their experience and judgement. Essentially, the reviewers are manipulating the system in their favor.

One may argue that eventually the system will be ‘normalized’ so that no one will clearly be at an undue advantage. The truth is that it is the same old bullshit, now wearing the impact factor’s armor of objectivity.

In case you wondered how some journals achieve high impact factors, it is quite revealing that the Annual Reviews series has some of the highest impact factors. Wow!! You would have thought that real research papers should be the winners. Apparently not! And therein lies the trick. Most high-impact journals are highly cited not because of their published research papers but because of their review articles. It is not altruism that makes glitzy journals happy to let you download artistic slides for your PowerPoint presentations.

Although it is a great business plan to target lazy scientists who don’t want to do their own legwork of literature review, there is another reason for using review articles to boost the impact factor. Many shrewd scientists like to cite reviews published in high impact factor journals upfront in their grant proposals and research papers. This way, a lazy reviewer can be convinced that, because the topic was reviewed in a high-impact journal, it must be of great importance.

When I was a new postdoc, I learnt a valuable lesson in assessing the scientific caliber of a scientist. My research advisor was a soft-spoken, astute scientist with an incisive vision. He showed me how he judged the quality and productivity of a faculty candidate from his curriculum vitae (a rough sketch of this filter follows the list):

1.  Throw out all reviews he (or she) has listed.
2.  Take away all papers where authorship is beyond the second author (or senior author).
3.  Trash all conferences and posters presented.
4.  Look at how regularly papers have been published and how good they are.  Yes, use your judgement.  A good paper does not need any assistance; you will know it when you see it (at least in an area of research close to you).
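As promised above, here is a rough, hypothetical sketch of that filter in code. The fields and cut-offs are my own assumptions, not his; the point is simply that steps 1 to 3 are mechanical, while step 4 still needs a human reader.

```python
# A rough, hypothetical sketch of the CV filter described above;
# the data fields and thresholds are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Publication:
    year: int
    kind: str               # "research", "review", "conference", or "poster"
    author_position: int    # 1 = first author
    is_senior_author: bool

def filter_cv(pubs: list[Publication]) -> list[Publication]:
    kept = []
    for p in pubs:
        if p.kind != "research":            # steps 1 and 3: drop reviews, conferences, posters
            continue
        if p.author_position > 2 and not p.is_senior_author:
            continue                        # step 2: keep first-, second-, or senior-author papers only
        kept.append(p)
    return kept

def publication_years(pubs: list[Publication]) -> list[int]:
    # Step 4: look at how regularly the remaining papers appear;
    # judging how good they are is still left to your own judgement.
    return sorted({p.year for p in pubs})
```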

I think I agree with his style of assessment rather than the bullshit of impact factor.  Won’t you agree?

Technician or Postdoc?

Postdocs are the slaves of the modern ‘Science Plantations’. If you look carefully, some cases of horrible treatment of postdocs may just qualify as cases of human trafficking. Strong, horrifying words? You betcha!

There used to be a time when postdoc-ing was done only to finish the unfinished business of a project or to get highly desirable additional training to conduct independent research. Not anymore!

Postdocs are the workhorses of modern labs. Given a choice, a scientist with his own lab will hire a postdoc rather than a technician. Why? Read the first line: a postdoc is a virtual slave.

  1. A technician will work 8 to 5;  a postdoc will practically live in the lab.
  2. A technician has a life outside the lab; a postdoc has never seen life, neither here nor in his own country.
  3. A technician’s rights are protected by the institution and the government’s labor laws; who gives a fuck about the postdoc?
  4. A technician will do research only if you have a brain to tell him what to do; a postdoc will bust his ass to find a new project even if you are a dud.
  5. A technician observes weekends and holidays; a postdoc will be tormented by the guilt of holidays.
  6. You tell a technician about the virtues of scientific tempo and most likely he will give you the middle finger; you can make the postdoc cry in shame by telling him that he is not up to snuff.
  7. You ask a technician to work harder and you will see the bird flipped again; a postdoc will kowtow to you because you hold the power of writing his reference letter.
  8. You cannot threaten a technician about the pending immigration visa;  you can manipulate the postdoc’s entire life by dangling the visa/immigration/green card in front.
  9. You have to pay a technician a salary that is defined by the institution/labor law; you pay the postdoc whatever you think is ‘commensurate with experience’, and if you are a real asshole, you can even make the postdoc work for free in your lab as a volunteer.
  10. A technician will only do what the job description says; you can make a postdoc do any dirty job in the lab or, if you are a scumbag, even your dirty laundry at home.
  11. You cannot easily find a good technician who can do the job right; you can find hundreds of mail-order postdocs simply by placing a 10-dollar advertisement in Science magazine.

So, what do you want?  A technician or a postdoc?

Should Scientific Misconduct be Criminalized?

It has been a while since the last post.  It was not a ‘mysterious disappearance’.  No, I have not been manhandled or killed.  Not yet.

I noticed an article (click here) reporting that some vigilante group has been sending accusatory notices targeting stem-cell researchers for their alleged wrongdoings. This has rattled researchers and publishers alike.

Well, if you look at it, the business of science has been given a lot of freedom to operate, and an enormous amount of trust has been placed in scientists’ integrity when it comes to their conduct.

Scientists obtain sumptuous chunks of money from the exchequer, and when bad things happen, they simply say, “Oops! We fucked up!” There are practically no consequences for their misdeeds.

Publishing a research paper is an enormous undertaking. It not only takes time, money, and the collaborative effort of the authors involved, but it also affects a huge number of researchers across the globe.

When someone produces and publishes fraudulent data in a major journal, it means years of work and at least a quarter of a million dollars’ worth of time and reagents go down the toilet.

Who pays for this? People pay for this. But all the culprit gets is a slap on the wrist. The culprits are told not to participate in any publicly funded program in any manner, and sometimes the institution washes its hands of them. That’s it. In fact, in most cases, the culprit returns to science and carries on.

When a junior scientist publishes fraudulent results, it takes a while before the results can be verified by other researchers. There is a ‘window of opportunity’ during which the junior scientist moves on to find a cushy job, and by the time the fraud is exposed, he or she has obtained job security. The senior scientist, on the other hand, has nothing to lose because he/she can blame the person who has already left the lab. So it is convenient for everyone.

Anywhere else, one would be tried in a criminal court for such misappropriation of public funds and would likely be thrown in prison if guilt were proven. Not in science.

Why?  Because it is a ‘noble’ profession.  Scientists walk with an aura around them that rivals that of the angels.  Some even think that they are gods.

So the question is: should such misbehavior by unscrupulous scientists be pardoned, or should it be considered a criminal act? Only the public can decide.

Scientific misconduct: A prick-ly issue of gutless scientists.

Data forgery is a frikkin troublesome issue in scientific research.  It is the same story every time:  “The damned post-doc fucked up the data!  We are retracting the paper although so and so stands by his or her results.”

A lamentation recently appeared in a prominent science journal. A junior researcher in a scientist’s laboratory had fucked up some data, which resulted in the retraction of four papers, and data from a few more papers are suspect. The writer was thankful that the prominent scientist’s image was not tarnished. Had it been, the author would have used spit to bring it back to its original shine.

I mean, come on, give me a break! Don’t we have enough of this bullshit? Every time it happens, and it is happening a lot nowadays, an obituary of the paper is published and an apology is issued for any inconvenience to the research community. What about those other researchers who were genuinely working on a similar project and had contrary results? Their bosses were whipping their rear ends for being incompetent and unable to produce the same results as the pioneers. They lost their credibility because some jerk happened to publish fake data.

And thank goodness the image of the senior scientist is not tarnished, because that would be bad press (really? who cares?). The big guy brings a lot of money to the institution, so they can claim to have an internal investigation committee.

Don’t you think the rot runs a bit deeper? How the hell do two measly post-docs get away with four high-profile papers with grafted data? Were they constantly providing data to fit the pet theory of the boss? Was the jet-lagged senior scientist’s judgement obscured by the desire to deliver a crushing blow to a competitor?

In politics, when someone screws up in a big way, the superior takes responsibility for the failure of oversight. In science, it is all about the lowly post-doc’s mess. The principal investigators, as they are called, drop the culprits like a hot potato. That is the easy way out.

Today, science has completely changed in its intensity, its competition, and the amount of money involved. It is no longer a reclusive hobbyist’s muse. With these changes have come unwanted but expected problems. Data forgery is one of them. It is a big enough problem that the U.S. government has established an Office of Research Integrity to monitor such allegations in research funded by the National Institutes of Health, the largest scientific funding body in the world.

Although there are many facets to this problem, involving the bench researcher, the institution, the scientific and technical journals, and the research funding agency, the senior scientists still have to own their responsibility for directly or indirectly promoting data falsification and other scientific misconduct. It is a prick-ly issue, and it sure requires some guts to deal with.