Thursday, January 16, 2014

How to increase value and reduce waste 2

For the second paper in the Lancet series I have to cherry-pick a little, as large parts of the publication focus on specifics of biomedical research. My intention is to present and discuss the findings that can easily be applied to research in general. Nevertheless, these publications are well worth reading in their entirety, as most of them contain very useful ideas on how to address the current shortcomings.

The general topic of this paper is shortcomings in research design, conduct, and analysis, and the authors don't hold back on criticism:

  • absence of detailed written protocols and poor documentation of research is common 
  • information obtained might not be useful or important 
  • statistical precision or power is often too low or used in a misleading way (a concrete sketch follows this list) 
  • insufficient consideration might be given to both previous and continuing studies 
  • arbitrary choice of analyses and an overemphasis on random extremes might affect the reported findings.
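
The power point in particular is easy to make concrete. Here is a minimal sketch using Python and statsmodels; the effect size, power target, and sample sizes are invented for illustration and not taken from the paper:

    from statsmodels.stats.power import TTestIndPower

    # Hypothetical planning scenario: a two-sample t-test aiming to detect a
    # standardized effect size (Cohen's d) of 0.3 at a two-sided alpha of 0.05.
    analysis = TTestIndPower()

    # Sample size per group needed for 80% power (roughly 176):
    n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
    print(f"n per group for 80% power: {n_needed:.0f}")

    # Power actually achieved with only 30 participants per group (roughly 0.2):
    achieved = analysis.solve_power(effect_size=0.3, nobs1=30, alpha=0.05)
    print(f"power with n=30 per group: {achieved:.2f}")

A study run with 30 participants per group in this scenario would detect a real effect only about one time in five, which is exactly the kind of misleading precision the authors criticise.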

I must admit that none of those points is really news to me, which means I've crossed paths with each and every one of them throughout my career, perhaps even walking into some of those traps myself. Three reasons for this are carved out in the publication. 

One major problem is the failure to train the research workforce properly. Statistics is a very good example: a study cited in the paper showed that p values did not correspond to the given test statistics in 38% of articles published in Nature and in 25% of articles published in the British Medical Journal in 2001. Many colleagues bemoan the fact that students have very little knowledge of statistical methods when they start working on their first real project or thesis. I can attest to that, but it would be wrong to blame the students for this plight. Rather, it seems we should seriously reconsider the way we teach statistics. While the complexity of statistical analysis has grown, the time spent learning the basics has gradually been reduced. Let's face it: the algorithms used in analytical software for genetic data are often not fully understood, although deeper knowledge is crucial to make informed decisions on which method to use and how to interpret the results. 
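
A consistency check like the one in that study is easy to run yourself. Here is a minimal sketch in Python, assuming a paper reports a t statistic, its degrees of freedom, and a two-sided p value; the numbers are invented for illustration:

    from scipy import stats

    # Invented example: a paper reports t(28) = 2.3 with p = 0.05 (two-sided).
    t_stat, df, reported_p = 2.3, 28, 0.05

    # Recompute the two-sided p value implied by the test statistic.
    implied_p = 2 * stats.t.sf(abs(t_stat), df)
    print(f"reported p = {reported_p}, implied p = {implied_p:.4f}")

    # Allow for rounding in the reported value; larger gaps are a red flag.
    if abs(implied_p - reported_p) > 0.005:
        print("The reported p value does not match the test statistic.")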

Another issue identified by the authors is that inadequate emphasis is often placed on recording research decisions and on the reproducibility of research. The paper gives several examples of companies that tried and failed to reproduce biomedical research for the development of treatments. Researchers at Bayer could not replicate 43 of 67 oncological and cardiovascular findings reported in academic publications. Researchers at Amgen could not reproduce 47 of 53 landmark oncological findings for potential drug targets. This could be the result of bad study design or of sloppy reporting. I will never forget what I was taught as an undergrad: a methods section should read as clearly as a good cookbook recipe. Everyone should be able to redo your experiment without further reading or consultation.
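
For computational work, the cookbook principle means recording every decision a reader would need to rerun the analysis. As a minimal sketch (the file name, exclusion rule, and test named below are made up for illustration), one could write a small provenance manifest next to every result:

    import json
    import platform
    import random
    import sys
    from datetime import datetime, timezone

    # Fix the random seed so stochastic steps are repeatable.
    SEED = 20140116
    random.seed(SEED)

    # Record analysis decisions and the environment alongside the results.
    manifest = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": sys.version,
        "platform": platform.platform(),
        "seed": SEED,
        "input_file": "cohort_2013.csv",  # hypothetical input file
        "exclusion_rule": "missing genotype rate > 5%",
        "test": "two-sided Welch t-test",
    }

    with open("analysis_manifest.json", "w") as fh:
        json.dump(manifest, fh, indent=2)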

Finally, the authors state that our current reward systems incentivise quantity more than quality, and novelty more than reliability. It is very rare that scientists are rewarded for rigorous work and for efforts to replicate their own research in order to ensure reproducibility. The hunt for high impact factors and h-indices has flooded the literature with mediocre work. It is not rare for authors to split a nice study into several papers, each telling the same story from a slightly different analytical angle. That might be an interesting idea for a fictional story or the next best-selling novel(s), but in research it just dilutes the findings, and I find it inappropriate. We shouldn't need prestidigitation to receive credit for our work.

It is one thing to lament the current situation, but it is another to propose ways out of it:
Recommendations
  • Make publicly available the full protocols, analysis plans or sequence of analytical choices, and raw data for all designed and undertaken biomedical research
    • Monitoring—proportion of reported studies with publicly available (ideally preregistered) protocol and analysis plans, and proportion with raw data and analytical algorithms publicly available within 6 months after publication of a study report
  • Maximise the effect-to-bias ratio in research through defensible design and conduct standards, a well trained methodological research workforce, continuing professional development, and involvement of non-conflicted stakeholders
    • Monitoring—proportion of publications without conflicts of interest, as attested by declaration statements and then checked by reviewers; the proportion of publications with involvement of scientists who are methodologically well qualified is also important, but difficult to document
  • Reward (with funding, and academic or other recognition) reproducibility practices and reproducible research, and enable an efficient culture for replication of research
    • Monitoring—proportion of research studies undergoing rigorous independent replication and reproducibility checks, and proportion replicated and reproduced
