Photo by Mariah Wilson

Publish or Perish: A scientific crisis of faith

By Ishita Moghe, November 20, 2019—

Most scientists begin their careers with an earnest curiosity and a desire to find answers to their questions. They quickly find, however, that the way to prove their ability is through publications, which function almost as a form of currency or prestige points. A longer list of publications in high-impact journals helps you win more funding, gain admission to more selective programs and, in later years, secure a faculty position. In many science programs, it seems as though students are working to publish as many review articles, research papers and editorials as possible to fill up precious space on their CVs. At this point we must ask ourselves: Are we conducting our science from a place of genuine curiosity, the way we started out? Or are we churning out results for career points?

In addition to the pressure on scientists to continually generate manuscripts, there is pressure to produce positive results. Publication bias is a well-established phenomenon: studies with null results, such as “failed” experiments, are discouraged from submission, while studies with novel, statistically significant results are much more likely to be accepted for publication. Interestingly, studies without statistically significant results tend to undergo more revisions before publication, usually involving a principal result being modified.
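
To see how this filter distorts the literature, consider a minimal simulation. It is a hypothetical sketch, not drawn from any study cited here: many labs investigate the same small, real effect, but only the studies that happen to reach statistical significance get published, so the published record overstates the effect.

# Hypothetical sketch of publication bias: many labs study the same
# small true effect, but only results with p < 0.05 are "published."
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2   # small but real difference between group means
n_per_group = 20    # a typical underpowered study
n_studies = 5_000

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    # Two-sided t-test; the journal only accepts significant results.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published_effects.append(treated.mean() - control.mean())

print(f"True effect:            {true_effect}")
print(f"Mean published effect:  {np.mean(published_effects):.2f}")
print(f"Studies published:      {len(published_effects) / n_studies:.1%}")
# With small samples, only unusually large observed differences reach
# significance, so the published average lands far above the true 0.2.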

In the face of publication bias from scientific journals, it should be no surprise that researchers’ biased interpretations often play a role in irreproducible studies and exaggerated claims. A practice termed HARKing, or hypothesizing after the results are known, can turn failed experiments into successful ones. HARKing can involve cherry-picking data and scientific questions to assemble the ideal data-set-and-hypothesis pair for a positive, significant study. Similarly, p-hacking involves trying multiple analysis methods on the same data but reporting only those that come out significant, which can lend apparent support even to false hypotheses. This essentially allows a researcher to construct any result they want. A satirical 2011 paper reported that listening to a song about being older, “When I’m Sixty-Four” by The Beatles, actually made people 18 months younger. These results used completely legitimate experimental, statistical and reporting methods, illustrating how easy it is to produce false positives and how dangerous these practices can be when applied to impactful research.
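
The mechanics of p-hacking are easy to demonstrate. In the hypothetical sketch below (not taken from the 2011 paper itself), an analyst measures five outcomes that are all pure noise, runs a test on each and reports the experiment as a success if any test comes out significant. The advertised five-per-cent false positive rate roughly quadruples.

# Hypothetical sketch of p-hacking: test five outcomes that contain no
# real effect and declare success if ANY test reaches p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 30
n_outcomes = 5  # the analyst tries five different measures

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the same distribution: no effect exists.
    a = rng.normal(size=(n_outcomes, n_per_group))
    b = rng.normal(size=(n_outcomes, n_per_group))
    pvalues = stats.ttest_ind(a, b, axis=1).pvalue
    if pvalues.min() < 0.05:  # report whichever outcome "worked"
        false_positives += 1

print("Advertised false positive rate: 5.0%")
print(f"Rate after trying 5 outcomes:   {false_positives / n_experiments:.1%}")
# Expect about 1 - 0.95**5, i.e. roughly 23 per cent.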

A large-scale survey by Nature found that 52 per cent of more than 1,500 scientists across disciplines agreed that there is a reproducibility crisis. The same survey found that 72 per cent of scientists had failed to reproduce another scientist’s experiment, and over 50 per cent had failed to reproduce results from their own studies. Of course, poor reproducibility doesn’t immediately invalidate years of work: independent confounding variables, methodological differences and the specific bounds of positively-reproduced data can all affect whether a study is truly reproducible. The bigger question is how accurately the publications in major journals represent the current scientific landscape.

It has been shown repeatedly that negative results and replications are harder to publish. Adding to the low incentive to publish replication studies, journals may even push authors of failed replications to downplay differences with the original study. This system tempts researchers to simply not submit unsuccessful experiments, or to change the original hypothesis to better fit the results. Within the past decade, findings have been retracted from high-impact journals due to fabricated results, and most cases of academic fraud develop from a desire to be competitive for funding and promotion.

The competitiveness and available funding in a field affect how trustworthy its publications are, which goes to show that false claims, from sensationalized titles to fully fabricated experiments, are being submitted to further academic careers. While the intention behind this may not be malicious, lower-quality scientific publications erode public trust in science. And with publications used as a measure of productivity and competence, scientists are incentivized to continue producing manuscripts at an unsustainable rate.

Not only are time and money being wasted on flawed research and publication, falsified findings are hugely detrimental to drug discovery, application and clinical trials. The current global standard of using publications as a proxy for scientific ability is an undoubtedly flawed system, but the solution is still unclear. Some suggestions include the use of standardized, reproducible methods, encouraging journals to publish null and replicated findings, and rewarding researchers for credible studies. The most critical step is changing the mentality that publication count correlates with ability. A greater emphasis on quality over quantity will help restore public trust in scientific findings and benefit science overall, as researchers can shift their focus back to pursuing curiosities rather than sustaining careers.

This article is part of our Opinions section and does not necessarily reflect the views of the Gauntlet’s editorial board.

