
Evaluating Published Research for False Findings and Related Considerations

It is a bold and almost frightening conjecture that “most published research findings are false.” In the current COVID-19 era, most of us have noticed an uptick in the prevalence of published scientific and medical research in popular discourse, making us all more keenly attuned to the causal relationships established by peer-reviewed scientific publications. News headlines are replete with breaking medical discoveries, causal connections between diseases and their genesis, and all the opinions and recommendations accompanying these postulations.  Of course, news headlines and social media are equally populated with reported disagreements and dissenting opinions on the validity of the scientific research.  As a result, more people can intelligently analyze medical research using vocabulary such as “double-blind,” “peer-reviewed,” and “anecdotal.”  The same can be said for safety and efficacy recommendations issued by the Food and Drug Administration; the phrase “emergency approval” has become ubiquitous in recent months.

But how reliable are these published research findings?  Does peer-reviewed medical literature deserve uncompromising deference?  At least one research scientist disagrees dramatically, and his bold proposition has garnered significant support since it was published.  In his 2005 article in the journal PLoS Medicine, Stanford University Professor John P.A. Ioannidis, M.D.,[1] empirically quantified the reliability of published medical research and reached the following conclusion: most peer-reviewed medical literature may, in fact, be false.

In his article titled “Why Most Published Research Findings are False,”[2] Dr. Ioannidis prefaces his findings by noting, “[t]here is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims.” In fact, Dr. Ioannidis states that “[i]t can be proven that most claimed research findings are false,” and “[t]he probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field.” Based on these considerations, Dr. Ioannidis charges that as many as 80% of non-randomized studies (the most common) turn out to be incorrect.  He also notes that up to 25% of randomized trials (the gold standard) are wrong, and perhaps even 10% of the largest randomized trials produce inaccurate relationships.  His mathematical quantification of this phenomenon has been widely accepted in the scientific community, with the British Medical Journal once calling him “the scourge of sloppy science.”[3]
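The core of Dr. Ioannidis’ argument is a simple Bayesian calculation: the positive predictive value (PPV) of a claimed finding, i.e., the probability that a statistically significant result reflects a true relationship.  In the no-bias case his model reduces to PPV = (1 − β)R / (R − βR + α), where α is the significance threshold, 1 − β is statistical power, and R is the pre-study odds that a probed relationship is true.  A minimal sketch of the arithmetic (the input figures below are illustrative choices, not drawn from the article):

```python
def ppv(alpha, power, r):
    """Positive predictive value of a claimed finding (Ioannidis 2005, no-bias case).

    alpha: significance threshold (Type I error rate), e.g. 0.05
    power: 1 - beta, the chance a true effect is detected
    r:     pre-study odds that a probed relationship is true
    """
    # (1 - beta) * R true positives versus alpha false positives
    return (power * r) / (power * r + alpha)

# A well-powered trial testing a plausible hypothesis:
print(round(ppv(0.05, 0.80, 0.25), 3))  # 0.8 -- most positive claims are true

# Exploratory research where true relationships are rare:
print(round(ppv(0.05, 0.80, 0.05), 3))  # 0.444 -- most positive claims are false
```

The second scenario shows the counterintuitive result driving the article’s thesis: even with conventional statistics applied correctly, a field probing mostly false hypotheses will publish mostly false positives.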

Dr. Ioannidis posits several corollaries in defense of his findings:

  1. The smaller the studies conducted in the scientific field, the less likely the research findings are to be true.
  2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  4. The greater the flexibility in design, definitions, outcomes, and analytical modes in the scientific field, the less likely the research findings are to be true.
  5. The greater the financial and other interests in the scientific field, the less likely the research findings are to be true.
  6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

If most scientific studies are misleading, exaggerated or just completely wrong, then why do medical practitioners, scientists and attorneys continue to rely on peer-reviewed medical research in each of their respective fields?  In November 2010, five years after his published study, Dr. Ioannidis sat down with The Atlantic to discuss why the scientific community continues to rely on misinformation.  As discussed in the article, Dr. Ioannidis states that “[t]here is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”  Moreover, “[a]t every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded.”[4]

Surprisingly (or not), attorneys are also keenly aware of this phenomenon, particularly those attorneys who routinely work with experts and rely on peer-reviewed scientific research to support complex scientific allegations.  The reliability of probative medical and scientific research has long been a topic in heated expert battles and Daubert challenges. Attorneys and their retained experts routinely rely on published medical research to educate courts and juries on causal scientific relationships beyond the understanding of the average layman.  As such, a single peer-reviewed study can make or break a case, particularly in the realm of pharmaceutical and medical device litigation.

If Dr. Ioannidis’ propositions are true and most published research findings are false, are attorneys fighting a fictitious battle over the reliability of scientific research?  Perhaps.  But instead of tossing all medical and scientific research, attorneys can apply Dr. Ioannidis’ findings and corollaries to identify published research that is most reliable, likely to reach an accurate conclusion, and able to survive a Daubert challenge. 

Accordingly, the following factors may be helpful for attorneys in identifying the most reliable research to support their claims:

  1. Larger scientific studies are more reliable than smaller studies.  Logically, this proposition makes sense – the more data that has been analyzed, the more reliable the causal relationships established by that data.  When evaluating your medical literature arsenal, peer-reviewed research with large test groups should rank higher than other studies with smaller test groups.  As Dr. Ioannidis notes, “other factors being equal, research findings are more likely true in scientific fields, such as randomized controlled trials in cardiology (several thousand subjects randomized) than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller).”  Thus, if your argument relies on a single scientific study with a small test group, the relationships established by that study are more likely to be inaccurate.  Likewise, potential errors in small studies are magnified because each erroneous result represents a larger share of the total test population. Simply stated, a few bad apples can spoil the whole bunch. Studies that establish correlations across the largest populations are the best positioned to withstand scrutiny.
  2. The larger the effect of a causal relationship in a scientific study, the more likely the published findings are to be true.  Generally, probative scientific research seeking to identify a large causal effect is more reliable than a study postulating a connection between two minute items.  Dr. Ioannidis stated in his article that “research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease, than in scientific fields where postulated effects are small, such as risk factors for multigenetic diseases.”  Studies targeting small effects will undoubtedly be plagued with false-positive claims that distort the accuracy of the results, thus making them more vulnerable to challenges. 
  3. Probative medical and scientific research with a single tested hypothesis is inherently more reliable than studies seeking to establish multiple causal relationships.  In the scientific community, studies that test a single proposition are more easily replicated and validated, lending support to the reliability of the results.  Conversely, attempting to test multiple theories in a single study may introduce additional errors that are more difficult to identify and isolate.   Accordingly, attorneys and retained experts should minimize their reliance on studies that establish multiple causal relationships and instead focus on studies that simply address the single causal relationship needed to prove their claim.
  4. Medical and scientific research with rigid, uncompromising design, definitions and categorical results is more reliable than research with broader positive-result definitions.  Scientific studies that permit flexibility in their definitions can easily be discredited and will be an easy target for opposing experts to attack the sufficiency and reliability of the related study.  Scientific studies that loosely define the predicted or hypothesized positive result may be criticized for postulating a causal relationship where one does not exist.  Instead, utilize medical research that strictly defines the test parameters, result matrices and predicted outcomes.
  5. Probative scientific research funded for the purposes of litigation or by a biased third party should be avoided, as such studies are unreliable. This analysis is one that is familiar to most attorneys in the products liability realm. However, its importance should not go unstated. Relying on scientific studies financed by manufacturers, or even your client, will expose your claim to an inevitable and potentially fatal Daubert challenge. Instead, choose scientific studies prepared by reputable and scholarly bodies.  Most peer-reviewed scientific articles will note if the study was financed by any group with an interest in the outcome or may otherwise introduce an impermissible level of bias.
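The first two factors above interact through a single statistical quantity: power, which grows with both sample size and effect size, and which in turn drives the reliability of a positive claim.  A rough sketch using the standard normal approximation for a two-sided z-test at α = 0.05 (the subject counts and effect size below are illustrative assumptions, not figures from the article):

```python
from math import erf, sqrt

def approx_power(n, effect_size, z_crit=1.96):
    """Approximate power of a two-sided z-test at alpha = 0.05.

    Uses the normal approximation Phi(d * sqrt(n) - z_crit) and ignores
    the negligible lower rejection tail. Illustrative only.
    """
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    return phi(effect_size * sqrt(n) - z_crit)

# A small effect (d = 0.2) is usually missed by a 50-subject study...
print(round(approx_power(50, 0.2), 2))   # 0.29
# ...but reliably detected with 500 subjects:
print(round(approx_power(500, 0.2), 2))  # 0.99
```

In other words, the small study has less than a one-in-three chance of detecting a real but modest effect, so a “significant” result from it is disproportionately likely to be a false positive – the quantitative intuition behind ranking large studies and large effects above small ones.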

Dr. Ioannidis also makes a surprising corollary of particular interest to pharmaceutical or products liability attorneys:  the hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.  As he explained, “[t]his seemingly paradoxical corollary follows because . . . the [positive predictive value] of isolated findings decreases when many investigators are involved in the same field.”  Dr. Ioannidis attributes this relationship to the desire of research teams to disseminate the most impressive positive results, or, similarly, to a research team’s incentive to publish findings negating a positive result reported by a competing team.  Because litigation often arises from the newest published research findings, from which a cause of action may first accrue, it is important for attorneys to be aware of this bias and use it to their advantage when defending those claims.

Of course, the validity of Dr. Ioannidis’ corollaries has been scrutinized since their publication in 2005.  Several scientists and medical doctors have criticized the mathematical model Dr. Ioannidis employed to quantify the rate of falsity in published research, responding that his bold conclusions may be exaggerated.[5]  However, the underlying considerations continue to provide useful guidance for attorneys and their retained experts.  In fact, Dr. Ioannidis states in the conclusion of his report: “Even though these assumptions would be considerably subjective, they would still be very useful [in] interpreting research claims and putting them into context.”  For attorneys, these principles can empower the savvy litigator to prepare expert reports supported by the most reliable scientific research and attack those claims bolstered by more dubious published findings.

[1] Dr. Ioannidis is a Greek-American physician-scientist, writer, and Stanford University professor who has made contributions to evidence-based medicine, epidemiology, and clinical research. Ioannidis studies scientific research itself, a field known as meta-research, primarily in clinical medicine and the social sciences. His paper “Why Most Published Research Findings are False” is the most-accessed article in the history of the Public Library of Science (over 3 million views as of 2020).

[2] Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2(8): e124.

[3] John Ioannidis: Uncompromising gentle maniac. BMJ 2015;351:h4992. doi:10.1136/bmj.h4992.

[4] David H. Freedman, Lies, Damned Lies, and Medical Science, The Atlantic, November 2010 Issue.

[5] See, e.g., Goodman S, Greenland S. Why most published research findings are false: problems in the analysis. PLoS Med. 2007;4(4):e168. doi:10.1371/journal.pmed.0040168