
these issues will help attorneys and judges ensure that only reliable scientific evidence is presented in Kentucky courtrooms.

THE SCIENTIFIC METHOD

In Kentucky and federal courts, judges serve as gatekeepers for expert testimony, ensuring that scientific evidence meets established standards of reliability. However, emerging concerns about the replicability crisis, publication biases, and forensic fraud suggest that courts often admit evidence that fails to align with best scientific practices. Attorneys and judges must therefore be diligent in evaluating whether proffered expert opinions genuinely adhere to the scientific method.

At its core, scientific evidence must be derived from the scientific method. This process requires the formulation of a hypothesis, followed by systematic testing through observation and experimentation; “indeed, this methodology is what distinguishes science from other fields of human inquiry.”17 Nor is it enough for an expert to claim their conclusions are based on “scientific principles” without demonstrating how they arrived at them. In other words, the mere presence of scientific jargon does not make testimony scientifically valid.18

Federal Rule of Evidence (FRE) 702 and Kentucky Rule of Evidence (KRE) 702 govern the admissibility of expert testimony. These rules impose three fundamental requirements: (1) the testimony must be based on sufficient facts or data; (2) it must be the product of reliable principles and methods; and (3) the expert must have reliably applied those principles and methods to the case at hand. These requirements reflect the scientific method—if a hypothesis cannot be tested and verified, it is unreliable and inadmissible in court.19

What, then, is the role of counsel as advocates and the court as gatekeeper? While in many instances experts may offer opinions without laying the reliability foundation, the “implicit assumption” of Daubert is that “when the time comes to assess reliability the information necessary to make that assessment will be available to the court.”20

This assumption is frequently incorrect. Experts may omit unfavorable studies, fail to disclose modifications to their methodology, or present speculative opinions under the guise of scientific certainty. In some instances, experts deliberately use language to mask opinions that lack true validity and reliability, leading courts to make crucial decisions with enduring consequences.21 This creates a false sense of credibility, making it even more difficult for judges and attorneys to distinguish between legitimate scientific evidence and testimony that merely sounds authoritative. Without careful scrutiny, unreliable science can slip through the cracks and influence legal outcomes that should be based on tested and verifiable principles.

THE REPLICABILITY CRISIS

“Reproducibility is the sine qua non of ‘science.’”22 If a theory cannot be tested and its results independently replicated, it lacks validity. The same principle applies in the courtroom—scientific testimony must be based on reliable, repeatable findings, not unverified assertions. However, a growing body of evidence suggests that much of what is presented as “science” fails this basic test.

A major replication study by the Open Science Collaboration reviewed research from leading psychology journals and found that only 39% of studies could be successfully replicated, with reproduced effects often much weaker than the original findings. Ideally, if the original studies were sound, a much higher percentage—close to 90% of results—should be replicated.23 Similarly, a 2016 poll of 1,500 scientists published in Nature found that 70% of respondents had failed to reproduce at least one other scientist’s results. This issue spans disciplines: 87% of chemists, 77% of biologists, 69% of physicists, and 67% of medical researchers reported replication failures.24 These findings raise serious concerns about the credibility of scientific studies.

The pressure to publish and secure funding contributes to this problem. Some researchers manipulate findings to produce “statistically significant” results, knowing that journals favor positive findings over null results. In a troubling trend, some editors and reviewers even encourage researchers to downplay failed replications.25 A 2021 study found that research with reproducible results is often cited less frequently than studies with findings that cannot be replicated—suggesting that hype, funding incentives, and media attention may drive questionable scientific practices.26

Certain fields are especially vulnerable. A 2018 meta-analysis of 200 psychology papers revealed widespread statistical weaknesses, with social psychology particularly affected.27 Another study, in Nature Human Behaviour, attempted to replicate 21 behavioral and social science papers from top-tier journals and failed to confirm results in nearly half of them.28 Despite these alarming findings, courts continue to admit expert testimony based on studies that may be unverified, unreliable, or outright flawed.

For Kentucky attorneys and judges, the implications are clear—proffered expert testimony should be scrutinized by asking:

• Has the study been independently replicated?
• Were the original data and methodology made available for review?
• What is the margin of error in the expert’s conclusions?

Judges, as gatekeepers, should insist on clear answers to these questions before admitting expert opinions. If an expert relies on research that has never been replicated, that testimony should be treated with caution—if not excluded outright. Scientific evidence should not be assumed reliable simply because it has been published; it must withstand independent scrutiny, just like legal precedent.

PROBLEMATIC PUBLICATION

Concerns regarding the integrity of academic papers are hardly new.

The business model of exchanging academic papers for profit can be

