Study Finds Some Court-Admitted Psychological Assessments to be Junk Science


By Anjelica Cappellino, J.D. | Published on June 30, 2020 | Updated on July 28, 2020


In the courtroom, psychological assessments can be important components of a compelling argument. From establishing a plaintiff’s pain and suffering in a personal injury case to proving a defendant’s mental state during a crime, psychological findings can offer critical support at trial. But not all psychometrics are created equal. A recent study published in the journal Psychological Science in the Public Interest found that unreliable psychological assessments presented as evidence often go completely unchallenged by attorneys and judges. As a result, questionable scientific methods are routinely admitted into evidence. Described by the researchers as “junk science,” these inconsistent assessments can have serious legal implications and potentially undermine fairness at trial. To ensure that admitted evidence is sound and justice is delivered, understanding the validity of psychological assessment tools is crucial for attorneys and experts alike.

Study Background

The study, “Psychological Assessments in Legal Contexts: Are Courts Keeping ‘Junk Science’ Out of the Courtroom?” set out to explore the assessment tools used by psychologists in legal contexts and to understand how often those assessments face admissibility challenges in the courtroom. Drawing on 22 surveys of clinicians practicing in forensic settings, the researchers identified 364 commonly used assessment tools, spanning a variety of aptitude, achievement, personality, psychological, and diagnostic tests. The researchers then measured each assessment’s standing in its field and the extent of relevant peer review or testing, which are among the admissibility factors for expert testimony under Daubert.

Survey Results

The study found insufficient evidence to make a judgment about the general admissibility of 51% of the 364 surveyed tools. But among the tools that could be evaluated, about 67% were considered accepted in their field, 16.8% were not considered acceptable, and another 16.8% had conflicting results. The study also found that only 40% of psychological assessment tools used in the courtroom are favorably rated by experts. According to the data, 37% of the assessment types received mixed reviews, and 23% were viewed unfavorably.

The data also called into question the peer review and testing process. As the study explained, “some psychological assessment tools are published commercially without participating in or surviving the scientific peer-review process and/or without ever having been subjected to scientifically sound testing.” The researchers attributed this testing failure to “market economy pressures, proprietary interests, and intellectual-property concerns” that may push test developers to release their tools quickly and without review. For assessments that do undergo peer review, the researchers determined that the reviews are sometimes conducted on commercially published information, limiting their validity and impartiality.

Examining the Court’s Review

In the second portion of the study, the researchers investigated whether courts are sufficiently scrutinizing psychological assessment evidence, examining the admissibility of 30 of the 364 tools across 876 different cases. In their review, the researchers applied the Daubert factors as the admissibility standard, along with the requirements that evidence be reliable and helpful to the jury while carrying minimal prejudicial impact. Upon review, the Minnesota Multiphasic Personality Inventory was, by far, the most frequently used tool, appearing in 485 cases. The well-known Rorschach Inkblot Test was the second most frequently employed, appearing in 59 cases.

In examining how these tools were scrutinized by the courts across the 876 cases, the researchers found that many were never challenged at all. In fact, the Minnesota Multiphasic Personality Inventory was challenged only once across its 485 sampled cases. Of the 19 cases in which a psychological assessment was challenged, only 6 challenges succeeded and resulted in the evidence being deemed inadmissible. Challenges on the basis of validity were generally less successful than challenges to relevance. Interestingly, there was little correlation between a tool’s quality and its likelihood of being challenged.

A Tension Between Law & Science

The dearth of evidentiary challenges may have to do with “a basic tension between law and science” that hinges on the fact that the two disciplines conceptualize facts in different ways. As explained in the study, “Because of the law’s preference for certainty, experts may feel tempted to reach beyond legitimate interpretations of their data both to appear ‘expert’ and to provide usable opinions. Similarly, legal decisionmakers may disregard testimony properly given in terms of probabilities as ‘speculative,’ and may attend instead to experts who express categorical opinions about what did or will happen.” The study’s authors also cautioned, “When poor science is not recognized as such and is used to reach legal decisions, the risk of error rises and the legitimacy of the legal system is threatened.”

Key Takeaways for Trial

Although the study’s results are alarming, they do highlight some positive findings that psychology experts can use effectively at trial. Testing tools are still considered more valid and reliable than unaided clinical judgment, that is, evaluations conducted without standardized measurement tools, which about 25% of psychologists still rely on when providing court testimony. Any qualified psychology expert should strongly consider the use of assessment tools, especially since the surveyed clinicians employed, on average, four different assessment methods in any given forensic evaluation.

The study also explained, “there are many psychometrically strong tests used by clinicians in forensic practice. And, consistent with their roots in psychological science, psychological assessment tools are nearly all tested—and thus have a known or knowable error rate with respect to at least some outcome measures. In addition, there is a positive relationship between the overall psychometric strength of tools and their general acceptance.” However, since information regarding general acceptance is not always available, it should not be the only factor a psychologist uses when evaluating an assessment tool. As a general rule, psychology experts who utilize a variety of accepted testing methods tend to fare better.

Final Thoughts

The researchers concluded with an emphasis on the importance of psychologists’ own evaluation of their methods. “Although judges and lawyers are the ultimate arbiters of when and how psychological tools are used in the courts, the onus is on psychology to create sound methodologies and teach its scientists and practitioners to use them,” the study explained.

Overall, regardless of the type of assessment tool, all qualified psychology experts should strive to use methods that have been subject to extensive peer review and testing. As the study stressed, the more information that is publicly accessible regarding these assessments, the better the chance that they can be adequately evaluated by the expert, the attorney, and the court alike.

About the author

Anjelica Cappellino, J.D.

Anjelica Cappellino, Esq., a New York Law School alumna and psychology graduate from St. John’s University, is an accomplished attorney at Meringolo & Associates, P.C. She specializes in federal criminal defense and civil litigation, with significant experience in high-profile cases across New York’s Southern and Eastern Districts. Her notable work includes involvement in complex cases such as United States v. Joseph Merlino, related to racketeering, and U.S. v. Jimmy Cournoyer, concerning drug trafficking and criminal enterprise.

Ms. Cappellino has effectively represented clients in sentencing preparations, often achieving reduced sentences. She has also actively participated in federal civil litigation, showcasing her diverse legal skill set. Her co-authored article in the Albany Law Review on the Federal Sentencing Guidelines underscores her deep understanding of federal sentencing and its legal nuances. Cappellino's expertise in both trial and litigation marks her as a proficient attorney in federal criminal and civil law.

