Criticisms of forensic science may not be quite so old as the use of forensic science in criminal proceedings, but by now they are nearly as venerable.[1] They share a simple theme: forensic science epitomizes the duality of scientific and technological advancement, offering both promise and peril. On the one hand, from 1989 through the close of 2019—a single generation—forensic use of DNA led to the exoneration of 367 inmates.[2] On the other hand, Cameron Willingham was executed in Texas in 2004 for allegedly setting a fire that caused the death of his three daughters, but the evidence used to persuade the jury that Willingham committed arson was so-called junk science, and the overwhelming scientific consensus today is that Willingham did not set the fire.[3]

This issue of the Houston Law Review exposes the many opportunities for peril in the use of forensic science. But all the contributors also offer specific and concrete suggestions for how to avoid them and thereby achieve the full measure of this science’s promise.

Judge Jed Rakoff begins by observing that of the approximately 2,440 exonerations in the United States, approximately 600—or roughly one in four—grew out of a conviction that relied on false or misleading scientific evidence;[4] more remarkably, some 40% of the nation’s DNA exonerations involved false or misleading scientific evidence.[5] He then unpacks what I count as eight different ways these errors arise: The supposed scientific discipline might not even be science (E1); there may have been laboratory (E2) or human error (E3); there might have been human misunderstanding of the data (the charitable interpretation, E4) or false or misleading testimony concerning the data (E5); the witness might not actually have been an expert (E6); the defense lawyers, lacking scientific literacy, might not have understood the evidence or had the resources to retain an expert to explain it to them (E7); and everything else (E8). Judge Rakoff stresses that judges have the responsibility to act as gatekeepers for scientific (as well as other) evidence, but of course, even the most vigilant gatekeeper cannot solve E2, E3, E4, E5, or E6. Judge Rakoff closes by offering suggestions to address many of these failures,[6] but he also acknowledges that because the overwhelming majority of criminal prosecutions occur in state courts, it is the states themselves that bear ultimate responsibility for correcting these problems.

Whereas Judge Rakoff’s address reviews the many ways mistakes can infect the use of forensic science, Professor Valena E. Beety’s and Professor Jessica Cino’s discussions provide hard-nosed looks at the structural limits that make it difficult to prevent these mistakes from compromising fair trials. By my count, between the two of them, Beety and Cino identify no fewer than seven such limitations:[7] inmates do not possess a uniform set of rights to raise post-conviction challenges (L1); there is no longer meaningful federal habeas review of state convictions (L2); numerous process limitations (including inadequate access to funding and experts) skew trial proceedings in favor of the state (L3); legal barriers make it difficult to hold actors (including prosecutors, defense lawyers, and government witnesses) accountable for wrongdoing (L4); there is insufficient adversarial scrutiny in both the scientific and legal domains (L5); the forensic sciences lack both research agendas and ample validation studies (L6); and, perhaps most intractably, the influence of politics on state court proceedings can overwhelm the pressures for better science (L7).

A trial court judge himself, Judge Rakoff primarily addresses the early phases of the criminal court process, while Professor Beety homes in on the later phases and addresses how state courts might redress the errors that will inevitably arise owing to the many structural limitations. She points to what might strike many as a paradox: While Texas is known for its prolific use of the death penalty, including in the case of Cameron Willingham (which, not so incidentally, Beety discusses at some length),[8] it also provides a model post-conviction writ of habeas corpus for attacking criminal convictions based on so-called junk science.[9] These junk science writs are especially important as a safety net because, as Beety observes, the federal courts, since the enactment of the Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA), have far less power to set aside state court convictions.[10] Professor Beety explains how the content and recommendations of the 2009 report by the National Academy of Sciences, titled “Strengthening Forensic Science in the United States: A Path Forward,” as well as model forensic science commissions (like the one established in Texas), can offer a roadmap for fixing forensic problems in the future while simultaneously providing a legal basis to challenge, in state habeas proceedings, problematic convictions from the past.

Professor Cino pivots away from discussing how the legal system, legal doctrine, and legal procedures might help correct errors based on bad science and turns instead to a consideration of how bad science can be made good (or at least better). Indeed, she begins with a provocative question (which Professors Simon A. Cole and Alex Biedermann will also pose and expand upon), asking whether forensic science is even science at all.[11]

In recent years, many scientific domains have been plagued by a lack of reproducibility, so much so that technical journals aimed at practicing scientists (like Nature) as well as popular journals aimed at nonspecialists (like Scientific American) have lamented this scourge and proposed correctives.[12] But the kinds of solutions that might work for chemistry or psychology are not necessarily viable for forensic science, as Cino intimates, because, at its core, forensic science asks a question quite different from the one posed in ordinary scientific discourse. The narrow inquiry that animates almost all forensic examinations is whether a particular accused person did what he or she is accused of. Thus, unlike other sciences, forensic science and scientists tend to lack research agendas, in part because, if forensic science is science at all, it is what Cino calls outcome-based science. Outcome-based science is either not easily replicable or not replicable at all (imagine, for example, an entire DNA sample being consumed). Unlike, say, experimental physics, which exemplifies a disinterested search for truth, contemporary forensic science more resembles a results-oriented business venture.[13] Cino’s solution is to import scientific rigor into the forensic sciences. She identifies specific changes to protocols that forensic scientists should implement as a step toward achieving this rigor, and she also suggests changes to the language forensic scientists use when reporting on or testifying about their findings.

When it comes to the language forensic scientists use—in their reports and in their trial testimony—Professors Simon A. Cole and Alex Biedermann provide a meticulous parsing of a critically important yet easily overlooked aspect of how forensic science increasingly garbs its conclusions with decidedly unscientific nouns. Cole and Biedermann note that words like fact, opinion, and conclusion have been replaced in much discourse by the word decision.[14] Instead of a fingerprint examiner testifying, for example, that her opinion is that a smudge left at a crime scene belongs to the defendant, she states she has decided the smudge was left by the defendant. In ordinary discourse, people may use many of these words interchangeably; nevertheless, they differ in subtle yet salient ways. Cole and Biedermann argue convincingly that the competing nouns convey quite different levels of epistemic strength with regard to a particular scientific claim (e.g., that a smudge at a crime scene was left by a defendant).

Cole and Biedermann identify another development in forensic science that has coincided with this shift in nomenclature, namely: the increasingly common application of so-called decision theory (or decision analysis).[15] Decision theory itself comprises two strands, one of which examines probability, and the other of which focuses on utility. Yet questions of utility, while capable of being represented in mathematical language, are fundamentally normative. To oversimplify their argument somewhat, when examining a fingerprint, an analyst may believe there is x-probability it belongs to the defendant and y-probability it belongs to an unknown individual. Given the American criminal justice system’s preference for wrongful acquittals (i.e., a not guilty determination in a case where the defendant in fact committed the crime) over wrongful convictions (i.e., a guilty determination when the defendant is innocent), however, how much larger than y must x be for the examiner to decide the print belongs to the defendant? Cole and Biedermann do not answer this question, nor do they propose legal reforms, for that is not their aim. Instead, Cole and Biedermann hope to change the way forensic scientists talk about their work.
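The normative stakes can be made concrete with a standard decision-theoretic sketch (my illustration, not one drawn from Cole and Biedermann’s article; the cost symbols below are hypothetical labels). Let x be the probability the print is the defendant’s, y the probability it belongs to an unknown individual, c_FI the cost of a false identification (risking a wrongful conviction), and c_MI the cost of a missed identification (risking a wrongful acquittal). Deciding “identification” risks error with probability y; deciding “no identification” risks error with probability x. Minimizing expected cost, the examiner identifies only if:

```latex
\[
  \underbrace{y \cdot c_{FI}}_{\text{expected cost of identifying}}
  \;<\;
  \underbrace{x \cdot c_{MI}}_{\text{expected cost of not identifying}}
  \quad\Longleftrightarrow\quad
  \frac{x}{y} \;>\; \frac{c_{FI}}{c_{MI}} \;=\; k .
\]
```

Under a Blackstone-style ratio of k = 10, x must exceed 10y before the examiner “decides” the print belongs to the defendant. The point, consistent with Cole and Biedermann’s argument, is that the threshold k encodes a normative judgment about which errors matter more; it is not itself a scientific finding.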

Professor Brandon Garrett returns us to the project of improving the science itself. Like Beety, Cino, and Cole and Biedermann, Garrett has ideas about how to improve the science from within; and like Rakoff and Beety, he has a distinct set of suggestions for policing the science from without.

Hovering over Garrett’s discussion is an irreversible development: the proliferation of forensic laboratories. The numbers are staggering. Garrett tells us that the more than 400 publicly funded crime labs employ more than 14,000 workers.[16] Collecting and analyzing an array of physical evidence is big business, and big businesses create work even as they perform it. Indeed, as the areas of forensic inquiry have grown, and as the technology allowing these inquiries to proceed has improved, the sheer volume of the work conducted by these labs has mushroomed. Garrett illuminates a web of interconnected concerns that, if not created by this explosion of workload, have certainly been exacerbated by it.

At bottom, paradoxically, is a resource problem. While Congress and federal agencies have sent hundreds of millions of dollars to crime labs to purchase equipment, hire analysts, and attempt to address backlogs, Garrett expects the aggregate budgets of state crime labs to exceed $2 billion by the end of this year.[17] As big as that number is, however, it does not eliminate the pressure on labs to choose how to allocate funds. To borrow one of Garrett’s illustrations, a crime lab with a large backlog of DNA samples may choose to invest in newer automated testing technology, but this technology may yield less reliable results, and less reliable results can lead to mistaken convictions, which can in turn lead to substantial payouts to the wrongfully convicted. Moreover, as a simple numerical proposition, the dramatic increase in work done by these labs predictably leads to an increase in errors—both in the testing itself and in the interpretation of results.

Hindering efforts to reduce or eliminate these errors is a lack of quality control and institutional oversight. Thirteen states (plus the District of Columbia) have established forensic science commissions—which means 37 states have not. Garrett recounts deeply entrenched cognitive biases that can lead to error and other problems of quality control. In short, the number of labs and the number of cases they process have drastically outpaced the oversight. Garrett therefore stresses the need to simultaneously improve the quality of work happening inside the laboratories, while also investing in and building the external institutional structure that will provide oversight and police quality control.

Finally, Professors Sandra Guerra Thompson and Nicole Bremner Cásarez dissect what appears to be an intractable problem facing the use of forensic science at criminal trials. On the one hand, empirical proof of efficacy does not exist for most areas of forensic inquiry. On the other hand, in order for scientific evidence to be admitted in a criminal prosecution, it must satisfy the so-called Daubert standard,[18] which requires the judge to take into consideration, among other factors, the error rate in the kind of testing that generated the evidence sought to be admitted.

Faced with an absence of validating studies or empirically grounded data on error rates, courts face a dilemma: admit evidence that fails to meet the Daubert standard of rigor, or exclude virtually all forensic data. Thompson and Cásarez report that state trial courts largely choose the former option, with the result that all kinds of junk science—including unsound arson investigative techniques, which underlay the conviction of Cameron Willingham—become part of the evidentiary package relied on by the state to obtain convictions.[19] In Thompson and Cásarez’s informed view,[20] of all the problems facing forensic science, the need for fundamental empirical research to identify the accuracy of various disciplines is the single most urgent.

They then provide a highly detailed proposal, built around blind testing, for filling this empirical lacuna. In line with other contributors to this Symposium, Thompson and Cásarez point to Texas as leading the way for meaningful reform. Recognizing that the cost of such a system is daunting, especially for smaller labs, they note that the blind approach has value both in facilitating validation studies and in providing a source of quality control—which is to say, the benefit is worth the price.

Both professors are charter members of the Houston Forensic Science Center, and they are thus intimately familiar with a blind testing methodology implemented across six distinct forensic disciplines. They address how blind testing operates at the HFSC in three of them—toxicology, ballistics, and latent prints—and report the results of the testing in these domains. Thompson and Cásarez acknowledge that the aggregate sample size is not yet large enough to support robust conclusions, but the methodology used by the HFSC is well on the way to providing precisely the type of data that evidentiary gatekeepers require to ensure forensic evidence meets the Daubert threshold.


Among the contributors to this Symposium, there appears to be unanimity on three propositions. First, that forensic science is a powerful, important, and useful category of evidence in criminal cases, both for identifying the guilty and for excluding the innocent. Second, that this value is undermined by both internal and external problems—from mismanagement and financial constraints to a blurring of the line between science and advocacy.

It is the third area of agreement, however, that may be the most important. When it comes to forensic science, we know what we do not know; the unknowns are known. Moreover, we also know what needs to be done to eliminate those unknowns. Each of the scholars contributing here has drawn a portion of the roadmap for filling those gaps. It now falls to the states to follow that map.


  1. Some scholars suggest fingerprints have been used to authenticate documents for more than two thousand years. See David R. Ashbaugh, Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology 14–15 (1999). Many published articles also report that fingerprints were used to identify accused wrongdoers as early as the seventh century in China, although I am unaware of specific illustrations. See, e.g., Wayne A. Logan, Policing Identity, 92 B.U. L. Rev. 1561, 1573 (2012). The use of ridge analysis in criminal proceedings in the late nineteenth century, however, is well-documented. E.g., David H. Kaye, A Fourth Amendment Theory for Arrestee DNA and Other Biometric Databases, 15 U. Pa. J. Const. L. 1095, 1121 & n.145 (2013). If there are published criticisms of forensics with that much history, I am unaware of them. For comparatively recent examples, however, consider Michael J. Saks & David L. Faigman, Failed Forensics: How Forensic Science Lost Its Way and How It Might Yet Find It, 4 Ann. Rev. L. & Soc. Sci. 149 (2008); and Jennifer L. Mnookin et al., The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 725 (2011).

  2. See DNA Exonerations in the United States, Innocence Project, https://www.innocenceproject.org/dna-exonerations-in-the-united-states/ [https://perma.cc/AR3A-PVBV] (last visited Feb. 22, 2020).

  3. See generally Paul C. Giannelli, Junk Science and the Execution of an Innocent Man, 7 N.Y.U. J. L. & Liberty 221 (2013).

  4. Jed Rakoff, U.S. Dist. Judge, S. Dist. of N.Y., Keynote Address at the Houston Law Review & Criminal Justice Institute Symposium: The Future of Crime Labs and Forensic Science (Sept. 20, 2019), in 57 Hous. L. Rev. 475, 475 (2020).

  5. Id. at 476.

  6. Id. at 480–81.

  7. Many of these limits overlap, and of course, more than one can infect the same proceeding.

  8. Valena E. Beety, Changed Science Writs and State Habeas Relief, 57 Hous. L. Rev. 483, 512–14 (2020).

  9. Id. at 524.

  10. Id. at 489.

  11. Jessica Gabel Cino, Roadblocks: Cultural and Structural Impediments to Forensic Science Reform, 57 Hous. L. Rev. 533, 534–36 (2020).

  12. See Challenges in Irreproducible Research, Nature (Oct. 18, 2018), https://www.nature.com/collections/prbfkwmwvz [https://perma.cc/7ZA8-LUBM?type=image]; Markus Gershater & Adam Tozer, To Fix the Reproducibility Crisis, Rethink How We Do Experiments, Sci. Am. (July 18, 2019), https://blogs.scientificamerican.com/observations/to-fix-the-reproducibility-crisis-rethink-how-we-do-experiments/ [https://perma.cc/NZP5-NGV5].

  13. Cino, supra note 11, at 536–37.

  14. Simon A. Cole & Alex Biedermann, How Can a Forensic Result Be a “Decision”?: A Critical Analysis of Ongoing Reforms of Forensic Reporting Formats for Federal Examiners, 57 Hous. L. Rev. 551, 558–63 (2020).

  15. Id. at 563–65.

  16. Brandon L. Garrett, The Costs and Benefits of Forensics, 57 Hous. L. Rev. 593, 598 (2020).

  17. Id. at 598–99.

  18. Daubert v. Merrell Dow Pharm., 509 U.S. 579 (1993). Daubert, of course, applies to the admission of all scientific evidence, in both criminal and civil proceedings.

  19. Sandra Guerra Thompson & Nicole Bremner Cásarez, Solving Daubert’s Dilemma for the Forensic Sciences Through Blind Testing, 57 Hous. L. Rev. 617, 619–20 (2020).

  20. Both Thompson and Cásarez were instrumental in establishing the Houston Forensic Science Commission.