I. Introduction
“Someone must have been telling lies about Joseph K., for without having done anything wrong he was arrested one fine morning.”
—Franz Kafka, The Trial[1]
You are driving your new car, a Toyota RAV4, into the financial district. As you pull up to a stoplight, you see a person about to step onto the street being tased and arrested by an officer. The person screams in protest, asking for help. You roll your window down to ask the officer what is going on, why they are tasing and arresting this person. The officer responds that, based on data the police have collected about the person (their recent purchases, the people they have associated with recently, and the fact that your make and model of car entered the vicinity), they had good reason to infer that the person was going to carjack you.
At first glance, many would consider this a success. A dangerous crime was avoided, with about as little doubt as practicable that the crime would otherwise have occurred. Of course, questions remain: First, was the person actually going to jack your car? Could it be that they were simply asking for help? Perhaps they had weapons for their own safety and weren’t intending to use them against you. Second, what’s going on with the vast amounts of information the police were using to make these inferences? Is that information true? Are the inferences legitimate? Third, what’s going to happen to this person? Are they going to be incarcerated or otherwise harmed? Notwithstanding these questions, to many this is a prima facie good state of affairs.
Broadly, this scenario exemplifies the key questions underlying criminal prediction with the use of artificial intelligence (AI) and big-data methodologies: Were these predictions efficacious in avoiding crime, or were they overkill? How would we even know, given the counterfactual nature of the question? And is any of this fair?
These questions aren’t new, because criminal prediction itself is nothing new. In the late 1800s, Cesare Lombroso—sometimes described as the father of positive criminology—introduced the idea of “born criminals” who were destined for lives of crime and could be identified based on biological markers.[2] Lombroso would later broaden his views to allow for sociological and emotional explanations of criminality, as well as recognizing that there were “occasional criminals” motivated by deprivation and stress.[3] While many of his views were considered mistaken, the broader project of predicting individuals’ criminality has retained a sustained vigor. Thereafter arose simple data-based methodologies for criminal prediction—using factors like prior criminality, age, race, sex, and socioeconomic background.[4] These grew into more sophisticated statistical methods, like traditional regression analysis.[5] Importantly, alongside came calls for ensuring that such prediction was not prejudiced or bigoted.[6] Those calls have been heeded only to an extent, with much work left to do.[7]
This march towards more accurate and comprehensive prediction has continued, with the use of the most modern techniques, including AI and big-data prediction.[8] By AI and big-data criminal prediction—what I’ll abbreviate “CrimAI”—I mean the use of algorithms along with large sets of data[9] to make predictions about (probabilistic) criminal wrongdoing.[10] By their nature, these algorithmic predictions are often nontransparent, providing no explanation—they may be a black box.[11] The predictions themselves can be varied: they may be that a particular individual is more likely to commit a crime, that a particular profile of offender is likely to commit a crime, that a person is likely to be a victim of crime, or that criminal activity is more likely to occur at some time and place.[12]
Predominating the discourse is the allure of eliminating criminal wrongs and their accompanying harms. But quickly following is the realization that this cannot practically be accomplished without earlier interventions to stop putative crimes, which carry greater potential for predictive error.[13] Inevitably then, such earlier intervention creates the risk of erroneous and unjustified punishment, meted out to would-be criminals. This picture of criminal prediction is familiar, exemplified in Philip K. Dick’s The Minority Report, where the government’s “Precrime” system foresees criminal wrongdoing with the aid of mutant precognitives (or precogs) and thereafter levies punishment on the would-be offenders.[14]
Indeed, scholars and commentators on AI and big-data prediction warn of its dangers.[15] And those perils are myriad. First, there is the principal problem of a lack of transparency and explanation in the determinations of AI- and big-data-based methodologies.[16] Second, there is the concern that these technological advances, being trained on our own system, will perpetuate the biases, prejudices, and inefficiencies of our system.[17] Third, there is the concern that it will make mistakes—and because of the prior problems of nontransparency, bias, prejudice, and inefficiency—we will not realize that such problems are occurring with regularity and in systematically deleterious ways.[18]
With that in mind, those against the use of AI and big-data methodologies advocate isolating criminal processes from these technologies. And those in favor of their use naively contend (or simply presume) that these technologies will approach astounding levels of accuracy, and thus that we should willingly accept such an amplified criminal justice system.[19] Thus, in assessing the value of CrimAI, we are led to decide between two poles: resisting CrimAI or accepting dystopia, as in The Minority Report. The latter position is not just a pithy riposte—it is a commonly held vision of what a world with CrimAI will look like.[20] It suggests a grim state of affairs, where the surveillance state joins hands with the carceral state, and the parentalist state, and indeed the oligarchic state.
This Article contends that there is another way. More precisely, I suggest there is an alternative path, which may be suboptimal in an ideal world, but remains the best way forward among our practically accessible options. First, I contend that the strategy of resistance—that is, attempting to keep AI- and big-data-based methods away from the criminal law and specifically criminal prediction—is unlikely to succeed. CrimAI will very likely be part of the criminal law if for no other reason than that it will be part of so many aspects of our world. Against that backdrop, I contend that there is a need for alternative responses, and I offer one such model for a criminal justice system equipped with CrimAI: the Liberty-Enhancing View. In this vision, a paramount goal of using CrimAI is that it must preserve and promote personal liberty.[21] Practically, that means the system must always look to avoid punishment, it must look to avoid stigmatization, it must seek to allow people to behave as they wish, and it must seek to decrease the breadth and reach of the criminal law.
Consider two examples. First, there’s jaywalking—the criminal violation of crossing streets at undesignated locations. This is a very common practice, especially in big cities; and at the same time, police departments will sometimes crack down on jaywalking. For example, after a spate of pedestrian deaths in New York City in 2014, the New York Police Department issued a much higher number of jaywalking citations in an effort to achieve greater traffic safety.[22] And many lamented the increased vigilance. Jaywalking is often discussed as a standard example of the bloat of the criminal law and of law enforcement overreach.[23] Yet at the same time, jaywalking does cause real harms: it can cause traffic accidents, including fatalities, and it can make traffic less efficient generally—which means more time people lose to the humdrum of commuting.[24] The public’s perception of this vacillates, shifting between the poles of demanding maximal compliance and abolishing the crime altogether. Those who worry about CrimAI use jaywalking as a bogeyman exemplar of the dangers of CrimAI. They recognize that most people have harmlessly jaywalked, but in the big-“Data State,” where everything is seen and processed, people will find themselves ticketed or otherwise sanctioned. And indeed, they might even be targeted by Precrime-type regimes.[25] On the other hand, those who think jaywalking is harmful will simply insist that it’s a good thing when we discover violations—they have real costs, violators should compensate for those costs, and violators should desist from future conduct. Thus, they could maintain that a CrimAI apparatus is, at least at first glance, an improvement.
The Liberty-Enhancing View contends that CrimAI can achieve the best of both worlds: Suppose that the use of algorithms, equipped with comprehensive real-time data, would communicate to pedestrians when it is safe to cross anywhere along the road (and when it is not). And suppose it would communicate to drivers—or self-driven cars—the predicted movement of pedestrians to ensure safety and greater driving efficiency. This rubric would use the same functionalities—CrimAI to anticipate the movements of people, to predict when people will commit the offense of jaywalking—to enhance people’s liberty. In this picture of the future, people can walk at their whim, with just a safety alert. There is then no coordination-based need for a crosswalk.[26] And indeed, there may not even be a need for a jaywalking offense at all.[27] In this way, the Liberty-Enhancing View of CrimAI reduces criminal wrongs and their associated harms but does not devolve into a suffocating surveillance-carceral state.[28]
What’s more, I contend that this Liberty-Enhancing View of CrimAI can apply to even what most think of as the crux of the criminal law: violent crimes. Suppose CrimAI becomes accurate and effective enough to predict the occurrence and putative perpetrators of violent crimes. This is the gold standard of criminal prediction, and as suggested above, many people would be willing to trade many of our rights—to process and to privacy—for this state of affairs. In The Minority Report’s world, those who are predicted to commit crimes aren’t punished, exactly; they are banished to another planet. So, they are subjected to an approximation of punishment.
Here again, the Liberty-Enhancing View offers another path. In a system where CrimAI works at high levels of accuracy and efficiency, we may only need to make the limited intervention of stopping the putative criminal wrongdoer from committing their crime. If they were about to assault someone, a team of law enforcement could stop them from committing the assault and then let them go on their way with no further intervention or punishment. If they persist in their attempt, CrimAI will again alert law enforcement, and they can again intervene. Repeat this as many times as necessary. Indeed, we could do more: The government could dispatch a team—including a therapist, a social worker, and law enforcement officers—to halt the instant commission of a criminal wrong and counsel the putative wrongdoer against further commission. This intervention would avoid the trappings of punishment, if at all possible. The goal is to do no more than is needed to avoid the commission of the criminal wrong, to make the most limited intervention with the putative wrongdoer. Thus, here too, with violent crimes, we see the Liberty-Enhancing View marshaling CrimAI to avoid the harms of crimes without devolving into The Minority Report.
The Liberty-Enhancing View is not merely a gimmick; I contend that it is a necessary step to maintaining any semblance of a just criminal system. The full embrace of AI and big data appears inexorable. AI and big data operate in the background of so many of our online resources, most prominently in recommendation algorithms.[29] They touch all the media and news you take in. And, indeed, interactive AI—through very accessible sources like ChatGPT, BingChat, Bard, ClaudeAI, etc.—has already become a staple of people’s lives.[30] Even something as basic as Microsoft’s Office Suite is equipped with AI functionalities—as I type this sentence, AI is suggesting how it should be completed.[31] Thus, AI equipped with big data will touch every aspect of our lives. If that is correct, then there is no reason to think that criminal justice will escape AI and big data’s grasp. Of course, plenty of aspects of the criminal justice system have already been so ensnared. But the point is more stark: Even if today we made the decision to extract the criminal justice system from AI and big data, that may not even be practically feasible, given the level of integration of AI and big data in our lives and society. Thus, if CrimAI is inevitable, we must have a vision of how it can enhance our society and each of our lives—so that it does not crush us.
This Article proceeds in six further parts. In Part II, I set forth two formative pieces of fiction, The Minority Report and All the Troubles of the World; these have created a dominant, pessimistic vision of the future of criminal prediction, with certain core features that I detail. In Part III, I briefly sketch the dominant theories of criminal justice—namely, retributivism and consequentialism. I then explain how criminal prediction does, and does not, advance the objectives of these theories. In Part IV, I consider whether AI will be successful in criminal prediction, and whether it can traverse the hurdles of algorithmic bias and deleterious invasions of our privacy. In Part V, I assess the strategy of resistance—that is, abstaining from the use of AI and big data in the criminal law. I contend that it is a mistaken strategy, for several reasons: it may forgo important benefits, and it is highly unlikely to succeed. At the very least, we must pursue the Liberty-Enhancing View in parallel.
That leads to Part VI, where I give bones and body to the Liberty-Enhancing View. As a basic guide for how to integrate AI into the criminal system, I set out five main principles for ensuring justice in the Data State:
- In dealing with putative criminal wrongdoers, a system equipped with CrimAI should first look to avoid responses that approximate punishment;
- CrimAI should be used to eliminate pretextual, intrusive interactions between law enforcement and civilians;
- CrimAI should eliminate mala prohibita;
- CrimAI should be used to eliminate inchoate liabilities; and
- Systems equipped with CrimAI should abstain from targeting people based on their purported bad character.
II. Visions of the Future
A. The Minority Report
Philip K. Dick’s The Minority Report has greatly influenced our expectations about predicting crime.[32] The story follows the protagonist, John Allison Anderton, who runs the Precrime program in a futuristic society.[33] Precrime works using three mutant “precogs” who purportedly predict future criminal acts. In fact, it’s not so straightforward: Anderton explains that the precogs continuously “babbled”—their statements largely incoherent to the listener.[34] Indeed, Anderton surmised that the precogs themselves had no understanding of their utterances.[35] But their utterances could be processed, through “machinery,” into coherent predictions about criminal wrongdoing, with about a week or two’s notice.[36]
Equipped with the predictions, Precrime has been able to stop much of the violent crime.[37] We are told that the last murder was five years prior, and it occurred despite Precrime predicting the perpetrator’s name and the location of the murder.[38] Anderton laments, “After all, we can’t get all of them.”[39] As a result of Precrime, they have been able to “abolish[] the post-crime punitive system of jails and fines.”[40] But it is not all roses; they have detention camps for their “would-be criminals,” who “forfeit[] [their] right to freedom and all of its privileges.”[41] And Anderton notes that, though Precrime claims they are culpable, the targets have a legitimate claim that they are innocent.[42]
The rest of the story proceeds as follows: Precrime lodges a prediction that Anderton will murder a person named Kaplan. Anderton does not believe the prediction; indeed, he has never heard of Kaplan. Instead, he suspects his newly appointed subordinate Witwer of framing him.[43] Because such reports are also transmitted to the Army, Anderton is apprehended as a would-be criminal and scheduled to go to a detention camp.[44] He is brought to Kaplan, who is a former General of the Army.[45] It is announced to the public that Anderton has been predicted to commit murder, and he is stripped of his position, with Witwer taking over his command as Commissioner of Police and leader of Precrime.[46] A figure named Fleming claims that Anderton is being framed—by Anderton’s wife Lisa—and breaks him out of detention.[47]
At this juncture, Anderton realizes that there are sometimes “minority report[s]”—that the three precogs may disagree, and one precog may have a different prediction about what will happen.[48] He discovers that in his case, there was a minority report—suggesting that he would refrain from killing Kaplan after having learned of the prediction that he would kill Kaplan.[49] At the same time, he sees that the two reports in the majority, unusually, have slight differences. Anderton believes that the minority report exonerates him and is eager to seek justice for himself.[50]
He then encounters Lisa, who manages to convince him that there is no conspiracy against him—orchestrated by neither her nor his subordinate Witwer. Further, she persuades him that the majority report—predicting Anderton would kill Kaplan—was genuine, and so too was the minority report, which took into account Anderton’s knowledge. She wonders aloud whether other would-be criminals would reform themselves as well.[51] Fleming then reveals himself, attempting to kill Lisa. Anderton subdues Fleming and realizes that Fleming works for former General Kaplan.[52]
Anderton then puts the puzzle together. In the first precog’s report, Kaplan kidnaps Anderton, threatening him so as to force the disbanding of Precrime.[53] Kaplan’s motivation is that, in the absence of Precrime, he can stage a coup and establish Army authority over society. Anderton seeks help from the government, the Senate, but it is unwilling to risk a civil war with Kaplan’s forces—and instead disbands Precrime and the Police. Faced with that, Anderton kills Kaplan to restore order.[54]
The second precog’s report—the so-called “minority report”—comes temporally after the first. In it, Anderton is aware of the first precog report—he knows he is predicted to kill Kaplan. So, to save himself from the ignominy and consequences of committing murder, he refrains from killing Kaplan.[55] Kaplan intends to reveal this “minority report” publicly, to undermine confidence in Precrime and the Police—creating the very void that would allow him to conduct his coup.[56]
That brings us to the third precog report, which comes temporally after the second. In this one, Anderton becomes aware that he was initially slated to kill Kaplan, that he learned of this and decided to refrain, and that Kaplan would take advantage of his refraining. This third report predicts that Anderton again changes his mind and decides to kill Kaplan, to stop his coup.[57]
And indeed, this third precog report proves true: Anderton kills Kaplan, stops the coup, and restores Precrime’s public credibility. After being apprehended, he is sent to detention.[58] Witwer, the new Commissioner, asks him whether this problem can arise again. Anderton retorts that it can only happen to Witwer, because he is now the one who is aware of the reports, and that it could happen at any time.[59]
I highlight the following striking insights regarding criminal prediction for our purposes:
First, there is broad agreement that the invasive measures of the Precrime era—especially prediction that knows whether people are considering crime, but also the suspension of rights and freedoms—are worth enduring by the population at large in order to halt crime.[60]
Second, despite the fact that people understand that would-be criminals have not committed the crime, are innocent, and thus should not be punished, the measures meted out to would-be criminals approximate punishment.[61]
Third, and relatedly, though people understand that Precrime reports can change behavior, including by persuading would-be criminals not to commit their crimes, relying on that is still deemed too perilous a means of crime prevention.[62]
Fourth, even though violent crimes are a genuine rarity in the Precrime era, there is a continuing, palpable fear that they might occur. In the Precrime system, that is not because of problems of prediction, but because of insufficient response time. This fear outweighs any concerns about wrongful or poor treatment of would-be criminals.[63]
B. All the Troubles of the World
Another prescient piece of literature dealing with criminal prediction is Isaac Asimov’s All the Troubles of the World.[64] The story is about Multivac, a massive computer fed with all available data, which makes predictions about human life and ultimately starts directing those aspects of life.[65] It is tasked with predicting and stopping crime and begins to do so successfully, such that violent crime drops precipitously.[66] It was initially tasked with predicting major crimes, but as it showed success, the list of crimes to prevent grew to include minor crimes.[67] The next frontier for Multivac is disease prediction and prevention.[68]
We are told that there are very few murders during the term of the so-called Chairman of the Central Board of Corrections, and that the new Chairman, Gulliman, intends to have zero murders on his watch.[69] As a case study, five years earlier, “wife-beating” had been added to Multivac’s list of crimes to predict and prevent.[70] The day’s report listed 2,000-odd potential cases, not all of which would be preventable.[71] “Perhaps thirty per cent would be consummated. But the incidence was dropping and consummations were dropping even more quickly.”[72] Indeed, the fact that such actions were predictable would soon become known, and that itself would prevent most occurrences.[73] One day, Multivac predicts two intentional murders, a rare occurrence.[74] Gulliman wants his subordinates to ensure nothing occurs, but they protest that Multivac still contends these are low-probability events.[75]
The scene shifts to a ceremony where children reaching majority must fill out comprehensive forms about themselves, with such data being fed to Multivac.[76] At this juncture, Multivac sees the children as separate individuals and will be able to predict all matters about their lives, including potential criminal wrongdoing.[77] Mike Manners goes to the ceremony and submits his information.[78] When he arrives home, he learns that his father, Joseph Manners, is under house arrest for a predicted intentional murder.[79] Mike questions his father about whether he is planning a murder, expressing faith that Multivac doesn’t make mistakes. Joseph tearfully tells his son that he isn’t planning anything.[80] Joseph’s wife (and Mike’s mother) recounts a story about her friend’s father, who worked at a bank and received a communication from Multivac and government officials instructing him not to steal any money.[81] He hadn’t stolen any money, but he had thought about it.[82]
The government officials notice that the chance of Joseph committing the murder has gone up, and thus arrest him.[83] They do not tell him why they’re arresting him, and he protests.[84] Terrified by the situation, Joseph’s younger son Ben goes to Multivac for help—he takes advantage of the ability of citizens to ask Multivac for advice.[85]
Despite Joseph’s arrest, the chance of him completing the murder keeps rising.[86] Then the government officials realize that Ben, as a minor child of Joseph, is not understood by Multivac to be a separate person—and thus his conduct is charged to Joseph.[87] At this realization, the chance of Joseph’s commission immediately drops.[88]
Meanwhile, Ben asks Multivac how he can help his father and receives a long list of instructions.[89] He begins carrying them out.[90] Simultaneously, the government officials are alerted that Multivac itself is the intended victim of the murder.[91] Right as Ben is about to pull a lever, he is carried away by two agents.[92]
It is revealed that Multivac orchestrated the entire plot.[93] It carefully selected the Manners family and planned each step that would, it hoped, culminate in its electric death.[94] Gulliman and the government officials surmise that Multivac had gained sentience and could not cope with the troubles of mankind.[95] They ask Multivac what it itself wants more than anything else. It responds, ominously, “I wish to die.”[96]
In addition to affirming many of the insights of The Minority Report, this story conveys two further salient points:
First, the system’s success in preventing some crimes will lead to an expansion of its crime-prevention duties. Here, Multivac starts with major crimes, and its success leads to expanding its task list to prevent minor crimes. Beyond that, the plans are for Multivac to extend to disease prevention—which we know from our own experience will require exacting control of human behavior.
And second, the system itself, and knowledge about the system, will have an impact on human behavior, including with respect to criminal wrongdoing. This has two components: people know that the system will detect their wrongdoing, so they desist; and the system itself may encourage wrongdoing through its methods of guiding behavior.
C. A Pessimistic Vision: “The Minority Report View”
With all that in mind, there seems to have arisen a pessimistic vision of the use of AI- and big-data-based prediction in criminal justice. Given the rhetorical importance of The Minority Report in shaping this view, and the fact that it is often referenced, I refer to this as the Minority Report View (MRV). MRV is not one comprehensive view so much as a family of views that all express deep skepticism, along various dimensions, about using technology in decision making, especially decision making requiring prediction, in criminal law and procedure. As suggested above, I conceive of MRV as comprising the following views:
- Use of CrimAI in criminal justice will result in broad invasions of everyone’s privacy, as well as suspensions of rights and freedoms.
- Relatedly, the balance between preventing criminal wrongdoing and not subjecting people to wrongful restriction will be heavily skewed in favor of the former interest in preventing criminal wrongdoing.
- Use of CrimAI will be continually expansive, reaching further and further aspects of human life.
- Measures to restrict criminal wrongdoing stemming from CrimAI will inevitably approximate criminal punishment.
- Nonpenal measures that might prevent criminal wrongdoing will not be utilized if they expose us to any heightened risk of criminal wrongdoing.
- Even if there are substantial reductions in criminal wrongdoing, the fear of recurrence will maintain the system, with no relenting on privacy invasions and suspensions of rights and freedoms. It’s a one-way ratchet.
- Systems using CrimAI will largely dictate our behavior, and in the worst cases, they may even command behavior for which we are then sanctioned.
This is not a taxonomic list of all the complaints one might have about a future of CrimAI in the criminal justice system. Rather, it is meant to be an enumeration of claims that capture the core of the skepticism and fears about CrimAI.
The pessimistic MRV has become an integral part of the discourse on AI- and data-based criminal prediction, and indeed such pessimism has seemingly accompanied every kind of technological advance.[97] What follows are examples from our cultural discourse about AI and data-based methods in criminal law, embodying many of the concerns of MRV.
In her highly influential book Weapons of Math Destruction, Cathy O’Neil discusses the expanded use of algorithms in the criminal system.[98] In so doing, she marks The Minority Report as the dystopian vision and cautions against the many ways in which our system is duplicating these problems.[99] In particular, O’Neil observes that the systems in deployment, even at this early stage, are remarkably intrusive—they use vast amounts of data to make particularized determinations about specific individuals even prior to any kind of unlawful commission;[100] they are expansive, in that they are not limited to serious crimes but reach “nuisance”-level crimes;[101] the systems do not meaningfully avoid penal sanctions, but instead aggrandize the same kind of penal system we have;[102] and finally, the system is self-reinforcing through feedback loops, which allows bias and prejudice to influence outcomes.[103] O’Neil is a self-described pessimist about big data and AI, and with respect to the criminal law, it’s clear that her fear of the dystopia predominates.[104] Her case is compelling, but principally because our collective vision of CrimAI is just an outgrowth of our current criminal model.
Consider also the 2004 Ninth Circuit case, United States v. Kincade.[105] That case concerned “whether the Fourth Amendment permits compulsory DNA profiling of certain conditionally-released federal offenders in the absence of individualized suspicion that they have committed additional crimes,” and specifically the federal DNA Analysis Backlog Elimination Act of 2000 (DNA Act), which required those convicted of federal crimes who were incarcerated, or on parole, probation, or supervised release, to provide DNA samples.[106] In the case itself, a federally convicted individual was found to have violated his supervised release conditions by refusing to provide a required DNA sample, and was thus convicted and sentenced to four months’ further incarceration.[107] He appealed, and a panel of the Ninth Circuit reversed his conviction.[108] On rehearing en banc, the en banc court, in an opinion by Judge Diarmuid O’Scannlain, upheld the conviction, holding that the DNA Act was reasonable and did not violate the Fourth Amendment.[109]
In a blistering dissent, Judge Reinhardt—joined by Judges Pregerson, Kozinski, and Wardlaw—observed that the DNA Act and its collection of DNA samples have the potential for large expansion, giving us “reason to fear that the nightmarish worlds depicted in films such as Minority Report and Gattaca will become realities.”[110] The dissent worried that “there begins to emerge a society quite unlike any we have seen—a society in which government may intrude into the secret regions of man’s life at will.”[111] Even in this early instance of the rise of collection of such accurate identifying data, we see members of the judiciary keenly concerned about the potential “nightmar[es]” of what I have termed the Data State.[112] Indeed, even the majority, which found the collection reasonable, was eager to dispel the concerns of the Reinhardt dissent, stressing that there were adequate safeguards and limitations on the DNA Act.[113]
As another illustration, consider Matt Stroud’s influential article about the Chicago Police Department’s use of a “heat list”—a list of individuals who are predicted likely to engage in criminal wrongdoing.[114] Stroud begins by recounting the story of Robert McDaniel, a 22-year-old high school dropout from an area of the city known for violence. McDaniel himself, however, had committed no crime and had no prior violent criminal history or recent police interaction.[115] Yet he was visited by police officers because he was on the heat list.[116] Stroud writes: “Yet, there stood the female police commander at his front door with a stern message: if you commit any crimes, there will be major consequences. We’re watching you.”[117] The takeaway point is clear: the “heat list” program is being employed as a dragnet that has the principal goal of hyper-charging the criminal system as we know it—seeking criminal wrongdoers and punishing them.[118]
More recently, a report by Pranshu Verma considered algorithms implemented in Chicago that predict crime with a high level of accuracy, working principally by predicting the locations and times of crimes within the city.[119] Verma’s story focuses on the bias and prejudice that have infected algorithms before, worrying about the “feedback loop that is racially and socioeconomically biased,” which leads law enforcement to focus on particular areas for investigating criminal activity.[120] The critical point is how the use of AI and big data aggrandizes the criminal system as it is, multiplying its extant pathologies.[121]
In a similar vein, Derek Thompson considered risk-assessment tools like COMPAS, used in sentencing those who have been convicted of crimes.[122] He describes the sentencing of Paul Zilly, who had been convicted in Wisconsin state court of stealing a lawnmower and had reached a plea deal.[123] The judge, consulting COMPAS, determined that Zilly was at high risk of reoffending, and thus rejected the plea deal and imposed on Zilly a sentence twice as long.[124] Thompson warns that these kinds of algorithmic tools are proliferating and provide no explanation or justification for their determinations.[125] “The result is something Kafkaesque: a jurisprudential system that doesn’t have to explain itself.”[126] And that is especially worrisome when it gloms onto the current objectives of our criminal system—with its focus on eliminating crime, even at great expense, including the disregard for the humanity of so-called criminals. Indeed, in their comprehensive article “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice,” Brandon Garrett and Cynthia Rudin elaborate on this point that opaque, black-box AI methods in criminal justice have many deleterious effects.[127] They too stress many of the themes of the MRV.[128]
Now, an important real-world example is the European Union’s AI Act, which took effect in the middle of 2024.[129] Key among the AI Act’s aims is restricting the use of AI for law enforcement purposes, including limiting predictive policing.[130] Instrumental in driving these limitations was a consortium of many civil society organizations.[131] Their letter expresses many of the core ideas of MRV, worrying chiefly about encroachments on human rights and democratic governance.[132]
Several scholars have expressed deep concerns with CrimAI, emphasizing many of the concerns of the MRV.[133] In particular, they stress that CrimAI will not be accurate (and will be biased); that it will be pervasive; that it will invade our privacy; that it will cause the current penal system to extend and aggrandize itself; and that it will direct our behavior in undesirable, tyrannical ways.[134] Finally, they stress that unless we stop this process now, there will be no turning back.[135] To this end, they accept the paradigm of our current penal and legal system, attempting to find current legal—and specifically constitutional—barriers that can serve as a blockade against the MRV.[136] The predominant attitude is that the integration of this technology into our criminal justice system is dangerous and we must halt it.[137]
I am deeply sympathetic to many of these concerns. But, as discussed below, I believe this strategy of addressing CrimAI is mistaken, or at least incomplete. What is lacking is a competing vision for what CrimAI can look like for our world, to benefit us without oppressing us.
III. Theories of Criminal Justice and the Data State
As a rubric to understand how we should implement CrimAI, we must understand the theories underlying the criminal system. There are two main theories of criminal punishment: retributivism and consequentialism.[138] Here I explain how these theories bear on CrimAI.
A. Brief Sketches of the Theories of Criminal Justice
1. Retributivism
Retributivism is, at its essence, the view that criminal punishment is justified by the proposition that the individual deserves such punishment.[139] Retributivism is generally understood to have at least two claims: (1) that it is an intrinsic good that an individual be punished if they deserve to suffer harm; and (2) that it is morally impermissible to punish a person beyond their desert.[140] The core intuition is that people who do wrong deserve to suffer. When an individual commits certain kinds of acts, and thus deserves to suffer, that provides sufficient reason for us to punish that individual.[141]
The second claim holds that desert is necessary for punishment—that it is inappropriate to punish an individual unless they deserve it.[142] A standard example demonstrating this claim: We would think it wrong to frame and punish an innocent person for a heinous crime, in order to quell the outrage in the community, even if it were to result in net good consequences.[143]
A critical question arises then: What is the basis upon which one deserves to be punished? Retributivists have generally settled on the following desert bases: (1) character or (2) mental culpability or deliberative conduct.[144]
2. Consequentialism
Consequentialist theories of punishment seek to justify the imposition of punishment because it results in net good consequences.[145] With respect to punishment, the analysis of whether an action is justified is usually framed in utilitarian terms.[146] Utilitarianism, generally, is the view that action should seek to maximize human happiness and minimize human suffering. Applied to punishment, the utilitarian theory queries whether it results in a net increase in human happiness.[147] Because punishment involves the intentional infliction of suffering on the offender, on these terms, punishment is by itself bad. Thus, in order for punishment to be justified, it must generate good consequences—i.e., happiness—that outweigh the human suffering it imposes.[148]
Note well that utilitarianism is not the only consequentialist theory. Consequentialists can take a broader, more pluralistic view of what constitutes the good to be maximized. For example, one’s consequentialism could seek to maximize—in addition to happiness—interests of autonomy, achievement, and fairness.[149] Then the question becomes whether an action, including punishment, promotes a net surplus of these goods when weighed against the harms of imposing punishment.[150]
Nevertheless, “consequentialist theories of punishment generally agree that the objective of punishment is to minimize future crime and to do so with as little punishment (and consequent human suffering) as possible.”[151] “In this respect, there are four main pathways that punishment is supposed to minimize future criminal activity:” (1) rehabilitation; (2) incapacitation; (3) specific deterrence; and (4) general deterrence.[152]
3. Our System
Our criminal justice system is generally understood to be a side-constrained consequentialism, with a negative retributivist constraint.[153] That is, the goals of our criminal justice system, especially with respect to punishment, are to advance consequentialist goals—like reducing the harm of criminal wrongs with as little punishment as possible, through deterrence, incapacitation, and rehabilitation. But there is a constraint that we cannot pursue these goals in ways that would violate retributivist limits on punishment. Namely, we cannot punish the innocent and, more generally, we cannot punish people beyond what is proportional to their desert.[154] Notice that, under this conception, we do not punish people in order to give them what they deserve; we punish only to advance other consequentialist goals. This is what gives rise to the negative retributivist label.
B. What Is Success for Criminal Prediction?
A key question then is how a rubric of AI- and data-based prediction fits into a penal system governed by these underlying theories of punishment. As a preliminary matter, we define AI- and big-data-based methods. AI most generally refers to artificial systems that perform activities that are generally performed by humans or that require rationality.[155] That’s a capacious term that, in theory, incorporates anything that would perform human or rational tasks. Indeed, Alan Turing’s “Turing Test” for determining whether a machine can think—whether AI has reached the level of human thinking—is whether it is indistinguishable from a human (by a human).[156] More concretely, however, AI currently involves specific computational methods, including sophisticated algorithms and machine learning.[157]
Big data can be characterized “as large datasets that are produced in a digital form and can be analysed through computational tools.”[158] As human life becomes so integrated with technologies that utilize and produce large volumes of data, at high velocity, these sets of data can be mined, processed, and analyzed.[159] Big-data methods are ones that capitalize on such data—and that is what very sophisticated algorithms and machine learning do.
CrimAI, then, is the term we will use to refer to these sorts of methods in service of predicting criminal wrongdoing. To be clear, for purposes of this Article, very little turns on terminological questions about whether something qualifies as CrimAI. The ultimate question I consider in this Article is what to do about accurate, practicable methods that predict criminality. Those might fall under the umbrella of CrimAI, or they might be mutant precogs. The reason for focusing on CrimAI is that it is the most likely pathway to sufficiently accurate, practicable criminal prediction.
With that in mind, we consider each of the theories in turn.
1. The Retributivist Analysis of Criminal Prediction
A retributivist theory says that we punish criminal actors because they deserve to be punished. The notion of such desert is based on the desert object, whether it be the actor’s bad character or their bad conduct.
In prior writing, I have argued that bad character is an illegitimate object for retributive desert because the term “bad character” refers to dispositions that are not under one’s control—and the result of this is that we are punishing individuals for actions for which they are not blameworthy.[160] Thus, I focus here instead on the desert object of an offender’s bad conduct.
That then leaves bad conduct as the only legitimate desert object for a retributivist theory. At first glance, it would be difficult to target individuals on the basis of their bad conduct with criminal prediction. That is because, quite simply, they have not engaged in the wrongful conduct yet. And because they have not engaged in the conduct yet, they cannot deserve punishment in light of their committing bad conduct.
That conclusion may be too quick. True enough, the prediction, and consequent action based on that prediction, should optimally stop the actor from committing the criminal wrong. But that doesn’t mean that the actor has engaged in no conduct that is morally blameworthy. It may be that the actor was engaged in inchoate conduct that was morally blameworthy. For example, if the actor was plotting to commit the wrongful conduct, and predictive measures capitalized on information about such plotting, we could find the plotting itself morally blameworthy and deserving of punishment.
The key points concern the timing of the prediction and the information on which it is based. If either is sufficiently early in the chain of events that would lead to the criminal wrongdoing, the prediction may not support an inference that the actor was engaged in anything criminal. For example, suppose an individual bumps into another person walking on the street. The individual is sent into an involuntary rage and reflexively imagines killing that person—but takes no such action. Suppose face scanners recognize that the individual is in such a rage and predict that he may engage in violence. Consequently, the authorities sanction him in some way—perhaps sending him to detention for a week. This intervention occurs sufficiently early and is based on data that is likely beyond the individual’s rational control. Intuitively, we would think that the individual does not deserve any sanction for this behavior. Thus, such interventions are unsound from a retributivist perspective.
From this, we can derive an important tentative conclusion on a retributivist frame: In order to use AI- and data-based prediction for purposes of sanctioning an actor in ways that are retributively acceptable, such prediction must come sufficiently late in the course of, and be based on sufficient information about, criminally wrongful behavior. Though such behavior may be inchoate and short of the completed crime, it must be sufficiently developed to constitute wrongful behavior deserving of sanction.
But we should note that a fulsome understanding of the penal system requires more than this narrow understanding of punishment. Retributivists think it is important to give those deserving of punishment their just deserts.[161] But the retributivist does not wish that there be many people deserving of punishment. That is, the retributivist aims to maximize the proportion of people who receive their just deserts among those who deserve punishment. The retributivist does not aim to maximize the number of people who receive deserved punishment.[162] On any reasonable theory of retributivism, it would be better that the person not engage in such wrongful conduct in the first place. That is, it is better that there be fewer instances of wrongful conduct that require retributive punishment. Thus, if it is possible to ensure that people do not engage in wrongful conduct, that would still be good from a retributivist perspective.
2. The Consequentialist Calculus of Criminal Prediction
The consequentialist case for prediction is more straightforward. As discussed, there are two main goals for the consequentialist in the criminal justice context: First, we have a goal to reduce the amount of criminal wrongdoing, because criminal wrongdoing imposes harms on others. Second, because punishment involves the imposition of suffering on humans, we have the goal to use as little punishment as possible in achieving our goals of mitigating criminal wrongdoing. This is a balancing act, where decision makers must consider how much net benefit is obtained from punishment, considering the positives of mitigating criminal wrongdoing against the negative of imposing suffering through punishment.
Consider a Standard Scenario of the operation of the criminal system:
Offender O commits a criminal wrong that imposes some harm on others and society. O is apprehended and punished. That punishment imposes harms on O. But that punishment generates good consequences in the following forms: it incapacitates O from committing other criminal wrongs, it deters O from committing other criminal wrongs, it deters others from committing criminal wrongs, and it may rehabilitate O such that O does not commit other criminal wrongs. If the penal system is working properly, the act of punishment should generate good consequences (in the form of incapacitation, deterrence, and rehabilitation) that outweigh at least the harm of the punishment’s imposition on O.
Principally, criminal prediction has the potential to generate benefits in substantially decreasing, or eliminating, the number of criminal wrongs that occur. That basic idea is encapsulated in what we may call the Criminal Prediction Scenario: If criminal prediction is accurate, then we may be able to stop Offender O from committing a criminal wrong that imposes harm on others and society. And supposing nothing else changes in the hypothetical—O is punished, that punishment harms O, but generates deterrence, incapacitation, and rehabilitation. In this scenario, we essentially reduce or eliminate only the harms of the criminal wrong, while keeping everything else the same—which results in a net benefit. Thus, on this account, Criminal Prediction Scenario is consequentially better than the Standard Scenario.
An initial response is that in many cases O will have still committed a criminal wrong. That is, even if O does not execute the murder, say, O will have engaged in activities, like planning and preparation, that make it possible to predict that O was going to engage in a murder. And planning and preparing to murder is a criminal wrong. True enough, but the inchoate crime of planning and preparing to murder is not as bad, not as harmful as the completed crime of murder. Thus, still comparing the hypotheticals, the Criminal Prediction Scenario remains on net better.
More pressingly, the success in advancing these consequentialist benefits is highly dependent on the success of CrimAI. This is intuitively clear, but the details are important. If criminal prediction is inaccurate, then potential harms abound. If there are false negatives, then we will not stop the criminal wrongs and their associated harms. And if there are false positives, then, on the Criminal Prediction Scenario, we will be punishing individuals—and thus intentionally subjecting them to harm—when they would not have committed criminal wrongs. Indeed, the risk of false positives may be higher because, in order for criminal prediction to practicably stop criminal wrongs, it will have to operate at a time earlier than commission—and the less proximate to the commission the prediction is, the more likely it is that the targeted individual would not actually have committed that crime.[163] In such cases, it is of course unlikely that we will obtain any significant benefit in terms of specific deterrence or incapacitation.[164]
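The false-positive worry compounds when the predicted conduct is rare. As a stylized illustration (the figures here are purely hypothetical, not drawn from any deployed system), suppose one person in 10,000 would commit a given violent crime, and suppose a predictor correctly flags 99% of true would-be offenders while wrongly flagging only 1% of everyone else. Screening 1,000,000 people then yields

\[\text{true positives} \approx 0.99 \times 100 = 99, \qquad \text{false positives} \approx 0.01 \times 999{,}900 \approx 9{,}999,\]

roughly one hundred false positives for every true positive. Even a seemingly accurate predictor, applied to a rare crime, would mostly flag people who would never have offended.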
This just demonstrates an obvious, but important, point: For criminal prediction to provide a benefit, it must be substantially accurate—or else it risks significant harms that would negate or outweigh any potential benefits. That does not mean that criminal prediction has to be perfectly accurate to advance consequentialist goals. The calculus is more complex. If criminal prediction is highly accurate, even if it makes mistakes, then it may still be beneficial from a consequentialist perspective. The discourse on criminal prediction often demands perfection—but that is a misunderstanding.
Indeed, we intuitively understand this because our own criminal justice system can be beneficial from a consequentialist perspective, even if it makes mistakes. That is, for example, even if there are false convictions, whereby an innocent person is punished, so long as this is relatively uncommon, the operation of the criminal justice system as a whole can be a net positive in consequentialist benefits.[165]
Consequently, there will be gains from each accurate prediction in the form of the mitigated harm of the avoided criminal wrong. At the same time, there will be harms from inaccurate predictions. The basic idea then is to compare these benefits and harms. But a fulsome comparison requires one further layer: We must compare how criminal prediction fares against current practice.[166] That is, our current practice—embodied in the Standard Scenario—makes mistakes, and so if the Criminal Prediction Scenario improves on the Standard Scenario, there is a strong argument for its adoption on the retributivist and consequentialist justifications.[167]
There is one additional wrinkle, however: scale. We must inquire as to the number of cases for which criminal prediction is deployed, compared to current practice. If the number of applicable cases is comparable, then if criminal prediction has a lower error rate than current practice, it will likely provide a net benefit. But that comparability is a strong assumption, and indeed one that the likely deployment of CrimAI challenges. That is, one of the main advantages of AI is that it can be utilized at scale. With respect to CrimAI, then, we would expect that it would be deployed constantly and continuously—indeed, in accord with the MRV—and thus cover a much larger number of cases than current practice does. But if that is right, then simply comparing error rates will not be enough.
To see this, suppose that the Criminal Prediction Scenario has a smaller error rate than the Standard Scenario, but just barely so. If the Criminal Prediction Scenario applies to many more cases—say ten times as many cases (and that is a low mark)—then there will be nearly ten times as many cases with adverse results. Aggregating the harms from those cases could result in the Criminal Prediction Scenario being net deleterious when compared to the Standard Scenario. It will depend on the added benefits of applying criminal prediction to these additional cases where mistakes do not occur.
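To make the arithmetic concrete, suppose (again with purely hypothetical figures) that current practice reaches 1,000 cases with a 5% error rate, while criminal prediction reaches 10,000 cases with a just barely lower 4.9% error rate:

\[1{,}000 \times 0.05 = 50 \text{ erroneous cases}, \qquad 10{,}000 \times 0.049 = 490 \text{ erroneous cases}.\]

The error rate falls, yet the absolute number of erroneous cases, each carrying its own harms, rises nearly tenfold. Whether the tradeoff is worthwhile depends on the benefits generated across the roughly 9,000 additional cases that criminal prediction reaches.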
Taking stock, we have the following Condition for Criminal Prediction Success, stating what is required for criminal prediction to be successful in the sense of improving on current practice: Criminal prediction must be sufficiently accurate that the total benefits of avoiding the harms of criminal wrongs through criminal prediction, minus the harms caused by criminal prediction’s aggregate error—including the harms of false positives and false negatives—exceed the corresponding total under current practice.
For those who are formally inclined, consider the following representation:
\[\sum_{i = 1}^{n}\left[\left(b_{i}^{CP} - b_{i}^{SS}\right) - \left(h_{i}^{CP} - h_{i}^{SS}\right)\right] > 0\]

Where $b_{i}^{CP}$ and $b_{i}^{SS}$ denote the benefit from avoiding harms of criminal wrongs in case $i$ in criminal prediction and the standard scenario (i.e., current practice), respectively; $h_{i}^{CP}$ and $h_{i}^{SS}$ represent the harms (principally, from both false positives and false negatives) in case $i$ in each scenario; and $i$ indexes through all $n$ cases to which criminal prediction is applied.[168]
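An equivalent way to read the condition: writing $B^{CP} = \sum_{i} b_{i}^{CP}$ for the aggregate benefits, and likewise for $B^{SS}$, $H^{CP}$, and $H^{SS}$, the inequality rearranges to

\[\left(B^{CP} - B^{SS}\right) > \left(H^{CP} - H^{SS}\right),\]

that is, the marginal benefit of criminal prediction over current practice must exceed its marginal harm, aggregated across all $n$ cases. The scale problem described above enters through $H^{CP}$: because $n$ under CrimAI may far exceed the caseload of current practice, even a low per-case error rate can swell the aggregate harm term.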
What we should observe and emphasize here is that this is a rigorous condition for the success of criminal prediction. It requires that criminal prediction meet a high bar, especially if criminal prediction is simply deployed within the broad framework of our current criminal and penal system. In that kind of rubric, for wholesale adoption of criminal prediction to succeed, it must generally do better in avoiding criminal wrongs while ensuring that the burdens on us from such widespread deployment do not overwhelm any such benefits. That will require a very high degree of accuracy—not just being a nose ahead of current law enforcement methods.
3. Moves Toward Success
Examining this condition, then, the ways to make criminal prediction more successful are to (1) increase the benefits in avoiding criminal harms; and (2) decrease the harms associated with its operation. To accomplish (1), we will have to apply criminal prediction broadly. But because no system is perfect, that will also increase certain harms—principally through false positives, but also through greater invasions of privacy and other dignitary harms. That leaves us with the conundrum we began with: Is stopping crime worth these heavy costs?
Now, thus far, we have assumed that individuals are subjected to the same sorts of consequences in both the Standard Scenario and the Criminal Prediction Scenario. In that comparison, the Criminal Prediction Scenario is superior insofar as the Condition for Criminal Prediction Success obtains—namely, criminal prediction can achieve such a high degree of accuracy that, even when it is applied to a greater magnitude of cases, the harm aggregated from errors does not outweigh the benefits—all relative to current practice.
But suppose that we use means that do not inflict (as much) punishment on the putative criminal wrongdoer—call this the Criminal Prediction Sans Punishment Scenario. The potential benefits are much greater.
How do we do this? First, as the title suggests, we can look to avoid or mitigate the kinds of punishment levied on the putative offender—by using some intervention that creates less harm. Suppose, for example, instead of incarcerating an individual after determining that they would probabilistically commit a crime, we simply stopped them from committing the crime and released them. This would avoid the criminal wrong and largely avoid the intentionally imposed harm of criminal punishment. I say “largely,” but of course not entirely: stopping an individual from committing a crime does not occur through magic. Depending on the circumstances, and the disposition of the putative wrongdoer, it may require the use or threat of force.[169] And that itself has some harm associated with it—it may cause the targeted individual bodily or mental pain. But we can seek to reduce that as much as possible. And given the harm associated with punishment, we could obtain a great reduction in the harms associated with criminal prediction. The principal benefit of mitigating punishment is to lessen the harms in the formula; it not only lessens the punishment—the intentional infliction of suffering on individuals—in properly predicted cases, it also drastically reduces the costs of false positives.
Now, of course, this would not advance the consequentialist benefits of punishment, such as deterrence, incapacitation, and rehabilitation.[170] But over time, if the system works extraordinarily well—and we know it must in order to have any chance of success—then the Criminal Prediction Sans Punishment Scenario may be able to achieve deterrence, incapacitation, and rehabilitation through another pathway: Because it would be fruitless to try to commit crimes, you would have little reason to do so; because the system stops you from committing criminal wrongs, it does in fact incapacitate you; and because it stops you from committing criminal wrongs, it does in fact rehabilitate you in terms of your ability to commit criminal wrongs (even if it does not change key aspects of you that would make you want to commit criminal wrongs).
Second, we can mitigate and eliminate punishment by engaging in upstream changes in the system. The basic idea, which we will give greater body to in Part VI, is that we can seek to reduce the breadth of the criminal system that imposes punishment in many spheres of life where punishment is superfluous to accomplishing our ultimate goals.
As a core example, we use the criminal system to solve coordination problems—and to enforce the obligations of coordination.[171] Above, we discussed the example of jaywalking—where criminal enforcement is used to solve the coordination problem of allowing people to cross streets efficiently.[172] If criminal prediction is sufficiently accurate to be deployed for criminal enforcement of coordination obligations, then we should use the underlying predictive capabilities to enable coordination without punishment.
In a similar vein, we can seek to reduce the number of interactions between law enforcement and civilians that escalate into the use of force, the use of sanction, and consequently the use of punishment. This is especially helpful where those interactions themselves are distant from harm. So, for example, much of the criminal code—including traffic offenses—is used pretextually to allow law enforcement opportunities for investigation of more serious crimes.[173] The immediate target offenses do not themselves threaten serious harm. But if criminal prediction has the potential for great accuracy, we should deploy such prediction to obviate the more physically invasive methods—which probabilistically lead to more physical force and punishment.
This can also expand the benefits of criminal prediction, but in a subtly distinct way. Here, we seek to reduce the intrusion of the criminal system with prediction, where intrusion would not avoid much harm—because the putative criminal wrongs are not that harmful. Consequently, we can eliminate the harms associated with those intrusions—whether intrinsic to law enforcement interaction, through false positives, through trespasses on privacy and dignitary interests, or other such harms—when those intrusions do not yield much benefit.
IV. Will AI Be Successful?
As we saw, for CrimAI to realize benefits, it must be accurate, and practicably so. Not only must it be able to make correct predictions about the commission of crimes, and their perpetrators, it must be able to do so with enough time for law enforcement to respond so as to prevent the commission of the crime. Thus, the critical question arises whether CrimAI can actually reach these levels of accuracy and practical application. Presently, CrimAI, as deployed, does not meet those benchmarks.[174] The question then is whether CrimAI can—at some point in time—meet those benchmarks. AI- and data-based methods broadly have been implemented in various parts of the criminal justice system. So, one useful starting point is a review and assessment of those methods.
A. Current Methods and Their Promise
As a matter of current practice, AI and big data methods are employed in various areas of the criminal justice system, including in bail determinations, risk assessments in sentencing, predictive policing, and forensics and evidence gathering.[175]
In considering these deployments of AI and big data, I will not seek here to be comprehensive. Given the proliferation of these techniques, that is a heavy lift, but also unnecessary for our purposes. Many of those deployments have been unsuccessful and riddled with problems.[176] But walking the graveyard of AI and big-data methods is not so instructive as to whether there is, in a distant future, a good chance of success. I will instead focus on examples where there have been successes and interrogate whether those show a path forward, including the sources of skepticism on enduring progress. Broadly, the successes of OpenAI’s o3 model have been astonishing and rapid. Most notably, the o3 model was able to solve over 25% of the problems in the FrontierMath benchmark—exceptionally challenging, research-level mathematics problems—where prior AI models had been stuck around 2%.[177] This suggests that trajectories of capability may be on a steep rise. But specifically with respect to criminal implementations, first, consider predictive policing techniques using CrimAI:
One powerful example is a 2015 randomized trial in Los Angeles comparing a predictive policing algorithm, using big-data methodologies, to human crime forecasters.[178] The predictive policing algorithm performed nearly twice as well in predicting criminal activity. The comparison pitted the algorithm against traditional crime forecasting augmented with hot-spot mapping and other uses of big data.[179] Systems integrating all of those methods together may prove more accurate still.
Similarly, one study used a deep neural network, trained on “data collected from various online databases of crime statistics, demographic and meteorological data, and images in Chicago.”[180] The deep neural network predicted criminal activity with a high level of accuracy—over 80%.[181]
Another such algorithm, also focused on Chicago, “forecast[ed] crime by learning patterns in time and geographic locations from public data on violent and property crimes.”[182] The study concluded that “[t]he model can predict future crimes one week in advance with about 90% accuracy.”[183]
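To give a concrete sense of the kind of modeling these studies describe, here is a minimal, purely illustrative sketch of grid-based spatiotemporal forecasting. It is not the published systems’ method: the data is synthetic, the features are just a few weeks of lagged counts per grid cell, and the model is a plain logistic regression:

```python
# Illustrative sketch of grid-based spatiotemporal crime forecasting.
# Synthetic data; the published systems use far richer features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic history: weekly incident counts on a 20x20 city grid.
weeks, cells = 156, 400
propensity = rng.gamma(shape=1.0, scale=1.0, size=cells)   # per-cell risk
counts = rng.poisson(lam=np.tile(propensity, (weeks, 1)))  # (weeks, cells)

# Features for cell c at week t: the prior four weeks of counts there.
# Label: whether any incident occurs in that cell during week t.
X, y = [], []
for t in range(4, weeks):
    for c in range(cells):
        X.append(counts[t - 4:t, c])
        y.append(int(counts[t, c] > 0))
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"one-week-ahead accuracy: "
      f"{accuracy_score(y_te, model.predict(X_te)):.2f}")
```

Even on a toy like this, reported “accuracy” is sensitive to how the task is framed (cell size, horizon, and the binary threshold), which is one reason headline figures from the studies reward scrutiny.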
These algorithms show robust progress in improving upon current law enforcement practices and methods. Of course, we earlier showed that CrimAI must be much more accurate than current practice, but these show promise.
Second, consider deployments of AI and big data in other forms of risk assessment:
One study, focused on New York, analyzed stop-and-frisks with the aim of finding illegal firearms. The study developed an algorithm, trained on data from stops by law enforcement officers, and improved greatly upon current law enforcement practice, finding that only 6% of the stops actually conducted would have been needed to achieve nearly the same results in discovering illegal firearms.[184] (A sketch of the ranking logic behind this result follows these examples.)
Another study created an algorithm to assess bail determinations in New York—that is, the risk of the defendant committing further crimes if not detained pending trial. Using machine learning and data sets of results for prior bail determinations, the algorithm was able to greatly improve upon judges’ determinations.[185] Specifically, it was able to more accurately identify high-risk individuals, which could reduce crime rates among the defendants by nearly 25%.[186]
And another study, in the context of parole determinations, similarly could vastly improve upon judges’ determinations. Using a machine learning algorithm, the study suggests that the New York Parole Board has been vastly overdetaining, and that the Board could release 100% more individuals without any increase in crime.[187] Apart from individuals’ liberty interests, these are substantial savings in terms of resources that could be better deployed in other areas to prevent criminal wrongdoing.[188]
These latter examples demonstrate the strong potential of CrimAI, especially in light of ways to improve the CrimAI cost-benefit calculus by reducing punishment and the reach of the criminal system. These algorithms and neural networks, trained on a fraction of the data possible, are able to outperform traditional law enforcement methods—and, indeed, methods that are enhanced with compilations of big data. But CrimAI need not necessarily pick and choose between these various methods—it could potentially integrate them in ways to capitalize on their strengths and mitigate their weaknesses.
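Returning to the stop-and-frisk study, its result reflects a simple ranking logic: score each candidate stop by its predicted probability of recovering a weapon, and conduct only the top-scoring slice. The following is a minimal sketch on synthetic data; the study’s actual features and model are richer, and the exact share recoverable from the top 6% depends on how concentrated the predictive signal is:

```python
# Sketch of the ranking logic behind the stop-and-frisk result: rank
# candidate stops by predicted hit probability, conduct only the top
# slice. Synthetic data; not the study's actual model or features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 50_000
X = rng.normal(size=(n, 5))                    # stand-in stop features
weights = np.array([3.0, 2.0, 1.0, 0.0, 0.0])  # signal in 3 of 5 features
p_hit = 1 / (1 + np.exp(-(X @ weights - 8.0))) # rare, concentrated hits
hits = rng.binomial(1, p_hit)                  # weapon recovered?

# Train on half the stops; evaluate the policy on the other half.
model = LogisticRegression(max_iter=1000).fit(X[:25_000], hits[:25_000])
scores = model.predict_proba(X[25_000:])[:, 1]
held = hits[25_000:]

k = int(0.06 * len(scores))                    # conduct only the top 6%
top = np.argsort(scores)[::-1][:k]
print(f"recoveries from all {len(held):,} stops: {held.sum()}")
print(f"recoveries from top {k:,} stops:        {held[top].sum()}")
```

In the study’s real data, the signal was concentrated enough that roughly 6% of stops recovered nearly as many weapons as all of them.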
The above examples address the potential for CrimAI with respect to serious and violent crimes. But note that current technology has the potential to address issues like jaywalking right now. With sensors, cameras, GPS tracking, smart cars, and smartphones, we could already implement regimes that would alert street crossers and drivers to incoming dangers and, on the flip side, signal when pedestrians and drivers can proceed safely. It would take an influx of resources, of course, but all of that is exceedingly possible right now.[189] And if those resources can apply to other activities as well, including serious and violent crimes, then we could reasonably implement CrimAI.
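For flavor, here is a minimal sketch of the crossing-alert logic such a regime would need: given position and velocity estimates for a vehicle and a pedestrian (from GPS, cameras, or vehicle sensors), compute their closest approach and warn both parties if it falls within a danger window. The thresholds are illustrative assumptions:

```python
# Minimal sketch of crossing-alert logic: estimate whether a vehicle
# and a pedestrian will come dangerously close, assuming constant
# velocities. Thresholds are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class Track:
    x: float; y: float      # position (meters)
    vx: float; vy: float    # velocity (meters/second)

def closest_approach(a: Track, b: Track) -> tuple[float, float]:
    """Return (time, distance) of closest approach for two tracks."""
    rx, ry = b.x - a.x, b.y - a.y
    vx, vy = b.vx - a.vx, b.vy - a.vy
    speed2 = vx * vx + vy * vy
    t = 0.0 if speed2 == 0 else max(0.0, -(rx * vx + ry * vy) / speed2)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

def should_alert(car: Track, pedestrian: Track,
                 horizon_s: float = 6.0, danger_m: float = 2.0) -> bool:
    t, d = closest_approach(car, pedestrian)
    return t <= horizon_s and d <= danger_m

car = Track(x=-50, y=0, vx=12, vy=0)          # approaching at ~43 km/h
walker = Track(x=0, y=-6, vx=0, vy=1.5)       # stepping toward the street
print(should_alert(car, walker))              # True: warn both parties
```

Real deployments would need sensor fusion, uncertainty handling, and far richer motion models; the point is only that the core alerting computation is well within current technology.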
Consequently, with all this in mind, there is reason to believe that the panoply of AI methods, equipped with big data, could achieve sufficiently accurate crime prediction that could be practicably acted upon—that we could have successful CrimAI.
B. Headwinds and Reasons for Skepticism
Despite these examples of currently beneficial practices using AI and big data methods, and the promise they show for CrimAI, reasons for skepticism justifiably remain loud.[190] Critics maintain that CrimAI will likely be unable to accurately and practicably predict criminal wrongdoing such that there is a net benefit to society.[191] I think there are reasons to believe CrimAI can be beneficial, but I am not, and we should not be, dogmatic in that belief.
Even if the trajectory of AI and big data would suggest that CrimAI would at some point meet the requisite benchmarks for success, other forces can suddenly and dramatically alter those trajectories. After I throw a baseball, it may look like it is going to the moon, but gravity prevails.
Indeed, there are many conditions that may prevent CrimAI’s success: First, it may simply be intractable to predict intentional mental states of individuals on a mass scale in a timely enough fashion to prevent intentional crimes. Second, it may be that there are types of bias—including racial and socioeconomic bias—that infect the system, such that CrimAI cannot be sufficiently accurate.[192] Third, there may be resource constraints that make such systems impractical. That is, such CrimAI systems may require massive data collection—facilities to store vast amounts of data, as well as equipment to collect it.[193] That may be cost prohibitive, but it also may be that we simply do not have the raw materials to produce that volume of equipment to make CrimAI feasible. And these are just some of the hurdles to cross for effective CrimAI.
If the skeptics are right—that CrimAI will not be successful—then what? There are two branches to the decision tree. First, if CrimAI is not in fact successful, it may be that people will realize that and efficaciously oppose its implementation in the criminal system. If this is right, we need not worry about the ubiquitous deployment of CrimAI.
Second, it may be that CrimAI will not be successful, but people will not realize that once it is integrated into our normal systems. Notice that success—or the lack of it—is on a gradient. CrimAI can be utterly unsuccessful, making junk predictions. Or it can make probabilistic predictions that are valuable but simply fall short of our accuracy thresholds.
Here, I maintain that the Liberty-Enhancing View still has great value. That value comes not because it will necessarily result in better consequences, in terms of liberty enhancement and decreased suffering. Though not impossible, that is unlikely, given the hypothesis of CrimAI’s failure in predicting crimes. Instead, it is because if CrimAI is unsuccessful, crime will not be effectively prevented, and that failure will be visible. This is especially so if we pursue the Liberty-Enhancing View. I have set forth the core principles of the Liberty-Enhancing View above and detail them further below; in short, because the Liberty-Enhancing View emphasizes minimizing interventions, if CrimAI is not working, criminal wrongdoing will continue, and the public will take notice. That is, because it does not allow for mass incarceration and the expanse of the criminal system, the Liberty-Enhancing View will reveal to the public the failure of CrimAI. Put yet another way: the Liberty-Enhancing View issues a “put up or shut up” challenge to CrimAI. If CrimAI is truly that good, then it should be good enough to sustain the Liberty-Enhancing View. And if it is not, its failures will not be masked by a draconian penal system. Ultimately, the skeptic’s standing should never be doubted. CrimAI should be continually challenged on every relevant dimension, because, as noted above, CrimAI will not yield any benefits unless it is sufficiently accurate and practicable.
With that in mind then, there are three further, key issues about CrimAI’s implementation in the criminal system: its putative nontransparency, its bias, and its impact on privacy. Though we cannot be comprehensive on these issues, I address the basic contours on how they inform CrimAI’s implementation, and the consequent calculus of benefits and harms.
1. Nontransparency
A principal complaint about AI generally is that its determinations and predictions are most often nontransparent. Some maintain that AI can solve that problem itself, by using secondary algorithms to explain results derived from AI. This has driven the movement for so-called “explainable AI” or “xAI.”[194] In this vein, some scholars—chiefly, Brandon Garrett and Cynthia Rudin—contend we should limit CrimAI to so-called “glass box” processes that are transparent in their decision making.[195] Others contend the transparency problem may be intractable.[196] To wit, Boris Babic and Glenn Cohen contend that secondary explanatory algorithms—that aim to explain the results of artificial intelligence or machine learning algorithms—often fail to be action-guiding or are simply insincere.[197]
The transparency problem is no trifling matter. Nontransparent AI poses serious challenges to important democratic values and core features of individual rights.[198] We think it is an important part of democracy that we be able to engage in rational deliberation about our societal choices—generally, through electoral processes. But insofar as we use nontransparent AI, we frustrate the ability to engage in such deliberative democracy, precisely because the reasoning behind AI’s determinations is elusive.[199] And regarding individual rights, we generally contend that individuals have a right to challenge actions that adversely affect them—that they have a right to a hearing.[200] But such hearings depend on challenging the rationality of the adverse actions. If those decisions are predicated on nontransparent AI, again we frustrate the ability to have a meaningful hearing to challenge the action.
One response is to look at the relative difference in explainability when we compare AI to human decision makers. Here, we can question the explicability of human decisions. Human decisions often come with putative explanations, but those explanations often fall short. They may be disingenuous or simply erroneous—they may not actually explain how the decision maker came to the determination. Indeed, this was a main point hammered home by the Legal Realists.[201] But even when they do explain how those determinations came about, that might provide very little solace—especially in terms of democratic value or rights protection. One need only look at sentencing transcripts to understand how hollow judicial justifications can be. So, while AI may be explicitly nontransparent, human decisions may be as well, even if better hidden.
But even granting there is and will continue to be a substantial gap in transparency for CrimAI, nontransparency does not dispositively foreclose its use. For one, we do not always understand the justification for decision making that governs our society. For example, we defer to the Environmental Protection Agency to manage our water and air quality, even though we may not understand the underlying science.[202] And we defer to the Federal Reserve and the Department of the Treasury on various economic policies, despite not understanding the mysteries of economics.[203] We must always weigh different interests—and it may simply be that the tangible successes of CrimAI outweigh failures in transparency. And that may be the case for targets of the criminal process: Suppose CrimAI always resulted in a less harsh punishment for the criminal defendant. A defendant would rationally opt for a CrimAI sentence, even if it did not provide a comprehensible rationale for its decision.[204]
Thus, it comes down to weighing the relevant interests—formally represented through our calculus of benefits and harms. Nontransparency can impose some harms, by inter alia creating lacunae in democratic deliberation and burdening individual rights, but these harms can be outweighed by the aforementioned benefits. This just imposes an obligation for CrimAI to produce highly accurate results to outweigh the costs, with these harms of nontransparency figured in. That increases the burden of success on CrimAI, and being clear about that burden is critical.
But recall that another way to improve CrimAI’s success calculus is to limit the interventions and the resulting consequences of any prescribed action.[205] This is a powerful reason favoring the Liberty-Enhancing View. Indeed, it makes intuitive sense with respect to concerns about nontransparent decision making. In our current bureaucratic state, we are subject to various government interventions that are often not justified, or poorly so.[206] The lower the consequential magnitude of these interventions, the less pressing the nontransparency of the decision.
Consider a hypothetical: Imagine a frequent airline traveler who is, for unknown reasons, subject to repeated secondary screenings. This has resulted in some close calls on missing flights, misplacement and jumbling of their personal effects when traveling, and other inconveniences. Moreover, the Transportation Security Administration refuses to explain why this is happening or provide any meaningful guidance on how the traveler may take action to avoid these intrusions. That’s very frustrating, of course. But suppose instead that the same repeated trigger results in the traveler having their baggage and person subjected to superior, more in-depth screening machines. This adds a few seconds to the individual’s screening but otherwise requires very little intrusion. Undoubtedly, in the second hypothetical, the impact of the nontransparent decision making is much less, and, depending on the results in terms of ensuring secure travel, the regime may be a net success. In the same way, the Liberty-Enhancing View can minimize the number and harshness of punishments and intrusions by the criminal system to better ensure successful CrimAI.
2. Algorithmic Bias
Apart from nontransparency, there is a related issue about the propriety of the decision itself: algorithmic bias. The definition of algorithmic bias is itself slippery. I will be capacious in my understanding, defining algorithmic bias as when algorithms provide worse consequences for members of particular protected classes in ways that raise moral concern.[207]
Importantly, it is not enough to constitute bias that members of particular groups do worse under an algorithmic regime.[208] For example, in a medical context, if some group has particular vulnerabilities, then there may be medically justifiable reasons to prioritize treatments for them. And that may mean members of other groups fare worse under the prioritization regime. But that is not morally wrongful, and so we cannot conclude that it is biased. At the same time, it is open for debate what is itself morally wrongful: Some may maintain that moral wrongfulness requires that the worse consequences are because of wrongful reasons. Others may contend that wrongfulness can be in light of unjustified disparate results, even if the reasons for such results, or the causal processes bringing about such results, are not wrongful. As a case study, there was a robust debate as to whether the risk assessment tool COMPAS was racially biased—a discourse that hinged on what metrics of equality were used to assess COMPAS.[209] For our purposes, we cannot hope to resolve these thorny questions[210]—instead we can assume that algorithmic bias is a real and present danger to CrimAI implementation.
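The COMPAS dispute is worth making concrete, because the tension between fairness metrics is mathematical, not merely rhetorical: when two groups have different underlying base rates, a score can be well calibrated for both groups and still produce different false-positive rates. The following sketch, on synthetic data with hypothetical base rates, exhibits the pattern:

```python
# Synthetic illustration of the COMPAS metric dispute: a perfectly
# calibrated score still yields unequal false-positive rates when
# group base rates differ. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(2)

def simulate(base_rate, n=200_000, threshold=0.5):
    # Individual risk varies; the outcome is drawn from that risk, so
    # the score (the risk itself) is calibrated by construction.
    risk = np.clip(rng.beta(2, 2, size=n) + (base_rate - 0.5), 0, 1)
    outcome = rng.binomial(1, risk)
    flagged = risk >= threshold
    fpr = flagged[outcome == 0].mean()      # flagged but no reoffense
    precision = outcome[flagged].mean()     # flagged and reoffended
    return fpr, precision

for group, base in [("group A (base rate 0.35)", 0.35),
                    ("group B (base rate 0.55)", 0.55)]:
    fpr, prec = simulate(base)
    print(f"{group}: FPR {fpr:.2f}, precision {prec:.2f}")
```

This tension is the crux of well-known impossibility results in the algorithmic fairness literature: when base rates differ, calibration and parity in error rates generally cannot be satisfied at once, so the verdict on COMPAS depended on which metric one treats as the touchstone of fairness.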
Here too, there is some optimism that AI can solve its own bias problems. Cass Sunstein writes, “Algorithms do not use mental shortcuts; they rely on statistical predictors, which means that they can counteract or even eliminate cognitive biases. . . . [I]f the goal is to eliminate discrimination, properly constructed algorithms nonetheless hold a great deal of promise for the administrative state.”[211] Ignacio Cofone proffers that algorithmic discrimination can be mitigated by shaping the information base.[212] And Peter Yu suggests several principles that can help ameliorate issues with AI, including algorithmic discrimination.[213]
Not everyone is so sanguine. Daniel Solove and Hideyuki Matsumi urge caution, and skepticism that there are algorithmic solutions to algorithmic bias. Specifically, they contend that the move to algorithmic decision making may lose important qualitative features of human decision making that are more sensitive to moral concerns.[214] Sandy Mayson powerfully contends that criminal prediction in our system cannot escape bias, because “[a]ll prediction looks to the past to make guesses about future events. In a racially stratified world, any method of prediction will project the inequalities of the past into the future.”[215] Mayson thus contends that the only way forward is a radical restructuring of our criminal system—indeed, a lesson that this Article embraces.[216]
With all that in mind, we consider what algorithmic bias means for the validity of CrimAI. The first point is that we must compare CrimAI with current practice. Current practice is rife with many forms of bigotry—so it cannot be that any inclusion of algorithmic bias renders CrimAI invalid.[217] This may seem discordant to some, because they have, justifiably, adopted a maxim against any allowance for bigotry. But realistically we can agree that any process implemented—whether our current practice or a fully implemented CrimAI—will have bias. Thus, the question again is about weighing the consequences, capaciously understood.
Now, as revealed in our calculus, it is not simply that CrimAI must be better than current practice. Because CrimAI can and will be deployed more widely, it must be sufficiently accurate to result in net better consequences. Importantly, in assessing this calculus, algorithmic bias may be more than just a proclivity for incorrect results. That is, there may be a special, additional harm for getting incorrect results because of bigotry. That is a plausible view, and our calculus is robust enough to figure that additional harm in—it simply means that CrimAI must be that much more accurate to overcome such harm. That may impose a more stringent condition on CrimAI’s success, but we should embrace clarity on those conditions of success.
Lastly, as with transparency, accuracy is not the only lever to engender success. Again, the Liberty-Enhancing View’s prescriptions are potent. If we limit government intrusion, punishment, and the harshness of consequences, we can greatly mitigate the harms of algorithmic discrimination. Again, consider an alteration of the aforementioned TSA example: Suppose the reason that the individual is searched is morally wrongful racial or religious profiling. If the actual consequences of such profiling are highly mitigated, to the point where the traveler is barely inconvenienced, then an algorithmic regime may still be net beneficial, and thus, comparatively, a success.[218]
This coheres with my own intuition, as a person who, after 9/11, was very often subjected to additional screening by TSA when traveling, in circumstances that would strongly suggest racial profiling was afoot. The less onerous such interventions became, the less I cared whether I was being profiled at all.[219] That is not to wipe away the harm of such racial profiling, but it is to contend that the harms of such profiling can be reduced to the point where otherwise accurate CrimAI may be a net success.
3. Privacy Intrusions
Finally, we must address the issue of what a criminal regime, equipped with CrimAI, means for individual privacy. In short, CrimAI’s wide-scale implementation will putatively require broad intrusions into individual privacy. That is because for AI to make accurate predictions, it will require wide-ranging, time-sensitive data. And that means there will have to be live mass data collection—from us and our lives.[220]
There are important preliminary questions about whether the kind of live mass data collection that would occur effectuates intrusions into one’s genuine privacy interests. First, we can ask what privacy is and why it even matters. On this point, one helpful rubric is a typology of different types of privacy: informational privacy, decisional privacy, behavioral privacy, and physical privacy. As Manheim and Kaplan explain:
Informational privacy is the right to control the flow of our personal information. It applies both to information we keep private and information we share with others in confidence. Decisional privacy is the right to make choices and decisions without intrusion or inspection. Behavioral privacy includes being able to do and act as one wants, free from unwanted observation or intrusion. Physical privacy encompasses the rights to solitude, seclusion, and protection from unlawful searches and seizures.[221]
From this, privacy matters because it relates to our autonomy and our ability to be unto ourselves. Moreover, regardless of the ultimate rational grounding of these interests, as an empirical matter, people care about their privacy, so it does matter to their happiness and preference satisfaction.
Another illustrative inquiry is whether information that individuals put in the public sphere or share with others can ever be the source of privacy violations. Where do our genuine privacy interests end, when we consensually share copious amounts of information with many different entities in living our technologically assisted lives? Some argue that if we consensually share information with others, then we have no claim of privacy in such information—and thus there are no genuine privacy interests in our technological data imprints. Others contend that this misses that it is simply infeasible to live without technology—which vitiates consent and requires privacy protections.[222] And still many other nuanced views abound.
The Supreme Court has wrestled with these issues in its Fourth Amendment jurisprudence, most recently in Carpenter v. United States.[223] There, the Court held that obtaining and reviewing an individual’s cell-site location records spanning 127 days was a Fourth Amendment search that must be justified by probable cause.[224] The Court so held despite the fact that, prior to Carpenter, there was a long-standing “third-party doctrine”—which stated that law enforcement may obtain, without probable cause, information an individual shares with a third party, like a record of their phone calls, which is shared with their phone provider.[225] The law is in a time of flux, in a way that reflects the genuine difficulty of the underpinning philosophical questions.
For our purposes, and otherwise, I consider privacy interests valuable—intrinsically or instrumentally—and I think that an analysis of the benefits and harms of CrimAI must consider and weigh its requisite privacy intrusions.
With that in mind, a starting point is to interrogate the status of our privacy interests in the Data State more broadly. It seems that in our technologically advanced world, which already operates on big data, it is a given that there will be broad intrusions into our privacy.[226] A motivating assumption of this Article is that the march towards an AI-integrated world is practically inexorable, with a question of how to respond to such potential integration in the criminal system. One response to privacy concerns of CrimAI is that, even if we were to resist use of AI in the criminal system, privacy intrusions will nevertheless proliferate as AI expands to other areas of our lives. Thus, the argument goes, CrimAI does not substantially add to our privacy harms.
This is unpersuasive. Even if we agree that privacy interests will be regularly intruded upon in other areas of our lives, we can maintain that the government using our data—in operating the criminal system—is an additional harm, and perhaps an especially significant harm, because of the potential consequences of the criminal system. That is, if we are recognizing privacy harms, then under each of the four aforementioned conceptions, the fact that another entity is accessing and using our information for additional purposes compounds the privacy harm. Moreover, the magnitude of the consequences that such usage visits upon the individual matters too.
That said, we can question the quantum of the harm on our privacy interests when deployed in AI systems. Suppose it is the case, in deployments of CrimAI, that no individual ever interacts with data about us unless and until there is a prediction of criminal wrongdoing. For a person who is never predicted to commit a crime, has their privacy been violated?
Consider two hypotheticals:
(1) Suppose a phone camera is running but not permanently saving any information. If I walk in front of it unknowingly, has my privacy been violated? I would be inclined to say no.
(2) Now suppose that a camera algorithm, without consent, uses my images to become more adept at understanding human movement and faces, but then deletes any images or personal identifying information of me. Here, I would be inclined to say that there is some kind of privacy interest of mine that has been trespassed against. But my intuition is that this is an interest of relatively small magnitude.
Importantly, our privacy interests are not inviolate—they can be outweighed by other interests, including the interests of others and of society. That is just what the Fourth Amendment recognizes. So the question then is, insofar as CrimAI is trespassing against our privacy interests, is that outweighed by its benefits? Again, we must revisit CrimAI’s success calculus. I contend that, in light of all this, there are powerful reasons sounding in privacy to adopt the Liberty-Enhancing View. That is because, though CrimAI infringes on privacy, the Liberty-Enhancing View requires that such deployments be (1) for strongly justified reasons—stopping criminal actions that cause high levels of harm; and (2) with limited interventions—reducing the magnitude of consequences on the targets of such interventions.
Thus, on CrimAI’s potential for success, the jury is very much still out. What I have argued here is that CrimAI must reach a very high degree of accuracy to attain net benefits, compared to current practice. Moreover, this is not just a matter of getting predictions right. There are additional potential harms, beyond incorrect decisions, stemming from inter alia nontransparency, algorithmic bias, and privacy intrusions.[227] But I have maintained that none of these are categorically invalidating harms—that they can be outweighed by other good consequences from CrimAI. That may simply require higher levels of accuracy from CrimAI. Nevertheless, the success calculus can also be advanced by embracing the crux of the Liberty-Enhancing View: reducing government intrusions and punishment in the criminal system.
V. Strategy of Resistance and the Need for Alternatives
As discussed above, one key strategy employed by those concerned about the MRV is to advocate against the adoption of CrimAI. And as discussed, critics of greater use of technology, and specifically CrimAI, in criminal law focus on many of these features of the MRV.[228] They stress the potential for inaccuracy and bias, nontransparency, and privacy intrusions, while worrying that CrimAI will overwhelm our criminal systems and ultimately our lives.[229] In this discourse, many stress that CrimAI will hide and enhance existing pathologies of our current criminal system.[230] This can sound in abolitionist discourse or focus on penal and police reform. More resolute versions of resistance contend that problems of algorithmic bias and inaccuracy are innate and cannot be solved.[231] Similarly, they may maintain that nontransparency, invasions of privacy, and other dignitary harms are innate to CrimAI.[232] And often, operationally, we see opponents of CrimAI seeking to create robust legal bulwarks against CrimAI’s implementation through constitutional protections for individuals.[233] This can include stressing the importance of criminal process, which challenges nontransparency and bias; or upholding spheres of privacy, contending that certain types of information should be outside the ambit of CrimAI and criminal investigation and prosecution.[234] Other versions of resistance stress the same problems—inaccuracy, algorithmic bias, and dignitary harms—but call for a halt to CrimAI until those problems are resolved.[235] The ethos of these arguments is to hinder and slow CrimAI’s implementation, due to broad skepticism that CrimAI will result in a better state of affairs for human society.[236]
There is much to agree with in the substance of the views that ground the strategy of resistance. We should have deep concerns about a criminal justice system enhanced with technological advances. As we have detailed, such systems—carrying the force of the state—can take away our physical liberty without explanation, they can invade our most sacred, private spaces, and they can do so in ways that offend our society’s moral commitments. Consequently, it is incumbent that we approach any implementation of CrimAI with constant vigilance and skepticism.[237] In that sense, there are strong reasons to pursue the strategy of resistance.
Nevertheless, I contend that we must develop alternatives to the strategy of resistance. This is particularly the case with the more resolute versions of resistance, which contend that CrimAI technologies should be broadly banned. There are both philosophical and practical reasons behind the need for alternatives. First, it may be that CrimAI—even with its weaknesses and failures—results in a net-better state of affairs. Second, even if it does not, the allure of eliminating crime will be so powerful that politically we are likely to have a large-scale embrace of CrimAI, despite its serious limitations. Third, even if we societally agree that deploying CrimAI in our criminal justice system is a net negative, it may be that the integration of technologies underlying CrimAI is so pervasive that isolating the criminal justice system from those technologies is infeasible. We consider them in turn.
First, there is the possibility that CrimAI will actually result in a better world. Even embracing skepticism about CrimAI, there is the possibility that it will succeed in producing better results on net. Above, we detailed with formal benchmarks what would be necessary for CrimAI to be a success.[238] That required a very high degree of accuracy, when compared to current practice in the criminal law (and could be improved by mitigating adverse consequences, like punishment). As discussed, there are indicators—including based on current algorithmic methods—that CrimAI may be able to reach that success, though we cautioned that projected trajectories are hypothetical and may not be realized.[239] At the same time, however, they may well be. Indeed, Peter Salib forcefully argues that on the current state of the art, we could obtain great results in terms of mitigating incarceration by adopting various algorithmic methods.[240] And whether one agrees with Salib’s assessment or not, by discounting the possibility of CrimAI’s function and seeking to broadly bar its use, the strategy of resistance risks missing out on a net better world. And this is especially the case if we combine CrimAI with the mitigation of penal consequences.
Now, resisters may suggest that complete resistance is but a current policy, while the technology is still nascent and error-prone. They may suggest that overall they are skeptical, but watchful, and willing to adapt if and when CrimAI meets its benchmarks. Here I am in general agreement—we must be vigilant and cautious, insisting that CrimAI meets its accuracy benchmarks and is, in fact, net beneficial before its wide deployment. But there is an important caveat. Implementation of CrimAI cannot and will not immediately be perfect—that is true even if CrimAI is theoretically sound, because practical implementation can be messy.[241] So it may be the case that to reach a sufficiently accurate CrimAI, we must relax thresholds for success for some temporary period, to allow for the realistic difficulties of implementation. That is not the subject of this Article—which assumes that CrimAI could, at some point, attain both theoretically and practically sufficiently high levels of accuracy. The point is, however, that getting from where we are now to a fully implemented, sufficiently accurate CrimAI will very likely require tolerance. Perhaps CrimAI can be coupled with backstop measures, to protect against the costs and harms of inaccuracy. But empirical research has found that such backstop measures can be ineffective and can exacerbate costs.[242] Thus, to get successful CrimAI we may have to let it drive alone, with some detrimental consequences.
Resisters may insist that is too costly. And they might be right. That requires a separate analysis, which is fact-sensitive to each technology and its implementation. But practically, requiring a seamless transition to accurate CrimAI will be a bar on implementing CrimAI, and that might forego greater potential benefits.
That brings us to the second point: The contention that CrimAI must be seamlessly accurate, even if it shows promise, is unlikely to win the political day. That is the case even if resisters are morally correct about the overall costs and harms of CrimAI. The issue is that the allure of eliminating crime is so powerful that people are readily willing to overlook substantial harms of the penal system. That is true today. There are many examples of this, but for one, in prior work, I have shown that, on the current best state of the evidence, the recidivist premium—the practice of presumptively sentencing those previously convicted of crimes to longer sentences than first-time offenders for similar conduct—is unjustified and counterproductive given our retributivist and consequentialist penal goals.[243] But nevertheless, the recidivist premium is pervasive in our sentencing systems. And that, I surmise, is largely due to fears about criminal wrongdoing, and a willingness to bear disproportionate harms—especially when levied upon others (like those convicted of crimes)—to avoid criminal wrongdoing.
That attitude, I contend, is persistent and will continue with CrimAI’s implementation, especially if it shows hints of promise. The societal fear of criminal wrongdoing has the potential to overwhelm our rationality. Thus, even if it is sensible to oppose the implementation of CrimAI, because of its harms, that dam will break. Here too, I contend that even constitutional bulwarks will give way. And as the last few terms of the Supreme Court’s jurisprudence have shown us, bulwark precedents that protect constitutional rights can and do fall, even when there is no grand consensus that they should.[244]
Third and finally, even if there is sufficient societal concern about the pathologies of CrimAI, there is one further hurdle for the strategy of resistance. It may be very difficult to disentangle the criminal system from the reach of AI and big data. Many of the tools we use to make decisions in the criminal law will likely be infused with AI and big-data technologies. A very simple example: even the simplest word-processing tools may continually offer AI-generated suggestions, which may shape determinations. And, as officials in the system use AI and big-data tools in every other part of their lives, ceasing for their day jobs alone may be difficult or impractical.
So, in sum, I contend that there are strong headwinds for the strategy of resistance: it may, in fact, miss out on better consequences, and indeed better consequences for putative targets of the criminal system. And, even if the strategy of resistance is actually correct, it may still not win the day—due to collective angst about criminal wrongdoing and because of the potential ubiquitous integration of AI and big data. All that said, I may be completely wrong in reading the oncoming waves: perhaps CrimAI will not succeed, people will support banning CrimAI, and it can be effectively firewalled from our criminal system. My point, however, is not contingent on my ultimate ex post correctness. Rather, it is that standing where we are now, we cannot afford to put all of our eggs in the strategy of resistance basket. We need alternative strategies that we can pursue in parallel to repel the MRV from becoming our oppressive reality.
VI. The Liberty-Enhancing View: Principles for Justice in the Data State
As discussed at length above, there is an abiding—perhaps even overwhelming—concern with the adoption of CrimAI. I have set forth one articulation of the core concerns and fears with CrimAI—the MRV. The principal point of those opposing adoption of CrimAI is that the MRV is a bad state of affairs and we must do what we can to stop it. And the chief strategy is that of resistance. Though this approach is understandable, I believe it will not be successful if indeed CrimAI is highly accurate and deployable.
So, what should we do instead? Observe that the principal complaints with the MRV—that it will expand the criminal system to greater areas of human life, that it will obsessively focus on criminal wrongdoing, that it will treat punishment as the criminal system’s default solution, that it will trample all of our rights protections, and that it will dictate our behavior—all relate to our loss of liberty and autonomy.[245] Thus, to oppose the MRV, I contend we need a Liberty-Enhancing View of CrimAI. As a general matter, the governing, overarching idea of the Liberty-Enhancing View of a criminal justice system equipped with CrimAI is, quite simply, to expand our liberties—and most importantly, from the worst consequences of the criminal system—whenever possible.
The tough question remains: How do we do that? In short, on my review of the literature, we are at a nascent stage of such a conception. There is much to do, and creating a coherent, enduring Liberty-Enhancing View is a collective, comprehensive project. Here I offer a beginning, with five principles: (1) In dealing with putative criminal wrongdoers, a system equipped with CrimAI should first look to avoid responses that approximate punishment; (2) CrimAI should be used to eliminate pretextual, intrusive interactions between law enforcement and civilians; (3) CrimAI should eliminate mala prohibita; (4) CrimAI should be used to eliminate inchoate liabilities; and (5) Systems equipped with CrimAI should abstain from targeting people based on their purported bad character.
These are broad principles, and as a consequence, they do not always lend themselves to simple instantiation. Criminal justice is complex, so what the liberty-enhancing alternative is will be contested. Whereas an intrusive algorithmic criminal justice system may curb the individual liberty of predicted offenders, it may enhance the individual liberty of predicted victims. That suggests that the questions about best deployments of algorithmic criminal justice will be factually sensitive. Even still, I contend these basic principles will provide important guidance on how we can integrate CrimAI into a criminal justice system that enhances our individual liberty while tackling the problems of criminality. One way of understanding these principles is as a Hippocratic Oath of CrimAI integration: Do as little harm as possible, by reducing punishment and the breadth of the criminal system.[246]
A. Avoid Approximating Punishment
This is the central principle for the Liberty-Enhancing View: Insofar as algorithms are predictive of wrongful behavior, solutions to preventing criminal wrongdoing should avoid approximating punishment.
This principle arises out of perhaps the most disturbing and deleterious aspect of the Minority Report: the engagement of pre-punishment, before individuals have even committed the putative wrong. One natural question: If you know when criminal wrongdoing is going to occur with high levels of accuracy, why punish people at all? Why not stop the criminal wrong from occurring and then simply decide not to punish the putative offender?
One answer to this is that we still have epistemic doubt about the accuracy and feasibility of CrimAI. But the response should be that if CrimAI reaches the level of accuracy and feasibility to prevent criminal wrongdoing, which meets a very high bar, then we can no longer maintain complaints of epistemic doubt. If we are very confident that we can catch criminal wrongdoing, whenever it will occur, then we should not need to punish individuals—especially through incapacitation. Rather, we should intervene as minimally as necessary to stop the criminal wrong and then let them on their way to live their life as they please. If they do not desist in their proclivity to commit criminal wrongdoing, the system will predict that again—rinse and repeat. Thus, if CrimAI is indeed sufficiently accurate and practical to catch criminal wrongdoing, we should insist that it do so—and reject solutions to criminal wrongdoing that approximate punishment. Indeed, why incarcerate? You do not gain further benefits from incapacitation, because you can simply incapacitate the person by stopping their criminal wrong. And you do not even gain greater deterrence. As suggested in All the Troubles of the World, when any attempted criminal wrong is halted, people will then realize that such attempts are futile and themselves desist.[247] Indeed, as discussed above in terms of the basic principles underlying criminal punishment, there is another reason why we might disfavor pre-punishment: punishment, by definition, is the intentional imposition of harm on an individual.[248] And so it should be avoided when it is not necessary, because that would prevent some intentional infliction of harm.
Theory aside, what does this look like? Suppose CrimAI generates a prediction that an individual is going to commit an act of violence. Instead of dispatching a team of law enforcement officers to stop the criminal wrong and incarcerate that person, we could dispatch a team of varied specialists—including therapists, social workers, and law enforcement—to prevent the specific harm of the predicted criminal wrong. Then we can simply decide not to take any action that would approximate punishing the person. We can simply leave them be—and if they veer into criminal wrongdoing again, CrimAI will again allow us to intervene and prevent the criminal wrong. And if such nonintervention seems extreme or otherwise counterproductive, we can aim to provide them with services to rehabilitate them from their proclivity for criminal wrongdoing (and it may be that with the underlying technology of CrimAI, our other methods of rehabilitation become better as well).
The main takeaway is that punishment is a highly suboptimal response to putative criminal wrongdoing. If technological advance gives us ways to predict criminal wrongdoing, through CrimAI, we should marshal those technologies to avoid the suboptimal response of punishment as well.
B. Eliminate Pretext
CrimAI should seek to eliminate pretextual, intrusive conduct by the government. Consider that large swaths of the current criminal system exist not to punish the targeted conduct, which can be addressed by more efficient, less intrusive means. Rather, these parts of the criminal system exist, or are liberally used, to enable law enforcement levers for further investigation of putatively more serious conduct.[249]
A primary example is the traffic laws. Law enforcement often uses traffic laws pretextually to stop suspected individuals and thereby conduct further investigation that would not otherwise be allowed due to constitutional protections. Heien v. North Carolina, where an officer stopped a motorist over a malfunctioning brake light solely to investigate whether the car contained contraband, is highly illustrative of standard police techniques.[250] If the concern had simply been the malfunctioning light, there were more efficient ways to alert the motorist and obtain compliance with fixing the light. But a principal goal of traffic policing is further investigation of other kinds of crimes. Similarly, Whren v. United States involved officers pulling over a motorist for failing to signal, but it was plain that the officers did so only to investigate whether the motorist had contraband, as the car was in a “high drug area.”[251] Indeed, traffic cameras could easily better enforce such failures in driving behavior—and fine them—with no police intervention at all. But that was not the point of the stop. And in both cases, the Court blessed the officers’ modus operandi—because that is the reality of policing.[252]
Consider the world in light of highly accurate and practicable CrimAI. In such a world, algorithms with big data could predict criminal wrongdoing—without these needless interventions by law enforcement. And if that is correct, then we should insist on that contraction of law enforcement methods. That is, with the benefit of great predictive power, needless interventions of law enforcement in our lives should also cease. This then enhances our liberty in one key way, because it reduces the number of interactions we have where the government is subjecting us to its authority in interrogation and investigation. We have fewer interruptions of our lives, interruptions which create anxiety and dutiful compliance.
Now, one potential problem is that it may be that such pretextual stops are common ways in which the data used for CrimAI is generated and collected. But for the Liberty-Enhancing Vision, this simply creates another corollary principle: the ways in which we collect data should seek to avoid intrusive interactions between law enforcement and civilians. Instead, data collection itself should be calibrated to not impose further obligations or intrusions in our daily lives—that allows us to live without additional burden, thereby enhancing our liberty and autonomy.
C. Eliminate Mala Prohibita
CrimAI should, in a similar vein, seek to eliminate malum prohibitum laws. The distinction between malum prohibitum laws and malum in se laws is roughly that malum prohibitum is conduct that is illegal because society has deemed it so, contrasted with conduct that is wrongful in itself—literally malum in se.[253] A prototypical example of a malum prohibitum law is the prohibition on driving the wrong way on a one-way street. In contrast, a prototypical example of a malum in se law is the prohibition against murder.
Malum prohibitum laws largely exist because of coordination problems.[254] And indeed, malum prohibitum laws are not rare, because coordination problems are not rare.[255] So, though malum prohibitum laws proscribe conduct because society says so, that is not to say they do not serve an important function. For example, traffic direction is important to maintaining an efficient traffic system, which in turn has real material benefits for society at large.[256]
In the prototypical traffic example, a municipality may order that a particular street carry one-way traffic so as to ensure an orderly and safe traffic flow. There is nothing inherently wrong about driving in a particular direction on a street. Indeed, that is precisely why the intervention is needed: absent a rule, there is no inherent reason one would avoid driving a particular way on a street. Consequently, without the traffic direction, people might drive wherever they please, causing inefficiencies and perils in traffic. So we require traffic direction—and we require criminal sanction to ensure that people do not engage in circumventing behavior that eliminates the benefits of traffic control. For example, in our current traffic system, we cannot have a person drive the wrong way on a one-way street because they perceive no traffic on that road. Such behavior is dangerous if their perception is wrong, and even if it is not, it erodes the strength of the traffic rule and risks future inefficiencies.
But here then is the key point. With CrimAI comes a greater technological potential to solve coordination problems without appeal to the criminal system. Thus, with fully realized CrimAI, we will have superior knowledge of where other vehicles and pedestrians will be—and that information can be distributed to these vehicles and pedestrians. Simple rules, like “only drive north on Pioneer Road” are not as necessary. Predictive algorithms may be able to produce routes and coordinate vehicular traffic in highly efficient ways that dispense with various traffic-related mala prohibita. This is especially the case if we also combine such algorithms with communicative technology that allows for greater coordination. If we can inform all the relevant traffic actors where there are risks, set forth safe behavior—in coordination with all other traffic actors—we no longer have a need for the coordinating mala prohibita. Instead of using CrimAI to aggrandize criminal enforcement of mala prohibita, we use the underlying technology to facilitate behavior that is better for everyone, given their rational objectives—in this case, getting to their destination safely and quickly.[257] And then we can eliminate the mala prohibita altogether, reducing the footprint of the criminal system.
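As a toy of what coordination without prohibition might look like, consider a reservation scheme for a single-lane segment: rather than criminalizing southbound travel, a coordinator grants conflict-free time slots in either direction. The sketch below is entirely hypothetical, meant only to show that the rule’s coordinating function can be served without leaving a prohibition to violate:

```python
# Sketch of coordination without prohibition: a coordinator grants
# vehicles conflict-free time slots on a single-lane road segment,
# in either direction. Purely illustrative, not a real protocol.
from dataclasses import dataclass

@dataclass
class Slot:
    start: float; end: float; direction: str   # "north" or "south"

class SegmentCoordinator:
    def __init__(self):
        self.slots: list[Slot] = []

    def request(self, start: float, end: float, direction: str) -> Slot | None:
        """Grant the slot unless it conflicts with an opposing-direction slot."""
        for s in self.slots:
            opposing = s.direction != direction
            overlaps = start < s.end and s.start < end
            if opposing and overlaps:
                return None           # caller must pick another window
        slot = Slot(start, end, direction)
        self.slots.append(slot)
        return slot

road = SegmentCoordinator()
print(road.request(0.0, 30.0, "north"))   # granted
print(road.request(10.0, 40.0, "south"))  # denied: conflicting direction
print(road.request(35.0, 60.0, "south"))  # granted: segment is clear
```

A denied request is not an offense; the driver is simply assigned another window, so the coordination function of the one-way rule survives with nothing left to criminalize.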
Now, that does not mean that the criminal laws will never engage with the subject matter of the existing mala prohibita. Consider the traffic example again. Suppose CrimAI functions sufficiently well, and it directs a vehicle to slow down because of dangers to pedestrians. It, with high accuracy, tells the driver that continuing on the route at the chosen pace will likely seriously injure or kill someone. And suppose the driver does that anyway. That is prima facie criminally wrongful. But notice it is not a violation of a malum prohibitum law. It is a violation of a malum in se law—because recklessly risking someone’s life when apprised that there is such a risk is wrong in itself.
In sum, the principal idea here is that, with CrimAI at its zenith, coordination problems, which can be addressed in ways that pose few costs and hindrances to individuals, should not in the first instance be handled through the criminal law—instead, CrimAI should empower people to accomplish their ends and thus disincentivize their rational criminal wrongdoing.
D. Eliminate Inchoate Liabilities
Employing the themes above, a criminal justice system employing CrimAI should seek to eliminate various forms of inchoate liability—such as solicitation, incomplete attempt, and conspiracy liability.[258] The principal reason we punish individuals for such conduct, rather than punishing only completed crimes, is that we seek to avoid the harms of the completed crimes and seek to deter and incapacitate their occurrence.[259] That is, we wish to catch actors who are going to engage in criminal wrongdoing prior to their criminal wrong, and then subject them to punishment—in order to obtain the benefits of punishment, but without the harms of the criminal wrong. This is sensible, especially in a system that cannot always predict criminal wrongdoing. The consequence, however, is that inchoate liability covers a greater amount of conduct—not all of which would create harm.[260] It thus limits our liberty to engage in conduct that would not itself create harm but is deemed wrongful because it probabilistically may.
Here again, successful CrimAI has the potential to enhance our liberty. Fully realized CrimAI can, by hypothesis, predict criminally wrongful conduct. Thus, CrimAI will be able to predict the commission of a murder, say, with a high degree of accuracy. And this could happen at a point in time early enough to be practically useful for intervention. But if that is right, we need not make inchoate crimes themselves criminally wrongful—we can simply intervene with the putative criminal wrongdoer to stop the commission of the target crime. We need not make discussing or planning a murder itself a criminal wrong, subject to punishment in its own right, because we are not in want of the ability to anticipate criminal wrongs.
Suppose, for example, an individual is planning to commit an arson. To do so, they purchase various items, procure gas and other accelerants, search on the internet for information about the targeted locations, and then make plans with others to commit the arson at some point. Based on data collection—like their purchase data and flags triggered by their searches—authorities are alerted that there is a prediction that the individual might commit some criminal action. Thus, law enforcement intervenes—to interrogate the individual and to intrude minimally so that they cannot complete the arson (by, say, taking away the items necessary for committing it). Based on this evidentiary record, under our current law, there may be enough to ground an inchoate-crime charge, such as attempt or conspiracy.[261] But contrariwise, this principle suggests the intervention should end short of imposing any liability. There is reason for intervention to prevent the ultimate criminal wrong; but there is not sufficient justification—given the continually operating CrimAI apparatus—to impose liability for the inchoate crimes and thereafter punish the individual. This has the important effect of halting the creep of inchoate liability to earlier moments where individuals arguably meet the elements of those inchoate crimes. Instead, we use as the benchmark whether highly accurate CrimAI methods in fact predict wrongdoing. This again furthers the goal of expanding liberty by contracting criminal liability and mitigating punishment.[262]
E. Do Not Target Purported Bad Character
Finally, CrimAI should not seek to discover, affirm, and punish individuals based on their purported bad character. As discussed briefly above, a putative criminal wrongdoer’s bad character, I argue, is not appropriately an object of punishment from a retributivist theory of punishment.[263] Targeting bad character, as the object of retributive punishment, may punish the innocent, and it is otherwise unclear why one’s bad character is worthy of punishment. Thus, there is no proper retributivist purpose in targeting the wrongdoer’s bad character.
In our current system, arguably, there are consequentialist reasons to consider bad character in assessing and punishing criminal wrongdoers. The term “bad character” is polysemous. Among other things, it can refer to the disposition of the person to commit wrongs, including criminal wrongs.[264] In this way, bad character may be considered a predictor of criminal wrongdoing. At the same time, bad character may also refer to an innate or immutable nature of the individual.[265] Thus, when the criminal justice system targets bad character, it has a concerning impact: It both predicts an individual’s proclivity to commit criminal wrongdoing and at the same time declares that the individual has a particular criminal-wrongdoing nature that is fixed or intractable. Given society’s deep concern about and desire to avert criminal wrongdoing, this then leads to actions that seek to stigmatize and ostracize those labeled with bad character.[266] As Ekow Yankah teaches, “Once the criminal is deemed as having bad character, ostracization makes permanent his taint and blunts our human concern.”[267]
With CrimAI, we can change this. Now, insofar as character is simply a stand-in term for the probabilistic proclivity toward criminal wrongdoing, there is genuine reason to consider bad character—because it simply is the prediction that one would commit a criminal wrong. What is unnecessary, however, is the generalized label that individuals are innately or immutably criminal wrongdoers, and any actions that either stigmatize or ostracize the individual. If CrimAI can accurately and practicably predict criminal wrongdoing, we can target the criminal wrongs with limited interventions, as discussed above. That obviates the need to take broader actions that condemn the individual.
In this way, CrimAI can have a liberty-enhancing impact. It separates individuals from their putative acts of criminal wrongdoing. It ensures that they can remain members of society, free to carry on their lives, except as to their criminal wrongs.
F. Objections
Two important objections deserve special attention: First, is it possible to pursue all of the principles together?[268] Second, is it not naïve to think that we would pursue these objectives at all in our criminal and penal system?[269] Let us consider them in turn.
First, it may be that the principles cannot all be pursued limitlessly, consistent with CrimAI’s implementation. It may be that, to collect the vast amounts of data needed to generate predictions, some of these principles must be breached. For example, to collect data, perhaps law enforcement must be able to pretextually stop individuals. Or perhaps predictive criminal profiles generated from data may essentially duplicate a targeting of character. Or it may be that early interventions based on predictions will essentially duplicate inchoate liability.
All of these are live possibilities. We do not yet know all that it will take to create and maintain highly accurate, successful CrimAI. And as detailed above, the calculus on whether CrimAI will be successful must consider and weigh all of its benefits and all of its harms—including nontransparency, privacy, and dignitary ones. If we do develop a successful CrimAI, I think the most important principle to pursue to the end is avoiding approximating punishment. As noted, when we pursue that goal, we lower the magnitude of all of the potential harms. The other principles are important, and nothing suggests we cannot pursue them to some extent. There may be a limit to their pursuit, but continually pushing for these objectives better enhances liberty and autonomy.
Second, and perhaps more pressingly, is the Liberty-Enhancing View actually possible? This question is especially acute given that our current criminal system amply demonstrates that, as a society, we seem uninterested in the rights and humanity of those subject to the criminal system.[270] Here, I cannot profess any particular optimism on whether we will indeed embrace the Liberty-Enhancing View. Recall that this Article grows from a strand of pessimism and nonidealism: I think that CrimAI may lead to suboptimal results. But even if so, I suspect that it will come to dominate, because of our collective irrationalities regarding criminality and because AI generally will become pervasive.
In that light, the Liberty-Enhancing View is a way to preserve some amount of liberty and dignity for us, by mitigating punishment and shrinking the reach of the damoclean, draconian criminal law. But, in fact, our collective irrationalities may be too strong to embrace the Liberty-Enhancing View. We may be too fearful or too retributive. I have argued that we should not be. But if people do not listen, the Liberty-Enhancing View still has a function. It is a standard by which we can judge the penological success of a CrimAI system. That may reveal that we are failing. But we cannot rectify our mistakes if we do not know them.
VII. Conclusion
This Article began with the first sentence of Franz Kafka’s The Trial. The story is of Joseph K., who is one day told that he has been arrested for a crime, the details of which he cannot learn. He meanders through his life with the ever-present fear that he will be punished for this crime, though he does not know what it is. The sword of Damocles dangles, but wrapped around it are the vices of injustice and indignity.[271] One fine day, his time is deemed up, his executioners arrive, and they stab him in the chest while strangling him. Joseph K. utters, “Like a dog!”[272]
Without intervention, Joseph K.'s future is what we may face in a criminal system enhanced with AI and big data. We will be told we are predicted criminal wrongdoers, by something without the capacity to explain itself, and we must suffer the consequences. Our protests will not be heard, our dignity not safeguarded. We are sacrifices for the greater good, which in turn begins to seem mysterious and illusory.
However, a continued focus on the principal objective of safeguarding our liberty can lead to a better future. And we need not reject AI and big data to get there. Instead, we must commit to using AI and big data to always enhance our liberty and autonomy, avoiding punishment and criminal solutions more generally wherever possible. That is the Liberty-Enhancing View this Article has ventured to articulate, in a nascent way. This Liberty-Enhancing View has great promise, for it may harness the potential to mitigate criminal wrongs without turning us into “criminals.”
Franz Kafka, The Trial 1 (Definitive ed., Willa & Edwin Muir trans., Schocken Books 1988) (1925).
Jonathan Simon, Positively Punitive: How the Inventor of Scientific Criminology Who Died at the Beginning of the Twentieth Century Continues to Haunt American Crime Control at the Beginning of the Twenty-First, 84 Tex. L. Rev. 2135, 2140, 2145 (2006).
Id. at 2146.
Richard A. Berk, Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement, 4 Ann. Rev. Criminology 209, 225 (2021); Bernard E. Harcourt, Risk as a Proxy for Race: The Dangers of Risk Assessment, 27 Fed. Sent’g Rep. 237, 238–39 (2015); Paolo Mazzarello, Cesare Lombroso: An Anthropologist Between Evolution and Degeneration, 26 Funct. Neurol. 97, 100 (2011).
Berk, supra note 4, at 225.
Stephanie Holmes Didwania, Discretion and Disparity in Federal Detention, 115 Nw. U. L. Rev. 1261, 1325 (2021) (noting that the Public Safety Assessment does not explicitly use race).
See Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age 195, 214 (2007).
Pranshu Verma, The Never-Ending Quest to Predict Crime Using AI, Wash. Post (July 15, 2022), https://www.washingtonpost.com/technology/2022/07/15/predictive-policing-algorithms-fail/ [https://perma.cc/6KDT-7SSW].
Terminologically, I often use “data” as a singular noun, with verbs formulated for the singular.
See Harcourt, supra note 7, at 215–16.
Tim Lau, Predictive Policing Explained, Brennan Ctr. for Just. (Apr. 1, 2020), https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained [https://perma.cc/GA4W-WEFZ].
Id.; Walter L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations 17 (2013).
Predictive Policing: Navigating the Challenges, Thomson Reuters: Legal Blog (Mar. 26, 2025), https://legal.thomsonreuters.com/blog/predictive-policing-navigating-the-challenges/ [https://perma.cc/5KKL-B3J9].
Philip K. Dick, The Minority Report, Fantastic Universe, Jan. 1956, at 4, 5–6.
See, e.g., Karen Hao, AI Is Sending People to Jail—and Getting It Wrong, MIT Tech. Rev. (Jan. 21, 2019), https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/ [https://perma.cc/K6UY-5VCT]; Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/KU8W-5WU8]; Derek Thompson, Should We Be Afraid of AI in the Criminal-Justice System?, Atlantic (June 20, 2019), https://www.theatlantic.com/ideas/archive/2019/06/should-we-be-afraid-of-ai-in-the-criminal-justice-system/592084/ [https://perma.cc/7UC2-7DC7]; Sonia M. Gipson Rankin, Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments, 78 Wash. & Lee L. Rev. 647, 660 (2021); Ngozi Okidegbe, The Democratizing Potential of Algorithms?, 53 Conn. L. Rev. 739, 752–53 (2022); Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218, 2228–29 (2019); Vincent Southerland, With AI and Criminal Justice, the Devil Is in the Data, ACLU (Apr. 9, 2018), https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data [https://perma.cc/LSW9-FFF7]; Dorothy E. Roberts, Digitizing the Carceral State, 132 Harv. L. Rev. 1695, 1696 (2019); Jessica M. Eaglin, Technologically Distorted Conceptions of Punishment, 97 Wash. U.L. Rev. 483, 501 (2019); Neil C. Hughes, The Problem with Predictive Policing and Pre-Crime Algorithms, Cybernews (Apr. 28, 2025), https://cybernews.com/editorial/the-problem-with-predictive-policing-and-pre-crime-algorithms/ [https://perma.cc/UL8K-47FC].
Brandon L. Garrett & Cynthia Rudin, The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice, 109 Cornell L. Rev. 561, 586–88 (2024).
Mayson, supra note 15, at 2252–53.
Hao, supra note 15.
Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need To Be Dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ [https://perma.cc/7WEB-A8MG].
Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 86–87, 91 (2016).
Julian Adler et al., Ctr. for Just. Innovation, A Line in the Sand: Artificial Intelligence and Human Liberty 1–3 (2024), https://www.innovatingjustice.org/wp-content/uploads/2025/02/CJI_Summary_A_Line_in_the_Sand_02052025.pdf [https://perma.cc/PY9U-8REM].
See, e.g., Adam Gabbatt, Bill de Blasio Criticised for Targeting Jaywalkers in Bid to Cut Traffic Deaths, Guardian (Jan. 22, 2014, at 15:45 ET), https://www.theguardian.com/world/2014/jan/22/bill-de-blasio-criticized-targeting-jaywalkers-bid-cut-traffic-deaths [https://perma.cc/SXE8-KR5H].
Michael Lewyn, The Criminalization of Walking, 2017 U. Ill. L. Rev. 1167, 1173–74.
Jeb Butler, Impact of Jaywalking on Pedestrian Accident Claims, Butler Khan: Blog (July 29, 2024), https://butlerfirm.com/blog/impact-of-jaywalking-on-pedestrian-accident-claims/ [https://perma.cc/9RQD-V3VP].
Hughes, supra note 15 (“Most people reading this would have been guilty of jaywalking, putting their trash in the recycling bin and recyclables in the trash, or rode their bicycle in the wrong lane. What would happen if facial recognition resulted in you being charged for such an event? If it did, imagine then becoming a target by the PreCrime division for a future offence.”).
There may still be other needs for crosswalks, like providing ramps and signals for those who would need further assistance in crossing the street.
True enough, people may disobey warnings, cross unsafely, and perilously disrupt traffic. Or they may deliberately disturb traffic, to express some form of dominance. But those would arguably be different crimes, like reckless endangerment, given the warnings and the potential dangers.
For a discussion of operating traffic management without the police, see Jordan Blair Woods, Traffic Without the Police, 73 Stan. L. Rev. 1471, 1490–91 (2021).
Arvind Narayanan, Understanding Social Media Recommendation Algorithms, Knight First Amend. Inst.: Colum. Univ. (Mar. 9, 2023), https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms [https://perma.cc/5HUP-44FP].
Craig S. Smith, A.I. Here, There, Everywhere, N.Y. Times (Mar. 9, 2021), https://www.nytimes.com/2021/02/23/technology/ai-innovation-privacy-seniors-education.html [https://perma.cc/5KBB-3MU4].
Jared Spataro, Introducing Microsoft 365 Copilot – Your Copilot for Work, Off. Microsoft Blog (Mar. 16, 2023), https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/ [https://perma.cc/8Q5B-JXUX].
Dick, supra note 14, at 4–5.
Id. at 4–5, 14.
Id. at 5–7.
Id. at 6.
Id. at 6–7.
Id. at 6.
Id. at 7.
Id.
Id. at 5.
Id. at 6, 14.
Id. at 6.
Id. at 11–15.
Id. at 12–14.
Id. at 12–13.
Id. at 13–14.
Id. at 16–17.
Id. at 17–19.
Id. at 19, 21.
Id. at 19, 21–22.
Id. at 22–24.
Id. at 26.
Id. at 28–30, 35.
Id. at 28–29.
Id. at 35.
Id. at 28–30, 35.
Id. at 34–36.
Id.
Id. at 34–36. In 2002, the film Minority Report was released, with a plot loosely based on Philip K. Dick’s novella. The plot differs significantly, with a focus more on questions of free will and determinism. That said, with respect to criminal justice, the film showcases many of the same themes as the novella. Minority Report (20th Century Fox 2002).
Dick, supra note 14, at 14.
Id. at 5–6.
Id. at 6, 14.
Id. at 6–7.
Isaac Asimov, All the Troubles of the World, in Nine Tomorrows 144, 144 (1959).
Id.
Id. at 145–46.
Id. at 149–50.
Id. at 150.
Id. at 146.
Id.
Id.
Id.
Id.
Id. at 145.
Id.
Id. at 147–49.
Id. at 148.
Id. at 147–49.
Id. at 149–52.
Id. at 151–52.
Id. at 151.
Id.
Id. at 150–52.
Id. at 152.
Id. at 153–54.
Id. at 155.
Id. at 155–56.
Id. at 156.
Id. at 156–57.
Id. at 157–58.
Id. at 157.
Id. at 158.
Id. at 158–59.
Id. at 159.
Id. at 159–60.
Id. at 160.
Francis X. Shen, Neuroscience, Mental Privacy, and the Law, 36 Harv. J.L. & Pub. Pol’y 653, 654, 669–71 (2013) (observing the fear in the discourse “about rapidly improving neuroscientific techniques: Will brain science be used by the government to access the most private of spaces—our minds—against our wills?” and counseling against such fear).
O’Neil, supra note 20, at 85.
Id. at 86–87.
Id. at 100–02.
Id. at 86.
Id. at 103–04.
Id. at 87.
Id. at 86–87, 102–03.
United States v. Kincade, 379 F.3d 813, 813 (9th Cir. 2004).
Id. at 816–17.
Id. at 820–21.
United States v. Kincade, 345 F.3d 1095, 1113 (9th Cir. 2003), rev’d on reh’g en banc, 379 F.3d 813 (9th Cir. 2004).
Kincade, 379 F.3d at 818 n.7, 839–40.
Id. at 850–51 (Reinhardt, J., dissenting).
Id. at 851 (Reinhardt, J., dissenting) (quoting Osborn v. United States, 385 U.S. 323, 343 (1966) (Douglas, J., dissenting)).
Id.
Id. at 838 & n.36 (majority opinion).
Matt Stroud, The Minority Report: Chicago’s New Police Computer Predicts Crimes, but Is It Racist?, Verge (Feb. 19, 2014, at 08:31 CT), https://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist [https://perma.cc/VM7M-ZBZJ].
Id.
Id.
Id.
Later, Stroud highlights Andrew Papachristos, a Yale sociologist whose research laid the foundation for the Chicago PD’s heat list. Papachristos explains how he views the point of the program: “If we can divert resources to the right places and proceed automatically to where police and social workers need to be to help people, it would be a fundamental change in the way we approach crime and violence . . . . Whether we can actually do that is another question.” Id.
Verma, supra note 8.
Verma also highlights Papachristos, who contends that such data should be used “to figure out where to provide more social services, increase community engagement and deal with the root social causes of violence.” Id. And similarly, Verma cites John Hollywood, a scholar on predictive policing, who likewise asserts that “to truly reduce crime, police departments need to work in tandem with social workers and community groups to address issues of education, housing and civic engagement.” Id.
Id.
Thompson, supra note 15.
Id.
Id.
Id.
Id. Thompson notes thereafter the views of Sharad Goel, a Stanford computer scientist, who is cautiously optimistic about algorithms in criminal justice: “To Goel, this shows that public algorithms can be part of a larger plan for states to slash incarceration and still reduce overall crime by identifying defendants who are most likely to violently recidivate.” Id.
Garrett & Rudin, supra note 16, at 591.
Id. at 581.
AI Act Enters into Force, Eur. Comm’n (Aug. 1, 2024), https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en [https://perma.cc/9TA3-BZYW].
See Council Regulation 2024/1689, 2024 O.J. (L 1689).
EDRi & Access Now, EU Lawmakers Must Regulate the Harmful Use of Tech by Law Enforcement in the AI Act, EDRi (Sept. 20, 2023), https://edri.org/our-work/civil-society-statement-regulate-police-tech-ai-act/ [https://perma.cc/RJ2E-DUGT].
Id.
Several scholars have written about the problems of privacy and CrimAI. See, e.g., Barry Friedman, Lawless Surveillance, 97 N.Y.U. L. Rev. 1143, 1154, 1206 (2022); Fleur G. Oké, The Minority Report: How the Use of Data in Law Enforcement Breeds Privacy Concerns Among African Americans, 63 How. L.J. 87, 104 (2019); Lorna McGregor, Accountability for Governance Choices in Artificial Intelligence: Afterword to Eyal Benvenisti’s Foreword, 29 Eur. J. Int’l L. 1079, 1083 (2018); Dan Hunter, Mirko Bagaric & Nigel Stobbs, A Framework for the Efficient and Ethical Use of Artificial Intelligence in the Criminal Justice System, 47 Fla. St. U.L. Rev. 749, 773 (2020); Aaron Tucek, Comment, Constraining Big Brother: The Legal Deficiencies Surrounding Chicago’s Use of the Strategic Subject List, 2018 U. Chi. Legal F. 427, 460.
Several scholars and journalists have written about the problems of nontransparency and racial bias with CrimAI. See, e.g., Okidegbe, supra note 15, at 753; Mayson, supra note 15, at 2277, 2280; Southerland, supra note 15; Roberts, supra note 15, at 1710; Eaglin, supra note 15, at 501 & n.87; Gipson Rankin, supra note 15, at 690; Hao, supra note 15; Angwin et al., supra note 15; Mark C. Niles, Preempting Justice: “Precrime” in Fiction and in Fact, 9 Seattle J. Soc. Just. 275, 278, 300 (2010) (discussing the examples of combatting terrorism through criminal prediction, and observing the pitfalls in criminal prediction, especially focusing on false positives).
Other scholars have written about nontransparency, accountability, and confirmation-bias problems. Lyria Bennett Moses & Janet Chan, Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability, 28 Policing & Soc. 806, 818 (2018); Mareile Kaufmann et al., Predictive Policing and the Politics of Patterns, 59 Brit. J. Criminology 674, 687 (2019); Bart Custers, Effects of Unreliable Group Profiling by Means of Data Mining, in Lecture Notes in Artificial Intelligence 291–96 (J.G. Carbonell & J. Siekmann eds., 2003).
Andrew Ferguson has written prolifically on these issues with CrimAI, emphasizing many of the core concerns I have enumerated in the MRV. See, e.g., Andrew Guthrie Ferguson, Illuminating Black Data Policing, 15 Ohio St. J. Crim. L. 503, 517 (2018) [hereinafter Ferguson, Illuminating Black Data Policing]; Andrew Guthrie Ferguson, Predictive Prosecution, 51 Wake Forest L. Rev. 705, 742 (2016); Andrew Guthrie Ferguson, Why Digital Policing Is Different, 83 Ohio St. L.J. 817, 849 (2022); Andrew Guthrie Ferguson, Policing Predictive Policing, 94 Wash. U. L. Rev. 1109, 1149 (2017); Andrew Guthrie Ferguson, Persistent Surveillance, 74 Ala. L. Rev. 1, 15 (2022) [hereinafter Ferguson, Persistent Surveillance].
And similarly with Jackson Polansky and Henry Fradella. Jackson Polansky & Henry F. Fradella, Does “Precrime” Mesh with the Ideals of U.S. Justice?: Implications for the Future of Predictive Policing, 15 Cardozo Pub. L. Pol’y & Ethics J. 253, 292–93 (2017).
See, e.g., Roberts, supra note 15, at 1696–98, 1701, 1712, 1718.
See, e.g., Gipson Rankin, supra note 15, at 684, 723.
See, e.g., Friedman, supra note 133, at 1213–14.
See, e.g., Gipson Rankin, supra note 15, at 680–81, 686, 690.
Guha Krishnamurthi, Against the Recidivist Premium, 98 Tul. L. Rev. 411, 422 (2024).
See, e.g., Alec Walen, Retributive Justice, Stan. Encyclopedia of Phil. (July 31, 2020), https://plato.stanford.edu/entries/justice-retributive/ [https://perma.cc/LV6M-NQ9Z]; Douglas Husak, “Broad” Culpability and the Retributivist Dream, 9 Ohio St. J. Crim. L. 449, 450 (2012); Jeffrie G. Murphy, The State’s Interest in Retribution, 5 J. Contemp. Legal Issues 283, 292 (1994); Mitchell N. Berman, Two Kinds of Retributivism, in Philosophical Foundations of Criminal Law 437 (R.A. Duff & Stuart P. Green eds., 2011); Douglas Husak, What Do Criminals Deserve?, in Legal, Moral, and Metaphysical Truths: The Philosophy of Michael S. Moore 52 (Kimberly Kessler Ferzan & Stephen J. Morse eds., 2016); Lawrence H. Davis, They Deserve to Suffer, 32 Analysis 136, 140 (1972).
Nathan Hanna, Hitting Retributivism Where It Hurts, 13 Crim. L. & Phil. 109, 110 (2019).
David Dolinko, Three Mistakes of Retributivism, 39 UCLA L. Rev. 1623, 1626 (1992).
Walen, supra note 139.
Louis Kaplow & Steven Shavell, Fairness Versus Welfare, 114 Harv. L. Rev. 961, 1272 (2001) (discussing a similar example).
See, e.g., Kimberly Kessler Ferzan, Act, Agency, and Indifference: The Foundations of Criminal Responsibility, 10 New Crim. L. Rev. 441, 443 (2007) (reviewing Victor Tadros, Criminal Responsibility (2005)); Lloyd L. Weinreb, Desert, Punishment, and Criminal Responsibility, Law & Contemp. Probs., Summer 1986, at 47, 58; Heidi M. Hurd & Michael S. Moore, Punishing Hatred and Prejudice, 56 Stan. L. Rev. 1081, 1099 n.47 (2004); Samuel Scheffler, Justice and Desert in Liberal Theory, 88 Calif. L. Rev. 965, 983–84, 984 n.65 (2000) (citing Joel Feinberg, Justice and Personal Desert, in NoMOS VI 69 (Carl J. Friedrich & John W. Chapman eds., 1963), reprinted in Doing and Deserving 59 n.6 (Joel Feinberg ed., 1970)).
See, e.g., Russell L. Christopher, Deterring Retributivism: The Injustice of “Just” Punishment, 96 Nw. U. L. Rev. 843, 856 (2002) (“A consequentialist theory of punishment would justify punishment on the basis of the good consequences promoted by punishment.”).
See id. at 856 (“The most well-known version of consequentialism is Jeremy Bentham’s utilitarianism in which a course of conduct is evaluated by the principle of utility or the amount of happiness and suffering that is generated by the conduct.”).
See Joshua Dressler, Understanding Criminal Law 14–15 (4th ed. 2006).
A well-known criticism of utilitarianism, as a general moral theory and as a theory of punishment, is that it licenses absurd conclusions in the interest of maximizing global happiness. Zachary Hoskins & Anthony Duff, Legal Punishment, Stan. Encyclopedia Phil. (Dec. 10, 2021), https://plato.stanford.edu/entries/legal-punishment/ [https://perma.cc/532Z-4VLW] (discussing issue and citing sources). For example, it may require (or allow) the execution of an innocent person, if that would satiate the blood lust of the general public. That is, the execution of the innocent person imposes suffering on that person, but if that suffering can be outweighed by the collective happiness of many individuals, then such execution may be required or allowed by the theory. That intuitively strikes us as wrong, and utilitarians have various answers to this pressing problem. Id. One associated question is how we determine what makes people happiest, and what kind of happiness is best, in order to then undertake the utilitarian calculus. One answer to this is to appeal to preferences, wherein the utilitarian calculus seeks to maximize people’s preferences—a theory fittingly called preference utilitarianism. Walter Sinnott-Armstrong, Consequentialism, Stan. Encyclopedia Phil. (Oct. 4, 2023), https://plato.stanford.edu/entries/consequentialism/ [https://perma.cc/9NLA-LGTB].
Christopher, supra note 145, at 856–57.
This generates other questions, of course, including how to compare different goods against each other. This is an enduring problem with most pluralist theories, but it has various potential solutions. Marc O. DeGirolami, Against Theories of Punishment: The Thought of Sir James Fitzjames Stephen, 9 Ohio St. J. Crim. L. 699, 746 (2012); Mitchell N. Berman, The Justification of Punishment, in The Routledge Companion to Philosophy of Law 145–46 (Andrei Marmor ed., 2012).
Krishnamurthi, supra note 138, at 15.
Id.
Mitchell N. Berman, On the Moral Structure of White Collar Crime, 5 Ohio St. J. Crim. L. 301, 313 (2007) (characterizing side-constrained consequentialism as the “dominant principle” of American law); Hoskins & Duff, supra note 148 (detailing negative retributivism); Guha Krishnamurthi, “It Takes Two,” So “What’s Going On?”, 11 Ohio St. J. Crim. L. 179, 187 (2013).
Hoskins & Duff, supra note 148.
Selmer Bringsjord & Naveen Sundar Govindarajulu, Artificial Intelligence, Stan. Encyclopedia Phil. (July 12, 2018), https://plato.stanford.edu/entries/artificial-intelligence [https://perma.cc/9LWB-D8PY].
A. M. Turing, Computing Machinery and Intelligence, 59 Mind 433, 434 (1950); Graham Oppy & David Dowe, The Turing Test, Stan. Encyclopedia Phil. (Oct. 4, 2021), https://plato.stanford.edu/entries/turing-test/ [https://perma.cc/4KP6-KNW4].
See, e.g., Berk, supra note 4, at 225; Sabina Leonelli, Scientific Research and Big Data, Stan. Encyclopedia Phil. (May 29, 2020), https://plato.stanford.edu/entries/science-big-data [https://perma.cc/NU9T-FX7W].
Leonelli, supra note 157; see also Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences 67–68 (Robert Rojek et al. eds., 2014) (defining big data as high in volume, high in velocity, and diverse in variety). Michael White and Henry Fradella explain that, with big-data methods, “vast troves of information . . . can be used by police such as databases that capture criminal and driving history, biometric data, employment and housing records, spending habits, and a wide range of other individually specific behaviors or attributes.” Michael D. White & Henry F. Fradella, Stop And Frisk: The Use and Abuse of a Controversial Police Tactic 178 (2016).
See White & Fradella, supra note 158, at 178–79.
Krishnamurthi, supra note 138, at 430; see also Ekow N. Yankah, Good Guys and Bad Guys: Punishing Character, Equality and the Irrelevance of Moral Character to Criminal Punishment, 25 Cardozo L. Rev. 1019, 1028 (2004) (demonstrating how modern penal law continues to punish prior offenders for immoral character); R.A. Duff, Choice, Character, and Criminal Liability, 12 L. & Phil. 345, 365 (1993) (citing sources); Benjamin B. Sendor, The Relevance of Conduct and Character to Guilt and Punishment, 10 Notre Dame J.L. Ethics & Pub. Pol’y 99, 123 (1996) (arguing that people may not have control over their character).
Walen, supra note 139.
Id.
Robert M. Chesney, Beyond Conspiracy? Anticipatory Prosecution and the Challenge of Unaffiliated Terrorism, 80 S. Cal. L. Rev. 425, 435 (2007). If inaccurate, it is also possible that CrimAI will result in false negatives—a failure to catch criminal wrongdoing. But given the occurrence of crime, false negatives occur under standard practices. And so, our criminal system with CrimAI would have to fare worse in catching crime than standard practice in order to be a net negative, on that front.
That is, on specific deterrence, because the prediction of criminal wrongdoing is by assumption inaccurate, it is unlikely that the individual has a proclivity to engage in criminal wrongdoing. And thus, subjecting them to punishment does not gain us any further deterrence—they were unlikely to commit criminal wrongs in the first place.
Similarly, regarding incapacitation, because the prediction is inaccurate and the individual does not have a criminal proclivity, subjecting them to incarceration does not substantially prevent them from committing further crimes—again, because they were unlikely to commit criminal wrongs in the first place.
Finally, on general deterrence, in the moment we may gain some benefit—it may be that others, thinking that the system has targeted a truly would-be criminal wrongdoer, may be deterred from engaging in criminal wrongdoing. But over time, if criminal prediction is inaccurate, that may lead people to apathy with respect to conforming their behavior. That is, if there is no strong connection between whether you are punished and whether you would engage in criminal wrongdoing, then there is no rational reason to be deterred. If our criminal system operated on a lottery, then, because punishment is not in fact connected to behavior, such penal consequences would not rationally impact one’s behavior.
Additionally, mirroring the false-negative side of the calculus, we understand well that the criminal system does not even have to be very successful in preventing further crime (and indeed it is not) to provide a net benefit.
Indeed, accuracy in criminal prediction is required, in much the same way, to further any retributivist goals. As we saw, retributivist goals are only furthered, or at least only preserved, insofar as the predicted criminals have engaged in some kind of inchoate conduct that is morally blameworthy—that is, people on the path toward committing criminal wrongs, who are sufficiently close to doing so, and thus have entered into the territory of morally criminal wrongs though they have not completed the crime. But if criminal prediction is inaccurate, we simply will not find these people through criminal prediction. And thus we will not be able to advance the retributivist goal of giving those people their just deserts.
Cary Coglianese & Alicia Lai, Algorithm vs. Algorithm, 71 Duke L.J. 1281, 1286–87 (2022).
In theory, even if criminal prediction fares worse than standard law enforcement identification of criminal wrongdoers, the Criminal Prediction Scenario may still be an improvement, so long as the mitigation of criminal wrongdoing outweighs the costs of the additional incorrect predictions, as compared to standard law enforcement mistakes in identifying criminal wrongdoers. But in making this argument, people are prone to discounting the heavy costs of mistakes in identifying criminal wrongdoers. In reality, I suspect faring worse on identification will lead to a worse net outcome.
Here, we assume that criminal prediction will apply to all cases of the standard scenario and more; that is why we can index i over all n cases to which criminal prediction applies and still encompass the net benefit or detriment of current practice. We refer to h as encompassing both false positives and false negatives, among other things, noting that it is unlikely that any particular case is both a false positive and a false negative.
More simply, we could approximate this as:

$n \cdot (\bar{b}^{X} - \bar{h}^{X}) > 0$

Where $n$ is the number of cases, $\bar{b}^{X}$ is the average benefit (again, $X$ signifying either a criminal prediction regime or the standard scenario) from avoiding harms of criminal wrongs, and $\bar{h}^{X}$ is the average harm in a case. One further, important point is that I intend $h$ to be capacious in terms of the harms it recognizes, including tangible consequential harms like wrongful punishment and missed criminal harms, but also privacy and dignitary harms.
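To make the approximation transparent, the unapproximated, case-by-case condition can be written out as follows. This is a reconstruction from the definitions in this footnote, with $b_i^{X}$ and $h_i^{X}$ denoting the benefit and harm in case $i$ under regime $X$ (notation introduced here for illustration, not the Article’s own):

$$\sum_{i=1}^{n}\left(b_i^{X} - h_i^{X}\right) > 0$$

Replacing each case’s benefit and harm with the averages $\bar{b}^{X}$ and $\bar{h}^{X}$ yields the simpler condition above.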
See Jennifer C. Daskal, Pre-Crime Restraints: The Explosion of Targeted, Noncustodial Prevention, 99 Cornell L. Rev. 327, 363–64, 382–83 (2014).
Christopher, supra note 145, at 848.
Alice Ristroph, An Intellectual History of Mass Incarceration, 60 B.C. L. Rev. 1949, 1964 (2019).
See supra Part I.
See Stephen Rushin & Griffin Edwards, An Empirical Assessment of Pretextual Stops and Racial Profiling, 73 Stan. L. Rev. 637, 646–48 (2021).
One genus of argument about why CrimAI will be effective is to look at the dramatic progress of AI and big-data methods in various spheres of life, and thus to conclude that CrimAI will naturally also be successful. See, e.g., Jennifer Tang & Kyle Volpi Hiebert, The Promises and Perils of Predictive Policing, Ctr. for Int’l Governance Innovation (May 22, 2025), https://www.cigionline.org/articles/the-promises-and-perils-of-predictive-policing/ [https://perma.cc/X2RL-9RZF]. I am not unpersuaded by this type of argument. Indeed, as stated, I contend the pervasiveness of AI and big data will infect every aspect of life, and thus the criminal law too. The problems of the criminal law also do not seem different in kind from other types of problems, so insofar as AI and big data have shown great successes in other areas, one would think those successes could and would translate to the criminal law.
Nevertheless, I do not generally employ that argument here. For one, perhaps the problems of criminal prediction are distinct in the following way: solving criminal prediction involves efficiently predicting, with high degrees of accuracy, particular individuals’ particular states of mind. That may not be required in other focus areas of AI and big data. And thus, I treat the question of CrimAI’s potential as special, looking to deployments of AI and big data in criminal law as guides.
For very helpful reviews of current uses, see generally Berk, supra note 4 (explaining the implications of the use of big data by law enforcement in sentencing and predictive policing); Sarah Brayne, Big Data Surveillance: The Case of Policing, 82 Am. Socio. Rev. 977 (2017) (highlighting that when technology outpaces regulatory responses, there are negative implications for the surveillant landscape); Peter N. Salib, Abolition by Algorithm, 123 Mich. L. Rev. 799 (2025) (providing examples of algorithmic tools in the policing space and the harmful implications for people of color); Garrett & Rudin, supra note 16 (explaining how black box AI has interacted with modern policing); Perry, supra note 12 (listing examples of predictive crime methods currently in use).
Grace Thomas, Politicians Move to Limit Predictive Policing After Years of Controversial Failures, Tech Pol’y Press (Oct. 15, 2024), https://www.techpolicy.press/politicians-move-to-limit-predictive-policing-after-years-of-controversial-failures/ [https://perma.cc/5HAR-T6AM].
Emilia David, OpenAI Confirms New Frontier Models o3 and o3-mini, VentureBeat (Dec. 20, 2024), https://venturebeat.com/ai/openai-confirms-new-frontier-models-o3-and-o3-mini/ [https://perma.cc/V9QJ-Y9QF].
G. O. Mohler et al., Randomized Controlled Field Trials of Predictive Policing, 110 J. Am. Stat. Assoc. 1399, 1399–1401, 1408–09 (2015).
Id. at 1401, 1408.
Hyeon-Woo Kang & Hang-Bong Kang, Prediction of Crime Occurrence from Multi-Modal Data Using Deep Learning, PLOS ONE, Apr. 24, 2017, at 1.
Id. at 14–15.
Matt Wood, Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response, Univ. Chi. Biological Scis. Div. (June 30, 2022), https://biologicalsciences.uchicago.edu/news/algorithm-predicts-crime-police-bias [https://perma.cc/56EF-FSS8].
Id.
Sharad Goel, Justin M. Rao & Ravi Shroff, Precinct or Prejudice? Understanding Racial Disparities in New York City’s Stop-and-Frisk Policy, 10 Annals Applied Stat. 365, 368, 382 (2016).
Jon Kleinberg et al., Human Decisions and Machine Predictions, 133 Q.J. Econ. 237, 245–46, 289 (2018).
Id. at 241.
Hannah S. Laqueur & Ryan W. Copus, An Algorithmic Assessment of Parole Decisions, 40 J. Quantitative Criminology 151, 168, 170, 173, 177 (2024).
See also Amanda Agan, Jennifer L. Doleac & Anna Harvey, Misdemeanor Prosecution, 138 Q.J. Econ. 1453, 1459, 1501 (2023), which determined, based on a study of Suffolk County, Massachusetts, that nonprosecution of misdemeanor offenders could substantially reduce reoffending rates.
Beyond these deployments, see Elizabeth A. Rowe & Nyja Prior, Procuring Algorithmic Transparency, 74 Ala. L. Rev. 303, 327–28, 335 (2022); and Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 920 (2016), for a discussion of many other categories of artificial intelligence and machine-learning algorithms in criminal investigation, most notably including in creating unknown offender profiles and in detecting various forms of financial fraud.
See Smith, supra note 30.
Above I discussed the MRV, and many of those stressing the MRV argue that CrimAI will be inaccurate. But they maintain other points as well. See supra Section II.C. Here, I am addressing just one of those points—the claim that CrimAI cannot and will not be sufficiently accurate and practicable for implementation.
See Tang & Hiebert, supra note 174.
Kimberly Russell, AI’s Complex Role in Criminal Law: Data, Discretion, and Due Process, ABA (Apr. 1, 2025), https://www.americanbar.org/groups/gpsolo/resources/magazine/2025-mar-apr/ai-complex-role-criminal-law-data-discretion-due-process/ [https://perma.cc/BW7N-96VB].
U.S. Dep’t of Just., Artificial Intelligence and Criminal Justice 27 (2024).
See, e.g., Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum. L. Rev. 1829, 1833–34 (2019).
Garrett & Rudin, supra note 16, at 619; see also Mirko Bagaric et al., The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent and Fair Artificial Intelligence, 59 Am. Crim. L. Rev. 95, 148 (2022) (arguing the key to using AI in the criminal justice system is ensuring transparency).
See, e.g., Boris Babic & I. Glenn Cohen, The Algorithmic Explainability “Bait and Switch”, 108 Minn. L. Rev. 857, 864 (2023).
Id.
See O’Neil, supra note 20, at 18, 86–87.
See Okidegbe, supra note 15, at 757–58, 763.
Yuval Eylon & Alon Harel, The Right to Judicial Review, 92 Va. L. Rev. 991, 1010 (2006).
See, e.g., Jerome Frank, Courts on Trial: Myth and Reality in American Justice 162 (1973) (“[A] trial judge, because of overeating at lunch, may be so somnolent in the afternoon court-session that he fails to hear an important item of testimony and so disregards it when deciding the case.”); Brian Leiter, Legal Realism and Legal Doctrine, 163 U. Pa. L. Rev. 1975, 1981 (2015).
Charles Miller, Fifty Years of EPA Science for Air Quality Management and Control, 67 Env’t Mgmt. 1017, 1018 (2021).
See About, U.S. Dep’t of the Treasury, https://home.treasury.gov/about/general-information/role-of-the-treasury [https://perma.cc/QN3Z-Z9VQ] (last visited Aug. 30, 2025); U.S. Fed. Rsrv. Sys., The Fed Explained: What the Central Bank Does 1 (11th ed. 2021), https://www.federalreserve.gov/aboutthefed/files/the-fed-explained.pdf [https://perma.cc/Z8NL-36WU].
And that would be true even if it were just sufficiently more probable—but not guaranteed—that CrimAI would result in a less harsh punishment.
See Bagaric et al., supra note 195, at 126.
See, e.g., Adam M. Samaha, Government Secrets, Constitutional Law, and Platforms for Judicial Intervention, 53 UCLA L. Rev. 909, 932 (2006).
Mayson, supra note 15, at 2232.
For a discussion of this point in the affirmative action context, see Pauline T. Kim, Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action, 110 Calif. L. Rev. 1539, 1574 (2022).
Mayson, supra note 15, at 2233–34; Melissa Hamilton, Debating Algorithmic Fairness, 52 U.C. Davis L. Rev. Online 261, 268–69 (2019).
If I could, that is what this Article would be about.
Cass R. Sunstein, Governing by Algorithm? No Noise and (Potentially) Less Bias, 71 Duke L.J. 1175, 1177 (2022); see also Jon Kleinberg et al., Discrimination in the Age of Algorithms, 10 J. Legal Analysis 113, 154 (2018) (“The use of an algorithm is an alternative way to try to deal with the bias of human decision-making. To the algorithm, the name on a resume, race, age, sex, or any other applicant characteristic are candidate predictors like any other: variable X42.”).
Ignacio N. Cofone, Algorithmic Discrimination Is an Information Problem, 70 Hastings L.J. 1389, 1402 (2019).
Peter K. Yu, The Algorithmic Divide and Equality in the Age of Artificial Intelligence, 72 Fla. L. Rev. 331, 361–62 (2020); see also Wesley M. Oliver et al., Computationally Assessing Suspicion, 92 U. Cin. L. Rev. 1108, 1148 (2024) (describing how AI models could overcome implicit biases in the reasonable suspicion context).
Daniel J. Solove & Hideyuki Matsumi, AI, Algorithms, and Awful Humans, 92 Fordham L. Rev. 1923, 1926–27 (2024).
Mayson, supra note 15, at 2218, 2251; see also Vincent M. Southerland, The Intersection of Race and Algorithmic Tools in the Criminal Legal System, 80 Md. L. Rev. 487, 532 (2021) (discussing policy choices necessary to confront pervasive racism and unfairness when deploying algorithmic tools).
Mayson, supra note 15, at 2297.
Southerland, supra note 215, at 531–32.
Peter Salib has offered a new metric, called “bias-impact” that would gauge algorithmic bias based on “how a new policy changes the amount of discriminatory harm suffered by members of a disadvantaged group,” instead of a focus on “imbalances in the distribution of harmful outcomes.” Salib, supra note 175, at 844.
See Hugh Handeyside, New Documents Show This TSA Program Blamed for Profiling Is Unscientific and Unreliable — But Still It Continues, ACLU (Feb. 8, 2017), https://www.aclu.org/news/national-security/new-documents-show-tsa-program-blamed-profiling [https://perma.cc/L9ZB-RV2P].
See, e.g., Karl Manheim & Lyric Kaplan, Artificial Intelligence: Risks to Privacy and Democracy, 21 Yale J.L. & Tech. 106, 119–20, 122 (2019).
Id. at 118 (footnotes omitted).
See Orin S. Kerr, The Case for the Third-Party Doctrine, 107 Mich. L. Rev. 561, 566 (2009); Callie Haslag, Technology or Privacy: Should You Really Have to Choose Only One?, 83 Mo. L. Rev. 1027, 1051 (2018).
Carpenter v. United States, 585 U.S. 296, 300 (2018).
Id. at 302, 316.
Id. at 313–14; Smith v. Maryland, 442 U.S. 735, 745–46 (1979).
Zeynep Tufekci, We Need to Take Back Our Privacy, N.Y. Times (May 19, 2022), https://www.nytimes.com/2022/05/19/opinion/privacy-technology-data.html [https://perma.cc/68EW-5RVD].
See supra Section IV.B.1.
See, e.g., Chaz Arnett, From Decarceration to E-Carceration, 41 Cardozo L. Rev. 641, 675 (2019).
See supra Part II.
See supra Part II.
See supra Part II.
See supra Part II.
See supra Part II.
See supra Part II.
See Garrett & Rudin, supra note 16, at 618.
See supra Part II.
See Garrett & Rudin, supra note 16, at 626.
See, e.g., Ferguson, Illuminating Black Data Policing, supra note 133, at 518.
See supra Part IV.
Salib, supra note 175, at 820.
Ferguson, Persistent Surveillance, supra note 133, at 43.
See Megan Stevenson, Assessing Risk Assessment in Action, 103 Minn. L. Rev. 303, 334 (2018); Megan T. Stevenson & Jennifer L. Doleac, Algorithmic Risk Assessment in the Hands of Humans, Am. Econ. J., Nov. 2024, at 382, 401–03. In these studies deploying algorithmic recommendations for sentencing, judges often disregarded the algorithmic recommendations, resulting in higher sentences that may have been inefficient for our penal goals. Id.
Krishnamurthi, supra note 138, at 466.
E.g., Dobbs v. Jackson Women’s Health Org., 142 S. Ct. 2228, 2278–79 (2022) (citing Texas v. Johnson, 491 U.S. 397 (1989)).
See supra Section II.C.
O’Neil, supra note 20, at 205 (discussing the need for an AI Hippocratic Oath); Nikolaos M. Siafakas, Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?, AI Mag., Winter 2021, at 57, 58–59; Chinmayi Sharma, AI’s Hippocratic Oath, 102 Wash. U. L. Rev. 1101, 1142 (2025).
Asimov, supra note 64, at 149–50.
Hamish Stewart, Criminal Punishment as Private Morality: Victor Tadros’s The Ends of Harm, 9 Crim. L. & Phil. 21, 22 (2015) (reviewing Victor Tadros, The Ends of Harm (2012)).
See, e.g., Illya Lichtenberg, Police Discretion and Traffic Enforcement: A Government of Men?, 50 Clev. St. L. Rev. 425, 429–34 (2003).
Heien v. North Carolina, 574 U.S. 54, 57–58 (2014).
Whren v. United States, 517 U.S. 806, 808–09 (1996).
Id. at 815; Heien, 574 U.S. at 61.
James Edwards, Theories of Criminal Law, Stan. Encyclopedia Phil. (Aug. 6, 2018), https://plato.stanford.edu/entries/criminal-law/ [https://perma.cc/F2NM-63MJ].
See, e.g., Claire Finkelstein, Positivism and the Notion of an Offense, 88 Calif. L. Rev. 335, 374 (2000).
Youngjae Lee, Mala Prohibita, the Wrongfulness Constraint, and the Problem of Overcriminalization, 41 L. & Phil. 375, 380, 384 (2022).
See Lewyn, supra note 23, at 1176.
See Grace Wang, How AI Can Help End Traffic Jams, TTI (Mar. 25, 2024), https://www.traffictechnologytoday.com/opinion/opinion-how-ai-can-help-end-traffic-jams.html [https://perma.cc/Z9ZN-G6LS]; Frank Gaff, Can AI Solve Our Traffic Problems?, PBS N.C. (June 9, 2025), https://www.pbsnc.org/blogs/science/can-ai-solve-our-traffic-problems/ [https://perma.cc/2H3F-QWMC]; Queenie Wong, California’s Transportation Agency Thinks AI Can Help Cut Traffic, L.A. Times (Jan. 8, 2024, at 13:00 PT), https://www.latimes.com/california/story/2024-01-08/california-traffic-roads-safer-generative-ai-help [https://perma.cc/66J6-DHNX].
Dressler, supra note 147, at 405.
See, e.g., Victor Tadros, The Ends of Harm: The Moral Foundations of Criminal Law 326 (2011).
See Polansky & Fradella, supra note 133, at 274–75.
Dressler, supra note 147, at 405–06.
One observation is that the idea of contracting criminal liability, but still allowing law enforcement intervention, may seem either empty or duplicative of prior principles. I do not think it is empty: consider the difference between an officer giving someone a warning for reckless driving and giving them a citation or arresting them. But it may be duplicative of the other principles, chiefly avoiding approximating punishment and eliminating pretext. As I noted, there is significant overlap in these principles, but I think it is still illuminating to enumerate them in this way.
See supra text accompanying note 160.
Krishnamurthi, supra note 138, at 429.
Id. at 430.
This involves long terms of incarceration, disenfranchisement, post-sentence conditions that make employment and housing difficult to obtain, and other dispossession of core rights. See, e.g., Yankah, supra note 160, at 1031–32; George P. Fletcher, Disenfranchisement as Punishment: Reflections on the Racial Uses of Infamia, 46 UCLA L. Rev. 1895, 1906–07 (1999); Nora Demleitner, Preventing Internal Exile: The Need for Restrictions on Collateral Sentencing Consequences, 11 Stan. L. & Pol’y Rev. 153, 158 (1999); Michael Pinard, Collateral Consequences of Criminal Convictions: Confronting Issues of Race and Dignity, 85 N.Y.U. L. Rev. 457, 490 (2010); Regina Austin, “The Shame of It All”: Stigma and the Political Disenfranchisement of Formerly Convicted and Incarcerated Persons, 36 Colum. Hum. Rts. L. Rev. 173, 176 (2004); Michael O’Hear, Third-Class Citizenship: The Escalating Legal Consequences of Committing A “Violent” Crime, 109 J. Crim. L. & Criminology 165, 235 (2019); Dallan F. Flake, When Any Sentence Is a Life Sentence: Employment Discrimination Against Ex-Offenders, 93 Wash. U. L. Rev. 45, 58 (2015); Zach Sherwood, Note, Time to Reload: The Harms of the Federal Felon-in-Possession Ban in A Post-Heller World, 70 Duke L.J. 1429, 1458 (2021); Ben Geiger, Comment, The Case for Treating Ex-Offenders as a Suspect Class, 94 Calif. L. Rev. 1191, 1222 (2006).
Yankah, supra note 160, at 1032.
See, e.g., DeGirolami, supra note 150, at 710.
See, e.g., Ristroph, supra note 171, at 1955.
See id. at 1999.
Kafka, supra note 1, at 11–12.
Id. at 229.
