- I. Introduction
- II. Theme 1: Accuracy of AI Programs and Related Problems
- III. Theme 2: Empathy and Other Human Variables in Judicial Decision-Making
- IV. Conclusion
Thank you to the University of Houston Law Center and Dean Baynes for hosting me, to Professor Duncan for inviting me, to Professor Rave for coordinating my visit, and to Professors Rave, Berman, and Chandler, and their students for allowing me to attend your wonderful classes today.
My sister, Jennise Walker Stubbs, graduated from the University of Houston Law Center. While here, she was active in mock trial and cofounded the Houston Business and Tax Law Journal (which I am told is ranked in the top ten of all American tax law journals). She is now managing partner at Shook, Hardy & Bacon—clearly the family moneymaker. And her place in the family is particularly important because you never know when a government shutdown might come after the judiciary.
Most especially, I would like to thank Judge Ruby Sondock, whom I have known since I was a brand-new state court judge. In case you are not aware, Judge Sondock is a superb mediator. Before she was a mediator, however, she was the first woman appointed to a state district court in Harris County, Texas (where the University of Houston is located), and the first woman to hold a permanent seat on the Supreme Court of Texas. When I was a district judge in Harris County, she once took me out to lunch in order to give me advice and keep me out of trouble. I was grateful then for her guidance, and I am still grateful today for her mentorship. Her mentorship is all the more meaningful because when she joined the judiciary, she did not have such a fine female role model to show her the ropes. Judge Sondock’s legacy as a trailblazer for Texas women judges is unsurpassed. I am indebted to her for her example, and I am forever grateful for her friendship and frank advice.
Today, I am going to talk about the role of new technology in the judicial process and the efficacy of such technology. I am not going to explain the intricacies of artificial intelligence (AI). As my 18-year-old daughter recently pointed out, I have no business giving a speech on artificial intelligence given that I do not know the password for my home computer or even how to use the forty TV remotes we have. My tech ignorance has its benefits, though. I never have any typos in my tweets because I do not have a Twitter account. We judges make enough mistakes in our opinions. Rest assured, this afternoon, I will be speaking on a subject that is more legal than it is technical. There will be no detailed discussion on how to write scripts, design machine learning algorithms, or store data on private servers.
In stark contrast to my own, my husband’s technical knowledge is extensive. He has a master’s degree in operations research and another master’s degree in computer science. In one of his online classes, he had a TA named Jill Watson. Jill was the perfect TA; her attitude was encouraging, her responses were timely, and her answers were always accurate. Jill, it turned out, was an automated teaching assistant. Eventually, the class discovered Jill’s identity because she was simply too good at her job. She answered questions correctly—with perkiness and positivity—at 3:00 p.m. and 3:00 a.m. alike. No human could do that.
Might Jill—and programs like her—replace functions in our field traditionally performed by human beings? Could judges simply call out, “Hey Siri, tell me what to do in this case”? Chief Justice Roberts famously quipped that it is his “job to call balls and strikes,” but many are now calling for MLB umpires to be assisted by a computerized strike zone. And my Judicial Assistant Maria Valdez, a loyal Saints fan, would be more than happy to replace all referees with robots. In the judicial context, it would seem that this replacement runs the risk of removing the human element from the judiciary. We may trade discretion for science and humanity for precision. Is this a good trade?
Before we can consider the implementation of judicial AI, and in so doing consider the merits of a robotic judiciary, we must first assess the state of our technology. In theory, predictive risk assessment programs err less frequently than do human assessments because of the programs’ capacity to process massive amounts of data. What if, however, the data processed by the program is skewed? What if the humans who designed the program are prejudiced? Should we use the same program for Texas as for California? For men as for women?
The development of artificial intelligence is inevitable. Predictive modeling technology pervades many areas of American industry—the medical field and law enforcement, to name two. Indeed, in early 2019 President Trump signed an executive order encouraging the development of AI infrastructure across American government, academia, and industry. Such technology raises unique questions when applied to the judicial process. We must identify and answer these questions. AI’s use in the courtroom is growing, and we would do well to flag those aspects that might threaten the American judiciary and adopt those that might improve it.
In my talk today, I will build upon two major themes. First, I will use the example of risk assessment programs to consider the logistics: What problems must we overcome before we can implement AI tools for judges? Second, I will consider the problems we face after we implement this technology: How does the judiciary change with the incorporation of predictive models? In sum, I ask two questions: Can we make effective tools for the judiciary? Should we?
II. Theme 1: Accuracy of AI Programs and Related Problems
A. Artificial Intelligence Overview
The programs I am going to discuss today are characterized as “Narrow AI.” Narrow AI stands in contrast to artificial general intelligence, or “AGI.” Still hypothetical, AGI refers to the process by which a program learns how to perform an unspecified range of diverse functions. Narrow AI, on the other hand, refers to the process by which a program learns how to perform a specific, or narrow, task. The program gathers many data points—all of which relate to the relevant task—and processes the data to more accurately perform the task.
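To make the contrast concrete, here is a toy sketch of the Narrow AI idea. The dataset, task, and function names below are invented purely for illustration, not drawn from any actual product: a program that performs exactly one task (sorting messages into "spam" and "ham," i.e., not spam) and gets better at that one task as it accumulates task-specific data points.

```python
# A toy illustration of "Narrow AI": a program that performs exactly one
# task (flagging spam-like messages) and improves only at that task as it
# accumulates task-specific examples. All data here is invented.

def train_word_counts(examples):
    """Count how often each word appears in spam vs. ham examples."""
    counts = {"spam": {}, "ham": {}}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def classify(counts, text):
    """Label a message by which class its words appear in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"].get(w, 0) for w in words)
    ham_score = sum(counts["ham"].get(w, 0) for w in words)
    return "spam" if spam_score > ham_score else "ham"

examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("lunch meeting at noon", "ham"),
    ("draft opinion attached for review", "ham"),
]
counts = train_word_counts(examples)
print(classify(counts, "free prize money"))           # "spam"
print(classify(counts, "meeting to review the draft"))  # "ham"
```

The program knows nothing beyond this one narrow task; feed it more labeled examples and it sorts messages a little more accurately, but it will never learn to do anything else. That is the distinction from AGI.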
Narrow AI is ubiquitous in our daily lives; Tesla’s self-driving cars, Apple’s face recognition software, and Spotify’s personalized playlists all utilize Narrow AI. It also helps Siri learn how to respond to my style of dictation—though I could use some help learning how to dictate to her. I do especially like when I ask Siri the score of a Baylor football game and she tells me—in her cheerful, perky voice—that my Baylor Bears have “mauled” their latest opponent. You can even find Narrow AI programs in dating apps. Hinge, for example, now implements AI to process first date feedback to give users better potential matches (I learned this from reading an article about it—and definitely not from talking to my law clerks).
B. Introduction to AI and Criminal Sentencing Procedure
As mentioned above, AI already pervades many American industries. In the medical field, for example, Medical AI—also known as “black-box medicine”—utilizes data from past patients to more accurately diagnose and treat present patients. And many police departments utilize crime forecasting programs that process past crime information and predict where and how frequently crimes will occur.
The legal field has also seen an increase in AI use. Jury consultants employ predictive technology to assess the likelihood of one verdict or another, and law firms now rely on AI programs to predict how a judge will rule in a given case, using those predictions to leverage settlement agreements. The firms even utilize these programs to survey the language from past motions; they incorporate language from motions that were granted and avoid language from motions that were denied.
It is not initially clear, though, how accurate these models can be in predicting a particular case’s outcome. Consider The Case of the Speluncean Explorers, a thought experiment proposed by Lon Fuller in which five fictitious judges write five competing opinions in response to a hypothetical fact pattern and statute. On the fiftieth anniversary of Fuller’s article, Harvard Law Review gathered real people—judges and scholars—to write their own opinions as to how they thought the case should come out. Each, unsurprisingly, had a different understanding of the proper resolution. It seems that even when dealing with the same facts and the same unambiguous (we are told) statute, judges will still reach differing conclusions. After all, even textualists and originalists, who seek to exclude many potential sources of legal meaning, recognize that context and background inform the meaning of legal texts.
The question going forward, then, is whether these predictive algorithms will become sophisticated enough to account for other variables, like context and background, that a judge might consider in addition to precedent and a plain-text reading. Certainly, with respect to procedural rulings, judges “tend to be somewhat consistent, and even habitual,” such that an algorithm could have sufficient predictive value in those instances. But as The Case of the Speluncean Explorers illustrates, some judges may be more inclined “to focus more on the ‘equities’ of the individual case—the aspects of the case that tug at the heartstrings—and less on its precedential significance.” If that is so, the models might not be able to adequately capture the nuance involved to build an accurate profile of how a given judge will rule in this or that case. And even if these AI models do progress to that level, and the use of predictive technology becomes more prevalent, who’s to say, as Professor Blackman speculates, that judges won’t simply do the opposite of whatever these models are saying they will do in an effort to maintain some semblance of their independence?
While the effect of those changes on the practice of law is surely worthy of discussion, the focus of this afternoon’s talk is on artificial intelligence and its role in judicial decision-making. One concrete example is the use of AI in criminal sentencing. Some states, like Pennsylvania, Kentucky, and Wisconsin, use evidence-based risk assessment programs to assist in criminal sentencing. These programs work as follows: The system accepts criminal data—crime, sentence, and subsequent offenses—as well as demographic data—age and gender, among other items. The system then compares this data to an individual to be sentenced and determines: (1) those areas in which the individual will need special assistance—like employment, housing, or substance abuse; and (2) the individual’s pretrial, general, and violent recidivism risk. In theory, risk assessment programs learn from past successes and failures to ensure that high-risk offenders are treated with appropriate caution and low-risk offenders are treated with reasonable leniency.
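As a rough sketch of those two determinations, consider the following hypothetical program. Every field name, base rate, and adjustment below is invented for illustration; real tools rely on far more inputs and on proprietary scoring models.

```python
# Hypothetical sketch of a risk assessment tool's two outputs:
# (1) need areas flagged for assistance, and (2) a recidivism risk score,
# computed by comparing an individual to historical group data.
# Every field name, base rate, and adjustment here is invented.

HISTORICAL_RECIDIVISM_RATE = {  # invented base rates by prior-offense count
    0: 0.15, 1: 0.25, 2: 0.40, 3: 0.55,
}

def assess(individual):
    # (1) Flag need areas from simple yes/no inputs.
    needs = [area for area in ("employment", "housing", "substance_abuse")
             if individual.get(f"needs_{area}")]
    # (2) Estimate risk from the group the individual most resembles.
    priors = min(individual["prior_offenses"], 3)
    base = HISTORICAL_RECIDIVISM_RATE[priors]
    # In the invented historical data, younger offenders reoffend more often.
    if individual["age"] < 25:
        base = round(min(base + 0.10, 1.0), 2)
    return {"needs": needs, "recidivism_risk": base}

result = assess({"age": 22, "prior_offenses": 1,
                 "needs_employment": True, "needs_housing": False})
print(result)  # {'needs': ['employment'], 'recidivism_risk': 0.35}
```

Note what the sketch makes plain: the individual is scored according to the group he resembles, which is precisely the objection raised in the litigation discussed below.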
Professor Chandler’s excellent Analytical Methods course helps illustrate the benefits of criminal sentencing AI. In his class, Professor Chandler teaches about Bayes’ Theorem, an analytical method used to determine the likelihood that an event will happen. This method allows us to more accurately predict the occurrence of an event because it simultaneously considers the probability that an event will occur in isolation and the probability that the test we use to detect the event’s occurrence will fail. Criminal sentencing AI can be effective because it considers both the risk assessment in isolation and the historical accuracy of its own assessment. In so doing, the system learns over time, becoming ever more precise. This capacity of a program to improve itself is what we call machine learning.
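A worked example with invented numbers shows why this self-checking matters. Suppose that in the historical data 20% of offenders reoffend, that the program flags 90% of those who do, and that it also falsely flags 30% of those who do not. Bayes’ Theorem tells us how much a “high risk” flag should actually move us:

```latex
P(\text{reoffend} \mid \text{flagged})
  = \frac{P(\text{flagged} \mid \text{reoffend}) \, P(\text{reoffend})}
         {P(\text{flagged} \mid \text{reoffend}) \, P(\text{reoffend})
          + P(\text{flagged} \mid \text{no reoffend}) \, P(\text{no reoffend})}
  = \frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.3 \times 0.8}
  = \frac{0.18}{0.42} \approx 0.43
```

Under these assumed numbers, a flagged individual actually reoffends less than half the time. A system that tracks the historical accuracy of its own flags can correct for exactly this kind of gap; one that does not will systematically overstate risk.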
C. A Case Study—State v. Loomis
Criminal sentencing AI played a central role in a recent case in the Wisconsin Supreme Court: State v. Loomis. In Loomis, the defendant—alleged to be the driver in a drive-by shooting—argued that the use of a risk assessment program violated his due process rights because it impermissibly considered his gender and caused him to be sentenced according to group data, rather than according to individualized data.
The majority of the Wisconsin Supreme Court ultimately held that it was permissible for the trial court to have considered the risk assessment in addition to the other evidence, but the majority emphasized the importance of the trial court making the decision itself. To the Wisconsin Supreme Court, the assessment was simply one tool in the trial judge’s toolbox and was not to be given too much weight.
Two justices filed concurrences. The first concurrence clarified that although a court may consider risk assessments, it may not rely on them. The second stressed that a court should be obliged to set forth on the record a process to address the strengths and weaknesses of the relevant technology.
Loomis provides a great illustration of the many thorny issues surrounding the use of risk assessment tools and highlights important concerns in this nascent criminal AI infrastructure.
First, to what extent must a defendant be allowed to examine the assessment? Many assessments take as inputs simple questions—e.g., how many times has this person been returned to custody when on parole, or how many times has this person had a new charge or arrest while on probation? Is a defendant afforded sufficient process if he is allowed to challenge these predicate questions? Or must he be allowed to question the entire model?
Second, in what ways are the programs’ authors biased? Perhaps predictive models remove the potential for prejudice from judges. But perhaps they also highlight the opportunity for bias in those that do the predicting, such that criminal sentencing AI shifts the potential for prejudice from the back-end to the front-end. In fact, we see this bias-shifting problem in many aspects of American law. In an attempt to combat partisan gerrymandering, for instance, several states have suggested the use of automated programs to draw electoral districts. But do Republicans or Democrats design the code that draws the lines? What assumptions go into the models? Similarly, in an effort to more accurately determine the original meaning of words and phrases, Justice Tom Lee of the Supreme Court of Utah has championed the use of corpus linguistics, a process by which researchers examine a mass of sources published near the relevant time period to ascertain the consensus understanding of a word or phrase. But do originalists or purposivists examine the data?
Third, to what extent may a court rely on these assessments? In Loomis, the court’s holding depended in part on the fact that the trial court considered the assessment as one factor among many. As these assessments become more accurate, might courts treat them as dispositive? Should it matter if the state government—as is the case in New York—has performed a study on the accuracy of such assessments?
Fourth, how individualized must the assessments be? After all, risk assessments are designed to identify groups of high-risk offenders, not a particular high-risk individual. Are criminal risk assessments akin to actuarial assessments performed in the insurance industry? If the programs are sufficiently accurate, does this distinction even matter?
Fifth, and finally, what role does demographic data play in this calculus? Certainly, courts are prohibited from sentencing defendants on the basis of their gender. What if, however, all studies indicate that men, on average, have higher recidivism and violent crime rates than do women? Does an assessment program that fails to account for gender disadvantage women? Or vice versa? Likewise, some of the programs have been criticized for their propensity to discriminate according to race. It would be inexcusable to incorporate racial bias into judicial technology, even if this incorporation was implicit. As Buck v. Davis has instructed us, dispensing punishment on the basis of immutable characteristics—that is, punishing people for who they are, not what they do—flatly contravenes a basic premise of our justice system.
The incorporation of demographic data into criminal risk assessment programs implicates a host of other concerns as well. For example, how regional in scope should assessments be? After all, crime rates in downtown Houston may be vastly different from crime rates in The Woodlands.
As I have mentioned, criminal sentencing AI is still in its early stages, likely to improve over time to answer these serious concerns. Its onset offers efficiency, accuracy, and fairness to the sentencing process. This offer, however, comes with several strings attached. Before we standardize criminal sentencing AI, we must develop protections that address the accuracy of such programs and govern the scope of their use.
As the adjudicative process begins to look more robotic, we are presented with an opportunity to reflect upon the human aspects of judging. What do we gain when we adopt technology like criminal sentencing AI? Just as important, what do we lose?
There is a great scene in the television show For the People that deftly captures this dilemma. Judge Diane Barrish has informed Allison Adams, a federal public defender, that she will be using risk assessment software to assist in calculating the appropriate sentence for one of Allison’s clients. Allison objects, noting that to have faith that the program is fair is a lot to ask. Judge Barrish responds that Allison can have faith in the experience that comes with sitting on the bench for seventeen years. Still, the judge allows Allison time to prepare arguments against the program’s use.
Allison returns to court, armed with statistics. The national recidivism rate, she argues, is 32%, while the risk assessment program’s rate is even higher, at 34%. Yet for defendants tried and sentenced by Judge Barrish, the recidivism rate is only 19%. Allison then reminds Judge Barrish of her earlier words: to have faith in the judge’s experience. These numbers support Allison’s belief that Judge Barrish is fair and that she does not need to rely on software to help her with sentencing. The judge appreciates Allison’s argument and recalls a computer that was designed to beat a chess Grandmaster but ultimately lost. Nowadays, however, anyone with a smartphone can download an app capable of demolishing that Grandmaster. The technology may not be perfect now, but it is always improving. Judge Barrish admits to having been premature in her decision to use the program, but she cautions Allison that this is the future, and one day the risk assessment program will routinely beat her 19%.
III. Theme 2: Empathy and Other Human Variables in Judicial Decision-Making
A. Bridge from Criminal Sentencing AI
The example of criminal sentencing AI shows how predictive algorithms could be the next tool in the judicial toolbox. In order to understand the role that AI can play in the judicial process, we need to understand why we even have tools in the first place.
Many of our tools, for example, guide our interpretation of legal texts. We have textual canons of construction—interpretive devices like the “Presumption of Consistent Usage” and the “Surplusage Canon”—that help us understand what statutory provisions mean. We also have substantive canons, like the rule of lenity, that infuse our application of the law with certain values that society has agreed upon. And of course, common law principles and stare decisis likewise guide our decisions. The function of these tools is to limit judicial discretion by reducing the number of possible outcomes. To an extent, they remove the judge from the judging, leading to “a government of laws and not of men.”
To illustrate this purpose, imagine a man breaks into Kyle Field, Texas A&M University’s football stadium. He plans to vandalize the stadium but fails to do so. The man injures a police officer in his attempt to flee but surrenders upon the arrival of other officers. What sort of risk does this man pose? Now imagine two judges—one who graduated from Texas A&M and who attends every football game, and one who graduated from the University of Texas. Though, undoubtedly, both judges will seek to sentence the defendant fairly, might it be possible that the University of Texas graduate will go a little easier on the defendant? This bias might very well be subconscious or implicit.
But what if the judges used criminal sentencing software to determine the sentence? The program, naturally, would reach the same result when used by either judge. The risk assessment technology removes the Aggie judge’s opportunity to assess a heavy sentence upon a defiler of his alma mater’s sacred stadium. The technology likewise removes the potential for the Longhorn judge’s subconscious bias to affect the sentence. In short, tools like criminal sentencing AI depersonalize judicial decision-making in exchange for accuracy and precision.
B. Bias and Discretion in Judicial Decision-Making
Although it is clear that technology like predictive risk assessments removes judicial discretion, it is less clear whether this removal is beneficial. Judges arrive on the bench with different religious, ethnic, and economic backgrounds (though our highest court appears to lack a little diversity when it comes to higher education). Does a jurist’s unique upbringing affect their decision-making? Justice O’Connor did not think so. She once said that at the end of the day, on a legal issue, “a wise old man and a wise old woman reach the same conclusion.” Justice Sotomayor, by contrast, has remarked that “a wise Latina woman with the richness of her experiences would more often than not reach a better conclusion than a white male who hasn’t lived that life.” A diversity of perspectives, she might add, helps flesh out the contrasting positions in litigation.
It is undoubtedly true that members of the judiciary come from different backgrounds. The question is whether this diversity should affect the result of the case. Importantly, the question is not whether this diversity should affect a judge’s demeanor. From interacting with litigants to handling the day-to-day business of the court, there are many aspects of a judge’s job, aside from making legal decisions, in which empathy is undoubtedly a positive characteristic. The trickier issue is determining what, if any, influence that empathy (or perspective or passion) should have on a judge’s decision-making process.
Some may believe that the perspective and empathy we develop through life experiences are effective tools in judicial decision-making. As then-Senator Obama observed:
[A]dherence to precedent and rules of construction and interpretation will only get you through the 25th mile of the marathon. That last mile can only be determined on the basis of one’s deepest values, one’s core concerns, one’s broader perspectives on how the world works, and the depth and breadth of one’s empathy.
Because the law is applied to real persons with human concerns and is not read in a vacuum, empathy is seen as a reprieve from the law’s otherwise unfeeling application. In the words of Justice Brennan, lest the law become sterile and bureaucratic, we must embrace judicial “passion,” which he defined as “the range of emotional and intuitive responses to a given set of facts or arguments, responses which often speed into our consciousness far ahead of the lumbering syllogisms of reason.”
But others have suggested that the use of passion and empathy is ill-advised in this setting. Senator Grassley has commented: “[Some would assert that nominees will] arrive at ‘just decisions and fair outcomes’ based on the application of ‘life experience’ to the ‘rapidly changing times.’ The so-called empathy standard is not an appropriate basis for selecting a Supreme Court nominee.” His view is shared by the philosophers whose work underlies our legal theory. Thomas Hobbes, for example, in a perfect encapsulation of his “rational being,” believed that the ideal judge is supposed to “divest himself of all fear, anger, hatred, love, and compassion.” And as James Madison reminds us, judges have a duty to render decisions according to the law as enacted and not according to “constructions adapted on the spur of occasions, and subject to the vicissitudes of party or personal ascendancies.”
I submit to you that, to the extent there is a place for empathy within the judiciary, it is not a tool for interpreting the law. Instead, the proper place for empathy may lie only in those areas to which the law has explicitly granted judges discretion. These areas are many. As an illustration, judges can listen to litigants to ensure they are heard during proceedings. Or a judge might allow a late filing if it is clear that the lawyer deserves leeway; on the other hand, a judge who believes a lawyer is playing fast and loose may deny such a request. In making similar decisions, a judge, of course, relies on the standards provided by the law. But she also relies—albeit sometimes unconsciously—on her understanding of the facts according to her perspective.
In my view, passion and empathy must not color our interpretation of the law. Judges must apply the law “without respect to persons and do equal right to the poor and to the rich.” Justice Brennan is correct to identify the law as sterile and bureaucratic. Sterility and bureaucracy ensure that we apply the law as enacted and implemented by the political branches. The proper place for passion is not in the courts, but in those branches.
Perhaps William Blackstone—on the importance of a neutral judiciary—said it best:
Were [the judiciary] joined with the legislative, the life, liberty, and property, of the subject would be in the hands of arbitrary judges, whose decisions would then be regulated only by their own opinions, and not by any fundamental principles of law; which, though legislators may depart from, yet judges are bound to observe.
Whether or not we believe passion and empathy to be a relevant tool in the interpretation of law, the application of empathy is undoubtedly limited by the technological developments within the judiciary. Criminal sentencing AI—and other technology like it—helps us make more accurate decisions. These programs are capable of processing more data than can humans, and, so long as they are designed carefully, they may help remove the biases that are inherent in human jurisprudence. And if they help judges reach the right decision, does it matter that the programs remove a judge’s human element?
Professor Eugene Volokh argues that the result of the adjudication is the most important aspect of the process. To the extent that we retain flawed human elements for procedural benefits, we do a disservice to the litigants. Professor Ian Kerr disagrees. He asserts that results-based criminal AI programs are flawed because they can only consider, according to past data, what is, rather than what should be. The removal of a judge’s humanity locks in cold decision-making and prevents moral growth.
Though judges may cede discretion in order to utilize risk assessment technology, it is a good trade. In all cases, the less guesswork that judges must do, the better. Precise technology does not demand a particular conclusion; instead, it allows judges to make better informed decisions.
We end where we began. Technological growth within the legal field is coming whether we like it or not. Law firms, law enforcement, and some state judges utilize AI programs already. How will this growth affect the future judiciary? Will we see a 61-39 confirmation vote for Jill Watson? Certainly not—that vote isn’t nearly close enough to be realistic. In all seriousness, though, one of our first ventures into judicial AI—criminal sentencing—will substantially alter sentencing procedure going forward. Our immediate obligation is to ensure that when these programs make their way to the federal courts, the models are accurate and unprejudiced. Crime varies greatly from jurisdiction to jurisdiction; accuracy must be verified, not assumed. Likewise, though machines might not be biased, their creators certainly are. Before we begin to sentence others according to predictive models, we must make sure that our predictions comply with the highest standards of both our science and our constitution.
Judges will have access to ever-more-accurate risk assessment data as models become more sophisticated. But does sophistication in technology imply obsolescence of the judiciary? Certainly not. Science aids the application of law; it allows legislators to enact more specific laws, and it allows judges to apply these laws with more precision. As Justice Holmes said: “[The] ideal system of law should draw its postulates and its legislative justification from science.”
Nor is the increasing role of science in the judiciary exclusive of empathy. Our shared experiences aid us in understanding a diversity of fact patterns and assist us in the exercise of discretion when discretion is called for by the law. Science reminds us that we must make precise and accurate decisions, ones that are neutral and objective. Though judges come from different backgrounds, their backgrounds do not change science or the law.
But the onset of AI in the courtroom raises more questions than it answers. Whether or not we answer those questions, the technology will advance nevertheless. Other issues will inevitably arise, ones that we will have to address in navigating this artificial intelligence evolution. I would not call the level of urgency “a state of emergency,” but it is unmistakably pressing. And so, we must attempt to understand and guide the technological evolution before us. We must learn from the several states and their prosecutorial experiments. We must recognize the accuracy and precision that science offers to teach. And we must bear in mind the teachings of our philosophical predecessors so that we might control the implementation of predictive technology in the judiciary, rather than be overtaken by it.
AI is coming to the judicial process. It comes first as another tool in the judge’s toolbox, though it someday might evolve to replace the judge altogether. We need to be prepared—but perhaps we ought to carefully study the philosophical foundations of our nation’s history and our jurisprudence as much as we study computer science.
For 2008–2015, Washington and Lee ranked the Houston Business and Tax Law Journal eighth of American tax law publications. See W&L Law Journal Rankings, Wash. & Lee Sch. L., https://managementtools4.wlu.edu/LawJournals/ [https://perma.cc/U9HS-DAV5] (last visited Feb. 19, 2020) (select “Taxation” from the “Subject” menu; then select “United States” from the “Country” menu; then click the “Access Prior Years” box; then select the years 2008–2015 in the “Combined Scores” column; then click “Submit”).
Confirmation Hearing on the Nomination of John G. Roberts, Jr. to Be Chief Justice of the United States: Hearing Before the S. Comm. on the Judiciary, 109th Cong. 56 (2005) (statement of John G. Roberts, Jr., nominee to be Chief Justice of the United States); see also Jacob Shafer, MLB Can Learn from Independent League’s ‘Robot Umpire’ Experiment, Bleacher Rep. (July 18, 2016), https://bleacherreport.com/articles/2652109-mlb-can-learn-from-independent-leagues-robot-umpire-experiment [https://perma.cc/C2AG-86UQ].
Adam Kilgore, How the Saints-Rams No-Call Changed the NFL, Wash. Post (Sept. 15, 2019, 6:00 AM), https://www.washingtonpost.com/sports/2019/09/15/how-saints-rams-no-call-changed-nfl/ [https://perma.cc/ZP5D-TH4B] (reporting on the 2019 game between the New Orleans Saints and the Los Angeles Rams in which referees failed to call “blatant pass interference”).
See Kirk C. Jenkins, Making Sense of the Litigation Analytics Revolution, Prac. Law., Oct. 2017, at 58, 58 (discussing use of predictive systems in technology companies); Kevin Miller, Total Surveillance, Big Data, and Predictive Crime Technology: Privacy’s Perfect Storm, 19 J. Tech. L. & Pol’y 105, 116–17 (2014) (discussing examples of predictive systems used by law enforcement agencies); Michael J. Pencina & Eric D. Peterson, Editorial, Moving from Clinical Trials to Precision Medicine: The Role for Predictive Modeling, 315 JAMA 1713, 1713–14 (2016) (commenting on and advocating for the use of predictive modeling in medical care).
Exec. Order No. 13,859, 84 Fed. Reg. 3967 (Feb. 14, 2019).
5 Ben Goertzel et al., Engineering General Intelligence, Part 1, at 2–3 (Kai-Uwe Kühnberger ed., 2014).
See Adam Pasick, The Magic That Makes Spotify’s Discover Weekly Playlists So Damn Good, Quartz (Dec. 21, 2015), https://qz.com/571007/the-magic-that-makes-spotifys-discover-weekly-playlists-so-damn-good/ [https://perma.cc/N8VN-3VFQ]; Stephen Shankland, Meet Tesla’s Self-Driving Car Computer and Its Two AI Brains, CNET (Aug. 20, 2019, 11:59 AM), https://www.cnet.com/news/meet-tesla-self-driving-car-computer-and-its-two-ai-brains/ [https://perma.cc/3SCW-RUGH]; Leo Sun, Apple Is Quietly Expanding Its Artificial Intelligence Ecosystem, Motley Fool (Jan. 23, 2020, 12:30 PM), https://www.fool.com/investing/2020/01/23/apple-is-quietly-expanding-its-artificial-intellig.aspx [https://perma.cc/7BEP-JSVK].
See Karen Hao, How Apple Personalizes Siri Without Hoovering Up Your Data, MIT Tech. Rev. (Dec. 11, 2019), https://www.technologyreview.com/s/614900/apple-ai-personalizes-siri-federated-learning [https://perma.cc/2JPT-DNR4]; Sun, supra note 7.
Saqib Shah, Hinge’s AI Will Use First Date Feedback to Improve Matches, Engadget (Oct. 16, 2018), https://www.engadget.com/2018/10/16/hinge-we-met-first-date-ai/ [https://perma.cc/6QLY-L98Y].
See Pencina & Peterson, supra note 4, at 1714; W. Nicholson Price II, Black-Box Medicine, 28 Harv. J.L. & Tech. 419, 429–37 (2015).
See Miller, supra note 4.
See John G. Browning & Christene Krupa Downs, The Future Is Now: The Rise of Artificial Intelligence in the Legal Profession, 82 Tex. B.J. 508, 508–09 (2019).
Sam Skolnik, Artificial Intelligence Creeps into Big Law, Endangers Some Jobs, Bloomberg L. (Jan. 22, 2019, 3:56 AM), https://news.bloomberglaw.com/business-and-practice/artificial-intelligence-creeps-into-big-law-endangers-some-jobs [https://perma.cc/537H-X8DF]. Interestingly, France has gone the other way. See Jason Tashea, France Bans Publishing of Judicial Analytics and Prompts Criminal Penalty, ABA J. (June 7, 2019), http://www.abajournal.com/news/article/france-bans-and-creates-criminal-penalty-for-judicial-analytics [https://perma.cc/AU2S-6RFA].
Robert Ambrogi, For Legal Research, Brief Analysis Is the New Vogue, Above L. (July 22, 2019, 12:43 PM), https://abovethelaw.com/2019/07/for-legal-research-brief-analysis-is-the-new-vogue/ [https://perma.cc/KN9E-8T7J]; Roy Strom, New Data Analytics Tool Knows Every Federal Judge’s Favorite Cases, Law.com (Nov. 29, 2018, 8:00 AM), https://www.law.com/2018/11/29/new-data-analytics-tool-knows-every-federal-judges-favorite-cases/ [https://perma.cc/REQ7-KBCF].
Recently, after analyzing caselaw from the European Court of Human Rights, one “AI judge” reached the same verdict as the court 79% of the time. Ziyaad Bhorat, Do We Still Need Human Judges in the Age of Artificial Intelligence?, openDemocracy (Aug. 8, 2017), https://www.opendemocracy.net/en/transformation/do-we-still-need-human-judges-in-age-of-artificial-intelligence/ [https://perma.cc/SW9G-AJHB]. An AI trained on American legal data “successfully predicted 88% of prosecutor decisions, 82% of verdicts in asylum cases and 70% of US Supreme Court rulings.” Press Release, Soc. Mkt. Found., “Robot Judges” Are Coming – MPs Need to Set Rules for Them (June 7, 2018), http://www.smf.co.uk/press-release-robot-judges-coming-mps-need-set-rules/ [https://perma.cc/9HS8-3AX5].
Lon L. Fuller, The Case of the Speluncean Explorers, 62 Harv. L. Rev. 616 (1949).
Symposium, The Case of the Speluncean Explorers: A Fiftieth Anniversary Symposium, 112 Harv. L. Rev. 1834 (1999).
See John F. Manning, The Absurdity Doctrine, 116 Harv. L. Rev. 2387, 2466–68 (2003) (explaining that many statutory interpretation questions are answered by determining the “applicable background convention[s] against which” a provision was enacted).
See John Greil, Note, Second-Best Originalism and Regulatory Takings, 41 Harv. J.L. & Pub. Pol’y 373, 379 (2018) (noting that constitutional questions that were “uncontested but relied upon” can be answered by looking to the legal background at the time of enactment).
Josh Blackman, The Path of Big Data and the Law, in Big Data, Big Challenges in Evidence-Based Policy Making 67, 69–71 (2015).
Richard A. Posner, How Judges Think 74 (2008).
Blackman, supra note 20.
See State v. Loomis, 881 N.W.2d 749, 758–59, 759 n.23 (Wis. 2016). To my knowledge, criminal risk assessment programs have not yet been introduced to the federal courts, though I am sure they will be here soon enough. For a discussion of how these resources could be helpful, see Nathan L. Hecht, Chief Justice, Supreme Court of Tex., The State of the Judiciary in Texas, Address to the 86th Texas Legislature (Feb. 6, 2019), https://www.txcourts.gov/media/1443500/soj2019.pdf [https://perma.cc/9J3D-ML33].
Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 809, 811–13 (2014).
See Allen B. Downey, Think Bayes: Bayesian Statistics Made Simple 3–5 (2012).
Loomis, 881 N.W.2d at 753.
Id. at 754, 757.
Id. at 770–72.
Id. at 773–74 (Roggensack, C.J., concurring).
Id. at 774 (Abrahamson, J., concurring).
See Starr, supra note 24, at 812–14.
See Vyacheslav Polonski, AI Is Convicting Criminals and Determining Jail Time, but Is It Fair?, World Econ. F. (Nov. 19, 2018), https://www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/ [https://perma.cc/W8EU-YVWM] (discussing embedded bias in AI models).
Arizona is one state with a redistricting program, but there are others. Let the Voters Choose: Solving the Problem of Partisan Gerrymandering, Comm. for Econ. Dev. (Mar. 13, 2018), https://www.ced.org/reports/solving-the-problem-of-partisan-gerrymandering [https://perma.cc/HAD8-B2FP].
See Thomas R. Lee & James C. Phillips, Data-Driven Originalism, 167 U. Pa. L. Rev. 261, 289 (2019) (describing corpus linguistics).
See Sarah Picard-Fritsche et al., Ctr. for Court Innovation, The Criminal Court Assessment Tool: Development and Validation, at iii–iv (2018), https://www.courtinnovation.org/sites/default/files/media/documents/2018-02/ccat_validation.pdf [https://perma.cc/5QF6-FZA7].
Cf. City of Los Angeles Dep’t of Water & Power v. Manhart, 435 U.S. 702, 705–10 (1978). In fact, risk assessment tools do rely on actuarial data. See Vincent Southerland, With AI and Criminal Justice, the Devil Is in the Details, Am. C.L. Union (Apr. 9, 2018, 11:00 AM), https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data [https://perma.cc/NNP6-2RK2].
U.S. Sentencing Guidelines Manual § 5H1.10 (U.S. Sentencing Comm’n 1987).
See Julia Angwin et al., Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks., ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/VKB2-J5PR]; Starr, supra note 24, at 819; Hannah Sassaman, Opinion, Artificial Intelligence Is Racist yet Computer Algorithms Are Deciding Who Goes to Prison, Newsweek (Jan. 24, 2018, 9:49 AM), https://www.newsweek.com/ai-racist-yet-computer-algorithms-are-helping-decide-court-cases-789296 [https://perma.cc/8YYY-44UC]; Southerland, supra note 36.
Buck v. Davis, 137 S. Ct. 759, 778 (2017).
For the People: 18 Miles Outside of Roanoke (ABC television broadcast Mar. 27, 2018).
Antonin Scalia & Bryan A. Garner, Reading Law: The Interpretation of Legal Texts 168, 170, 174 (2012).
Mass. Const. art. XXX.
John F. Irwin & Daniel L. Real, Unconscious Influences on Judicial Decision-Making: The Illusion of Objectivity, 42 McGeorge L. Rev. 1, 6–7 (2010).
Evan Thomas, First: Sandra Day O’Connor 268 (2019).
Sotomayor Explains “Wise Latina” Comment, CBS News (July 14, 2009, 9:57 AM), https://www.cbsnews.com/news/sotomayor-explains-wise-latina-comment/ [https://perma.cc/D2ZF-PRSU]. Justice Sotomayor’s statement is better understood with a little context. She explicitly stated that she does not mean that any “ethnic, racial or gender group has an advantage in sound judging.” Id. Rather, Justice Sotomayor believes that our unique life experiences help us to better understand some sets of facts more than others. Id.
James Fallows, Obama and Roberts: The View From 2005, Atlantic (May 25, 2012), https://www.theatlantic.com/politics/archive/2012/05/obama-and-roberts-the-view-from-2005/257624/ [https://perma.cc/4HW9-KZYC].
Stephen Wizner, Passion in Legal Argument and Judicial Decisionmaking: A Comment on Goldberg v. Kelly, 10 Cardozo L. Rev. 179, 179 (1988).
Grassley Statement on the President’s Nomination of Merrick Garland to the U.S. Supreme Court, Chuck Grassley (Mar. 16, 2016), https://www.grassley.senate.gov/news/news-releases/grassley-statement-presidents-nomination-merrick-garland-us-supreme-court [https://perma.cc/ZJX5-P6W9].
Thomas Hobbes, Leviathan, in The Ethics of Hobbes: As Contained in Selections from His Works 47, 275 (1898).
9 The Writings of James Madison 372 (Gaillard Hunt ed., 1910).
28 U.S.C. § 453 (2018).
1 William Blackstone, Commentaries *269.
Eugene Volokh, Chief Justice Robots, 68 Duke L.J. 1135, 1161–63 (2019); Kaveh Waddell, Can Judging Be Automated?, Axios (Nov. 17, 2018), https://www.axios.com/artificial-intelligence-judges-0ca9d45f-f7d3-43cd-bf03-8bf2486cff36.html [https://perma.cc/CE86-TDLE].
Ian Kerr & Carissima Mathen, Chief Justice John Roberts Is a Robot 9–10 (2014) (unpublished manuscript), http://robots.law.miami.edu/2014/wp-content/uploads/2013/06/Chief-Justice-John-Roberts-is-a-Robot-March-13-.pdf [https://perma.cc/3US5-QLZA]; Waddell, supra note 53.
Oliver Wendell Holmes, Speech at a Dinner of the Harvard Law School Association in Honor of Professor C.C. Langdell (June 25, 1895), in Oliver Wendell Holmes, Collected Legal Papers 138, 139 (1920).