I. Introduction
The United States is home to over 2 million prisoners.[1] China, in comparison, has fewer than 1.7 million prisoners despite having over four times the population.[2] The difference becomes more drastic when comparing prison population rates. As of 2018, the United States had the highest incarceration rate in the world—629 prisoners per 100,000 people; China had an incarceration rate of 119 per 100,000 people.[3] Unfortunately, the large prison population in the United States is nothing new. The United States has experienced mass incarceration over recent decades, which is, in part, a result of the failed attempts at prison reform.[4] Prison reform legislation resulted in mass incarceration because it focused on the “implementation of stricter punishment” rather than reducing the prison population.[5] However, the focus is starting to shift.[6] Instead of concentrating on incarceration, the focus is shifting towards prison population reduction.[7]
In light of this shift, the Trump Administration enacted the First Step Act of 2018 (FSA) to combat mass incarceration by granting early release to federal prisoners who have a qualifying recidivism risk score.[8] More specifically, Bureau of Prisons (BOP) personnel give eligible federal prisoners a score based on their calculated recidivism risk.[9] To calculate the score, BOP personnel use the Prisoner Assessment Tool Targeting Estimated Risk and Need, also known as PATTERN.[10] After calculating the PATTERN score, BOP personnel divide federal prisoners into two groups based on the score.[11] One group consists of federal prisoners who can earn time credits under the FSA’s time-credit system and apply the credits for early release.[12] The second group consists of federal prisoners who cannot.[13]
The FSA could help counter the mass incarceration problem by offering federal prisoners with low recidivism risk a chance to qualify for early release. However, the FSA has failed to achieve this for two reasons. First, the BOP created an algorithm that arguably violates equal protection. Second, the courts wrongly interpreted when the BOP was required to implement the time-credit system.
To start, Part II of this Comment details prison reform legislation history. First, Part II shows how legislation aimed at crime deterrence through imprisonment caused the mass incarceration problem. Then, Part II turns to the FSA, the first prison reform act intended to reduce the prison population by releasing federal prisoners with low recidivism risk.[14] Finally, Part II uncovers two parts of the FSA—the time-credit system and the PATTERN algorithm—that were supposed to reduce recidivism.
Understanding the time-credit system and PATTERN is essential to appreciate Part III of this Comment, which explores the legal basis behind the shortcomings of the FSA. First, Part III of this Comment discusses equal protection law. Part III focuses on disparate impact because statistical evidence shows PATTERN disproportionately affects different races. Second, Part III discusses statutory interpretation law because the shortcoming with the time-credit system stemmed from a statutory interpretation issue. Finally, in Part III, this Comment discusses how the majority of the federal courts interpreted implementation compared to a minority of courts that suggested the correct interpretation.
Last, Part IV of this Comment discusses how the FSA took two steps back from achieving recidivism reduction. First, the FSA failed to achieve its goal because PATTERN arguably violates equal protection since it has a disparate impact. The discussion provides a framework federal prisoners could use to argue an equal protection claim. The framework is a suggested expansion of equal protection law in the context of algorithms in the criminal justice system. Under the proposed framework, if a petitioner shows an algorithm’s disparate impact and provides statistical evidence, the petitioner does not have to prove the intent that the law traditionally requires for a disparate-impact claim. Instead, the burden shifts to the federal government to explain the disparate impact. Finally, a judge evaluates the evidence and determines if the algorithm violates equal protection.[15] This Comment points out that the BOP could equalize PATTERN’s error rates so that PATTERN treats each race equally, avoiding an Equal Protection Clause violation. However, balancing the error rates has drawbacks.
Second, the FSA failed to achieve its goal because the time‑credit system had implementation issues. The discussion shows how the courts should have interpreted the implementation of the FSA’s time-credit system. This Comment argues that the courts should have applied Chevron to interpret the statute rather than allow the BOP to delay implementing the time-credit system. Had the courts applied Chevron, they would have complied with Congress’s expressed intent that the BOP implement the time‑credit system immediately. Together, PATTERN’s arguable equal protection violation and the courts’ erroneous interpretation of the time‑credit system prevented the FSA from achieving its goals.
II. Prison Reform Legislation in the United States
Because prison reform legislation has been an attempt to reduce mass incarceration, Part II will begin with a discussion of the history of prison reform that culminates with the FSA. Then, Part II will examine the FSA, which was supposed to be a “first step” in prison legislation because it was the first act to focus on reducing recidivism. However, Part IV of this Comment argues that the FSA took two steps back.
A. Prison Overpopulation: A Problem Created by Legislation
The U.S. Congress has implemented a series of prison-reform acts to combat the mass-incarceration issue.[16] A brief historical background of prison-reform legislation is essential to understand how the acts shifted from incarcerating individuals in an attempt to reduce crime to trying to reduce the large prison population that resulted.[17] The FSA was the first of the prison-reform acts centered around reducing the prison population by releasing prisoners with the lowest risk of recidivism.[18] Unfortunately, while Congress enacted the FSA with an admirable goal of reducing the prison population, the FSA had implementation issues and is possibly unconstitutional.[19] Therefore, understanding the history of prison reform is vital in understanding why legislation like the FSA is needed, why the FSA failed to achieve its goals, and the effect of the FSA’s failure on the future of prison reform acts.
Prison-reform legislation has ranged from the Anti-Drug Abuse Act of 1986 and the 1994 Crime Bill to the Second Chance Act of 2007, the Fair Sentencing Act of 2010, and the First Step Act of 2018.[20] The Anti‑Drug Abuse Act of 1986 mandated a “100 to 1 sentencing ratio for crack versus powder cocaine,” which effectively increased incarceration rates through higher prison sentences for minorities[21] because crack was a common drug among minorities.[22] Similarly, the 1994 Crime Bill concentrated prison reform on increased imprisonment.[23] However, prison-reform goals shifted in 2007 with new legislation.[24] The Second Chance Act of 2007 focused on strategies “to reduce recidivism and increase public safety, as well as to reduce corrections costs for state and local governments.”[25] But the Second Chance Act of 2007 did not significantly reduce the recidivism rate because its main focus was on the research and development of strategy, not implementation.[26] In 2010, the Obama Administration signed the Fair Sentencing Act into law, reducing “the [Anti-Drug Abuse Act of 1986] powder cocaine to crack cocaine ratio to almost eighteen to one.”[27] After this legislation, an opening remained for legislation focused on sentence reform and active recidivism reduction, rather than passive strategy development.[28] The FSA, which uses an algorithm, PATTERN, to predict a prisoner’s recidivism risk, filled that gap.[29]
B. Prison Reform Acts Culminate with One Step Forward: The First Step Act
The FSA, which was signed into law in December 2018 during the Trump Administration, is one of many attempts to reform the criminal justice system.[30] When signed into law, the FSA had two overarching goals: sentencing reform and prison reform.[31] Within sentencing reform, there are three main goals.[32] First, the sentencing reform limits the ability of prosecutors to treat criminal defendants as repeat offenders.[33] Before the FSA, prosecutors could stack or combine charges so that separate offenses could be treated as prior convictions.[34] Second, the sentencing reform “reduces mandatory minimum sentences for repeat offenders,” such as “drug offenders with prior drug convictions,” and the minimum sentences for second and third drug offenses.[35] Third, the sentencing reform grants judges the discretion to give sentences below the statutory minimum.[36] In addition, prisoners incarcerated for crack-cocaine offenses before 2010 “became eligible to apply for resentencing to a shorter prison term.”[37]
Like sentencing reform, the prison reform goal has three focuses. First, the prison reform focuses on improving prison practices and conditions, “such as eliminating the use of restraints on pregnant women and encouraging placing people in prisons that are closer to their families.”[38] Second, the prison reform goal focuses on reducing the prison population by increasing “good time credit for federal prisoners” and “requir[ing] that the Bureau of Prisons put low-risk individuals in home confinement for the maximum allowed.”[39] Third, the FSA “authorizes $250 million for five years in funding for rehabilitative programs within federal prisons, which are currently lacking in any meaningful job training programs, education, or drug treatment.”[40]
The FSA is prison reform legislation with the purpose of reducing the population of federal prisons.[41] While the FSA’s goals of sentencing and prison reform could benefit society, implementation issues and potential constitutional violations stop the FSA from achieving those goals.[42] To reduce prison populations, the FSA focuses on the recidivism risk of individual prisoners.[43] Measuring recidivism risk is important because “[t]he reality is, many formerly incarcerated people do reoffend and many return to prison.”[44] Congress delegated the creation of a risk assessment tool to the Attorney General, who made PATTERN, an algorithm that determines recidivism risk.[45] PATTERN considers gender and factors that correlate with race to assess recidivism risk, which creates a potential issue: an equal protection violation.[46] The PATTERN results were supposed to lead to an inmate earning good time credits that would result in an early release.[47] In fact, Congress expressly intended that the BOP act to apply the time credits.[48] However, the BOP acted contrary to congressional intent.[49] Therefore, the FSA also suffers from an implementation problem.[50] To understand why the FSA has implementation and constitutional issues, one needs to understand the PATTERN algorithm and the time-credit system.
1. The First Step Act and the PATTERN Algorithm
To understand the PATTERN algorithm and its problems, one needs background information. First, this section details the historical background of risk-needs assessment tools in the criminal justice system, which is necessary to understand how the FSA PATTERN algorithm acts as such a tool. Second, this section discusses the general and legal problems with using algorithms in the criminal justice system—necessary background for understanding where PATTERN fell short. Last, this section provides an understanding of the PATTERN algorithm itself.
a. Historical Background of Risk-Needs Assessment Tools. Risk-needs assessment tools are used in the criminal justice system to predict whether or not an individual will commit a crime.[51] Generally, risk-needs assessment tools are questionnaires that consider risk factors measuring the offender’s risk, the offender’s rehabilitation need, and how responsive the offender will be to treatment.[52] According to a scoring guideline, the risk factors are scored and totaled.[53] Then, “the total score is associated with a risk level, such as low, moderate, or high risk” of recidivism.[54] Essentially, the risk assessment tools themselves are algorithms “developed through the statistical analysis of a large number of cases to identify significant correlates of recidivism.”[55] The development of risk assessment tools “can involve selecting risk factors through correlations, analyzing regression models, or employing more advanced, machine learning techniques based on computer science.”[56] Many modern risk assessment tools use machine learning to predict recidivism risk.[57]
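The scoring process described above—individual factor scores summed according to a guide, with the total mapped to a risk level—can be sketched in a few lines. The factor names, scores, and cut-off points below are invented purely for illustration and are not drawn from any actual assessment tool.

```python
# Illustrative sketch of a generic risk-needs scoring guide (all
# factors, scores, and cut-offs are hypothetical, not any real tool's).

def score_assessment(factor_scores, cutoffs):
    """Sum individual factor scores and map the total to a risk level.

    cutoffs: (upper_bound_inclusive, label) pairs in ascending order.
    """
    total = sum(factor_scores.values())
    for upper, label in cutoffs:
        if total <= upper:
            return total, label
    return total, cutoffs[-1][1]

# Hypothetical factor scores and cut-off points, for illustration only.
factors = {"prior_arrests": 3, "work_record": 1, "program_participation": 0}
cutoffs = [(2, "low"), (5, "moderate"), (float("inf"), "high")]

total, level = score_assessment(factors, cutoffs)
print(total, level)  # 4 moderate
```

The same shape—additive points plus threshold bands—underlies most questionnaire-style tools, whether the weights come from hand-picked correlations or a fitted statistical model.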
The FSA PATTERN algorithm is not the first attempt to use a risk assessment tool for recidivism risk prediction.[58] In 1928, Ernest Burgess created the first mathematical model to predict recidivism.[59] Burgess identified twenty-two characteristics associated with successful parole.[60] A score of one for a specific factor was equivalent to parole success; a score of zero was equal to parole failure.[61] Unlike Burgess’s use of twenty-two characteristics, other early models of recidivism risk prediction took an approach using approximately seven to twelve features.[62] While the models varied in the number of characteristics used, some characteristics served as a through-line across the models, such as “work record, prior arrests, and psychiatric prognosis.”[63] Today, “work record, prior arrests, and psychiatric prognosis” and “race, age, gender, and socio-economic status” are commonly used in modern models that predict recidivism.[64] Even though race and gender could be quality predictors of recidivism risk, problematically, race is a suspect characteristic and gender is a quasi-suspect characteristic under the Equal Protection Clause of the U.S. Constitution.[65]
b. The Problem with Algorithms. While algorithmic risk assessment tools have become popular within the criminal justice system, algorithms as a prediction method present general problems, and there are limits to their use.[66] First, risk assessment tools classify people based on groups; the issue is that a person’s membership in a group does not mean the person will act like other members of the group.[67] Second, there are issues with implementing risk-needs assessment tools because someone must train the person administering the tool; therefore, “the tool is only as reliable as the person scoring the tool according to a scoring guide protocol.”[68] Third, it is essential that someone evaluate the accuracy of the risk assessment tool.[69] Fourth, anyone using the risk assessment tool must use it “for the purpose for which [it was] designed,” not for any other purpose.[70] Finally, there is also a possibility that using risk assessment tools leads to “racial disparity” because of the risk of bias, which is a manifestation of the problems with using algorithms in the criminal justice system.[71]
Further, risk assessment tools within the criminal justice system can produce bias that is often unexplainable.[72] Still, risk assessment tools are popular within the criminal justice system because algorithms offer some means of objectively answering questions, such as how likely a prisoner is to recidivate, rather than relying solely on the subjective “unstructured judgment of magistrates or judges.”[73] The most significant source of the bias is that risk assessment tools reflect the data put into the algorithm.[74] For example, suppose a person inputs data into the algorithm without adjusting for the increased incarceration rates among a particular racial group caused by how society treated that group in the past.[75] In that case, the algorithm will reflect the increased incarceration rates among that racial group.[76] In other words, the algorithm cannot, on its own, fix social-justice issues; if the data reflects social-justice issues, the algorithm’s output will reflect them as well.[77]
In addition to having biases, “[m]achine learning-based algorithms can . . . amplify the biases of their input data . . . because algorithms learn to make connections between variables that may be correlated, but do not actually have a causal relationship.”[78] Furthermore, there is bias within the math the algorithm is performing.[79] For example, consider an algorithm classifying people within a group based on the likelihood to recidivate.[80] Here, the algorithm will classify people in the group who have more characteristics associated with recidivism as people who are more likely to recidivate because if the algorithm does this, there is less of a chance the algorithm will be wrong.[81] And, because the algorithm’s performance depends on the accuracy rate, the algorithm will try to achieve the highest accuracy rate possible.[82] But despite the bias issues, “[m]any view algorithms as more objective tools than human judgement, because humans, especially in the criminal legal system, have a tendency towards bias.”[83]
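The accuracy-maximizing behavior described above can be made concrete with a toy computation. Every number below is fabricated; the point is only that a single risk cut-off applied to two groups with different score distributions can flag non-reoffending members of one group far more often than the other, even though the threshold itself is race-blind.

```python
# Toy illustration (all data fabricated) of how one accuracy-driven
# cut-off can yield different error rates across groups whose score
# distributions differ.

# (label, score) pairs: label 1 = reoffended after release, 0 = did not.
group_a = [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (1, 6)]
group_b = [(0, 3), (0, 4), (0, 5), (1, 6), (1, 7), (1, 8)]

def false_positive_rate(data, cutoff):
    """Share of people who did NOT reoffend but were flagged high risk."""
    negatives = [score for label, score in data if label == 0]
    return sum(score >= cutoff for score in negatives) / len(negatives)

cutoff = 4  # one threshold applied to everyone
print(false_positive_rate(group_a, cutoff))  # 0.0
print(false_positive_rate(group_b, cutoff))  # 0.666... (2 of 3 flagged)
```

Equalizing these error rates across groups—the fix Part IV considers—would require moving the threshold for one group, which trades overall accuracy for parity.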
c. Legal Issues with Algorithms. There are potential constitutional violations because of the bias and problems with the use of algorithms within the criminal justice system; for example, a critical case, State v. Loomis, assessed using the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm as a risk-needs assessment tool.[84] In Loomis, Eric Loomis received a recidivism score based on the COMPAS risk-needs assessment tool report.[85] As a result of the recidivism score given to Loomis based on the information in the report, the court gave Loomis a sentence of six years.[86] After the sentencing, Loomis had a new hearing where “an expert witness testified that consideration at sentencing of the risk assessment portions of COMPAS runs a tremendous risk of over estimating an individual’s risk and . . . mistakenly sentencing them or basing their sentence on factors that may not apply.”[87] Ultimately, “[t]he judge denied the post-conviction motion” because the court believed it could have reached a similar conclusion even without the use of COMPAS or considering the report generated by the COMPAS risk assessment tool.[88] As a result of the unfavorable ruling, Loomis appealed to the Wisconsin Supreme Court.[89]
On appeal, the Wisconsin Supreme Court “held that a trial court’s use of an algorithmic risk assessment in sentencing did not violate the defendant’s due process rights even though the methodology used to produce the assessment was disclosed neither to the court nor to the defendant.”[90] Furthermore, the Wisconsin Supreme Court warned of the possible constitutional violations that came along with using risk assessment tools in the criminal justice system.[91] Therefore, the court set out a framework to be considered when using risk-needs assessment tools for the early release of prisoners.[92] Under the framework provided by the court, the risk assessment tool, COMPAS, could not be “used: (1) to determine whether an offender is incarcerated; . . . (2) to determine the severity of the sentence, or (3) as the determinative factor in deciding whether an offender can be supervised safely and effectively in the community.”[93] Understanding this framework when analyzing the FSA PATTERN algorithm is essential because the algorithm is used to predict recidivism scores and, ultimately, the prisoner’s early release.[94]
But contrary to the COMPAS algorithm in State v. Loomis, “PATTERN is not just one factor that is weighed in deciding who is eligible for benefits like early release, it is THE factor. Thus, PATTERN is in certain, important ways more influential than COMPAS in determining how long an inmate is incarcerated.”[95] Therefore, when PATTERN is used, because it is the only factor deciding whether or not a prisoner is granted early release from prison, understanding the potential for PATTERN to be a constitutional violation becomes more critical than it was for COMPAS.
d. PATTERN: The First Step Act Algorithm. Under the FSA, Congress delegated to the Attorney General the task of developing a risk-needs assessment tool that could use an algorithm to predict recidivism scores.[96] Congress stipulated that the risk-needs assessment tool was to have objectives whereby “each inmate is assigned a risk of recidivism category and also assessed for any needs they may have to minimize their risk of recidivism.”[97] Furthermore, Congress stated that the risk-needs assessment tool had to classify prisoners according to “minimum, low, medium, or high risk of recidivism.”[98] The Attorney General was tasked not only with developing the risk-needs assessment tool but also with providing “a needs assessment for each inmate, which would result in a tailored determination of the recidivism reduction programming and/or productive activities that would best help to reduce that person’s likelihood of re-offending upon release from prison.”[99] Essentially, inmates who completed the prescribed recidivism reduction programs would earn time credits counting towards pre-release custody or supervised release.[100] As a result, the Attorney General developed a risk-needs assessment tool: PATTERN.[101]
The main goal of PATTERN is to predict “the likelihood that a person will reoffend within the three years following their release from a BOP facility,” also known as recidivism.[102] PATTERN assesses two categories of recidivism: general and violent. General recidivism means “any arrest or return to BOP custody following release” within three years of release from prison.[103] Violent recidivism means “violent arrests following release” within three years of release from prison.[104] PATTERN distinguishes not only between general and violent recidivism but also between genders, with a different algorithm for men and women.[105] PATTERN uses race by breaking “down [accuracy scores] by the race/ethnicity of the inmate. They used the four categories the BOP data used: white, African American, Hispanic, and other.”[106] Next, PATTERN takes dynamic factors, “things an inmate can change, like participation in education classes while incarcerated,” and static factors, “historical things that are unchangeable, such as the inmate’s age at arrest,” to predict recidivism using an algorithm.[107] A few examples of static factors “included in the original PATTERN tool are age at first conviction, whether the crime was ‘violent,’ and whether the inmate was identified as a ‘sex offender.’”[108] Additionally, a few examples of dynamic factors “that PATTERN relies on in making its predictions include an inmate’s participation in drug education and treatment programs, participation in employment, and use of the income earned from that employment for payment toward victim restitution and/or dependents.”[109] Dynamic factors were included because they are known to lower the risk of bias stemming from the inclusion of race.[110]
To determine the risk of general or violent recidivism, PATTERN uses risk levels: minimum, low, medium, and high risk,[111] which are used as “cut-off points” of recidivism risk.[112] Statistics can determine the different risk levels; however, the risk levels are usually a matter of policy.[113] So, arguably, the risk levels are somewhat arbitrary[114] and may reflect discrimination,[115] which is a possible violation of equal protection. While the potentially arbitrary risk levels might violate equal protection, this Comment will focus on PATTERN’s arguable equal protection violation because the algorithm uses gender[116] and factors that correlate with race.[117] Race and gender are subject to different levels of scrutiny: race is considered a suspect characteristic,[118] and gender is regarded as a quasi-suspect characteristic.[119] Thus, because of the use of gender and factors that correlate with race, the PATTERN algorithm potentially violates the Equal Protection Clause.
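Because the cut-off points are largely a policy choice rather than a statistical necessity, where they are drawn changes who lands in each risk level. The sketch below uses invented scores for two hypothetical groups (not BOP data) to show how moving a single cut-off shifts the share of each group labeled high risk.

```python
# Synthetic illustration (all numbers invented, not BOP data) of how
# the placement of a risk-level cut-off point changes the share of
# each group labeled "high risk" -- the policy choice described above.

scores_group_a = [10, 14, 18, 22, 26, 30, 34, 38]
scores_group_b = [14, 18, 22, 26, 30, 34, 38, 42]

def share_high_risk(scores, cutoff):
    """Fraction of a group whose score meets or exceeds the cut-off."""
    return sum(s >= cutoff for s in scores) / len(scores)

for cutoff in (20, 30):
    a = share_high_risk(scores_group_a, cutoff)
    b = share_high_risk(scores_group_b, cutoff)
    print(f"cutoff={cutoff}: group A {a:.3f}, group B {b:.3f}")
```

With the lower cut-off, 62.5% of group A and 75% of group B are labeled high risk; with the higher one, 37.5% and 50%. Nothing in the statistics compels either choice, which is why the text calls the levels somewhat arbitrary.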
2. The First Step Act and the Time-Credit System
To understand the implementation issues of the FSA time-credit system, one needs to understand the system and how it works with PATTERN. Thus, this section explains the time credit system and its interaction with PATTERN. Then, this section details how the FSA awards time credits.
a. PATTERN and Earned Time Credits Work Together. PATTERN and the time-credit system were supposed to work in conjunction for the FSA to reduce recidivism.[120] First, the PATTERN algorithm assesses “each federal prisoner’s likelihood of recidivism” by identifying the needs of each prisoner, including needs such as “anger management, substance abuse education, [and] vocational training.”[121] Then, the BOP is supposed to provide evidence-based recidivism reduction (EBRR) programming that would meet the prisoner’s needs and “was based on evidence that it would reduce recidivism” as well as productive activities (PAs) for inmates whose needs have already been met.[122] The BOP provides about seventy EBRR programs and PAs available to prisoners in federal prisons.[123] The EBRR programs and PAs include cognitive-behavioral therapy, financial education, wellness groups, and mental health and substance abuse therapy.[124] The PATTERN risk-needs assessment tool is important because the PATTERN score determines whether a prisoner is eligible to spend time credits, which allow for early release.[125] In other words, PATTERN splits federal prisoners “into two groups: people who can get credit for doing this programming and get out early, and people who can’t.”[126]
Second, the time-credit system is an incentive for the federal prisoners to partake in the EBRR programming.[127] A prisoner’s participation in the EBRR programming should address the prisoner’s needs, thereby reducing the prisoner’s recidivism risk and resulting in the prisoner qualifying for early release.[128] Every thirty days a federal prisoner participates in the EBRR programming, that prisoner can earn time credit.[129] However, a federal prisoner’s PATTERN score affects how the BOP awards the time credits.[130]
b. How Earned Time Credits Are Awarded. The FSA awards time credits differently based on a federal prisoner’s PATTERN score. The first difference based on PATTERN scores relates to which prisoners can earn and use time credits. Under the FSA, federal prisoners given a PATTERN score of low or minimum can collect time credits and apply the time credits to qualify for early release.[131] However, federal prisoners with a high or medium PATTERN score can only collect time credits.[132] A second difference based on PATTERN scores lies in how much a time credit is worth. Prisoners with high or medium PATTERN scores “get one [time credit] for every three days of [EBRR] programming.”[133]
On the other hand, prisoners with a low or minimum PATTERN score “get a half [time credit] per day.”[134] Therefore, the PATTERN score affects how many time credits a prisoner can earn and whether a prisoner can use them. If a prisoner can earn and use time credits, they “could trade in their [time credits] for up to a year off their sentence, or for more home confinement or more halfway house.”[135] Under the FSA, Congress expressed its intent that the BOP phase in the time-credit system over two years, from January 15, 2020, to January 15, 2022.[136] Therefore, as of January 15, 2022, qualifying federal prisoners were supposed to have the opportunity to earn and apply their earned time credits.[137] However, there was an issue with interpreting when the FSA required the BOP to award time credits.
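The time-credit arithmetic described in this subsection can be summarized in a short sketch. The rates follow the sources quoted above (one credit per three days of programming for medium or high scores; half a credit per day for low or minimum scores); the function itself is illustrative, not the BOP's actual calculation.

```python
# A minimal sketch of the time-credit rules as the quoted sources
# describe them (illustrative, not the BOP's actual computation):
# low/minimum scorers earn faster AND may apply credits toward
# early release; medium/high scorers earn slower and may not.

def earned_time_credits(days_of_programming, pattern_level):
    """Return (credits_earned, can_apply_toward_early_release)."""
    if pattern_level in ("minimum", "low"):
        return days_of_programming * 0.5, True
    if pattern_level in ("medium", "high"):
        return days_of_programming / 3, False
    raise ValueError(f"unknown PATTERN level: {pattern_level}")

# 30 days of EBRR programming at two different risk levels.
print(earned_time_credits(30, "minimum"))  # (15.0, True)
print(earned_time_credits(30, "high"))     # (10.0, False)
```

The sketch makes the two-group split visible in code: the PATTERN level controls both the earning rate and, critically, whether the credits are usable at all.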
III. Legal Basis for the First Step Act Shortcomings
While PATTERN and the earned time credits were supposed to be complementary aspects of the FSA that worked together to reduce recidivism, both PATTERN and the time-credit system have shortcomings that prevent the FSA from achieving its goals. First, the algorithm arguably violates equal protection. To understand how PATTERN arguably violates equal protection, Section III.A provides background on equal protection law. Second, the BOP did not implement the time-credit system correctly. Finally, because the BOP’s implementation issue was due to the statutory interpretation by a majority of the federal courts, Section III.B gives a framework for statutory interpretation.
A. The PATTERN Algorithm and Equal Protection
Risk-needs assessment tools raise potential constitutional concerns under both due process[138] and equal protection.[139] Due process violations might occur at the sentencing stage when the risk assessment tool is “the only thing the sentence is based on, or even the determinative factor,” so that the judge does not consider the totality of the circumstances when deciding the sentence.[140] This Comment will focus on equal protection violations concerning disparate impact because this Comment argues PATTERN has a disparate impact.[141]
1. Equal Protection and Disparate Impact
Equal protection law, which arises from the Fourteenth Amendment of the U.S. Constitution, protects individuals from states “deny[ing] . . . any person within [a state’s] jurisdiction the equal protection of [its] laws.”[142] Even though equal protection law focuses on protecting the individual person, the law “does not require that the government treat every person exactly the same, [but] it does prohibit discrimination if it is based upon impermissible classifications.”[143] While the Fourteenth Amendment’s Equal Protection Clause applies to the states, in Bolling v. Sharpe, the U.S. Supreme Court extended equal protection to the federal government through the Due Process Clause of the Fifth Amendment.[144]
There are three levels of classification under equal protection law—suspect, quasi-suspect, and not suspect—and each class is subject to a different type of scrutiny.[145] First, race, alienage, and national origin are suspect characteristics subject to strict scrutiny.[146] Second, gender is a quasi-suspect characteristic subject to intermediate scrutiny.[147] Third, disparate impact without intent is a characteristic that is subject to rational basis scrutiny.[148] Under rational basis review, courts are deferential, so they rarely abrogate laws under the Equal Protection Clause.[149]
However, consider a law created with discriminatory intent that also has a disparate impact. In that case, the law will be subject to strict scrutiny, and courts may invalidate it under the Equal Protection Clause. In Washington v. Davis, the Court held that the Equal Protection Clause does not prohibit laws that have a disparate impact so long as the disparate impact was not the law’s purpose.[150] Under Washington v. Davis, an equal protection challenge to a law with a disparate impact requires the plaintiff to show a discriminatory purpose behind the law, meaning the plaintiff must show both a discriminatory effect and an intent to discriminate.[151] Disparate impact, while relevant, is thus not by itself determinative of whether a law violates the Equal Protection Clause. But if the impact is severely disproportionate or the law is applied unevenly, the law can violate the Equal Protection Clause.[152] And sometimes the disparate impact is so conclusive that there is no explanation other than that the governmental body created the law intending it to have a disparate impact.[153]
a. Statistical Evidence Can Prove the Disparate Impact Intent Element. The element of intent makes it difficult to bring an equal protection claim challenging an algorithm with a disparate impact—particularly the PATTERN algorithm, which uses factors that correlate with race—but Yick Wo v. Hopkins makes such a claim possible. In Yick Wo, a city ordinance prohibited 310 laundry businesses from operating in wooden buildings unless the city’s board granted an exception to the owner of a laundry business.[154] Chinese people owned 240 of the 310 laundry businesses.[155] When the laundry business owners applied for exceptions, the board granted all but one of the applications from non-Chinese owners.[156] Over 200 Chinese laundry business owners applied, and the board rejected all 200 applications.[157] The Court found that while the law was facially neutral, it was discriminatory in practice, and the board provided no reason for the discrimination.[158] Moreover, statistical evidence showed little to no probability that the exclusion happened by chance, so the statistical evidence was obvious proof of intent.[159] Therefore, the Court struck down the law as a violation of equal protection.[160]
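The statistical intuition behind Yick Wo can be made concrete with a rough hypergeometric calculation. The applicant counts below are approximations of the facts as stated above, and the computation is only illustrative: it asks how likely it would be, if roughly 200 of 280 applicants were Chinese-owned and 200 denials were handed out at random, that every denial would fall on a Chinese applicant.

```python
# Back-of-the-envelope hypergeometric calculation (counts approximated
# from the Yick Wo facts above; illustrative only): the probability
# that 200 denials drawn at random from all applicants would consist
# entirely of Chinese-owned businesses.

from math import comb

chinese_applicants = 200
other_applicants = 80
total_applicants = chinese_applicants + other_applicants
denials = 200

p_by_chance = (comb(chinese_applicants, denials) * comb(other_applicants, 0)
               / comb(total_applicants, denials))

print(p_by_chance)  # vanishingly small -- far below 1e-60
```

A probability this small is what the Court meant by "little to no probability that the exclusion happened by chance": the numbers alone stand in for direct proof of intent.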
b. Peremptory Challenges: Disparate Impact Without the Intent Element. In the context of peremptory challenges, a petitioner can prove racial bias with evidence only somewhat stronger than disparate impact; no separate showing of intent is required.[161] In Batson v. Kentucky and its progeny, the Supreme Court prohibited racially motivated peremptory challenges by either party in civil or criminal cases.[162] The Court laid out a framework for challenging peremptory strikes thought to be based on race.[163] Under this framework, the challenging attorney must make a prima facie case by showing statistically that the strikes are biased, which can be proven by showing disparate impact.[164] If unrebutted, this showing is enough for the court to sustain the challenge.[165] An opponent may rebut the inference with a credible, race-neutral explanation.[166] The explanation does not have to be persuasive; it only needs to show that the challenges do not deny equal protection.[167] The trial judge must then make a finding of fact as to whether the explanation was genuine or pretextual.[168] Batson’s rule that “clear statistical evidence of egregious disparate effect, coupled with little more, stands in for the intent requirement” was recently reaffirmed by the Court in Flowers v. Mississippi.[169]
2. A Difficulty: Equal Protection and Algorithms
A difficulty arises when applying equal protection caselaw to algorithms that have a disparate impact because of the element of intent required for a disparate-impact claim, the use of factors that correlate with race, and the accuracy rate of the algorithm. First, the element of intent required for a disparate-impact claim is difficult to prove for an algorithm because it is a tool that does not “[have] good [n]or evil” intentions.[170] Instead, the algorithm’s human designer, who may not accurately describe the algorithm’s process, influences the algorithm’s preferences.[171] Second, algorithms may use correlates of race, or proxies, instead of using race as an explicit factor.[172] The use of proxies for race makes it difficult to prove that the designer created the algorithm with the intent to have a discriminatory effect or that the algorithm applies very disproportionately or unevenly to different races.[173] Third, algorithms seek to optimize accuracy.[174] Maximization of accuracy becomes an issue because “the most significant variables from the standpoint of predictive accuracy tend to be correlated with protected attributes such as race due to unequally applied criminal justice practices—consider arrest rates or detention practices skewed against African Americans.”[175]
B. The Time-Credit System and Statutory Interpretation
As a matter of statutory interpretation, Congress clearly expressed its intent that the BOP must act to apply time credits and phase in the programs within two years. Under the FSA, qualifying federal prisoners were supposed to have the opportunity to earn time credits—and, for some, a potential early release—beginning January 15, 2020.[176] But despite the precise statutory start date of January 15, 2020, the Office of Inspector General (OIG) “found that the BOP has not applied such statutorily earned time credits to any of the approximately 60,000 eligible inmates who may have completed . . . programs or productive activities.”[177] In addition, the BOP’s failure to award time credits negatively affects federal prisoners “who have earned a reduction in their sentence or an earlier placement in the community.”[178]
The BOP did not follow Congress’s expressed intent that the BOP apply the time credits.[179] Further, courts that heard cases concerning when the BOP had to start awarding time credits erred at the first step of the Chevron doctrine because Congress clearly stated that the BOP must apply the time credits.[180] A court may apply the Chevron doctrine to determine whether an agency’s interpretation of a statute receives deference.[181] Under Chevron, the agency’s interpretation either receives total deference or no deference.[182] The court first determines whether Congress has expressed an intent as to the interpretation by analyzing the statutory construction.[183] When presented with a statutory-construction issue, “[t]he judiciary is the final authority . . . and must reject administrative constructions which are contrary to clear congressional intent.”[184]
When a court interprets a statute, the court should interpret the words by “taking their ordinary, contemporary, common meaning . . . at the time Congress enacted the statute.”[185] In addition, the court must read the words of a statute so that they are consistent with the statutory scheme.[186] Further, the court may consider the statute’s purpose and legislative history to help determine its meaning.[187] If the plain meaning of the words is unambiguous, “judicial inquiry is complete.”[188] If Congress has clearly expressed an intent, that intent must control.[189] If the agency’s interpretation differs from the congressional intent, the court will follow Congress and not the agency.[190] Only if the statute’s language is ambiguous or silent on the issue of congressional intent should the court defer to the agency’s interpretation—that is, if the court finds the agency’s interpretation to be reasonable.[191]
1. The Majority’s Interpretation
When faced with the issue of whether the BOP had to apply time credits during the FSA’s two-year phase-in period for the time-credit system, the majority of federal district courts found the answer to be no.[192] Fleming v. Joseph exemplifies the majority view. There, Marcus D. Fleming sought an order for the BOP to recompute the time credits that he earned under the FSA during the two-year phase-in period.[193] First, the court noted that Fleming “ha[d] not exhausted the grievance procedure as to when the BOP had to apply the FSA credits to him or as to the calculation of the credits.”[194] The court then found that even if Fleming had exhausted the grievance procedure, the FSA time credits “do not go into effect until January 15, 2022.”[195] The court reasoned that the FSA statute was clear and “gives the BOP two years after it completes the risk and needs assessment for each prisoner to ‘phase-in’ the program implementation,” which does not expire “until January 2022.”[196] Thus, the court dismissed the case on ripeness grounds.[197] While the majority view denied federal prisoners time credits and allowed the BOP to avoid applying them, two critical cases ruled the other way.
2. The Minority’s Interpretation
Two important cases, Goodman v. Ortiz and O’Bryan v. Cox, correctly held that, under the FSA, the BOP had to implement the time-credit system gradually and could not wait until January 15, 2022. In Goodman v. Ortiz, a federal prisoner, Rabbi Aryeh Goodman, sought the immediate application of FSA time credits during the two-year phase-in period.[198] Following the FSA’s requirements, Goodman successfully participated in programming activities and earned 240 credits, which, if applied, would result in his release.[199] However, the BOP argued that it “is not required to award any PATTERN earned credit until the two-year phase-in period under the statute has expired, to wit, January 15, 2022.”[200] The court’s reasoning focused on statutory interpretation and the plain meaning of the statute.[201] Based on the statute’s plain meaning and the statutory scheme, the court found that “[t]he ordinary meaning of ‘phase-in’ is to implement gradually.”[202] Therefore, the court immediately directed the BOP to “apply [Goodman’s] Earned Time credit[s].”[203]
Like Goodman, the federal prisoner in O’Bryan v. Cox, Howell Dean O’Bryan, Jr., sought relief from the inability to earn time credits and an early release from prison.[204] O’Bryan had earned “43.75 days of time credits,” but the BOP would “not apply any time credits until after January 15, 2022.”[205] The court found that the BOP must apply the time credits because withholding them violates congressional intent.[206] Unlike the majority of federal district courts, the O’Bryan court based its reasoning on the Chevron doctrine.[207] The Goodman court, relying on statutory interpretation, reached the correct result,[208] but it did not go far enough. Goodman should have grounded its reasoning in the Chevron doctrine like O’Bryan, as the BOP’s failure to apply the time credits, and its ultimate failure to implement the FSA correctly, violates congressional intent.[209]
IV. Did the First Step Act Lawfully Take One Step Forward?
Because of the courts and the BOP, qualifying federal prisoners are not serving shorter prison sentences as they were supposed to under the FSA. Thus, the FSA failed to take a step forward towards achieving its goal: the BOP created an algorithm that arguably violates equal protection, and the courts wrongly interpreted when the BOP was supposed to implement the time-credit system.
A. The First Step Back: PATTERN Arguably Violates Equal Protection
Because of the evidence that PATTERN has a disparate impact among racial groups, the equal protection issue is worth arguing to the courts. An affected prisoner would have a strong showing of statistical evidence that the BOP’s use of PATTERN results in a disparate impact, thus violating equal protection under Yick Wo v. Hopkins.[210] The claim becomes challenging, however, because disparate impact in the equal protection context requires intent.[211] An affected prisoner should therefore argue that a Batson v. Kentucky disparate impact standard, which focuses on effect and statistical evidence rather than intent, should apply to algorithms in the criminal justice system.
1. A Wedge for Equal Protection Expansion
Even though there is “no clear principle” governing racial proxies used in algorithms,[212] it is worth arguing to the courts that PATTERN violates equal protection by having a disparate impact.[213] Yick Wo v. Hopkins supports such an argument because it extends equal protection to instances in which statistical evidence itself proves the intent behind a disparate impact.[214] And because intent can be hard to prove under the equal protection disparate impact standard, an affected prisoner should argue that courts should adopt a standard similar to that in Batson v. Kentucky when evaluating PATTERN or other algorithms in the criminal justice system.[215]
Although the Equal Protection Clause of the Fourteenth Amendment applies only to the states, an affected prisoner could still bring a claim against the federal government because Bolling v. Sharpe extends equal protection principles to the federal government.[216] The use of gender complicates the analysis because courts review gender classifications under a lower level of scrutiny than race.[217] However, PATTERN uses both gender and factors that strongly correlate with race; because race is a suspect class, the court should apply the highest level of scrutiny: strict scrutiny.[218] Under strict scrutiny, the courts will likely uphold the use of the PATTERN algorithm only if it is narrowly tailored to achieve a compelling governmental interest.[219] Affected prisoners could argue that PATTERN fails strict scrutiny because no compelling interest supports its use.[220] There is ample evidence suggesting that PATTERN has a disparate impact, which may violate equal protection under Hopkins. Proving intent may remain a problem, but a prisoner could circumvent it by arguing that, for algorithms, the disparate impact standard should not require an intent element, as in Batson challenges.
a. An Intent Challenge. In Yick Wo v. Hopkins, the law “divide[d] the owners or occupiers into two classes . . . merely by an arbitrary line” and “discretion [wa]s lodged by law in public officers . . . to grant or withhold licenses.”[221] Similarly, PATTERN uses risk levels—minimum, low, medium, and high—that are usually a matter of policy as cutoff points for recidivism risk.[222] PATTERN then uses the cutoff points to determine which prisoners can earn and spend time credits to be released early and which cannot.[223] Like the statistical evidence in Hopkins, there is statistical evidence suggesting that PATTERN “overpredicts the general risk for African Americans, Hispanic Americans, and Asian Americans, while it underpredicts for Native Americans.”[224] Notably, PATTERN identified 51.1% of all Black men as having a high risk of recidivism, compared to 28.1% of white men.[225] Further, PATTERN classified a significantly greater share of white men, 21.5%, as “minimum risk” while classifying only 7% of Black men as minimum risk.[226] PATTERN classified 55% of white men as minimum or low risk compared to only 28.3% of Black men.[227] According to an NPR report, “[t]here were big disparities for people of color. . . . Criminal history can be a problem, for example, because law enforcement has a history of over[-]policing some communities of color. Other factors such as education level and whether someone paid restitution to their victims can intersect with race and ethnicity.”[228] Thus, the evidence suggests that PATTERN has a disparate impact that an affected prisoner may argue violates equal protection under Hopkins. Proving intent, however, may be a problem because the statistical evidence is not as much of a smoking gun as it was in Hopkins.
b. Intent Challenge Accepted. Rather than focusing on intent, Batson places greater weight on effects and statistical evidence.[229] A prisoner negatively affected by the PATTERN algorithm should therefore argue that the court should accept proof of the algorithm’s effects and statistical evidence of disparate impact rather than requiring intent. Such a prisoner could present substantial evidence to make a prima facie case under a Batson-style equal protection challenge.[230]
An affected prisoner can argue that PATTERN has had a disparate impact because of the problems BOP personnel have had implementing the risk assessment tool. When BOP personnel score a federal prisoner, they should strictly follow the scoring instructions.[231] They do not, which presents a serious reliability issue because “BOP personnel incorrectly scored and classified more than 20% of the BOP population.”[232] Further, there are about sixty exceptions that render some prisoners ineligible to earn time credits because of the offense committed.[233] The problem with the exceptions is their interpretive difficulty and possible margin of error.[234] Because of that difficulty, BOP personnel are likely to make errors, and the evidence shows they already have: again, more than 20% of the BOP population was incorrectly scored and classified.[235] A mistake of this type is costly; it places fundamental rights at stake and may cost a prisoner the chance to earn freedom.[236]
Apart from BOP personnel errors, an affected prisoner can argue that errors in the PATTERN algorithm itself create a disparate impact. Because Congress’s goal for PATTERN was to reduce recidivism rates, the Attorney General designed PATTERN to assess a prisoner’s risk as of the day of the prisoner’s release.[237] However, prisoners are being evaluated for PATTERN scores when they enter prison,[238] which “call[s] into question the efficacy of the entire scoring system.”[239] PATTERN’s usefulness is in question: “[b]ecause the empirical models were estimated using different versions of these variables, it may have influenced the coefficients obtained and the item weights assigned.”[240] In addition, PATTERN favors false positives over false negatives; its designers chose to have it “perform far less accurately when predicting those who are at higher risk—which means placing too many individuals into the higher risk groupings than necessary.”[241]
Furthermore, BOP personnel are inconsistent about when they consider disciplinary infractions, which affects the accuracy of a PATTERN score.[242] The inconsistency arises because some BOP personnel take pretrial and holdover disciplinary infractions into account when scoring a prisoner, and some do not.[243] In other words, because of the inconsistency, prisoners’ PATTERN scores are not all based upon the same metrics.[244]
Considering the evidence, an affected prisoner would have a strong showing of effect and statistical evidence that the BOP’s use of PATTERN results in a disparate impact, thus violating equal protection under Yick Wo v. Hopkins as applied through a Batson v. Kentucky disparate impact standard. Once the affected prisoner presents the evidence, the burden shifts to the federal government to offer a credible, race-neutral explanation for PATTERN’s disparate results.[245] It would then be within the judge’s discretion to credit the government’s explanation or to rule that PATTERN violates equal protection.[246]
2. But What About Equal Odds?
Equalizing the error rates so that PATTERN treats each race the same is a solution the BOP could consider, but it has downsides. Like human decision-making, algorithms can have different error rates for different racial groups.[247] For example, the PATTERN algorithm differs based on racial group and gender.[248] And just as some aspects of human decision-making defy explanation, whether a prisoner will actually recidivate cannot be known in advance, which makes differences in accuracy across groups even harder to evaluate.[249] A possible solution to avoid equal protection violations would be to create laws that regulate the error rates by mandating equal odds, meaning that the false positive and true positive rates must be the same across racial groups.[250] Equal odds would help avoid equal protection violations because the PATTERN algorithm would treat races the same when classifying members of different racial groups as minimum, low, medium, or high recidivism risk.[251]
Mandating that the false positive and true positive rates be the same reduces the difference in error rates between racial groups; however, the algorithm’s accuracy drops.[252] Achieving equality therefore does not come without a cost.[253] For example, the cost of equality can be very high if the algorithm falsely categorizes more people with a low recidivism risk as having a high recidivism risk.[254] In other words, the cost in accuracy could leave more individuals who would not recidivate in prison.[255] So, in the end, because the cure of equalizing the error rates results in a drop in accuracy, requiring equal odds might not lead to a more just outcome.
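The trade-off between equal odds and accuracy can be made concrete with a small numerical sketch. The Python below is purely illustrative: the confusion-matrix counts are invented for demonstration and are not drawn from PATTERN data. It shows how forcing two groups onto a single shared pair of true-positive and false-positive rates equalizes their error rates while lowering overall accuracy.

```python
# Hypothetical illustration of the accuracy cost of mandating "equal odds".
# All counts are invented for demonstration; they are not PATTERN data.

# Confusion-matrix counts for two groups, each with 100 recidivists and
# 100 non-recidivists, under a tool's original cutoffs:
# (true_pos, false_neg, false_pos, true_neg)
group_a = (80, 20, 30, 70)   # TPR = 0.80, FPR = 0.30
group_b = (60, 40, 10, 90)   # TPR = 0.60, FPR = 0.10

def rates(tp, fn, fp, tn):
    """Return (true-positive rate, false-positive rate)."""
    return tp / (tp + fn), fp / (fp + tn)

def accuracy(tp, fn, fp, tn):
    """Share of all classifications that were correct."""
    return (tp + tn) / (tp + fn + fp + tn)

def at_operating_point(pos, neg, tpr, fpr):
    """Confusion counts if `pos` recidivists and `neg` non-recidivists
    are scored at a chosen (TPR, FPR) operating point."""
    tp = round(pos * tpr)
    fp = round(neg * fpr)
    return tp, pos - tp, fp, neg - fp

# Equal odds forces both groups onto one shared (TPR, FPR) point. A point
# both groups can reach is TPR = 0.6 (group B's rate) with FPR = 0.3
# (group A's rate), so equalization makes neither group's scores better:
eq_a = at_operating_point(100, 100, 0.6, 0.3)
eq_b = at_operating_point(100, 100, 0.6, 0.3)

orig_acc = (accuracy(*group_a) + accuracy(*group_b)) / 2   # 0.75
eq_acc = (accuracy(*eq_a) + accuracy(*eq_b)) / 2           # 0.65
```

In this invented example, equality is achieved only by moving both groups to a jointly reachable operating point, which costs ten percentage points of overall accuracy, mirroring the trade-off the sources describe.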
B. The Second Step Back: A Statutory Interpretation Mistake
The federal district courts’ failure to use Chevron remains relevant even though the phase-in period ended January 15, 2022. The precedent could create problems for interpreting prison legislation in the future. Because most of the district courts did not interpret the statute consistently with congressional intent, federal prisoners who earned time credits did not serve the shorter sentences they were due. The courts thus failed to serve the ultimate goals of the FSA because of the way they interpreted it.
1. The First Step Act Could Have Achieved Its Goals with Chevron
When interpreting the FSA statute, the analysis should always start with the statute’s text, as Justice Kagan proclaimed, “we are all textualists now.”[256] Thus, when analyzing a statute, courts almost always start with the text itself and then expand outward to other statutory provisions, statute structure, and other statutes.[257] First, the court should examine the plain language of the statute.[258] The plain meaning of a word in the statute is the facially apparent meaning inherent in the statute’s text.[259] Plain meaning, however, is not the same as ordinary meaning.[260] A word can have a plain meaning that is not ordinary.[261] For example, the plain text of section 102(h)(4) of the FSA reads “the Bureau of Prisons may begin to expand any evidence-based recidivism reduction programs and productive activities that exist at the prison as of such date [January 15, 2020], and may offer to prisoners who successfully participate in such programs and activities the incentives and rewards described in subchapter D.”[262] Congress used the word “may,” which suggests that Congress allowed for BOP discretion, unlike the surrounding sections of the FSA, which use “shall.”[263] When considering the portion of the statute codified in § 3632(d)(4)(B), which details when a prisoner may not earn time credits, Congress’s intent in using “may” becomes apparent.[264] Congress used the permissive “may” instead of a strict “shall” because, in some instances, there are programs that pre-existed the FSA for which “a prisoner may not earn time credits.”[265] If Congress had used the word “shall,” section 102(h)(4) would read as directing the BOP to also include programs that pre-existed the FSA.[266] But the inclusion of programs that pre-existed the FSA would contradict § 3632(d)(4)(B).[267] Congress had to use the word “may” to maintain consistency between sections of the statute, not to signify that the BOP could wait.[268]
Furthermore, the statutory construction of § 3621(h)(1) and § 3621(h)(2) supports the conclusion that Congress expressed intent.[269] Under § 3621(h)(1)(A)–(C), “[n]ot later than 180 days after [January 15, 2020,] . . . the Director of the Bureau of Prisons shall . . . begin to expand the effective evidence-based recidivism reduction programs and productive activities . . . [and] begin to implement the other risk and needs assessment tools necessary to effectively implement the System over time.”[270] By using the word “shall” followed by the phrases “begin to expand” and “begin to implement . . . over time,” Congress expressed its intent that the BOP shall implement the system over time.[271] Therefore, the BOP’s failure to act is contrary to congressional intent.[272]
2. In the Spirit of the First Step Act
After analyzing the FSA text and finding that Congress expressed intent that the BOP shall implement the system over time, the court may look to legislative history and the statutory purpose to strengthen its analysis. Legislative history includes all documents and information generated by Congress during the passage of legislation.[273] A modern judge would likely look to legislative history only after a strict, rigorous textual analysis of the statute’s plain language and other attempts to find meaning.[274] For example, when looking at the Judiciary Committee’s report accompanying the FSA, it is clear that Congress’s intent was to “[d]irect[] the BOP to (1) implement the System and complete a risk and needs assessment for each prisoner; (2) expand the effective programs it offers and add any new ones necessary to effectively implement the System; (3) phase in such programs within 2 years.”[275] The verbs used by Congress do not suggest that the BOP may wait to act; they direct the BOP to take immediate action—“implement,” “expand,” and “phase in.”[276] However, the persuasiveness of legislative history would likely depend on the judge.[277] A soft-textualist or purpose-driven judge, like Justice Breyer, would probably find legislative history persuasive if it were relevant to resolving an ambiguity.[278] However, a textualist judge, like Justice Barrett, who follows the methodology of Justice Scalia,[279] would starkly oppose the use of legislative history.[280]
The interpretation of the text should not result in the destruction or contradiction of the legislature’s purpose behind the statute.[281] Additionally, sources outside the statute itself, such as legislative history, can help resolve ambiguity because they illuminate the spirit of the statute, meaning congressional intent.[282] A judge taking a more purpose-driven approach, such as Chief Justice Burger in TVA v. Hill, would probably want to consider Congress’s purpose in enacting the statute and the intent behind the use of the statute’s language.[283] If the judge found the text’s meaning and purpose in conflict, the judge would likely use the “spirit” of the statute to guide the interpretation.[284]
The purpose of the FSA was to “improv[e] the effectiveness and efficiency of the Federal prison system with offender risk and needs assessment, individual risk reduction incentives and rewards, and risk and recidivism reduction.”[285] In addition, Congress passed the FSA because there was an immediate need to “reduce the prison population that, because of its rising costs, is becoming a real and immediate threat to public safety.”[286] Thus, for the interpretation of section 102(h)(4) to remain consistent with the congressional purpose of reducing the prison population, the courts must interpret section 102(h)(4) as directing the BOP to apply the time credits—following congressional intent.[287]
In sum, the FSA has had implementation issues that produced caselaw capable of affecting both the implementation of future legislation and future statutory-interpretation decisions.[288] The majority of the courts should have followed the analysis in O’Bryan because the FSA was unambiguous in its expression of congressional intent.[289] Instead, most of the courts looked past the plain meaning and congressional intent when interpreting the statute and held that the BOP had discretion as to when to apply the time credits.[290] Furthermore, in ignoring congressional intent, the majority of the courts sidestepped the Chevron doctrine.[291] The courts’ actions are concerning because Congress clearly expressed its intent within the plain language of the FSA.[292] The interpretation of section 102(h)(4) raises an interesting question about how the courts will handle future situations where the Chevron doctrine’s first step applies.[293] In a sense, the courts have given themselves the power to make policy decisions as a judiciary rather than following the intent of Congress under the Chevron doctrine.[294]
V. Conclusion
The Trump Administration signed the FSA into law to take one step forward as a prison reform act aimed at sentencing reform and active recidivism reduction. But while the FSA’s purpose was noble, it did not meet its objectives and instead took two steps back. First, the PATTERN algorithm arguably violates equal protection law. Because an affected prisoner has statistical evidence and evidence of PATTERN’s disparate impact, this Comment urges the courts to consider relaxing the disparate impact intent requirement for algorithms in the criminal justice system. Rather than requiring intent, disparate impact in the algorithm context should require statistical evidence and evidence of a disparate effect. The courts should consider this framework, especially in the criminal justice system, because of the fundamental rights at stake. Second, the BOP avoided its responsibility to implement the time credits because the majority of federal courts overlooked congressional intent and deferred to the BOP’s argument. Instead, the federal courts should have applied the Chevron doctrine and followed congressional intent that the BOP must immediately implement the time credits and release deserving prisoners. Prison reform is a pressing issue in our society today. The FSA, if implemented correctly and constitutionally by the BOP, has the potential to make the first step towards reducing recidivism and, ultimately, the federal prison population. Unfortunately, because of the courts and the BOP, the FSA has not achieved its prison reform goals.
Emily Muenster
Highest to Lowest—Prison Population Total, World Prison Brief, https://www.prisonstudies.org/highest-to-lowest/prison-population-total?field_region_taxonomy_tid=All [https://perma.cc/FAA6-NYKX] (last visited July 17, 2022).
Id.; United States vs China by Population, Stat. Times (Jan. 10, 2021), https://statisticstimes.com/demographics/china-vs-us-population.php [https://perma.cc/M6J7-Z29G].
Prison Population Total, supra note 1.
In 2015, Hillary Clinton gave a speech calling for the end to the “era of mass incarceration.” See Sam Frizell, Hillary Clinton Calls for an End to "Mass Incarceration," Time (Apr. 29, 2015, 4:39 PM), http://time.com/3839892/hillary-clinton-calls-for-an-end-to-mass-incarceration/ [https://perma.cc/6KQE-LSG6].
See Candice N. Jones, A Broken PATTERN: A Look at the Flawed Risk and Needs Assessment Tool of the First Step Act, 5 How. Hum. & C.R. L. Rev. 185, 187 (2021).
According to a study done by the ACLU, “69% of voters say it is important for the country to reduce its prison populations, including 81% of Democrats, 71% of Independents and 54% of Republicans.” See ACLU Nationwide Poll on Criminal Justice Reform, ACLU (July 15, 2015), https://www.aclu.org/other/aclu-nationwide-poll-criminal-justice-reform [https://perma.cc/RFE6-EJQS].
Id.
Ames Grawert, What Is the First Step Act—And What’s Happening with It?, Brennan Center for Just. (June 23, 2020), https://www.brennancenter.org/our-work/research-reports/what-first-step-act-and-whats-happening-it [https://perma.cc/P6DA-7KHS]; see also Thomas L. Root, Does PATTERN Have It All Wrong?, LISA Found. (Jan. 31, 2022), https://lisa-legalinfo.com/tag/first-step/ [https://perma.cc/XNW6-AVKP].
Amy B. Cyphert, Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk, 51 Seton Hall L. Rev. 331, 343 (2020); see also U.S. Dep’t Just., Off. of the Att’y Gen., The First Step Act of 2018: Risk and Needs Assessment System 85 (2019) [hereinafter First Step Act Report (2019)], https://nij.ojp.gov/sites/g/files/xyckuh171/files/media/document/the-first-step-act-of-2018-risk-and-needs-assessment-system_1.pdf [https://perma.cc/BF5S-P7PE].
First Step Act Report (2019), supra note 9, at 43.
Root, supra note 8.
Id.
Id.
Grawert, supra note 8.
Yick Wo v. Hopkins, 118 U.S. 356, 369 (1886); Batson v. Kentucky, 476 U.S. 79, 97–98 (1986).
See Jones, supra note 5, at 188–91.
Id.
Grawert, supra note 8.
Id.
Jones, supra note 5, at 188–91.
Id. at 188; see also ACLU Releases Crack Cocaine Report, Anti-Drug Abuse Act of 1986 Deepened Racial Inequity in Sentencing, ACLU (Oct. 26, 2006), https://www.aclu.org/press-releases/aclu-releases-crack-cocaine-report-anti-drug-abuse-act-1986-deepened-racial-inequity [https://perma.cc/65XC-F2CM] (“As law enforcement focused its efforts on crack offenses, a dramatic shift occurred in the incarceration trends for African Americans . . . [which] transformed federal prisons into institutions increasingly dedicated to incarcerating African Americans.”).
ACLU Releases Crack Cocaine Report, supra note 21 (finding that “because of [crack cocaine’s] relative low cost, crack cocaine is more accessible to poor people, many of whom are African Americans”).
The 1994 Crime Act “created new crimes, increase[d] mandatory minimum[s], [provided] penalties, and end[ed] Pell grants for those in federal prison.” Jones, supra note 5, at 188–89 (quoting Shon Hopwood, The Effort to Reform the Federal Criminal Justice System, 128 Yale L.J. 791, 792 (2019)).
Id. at 189.
The Second Chance Act, CSG Just. Ctr. (Apr. 2018), https://csgjusticecenter.org/wp-content/uploads/2020/02/July-2018_SCA_factsheet.pdf [https://perma.cc/D87L-XU5K].
Jones, supra note 5, at 189.
Id. at 190 (arguing that while the Fair Sentencing Act “was the first semi-successful attempt at legislation intended to rectify the racial disparities as a result of mandatory minimum drug sentences,” “there remained a lack of true reform to the sentencing landscape”).
Id. at 191.
Id. at 186, 191 (“[S]ign[ing] the First Step Act of 2018 into law [w]as an attempt to take the first serious step towards criminal justice reform in more than a decade.”).
First Step Act of 2018, Pub. L. No. 115–391, 132 Stat. 5194 (2018) [hereinafter First Step Act of 2018] (codified in scattered sections of 18 U.S.C.); see also Jones, supra note 5, at 191 (“First Step stands for ‘Formerly Incarcerated Reenter Society Transformed Safely Transitioning Every Person.’”); Nathan James, Cong. Rsch. Serv., R45558, The First Step Act of 2018: An Overview 1 (2019).
Grawert, supra note 8.
Rebecca Wasif, Reforming Expansive Crime Control & Sentencing Legislation in an Era of Mass Incarceration: A National and Cross-National Study, 27 U. Mia. Int’l & Compar. L. Rev. 174, 193–94 (2019).
First Step Act of 2018 § 107, 18 U.S.C. § 3631.
Id.
Wasif, supra note 32, at 194.
First Step Act of 2018 § 402, 18 U.S.C. § 3553; Grawert, supra note 8.
Grawert, supra note 8.
Id.
Wasif, supra note 32, at 193.
Id.
James, supra note 30, at 1, 8, 10.
Grawert, supra note 8.
First Step Act Report (2019), supra note 9, at 27–28.
Lina Marmolejo, Measuring Recidivism Is Hard but We Must Get It Right, Inter-Am. Dev. Bank (Sept. 8, 2016), https://blogs.iadb.org/seguridad-ciudadana/en/why-measuring-recidivism-is-so-hard/ [https://perma.cc/J5NH-H83C].
First Step Act Report (2019), supra note 9, at 28; Cyphert, supra note 9, at 343, 345.
First Step Act Report (2019), supra note 9, at 43.
Walter Pavlo, Office of Inspector General Critical of Federal Prison Implementation of First Step Act, Forbes (Nov. 16, 2021, 1:05 PM), https://www.forbes.com/sites/walterpavlo/2021/11/16/office-of-inspector-general-critical-of-federal-prison-implementation-of-first-step-act/?sh=77db336050cb [https://perma.cc/7TFM-XWSM].
First Step Act of 2018 § 102(h)(1)(A)–(C), 18 U.S.C. § 3621(h)(1)(A)–(C).
Pavlo, supra note 47.
Id.
Danielle Kehl et al., Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing 10 (2017) (unpublished student work available through Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard Law School) (on file with Digital Access to Scholarship at Harvard), http://nrs.harvard.edu/urn-3:HUL.InstRepos:33746041 [https://perma.cc/UW36-YRGE].
Id.
Erin Harbinson, Understanding ‘Risk Assessment’ Tools: What They Are and the Role They Play in the Criminal Justice System: A Primer, 75 Bench & B. Minn. 14, 15 (2018).
Id.
Id.
Id.
Kehl et al., supra note 51, at 9 (“As these algorithms are used over time, their models often dynamically adjust to new data.”).
J.C. Oleson, Risk in Sentencing: Constitutionally Suspect Variables and Evidence-Based Sentencing, 64 SMU L. Rev. 1329, 1348 (2011). “Risk assessment tools—and the principles underlying their development—have actually been a part of the criminal justice system for decades.” Kehl et al., supra note 51, at 3.
See Ernest W. Burgess, Factors Determining Success or Failure on Parole, in The Workings of the Indeterminate-Sentence Law and the Parole System in Illinois 221 (1928); see also K.N.C., Algorithms Should Take into Account, Not Ignore, Human Failings, Economist (Apr. 8, 2019), https://www.economist.com/blogs/openfuture/2019/04/open-future [https://perma.cc/TMW9-PZY9].
Burgess, supra note 59, at 221.
See id. at 222.
Oleson, supra note 58, at 1348.
Id.
Id. at 1348, 1352.
Gratz v. Bollinger, 539 U.S. 244, 270 (2003) (finding a racially discriminatory algorithm that used race as a numerical advantage to be a violation of equal protection under the strict scrutiny test). A law using the suspect characteristic of gender will be upheld only if it is substantially related to an important governmental purpose. Craig v. Boren, 429 U.S. 190, 204 (1976); United States v. Virginia, 518 U.S. 515, 534 (1996) (holding that a state implementing a law using the suspect characteristic of gender must provide an “exceedingly persuasive justification” as to why the discrimination is necessary).
Melissa Hamilton, Risk-Needs Assessment: Constitutional and Ethical Challenges, 52 Am. Crim. L. Rev. 231, 240 (2015) (“[S]cholars and practitioners are debating the appropriateness of using risk-needs tools for criminal justice-oriented decisions due to the presence of potentially objectionable variables within them.”).
Harbinson, supra note 53, at 16.
Id.
Id. (“This is commonly referred to as a validation study, where a risk assessment is studied to determine how well it predicts risk in a jurisdiction or agency that is different from where the tool was originally created.”).
Id.; Anthony W. Flores et al., False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.”, 80 Fed. Prob. 38, 39–40 (2016) (arguing that when using a risk assessment tool, it is crucial to use it solely for the purpose for which the tool was designed).
Harbinson, supra note 53, at 16–17 (“A study examining the role of criminal history and racial bias acknowledges a complicated relationship between race, criminal history, and recidivism and suggests that criminal history might have more of an impact when risk assessment is considered at sentencing rather than other stages.”); see also Jennifer L. Skeem & Christopher T. Lowenkamp, Risk, Race, and Recidivism: Predictive Bias and Disparate Impact, 54 Criminology 680, 685 (2016).
Rebecca Heilweil, Why Algorithms Can Be Racist and Sexist, Vox (Feb. 18, 2020, 12:20 PM), https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency [https://perma.cc/D643-UDS6] (“[T]hese systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used. This is commonly known as algorithmic bias. It’s tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box.”).
Why Jurisdictions Choose RATs, Mapping Pretrial Injustice, https://pretrialrisk.com/the-basics/the-case-for-rats/ [https://perma.cc/D2Y5-B3PC] (last visited July 24, 2022); Anne Milgram, Why Smart Statistics Are the Key to Fighting Crime, TED (Jan. 2014), https://www.ted.com/talks/anne_milgram_why_smart_statistics_are_the_key_to_fighting_crime/transcript [https://perma.cc/HT9C-YBQL].
Bias in Algorithms, Mapping Pretrial Injustice, https://pretrialrisk.com/the-basics/bias-in-algorithms/ [https://perma.cc/4FML-TG7C] (last visited July 15, 2022).
Id.
Id.
Id.
Id.
Seth J. Chandler, Algorithmic Classifier Discrimination 1, 19 (2021) (unpublished Mathematica notebook) (on file with Author) (focusing on discrimination that may result from the use of predictive algorithms to impose consequences on individuals instead of discrimination that occurs because of deficient data or intentional acts).
Id. at 1.
Id. at 1, 3.
Id. at 4–5.
Bias in Algorithms, supra note 74.
The “COMPAS risk scores indicated that he presented a high risk of recidivism across all three of the risk areas COMPAS purports to assess: pretrial recidivism risk, general recidivism risk, and violent recidivism risk.” Cyphert, supra note 9, at 339–40 (quoting State v. Loomis, 881 N.W.2d 749, 754 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017)) (cleaned up).
Id. at 340.
Id.
Id. at 340–41 (quoting Loomis, 881 N.W.2d at 756) (cleaned up).
Id. at 341.
Id.
Criminal Law—Sentencing Guidelines—Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing—State v. Loomis, 881 N.W.2d 749 (Wis. 2016), 130 Harv. L. Rev. 1530, 1530 (2017) [hereinafter Criminal Law—Sentencing Guidelines].
Id. at 1532–33.
Cyphert, supra note 9, at 342; see also Loomis, 881 N.W.2d at 769 (cleaned up).
Cyphert, supra note 9, at 342 (quoting Loomis, 881 N.W.2d at 769) (cleaned up).
Id.
Id.
First Step Act of 2018 § 101(a), 18 U.S.C. § 3631(b); Cyphert, supra note 9, at 342.
Cyphert, supra note 9, at 343.
First Step Act of 2018 § 101(a), 18 U.S.C. § 3632(a)(1); see also Cyphert, supra note 9, at 343–44 (“The Act required that those labeled with a higher risk of reoffending be given priority access to recidivism reduction programs, and that those classified as lower risk for reoffending be given priority access to productive activities.”).
Cyphert, supra note 9, at 344.
Pavlo, supra note 47; see also Management Advisory Memorandum from Michael E. Horowitz, Inspector Gen., to Michael Carvajal, Dir., Fed. Bureau of Prisons, on Impact of the Failure to Conduct Formal Policy Negotiations on the Federal Bureau of Prisons’ Implementation of the FIRST STEP Act and Closure of Office of the Inspector General Recommendations (Nov. 15, 2020), https://oig.justice.gov/sites/default/files/reports/22-007.pdf [https://perma.cc/4TSM-SC2H] (criticizing the BOP for not implementing the time credits as mandated by Congress in the FSA).
First Step Act Report (2019), supra note 9, at 19, 44.
Cyphert, supra note 9, at 347 (defining reoffend as likelihood of rearrest or conviction).
First Step Act Report (2019), supra note 9, at 50.
Id.
Cyphert, supra note 9, at 348 (“In addition to dividing recidivism into ‘general’ and ‘violent’ categories, PATTERN also subdivides depending on the gender of the person being examined. The tool’s developers argue that ‘adding both gender and outcome (i.e., general and violent recidivism) specificity’ represents ‘recent advancements in risk assessment tool construction . . . .’”).
Id. at 354.
Id. at 349.
Id.
Id.
First Step Act Report (2019), supra note 9, at 60.
See supra note 98 and accompanying text.
Cyphert, supra note 9, at 350.
Id.
Id. at 351.
See Oversight Hearing on the Federal Bureau of Prisons and Implementation of the First Step Act Before the Subcomm. on Crime, Terrorism, and Homeland Sec. of the H. Comm. on the Judiciary, 116th Cong. 176 (2019) (written testimony of Melissa Hamilton, Professor of Law & Criminal Justice, University of Surrey School of Law) (“Non-Whites are one-and-a-half times more likely to be assessed as medium/high risk than Whites.”).
Cyphert, supra note 9, at 348.
First Step Act Report (2019), supra note 9, at 59–60.
Suspect Classification, Cornell L. Sch. Legal Info. Inst., https://www.law.cornell.edu/wex/suspect_classification [https://perma.cc/SC5W-FZHD] (last visited Jan. 2, 2022) (“Under Equal Protection, when a statute discriminates against an individual based on a suspect classification, that statute will be subject to either strict scrutiny or intermediate scrutiny. There are four generally agreed-upon suspect classifications: race, religion, national origin, and alienage.”); see Gratz v. Bollinger, 539 U.S. 244, 266, 270, 274 (2003).
See United States v. Virginia, 518 U.S. 515, 531, 534 (1996); see also Craig v. Boren, 429 U.S. 190, 204 (1976).
Root, supra note 8.
Id.
Id.
Evidence-Based Recidivism Reduction (EBRR) Programs and Productive Activities (PA), U.S. Dep’t Just.: Fed. Bureau Prisons, https://www.bop.gov/inmates/fsa/docs/evidence_based_recidivism_reduction_programs.pdf [https://perma.cc/A695-8NTH] (last visited July 13, 2022).
Id.
Root, supra note 8.
Id.
First Step Act of 2018 § 101(a), 18 U.S.C. § 3632(d).
Root, supra note 8.
First Step Act of 2018 § 101(a), 18 U.S.C. § 3632(d)(4).
Root, supra note 8.
Id.
Id.
Id.
Id.
Id.; see also First Step Act of 2018 § 101(a), 18 U.S.C. § 3632(d)(4)(C).
Peter J. Tomasek, Courts Stuck on January 15, 2022 Start for First Step Act Time Credits, Interrogating Just. (Dec. 14, 2021), https://interrogatingjustice.org/emphasizing-rehabilitation/courts-stuck-on-january-15-2022-start-for-first-step-act-time-credits/ [https://perma.cc/LS9U-BWSN].
Pavlo, supra note 47.
Criminal Law—Sentencing Guidelines, supra note 90.
Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 822 (2014).
Kehl et al., supra note 51, at 22; see also State v. Loomis, 881 N.W.2d 749, 764–65, 767 (Wis. 2016).
See infra Part IV.
Kehl et al., supra note 51, at 23; see also U.S. Const. amend. XIV, § 1.
Kehl et al., supra note 51, at 23.
Bolling v. Sharpe, 347 U.S. 497, 499–500 (1954).
United States v. Carolene Prods. Co., 304 U.S. 144, 152–53 n.4 (1938); see also Marcy Strauss, Reevaluating Suspect Classifications, 35 Seattle U. L. Rev. 135, 135–37 (2011).
Suspect Classification, supra note 118.
See Craig v. Boren, 429 U.S. 190, 204 (1976); United States v. Virginia, 518 U.S. 515, 531, 534 (1996).
See Washington v. Davis, 426 U.S. 229, 239–40 (1976).
Williamson v. Lee Optical of Okla., Inc., 348 U.S. 483, 487 (1955).
See Davis, 426 U.S. at 239–40.
Beyond Intent: Establishing Discriminatory Purpose in Algorithmic Risk Assessment, 134 Harv. L. Rev. 1760, 1764–65 (2021).
Yick Wo v. Hopkins, 118 U.S. 356, 373 (1886).
Hernandez v. Texas, 347 U.S. 475, 478 (1954); Beyond Intent, supra note 151, at 1765 (“[T]he Feeney Court laid out an alternate path that does not mandate a showing of discriminatory purpose: actions that are expressly conditioned on a racial classification or an ‘obvious pretext,’ ‘regardless of purported motivation,’ are ‘presumptively invalid,’ as they ‘in themselves supply a reason to infer antipathy.’” (quoting Pers. Adm’r of Mass. v. Feeney, 442 U.S. 256, 272 (1979))).
Hopkins, 118 U.S. at 357–59.
Id. at 359.
Id.
Id.
Id. at 373–74.
Id. at 358–59, 368, 373.
Id. at 373.
Beyond Intent, supra note 151, at 1770 n.74 (noting that the Snyder v. Louisiana Court “[found] that the default rule of showing discriminatory intent does not apply in the context of Batson challenges.” (citing Snyder v. Louisiana, 552 U.S. 472, 485 (2008))).
Batson v. Kentucky, 476 U.S. 79, 96 (1986); Edmonson v. Leesville Concrete Co., 500 U.S. 614, 630 (1991).
Batson, 476 U.S. at 96.
See id.
Id.
Id. at 98.
Id. at 97–98.
Id. at 98.
Beyond Intent, supra note 151, at 1771; see Flowers v. Mississippi, 139 S. Ct. 2228, 2235 (2019).
Stephen F. DeAngelis, Artificial Intelligence: How Algorithms Make Systems Smart, WIRED, https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/ [https://perma.cc/FQ4H-H3B5] (last visited July 15, 2022).
Cassie Kozyrkov, Explainable AI Won’t Deliver. Here’s Why., HackerNoon (Nov. 16, 2018), https://hackernoon.com/explainable-ai-wont-deliver-here-s-why-6738f54216be [https://perma.cc/88BD-CWAU] (“The explanation will always be horribly oversimplified, so it won’t be true. [Artificial intelligence] [e]xplainability provides a cartoon sketch of a why, but it doesn’t provide the how of decision-making.”).
Crystal S. Yang & Will Dobbie, Equal Protection Under Algorithms: A New Statistical and Legal Framework, 119 Mich. L. Rev. 291, 297 (2020).
Beyond Intent, supra note 151, at 1766–67.
Bradley Arsenault, Why Measuring Accuracy Is Hard (and Very Important)!, Towards Data Sci. (Mar. 20, 2019), https://towardsdatascience.com/why-measuring-accuracy-is-hard-and-very-important-part-1-why-measuring-right-is-important-a279e8a6fcd [https://perma.cc/W5EM-QZPC].
Beyond Intent, supra note 151, at 1767.
Pavlo, supra note 47.
Id.
Id.
Id.
See Goodman v. Ortiz, No. CV 20-7582, 2020 WL 5015613, at *6 (D.N.J. Aug. 25, 2020) (holding that the ordinary meaning of “phase-in” combined with analysis of the statutory framework of § 3621(h) unambiguously supports the conclusion that the BOP must gradually implement the risk recidivism program).
See Chevron U.S.A., Inc. v. Nat. Res. Def. Council, Inc., 467 U.S. 837, 842–43 (1984).
See id. at 843–44.
Id. at 842–43.
Id. at 843 n.9.
See Perrin v. United States, 444 U.S. 37, 42 (1979).
Util. Air Regul. Grp. v. EPA, 573 U.S. 302, 319–20 (2014); see FDA v. Brown & Williamson Tobacco Corp., 529 U.S. 120, 133 (2000).
Holy Trinity Church v. United States, 143 U.S. 457, 459 (1892) (“[A] thing may be within the letter of the statute and yet not within the statute, because not within its spirit, nor within the intention of its makers.”).
Conn. Nat’l Bank v. Germain, 503 U.S. 249, 254 (1992).
Chevron, 467 U.S. at 842–43.
Id.
Id. at 843.
Fleming v. Joseph, No. 3:20-CV-5990, 2021 WL 1669361, at *4 (N.D. Fla. Apr. 7, 2021), report and recommendation adopted, No. 3:20-CV-5990, 2021 WL 1664372 (N.D. Fla. Apr. 28, 2021) (finding that the FSA time-credit system does not go into effect until January 15, 2022). In making this finding, the Fleming court quoted Llewlyn v. Johns, No. 5:20-CV-77, 2021 WL 535863, at *2 (S.D. Ga. Jan. 5, 2021), report and recommendation adopted, 2021 WL 307289 (S.D. Ga. Jan. 29, 2021), and Herring v. Joseph, No. 4:20-CV-249, 2020 WL 3642706, at *1 (N.D. Fla. July 6, 2020).
Fleming, 2021 WL 1669361, at *1.
Id. at *3.
Id. at *4.
Id.; see First Step Act of 2018 § 102(a), 18 U.S.C. § 3621(h)(2)(A).
Fleming, 2021 WL 1669361, at *6.
Goodman v. Ortiz, No. 20-7582, 2020 WL 5015613, at *1–2 (D.N.J. Aug. 25, 2020).
Id. at *2.
Id.
Id. at *5.
Id.
Id. at *6.
O’Bryan v. Cox, No. CIV 21-4052, 2021 WL 3932275, at *1 (D.S.D. Sept. 1, 2021).
Id.
Id. at *3.
Id.
Goodman, 2020 WL 5015613, at *5.
O’Bryan, 2021 WL 3932275, at *3.
See infra Section IV.A.1.a.
See Washington v. Davis, 426 U.S. 229, 239–40 (1976).
Yang & Dobbie, supra note 172, at 332.
The First Step Act, the Pandemic, and Compassionate Release: What Are the Next Steps for the Federal Bureau of Prisons?: Hearing Before the Subcomm. on Crime, Terrorism, and Homeland Security of the H. Comm. on the Judiciary, 117th Cong. 4, 6 (2022) (written testimony of Melissa Hamilton, Professor of Law & Criminal Justice, University of Surrey School of Law), https://docs.house.gov/meetings/JU/JU08/20220121/114349/HHRG-117-JU08-Wstate-HamiltonM-20220121.pdf [https://perma.cc/7UEA-S944].
See Yick Wo v. Hopkins, 118 U.S. 356, 374 (1886).
Batson v. Kentucky, 476 U.S. 79, 95–96 (1986).
Bolling v. Sharpe, 347 U.S. 497, 499 (1954).
United States v. Virginia, 518 U.S. 515, 531–33 (1996). Compare Craig v. Boren, 429 U.S. 190, 197 (1976), with Gratz v. Bollinger, 539 U.S. 244, 270 (2003).
Regents of the Univ. of Cal. v. Bakke, 438 U.S. 265, 290–91 (1978).
Id. at 299.
Id.
Yick Wo v. Hopkins, 118 U.S. 356, 368 (1886).
See supra Part II.
See supra Part II.
Hopkins, 118 U.S. at 374; Thomas L. Root, Is PATTERN Dooming First Step Programming?, LISA Found. (Jan. 31, 2022), https://lisa-legalinfo.com/tag/first-step/ [https://perma.cc/3KM2-DPVX].
The First Step Act, supra note 213, at 5.
Id.
Id.
Carrie Johnson, Flaws Plague a Tool Meant to Help Low-Risk Federal Prisoners Win Early Release, NPR (Jan. 26, 2022, 5:00 AM), https://www.npr.org/2022/01/26/1075509175/justice-department-algorithm-first-step-act [https://perma.cc/3EZR-7BZM]; see also Male Pattern Risk Scoring, Fed. Bureau Prisons, https://www.bop.gov/inmates/fsa/docs/male_pattern_form.pdf [https://perma.cc/2WDZ-9A4K] (last visited July 16, 2022).
Batson v. Kentucky, 476 U.S. 79, 95 (1986).
Id. at 96.
Root, supra note 224.
The First Step Act, supra note 213, at 3; see also Nat’l Inst. Just., 2020 Review and Revalidation of the First Step Act Risk Assessment Tool 6 (Jan. 2021), https://www.ojp.gov/pdffiles1/nij/256084.pdf [https://perma.cc/UT9Q-ANCM].
First Step Act of 2018 § 101(a), 18 U.S.C. § 3632(d)(4)(D); Root, supra note 224 (“The law already has flaws as there are a number of exceptions carved out to prevent some offenses from being ineligible from earning [time credits]. Look for those to be challenged in court.”).
Root, supra note 224.
The First Step Act, supra note 213, at 3; see also Nat’l Inst. Just., supra note 232, at 6–7.
Beyond Intent, supra note 151, at 1764–65, 1770 n.67 (“[I]n contexts implicating individual liberty interests or fundamental rights, the individual has a smaller burden of persuasion; at the same time, as the state’s interest becomes more fundamental, its own burden shrinks in turn.” (citing Daniel R. Ortiz, The Myth of Intent in Equal Protection, 41 Stan. L. Rev. 1105, 1136–37 (1989))).
U.S. Dep’t Just., Off. of the Att’y Gen., The First Step Act of 2018: Risk and Needs Assessment System—UPDATE 2, 7, 14 (Jan. 2020), https://www.bop.gov/inmates/fsa/docs/the-first-step-act-of-2018-risk-and-needs-assessment-system-updated.pdf [https://perma.cc/E8ED-LSRK]; see also Root, supra note 224.
Root, supra note 224 (“For example, if a 39-year-old man comes to prison for a 15-year sentence, he has a PATTERN age risk factor of 21. But PATTERN was designed to assess his age at release, which would be age 52. The risk factor for age 52 is only 7. The difference is 14 points.”).
The First Step Act, supra note 213, at 2.
Nat’l Inst. Just., supra note 232, at 6.
The First Step Act, supra note 213, at 4.
Id. at 2.
Id. at 2–3.
Id.
Batson v. Kentucky, 476 U.S. 79, 98 (1986).
Id. Likely, the government’s defense will center around an argument that the standard should not be changed, and that Fisher v. University of Texas seemed to allow gerrymandered proxies for race. See Fisher v. University of Texas, 579 U.S. 365, 373–75 (2016). But note, the Supreme Court of the United States recently granted certiorari in two affirmative action cases, Students for Fair Admissions, Inc. v. President and Fellows of Harvard College and Students for Fair Admissions v. University of North Carolina, which may “end the use of race as an admissions factor.” Students for Fair Admissions, U.S. Supreme Court Grants Certiorari in Students for Fair Admissions v. Harvard and Students for Fair Admissions v. University of North Carolina, PR Newswire (Jan. 24, 2022, 10:41 PM), https://www.prnewswire.com/news-releases/us-supreme-court-grants-certiorari-in-students-for-fair-admissions-v-harvard-and-students-for-fair-admissions-v-university-of-north-carolina-301466593.html [https://perma.cc/X3VJ-2KTB].
Heilweil, supra note 72.
Cyphert, supra note 9, at 354 (“[T]he [accuracy] value calculated for white inmates is higher than the [accuracy] value for African American and Hispanic inmates. This means that, accepting the DOJ’s own test for predictive validity, PATTERN is less accurate in predicting recidivism for men of color than for white men. The [accuracy] values for white women were lower than the [accuracy] values for African American women but higher than those of Hispanic women.”).
Id.
Statistics How To, False Positive and False Negative: Definition and Examples, https://www.statisticshowto.com/false-positive-definition-and-examples/ [https://perma.cc/A75Q-7RY9] (last visited July 14, 2022) (“A false positive is where you receive a positive result for a test, when you should have received a negative result[]. . . . [A] false negative [is] where you receive a negative test result, when you should have received a positive one.”).
Chandler, supra note 79, at 19.
Id. at 1, 19.
Id.
Id. at 1, 9, 19.
Id. at 1, 5–6, 9, 19.
Harvard Law School, The Antonin Scalia Lecture Series: A Dialogue with Justice Elena Kagan on the Reading of Statutes, YouTube (Nov. 25, 2015), https://www.youtube.com/watch?v=jmv5Tz7w5pk [https://perma.cc/L462-439Z].
Larry M. Eig, Cong. Rsch. Serv., 97–589, Statutory Interpretation: General Principles and Recent Trends 4 (2014) (“[T]he cardinal rule of construction is that the whole statute should be drawn upon as necessary, with its various parts being interpreted within their broader statutory context in a manner that furthers statutory purposes.”).
See Anita S. Krishnakumar, Statutory Interpretation in the Roberts Court’s First Era: An Empirical and Doctrinal Analysis, 62 Hastings L.J. 221, 251 (2010) (finding that plain meaning has been very popular in the 2000s).
See Sebelius v. Cloer, 569 U.S. 369, 377 n.4 (2013) (citing the plain meaning rule).
William Baude & Ryan D. Doerfler, The (Not So) Plain Meaning Rule 6 (U. Chi. Pub. L. & Legal Theory Paper Series, Working Paper No. 590, 2016) (“The ordinary meaning is ‘plain’ in the sense of ‘plain vanilla.’ But the plain meaning rule uses the phrase in a different sense, to denote obvious meaning—i.e., the meaning that is clear.”).
United States v. Ron Pair Enters., Inc., 489 U.S. 235, 242 (1989).
First Step Act of 2018 § 102(h)(4), 18 U.S.C. § 3621(h)(4) (emphasis added).
Valerie C. Brannon, Cong. Rsch. Serv., R45153, Statutory Interpretation: Theories, Tools, and Trends (2018) (“When courts render decisions on the meaning of statutes, the prevailing view is that a judge’s task is not to make the law, but rather to interpret the law made by Congress.”).
See Kingdomware Techs., Inc. v. United States, 579 U.S. 162, 171 (2016) (“Unlike the word ‘may,’ which implies discretion, the word ‘shall’ usually connotes a requirement.”).
H.R. Rep. No. 115–699, at 4 (2018).
Antonin Scalia & Bryan A. Garner, Reading Law: The Interpretation of Legal Texts 112–13 (2012); First Step Act of 2018 § 102(h)(4), 18 U.S.C. § 3621(h)(4).
See Scalia & Garner, supra note 266, at 112.
Id. at 170.
First Step Act of 2018 § 102(h)(1)–(2), 18 U.S.C. § 3621(h)(1)–(2); see also Brannon, supra note 263, at 49 (“When courts render decisions on the meaning of statutes, the prevailing view is that a judge’s task is not to make the law, but rather to interpret the law made by Congress.”).
First Step Act of 2018 § 102(h)(1)–(2), 18 U.S.C. § 3621(h)(1)–(2) (emphasis added).
Id.
Id.
The Legislative Evolution of the FOIA, U.S. Dep’t Just. Blog (Apr. 12, 2012), https://www.justice.gov/oip/blog/legislative-evolution-foia [https://perma.cc/Q85J-2MYY].
Brannon, supra note 263, at 49.
H.R. Rep. No. 115–699, at 28 (2018) (emphasis added).
Id.; see also Pavlo, supra note 47 (noting the OIG’s concern “that the delay in applying earned time credits may negatively affect inmates who have earned a reduction in their sentence or an earlier placement in the community”).
Compare Arlington Cent. Sch. Dist. Bd. of Educ. v. Murphy, 548 U.S. 291, 304 (2006) (ignoring the use of legislative history because the statute was not ambiguous), with id. at 323 (Breyer, J., dissenting) (using legislative history because the statute was ambiguous).
Stephen Breyer, On the Uses of Legislative History in Interpreting Statutes, 65 S. Cal. L. Rev. 845, 848 (1992).
Adam Liptak, Barrett’s Record: A Conservative Who Would Push the Supreme Court to the Right, N.Y. Times (Nov. 2, 2020), https://www.nytimes.com/article/amy-barrett-views-issues.html [https://perma.cc/62DC-FCE7] (“Judge Barrett’s judicial opinions, based on a substantial sample of the hundreds of cases that she has considered in her three years on the federal appeals court in Chicago, are marked by care, clarity and a commitment to the interpretive methods used by Justice Antonin Scalia.”).
Conroy v. Aniskoff, 507 U.S. 511, 519 (1993) (Scalia, J., concurring) (describing legislative history as “equivalent [to] entering a crowded cocktail party and looking over the heads of the guests for one’s friends”).
Holy Trinity Church v. United States, 143 U.S. 457, 459 (1892).
Id.
Tenn. Valley Auth. v. Hill, 437 U.S. 153, 209–10 (1978).
Holy Trinity Church, 143 U.S. at 459.
H.R. Rep. No. 115–699, at 22 (2018).
Id. at 23; see also 164 Cong. Rec. 4313 (2018) (Rep. Hakeem Jeffries stating that “the mass incarceration epidemic . . . will require sustained effort, sustained intensity, sustained commitment, and a meaningful first step. That is what [the FSA] represents.”).
H.R. Rep. No. 115–699, at 21–22 (2018); see also Holy Trinity Church, 143 U.S. at 459.
Pavlo, supra note 47.
O’Bryan v. Cox, No. CIV 21-4052, 2021 WL 3932275, at *2–3 (D.S.D. Sept. 1, 2021).
Fleming v. Joseph, No. 3:20-CV-5990, 2021 WL 1669361, at *4–5 (N.D. Fla. Apr. 7, 2021), report and recommendation adopted, No. 3:20-CV-5990, 2021 WL 1664372 (N.D. Fla. Apr. 28, 2021); Cohen v. United States, No. 20-CV-10833, 2021 WL 1549917, at *3 (S.D.N.Y. Apr. 20, 2021); Martin v. Beard, No. 21-CV-050, 2021 WL 5625552, at *4 (E.D. Ky. Nov. 30, 2021).
Fleming, 2021 WL 1669361, at *4; Cohen, 2021 WL 1549917, at *3; Martin, 2021 WL 5625552, at *4.
Valerie C. Brannon & Jared P. Cole, Cong. Rsch. Serv., LSB10204, Deference and Its Discontents: Will the Supreme Court Overrule Chevron? 2–3 (2018) (discussing how the current Supreme Court of the United States might apply or possibly overrule Chevron).
Id. at 1–2 (“[T]he Court confronted the issue of agency deference in Nielsen v. Preap, although Chevron itself did not come up during oral argument. Recent cases suggest, however, that the Court might continue to reaffirm the case’s vitality, and if the Court were to reassess Chevron, it might be to narrow the circumstances under which the doctrine applies in lieu of jettisoning it.”).
Chevron U.S.A., Inc. v. Nat. Res. Def. Council, Inc., 467 U.S. 837, 842 (1984); see also James Goodwin, Will Confirming Judge Barrett Be the Death of Chevron Deference?, Equation (Oct. 15, 2020, 2:21 PM), https://blog.ucsusa.org/guest-commentary/will-confirming-judge-barrett-be-the-death-of-chevron-deference/ [https://perma.cc/B3RB-L7L7] (detailing how ignoring Chevron allows courts to “essentially invite judicial policymaking, as activist judges would have freer rein to exploit unavoidable statutory ambiguities in order to substitute their own policy preferences”).