I. Introduction
We live in an information-driven economy. Decision-making and problem-solving are often supported by technology that automates the collection, analysis, and use of information to achieve specific goals. Large amounts of data, combined with ever-growing computing power, allow tasks to be automated on the basis of patterns and predictions, helping businesses grow. The global economy has reached a point in this trend where there is no turning back. The open question, however, is whether this trend should be followed in non-economic aspects of life, such as health or freedom.[1] Using COMPAS[2] as a tool that helps to decide whether to send someone to prison, or for how long, raises the concern that we, as humans, put too much weight on efficiency at the cost of civil liberty, accuracy, and transparency, to the detriment of the legitimacy of the justice system. Thus, in this essay, COMPAS serves as the basis and starting point for determining the appropriateness of using similar tools in criminal justice.
This essay argues that COMPAS-like tools should be used neither to inform decisions nor (inferring a minori ad maius) to make automated decisions about whether individuals should be sent to prison and for how long. Analysis of the key concerns over COMPAS indicates that such tools may decrease public confidence in the justice system and thereby undermine one of its core purposes: preserving the rule of law.
II. Preliminary Issues
A. Public Confidence in the Justice System – Its Significance
The central argument of this essay is that the courts should maintain public confidence in the justice system as a critical element of upholding the rule of law.[3] The principle that justice should be administered so as to enhance public confidence in the system is not absolute, but it should not be traded away for the principle of efficiency. Even if a justice system is sometimes unable to produce immediate decisions, in the end both the individual and the public have to feel that the decision is just.
As Lord Hewart put it almost one hundred years ago: “justice must not only be done but must also be seen to be done”.[4] This maxim is a good starting point for further analysis because it demonstrates that a decision, regardless of its promptitude or correctness, may not be seen by an individual or by the public as a just decision that should be accepted and abided by.[5] For the justice system to work effectively, people need confidence in that system, so the way an outcome is reached and explained is critical.
In Sections III–V, I try to show that the weight and nature of the concerns over COMPAS-like tools may decrease public trust and thereby negatively affect the system.
B. Tools Like COMPAS
In this essay, COMPAS-like tools are tools that use statistically based algorithms, anchored at least in part in sociological criminology, designed to assess an individual in terms of the risk he or she poses to society against the backdrop of a representative criminal population.[6]
The reference to sociological criminology is essential. A sociological approach to criminal justice requires assessing individual data such as age, class, income, job, status, education, mobility and others,[7] many of which are beyond anyone’s control. COMPAS designers explicitly acknowledge that the tool is based on several socio-criminological theories explaining how people become involved in criminal behaviour.[8] Analysis of the scales incorporated into the COMPAS risk assessment indicates that the tool follows these theories, as it considers such attributes as living in an area with a high crime rate, having an irregular living situation, being raised by non-biological parents, a criminal family history, etc.[9] This is of crucial importance for the arguments that follow.
The second most important feature is predictive analysis based on probability. Making inferences about a specific person’s characteristics or future behaviour from a set of characteristics that they possess and share with other people does not establish a causal relationship.[10] Thus, COMPAS-like tools make only statistical predictions.
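To illustrate the difference with a purely hypothetical example (the scenario is illustrative only and is not drawn from the COMPAS documentation): if the statistics show that defendants raised in foster care reoffend more often than others, a statistical tool will raise the risk score of every defendant raised in foster care, even though being raised in foster care has not been shown to cause any particular individual’s offending. The tool records a correlation across a population; it says nothing about why this particular defendant acted as he or she did.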
C. What Does It Mean to “Make a Decision”?
The next issue is how to understand “making a decision”. COMPAS does not (yet) make decisions alone. It makes risk assessments and generates risk scores.[11] Officially, risk scores are not intended, and may not be used, to determine the severity of the sentence or whether an offender should be incarcerated.[12] In the end, it is a judge who makes the decision. However, the risk scores are delivered to the judge to inform decisions about applying non-prison alternatives,[13] so, in practice, COMPAS’s recidivism risk assessment determines the defendant’s sentence.[14]
Given that the application of this technology is becoming increasingly broad, it is worth analysing whether COMPAS-like tools should be used to inform such decisions and, if the answer is yes, whether they should be allowed to make decisions without any human involvement.
III. Criminal Justice From a Sociological Perspective – The Problem of Choice
The first question is whether justice is done, or at least might be “seen to be done”, when a decision regarding the incarceration of an individual is based on factors outside that person’s choice (such as being raised by biological or non-biological parents, a family’s criminal history, etc.). Although this problem concerns very sophisticated computer algorithms, the question itself is not new. In this section, I analyse COMPAS-like tools against the arguments of the intense debate between criminology scholars representing “the Classical School”[15] and “the Positivist School”,[16] which I find highly relevant and valuable once again.
The adherents of the Classical School believed that the criminal sanction must be proportional to the perpetrator’s guilt and to the crime committed.[17] The proponents of the Positivist School, on the other hand, placed far more weight on the offender than on the offence itself. Whereas the Classical School defended the theory of man’s free will and his autonomy to choose between right and wrong,[18] the adherents of the Positivist School believed that “man is the product” of his environment.[19] Thus, the latter demanded a criminal punishment that depended not on people’s choices (a measure of guilt) but on the offender’s future danger to society.[20] Franz von Liszt, the most prominent representative of the Positivist School, developed as part of his theory a typology of criminals, dividing them into three groups – occasional offenders, corrigible offenders and incorrigible offenders – i.e., classifying them in terms of their dangerousness to society.[21]
The Positivist School’s legacy is unquestionable – its good contributions include probation, parole, and conditional release from prison. However, Liszt’s typology of criminals, anchored in sociology, where external factors play a role in defining a person – and which seems to be returning in COMPAS-like tools – is not. Liszt’s typology was not far from Lombroso’s concept of the born criminal. However, instead of relying on physical attributes of the human body, it was driven more by criminal statistics and by social factors relating to offenders, such as their age, race, class, income, job, status, education, mobility and other attributes.[22] COMPAS-like tools do the same. They compare information about an individual’s social life – much of it beyond any control – to build a profile (type) of a criminal, which informs the bench’s decisions regarding the appropriateness of non-prison alternatives (i.e., whether or not to send someone to prison). Thus, it is worth returning to the turn of the 19th and 20th centuries and analysing COMPAS-like tools against the key arguments offered by the Classical School.
Hugo Hoegel (one of Liszt’s most famous critics) firmly believed in man’s free will and opposed any form of biological or social determinism. He raised the concern that assigning a man to one type of criminal or another carries a risk of error, and that it would be improper to send someone to prison on the basis of a vague prediction of his future behaviour rather than the crime he committed.[23] Indeed, such an approach – observed in COMPAS – requires the judge to rule on someone’s freedom based on sociological criteria that are, to say the least, imprecise. The other aspect is what the public deems to be fair. If COMPAS-like tools took the offender’s sociological background into account in sentencing, there would be a chance that someone would be sent to prison for a minor offence because of factors independent of them, such as being raised in foster care (COMPAS includes that factor in its risk assessments[24]). That alone would undermine public confidence in the justice system. If people were punished for circumstances they cannot control, they would feel that they will be punished no matter what they do.
Secondly, some proponents of the Classical School raised the question: “if the reason for punishment lay in the offender’s future dangerousness, why wait until he or she had committed an offence?”[25] At first, Liszt dismissed the question as absurd, but after some time he came to regard it as legitimate. Eventually, Liszt answered “no”, because he admitted that the state’s powers must be limited in the interest of civil liberty.[26] The question here is where to strike the balance between one’s liberty and public security in tools like COMPAS. The risk scales indicate that the assessment can go beyond one’s guilt and the crime committed, reaching as far as the actions of third parties (the ease of buying drugs in the area or a high crime rate in general),[27] to legitimise someone’s incarceration instead of the application of non-prison measures. That is not far from sending someone to prison before an offence is committed, given that COMPAS considers acts that the assessed individual did not commit.
Thirdly, von Buri and Stenglen argued that Liszt’s social-defence approach would lead to long incarcerations (sometimes for minor offences) lasting until one could be sure the offender would not repeat this or any other crime.[28] A similar risk is associated with COMPAS. If the risk assessment goes beyond the offender’s own choices in order to realise the goal of public security, the latter seems to prevail over individual interests of a fundamental nature. This is where the public may come to feel threatened by the system instead of enjoying its protection. Here, the crime loses its connection with the perpetrator, and justice focuses on the actor, who is subjected to assessment from vague sociological perspectives.
Thus, arguments formulated more than a hundred years ago indicate that the sociological inputs to COMPAS-like tools can only decrease public confidence in the system, and that such tools should not be used in decision-making. It does not matter whether they are used “only” to inform decisions or to make them. No system can be perceived as just if it draws negative consequences from criteria beyond the individual’s control.
IV. Accuracy Through the Lens of the Public
The next concern over COMPAS-like tools, which may undermine public confidence in the justice system, is the problem of accuracy.
The Northpointe Institute for Public Management Inc. conducted a study examining the fourth-generation (4G) statistical validation of COMPAS. According to the study, COMPAS’s accuracy in predicting reoffending amounts to 69% for white men and 67% for black men.[29] The study prepared by ProPublica showed, in this regard, a slightly lower accuracy: 62.5% for white men and 62.3% for black men. Irrespective of these small differences, when set against the amount of time and capital such a tool can save the penal system, these numbers may look satisfactory. However, the same numbers may be perceived completely differently by someone who expects the justice system to be effective and its decisions to be just.
COMPAS is a tool for assessing the likelihood that a defendant will become a recidivist. It informs decisions on whether individuals should be sent to prison or whether the judge should apply non-prison alternatives. The seriousness of the matter is therefore enormous. In such a case, the defendant may reasonably expect the justice system to deliver a just decision in his case. The justice system, in turn, must demonstrate to the defendant that it delivers a just decision. From the defendant’s perspective, the accuracy statistics above do not demonstrate justice. A person facing prison does not think about overall system efficiency but about himself, his job, his family, etc.[30] Meanwhile, these numbers mean that the system makes wrong decisions in three to almost four out of every ten assessed defendants. The defendant does not look at the six or seven people the system classified correctly, but at the three or four for whom the system was wrong.[31]
One may say that humans make errors too – that humans are even less accurate decision-makers than computers. Indeed, judicial decision-making is affected by heuristics and biases that influence outcomes,[32] so human decision-making does not guarantee success. A study conducted by E. Bogert et al. shows that people generally rely more on algorithmic advice than on crowd advice as tasks become more difficult.[33] This might suggest that defendants would prefer algorithms over human judges when it comes to sentencing and incarceration. The reality of delivering justice, however, is different. The study by G. Yalcin et al. demonstrates that people have more trust in, and prefer to go to, a court where a human judge adjudicates rather than an algorithm.[34] It shows that people are willing to use computers as assistance with complex problems, but prefer not to rely on them where fundamental values, such as one’s freedom, are at stake.
The concerns about the accuracy of COMPAS-like tools, in terms of public confidence in the justice system, grew even greater after the release of a research paper by J. Dressel and H. Farid of Dartmouth College.[35] The research shows that COMPAS is no more accurate than predictions made by people with little or no criminal justice experience. The overall accuracy of the risk assessments made by random people was 67%,[36] which is as good as the accuracy claimed by Northpointe. Moreover, the participants in the experiment made predictions based on 7 features (relating only and directly to the defendant, such as sex, age, and previous criminal history, without any facts about their social life), whereas COMPAS uses 137 features.[37] Since COMPAS had assessed more than one million offenders by the time of the Dressel and Farid study,[38] it is not surprising that their research attracted press attention.[39] The public might think that the system is no better than random people without a legal background. This does not enhance public confidence in the justice system.
Additionally, there are concerns about accuracy in terms of race. A study published by ProPublica (a nonprofit newsroom) shows that black defendants were far more likely than white defendants to be incorrectly judged by COMPAS to be at a higher risk of recidivism (including violent recidivism), while white defendants were more likely than black defendants to be incorrectly flagged as low risk.[40] Northpointe replied to these allegations in detail, stating that ProPublica made statistical and technical errors and failed to take appropriate data into account (the different base rates of recidivism for black and white defendants), which resulted in false assertions about COMPAS;[41] yet the lack of trust persists. The problem is that Northpointe understands fairness as accuracy in predicting reoffending at a rate of around 60% regardless of race, whereas ProPublica focuses on those who end up not reoffending.[42]
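A purely hypothetical calculation may help to show why the two positions talk past each other (the figures below are illustrative only and are not drawn from the COMPAS data). Suppose that, in group A, 500 of 1,000 defendants reoffend, while in group B, 300 of 1,000 do. Imagine a tool that labels 500 defendants in group A as high risk, of whom 300 reoffend, and 250 defendants in group B, of whom 150 reoffend. In both groups, 60% of those labelled high risk go on to reoffend, so the tool is equally “accurate” in Northpointe’s sense. Yet among the defendants who do not reoffend, 200 of 500 in group A (40%) were wrongly labelled high risk, compared with 100 of 700 in group B (around 14%) – the kind of disparity ProPublica highlighted. It is a well-known property of such risk scores that, when the underlying rates of reoffending differ between groups, these two notions of fairness cannot in general be satisfied at the same time, so the dispute is unlikely to be settled simply by recalculating the statistics.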
Another relevant piece of research demonstrates that people lose confidence in algorithms more quickly than in humans.[43] If, in the light of this finding, we juxtapose the official COMPAS accuracy figures, the press reports indicating that COMPAS is no more accurate than people with little or no criminal justice experience, and the ProPublica study, it is hard to conclude that COMPAS-like tools enhance public confidence in the justice system. It is rather the opposite: each wrong prediction made by an algorithm costs it far more confidence than the same mistake would cost a human. When it comes to decisions about incarceration, this means that every mistake made by a computer undermines people’s confidence in the justice system far more than the same mistake made solely by a human judge.
V. Lack of Transparency
Last but not least, COMPAS is a proprietary instrument. Its provider discloses neither how the risk scores are determined nor how each factor is weighted.[44] These characteristics of COMPAS were invoked in STATE vs LOOMIS over concerns about a potential breach of due process. Loomis argued that, because he was denied access to the COMPAS source code, he could not challenge the accuracy and validity of the factors used to calculate the COMPAS risk scores and was therefore denied access to information used for sentencing in his case.[45]
Indeed, as a private instrument, COMPAS benefits from the protection of trade secret law. This means that the way it operates is a mystery not only to defendants but also to prosecutors pursuing sentencing recommendations, and even to the judges adjudicating the case.[46] The problem was recognised by the Supreme Court of Wisconsin in the Loomis case. The Court, rejecting the allegation that the principle of due process had been breached, stated that “[a]lthough Loomis cannot review and challenge how the COMPAS algorithm calculates risk, he can at least review and challenge the resulting risk scores set forth in the report attached to the PSI”.[47] Moreover, as the Court observed, Loomis could verify the questions and answers preceding the risk assessment.[48] The question is whether this is enough in terms of the administration of justice.
On the one hand, some point out that examining the source code is not necessary to determine whether an algorithm is biased, and that examining the code is unlikely to reveal any explicit discrimination.[49] On the other hand, why should a defendant assume that the code contains no lines that explicitly discriminate on the basis of race, gender, or other characteristics that he or she may possess? When freedom is at stake, one may reasonably expect that an expert witness will be given access to the code in order to verify it and dispel the defendant’s doubts about potential discrimination. This is of particular importance given that at least one study revealed that COMPAS may be biased against black defendants.[50] Again, one may say that we cannot get inside a judge’s brain either. This is true, but the case here is different: the public has the tools to get inside COMPAS’s “brain” but is not allowed to use them, because proprietary interests prevail over the individual’s need to feel safe within the system.
For the public to have confidence in the justice system, it is not enough to know the inputs and outcomes of COMPAS-like tools, just as it is not enough to know case files and judgments without the reasoning underlying the decisions.
VI. Summary
The attack on the U.S. Capitol on January 6, 2021 is one of the most striking examples of what happens when people lose confidence in state institutions.[51] Confidence that those institutions operate fairly and deliver just decisions is crucial to maintaining democracy and public order. The courts are to preserve public trust in the justice system, not out of a self-interested desire to protect the power of the judiciary, but as a key element of maintaining the rule of law.[52] Therefore, they should not use algorithms like COMPAS and similar tools that raise grave concerns over civil liberties, accuracy and transparency. Increasing efficiency in criminal justice is undoubtedly tempting, but the state should not prioritise efficiency in delivering justice to the detriment of the other principles on which public confidence in the justice system rests. Trust is easy to lose but much harder and more costly to rebuild.
Bibliography
Bogert, Eric, Aaron Schecter and Richard T Watson, 'Humans Rely More on Algorithms than Social Influence as a Task Becomes More Difficult’ (2021) 11(1) Scientific Reports 8028
Dressel, Julia and Hany Farid, 'The Accuracy, Fairness, and Limits of Predicting Recidivism’ (2018) 4(1) Science Advances <https://www.science.org/doi/10.1126/sciadv.aao5580>
Guthrie, Chris, Jeffrey Rachlinski and Andrew Wistrich, 'Blinking on the Bench: How Judges Decide Cases’ [2007] Cornell Law Faculty Publications <https://scholarship.law.cornell.edu/facpub/917>
Harter, Karl, 'Zweckgedanke, Social Defence and Transnational Criminal Law: Franz von Liszt and the Network of Positivist Criminology (1871-1918) Studies’ (2020) 17 GLOSSAE: European Journal of Legal History 151
Jolls, Christine, Cass R Sunstein and Richard Thaler, 'A Behavioral Approach to Law and Economics’ (1998) 50(5) Stanford Law Review 1471
Measurement & Treatment Implications of COMPAS Core Scales (Northpointe Inc., 30 September 2009) <https://www.michigan.gov/-/media/Project/Websites/corrections/progserv/Folder1/Timothy_Brenne_PhD__Meaning_and_Treatment_Implications_of_COMPA_Core_Scales.pdf?rev=70b2e15249b849f6a3fbd8ba613506f6>
Mednis, Arwid, Prawo ochrony danych osobowych wobec profilowania osób fizycznych [eng. Personal Data Protection Law on profiling of natural persons] (PRESSCOM, 2019)
Pifferi, Michele, The Limits of Criminological Positivism: The Movement for Criminal Law Reform in the West, 1870–1940 (Routledge, 1st ed, 2021) <https://www.taylorfrancis.com/books/9780429323713>
Practitioner’s Guide to COMPAS (Northpointe Inc., 17 December 2017) <https://cjdata.tooltrack.org/sites/default/files/2018-10/Practitioners_Guide_COMPASCore_121917.pdf>
Practitioner’s Guide to COMPAS (2019) (Northpointe Inc., 4 April 2019) <https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf>
Schennach, Martin P, ‘Echoes of Karl Binding and Franz von Liszt? The Discussion between the „Classical School” and the „Positivist School” in Austria Studies’ (2020) 17 GLOSSAE: European Journal of Legal History 234
Sucher, Sandra J and Shalene Gupta, 'The Breach of the U.S. Capitol Was a Breach of Trust', Harvard Business Review (11 January 2021) <https://hbr.org/2021/01/the-breach-of-the-u-s-capitol-was-a-breach-of-trust>
Yalcin, Gizem et al, 'Perceptions of Justice By Algorithms’ (2023) 31(2) Artificial Intelligence and Law 269
R v Sussex Justices, ex parte McCarthy [1924] 1 KB 256
STATE vs LOOMIS [2016] Supreme Court of Wisconsin 881 N.W.2d 749 (2016)
'Algorithmic Due Process: Mistaken Accountability and Attribution in State v. Loomis', Harvard Journal of Law & Technology (31 August 2017) <https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1>
Dietvorst, Berkeley J, Joseph P Simmons and Cade Massey, 'Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err' (SSRN Scholarly Paper No 2466040, 6 July 2014) <https://papers.ssrn.com/abstract=2466040>
equivant, 'Response to ProPublica: Demonstrating Accuracy Equity and Predictive Parity', equivant (1 December 2018) <https://www.equivant.com/response-to-propublica-demonstrating-accuracy-equity-and-predictive-parity/>
'Evaluating the Predictive Validity of the COMPAS Risk and Needs Assessment System' <https://journals.sagepub.com/doi/epdf/10.1177/0093854808326545>
'Injustice Ex Machina: Predictive Algorithms in Criminal Sentencing', UCLA Law Review (19 February 2019) <https://www.uclalawreview.org/injustice-ex-machina-predictive-algorithms-in-criminal-sentencing/>
'Inside Track: The Loomis Case: The Use of Proprietary Algorithms at Sentencing', State Bar of Wisconsin <https://www.wisbar.org/NewsPublications/InsideTrack/stronas/Article.aspx?Volume=9&Issue=14&ArticleID=25730>
Larson, Jeff, Surya Mattu, Lauren Kirchner and Julia Angwin, 'How We Analyzed the COMPAS Recidivism Algorithm', ProPublica <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>
McLachlin, Beverley, 'Courts, Transparency and Public Confidence – To the Better Administration of Justice' (2003) 8(1) Deakin Law Review 1 <http://www5.austlii.edu.au/au/journals/DeakinLawRw/2003/1.html#fn30>
Prison Policy Initiative, 'Research Roundup: Incarceration Can Cause Lasting Damage to Mental Health' <https://www.prisonpolicy.org/blog/2021/05/13/mentalhealthimpacts/>
Yong, Ed, 'A Popular Algorithm Is No Better at Predicting Crimes Than Random People', The Atlantic (17 January 2018) <https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/>
[1] In the sense that health and freedom are primary aspects of human life, relating directly and unequivocally to human beings, considered in isolation from the healthcare or penitentiary systems, which have monetary connotations.
[2] Correctional Offender Management Profiling for Alternative Sanctions (hereinafter: “COMPAS”).
[3] Beverley McLachlin, 'Courts, Transparency and Public Confidence – To the Better Administration of Justice' (2003) 8(1) Deakin Law Review 1 <http://www5.austlii.edu.au/au/journals/DeakinLawRw/2003/1.html#fn30>.
[4] R v Sussex Justices, ex parte McCarthy [1924] 1 KB 256.
[5] In the sense that one understands the link between his or her action and its legal consequences.
[6] This quasi-definition was worked out on the basis of Practitioner’s Guide to COMPAS (Northpointe Inc., 17 December 2017) 1–6 <https://cjdata.tooltrack.org/sites/default/files/2018-10/Practitioners_Guide_COMPASCore_121917.pdf>.
[7] Karl Harter, ‘Zweckgedanke, Social Defence and Transnational Criminal Law: Franz von Liszt and the Network of Positivist Criminology (1871-1918) Studies’ (2020) 17 GLOSSAE: European Journal of Legal History 151, 159 (‘Zweckgedanke, Social Defence and Transnational Criminal Law’).
[8] Practitioner’s Guide to COMPAS (n 6) 5–6.
[9] Measurement & Treatment Implications of COMPAS Core Scales (Northpointe Inc., 30 September 2009) 319–322 <https://www.michigan.gov/-/media/Project/Websites/corrections/progserv/Folder1/Timothy_Brenne_PhD__Meaning_and_Treatment_Implications_of_COMPA_Core_Scales.pdf?rev=70b2e15249b849f6a3fbd8ba613506f6>.
[10] Arwid Mednis, Prawo ochrony danych osobowych wobec profilowania osób fizycznych [eng. Personal Data Protection Law on profiling of natural persons] (PRESSCOM, 2019) 31.
[11] Practitioner’s Guide to COMPAS (2019) (Northpointe Inc., 4 April 2019) 1 <https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf>.
[12] STATE vs LOOMIS [2016] Supreme Court of Wisconsin 881 N.W.2d 749 (2016), 93, 98.
[13] Ibid 89.
[14] 'Injustice Ex Machina: Predictive Algorithms in Criminal Sentencing', UCLA Law Review (19 February 2019) <https://www.uclalawreview.org/injustice-ex-machina-predictive-algorithms-in-criminal-sentencing/> ('Injustice Ex Machina').
[15] Like Hugo Hoegel, Karl Brunnenmeister, Emanuel Ullmann, Ferdinand Lentner, Anton Finger and Otto Friedman; see Martin P Schennach, 'Echoes of Karl Binding and Franz von Liszt? The Discussion between the "Classical School" and the "Positivist School" in Austria Studies' (2020) 17 GLOSSAE: European Journal of Legal History 234, 250 ('Echoes of Karl Binding and Franz von Liszt?').
[16] Sometimes referred to as the 'modern', 'sociological' or 'young German' (jungdeutsch) school, united around Franz von Liszt; see ibid 235.
[17] Michele Pifferi, The Limits of Criminological Positivism: The Movement for Criminal Law Reform in the West, 1870–1940 (Routledge, 1st ed, 2021) 48 <https://www.taylorfrancis.com/books/9780429323713> (‘The Limits of Criminological Positivism’).
[18] Schennach (n 15) 237–238.
[19] Pifferi (n 17) 44.
[20] Ibid 47.
[21] Harter (n 7) 159.
[22] Ibid. However, Liszt did not reject the relevance of anthropological/biological factors.
[23] Schennach (n 15) 251.
[24] Measurement & Treatment Implications of COMPAS Core Scales (n 9) 13.
[25] Pifferi (n 17) 48.
[26] Ibid 49–50.
[27] Measurement & Treatment Implications of COMPAS Core Scales (n 9) 18.
[28] Pifferi (n 17) 50.
[29] 'Evaluating the Predictive Validity of the COMPAS Risk and Needs Assessment System' 31 <https://journals.sagepub.com/doi/epdf/10.1177/0093854808326545>.
[30] Prison Policy Initiative, ‘Research Roundup: Incarceration Can Cause Lasting Damage to Mental Health’ <https://www.prisonpolicy.org/blog/2021/05/13/mentalhealthimpacts/> (‘Research Roundup’).
[31] Normally, people tend to think that bad events are far less likely to happen to them than to others. However, that is not the case where the threat is salient (such as being incarcerated). In such cases, people tend to overestimate the likelihood of being sanctioned. Christine Jolls, Cass R Sunstein and Richard Thaler, 'A Behavioral Approach to Law and Economics' (1998) 50(5) Stanford Law Review 1471, 1519, 1524–1525.
[32] Chris Guthrie, Jeffrey Rachlinski and Andrew Wistrich, ‘Blinking on the Bench: How Judges Decide Cases’ [2007] Cornell Law Faculty Publications 19–29 <https://scholarship.law.cornell.edu/facpub/917> (‘Blinking on the Bench’).
[33] Eric Bogert, Aaron Schecter and Richard T Watson, ‘Humans Rely More on Algorithms than Social Influence as a Task Becomes More Difficult’ (2021) 11(1) Scientific Reports 8028, 5.
[34] Gizem Yalcin et al, ‘Perceptions of Justice By Algorithms’ (2023) 31(2) Artificial Intelligence and Law 269, 276–284.
[35] Julia Dressel and Hany Farid, ‘The Accuracy, Fairness, and Limits of Predicting Recidivism’ (2018) 4(1) Science Advances <https://www.science.org/doi/10.1126/sciadv.aao5580>.
[36] Ibid 2.
[37] Ibid 1–2.
[38] Ibid 1.
[39] Ed Yong, ‘A Popular Algorithm Is No Better at Predicting Crimes Than Random People’, The Atlantic (17 January 2018) <https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/>.
[40] Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, 'How We Analyzed the COMPAS Recidivism Algorithm', ProPublica <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.
[41] equivant, ‘Response to ProPublica: Demonstrating Accuracy Equity and Predictive Parity’, equivant (1 December 2018) 1 <https://www.equivant.com/response-to-propublica-demonstrating-accuracy-equity-and-predictive-parity/> (‘Response to ProPublica’).
[42] 'Injustice Ex Machina' (n 14).
[43] Berkeley J Dietvorst, Joseph P Simmons and Cade Massey, ‘Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err’ (SSRN Scholarly Paper No 2466040, 6 July 2014) 10–11 <https://papers.ssrn.com/abstract=2466040> (‘Algorithm Aversion’).
[44] STATE vs LOOMIS (n 12) 51.
[45] Ibid 46–66.
[46] 'Injustice Ex Machina' (n 14).
[47] 'Inside Track: The Loomis Case: The Use of Proprietary Algorithms at Sentencing', State Bar of Wisconsin 53 <https://www.wisbar.org/NewsPublications/InsideTrack/stronas/Article.aspx?Volume=9&Issue=14&ArticleID=25730> ('Inside Track').
[48] STATE vs LOOMIS (n 12) 56.
[49] ‘Algorithmic Due Process: Mistaken Accountability and Attribution in State v. Loomis’, Harvard Journal of Law & Technology (31 August 2017) <https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1> (‘Algorithmic Due Process’).
[50] Larson et al (n 40).
[51] Sandra J Sucher and Shalene Gupta, ‘The Breach of the U.S. Capitol Was a Breach of Trust’ (11 January 2021) Harvard Business Review <https://hbr.org/2021/01/the-breach-of-the-u-s-capitol-was-a-breach-of-trust>.
[52] McLachlin (n 3).