AI in the Courtroom: From Supportive Software to AI Judge


Introduction

The release of ChatGPT by OpenAI in late 2022 aroused both the interest and the concern of the general public about artificial intelligence (AI). According to OpenAI (2023), its latest model, GPT-4, scores in the top 10% of test takers on a simulated bar exam, whereas GPT-3.5, released only months earlier, scored in the bottom 10%. This rapid progress has sparked wide discussion of AI within the judicial community, where its use is already expanding quickly. In recent years, AI has been used to assist judges in making decisions, automate certain tasks, and even predict the outcomes of cases. AI is therefore seen as a useful tool for improving the efficiency and effectiveness of the judicial system. However, certain characteristics of AI have also raised concerns about its use in the justice system, such as its lack of transparency, its potential for bias, and unresolved questions of legal authority and accountability. The purpose of this blog post is to explore the role, benefits, and pitfalls of AI in the justice system, and to suggest possible regulatory responses to the challenges facing the use of AI in the legal field.

Applications

The use of artificial intelligence in the judicial system did not receive much attention until the 21st century. In recent years, advances in machine learning, natural language processing, and data analysis have led to more sophisticated AI systems that can conduct legal research, support judicial decision-making, and offer reasonable predictions to lawyers and judges (BBC, 2021). In legal research, AI assists with literature analysis, case law research, and contract analysis. AI systems can quickly analyze large volumes of legal documents and surface relevant insights, reducing the time and cost of manual research. In one complex murder case, as many as 10,000 documents needed to be reviewed and analyzed; compared to human lawyers, an AI system completed the review four weeks earlier and saved £50,000 in expenses (Belton, 2021). Demand for AI-driven legal services is also growing rapidly, since AI can design defence strategies and search for citable judicial precedents. ROSS, billed as the world's first artificially intelligent attorney and powered by IBM's Watson technology, keeps close track of the latest developments in the law, such as major court decisions and events that may significantly affect a lawyer's case, so that its capabilities stay up to date (Jesus, 2016; Turner, 2016). Moreover, AI can provide decision support through prediction, such as sentencing recommendations and forecasts of case outcomes. These systems use machine learning algorithms to analyze historical legal data, such as the outcomes of prior cases, criminal history, and demographic information, and then predict the outcome of the current case or recommend a sentence. For example, COMPAS is used in some U.S. states to predict the probability that an offender will reoffend and to help judges determine appropriate sentences.
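
To make the prediction step concrete, the sketch below trains a simple classifier on synthetic historical data and turns its predicted probability into a "risk score." Everything here is hypothetical: the feature names, the data, and the model are illustrative stand-ins, not the internals of COMPAS or any real system.

```python
# Minimal sketch of outcome prediction from historical case data.
# All features, data, and weights are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: prior offenses, age, offense severity (1-5).
priors = rng.poisson(2, n)
age = rng.integers(18, 70, n)
severity = rng.integers(1, 6, n)
X = np.column_stack([priors, age, severity])

# Synthetic "reoffended within two years" labels, for illustration only.
logits = 0.6 * priors - 0.03 * (age - 18) + 0.4 * severity - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# The predicted probability becomes the "risk score" shown to a decision-maker.
new_case = [[4, 23, 3]]  # 4 priors, age 23, severity 3
print(f"Predicted risk: {model.predict_proba(new_case)[0, 1]:.2f}")
```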

Advantages

Increase Efficiency


One of the biggest advantages of using AI in the judicial system is the gain in efficiency and the savings in labor costs. Because of the complex nature of the law, many countries are facing case backlogs caused by a shortage of available judges and lawyers. Brazil had a backlog of more than 100 million court cases before the COVID-19 pandemic, and more than 40 million cases were pending in Indian lower courts at the end of January 2021 (Belton, 2021; Reuters, 2021). Handling caseloads of this scale is nearly impossible for human judges and lawyers, but feasible for AI. Back in 2015, the DoNotPay AI helped overturn 160,000 New York parking tickets in just 21 months, and by 2022 it had cumulatively resolved more than 2 million cases (Mazuru, 2023).

Reduce Bias

Another benefit of using AI in the justice system is that it can reduce bias. Human decisions are often influenced by bias and prejudice, frequently without the decision-maker being aware of it (Mason, 2001); algorithms can go some way toward screening out factors that should have no legal bearing on individual cases, such as gender and race. Adjudicative outcomes have also been shown to be affected by extraneous factors such as the time of day, when and what the judge has eaten, the judge's personal values, and the attractiveness of the parties (Agthe et al., 2011; Danziger et al., 2011; Quintanilla, 2012). Because AI systems operate according to predefined rules, they can help ensure that similar cases are handled consistently and treated relatively fairly.

Limitations and Concerns

While AI offers potential advantages in the justice system, there are also limitations and concerns to consider.

Bias

Bias in AI can come from two sources: the training data and the algorithm itself. The development of an AI system depends on the data it is trained on; if that data is biased, the AI system may perpetuate and amplify the bias, producing unfair decisions and a form of technological redlining, especially for disadvantaged groups (Noble, 2018). The second source of bias is the algorithm itself. Take an AI that predicts the length of a sentence: the system weighs different variables in the data it analyzes, such as age, to produce its result, the length of the sentence. Even if a system excludes race as a variable in order to reduce racial discrimination, other variables can implicitly affect the outcome, such as home address, because address is usually correlated with race, income, and social class (Grimm et al., 2022).
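
The proxy effect is easy to reproduce. In the sketch below, the labels come from synthetic "historical" data in which one neighborhood was policed more heavily, and the model is trained with the protected attribute deliberately excluded; the group disparity survives anyway, carried by the correlated neighborhood variable. All variables and correlations are invented for illustration.

```python
# Minimal sketch of proxy bias: a model trained WITHOUT the protected
# attribute still scores the two groups differently, because a proxy
# (neighborhood) is correlated with both the group and the biased labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                    # protected attribute
# Neighborhood matches group membership ~80% of the time.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)
priors = rng.poisson(2, n)

# Historical labels reflect heavier enforcement in neighborhood 1,
# so they encode bias even though "group" is never a feature.
logits = 0.5 * priors + 1.0 * neighborhood - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([neighborhood, priors])      # group deliberately excluded
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Average scores still differ by group, via the proxy.
print(f"Mean score, group 0: {scores[group == 0].mean():.3f}")
print(f"Mean score, group 1: {scores[group == 1].mean():.3f}")
```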

The Language Behind AI

Searle (2002) points out that the syntax of a computer program is independent of the semantics of the sentences it processes. In other words, a program simply treats information as symbols and manipulates them according to predetermined syntactic rules; it can function perfectly well without knowing the meaning of the information it is processing. The same applies to artificial intelligence programs. When an AI program analyzes a case or predicts its outcome, it may not grasp the background or contextual meaning of the case, which can lead to inaccurate or incorrect results. Moreover, the AI used in the judicial system is often written by IT professionals rather than legal professionals; in the process of converting legal texts into code and commands, IT professionals may misinterpret the law and build those misinterpretations into the AI (Sourdin, 2018).
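
Searle's distinction can be made concrete with a toy program. The "responder" below follows purely syntactic rules, matching character patterns and emitting canned outputs; it would behave identically if every word were swapped for a meaningless token. The rules and queries here are invented for illustration.

```python
# Toy illustration of syntax without semantics: pattern in, pattern out,
# with no representation anywhere of what the symbols mean.
RULES = {
    "was a weapon recovered": "refer to exhibit list",
    "does the defendant have priors": "query criminal-record table",
}

def respond(query: str) -> str:
    # Matching operates on character sequences alone; "meaning" never enters.
    for pattern, answer in RULES.items():
        if pattern in query.lower():
            return answer
    return "no rule matched"

print(respond("Does the defendant have priors?"))  # -> query criminal-record table
```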

Authority and Accountability

The use of AI in the judicial system also affects the authority of the law and raises challenges of accountability. At the current stage, AI judges are used only in smaller civil courts; in Estonia, for example, an AI judge handles small claims of up to £6,000 (Pinkstone, 2019). The decisions of these AI judges are legally binding, and either party to a lawsuit can appeal to a human judge if dissatisfied with the outcome. In criminal courts, AI is primarily used for sentencing recommendations or parole evaluations, as with COMPAS in the U.S.

However, the automation of decision-making can weaken the authority of the judge. A sentence or recommendation proposed by AI is based on historical data and may be biased or overlook the context of the case. If the judge's decision differs from the AI's, the defendant and the public may question the authority of the judge's ruling. In this way, the AI's output effectively encroaches on the human judge's discretion in individual cases and interferes with independent judicial power (Koudeman, 2023).

The non-transparent nature of AI decision-making may also harm judicial authority. The algorithms used by AI systems are often withheld from the public as trade secrets (Varosanec, 2022). If the public or a judge questions a recommendation, or suspects the AI is biased with respect to a variable such as race or gender, the system cannot provide a detailed explanation of how it reached its conclusion. Such black-box operation can likewise lead the public to question the authority of the law.

These situations also raise questions of accountability. The rapid growth of AI in the judicial system may obscure the role of human judges as decision-makers and make it difficult to attribute responsibility for decisions. For example, if courts use AI systems to make decisions, or if human judges follow AI recommendations even when they disagree, it becomes difficult to determine who is ultimately responsible when a decision is later found to be biased or unfair.

Case Study

Bernard Parker, left, was rated high risk; Dylan Fugett was rated low risk. (Source: Josh Ritchie for ProPublica)

Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, is an AI system used to predict a "risk score" for defendants. COMPAS takes information from a questionnaire and the defendant's criminal record and uses it to evaluate the defendant's risk of committing another crime in the future, then makes recommendations to the judge based on the level of risk, from the amount of bail to the length of the sentence (Jackson & Mendoza, 2020). Although the final judgment is still made by the judge, there is no doubt that COMPAS can affect people's freedom. COMPAS has been used extensively in U.S. courts for more than 20 years and has evaluated more than one million defendants (Taylor, 2020). However, in 2016, ProPublica's research found significant racial disparities in COMPAS's scores. Black defendants who did not go on to reoffend were nearly twice as likely as white defendants to be wrongly flagged as future criminals, and black defendants were more likely than white defendants to be classified as high risk overall (Angwin et al., 2016). Although race itself was excluded as an input because of legal prohibitions, interpersonal relationships, family background, employment status, and home address were among the variables used to compute risk scores (Angwin et al., 2016; Taylor, 2020). Racial minorities, such as African Americans, are already disadvantaged in education and income, and COMPAS decisions that appear neutral but in effect encode racial profiling create a technological redlining for disadvantaged groups (Noble, 2018).

Black Defendants’ Risk Scores (Source: ProPublica analysis of data from Broward County, Fla.)
White Defendants’ Risk Scores (Source: ProPublica analysis of data from Broward County, Fla.)
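
The disparity ProPublica reported is, at its core, a gap in false positive rates: among defendants who did not reoffend, how often each group was nonetheless flagged as high risk. The sketch below computes that audit metric on synthetic data skewed to mimic the reported pattern; for the real numbers, see the Broward County dataset ProPublica released.

```python
# Minimal sketch of a group-fairness audit: compare false positive rates
# (non-reoffenders flagged high risk) across groups. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 and 1 are illustrative groups
reoffended = rng.random(n) < 0.35        # ground-truth outcome
# Flagging rates skewed upward for group 1 to mimic the reported disparity.
high_risk = rng.random(n) < np.where(group == 1, 0.45, 0.25)

for g in (0, 1):
    innocent = (group == g) & ~reoffended    # members who did NOT reoffend
    fpr = (high_risk & innocent).sum() / innocent.sum()
    print(f"Group {g} false positive rate: {fpr:.1%}")
```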

Moreover, judging individuals based on group averages is itself questionable. An AI's judgment about an individual is a prediction derived from the group averages of historical data; quite apart from whether that data is biased, any individual may differ from the group. According to the English jurist William Blackstone, it is better that ten guilty persons escape than that one innocent suffer (Blackstone, 1753). When hearing a case, then, the judge should base the decision on the case itself, not on the tendency of the defendant's group to behave in a certain way (Grimm et al., 2022).

Regulation and Oversight

The COMPAS case demonstrates the need for regulation and oversight of AI in the justice system. First, AI used in the justice system needs independent auditing and testing to ensure its validity, fairness, and accuracy; there is only a small amount of independent research on crime risk assessment, and much of the validity review is done by the developers themselves (Grimm et al., 2022). Second, AI used in the justice system needs to be more transparent and to operate less as a black box. Some of this AI is developed by for-profit companies, which may refuse to disclose their algorithms on the grounds of trade secrets. AI systems used in the justice system, however, should be able to provide clear, easy-to-understand explanations of their decisions. The EU's General Data Protection Regulation (GDPR), for example, requires organizations that use AI to provide explanations of automated decisions that significantly affect individuals (Wu, 2023). Finally, there is a need for clear accountability: if a miscarriage of justice occurs because of AI, who is responsible, the developer of the AI or its user? And to what degree?
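
What might such an explanation look like in practice? For a simple linear risk model, one honest form is an additive breakdown: how much each input pushed this particular score up or down. The sketch below is one hypothetical way to produce it; the feature names and weights are invented, and real systems would need richer methods (for example, SHAP-style attributions) for non-linear models.

```python
# Minimal sketch of a per-decision explanation for a linear risk model:
# report each feature's additive contribution to this defendant's score.
# Feature names and fitted weights are hypothetical.
import numpy as np

feature_names = ["prior_offenses", "age", "employed"]
weights = np.array([0.60, -0.03, -0.40])   # coefficients of a fitted model
intercept = -1.0

def explain(x: np.ndarray) -> None:
    contributions = weights * x            # each feature's additive effect
    score = 1 / (1 + np.exp(-(contributions.sum() + intercept)))
    print(f"Risk score: {score:.2f}")
    # List features from most to least influential for this one case.
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda item: -abs(item[1])):
        direction = "raises" if c > 0 else "lowers"
        print(f"  {name} {direction} the score by {abs(c):.2f} (pre-sigmoid)")

explain(np.array([4, 23, 0]))  # 4 priors, age 23, unemployed
```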

Conclusion

This blog post has provided a short introduction to the use of AI in the judicial system and an analysis of its benefits and concerns. AI can offer the judicial system many potential benefits, such as greater efficiency, labor cost savings, and reduced judicial bias. However, its disadvantages cannot be ignored: bias introduced by training data or algorithms, lack of transparency, effects on legal authority, and unclear accountability. In response, governments should gradually build out regulation and oversight to ensure the authority and fairness of the law.

References

Agthe, M., Spörrle, M., & Maner, J. K. (2011). Does Being Attractive Always Help? Positive and Negative Effects of Attractiveness on Social Decision Making. Personality & Social Psychology Bulletin, 37(8), 1042-1054. https://doi.org/10.1177/0146167211410355

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

BBC. (2021). Artificial Intelligence AI and the Law: What it would be like to have a robot as your lawyer. https://www.bbc.com/zhongwen/simp/science-58236166

Belton, P. (2021). Would you let a robot lawyer defend you? BBC. https://www.bbc.com/news/business-58158820

Blackstone, W. (1753). Commentaries on the Laws of England in Four Books (Vol. 2). J.B. Lippincott Co.

Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892. https://doi.org/10.1073/pnas.1018033108

Grimm, P. W., Grossman, M. R., Gless, S., & Hildebrandt, M. (2022). Artificial Justice: The Quandary of AI in the Courtroom. Judicature International. https://judicature.duke.edu/articles/artificial-justice-the-quandary-of-ai-in-the-courtroom/

Jackson, E., & Mendoza, C. (2020). Setting the Record Straight: What the COMPAS Core Risk and Need Assessment Is and Is Not. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.1b3dadaa

Jesus, C. D. (2016). AI Lawyer “Ross” Has Been Hired By Its First Official Law Firm. Futurism. https://futurism.com/artificially-intelligent-lawyer-ross-hired-first-official-law-firm

Koudeman. (2023). Is the era of AI judging humans coming? Can ChatGPT-4 draft judgments instead of judges? https://opinion.udn.com/opinion/story/11678/7048031#sup_8

Mason, K. (2001). Unconscious judicial prejudice. Australian Law Journal, 75(11), 676-687.

Mazuru, M. (2023). DoNotPay AI-Powered Robot Lawyer Will Soon Defend Human Being in Its First U.S. Court Case. autoevolution. https://www.autoevolution.com/news/donotpay-ai-powered-robot-lawyer-will-soon-defend-human-being-in-its-first-us-court-case-208013.html

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

OpenAI. (2023). GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses. https://openai.com/product/gpt-4

Pinkstone, J. (2019). AI-powered JUDGE created in Estonia will settle small court claims of up to £6,000 to free up professionals to work on bigger and more important cases. Daily Mail. https://www.dailymail.co.uk/sciencetech/article-6851525/Estonia-creating-AI-powered-JUDGE.html

Quintanilla, V. (2012). Different Voices: A Gender Difference in Reasoning About the Letter Versus Spirit of the Law [CELS 2012 Poster Submission]. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2106005

Reuters. (2021). India has court backlog of 40 million cases, chief justice says. Reuters. https://www.reuters.com/world/india/india-has-court-backlog-40-million-cases-chief-justice-says-2022-04-30/

Searle, J. R. (2002). Can Computers Think? In D. J. Chalmers (Ed.), Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press.

Sourdin, T. (2018). Judge v Robot?: Artificial intelligence and judicial decision-making. University of New South Wales Law Journal, 41(4), 1114-1133. https://doi.org/10.53637/ZGUX2213

Taylor, A. (2020). AI Prediction Tools Claim to Alleviate an Overcrowded American Justice System… But Should they be Used? https://stanfordpolitics.org/2020/09/13/ai-prediction-tools-claim-to-alleviate-an-overcrowded-american-justice-system-but-should-they-be-used/

Turner, K. (2016). Meet ‘Ross,’ the newly hired legal robot. The Washington Post. https://www.washingtonpost.com/news/innovations/wp/2016/05/16/meet-ross-the-newly-hired-legal-robot/

Varosanec, I. (2022). Silence is golden, or is it? Trade secrets versus transparency in AI systems. The Digital Constitutionalist. https://digi-con.org/silence-is-golden-or-is-it/

Wu, S. (2023). Tracing AI Decisions: AI Explainability and the GDPR. https://www.airoboticslaw.com/blog/ai-explainability-gdpr
