Regulation of AI Algorithms: A White Elephant?

From personal privacy to algorithmic discrimination, how can we put AI back in its cage?

Recently, I have been re-watching my favorite American drama, ‘Better Call Saul’. Forced to take a break from his career as a lawyer and reduced to selling disposable phones, Jimmy paints a large sign on the window: “Is the man listening?” “Privacy sold here”. By broadcasting this simple idea, his phone store is quickly flooded with customers. The story takes place in New Mexico in 2003, before the age of social networks and recommendation algorithms, when the biggest privacy concerns were the eavesdropping on and tracking of cell phone calls.

‘Better Call Saul’, Season 4, Episode 5 (screenshot)

With the rapid development of artificial intelligence and big data technology, people now worry that sharing on the Internet will “kill” privacy altogether, and a series of problems such as algorithmic traps, algorithmic discrimination, and automation bias have come to the surface. Disposable phones are no longer the “perfect answer”.

Algorithms Amplify Bias and Discrimination

As a piece of technology, an algorithm may look value-neutral and free of bias; as a decision-making mechanism, however, it is deeply embedded in value judgments. As AI and other data-driven innovations race farther and faster ahead, the automation of racial bias is causing growing concern.

In recent years, prominent technology companies like Microsoft, Facebook and Google have all had high-profile problems with their technology. Facebook’s AI, for instance, made a grave error when it labeled a video of black men as “Primates”. Other companies, such as Nikon and HP, experienced similar problems, with their software misidentifying Asians as blinking in photos and struggling to detect people with dark skin. These failures point to a larger problem of structural inequality: the publicly available data sets used to train AI systems do not include sufficient data from ethnic minorities. Joy Buolamwini, a PhD student at the MIT Media Lab, has done significant research on how computers detect, recognize and classify people’s faces, and shares her experience of algorithmic bias in her talk “The Coded Gaze: Bias in Artificial Intelligence” (Equality Summit).

Her “Gender Shades” project tested the accuracy of AI-powered gender classification systems across different faces. It revealed that while three companies – IBM, Microsoft, and Face++ – achieve relatively high accuracy overall, with the best reaching about 94% on the whole dataset, they all perform worse on darker-skinned individuals, particularly darker-skinned females. IBM showed the most significant gap, with a 34% difference in error rates between lighter-skinned males and darker-skinned females. Gender Shades is only a preliminary excavation of algorithmic discrimination – it reflects the reality that “existing measures of success in AI don’t reflect the global majority” (Buolamwini, 2020) – and the deeper we dig, the more remnants of bias we will discover in our technology. As the world puts greater faith in technology, embedded biases affect black people in every aspect of their lives, reinforcing the narrowness of our society.

Intersectional analysis shows that all three companies perform worst on darker-skinned females (Source: Gender Shades)
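The audit behind these numbers is conceptually simple: run the classifier on a benchmark labeled by skin type and gender, then compare error rates per intersectional subgroup. The following is a minimal Python sketch of that bookkeeping, using hypothetical records rather than the real Gender Shades benchmark:

```python
# Minimal sketch of an intersectional error-rate audit in the spirit of
# Gender Shades; the records below are hypothetical, not the real benchmark.
from collections import defaultdict

# (skin_type, gender, classified_correctly) -- illustrative values only
records = [
    ("lighter", "male", True),   ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True),    ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, correct in records:
    totals[(skin, gender)] += 1
    errors[(skin, gender)] += 0 if correct else 1

# Report the error rate for each intersectional subgroup.
for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

Even this toy audit makes the pattern visible: aggregate accuracy can look acceptable while one subgroup bears most of the errors, which is exactly why Gender Shades reports intersectional rather than overall figures.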

“Human biases and values are embedded into each and every step of development. Computerization may simply drive discrimination upstream” (Pasquale, 2015, p. 35). One challenge lies in the inputs to an algorithm: data mining can have adverse effects on specific groups because of measurement errors, insufficient variables, improper selection, incomplete or outdated data, selection bias, or inappropriate category divisions (Susskind, 2018). This is not to accuse the creators of deliberate racism; more likely they were a group of light-skinned developers who tested the system on themselves and assumed that “it worked for us, so it must work for everybody”. No matter how clever the algorithms are, if they are fed a one-sided or misleading view of the world, they will not do justice to those who are hidden from view or standing in dim light.

The second challenge lies in how algorithms are applied. To address it, provisions have been introduced that prohibit the use of algorithms to discriminate against groups on the basis of race, belief, or gender, and that oppose both unintentional and malicious discrimination. For example, New York City’s Local Law 49 (LL49) was introduced in 2017 specifically to regulate the use of algorithmic decision-making by government agencies. However, regardless of design, an algorithm is bound to favor some groups more than others. And these disadvantaged groups, as Ding (2020), an associate professor at Renmin University of China, points out, might suffer just as much in a reality without algorithmic decision-making mechanisms, or even worse.

As far as laws and regulations permit, the application of algorithms to identifying, sorting, classifying and managing people will only increase. It is difficult, however, to ensure that enterprises using big-data algorithms will not classify people along socially, legally or morally sensitive lines (race, gender, sexual orientation, etc.). As Jamie Susskind (2018) puts it, “this makes algorithms a new and important mechanism of distributive justice.” The intelligible society, as opposed to the black-box society, is committed to ensuring that important decisions are fair, non-discriminatory, and open to criticism.

AI algorithms (automatically generated by Stable Diffusion)

Algorithms Challenge Governments’ Autonomous Decision-Making

It has to be admitted that algorithms have entered the decision-making agenda, from spam filtering, credit card fraud detection, search engines and trending news to advertising, insurance and loan eligibility, and credit scores. The “electronic police”, an early arrival in the field of administrative punishment, is the mainstream form of partially automated administration. In many cities, including Los Angeles, Chicago and Miami, police departments use software to analyze large amounts of historical crime data, predict where hot spots are most likely to emerge, and deploy officers accordingly. That can lead to increased policing in traditionally poor, nonwhite neighborhoods and less surveillance in wealthier white neighborhoods. Such algorithmic administrative decisions became even more prevalent during the COVID-19 period, sparking a debate about the trade-off between personal privacy and public health. In China especially, the health QR code played the role of a “passport”: the system assigned a color based on the health information and travel locations reported by each individual, determining whether he or she was allowed to travel smoothly or enter certain places.
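The rules behind such health-code systems were never published, but the paragraph above describes what is, in essence, an automated rule engine. Here is a hypothetical Python sketch of such a color decision, in which every threshold, field and area name is invented for illustration:

```python
# Hypothetical rule-based health-code color decision. The real systems'
# rules were not public, so every condition below is invented.

HIGH_RISK_AREAS = {"district_A", "district_B"}  # assumed risk list

def health_code_color(has_symptoms, recent_areas, days_since_negative_test):
    """Return 'red', 'yellow' or 'green' under the invented rules above."""
    if has_symptoms or days_since_negative_test > 7:
        return "red"     # travel blocked
    if recent_areas & HIGH_RISK_AREAS:
        return "yellow"  # restricted entry pending further testing
    return "green"       # free to travel

print(health_code_color(False, {"district_C"}, 2))  # -> green
print(health_code_color(False, {"district_A"}, 2))  # -> yellow
```

The point of the sketch is that a handful of hard-coded conditions can decide whether millions of people may board a train, which is precisely why the procedural-fairness questions below matter.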

Artificial intelligence algorithms undoubtedly have potential for personalization and customization, allowing differentiated governance for different subjects, which can help achieve good governance if given full play. But the procedural fairness and ethical issues that come with them cannot be ignored. A lengthy ProPublica investigation found that predictive algorithms in the criminal justice system unfairly target black individuals (Angwin et al., 2016). A study titled “Automated Inference on Criminality using Face Images” by Wu Xiaolin and Zhang Xi (2016) of Shanghai Jiao Tong University, which used machine learning to predict a person’s likelihood of committing a crime from facial features, sparked similar controversy. On the constructive side, in the Google blog post “Attacking discrimination with smarter machine learning”, Wattenberg and colleagues (2016) propose an improved approach, while cautioning that optimizing “equality of opportunity in supervised learning” is only one of many tools for improving machine learning systems and that mathematics alone cannot yield the best solution. Combating discrimination in machine learning therefore requires a considerate, multidisciplinary approach and a willingness to collaborate across disciplines.

In the “equal opportunity” scenario, applicants in the blue and orange groups who are able to repay have an equal chance of being granted a loan. Both profit and fairness are maximized, and overall the number of people who can obtain loans is also the largest. (Source: Google Research)
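The idea behind the figure can be made concrete. “Equality of opportunity” asks that, among applicants who would actually repay, each group is approved at the same rate, which is achieved by choosing a separate score threshold per group. Below is a minimal sketch with made-up scores; it illustrates the thresholding idea only and is not Google’s implementation:

```python
# Minimal sketch of "equality of opportunity" thresholding, using
# made-up credit scores; an illustration, not Google's code.

def true_positive_rate(scores, repays, threshold):
    """Share of applicants who would repay and whose score clears the bar."""
    approved = sum(1 for s, r in zip(scores, repays) if r and s >= threshold)
    total = sum(repays)
    return approved / total if total else 0.0

def threshold_for_target(scores, repays, target):
    """Highest threshold at which the group's TPR still meets the target."""
    for t in sorted(set(scores), reverse=True):
        if true_positive_rate(scores, repays, t) >= target:
            return t
    return min(scores)

# Hypothetical scores and ground-truth repayment outcomes for two groups.
blue_scores, blue_repays = [40, 55, 60, 70, 85], [0, 1, 1, 1, 1]
orange_scores, orange_repays = [30, 45, 50, 65, 80], [0, 0, 1, 1, 1]

target = 0.75  # approve at least 75% of would-be repayers in each group
print("blue threshold:", threshold_for_target(blue_scores, blue_repays, target))
print("orange threshold:", threshold_for_target(orange_scores, orange_repays, target))
# Different cut-offs per group equalize opportunity for those who can repay.
```

Note that the two groups end up with different numeric cut-offs; that asymmetry is the mechanism, not a bug, and choosing it deliberately is exactly the kind of value judgment the authors say mathematics alone cannot settle.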

“In a lot of ways Facebook is more like a government than a traditional company,” Facebook CEO Mark Zuckerberg (2018) said bluntly. In today’s world, Internet platform companies increasingly shape the global environment in which governments operate. No longer mere tools in the hands of government, they participate in global affairs alongside the state apparatus, and the concept of “data security” has expanded from “personal privacy” and “corporate secrets” into the context of “national security”. These companies have demonstrated the power to block former US President Donald Trump’s Twitter account, and to force sales of, restrict and control other Internet companies (a simple example is the success or failure of apps on the App Store).

In the economic marketplace, however, high profitability does not in itself imply harm to competition, as long as business success is achieved through data-driven innovation rather than through the use of big data to discriminate against competitors, impose switching costs, enforce exclusive contracts, or engage in other forms of abuse. Likewise, algorithms that determine information visibility and social status are not inherently negative, as long as the new methods deliver more justice than the old ones.

The Legal Framework of Algorithmic Regulation

When it comes to the relationship between power and technology, the realization and protection of civil rights have not kept pace. Algorithmic power has emerged amid weakening government regulation, leaving citizens unable to participate in, understand, or oversee it. As Elon Musk (2014) warned, if AI is not developed properly, we risk “summoning the demon”.

To create “good governance” in an intelligent society, three foundational principles must be upheld: fairness, accountability, and transparency. Three traditional legal methods have accordingly been employed: algorithmic openness, personal data empowerment, and anti-algorithmic discrimination. The US Privacy Act and the EU General Data Protection Regulation (GDPR) establish policies for privacy protection, define security responsibilities for data storage and processing, and mandate post-hoc review. The UK’s Data Protection Act, introduced in 2017 and enacted in 2018, strengthened the “informed consent” system and added many new conditions for individual consent. Meanwhile, China formulated the “Methods for Identification of Applications’ Illegal Collection and Use of Personal Information” to clarify standards for compulsory authorization, excessive claims, and excessive collection of personal information. Although much of the existing literature does not explain how to construct patterns consistent with the algorithmic and technological revolution, the landscape is changing rapidly, with cocoons, loops, and iterations happening all the time. There is no doubt, however, that human beings still control the direction of social progress, and only through critical thinking can we navigate it.

As for automated decision-making by algorithms, practical experience tells us that self-regulation by companies and industries has not proved a successful paradigm in the digital realm. Instead, external regulatory bodies need to assume the authority and tasks of regulation. The GDPR, in particular, emphasizes that data management should be industry-led with appropriate intervention by regulatory bodies, fully mobilizing the spontaneous force of the market to achieve industry self-discipline. It came into force in 2018 and swiftly became the global standard. Countries around the world have since introduced a range of laws and regulations to govern AI, with varying approaches to regulating and protecting personal and important data: Singapore applies a “balanced approach”, China has the Personal Information Protection Law (PIPL), India has the Personal Data Protection Bill, and UNESCO has adopted the “first global standard on the ethics of artificial intelligence”. The core idea behind these regulations is to allow the rapid decision-making that comes with technological innovation on the one hand, while preserving the ability to reject the results of automated administrative algorithms on the other. Take the PIPL as an example: it shifts the earlier business-centered thinking to a regulatory perspective, with the state as the subject of regulation, establishing a top-down system of data classification and graded protection. From a data-security supervision perspective, it targets three types of data: personal information, important data, and any records of information in electronic or non-electronic form. Regulation of this type can help ensure that personal information and important data are properly protected, while still allowing the benefits of technological innovation and rapid decision-making.

Cyberpunk hacker in AI era (automatically generated by Stable Diffusion)

We are now at a moment when all rivers flow into the sea: digital technologies and legal regulations are both constantly evolving and presenting new challenges. As these technologies become more advanced and ubiquitous, it is crucial to ensure that they are used responsibly and ethically to protect individuals’ privacy and rights. Unfortunately, the black box of algorithms is often kept closed, making the problems that arise in society difficult to solve, and even when it is opened, the questions are not always perfectly answered. It is these imperfections and limitations that drive the need for regulation to promote economic efficiency, environmental sustainability, ethics, and overall public welfare. Yet because of those same imperfections and limitations, regulation itself is bound to be imperfect.

As legal scholar Barak Orbach (2012) put it, “Society’s challenge, therefore, is to acknowledge that imperfections and limitations impair decision-making, communication, and trade, and to utilize legal institutions to address them. In other words, we should accept the fact that regulation is here to stay, and work to maximize its benefits and minimize its costs.”

Edited by Eva, Yujie Wu

Published on April 11, 2023


References

Barak Orbach. (2012). What is Regulation? In Regulation: Why and How the State Regulates. Foundation Press.

Bloomberg Live. (2019, March 30). The Coded Gaze: Bias in Artificial Intelligence | Equality Summit [Video]. YouTube. https://www.youtube.com/watch?v=eRUEVYndh9c

Bloomberg Originals. (2014, November 25). Tesla’s Elon Musk: We’re ‘Summoning the Demon’ with Artificial Intelligence [Video]. YouTube. https://www.youtube.com/watch?v=Tzb_CSRO-0g

EU. General Data Protection Regulation (GDPR). https://gdpr-info.eu

Gender Shades. (n.d.). Retrieved April 6, 2023, from http://gendershades.org/overview.html

UK Government. The Data Protection Act. https://www.gov.uk/data-protection

Governance and Strategic Affairs. The Personal Data Protection Bill, 2019. 164.100.47.4/BillsTexts/LSBillTexts/Asintroduced/373_2019_LS_Eng.pdf

IBM. (n.d.). Fairness, Accountability, Transparency. https://research.ibm.com/topics/fairness-accountability-transparency

Jamie Susskind. (2018). Future Politics: Living Together in a World Transformed by Tech. Oxford University Press.

Joy Buolamwini. (2020, August). Project Gender Shades. MIT Media Lab. https://www.media.mit.edu/projects/gender-shades/overview/

Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner. (2016, May 23). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Koo Peter. (2020, December 8). NYC Local Law 49: A First Attempt at Regulating Algorithms. Foundations of Law and Society. https://foundationsoflawandsociety.wordpress.com/2020/12/08/nyc-local-law-49-a-first-attempt-at-regulating-algorithms/

Martin Wattenberg, Fernanda Viégas, & Moritz Hardt. (2016). Attacking discrimination with smarter machine learning. Google Blog. https://research.google.com/bigpicture/attacking-discrimination-in-ml/

Pasquale F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch

Personal Data Protection Commission Singapore. Singapore’s Approach to AI Governance.  https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework

The State Council Information Office of the People’s Republic of China. Methods for Identification of Applications’ Illegal Collection and Use of Personal Information (《App违法违规收集使用个人信息行为认定方法》). http://www.scio.gov.cn/xwfbh/xwbfbh/wqfbh/42311/44109/xgzc44115/Document/1691066/1691066.htm

The National People’s Congress of the People’s Republic of China. Personal Information Protection Law (《中华人民共和国个人信息保护法》). www.npc.gov.cn/npc/c30834/202108/a8c4e3672c74491a80b53a172bb753fe.shtml

The White House. (2016, May). Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf

UNESCO. (2022, April 8). UNESCO adopts first global standard on the ethics of artificial intelligence. https://www.unesco.org/en/articles/unesco-adopts-first-global-standard-ethics-artificial-intelligence

US Department of Justice. Overview of the Privacy Act of 1974 (2020 Edition). https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition

Wu, X., & Zhang, X. (2016). Automated Inference on Criminality using Face Images. arXiv, abs/1611.04135.

Xiaodong Ding. (2020). On the Legal Regulation of Algorithms. Social Sciences in China, (12), 138–159, 203.
