A Comprehensive Guide to the Algorithmic Age of Discrimination

When Illusions Fade

What’s the first thing you do when you’re shopping for a laptop? You search for laptops on Google and apply a few filters. You also search on Amazon, where things look less appealing after the first page or two. You search YouTube to see what laptops people like you are using, and reviews of the models you have already Googled appear. You decide on a model. Google tells you which nearby retailers sell it for the lowest price. You take advantage of the discount and buy it. You’ve made a smart purchase. Because the algorithm sorts products by reviews and prices while accounting for your tastes, budget, and location, you can confidently choose the best option without visiting multiple stores or asking around.

However, Angwin et al. (2015) found that The Princeton Review priced its online SAT courses differently depending on the customer’s zip code, ranging from $6,600 to $8,400, and the areas charged higher prices tended to have larger Asian populations. How would you feel if you were charged more because of the zip code you happened to enter? We hold the illusion that algorithms help us make rational decisions, but in reality they can also be tools of bias and discrimination. As algorithms become more prevalent, we face their biases not only as consumers but also as job seekers and citizens.

What is an Algorithm?

An algorithm is a set of instructions for solving a problem or accomplishing a task (Rainie & Anderson, 2017). To put it a little more formally, “Algorithms are the rules and processes established for activities such as calculation, data processing, and automated reasoning” (Flew, 2021, p. 83). In the example above, an algorithm was used to make the decision to buy a new laptop. Machine learning algorithms, which use training data and mathematical descriptions of objectives to produce predictions, decisions, or actions as output (Australian Human Rights Commission, 2020), are expected to make such decisions much faster and with fewer of the errors that human judgment makes. The presentation of algorithms’ technical nature as a guarantee of fairness is also known as “dataism” (Flew, 2021, p. 84).
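
To make the definition concrete, here is a minimal sketch in Python of how a machine learning algorithm turns training data into a prediction. It assumes the scikit-learn library is available, and every feature and number is invented for illustration rather than taken from any study cited here.

```python
# Illustrative only: a tiny machine learning model that learns from made-up
# historical data (the training data) and outputs a prediction.
from sklearn.linear_model import LogisticRegression

# Each row describes a past laptop purchase: [price in $100s, average review
# score]; the label says whether the buyer was satisfied (1) or not (0).
X_train = [[6, 4.5], [12, 3.0], [8, 4.8], [15, 2.5], [7, 4.2], [11, 3.1]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # the "learning" step

# Prediction for a new laptop: $900, 4.6-star average review.
print(model.predict([[9, 4.6]]))   # e.g. [1] -> "likely a good choice"
```

Whatever patterns sit in those six historical rows, good or bad, are exactly what the model will reproduce, which is why the source of the data matters so much in the sections that follow.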

“We humans are very bad at making decisions.”
Gartner analyst Nigel Rayner (Thibodeau, 2011)

Where Are the Biases From?

Then, where do the biases in supposedly neutral algorithms come from? I’ve given you a hint: it’s the training data that the machine learning algorithm learns from. Training data is data that characterizes a historical cohort, including all the variables used to inform the model’s choices (Australian Human Rights Commission, 2020). Therefore, bias occurs if the data itself is biased or if the machine learning training process is flawed (Flew, 2021).

Causes of Algorithm Bias (Kanev, n.d.)
Contaminated Data

As data about past decisions is used to train algorithms, past discrimination can be reflected in the algorithm and bias current decisions. A classic example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a tool used in the United States to assess defendants’ risk of future crimes. The program’s algorithm classified African Americans as higher risk more often than whites, and, as the table below shows, African American defendants who did not re-offend were nearly twice as likely as white defendants to have been labeled higher risk (Angwin et al., 2016). This is because the historical racism of the U.S. criminal justice system was mirrored in the training data. Nevertheless, judges referenced these risk scores when determining actual sentences.

                                            White    African American
Labeled Higher Risk, But Didn’t Re-Offend   23.5%    44.9%
Labeled Lower Risk, Yet Did Re-Offend       47.7%    28.0%
Prediction Errors for White and African American Defendants (Angwin et al., 2016)
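
To see how error rates like those in the table are computed, here is a short Python sketch. The data are synthetic and purely illustrative, not ProPublica’s figures; the point is only to show how one scoring rule can produce different error rates for different groups.

```python
# Illustrative sketch: computing group-wise error rates of the kind shown in
# the table above. All data below are synthetic.
def error_rates(actual, predicted):
    """actual: 1 = re-offended, 0 = did not. predicted: 1 = labeled higher risk."""
    fp = sum(1 for y, p in zip(actual, predicted) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(actual, predicted) if y == 1 and p == 0)
    negatives = sum(1 for y in actual if y == 0)
    positives = sum(1 for y in actual if y == 1)
    return fp / negatives, fn / positives   # false positive rate, false negative rate

# Two hypothetical groups scored by the same model: (actual, predicted).
groups = {
    "Group A": ([0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 0]),
    "Group B": ([0, 0, 0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 0, 1, 1, 0]),
}
for name, (actual, predicted) in groups.items():
    fpr, fnr = error_rates(actual, predicted)
    print(f"{name}: labeled higher risk but didn't re-offend {fpr:.0%}, "
          f"labeled lower risk yet did re-offend {fnr:.0%}")
```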

In 2014, Amazon developed an algorithm to review and evaluate resumes, but it was soon discovered that the program was not gender-neutral. This was because the program had been trained to screen applicants based on patterns in the resumes submitted to the company over the previous decade. Reflecting the male dominance of the tech industry, the algorithm penalized resumes that included the word “women’s,” as in “captain of the women’s chess club” (Dastin, 2018).

Operators

The people who develop and operate algorithms are the second source of algorithmic bias. During her research, Buolamwini of the MIT Media Lab found that facial recognition software failed to recognize her face. While this is partly due to unrepresentative data, she believes the underlying problem is that such algorithms are typically written by white engineers, who dominate the tech field. When constructing algorithms, coders focus on facial features that are more visible in some races than in others and test their software mostly on white subjects (Breland, 2017). As of 2018, only 17% of U.S. computer science graduates were women, and according to Google’s 2018 diversity report, its workforce was 53% white, 36% Asian, 4% Hispanic, 3% black, and less than 1% American Indian (House of Representatives, 2020, pp. 146-147).

Buolamwini’s Speech on Algorithmic Bias

Sometimes, algorithms are intentionally designed to produce discriminatory results. In 2016, it was revealed that Facebook allowed advertisers to exclude people from housing ads by race or to keep older people from seeing job ads (Dwoskin, 2018). Such discrimination is illegal: U.S. federal law prohibits employers and landlords from excluding people from ads based on “protected categories” such as gender, race, religion, and age. However, we may never realize when an algorithm has quietly excluded us.

Algorithmic bias spans gender, race, age, and more,
affecting policing, employment, housing, and other areas.

Despite the examples above, some believe that algorithms just haven’t developed enough yet. Microsoft’s LinkedIn, the world’s largest professional network, offers employers algorithmic rankings of candidates based on their suitability. John Jersin, vice president of LinkedIn Talent Solutions, said, “I certainly would not trust any AI system today to make a hiring decision on its own. The technology is just not ready yet.” (Dastin, 2018)

Why Do We Need to Care?

At this point, you might be thinking, “But aren’t humans also biased?” A study comparing the recidivism predictions of COMPAS and humans found that both COMPAS and humans were similarly unfair to black defendants (Dressel & Farid, 2018). This suggests that human predictions are not without bias. Furthermore, even though critics argue that it is hard to know why discrimination occurs with algorithms, isn’t human bias harder to detect? Research showing that resumes with Caucasian-sounding names receive 50% more interview callbacks than those with African-American-sounding names indicates the existence of implicit racism in the U.S. labor market, where such discrimination is explicitly prohibited (Bertrand & Mullainathan, 2004). 

Nevertheless, why is algorithmic discrimination particularly problematic? I argue that we should care about algorithmic bias for two reasons. The first difference between human and algorithmic bias is substantive: algorithms reinforce bias at a scale and speed humans cannot match. The second is intrinsic and procedural: algorithmic decisions lack accountability.

Reinforcement of Biases

Bias is “systematically and unfairly discriminating against certain individuals or groups of individuals in favor of others” (Friedman & Nissenbaum, 1996, p. 332). While algorithms themselves merely reflect sampling errors in data sets or the social biases of the algorithm operators, automated decision-making by algorithms systematizes these biases. Given that algorithms can be used universally to make decisions at a speed and scale unparalleled by human decision-making, and that algorithmic decisions can form feedback loops that continually reinforce biases in existing data (FRA, 2022), even the slightest biases in algorithms can magnify the scope and intensity of discrimination.

Algorithms are not only a mirror of society but also a magnifying glass.
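
To illustrate the feedback loop described above, here is a toy Python simulation with entirely invented numbers: two districts have identical underlying rates, but a small initial skew in the recorded data compounds once an algorithm allocates attention in proportion to past records.

```python
# Toy feedback-loop simulation (all numbers invented). Two districts have the
# same true rate of incidents, but District B starts with slightly more
# *recorded* incidents. Each year, patrols are allocated in proportion to the
# records, and more patrols mean more incidents get recorded.
recorded = {"District A": 100.0, "District B": 110.0}   # 10% initial skew

for year in range(1, 6):
    total = sum(recorded.values())
    shares = {d: n / total for d, n in recorded.items()}   # patrol allocation
    for district in recorded:
        # Records grow with patrol presence, so the skew compounds even though
        # the underlying rates are equal by construction.
        recorded[district] *= 1 + shares[district]
    gap = recorded["District B"] / recorded["District A"]
    print(f"Year {year}: District B recorded at {gap:.2f}x the level of District A")
```

In this toy run the recorded gap widens every year even though the districts never differ in their underlying rate, which is the sense in which the algorithm acts as a magnifying glass rather than a mirror.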

Lack of Accountability

A concern with algorithmic bias is that, as the COMPAS case illustrates, it is difficult for individuals to know why they were treated differently. Professor Frank Pasquale of Cornell Law School has referred to this algorithmic opacity as a “black box society” (Pasquale, 2015). Apple’s credit card was found to offer different credit limits to men and women, yet customers could not get an explanation of how each limit was determined (Robertson, 2019).

A Tool to Deceive and Slaughter by Caleb Larsen (2009)

However, there is a big difference between not knowing why discrimination occurs and not being able to explain why it occurs. In a society committed to equal rights, discrimination is supposed to be explained, accountability assigned, and reforms made. Accountability is defined as “a relationship between an actor and a forum in which the actor is obligated to explain and justify his or her behavior, the forum can pose questions and make judgments, and the actor can face consequences” (Lewis et al., 2014, p. 401). When there are explanations, we can establish accountability and provide remedies.

How Should We Respond?

As mentioned earlier, current laws prohibit discrimination in pricing or employment based on protected categories, but the black-box nature of algorithmic decision-making makes it impossible to discern whether a given treatment is based on a protected category or on some other characteristic. We cannot fix the system even when we feel discriminated against. When systematized bias has arisen in human-made institutions, we have combated it through reform and civic education. Therefore, making algorithmic decisions explainable and accountable is a fundamental solution.

EU’s right to explanation

The right to explanation is an individual’s claim against decision-makers in hierarchical and involuntary institutions. Decision-makers are required to provide individuals with a rule-based explanation, since meaningful self-defense relies on understanding the rules behind the decision (Vredenburgh, 2022). In fact, this is not a particularly new type of right. The U.S. Equal Credit Opportunity Act already requires creditors to notify applicants who are denied credit of the specific reasons; a statement that the applicant has an insufficient credit score is not enough (Consumer Financial Protection Bureau, n.d.). To combat discrimination, the same right is needed for algorithmic decisions, where the information asymmetry is even greater.

Europe’s General Data Protection Regulation (GDPR), which came into force in 2018, has been interpreted to provide a right to meaningful information about the logic involved in automated decision-making that significantly affects users (Selbst & Powles, 2017). Australia and Singapore have also emphasized explainable and accountable Artificial Intelligence (AI), including algorithms (Australian Human Rights Commission, 2020; Personal Data Protection Commission, 2020). Among these efforts, two provisions of UNESCO’s (2021, p. 22) Recommendation on the Ethics of Artificial Intelligence are especially important for addressing algorithmic bias:

  1. Member States should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the lifecycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities.
  2. The transparency and explainability of AI systems are often essential preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles.

However, no country seems to have legislated the right to explanation. Is this due to technical difficulties or to corporate influence? The prevailing argument is that such regulation could stifle the AI industry and erode its competitiveness in the global market (Feathers, 2022). Sustained civic engagement is necessary, along with concrete discussion of what should be explained, how, and to whom; the grounds for disclosure or non-disclosure; and the form regulation should take.

Who Decides the Future

The advent of digital platforms for mortgage applications has made it possible for people who don’t fit the stereotype of homeownership (white, married, and heterosexual) to get mortgages at lower fees without going through mortgage brokers (Miller, 2020). Additionally, considering the opacity of human decision-making, algorithms can be used to detect human biases, and once a bias is identified, it can be easier to eliminate from an algorithm than from a person (Mullainathan, 2019). This, therefore, is not an argument against using algorithms for decision-making.

Technology does not determine society; rather, technology and society interact within a complex social sphere (Murphie & Potts, 2002). While algorithms were developed to reduce human effort and increase convenience, they have had an unintended consequence: bias. Whether algorithms systematize social biases or are put to good use depends on our ability to recognize and mitigate the risks of algorithmic bias. Technology is not always on the bright side (Karpf, 2018). Regulation is necessary to keep algorithms operating fairly, and for that, we need to stay awake.

References

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Angwin, J., Mattu, S., & Larson, J. (2015, September 1). The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price from Princeton Review. ProPublica. https://www.propublica.org/article/asians-nearly-twice-as-likely-to-get-higher-price-from-princeton-review

Australian Human Rights Commission. (2020). Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias. https://humanrights.gov.au/our-work/technology-and-human-rights/publications/technical-paper-addressing-algorithmic-bias

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. The American Economic Review, 94(4), 991–1013. https://doi.org/10.1257/0002828042002561

Breland, A. (2017, December 4). How white engineers built racist code – and why it’s dangerous for black people. The Guardian. https://www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-white-coders-black-people-police

Consumer Financial Protection Bureau. (n.d.). Comment for 1002.9 – Notifications. https://www.consumerfinance.gov/rules-policy/regulations/1002/interp-9/

Dastin, J. (2018, October 11). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG/

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

Dwoskin, E. (2018). Men (only) at work: Job ads for construction workers and truck drivers on Facebook discriminated on gender, ACLU alleges. The Washington Post.

Feathers, T. (2022). Why It’s so Hard to Regulate Algorithms. The Markup. https://themarkup.org/news/2022/01/04/why-its-so-hard-to-regulate-algorithms

Flew, T. (2021). Regulating platforms. Polity Press.

FRA. (2022). Bias in Algorithms – Artificial Intelligence and Discrimination. European Union Agency for Fundamental Rights. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561

House of Representatives. (2020). Inclusion in Tech: How Diversity Benefits All Americans: Hearing before the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce. House of Representatives No. 116-13.

Kanev, K. (n.d.). Responsible AI: How to make your enterprise ethical, so that your AI is too. DXC Technology. https://dxc.com/in/en/insights/perspectives/paper/responsible-ai

Karpf, D. (2018). 25 Years of WIRED Predictions: Why the Future Never Arrives. WIRED. https://www.wired.com/story/wired25-david-karpf-issues-tech-predictions/

Lewis, J. M., O’Flynn, J., & Sullivan, H. (2014). Accountability: To Whom, in Relation to What, and Why? Australian Journal of Public Administration, 73(4), 401–407. https://doi.org/10.1111/1467-8500.12104

Miller, J. (2020). Is an Algorithm Less Racist Than a Loan Officer? The New York Times.

Mullainathan, S. (2019). Biased Algorithms Are Easier to Fix Than Biased People. The New York Times.

Murphie, A., & Potts, J. (2002). Culture and technology (1st ed.). Palgrave Macmillan. https://doi.org/10.1007/978-1-137-08938-0

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Personal Data Protection Commission. (2020). Model Artificial Intelligence Governance Framework. https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf

Rainie, L. & Anderson, J. (2017, February 8). Code-Dependent: Pros and Cons of the Algorithm Age. Pew Research Center. https://www.pewresearch.org/internet/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/

Robertson, M. (2019, November 12). Apple’s ‘sexist’ credit card investigated by US regulator. BBC. https://www.bbc.com/news/business-50365609

Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1093/idpl/ipx022

Thibodeau, P. (2011, October 18). Machines make better decisions than humans, says Gartner. Computerworld. https://www.computerworld.com/article/1477636/machines-make-better-decisions-than-humans-says-gartner.html

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137

Vredenburgh, K. (2022). The Right to Explanation. The Journal of Political Philosophy, 30(2), 209–229. https://doi.org/10.1111/jopp.12262
