Biases within Artificial Intelligence

Introduction to AI, Automation, Algorithms & Datafication

Defined academically and technically, artificial intelligence (AI) is the study of understanding and building intelligent entities based on rational action (Russell & Norvig, 2020). Through the use of data analytics, algorithms can produce the “best” possible outcomes, and their use has been transforming industries and society as a whole. This process is often referred to as datafication: the collection of online data that is converted into quantified form for predictive analysis of human behaviour (Flew, 2021).

The use of AI and algorithms is particularly prominent within organizations, especially in the recruitment process, where they can improve efficiency and identify suitable candidates. In 2018, however, Amazon’s resume-screening recruitment engine was found to be biased against female candidates: its computer model had been trained to observe patterns in job applicants’ resumes submitted to the company over the course of ten years, most of which came from men (Dastin, 2018).

As a result, it could not rate candidates in a gender-neutral way. This case study of gender bias in Amazon’s recruiting tool highlights the ethical and legal challenges that arise from algorithmic decision-making, including issues of transparency, fairness, and accountability, and it illustrates the importance of addressing bias in AI systems through effective governance.


The Amazon AI Recruiting Tool

Amazon began developing an artificial intelligence (AI) recruitment program back in 2014 to streamline the process of reviewing job applications, with the aim of making the talent acquisition procedure faster and more precise. The company wanted to automate repetitive sorting work and help the hiring team deal with the large number of applications they received.

Amazon’s AI hiring tool operates by rating candidates from one star (the lowest) to five stars (the highest), much like the product satisfaction rating system on its marketplace (Dastin, 2018). Correspondingly, candidates with high scores were more likely to be considered for the position, whereas those with lower scores were likely to be eliminated from the recruiting process.

The primary function of Amazon’s AI recruitment tool was to extract information from the resumes sent to the company and assign relevance to elements such as skills, experience, and background in a data set. It processed resumes to generate scores, which helped determine whether an applicant was suitable for a particular role.
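Conceptually, this kind of scorer can be sketched as a weighted keyword model that maps a raw relevance total onto the one-to-five-star scale described above. This is a minimal illustration with invented feature weights, not Amazon’s actual (unpublished) model:

```python
# Minimal sketch of a keyword-weighted resume scorer.
# The terms and weights below are hypothetical illustrations.

# Relevance weight assigned to each term found in a resume.
FEATURE_WEIGHTS = {
    "python": 2.0,
    "leadership": 1.5,
    "logistics": 1.0,
    "warehouse": 0.5,
}

def score_resume(text: str) -> int:
    """Sum the weights of known terms, then map to a 1-5 star rating."""
    tokens = text.lower().split()
    raw = sum(FEATURE_WEIGHTS.get(tok, 0.0) for tok in tokens)
    # Clamp the raw total into the 1-5 star range used for ranking.
    return max(1, min(5, 1 + round(raw)))

print(score_resume("Experienced Python developer with leadership background"))
```

Candidates whose resumes accumulate more weighted terms land at the top of the star scale and are forwarded; the rest are filtered out.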

But how could this artificial intelligence tool learn to do all these tasks?

The answer is that Amazon fed the AI system ten years’ worth of resumes from its current well-performing employees. This allowed the system to find similarities between incoming resumes and the data set for each job position. Once the system had processed enough historical data, it could predict whether a job applicant was compatible with the applied position, and it could also anticipate which role, across certain corporate and warehouse jobs, the applicant would be most successful in. Amazon’s AI could then fast-track these applicants to the next stage. The recruitment method Amazon adopted is often known as Automated Applicant Evaluation (AAE) technology and has been adopted by many other firms (Rey, 2022).
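The learning step above is where skewed history becomes a skewed model. The toy sketch below, with invented resumes and a deliberately simple frequency-difference weighting (not Amazon’s method), shows how terms common in a historically male-dominated “hired” pile gain positive weight while terms absent from it are penalized:

```python
# Toy illustration of how training on historically skewed hiring data
# reproduces the skew. Data and weighting scheme are invented.
from collections import Counter

# Historical resumes labeled by past hiring outcome. Because most past
# hires were men, "masculine"-associated verbs dominate the hired pile.
hired = [
    "executed trading strategy captured market share",
    "executed migration led backend team",
    "captured requirements executed rollout",
]
rejected = [
    "captain women's chess club tutoring volunteer",
    "women's engineering society outreach lead",
]

def learn_weights(hired, rejected):
    """Weight each term by how much more often it appears among hires."""
    pos = Counter(w for r in hired for w in r.split())
    neg = Counter(w for r in rejected for w in r.split())
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

weights = learn_weights(hired, rejected)
# Terms frequent in past (mostly male) hires receive positive weight,
# while "women's" is penalized purely because past hires lacked it.
print(weights["executed"], weights["women's"])
```

Nothing in the procedure mentions gender, yet the learned weights encode it, because the historical labels already did.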

Biases in Algorithms

The tech industry is, both stereotypically and in fact, male-dominated. According to Reuters Graphics, there is a significant gender gap at top global tech firms such as Apple, Facebook, Google, and Microsoft, and it is most pronounced among technical staff such as software developers, where men outnumber women by a large margin. At all of these firms, over 77% of employees in technical roles are male (Huang, 2017).


Returning to AI, algorithms, data, and the Amazon case study: because most of the resumes Amazon received were from men, the recruitment algorithm was gradually trained to favour male candidates over female candidates. The large pile of male-dominated resume data misled the algorithm into reasoning that male or “masculine” words should take precedence over any “feminine” terms.

Over time, the system taught itself to demote the word “women’s.” For example, when a female candidate included “women’s surfing club” on her resume, the system gave her a lower score. Graduates of all-women’s colleges were also reportedly downgraded by the system. In contrast, resumes that included words like “executed” or “captured” were likely to receive more stars, as those words appear more often on male resumes (Meyer, 2021).
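The reported effect can be made concrete with a small comparison: two resumes with identical credentials diverge in score solely because one contains the penalized word. The weights here are invented for illustration; the real model’s internals were never published.

```python
# Hypothetical illustration of the reported demotion of "women's".
# All weights are invented; this is not Amazon's actual model.

PENALIZED = {"women's": -2.0}
REWARDED = {"executed": 1.5, "captured": 1.5}

def score(text: str) -> float:
    """Sum reward and penalty weights over the resume's tokens."""
    total = 0.0
    for tok in text.lower().split():
        total += REWARDED.get(tok, 0.0) + PENALIZED.get(tok, 0.0)
    return total

a = "executed product launch captain of surfing club"
b = "executed product launch captain of women's surfing club"
print(score(a), score(b))  # identical credentials, lower score for b
```

The single extra word flips the comparison, which is exactly the kind of opaque, hard-to-audit behaviour discussed in the sections below.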

The gender bias of Amazon’s AI recruitment system raises two questions:

What is the root cause of the faulty and biased algorithm?

And how can algorithms be bias-free when humans are the ones who build them, and when biases already exist within society and within ourselves? In other words, the existing databases provided to algorithms have always had biases embedded within them.

On the surface, the biases in Amazon’s algorithm were likely due to insufficient female representation in the dataset available for machine learning. Looking more deeply, as mentioned above, the people who built the machine were mainly men. If gender bias is embedded in the designers, the machine will likely pick up that pattern and exhibit the same behaviour in its output (Madgavkar, 2021).

The relationship between the algorithm builder and the machine itself is reminiscent of the Clever Hans effect, named after a horse that appeared to perform arithmetic but was in fact reading unintentional cues from its handler. The effect refers to someone or something picking up on non-deliberate cues from its handler and responding with the desired outcome.

The phenomenon made the subject appear to have near-human intelligence, when in truth the subject was simply producing results according to the cues it was given. Similarly, Amazon’s algorithm merely took cues from its builders and the data fed to it; it is in this relationship between builder and machine that bias enters the system. The mechanism underlines that the biases are rooted entirely in humans and the wider social structure, since the machine is neither truly artificial nor possessed of an intelligence of its own (Crawford, 2021).

Bias due to underrepresented data in machine learning can affect both the specific organization and society as a whole. For companies, biased machines can disrupt business operations.


Again, taking Amazon as an example: because of its gender bias, the screening tool could not rate candidates in a gender-neutral way, costing the company access to one of a firm’s most valuable assets, an entire gender category of specialists. Looking at the bigger picture, more companies and organizations are eager to implement AI in their operations; without adequate knowledge, however, more and more biased results will be generated.

Gartner predicted that, through 2022, 85% of AI projects would deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them (Gartner, 2018). This can cause serious problems, especially when data holds such influential power: it can potentially drive opinion and political polarization in society.

Ethical and Legal Challenges of Artificial Intelligence

Several challenges arising from algorithmic bias affect both individuals and the community. These include ethical and legal issues, the transparency of AI systems, algorithmic governance, and the accountability of AI systems.

Amazon’s case study highlights several major ethical and legal matters we need to reflect on when using algorithms, such as the potential increase of social discrimination, the exclusion of social categories, and the production of inaccurate information. Beyond that, there is the question of whether individuals are protected and have the right to challenge an algorithmic decision.


In the case of Amazon’s recruitment tool, the company has yet to provide convincing evidence that its AI produced accurate screening outcomes. Without a valid explanation to the public, Amazon’s practice of screening out applicants based on algorithmic suggestions might incur liability for blind reliance on its AI system (Dattner et al., 2019).

The case also points to the problem of transparency between algorithm builders and the people subject to decisions based on their data. As mentioned in previous paragraphs, biases in AI systems are often perpetuated from existing prejudice, so the biases passed to the AI are implicit and largely subconscious (Vincent, 2018).

They are therefore not visible for correction and are less open to public scrutiny. With the implementation of AI, job applicants, and even the hiring team, may have no explanation of why an applicant was screened out or given a particular score, as the decision-making process rests entirely on trusting the system.

The problems that emerge from AI implementations raise the question of who is accountable for the outcomes when decisions are made by machines. There may be no general answer as to who should take the blame when an AI system causes harm.

At the very least, organizations and AI developers should be responsible for evaluating the system, ensuring both the quality of its results and that its outcomes do not violate ethical or legal standards. To minimize the impact of these ethical concerns, enterprises should set up a governance framework for responsible AI together with internal policies.

In fact, UNESCO has introduced the Recommendation on the Ethics of Artificial Intelligence, an international framework that Member States can voluntarily apply to protect humanity from harmful implementations of AI. It acts as a guideline for tackling challenges such as transparency, accountability, and privacy.

The Recommendation urges regulatory bodies to ban intrusive uses of AI systems on the public, advocates tools that can assess the impact of AI systems on individuals, and emphasizes the importance of environmentally friendly AI (UNESCO, 2022). With an appropriate regulatory approach from governments and organizations, the implementation of AI can surely improve ethical data processing and mitigate biases in AI.


To sum up, we have discussed how the use of artificial intelligence and algorithms is growing and how it has impacted individuals and society. Using Amazon’s gender-biased recruitment AI as a case study, we explored how human prejudice embedded in a dataset caused bias in an AI system. We highlighted the ethical and legal challenges of algorithmic decision-making, together with issues of AI transparency and accountability. Bias in AI can cause significant problems in society, such as perpetuating existing social inequalities and worsening polarisation. It is crucial for enterprises and organizations to address faults within AI to ensure it is developed and used in ways that align with ethical and legal standards. This will require collaboration between policymakers, AI developers, governments, and all other stakeholders to develop and follow standards for AI regulation, ultimately to protect humanity and maximize the benefits of AI to society.

Reference List:

Benjamin, M., Buehler, K., Dooley, R., & Zipparo, P. (2021, August 10). What the draft European Union AI regulations mean for business. McKinsey & Company. Retrieved April 6, 2023, from

Crawford, K. (2021). Atlas of AI (pp. 1–21). Yale University Press. Retrieved April 6, 2023, from

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved April 5, 2023, from

Dattner, B., Chamorro-Premuzic, T., Buchband, R., & Schettler, L. (2019, August 8). The legal and ethical implications of using AI in hiring. Harvard Business Review. Retrieved April 6, 2023, from

Flew, T. (2021). Regulating Platforms. Polity. Retrieved April 5, 2023.

Gartner. (2018, February 13). Gartner says nearly half of CIOs are planning to deploy artificial intelligence. Retrieved April 6, 2023, from

Huang, H. (2017). Dominated by men. Reuters Graphic. Retrieved April 6, 2023, from

Kästner, C. (2022, February 16). Transparency and accountability in ML-enabled systems. Medium. Retrieved April 6, 2023, from

Madgavkar, A. (2021, April 7). A conversation on Artificial Intelligence and gender bias. McKinsey & Company. Retrieved April 6, 2023, from

Meyer, D. (2021, June 8). Amazon killed an AI recruitment system because it couldn’t stop the tool from discriminating against women. Retrieved April 6, 2023, from

Rey, J. D. (2022, November 23). A leaked Amazon memo may help explain why the tech giant is pushing out so many recruiters. Retrieved April 6, 2023, from

Russell, S. J., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. Retrieved April 5, 2023.

UNESCO. (2022, April 8). UNESCO adopts first global standard on the ethics of artificial intelligence. Retrieved April 7, 2023, from

Vincent, J. (2018, October 10). Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. Retrieved April 6, 2023, from
