
Introduction
Artificial intelligence has developed rapidly in recent years and has penetrated people’s daily lives. Artificial intelligence algorithms have affected life more than most people imagine. But is AI necessarily fair?
As AI technology matures and is widely deployed, ethical issues such as bias have emerged. The world’s leading AI companies are commonly troubled by the issue, and AI is a high-risk zone for bias. AI, which seems absolutely objective, neutral, and without emotion, holds more persistent biases than humans on some social issues, and the scope of its discrimination is even broader. It is difficult to define a fair AI. The existence of AI relies on huge databases, almost all of which are grabbed, borrowed, or stolen from the vast amounts of data that people send out every day, intentionally or unintentionally. From this perspective, AI is “amplifying” biases that already exist. This article examines how AI bias forms, how it affects humans, and how it can be addressed, combining the course material with case studies.
The Roots of AI Bias and Potential Discrimination
In AI systems, bias refers to any type of systematic distortion in the analysis and modelling process. Bias can come from model mechanisms or be captured in the data (Atrium.ai, 2022). Why does AI, which is supposed to be neutral, produce bias? Mainstream AI language models are trained by accumulating large amounts of corpus data and then using machine learning to “understand” incoming language. These models are widely used in areas such as translation, auto-response, content generation, and text filtering. During training, flaws in the algorithm’s mechanism and in the data set result in an inherently biased AI algorithm. For example, given the input “doctor” or “lawyer”, AI will automatically associate it with “male”, and information about “credit” or “real estate” is recommended more often to white users.
AI algorithms do not make judgments out of thin air. Artificial intelligence’s “association” mechanism categorizes words and pairs them together to make a “most likely” prediction. It has to process a large number of training examples before it can match keywords to different groups and make judgments automatically, and only then can it accurately label something. Yet when the training data are limited or insufficient, the result is likely to be information bias, which can lead to unintentional discrimination. From the creation of the model to the formation of the data set, from the analysis of the data to the evaluation of the results, at each step before a judgment is made, AI can become biased in various ways.
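To make this association mechanism concrete, the following minimal Python sketch uses made-up four-dimensional word vectors (real language models learn vectors with hundreds of dimensions from corpus co-occurrence). All numbers are hypothetical and chosen only to show how similarity between learned vectors encodes an association such as “doctor” with “male”.

```python
import numpy as np

# Toy word vectors, invented for illustration only; a real model would
# learn these from co-occurrence statistics in a large corpus.
embeddings = {
    "doctor": np.array([0.9, 0.1, 0.7, 0.2]),
    "nurse":  np.array([0.2, 0.8, 0.6, 0.3]),
    "man":    np.array([0.95, 0.05, 0.3, 0.1]),
    "woman":  np.array([0.1, 0.9, 0.3, 0.1]),
}

def cosine(a, b):
    """Cosine similarity, the standard measure of association between vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for occupation in ("doctor", "nurse"):
    print(occupation,
          "man:", round(cosine(embeddings[occupation], embeddings["man"]), 2),
          "woman:", round(cosine(embeddings[occupation], embeddings["woman"]), 2))

# If the corpus pairs "doctor" with male contexts more often, the learned
# vectors end up closer together, and the model "associates" doctor with male.
```

Because such vectors are nothing more than compressed corpus statistics, any skew in who gets written about as a “doctor” flows straight into the similarity scores the model relies on.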

According to Crawford (2021), artificial intelligence is neither artificial nor intelligent. This definition contradicts the mainstream view of AI, but Crawford argues that AI is both embodied and material: without extensive, computationally intensive training on large data sets or predefined rules and rewards, AI systems are not autonomous, rational, or capable of discerning anything. AI is, rather, an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing.
Computational rationality and concrete work are intimately connected, and AI systems both reflect and produce social relations and understandings of the world (Crawford, 2021). What current AI needs, in other words, is large amounts of training data, and although AI itself does not know what bias and discrimination are, the researchers behind it carry such attitudes, so bias enters through the choice of training data. Typically, creating an AI algorithm involves many engineers, and these engineers are predominantly high-income white men; their perceptions are shaped by their class and identity, so the AI they train to understand the world inherits their limitations. To some extent, the current bias and discrimination in AI is a projection of human consciousness and class status. An AI algorithm developed by elite white male engineers is more like a “white AI” or an “elite AI”. Analogously, it is conceivable that an AI algorithm developed predominantly by Black or Asian engineers would likewise be more favourable to those groups.
How AI Bias Affects Humans: A Case Study of Amazon’s AI Resume Selection System
AI has evolved into a player in shaping knowledge, communication, and power. The biases implicit in AI affect the human social order and, as applications gradually expand, permeate all aspects of life. For example, AI indirectly widens the existing gender gap. According to UNESCO, only 22% of all AI practitioners are women (Marchant, 2021). Gender bias and stereotypes are reflected in AI algorithms because of women’s lack of representation and visibility in the industry. Virtual assistants such as Siri, Alexa, and Cortana default to female voices and personas. The servility and conformity implied by such gender settings is one example of how AI reinforces and propagates gender bias in society. Discriminatory patterns of the past are perpetuated when biased training data leads to biased models.

Research has shown that job search platforms more frequently offer higher positions to less qualified men than to women. Such algorithms directly affect women’s employment rates and inadvertently exacerbate sexism. AI tends to label specific groups of people and therefore cannot guarantee objectivity. Amazon’s recruitment system offers a telling example.
In 2014, Amazon began developing artificial intelligence for resume screening, hoping to quickly filter the most promising candidates from a large pool of resumes. However, just a year later, the technology was found to exhibit strong gender bias in its screening: the system itself had concluded that male candidates were preferable. Even when a candidate’s resume did not indicate gender, the AI looked for textual clues, such as “captain of the women’s chess club” or “graduated from a women’s university”. According to people familiar with the project, the AI was trained on the resumes the company had received over the previous 10 years. In the heavily gender-imbalanced science and technology sector, especially in technical roles, long-standing career stereotypes and a “men’s club” culture had led to hiring far more men than women (Dastin, 2018). In 2017, Amazon abandoned the model. Amazon’s failed AI hiring system reflects an important source of text-based AI bias: existing databases are themselves biased. Whether it is an established gender imbalance in the industry or a widespread social perception of gender, these biases are reflected in the corpus directly or indirectly. Racial and cultural biases permeate the process of AI learning in the same way.
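The mechanism Dastin describes, a classifier learning gendered proxy words from historically skewed hiring data, can be illustrated with a deliberately tiny sketch. Everything below is fabricated (six toy resume snippets with biased hire/reject labels); it is not Amazon’s system, but it shows how an ordinary text classifier picks up a proxy for gender without ever seeing a gender field.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated resume snippets with historically biased labels
# (1 = hired, 0 = rejected), mimicking years of male-dominated hiring.
resumes = [
    "software engineer chess club captain",           # hired
    "software engineer robotics team lead",           # hired
    "software engineer women's chess club captain",   # rejected
    "software engineer women's university graduate",  # rejected
    "data scientist hackathon winner",                # hired
    "data scientist women's coding society founder",  # rejected
]
labels = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer().fit(resumes)
model = LogisticRegression().fit(vectorizer.transform(resumes), labels)

# Inspect the learned weights: "women", which occurs in every rejected
# snippet, gets the most negative coefficient, a learned gender proxy,
# even though gender itself never appears as a feature.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0].round(2)))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])  # most penalized tokens
```

Dropping an explicit gender column is therefore not enough; as long as the labels encode a biased history, the model finds correlated stand-ins.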

For another example of AI bias, Noble (2018) points out that search-engine results for terms such as “black girls” exhibit a historical tendency to misrepresent black women. These distortions, and the use of big data to maintain and exacerbate unequal social relations, play a powerful role in perpetuating racial and gender oppression. Under the guise of technological innovation, AI systems and search engines reinforce racial bias by consistently normalizing the ideology that black people do not deserve human rights and dignity.
Thus, if an AI model consistently produces predictions that align with a social bias, and the rate of such predictions is stable rather than random, this indicates that the model is systematically biased.
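In practice this is checked with a group fairness metric. The sketch below assumes synthetic predictions for two hypothetical groups and computes demographic parity as a disparate-impact ratio, bootstrapping it to confirm the disparity is stable rather than noise; the 0.8 threshold follows the “80% rule” commonly used in disparate-impact analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs: 1 = favourable decision, for groups A and B.
group = np.array(["A"] * 500 + ["B"] * 500)
preds = np.concatenate([rng.binomial(1, 0.60, 500),   # group A favoured 60%
                        rng.binomial(1, 0.35, 500)])  # group B favoured 35%

def disparate_impact(preds, group):
    """Ratio of favourable-outcome rates; values below ~0.8 flag bias."""
    return preds[group == "B"].mean() / preds[group == "A"].mean()

# Stability check: a ratio that stays low across bootstrap resamples points
# to systematic bias rather than random variation.
ratios = []
for _ in range(200):
    idx = rng.integers(0, len(preds), len(preds))  # resample with replacement
    ratios.append(disparate_impact(preds[idx], group[idx]))
print(f"disparate impact: {np.mean(ratios):.2f} +/- {np.std(ratios):.2f}")
```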
How can the problem of AI bias be solved?
As artificial intelligence continues to evolve, technology companies, governments, businesses, and activist groups are recognizing the problem of AI bias and beginning to confront it. Many companies have developed tools to assess the fairness of AI and are working to mitigate bias. However, solving the problem is not as simple as one might think: because AI bias is rooted in deep-seated social biases, there is no immediate fix. Still, several approaches help.
Developing ethical AI is one way to address AI bias. UNESCO has initiated a global legal instrument on the ethics of AI in an attempt to regulate AI bias by promoting ethical AI, and has called on everyone in the world to participate in drafting it. Australia also launched an ethical framework for AI in 2021, which it says will “guide businesses, governments and other organizations in the responsible design, development and use of AI” (Department of Industry, 2022).
Both government and business have a role to play in developing and providing ethical and inclusive AI. Intervening in how AI remaps and acts on the world is an effective means of reducing bias. A learned bias is, mechanically, just another pattern that a machine learning model uses to inform its predictions; what sets biased patterns apart is their ethical weight, because discriminatory patterns feed into ethically significant decisions. Not all patterns carry such weight, and it falls to researchers, in how they train their models, to determine which patterns are biased and which are not.
Since AI systems cannot learn and develop without humans, since AI technology is intended for human use, and since human rights must be guaranteed, the development of AI needs to be human-centred. Activist groups also have a role in addressing AI bias. Building more representative, broad-based, high-quality databases is an effective means of reducing bias, and the diversity of the data depends on the diversity of the researchers and participants behind it. Activist groups can call for more people from diverse backgrounds to enter the AI industry and work on AI system development and framework design.
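At the data level, one well-known preprocessing idea is “reweighing”: weighting training samples so that group membership and outcome become statistically independent. The sketch below uses synthetic groups and labels to show the idea; it is one illustrative technique among many, not a complete answer to representativeness.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data in which group B is under-represented and
# receives the favourable label less often.
group = np.array(["A"] * 800 + ["B"] * 200)
label = np.concatenate([rng.binomial(1, 0.6, 800),   # group A: 60% positive
                        rng.binomial(1, 0.3, 200)])  # group B: 30% positive

weights = np.empty(len(group))
for g in np.unique(group):
    for y in np.unique(label):
        cell = (group == g) & (label == y)
        # weight = frequency expected under independence / observed frequency
        weights[cell] = ((group == g).mean() * (label == y).mean()) / cell.mean()

# Passing these as sample weights during training, e.g.
# model.fit(X, label, sample_weight=weights) in scikit-learn,
# counteracts the imbalance without altering the data itself.
print({g: round(float(weights[group == g].mean()), 2) for g in np.unique(group)})
```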
Conclusion
The learnability and immense computational power of AI have allowed it to be used in a wide range of applications and, as a result, to permeate all aspects of life; now more than ever, it is important to stay alert to AI’s potential for discrimination. People cannot ignore the fact that AI is by nature a tool, and tools need to be designed to benefit humans, not harm them. In pursuit of fairer AI, developers can try to eliminate bias through a diverse set of testers during a model’s development phase. At the same time, AI ethical guidelines need to be codified, and the development of ethical AI needs to be taken more seriously. AI has the potential to improve almost any industry, from healthcare to education, as long as effective practices are followed. With the right ethical framework in place, it is easy to see how AI can make the world a better place.
Reference List
Atrium.ai. (2022). Ethical AI: Real-world examples of bias and how to combat it. Atrium. Retrieved from https://atrium.ai/resources/ethical-ai-real-world-examples-of-bias-and-how-to-combat-it/
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Department of Industry. (2022). Artificial Intelligence. Department of Industry, Science and Resources. Retrieved from https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence
Marchant, N. (2021). Only 22% women in AI jobs – the gender gap in science and technology, in numbers. ThePrint. Retrieved from https://theprint.in/features/only-22-women-in-ai-jobs-the-gender-gap-in-science-and-technology-in-numbers/697917/
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.
UNESCO. (2020). Artificial Intelligence: Examples of ethical dilemmas. Retrieved from https://en.unesco.org/artificial-intelligence/ethics/cases
Wood, M. J. (2020). Addressing bias in AI. American College of Radiology. Retrieved from https://www.acr.org/Practice-Management-Quality-Informatics/ACR-Bulletin/Articles/September-2020/Addressing-Bias-in-AI