Yuhan Yao
SID: 520646276
The controversy caused by ChatGPT
ChatGPT has been one of the most visible achievements in artificial intelligence in recent years, with professionals and casual users alike enthusiastically using and discussing the new chatbot. The application appears to filter offensive or harmful content, yet this highly sought-after new technology has quickly been questioned as a new system carrying old biases, meaning it inevitably reflects the influence of its developers and training data.
First off, some users have discovered that ChatGPT does not give a comprehensive and objective picture of the contributions of marginalized people, especially women. For example, when asked whether the famous Black blues singer Bessie Smith influenced Mahalia Jackson, ChatGPT could neither answer accurately nor provide any background on Smith as evidence of that influence, a failure that has been read as a form of gender and racial discrimination against Black women. (Nkonde, 2023)
Secondly, ChatGPT can inadvertently convey gender stereotypes, as when users asked it to write song lyrics and it produced lines such as “If you see a woman in a lab coat, / She’s probably just there to clean the floor.” (Alba, 2022)

To understand where these discriminatory outputs come from, two key aspects need to be considered. The first is the familiar pattern of “bias in, bias out”. ChatGPT generates new language by learning a vast number of associations between words and phrases, but that language “itself can be biased in racist, sexist and other ways.” (Alba, 2022)
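To make the “bias in, bias out” mechanism concrete, the toy sketch below measures how strongly a word vector associates with “he” versus “she”. The three-dimensional vectors are invented for illustration only; real models learn far higher-dimensional embeddings from large text corpora, which is exactly where skewed associations enter.

```python
# A minimal sketch of how learned word associations can encode bias.
# The vectors below are hypothetical toy embeddings, not taken from
# any real model.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" (illustrative values, skewed on purpose).
vectors = {
    "he":        np.array([0.9, 0.1, 0.2]),
    "she":       np.array([0.1, 0.9, 0.2]),
    "scientist": np.array([0.8, 0.2, 0.5]),  # placed closer to "he"
}

print(cosine(vectors["scientist"], vectors["he"]))   # ~0.94: strong association
print(cosine(vectors["scientist"], vectors["she"]))  # ~0.40: weak association
```

A model that generates text from such associations will, without any explicit instruction, pair “scientist” with men more readily than with women.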
Second, the system’s creators are overwhelmingly male. According to the ChatGPT Team Background Research Report, there are only nine women on the team, accounting for roughly 10% of the total. Moreover, nearly 90% of the team are technical staff, which means that men largely control the direction of the technology. In fact, algorithmic discrimination of the kind seen in ChatGPT is common, and discrimination against women takes many forms, including demeaning portrayals, stereotyping, weaker recognition of women, and insufficient representation of women.
With the rapid development of artificial intelligence, algorithms have penetrated the workings of society and now influence every aspect of our lives. However, algorithms are not completely objective and neutral technologies, and the biases that emerge in their operation have raised public doubts about their fairness and transparency. In particular, AI has produced widespread sexism against women, which is essentially a mapping of social biases onto technology and so deepens the damage to social justice. In what follows, this article shows various manifestations of the gender discrimination women face in AI scenarios, explores the reasons for its growth, and attempts to propose strategies for effective governance.
What is “algorithmic gender discrimination”?
“Algorithmic bias” and “algorithmic discrimination” are core ethical issues, and it is necessary to understand them separately. Algorithmic bias, as the name implies, is prejudice inside an algorithm: the algorithm absorbs bias from its data and its developers and releases it back into society as part of its outputs. Algorithmic discrimination, on the other hand, refers to discrimination that arises when humans divide target groups on the basis of algorithmic decisions and treat them differently.
The most significant difference between the two is that bias is a state of mind, whereas discrimination is an action with consequences, built on that bias. This blog focuses on the plight of women in the digital-economy era, and therefore limits “algorithmic sexism” to the phenomenon of discrimination against women.
Algorithmic sexism may not be directly perceptible to the average Internet user, but it lurks in many areas of everyday life. The setting most widely noticed by scholars is the labor market, where the most notorious case comes from Amazon.
According to a Fortune report on a Reuters investigation (Meyer, 2018), the tech giant once built an automated recruitment system to screen résumés and select employees, but words related to “women” (such as “women’s”) caused the algorithm to rank applicants lower. The reason was that the company’s long-standing gender bias in hiring was embedded in the training data set, implicitly learned by the computer, and became part of the algorithm. Engineers later tried to fix the problem, but they could not guarantee that the bias had been removed completely, and the algorithm could still discriminate in undetectable ways, so the project was eventually terminated.
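A minimal sketch of this failure mode, using invented data rather than anything from Amazon’s actual system: a toy screening model is trained on historical hiring decisions that penalized a gendered résumé feature, and it dutifully learns a negative weight for that feature.

```python
# Hypothetical reconstruction of "bias in, bias out" in hiring.
# Data and features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
experience = rng.normal(5, 2, n)        # years of experience
womens_term = rng.integers(0, 2, n)     # résumé mentions a "women's ..." term

# Historical decisions favoured equally qualified men: the gendered
# term carries an artificial penalty in the past labels.
hired = (experience + rng.normal(0, 1, n) - 1.5 * womens_term) > 5

X = np.column_stack([experience, womens_term])
model = LogisticRegression().fit(X, hired)

# The model learns a strongly negative weight for the gendered feature,
# reproducing the historical bias without anyone coding it in.
print(model.coef_)
```

Nothing in the code mentions gender explicitly; the discrimination arrives entirely through the labels, which is why it is so hard to detect and remove.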
Employment algorithms can also discriminate in subtler ways. When Datta and colleagues (2015) studied the relationship between job seekers’ gender and the job advertisements pushed to them, they used automated testing tools to show that male users received recommendations for jobs paying “above $200,000 per year” six times more often than female users. Both cases illustrate key features of algorithmic discrimination: it originates in the input data and is difficult to detect and eliminate.
In addition, mainstream search engines in various countries have been criticized for algorithmic discrimination. For instance, when searching for words such as “engineer,” “CEO,” or “scientist” (refer to Image 2 below) on the major Chinese search engines, represented by Baidu, most of the results are images of men. (Zhang, 2021)
Similarly, the suggestions that automatically appear after keywords in Google search often contain a range of sexist ideas, such as that women should not drive, or that women should stay at home or in the kitchen. That is not all. Even more outrageously, Noble (2018) found that searches for “black girls” were highly likely to return pornographic and other degrading content. Search engines that aggregate huge amounts of information are often subconsciously perceived by users as neutral, trustworthy actors, but these companies prioritize profit and deliberately order results according to users’ propensity to click. In other words, the stereotyping or degradation of women in search engines is a demonstration of women’s status in the real world.

Besides, there is a type of gender discrimination that is easily overlooked: algorithms often recognize women less well than men. According to a New York Times report, both Google’s and Amazon’s artificial intelligence services failed to recognize the word “hers” as a pronoun while correctly identifying “his”. (Metz, 2019) The journalist observes that algorithms “often don’t give women enough trust” when analyzing text. Similar differences appear in voice recognition: Criado-Perez (2019) reports that voice-recognition software is far less accurate for female voices than for male voices, putting female drivers at risk when their cars’ voice-command systems fail to understand them. These algorithmic gender biases may not be malicious on the part of technologists and businesses, but they genuinely affect women’s lives.
Why does “algorithmic gender discrimination” occur?
To address the problem of algorithmic gender discrimination described above, it is necessary to analyze its causes, chiefly: the gender stereotypes or social biases of algorithm designers, inherent flaws in algorithms, loopholes in the algorithm design process, and information asymmetries such as the algorithmic “black box”.
Firstly, bias and discrimination may be the result of machine learning, and it is difficult to keep bias out of any stage of algorithm design, from the initial designers’ perceptions to the technological evolution that follows. LinkedIn’s analysis (2018) found that women represent only 22% of AI professionals worldwide, and the male technicians who dominate the field may inadvertently carry their biases and values into the design and training of algorithms, making AI more accustomed to seeing things from a male perspective, so that the Internet largely serves men.
There is a persistent symbolic link between masculinity and technology, and a sharp gender division of labor in technical practice. (Lohan & Faulkner, 2004) Seen from another angle, if algorithms tend to solidify or amplify discrimination, then bias and discrimination will grow throughout the algorithm’s life cycle.
Secondly, algorithms rely on training data, and if the data lack representativeness and validity, the algorithm will inevitably develop deviations that affect the accuracy of its overall decision-making and prediction. It is also undeniable that many algorithms are fundamentally probabilistic, so some degree of algorithmic bias is always possible.
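A minimal sketch of this data problem, on invented data: a classifier trained on a sample dominated by one group performs markedly worse on the under-sampled group, echoing the recognition gaps described earlier.

```python
# Illustration of unrepresentative training data: group B is heavily
# under-sampled, so the model's decision boundary fits group A.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Two-feature samples; each group's true class boundary differs."""
    X = rng.normal(shift, 1, (n, 2))
    y = (X.sum(axis=1) > 2 * shift).astype(int)
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, 0.0)
Xb, yb = make_group(50, 2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized test sets for each group.
Xa_t, ya_t = make_group(500, 0.0)
Xb_t, yb_t = make_group(500, 2.0)
print("group A accuracy:", model.score(Xa_t, ya_t))  # high
print("group B accuracy:", model.score(Xb_t, yb_t))  # near chance
```

The model is not “malicious”; it simply never saw enough of group B to learn its pattern, which is the statistical shape of the voice-recognition failures Criado-Perez describes.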
Thirdly, the algorithmic “black box” creates opportunities for human manipulation. The “black box” means that it is difficult for the general public to know and understand the specifics of an algorithm: users can grasp neither its purpose nor its operation, the whole process runs as if inside a black box, and the problem of opacity arises. (Kenton, 2022) With users excluded, algorithm designers armed with big data gain the power to treat users differently without their knowing they are being discriminated against. This information imbalance allows algorithm owners to embed special code in their algorithms to achieve explicit goals, exacerbating already existing gender discrimination.

(Image source: Acuity Knowledge Partners)
How to eliminate “algorithmic gender discrimination”?
In order to gradually eliminate algorithmic sexism, all parties across society should take proactive measures. First and foremost, women’s voice in the artificial intelligence industry must be increased. Women are influenced by narratives circulating in society, such as the idea that technology jobs are hard and forbidding, which quietly limits their first steps into the field. Governments therefore need to increase gender equality in technical education and promote more approachable science and technology education, to build women’s interest in AI and improve their training in related fields.
Women technicians who do enter the artificial intelligence field also face many obstacles: they often struggle to gain the same access as men to research and development positions, and they face further difficulties in promotion and pay. AI companies therefore need to genuinely increase the proportion of female engineers and ensure that female employees are treated equally.
The second point is to strengthen practitioners’ awareness of gender equality and professional ethics. A recent survey found that 58% of respondents working in AI-related fields were unaware that gender discrimination exists in algorithms; more than half lack even basic knowledge in this area. (Zhang, 2021)
Thirdly, companies need to establish review mechanisms to control the risk of technical vulnerabilities in the algorithm design process. Beyond this, greater transparency should be the goal that spurs technology companies on: only when nothing is hidden and transparency is sufficient can outsiders inspect and supervise a company to prevent improper behavior. (Pasquale, 2015) Finally, data should be collected from all angles during the design stage, to represent women’s social existence as comprehensively as possible and avoid the omissions that trigger algorithmic bias. A sketch of what one such review check might look like follows.
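As one concrete, hypothetical form such a review could take, the sketch below computes a simple demographic-parity gap: the difference in selection rates between groups. The data, names, and the 0.1 flagging threshold are illustrative assumptions, not an established standard.

```python
# A toy fairness audit: compare a model's selection rates across groups.
import numpy as np

def selection_rate(predictions, group_mask):
    """Share of a group that the model selects (predicts 1)."""
    return predictions[group_mask].mean()

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])        # toy model output
is_female   = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1], dtype=bool)

gap = selection_rate(predictions, ~is_female) - selection_rate(predictions, is_female)
print(f"selection-rate gap (male - female): {gap:.2f}")

# A review mechanism might flag the model for investigation if the
# gap exceeds an agreed tolerance, e.g. 0.1.
if gap > 0.1:
    print("flag: model selects men at a markedly higher rate")
```

Checks like this are crude, but running them routinely is exactly the kind of transparency that lets outsiders verify a company’s claims rather than trusting the black box.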
Admittedly, these proposals can only address the technical side of the problem. The deeper social problems will still take a long time to be perceived, identified, voiced and acted upon. As products of human intelligence, algorithms faithfully reflect the biases and injustices of today’s society. At the same time, algorithms continually trained on data laced with human prejudice amplify those biases, ultimately producing algorithmic discrimination and harming marginalized groups.
Although this article mainly discusses the problems women face, we in fact need to pay attention to other neglected minority groups, who find it hard to speak up for themselves in the field of artificial intelligence and may even, without noticing, grow accustomed to such discrimination. In short, while more and more people are recognizing the powerful capabilities of artificial intelligence and its positive significance for humanity, the gender bias it carries must still be taken seriously. The possibility of prejudice in these algorithms can only be reduced by regularly evaluating, examining, challenging, discussing, and holding algorithms accountable.
References:
Alba, D. (2022, December 8). ChatGPT, OpenAI’s chatbot, is spitting out biased, sexist results. Bloomberg. Retrieved April 10, 2023, from https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results
Criado-Perez, C. (2019). Invisible women: Exposing data bias in a world designed for men. London: Chatto & Windus.
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. arXiv.
Kenton, W. (2022, March 6). What is a black box model? Definition, uses, and examples. Investopedia. Retrieved April 10, 2023, from https://www.investopedia.com/terms/b/blackbox.asp
LinkedIn. (2018, December 18). Growing but not gaining: Are AI skills holding women back in the workplace? Retrieved April 10, 2023, from https://economicgraph.linkedin.com/blog/growing-but-not-gaining-are-ai-skills-holding-women-back-in-the-workplace
Lohan, M., & Faulkner, W. (2004). Masculinities and technologies: Some introductory remarks. Men and Masculinities, 6(4), 319–329. https://doi.org/10.1177/1097184X03260956
Metz, C. (2019, November 11). We teach A.I. systems everything, including our biases. The New York Times. Retrieved April 10, 2023, from https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html
Meyer, D. (2018, October 10). Amazon killed an AI recruitment system because it couldn’t stop the tool from discriminating against women. Fortune. Retrieved April 10, 2023, from https://fortune.com/2018/10/10/amazon-ai-recruitment-bias-women-sexist/
Nkonde, M. (2023, February 27). ChatGPT: New AI system, old bias? Mashable. Retrieved April 10, 2023, from https://mashable.com/article/chatgpt-ai-racism-bias
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
Zhang, P. (2021, September 30). The ‘CEO’ is a man: How Chinese artificial intelligence perpetuates gender biases. The Star. Retrieved April 10, 2023, from https://www.thestar.com.my/tech/tech-news/2021/09/30/the-ceo-is-a-man-how-chinese-artificial-intelligence-perpetuates-gender-biases