An Introduction to Gender Bias in Artificial Intelligence: Exploring Gender Bias in Google Image Search

Introduction

With the release of ChatGPT, AI (Artificial Intelligence) is no longer a concept from science fiction but part of everyday life. While users marvel at the changes that AI’s capabilities may bring to their lives and work, there is also great concern about the ethical issues AI may cause. The promiscuity of AI as a term, and its openness to being reconfigured, also means that it can be put to use in a range of ways (Crawford, 2021). Whether searching for images in a search engine, viewing short videos on social media or using AI tools for text-to-image generation, users receive information that platforms generate from vast underlying databases.

As AI-assisted systems proliferate in society, these biased systems directly impact users who are unaware of the biases (Shrestha & Das, 2022). As highlighted by several machine learning pioneers, a primary challenge for computer systems is their incapacity to separate the information they process from the context in which it is generated (Pearl, 1988). Platforms’ algorithmic recommendation and ethical review mechanisms, and most fundamentally the databases shaped by the gender bias accumulated in society over time, can all affect the fairness of the information users receive. Because gender bias is pervasive in society and influences our unconscious decisions, it can be challenging for users to detect a biased automated system, and making systems accountable for unfair practices is difficult when users are unable to recognise such biases. Gender bias in AI can undermine fair treatment and reinforce stereotypes, further entrenching gender inequality in society.

The Emergence of Gender Bias in AI

Machine learning (ML) and artificial intelligence (AI) have been deployed in a wide variety of settings. These algorithms and models provide quick and convenient solutions to problems, but the biases lurking in them, such as gender bias, can lead to people being treated unfairly. While the fairness debate in the field of AI is relatively new, discrimination has long been present in human society, and historical biases seep into automated systems through the data they rely on. AI systems trained on such data can detect and learn the implicit biases accumulated over time, which may not be apparent at first glance. Broadly, in the context of ML and AI implementation, a model is gender biased if its performance and/or output is biased against a segment of the population based on their gender (Shrestha & Das, 2022). Training datasets do not always reflect the demographics of the public, limiting the representation of vulnerable groups and amplifying gender biases (Castaneda et al., 2022).

Different Stages Can Produce Various Types of Bias

Emily Bender and Batya Friedman describe three categories of bias: pre-existing biases, technical biases and emergent biases (Bender & Friedman, 2018). Pre-existing biases stem from biased social norms and practices: women face unequal pay for equal work, marginalisation in the workplace, and age and maternity anxiety; numerous products are designed around the male body; non-binary genders are ignored in policy. These gender biases are imported into AI databases in the form of data. Technical biases arise from the technical side and are incorporated into AI systems through the specific constraints and decisions made by developers; some researchers have linked the existence of gender bias to the underrepresentation of women in the development and creation of AI products and services (Castaneda et al., 2022). Emergent bias occurs during actual use, when a system does not align with the capabilities or values of its users. One example is the filter bubble created by social media’s algorithmic recommendations: when bubbles form around issues such as sexism, the constant affirmation of a single point of view inhibits users’ ability to expand their knowledge beyond the confines of their own networks.

AI Systems Frequently Exhibit Gender Bias Across Various Domains

Gender bias is widely present in different AI systems, including natural language processing (NLP), automated facial analysis, image classification and recognition algorithms, advertising, marketing, recruitment, recommendation systems, search, and robotics (Shrestha & Das, 2022).

Gender Bias in Google Image Search

Figure 1. Image search engine (Pew Research Center, 2018)

Search engines are among the most basic Internet platforms and are used by virtually every user. We use them to access web pages, videos, text, and images. Google processes more than 3.5 billion queries per day and 1.2 trillion searches per year, and image searches account for 22.6% of all Google searches (Feng & Shah, 2022). Given the enormous number of image searches carried out in everyday life, their results shape the world as perceived and understood by users; information environments have the power to affect people’s perceptions and behaviours.

Gender bias is a prevalent and extensively researched form of demographic bias among the various types of image bias. As a result, it frequently draws the attention of both academics and the media, which in turn prompts service providers to implement ad hoc solutions to address such biases (Feng & Shah, 2022). Several studies have examined gender bias in search engines by analysing the gender distribution of different occupations in image search results. Occupations were chosen because gender representation at work is a socially important topic that has been in the spotlight and is continually improving.

CEO Image Search Results Demonstrate Significantly Higher Representation of Men

In 2018, a study by the Pew Research Center built on existing research to analyse a wide range of Google Image Search results featuring men and women in common jobs, and compared these findings with actual data on gender distribution within the U.S. workforce. The study revealed that the proportion of each gender depicted differs significantly across the careers examined. Machine vision algorithms were used to estimate whether each person appearing in the images was male or female, and the estimated percentages of men and women in the top 100 Google image search results for each job were compared against the actual share of men and women in that occupation. Overall, men appeared in the image search results more often than their real share of those occupations would suggest (Lam et al., 2018). For more than half of the examined job categories, search images underrepresented women compared to their real-world participation in those professions, according to federal data (Figure 2). For example, when searching for the position of CEO, the proportion of women in the image results was far lower than that of men. Stereotypes about professions are widespread, and in reality there is unequal gender representation across positions. Existing images support the stereotype of working women as marginalised, sexualised or in support roles, and these portrayals harm women’s career aspirations and prospects (Kay et al., 2015). Search engine results reinforce these stereotypes, and search results are rarely compared against real-world data.
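
To make this measurement concrete, the sketch below estimates the share of women among the people shown in the top 100 image results for each occupation and compares it with labour-force statistics. It is a minimal sketch rather than the Pew Research Center’s actual pipeline: the gender classifier, the folder of downloaded results and the workforce CSV are hypothetical placeholders.

```python
# Minimal sketch: estimate the share of women in the top-100 image results for
# each occupation and compare it with labour-force statistics.
# NOTE: classify_gender, the image_results/ folder and the CSV file are
# hypothetical placeholders, not the Pew Research Center's actual pipeline.

from pathlib import Path

import pandas as pd


def classify_gender(image_path: Path) -> str:
    """Stand-in for a pretrained face/gender classifier.

    A real implementation would detect the person in the image and return
    "woman" or "man"; this stub only marks where such a model would plug in.
    """
    raise NotImplementedError("plug in a real classifier here")


def share_of_women(image_dir: Path) -> float:
    """Fraction of depicted people classified as women in one result set."""
    labels = [classify_gender(p) for p in sorted(image_dir.glob("*.jpg"))[:100]]
    return sum(label == "woman" for label in labels) / max(len(labels), 1)


# Hypothetical CSV with one row per occupation and the real share of women in
# that occupation, e.g. taken from federal labour statistics.
workforce = pd.read_csv("workforce_share_women.csv", index_col="occupation")

rows = []
for occupation_dir in Path("image_results").iterdir():  # one folder per query
    measured = share_of_women(occupation_dir)
    actual = workforce.loc[occupation_dir.name, "share_women"]
    rows.append({"occupation": occupation_dir.name,
                 "search_share": measured,
                 "workforce_share": actual,
                 "gap": measured - actual})

# Negative gaps indicate occupations where women are underrepresented in the
# search results relative to the real workforce.
print(pd.DataFrame(rows).sort_values("gap"))
```

Any real classifier of this kind carries its own error rates and binary gender assumptions, which is itself a limitation acknowledged in this line of research.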

Figure 2. Percentage of women in image search results, by occupation (Lam et al., 2018)

Has CEO Gender Bias Been Fixed?

After researchers discovered and reported that gender bias for certain professions could change searchers’ worldviews, Google quickly took action to correct such bias, adjusting the gender distribution in image search results for CEO and several other occupations (Feng & Shah, 2022). However, a recent study (Feng & Shah, 2022) showed that although Google has corrected the image search results for CEOs, gender biases reappear when other keywords, such as US or UK, are added to the query (Figure 3). By searching for combinations of occupation and country, the researchers revealed that the image search engine mitigates the bias only superficially and that deep-rooted gender bias has not been fixed systematically.
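
The probing idea can be sketched as a simple loop over occupation-and-country queries. The two helper functions below are hypothetical placeholders, not Google’s API or the study’s actual tooling; they stand in for fetching result images and classifying the people shown in them.

```python
# Minimal sketch of the occupation-plus-country probe: run "CEO" combined with
# different countries and measure the share of women in the returned images.
# fetch_top_images and classify_gender are hypothetical placeholders.

def fetch_top_images(query, n=100):
    """Placeholder: return the top-n result images for an image search query."""
    raise NotImplementedError("plug in a result-collection step here")

def classify_gender(image):
    """Placeholder: return "woman" or "man" for the person shown in an image."""
    raise NotImplementedError("plug in a real classifier here")

for country in ["", "United States", "United Kingdom", "Germany", "Japan"]:
    query = f"CEO {country}".strip()
    labels = [classify_gender(img) for img in fetch_top_images(query)]
    share_women = sum(label == "woman" for label in labels) / max(len(labels), 1)
    print(f"{query!r}: {share_women:.0%} women in top {len(labels)} results")
```

A large drop in the share of women once a country term is added would indicate that the mitigation applied to the bare “CEO” query does not generalise.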

Figure 3. Image search results for “CEO” on Google (Feng & Shah, 2022)

Image Search Results Affect How People Think and Act

At the same time, gender bias in search algorithms impacts users by reinforcing gender bias and potentially affecting hiring choices. Psychological researchers conducted a study to identify whether there is a correlation between the level of inequality in a society and the presence of bias in algorithmic outputs, and whether exposure to these biased outputs could sway human decision-makers to act in line with such biases. They drew on the Global Gender Gap Index (GGGI), which measures gender inequality in economic participation and opportunity, educational attainment, health and survival, and political empowerment across 153 countries, giving each nation a societal-level gender inequality score. The researchers performed Google image searches for “person” in each nation’s dominant local language across 37 countries. The findings indicated that in countries with higher gender inequality, a greater proportion of the images returned were of men, demonstrating that algorithmic gender bias aligns with societal gender inequality (Vlasceanu & Amodio, 2022).

In addition, the researchers conducted an experiment with 400 American men and women. Participants were asked to take on the role of recruiters and to choose the gender of the candidate they thought was appropriate for each of four unfamiliar occupations. At the start of the experiment, both male and female participants tended to choose men. After viewing the image search results, however, participants in the low-inequality condition altered their male-biased prototypes relative to the baseline assessment, whereas those in the high-inequality condition retained their male-biased views, further solidifying their perceptions of these prototypes. This shows that gender bias in a commonly used internet search algorithm mirrors the level of gender inequality present in society, and that exposure to biased algorithmic outputs leads people to think and behave in ways that reinforce that inequality.
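
As an illustration of the societal-level analysis, the sketch below correlates each country’s gender-inequality score with the measured share of men in its “person” image results. The CSV file and its column names are assumptions for the sake of the example, not the authors’ released data.

```python
# Minimal sketch: correlate societal gender inequality with the share of men
# in "person" image search results, in the spirit of Vlasceanu & Amodio (2022).
# The CSV file and its column names are hypothetical placeholders.

import pandas as pd
from scipy.stats import pearsonr

# One row per country: a GGGI-based inequality score and the fraction of
# people classified as men in its top "person" image results.
df = pd.read_csv("person_search_by_country.csv")

r, p = pearsonr(df["gender_inequality"], df["share_men_in_results"])
print(f"Pearson r = {r:.2f} (p = {p:.3f}) across {len(df)} countries")
```

A positive correlation would mean that the more unequal a society is, the more male-dominated its “person” search results are, which is the pattern the study reports.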

A Cycle of Bias Propagation Involving Society, AI, and Users

After gender bias in the image search engine was exposed, Google continued to optimise its search algorithm, yet the results still reflect gender inequality. Firstly, gender bias and stereotypes accumulated in society over time lead to bias in the data itself. Secondly, imperfect AI algorithms and inadequate review of search results produce outputs that reproduce these biases. Thirdly, these outputs affect users’ conceptions of gender and related behaviours (e.g. recruitment). This implies a cycle of bias propagation involving society, AI, and users. At the same time, the results of these studies form a record of how the search engine’s algorithms have developed: even if the gender bias in search results eventually fades away, these studies are proof that the bias existed.

Strategies to Address and Mitigate Gender Bias in AI

To tackle gender bias in AI, there is a need not only for companies to improve their algorithms, but also for an international consensus on the governance of AI and the establishment of norms.

Enterprises Improve AI Algorithms

In order to provide a better user experience and to act in a socially responsible way, AI development companies need to improve their algorithms to reduce the impact of bias in their products on users and society. Algorithmic biases are usually prevented or mitigated by intervening at the source of the bias; in most cases, the source is either the training corpus or the algorithm itself (Shrestha & Das, 2022). According to Feldman and Peake (2021), there are three distinct types of algorithmic bias mitigation, depending on the stage of training at which the model designers intervene. Pre-processing methods filter or otherwise adjust the raw data before training begins. In-processing methods intervene during the training process, for example through adversarial learning. Post-processing methods are applied after the model has been trained, making them the easiest to implement.
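
As a concrete illustration of the pre-processing stage, the sketch below reweights a training set so that each gender group carries equal total weight before a model is fit. The CSV file, column names and choice of scikit-learn classifier are assumptions for the example, not a prescribed pipeline.

```python
# Minimal sketch of a pre-processing mitigation: reweight training examples so
# that each gender group contributes equal total weight during training.
# The CSV file and its column names are hypothetical placeholders.

import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.read_csv("training_data.csv")      # assumed to include a "gender" column
X = data.drop(columns=["gender", "label"])   # numeric features
y = data["label"]

# Weight each row by the inverse frequency of its gender group, so that an
# underrepresented group is not simply outvoted during training.
group_counts = data["gender"].value_counts()
weights = data["gender"].map(len(data) / (len(group_counts) * group_counts))

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```

In-processing alternatives, such as adversarial learning, instead penalise the model during training when a second network can predict gender from its internal representations, while post-processing methods only adjust the outputs of an already trained model.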

Organisations Develop AI Standards

International organisations such as UNESCO and the OECD are already developing standards and principles for AI. UNESCO highlights that AI can exacerbate existing gender gaps, particularly as gender biases and stereotypes are perpetuated by the underrepresentation of women in the industry. UNESCO has developed the first global standard-setting instrument on AI ethics, in the form of a recommendation (UNESCO, 2022). The OECD Council Recommendation on Artificial Intelligence provides a set of internationally agreed principles: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability (Artificial Intelligence – OECD, n.d.). In addition, individual countries are drafting their own AI industry standards.

Equally urgent, however, is the bias that lies outside AI technology itself: we cannot fully address gender bias in AI recommendations without addressing human gender bias.

References

Artificial intelligence – OECD. (n.d.). OECD. https://www.oecd.org/digital/artificial-intelligence/

Bender, E. M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587–604. https://doi.org/10.1162/tacl_a_00041

Castaneda, J., Jover, A., Calvet, L., Yanes, S., Juan, A. A., & Sainz, M. (2022). Dealing with Gender Bias Issues in Data-Algorithmic Processes: A Social-Statistical Perspective. Algorithms, 15(9), 303. https://doi.org/10.3390/a15090303

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (p. 19). Yale University Press.

Feldman, T., & Peake, A. (2021). End-To-End Bias Mitigation: Removing Gender Bias in Deep Learning. https://arxiv.org/pdf/2104.02532.pdf

Feng, Y., & Shah, C. (2022). Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 11882–11890. https://doi.org/10.1609/aaai.v36i11.21445

Kay, M., Matuszek, C., & Munson, S. A. (2015). Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems – CHI ’15. https://doi.org/10.1145/2702123.2702520

Lam, O., Broderick, B., Wojcik, S., & Hughes, A. (2018, December 17). Gender and Jobs in Online Image Searches. Pew Research Center’s Social & Demographic Trends Project. https://www.pewresearch.org/social-trends/2018/12/17/gender-and-jobs-in-online-image-searches/

Pew Research Center. (2018). Image search engine [Image]. Pew Research Center. https://www.pewresearch.org/social-trends/2018/12/17/gender-and-jobs-in-online-image-searches/

Shrestha, S., & Das, S. (2022). Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.976838

UNESCO. (2022). Ethics of artificial intelligence. UNESCO. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

Vlasceanu, M., & Amodio, D. M. (2022). Propagation of societal gender inequality by internet search algorithms. Proceedings of the National Academy of Sciences, 119(29). https://doi.org/10.1073/pnas.2204529119
