Using the Google Search Engine as an Example to Discuss Algorithmic Bias and Social Responsibility
People live with the internet, and many different types of platforms are widely used by the public. These platforms share a common feature: all of their operations depend on algorithms. Algorithms drive the establishment of processes and rules for different platforms, processing data, assisting calculations, and automating decision-making (Goggin, 2023). Algorithms are familiar to the public, yet the public has no power to control them. Search engines are one of the first places people look for information, and search is the second most common use of the Internet (Robertson et al., 2018). Some scholars believe algorithms are neutral tools, but others have repeatedly overturned this theory. The questions of algorithmic bias, and of who should be responsible for an algorithm, are therefore important. In this blog I will use the Google search engine as an example to discuss algorithmic bias and the discriminatory impact that bias has on the public. In the end, I will discuss who should be responsible for the adverse social effects caused by algorithms, and encourage the public to improve algorithmic awareness and education in order to recognize algorithmic bias correctly.
Regarding whether the algorithm is a neutral tool, Noble (2018) conducted a systematic study of the Google search engine: by analyzing seven years of phenomena produced by Google's algorithm, she showed that the algorithm carries discriminatory bias. This means the algorithm is not neutral. In principle, the programmers' assumptions, and the company's overall agenda, are expressed in the algorithm. Originally, most search engines relied solely on content and metadata to determine whether a web page was relevant to a user's search (Introduction to PageRank for SEO, 2020), which resulted in low search accuracy. Google's search algorithm is a closely guarded trade secret and is dynamic in nature (Ofiwe, 2021). Continuously learning and adjusting the algorithm can create a more user-centric platform and catch negative bias in time to reduce its psychological impact on the public.

Google's search algorithm refers to the process Google uses to rank content. How do users experience this? The search-bar suggestions and all search results are linked to the user's account. In the beginning, Google mainly used PageRank to map the link structure of the web, inferring a page's importance from the volume of links pointing to it. In addition, collaborative filtering (CF) connects web pages with the user's background and serves the search results it judges appropriate. For example, as you type, the search bar suggests queries you have searched before or queries with related keywords, while the algorithm pushes entries it thinks you are not interested in far down the ranking or blocks them entirely. This process traps users in a bubble of their own habits, known as a "filter bubble" (Bruns, 2019). In particular, when you search a similar question after clicking some links, the ranking changes according to your previous viewing. Google easily pulls users into rabbit-hole effects by continuously serving personalized recommendations: related links the user may be interested in while browsing the web. However, personalization does not reduce algorithmic stereotypes and prejudice; instead, it demonstrates the multifaceted discrimination of the platform (Bozdag, 2013). The two sketches below illustrate these two mechanics; afterwards, I will use two cases to analyze the racism and right-wing bias produced by Google search.
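To make the first mechanic concrete, here is a minimal sketch of the power-iteration idea behind PageRank, run on a toy four-page web. The link graph, damping factor, and iteration count are illustrative assumptions; this is the textbook algorithm, not Google's production system.

```python
# Minimal PageRank sketch: a page's score is the probability that a
# "random surfer" lands on it, following links with probability d and
# jumping to a random page with probability 1 - d. Toy data only.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about", "shop"],
    "shop": ["home"],
}

def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from a uniform score
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}   # random-jump share
        for page, outlinks in links.items():
            share = d * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share            # pass rank along each link
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")   # "home" wins: it attracts the most links
```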
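The filter-bubble dynamic can be sketched the same way: a re-ranker that boosts results sharing topics with a user's past clicks. The scoring rule, data, and weights below are hypothetical illustrations of the general technique, not Google's actual personalization system.

```python
# Sketch of click-history re-ranking: results that share topics with past
# clicks get boosted, so each new search narrows toward the user's habits.
results = [
    {"url": "news-left.example",  "topics": {"politics", "left"}},
    {"url": "news-right.example", "topics": {"politics", "right"}},
    {"url": "recipes.example",    "topics": {"cooking"}},
]
click_history = [{"politics", "left"}, {"politics", "left"}]  # past clicks

def personalize(results, history, boost=0.5):
    # Count how often each topic appears in the user's click history.
    topic_counts = {}
    for topics in history:
        for t in topics:
            topic_counts[t] = topic_counts.get(t, 0) + 1
    def score(result):
        return sum(boost * topic_counts.get(t, 0) for t in result["topics"])
    return sorted(results, key=score, reverse=True)

for r in personalize(results, click_history):
    print(r["url"])
# The left-leaning page now outranks everything else; each further click
# on it widens the gap, which is the bubble Bruns (2019) describes.
```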
Racism:

Figure 1. Screenshot from Cohn's blog, showing Google image search results for "girl" and "woman".
In Figure 1, the pictures of girls and women are mainly white, and their aesthetic is thin and beautiful. I think this demonstrates Google's stereotypes and racial biases towards women. Only one-third of Google's employees are women (Miller, 2012). A male-dominated algorithm objectifies women as objects of male visual appreciation. Only 4 of the 50 pictures show dark-skinned or Asian girls. This result is related to the proportion of Black women working at Google, which is only 1.2% (Cohn, 2019). Another sign of discrimination is the suggested words in the search engine.

Figure 2. The cover of Noble's book.
Figure 2 shows the cover of Noble's book. In 2018, the search engine's suggestions offered many negative words for topics around Black women. By 2023, Google had deleted these search suggestions to avoid displaying discrimination against Black women: when I type only "black women" in the search bar, the suggestions are all related to buying clothes. It is good that Google recognized the bias, but in the process of changing it, Google still puts users in filter bubbles to increase the profitability of advertising promotions.
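Noble's method of probing suggestions can be approximated with a simple audit: issue the same query prefix for different groups and compare how often the returned suggestions carry negative terms. The suggestion lists and the tiny negative-word lexicon below are invented placeholders; a real audit would scrape live suggestions and use a proper sentiment lexicon.

```python
# Sketch of a suggestion audit in the spirit of Noble (2018): compare the
# share of negatively-connoted autocomplete suggestions across query
# prefixes that differ only in the group named. All data here is invented.
NEGATIVE_TERMS = {"bad", "lazy", "dangerous"}    # stand-in sentiment lexicon

collected_suggestions = {                        # stand-in for scraped output
    "group-a women are": ["friendly", "bad", "tall"],
    "group-b women are": ["friendly", "kind", "tall"],
}

def negative_share(suggestions):
    """Fraction of suggestions flagged as negative."""
    flagged = sum(1 for s in suggestions if s in NEGATIVE_TERMS)
    return flagged / len(suggestions)

for prefix, suggestions in collected_suggestions.items():
    print(f"{prefix!r}: {negative_share(suggestions):.0%} negative")
# A persistent gap between the groups would flag the prefix for review.
```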
Right-wing bias:

Figure 3. Screenshot from The Guardian's webpage: Google favoured Clinton through autocomplete suggestions.
Contributors to partisan bias include TV, social media, and search engine manipulation effects (Robertson et al., 2018). The Guardian's findings demonstrate that Google search has enabled creative manipulation of social media trends and search algorithms by right-wing groups (Solon & Levin, 2016). The algorithm is kept in a black box, and employees have the opportunity to adjust it manually and irresponsibly. During the Trump and Clinton campaigns, Google's search results used biased rankings to sway hesitant voters (Robertson et al., 2018), since voters prefer candidates with positive entries. During the campaign, Google offered many autocomplete suggestions tied to Clinton's positive image (Sputnik International, 2016) and changed the order of entries to increase the probability of her name appearing. There is also the problem of fake information: American far-right organizations use various techniques to deceive algorithms so that false information spreads quickly (Robertson et al., 2018). The autocomplete algorithm is not a simple tool; its influence goes far beyond our imagination. Google's manipulation was subtle, and it helped the platform form a symbiotic relationship with politics (Robertson et al., 2018).
What are the effects?
Algorithmic bias affects individuals, social groups, and politics in different ways. The first effect is racial discrimination. Globally, racial discrimination has an adverse impact on population health (Braveman et al., 2022). Systemic racism is rooted in institutions and sustained by long-standing stereotypes. In the current context of globalization, the Internet should shoulder the responsibility of anti-discrimination. Google's search algorithm is connected to the user's account and background information, and racial discrimination leads to unfair search results. This can cause serious psychological problems for individuals and groups, which can even escalate into hatred of society and increase the likelihood of antisocial behaviour.
Google makes money through web advertising, and a biased algorithm reduces the amount of information people of colour receive, such as job-related ads. This further deepens their economic disadvantage and increases their health risks (Braveman et al., 2022). Google's autocomplete in particular can reinforce stereotypes about marginalized groups. From this point of view, Google's removal of the negative associations with "black women" is a first step. If it wanted to help the anti-discrimination movement, Google could consider adding positive suffix associations to certain group-related terms.
In terms of political impact, algorithmic bias undermines people's trust in the electoral system. Users feel manipulated because they have no control over the bias of search results. Users place more trust in fair, transparent, and accountable algorithmic recommendations (Shin, 2020), which can also safeguard the fairness of elections and reduce polarization.
Most people use search engines to check the accuracy of information found on other social platforms (Robertson et al., 2018), so the rapid spread of false information affects other industries as well. For example, journalists rely on search engines and Internet platforms to obtain news. Google and other search platforms control not only the information journalists can access but also the audience and composition of stories (Tong & Lo, 2017). Platforms controlled by capital, technicians, and government bureaucrats incorporate their worldviews and interests into algorithm design (Feenberg, 1999). Changes in the media industry also ripple into the financial industry: there have been cases where false information caused stocks to plummet, affecting multiple social groups. Online information intermediaries have become, to a certain extent, our social gatekeepers, and that is why they must be given responsibilities.
How can this change?
Algorithmic bias needs to be brought under control, and tech companies should recognize their responsibility to expand algorithmic transparency where necessary (Napoli & Caplan, 2016). However, transparency may invite attacks on the algorithm, so companies need to think about the degree of disclosure and how to control accessibility. Google could consider the social diversity of its staffing and algorithm-writing teams, requiring designers to consider not only the functions within the system but also how it will actually be used in its environment (Friedman & Nissenbaum, 1996). A diverse team makes the algorithm less personal at the editing stage and reduces users' feeling of being discriminated against. At the same time, Google could conduct regular manual reviews of search results to adjust the automatic algorithm, for instance by tracking simple exposure metrics like the one sketched below.

Google's search engine dominates the industry so overwhelmingly that it is hard for users to get a second opinion (Diaz, 2008). Google should accept reasonable public regulation, because unilateral decisions lead to political problems. It is very necessary to clarify the system of responsibility, and Hinman (2005) proposed four reasons why search engine companies should undertake major social responsibilities: first, they play an absolutely critical role in access to information; second, citizens need accurate information to make informed decisions; third, search engines are at the heart of education and therefore shape the future; and last, the businesses behind them are for-profit, so they must earn the public's trust. Whether it is Google or any other platform with a search function, it should be responsible to the public. Their role and power are too broad: they control not only the source and dissemination of information but also the future group thinking of the public. There is no doubt that algorithm designers and companies need to take responsibility. But you, the reader, should also think about this! You have a responsibility to raise your own awareness and education so you can recognize and resist algorithmic bias.
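As one concrete illustration of what such a periodic review might measure, here is a minimal sketch of a top-k exposure check. The result IDs, group labels, and the choice of metric are hypothetical assumptions, not a description of Google's actual review process.

```python
# Sketch of one metric a manual review could track: the share of the
# top-k results associated with each group. Large, persistent skews
# would be flagged for human investigation. Hypothetical data only.
ranked_results = ["a1", "b1", "a2", "a3", "b2", "a4"]   # ranked result IDs
group_of = {"a1": "A", "a2": "A", "a3": "A", "a4": "A",
            "b1": "B", "b2": "B"}                       # result -> group

def top_k_shares(ranking, groups, k=5):
    top = ranking[:k]
    return {g: sum(1 for r in top if groups[r] == g) / k
            for g in set(groups.values())}

print(top_k_shares(ranked_results, group_of))  # {'A': 0.6, 'B': 0.4}
```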
Other social platforms now also have search functions, such as China's Little Red Book and Taobao, which provide users with recommendations. Like Google, they serve users predictions through personalization algorithms and create filter bubbles of their own. Overly accurate predictions also make users fear for their personal privacy. Indeed, it is difficult for search engine algorithms to satisfy everyone, and various deviations are inevitable. But a platform should assume its social responsibilities while protecting users' rights and interests, and it should avoid destabilizing other industries. The public should improve their self-awareness so that people tame algorithms, instead of letting algorithms tame people's thinking and behavioural habits.
References
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6
Braveman, P. A., Arkin, E., Proctor, D., Kauh, T., & Holm, N. (2022). Systemic and structural racism: Definitions, examples, health damages, and approaches to dismantling. Health Affairs, 41(2), 171–178.
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426
Cohn, J. (2019). Google’s algorithms discriminate against women and people of colour. The Conversation. Retrieved from https://theconversation.com/googles-algorithms-discriminate-against-women-and-people-of-colour-112516
Diaz, A. (2008). Through the Google Goggles: Sociopolitical bias in search engine design. In Information science and knowledge management (pp. 11–34). Springer.
Feenberg, A. (1999). Questioning Technology. Routledge. https://doi.org/10.4324/9780203022313
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561
Goggin, G. (2023). Issues of concern: AI, automation, & algorithmic governance. ARIN6902 Internet Cultures and Governance, The University of Sydney, 14 April.
Hinman, L. M. (2005). Esse est indicato in Google: Ethical and political issues in search engines. International Review of Information Ethics, 3, 19–25.
Introduction to PageRank for SEO. (2020). Polemic Digital. Retrieved from https://www.polemicdigital.com/introduction-to-pagerank-for-seo/
Miller, C. C. (2012). In Google’s inner circle, a falling number of women. The New York Times. Retrieved from https://www.nytimes.com/2012/08/23/technology/in-googles-inner-circle-a-falling-number-of-women.html
Napoli, P. M., & Caplan, R. (2016). When Media Companies Insist They’re Not Media Companies and Why It Matters for Communications Policy. Available at SSRN 2750148.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Ofiwe, M. (2021). How does the Google search algorithm work in 2021? Semrush Blog. Retrieved from https://www.semrush.com/blog/google-search-algorithm/
Robertson, R., Jiang, S., Joseph, K., Friedland, L., Lazer, D., & Wilson, C. (2018). Auditing Partisan Audience Bias within Google Search. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–22. https://doi.org/10.1145/3274417
Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541-565.
Solon, O., & Levin, S. (2016). How Google’s search algorithm spreads false information with a rightwing bias. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/dec/16/google-autocomplete-rightwing-bias-algorithm-political-propaganda
Sputnik International. (2016). Sputnik exclusive: Research proves Google manipulates millions to favor Clinton. Retrieved from https://sputnikglobe.com/20160912/google-clinton-manipulation-election-1045214398.html
Tong, J., & Lo, S.-H. (2017). The Invisible Hand of the Unaccountable Algorithm: How Google, Facebook and Other Tech Companies Are Changing Journalism. In Digital Technology and Journalism. Springer International Publishing AG. https://doi.org/10.1007/978-3-319-55026-8_2