
With the rapid development of artificial intelligence and big data, we have entered an algorithmic society. Algorithms now permeate every corner of the social economy and closely influence our everyday lives and decisions. However, algorithms are not a completely objective and value-neutral technology. There are many possible sources of bias in the operation of algorithms, including bias in the source of the data, bias in the labelling of samples, and bias in the design of the system, etc. In essence, these biases are a mapping of social bias into the era of artificial intelligence; they gradually surface, infringe on the public’s right to equal treatment and dignity, and can even cause ethical problems and an ideological crisis. Big data algorithms are trained on databases, so when data mining is approached without care, existing discrimination is captured along with the data (Barocas & Selbst, 2016). At this stage, public concern about algorithms still focuses on the level of technical development. However, the hidden biases in algorithms are also worthy of in-depth sociological study, because raising users’ awareness of algorithmic bias is important for mitigating its negative effects. In today’s blog, we will start from the formation of algorithmic bias, analyse the importance of governance, and finally survey how different countries currently govern algorithmic bias through their existing policies.
The Formation of Algorithmic Bias
Before we look at how bias happens, let’s quickly review the steps needed to develop an algorithm. The first step is to identify the problem that the algorithm needs to solve; the developer then specifies the type of data to be used and how to treat that data. After that, the developer selects existing code or writes new code to deal with the problem based on the data sets, and finally implements the algorithm in a programming language. In this process, the main areas where bias can be introduced are the mining and processing of data and the formulation of the algorithm itself. In their paper on algorithmic bias, Barocas and Selbst (2016) explain that the possible causes of bias in data mining include sample selection bias, measurement bias, missing data, and data culling. When the data set covers only a subset of the ideal sample, selection bias can cause important factors to be overlooked; when the tools or standards for measuring data are not neutral or accurate, measurement bias skews the results of data processing; when the data sampling does not cover all samples, missing data introduces bias into the data set; and when data collectors selectively cull some data, specific groups end up being overlooked. To summarise, even setting aside any bias introduced when the algorithm is written, there is still a lot of potential for bias to enter during the formation of the database, and in many cases this bias is environmentally determined and goes unnoticed by the algorithm engineers. Bias therefore affects not only the design of the algorithm but also the database used to train and run it.
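To make these data-mining biases concrete, here is a minimal Python sketch of sample selection bias. It is our own illustration, not taken from Barocas and Selbst’s paper; the groups, numbers, and variable names are all invented. A classifier trained on a sample that over-represents one group performs noticeably worse on the under-represented group, even though the group label itself is never shown to the model.

```python
# Illustrative sketch of sample selection bias (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, mean):
    # One feature per record; the true label depends on the feature
    # relative to the group's own baseline, plus some noise.
    x = rng.normal(mean, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0, 0.5, size=n) > mean).astype(int)
    return x, y

x_a, y_a = make_group(5000, mean=0.0)  # group A
x_b, y_b = make_group(5000, mean=2.0)  # group B

# Selection bias: the training sample is 95% group A, only 5% group B.
x_train = np.vstack([x_a[:1900], x_b[:100]])
y_train = np.concatenate([y_a[:1900], y_b[:100]])

model = LogisticRegression().fit(x_train, y_train)

# Evaluate on the held-out records of each group separately.
for name, x, y in [("group A", x_a[1900:], y_a[1900:]),
                   ("group B", x_b[100:], y_b[100:])]:
    print(name, "accuracy:", round(model.score(x, y), 3))
```

Running this typically prints a high accuracy for group A and a near-chance accuracy for group B: the model has learned a decision boundary that fits the over-represented group and quietly fails for everyone else.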
We can use a well-known case to understand how a biased database leads to biased behaviour in an algorithm. Amazon once developed a resume-screening algorithm with significant bias issues because it was trained on historical data marked by gender differences. Although the algorithm was discontinued when the problem was discovered, the case is still very representative of algorithmic bias caused by biased data mining. The algorithm was designed to help recruiting teams screen suitable candidates faster and more accurately, but there was gender and race bias in the data set used to train it (Weber & Dickerson, 2018). Simply put, the company vetted job applicants by feeding its algorithm the resumes it had received over a 10-year period as a training database. The majority of successful candidates over that period were male, which not only reflects the male dominance of the technology industry as a whole but also taught the algorithm the wrong lesson. Amazon’s system therefore “remembered” that male applicants were more likely to be accepted and reduced the probability of accepting female resumes. Although Amazon programmed the system to be neutral with respect to certain terms, this did not guarantee that the machine would not discriminate against applicants in other ways. The result was that the software ignored some qualified female and minority applicants and selected more male applicants. Such algorithmic bias can negatively affect the values, reputation, and business of the companies that use it, and this case triggered further attention to algorithmic bias from experts and the public alike.
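The mechanism behind this case can be sketched in a few lines of code. The following is a hypothetical toy model, not Amazon’s actual system, and every variable here is invented: even after the explicit gender field is dropped from the training data (so-called “fairness through unawareness”), a proxy feature correlated with gender, such as a gendered word on the resume, still absorbs the historical bias.

```python
# Hypothetical sketch: historical bias survives removal of the gender column
# because a correlated proxy feature carries the same signal. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000

gender = rng.integers(0, 2, n)            # 0 = male, 1 = female (historical data)
skill = rng.normal(0, 1, n)               # genuinely job-relevant signal
proxy = gender + rng.normal(0, 0.3, n)    # e.g. a gendered word on the resume

# Historical decisions: skill mattered, but past recruiters also penalised women.
hired = (skill - 1.5 * gender + rng.normal(0, 0.5, n) > 0).astype(int)

# Drop the explicit gender column; train only on skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression(max_iter=1000).fit(X, hired)

print("skill weight:", round(model.coef_[0][0], 2))
print("proxy weight:", round(model.coef_[0][1], 2))
```

The fitted weight on the proxy feature comes out strongly negative: the model has reconstructed the historical penalty against women from an indirect clue, which is exactly why scrubbing a few explicit terms, as Amazon tried, does not guarantee neutral behaviour.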
In addition to biased data sets, algorithms themselves are often not written neutrally. Whether because the programmers held particular values or because tech companies tried to make the algorithms’ suggestions more appealing to the market, the result is that bias gets built into the algorithms. In Safiya Noble’s analysis of algorithmic bias in search engines, she critically enumerates how exaggerated gender and racial stereotypes were in Google’s search algorithm. At the time (the screenshots in the literature date from 2011), both the sexualised photos that appeared when searching for images of “black girls” and the white men that appeared when searching for “business elites” were typical evidence of algorithmic bias (Noble, 2018). In her study, she points out that the causes of algorithmic bias include a lack of diversity in algorithm design and implementation teams, biased data sets, and the prioritisation of profit over ethical considerations in algorithm development. In addition, she highlights the role of historical biases and the influence of the socio-political context in shaping algorithmic bias (Noble, 2018). More importantly, such social context is not only reflected in the algorithm but even amplified by it. With growing attention to the concept of algorithmic bias, we now rarely see such blatant algorithmic failures, but Noble’s analysis of the causes of algorithmic bias still applies to the study of today’s more invisible biases.
Why Algorithmic Bias Matters
In fact, beyond the cases mentioned above, the impact of algorithmic bias on our lives far exceeds public perception. According to the “Gender Shades” study, commercial facial recognition algorithms are markedly biased in terms of gender and skin colour: they misclassified darker-skinned women 34.7% of the time, compared to an error rate of only 0.8% for lighter-skinned men (Buolamwini & Gebru, 2018). A similar situation has been seen in other areas. The Detroit Police Department wrongfully arrested an innocent Black man, Robert Julian-Borchak Williams, because of an algorithmic error. Williams was arrested as a suspect in the theft of watches from a retail store, but in fact he had no connection to the case: the facial recognition system used by the police had incorrectly matched his photo to the person who appeared on the store’s security camera. Although the prosecution dismissed the case two weeks later, the harm was irreparable. Williams was handcuffed and arrested in front of his children, detained for 30 hours, interrogated at the police station, and required to provide a DNA sample and fingerprints (Allyn, 2020). Algorithms are now employed across many industries, which is a positive trend from the perspective of technological development, but it is undeniable that such algorithmic bias has taken root in those industries as well.
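The methodological lesson of “Gender Shades” is that aggregate accuracy hides subgroup disparities, so audits should disaggregate error rates by group. The sketch below, which uses a synthetic toy data set rather than the study’s actual benchmark, shows the general shape of such an audit:

```python
# Minimal sketch of a disaggregated error-rate audit (synthetic toy data).
import numpy as np

def error_rate_by_group(y_true, y_pred, group):
    """Return {group_label: misclassification rate} for each subgroup."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        rates[str(g)] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy example: a modest overall error rate hides a large subgroup disparity.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0])
group  = np.array(["m", "m", "m", "m", "m", "m", "w", "w", "w", "w", "w", "w"])

print("overall error:", float(np.mean(y_true != y_pred)))
print("by group:", error_rate_by_group(y_true, y_pred, group))
```

Here the overall error rate is about 17%, but it is 0% for one subgroup and about 33% for the other, echoing in miniature the 0.8% versus 34.7% gap reported in the study.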
Governance
Over the past few years, as algorithmic bias has worsened, governments and regulators have started to pay attention and have tried to bring the situation under control. However, because algorithmic bias is complex and variable, developing comprehensive and effective regulatory policies is not easy, although many countries and organisations have begun to take measures. The targeting of these policies still needs improvement, but governments are clearly attending to the negative effects of algorithmic bias. The European Commission has released a white paper, On Artificial Intelligence – A European Approach to Excellence and Trust (2020), which proposes three core values for development and four priority development agendas. Among them, the core value of trust emphasises that the development of AI must be in line with EU values and norms and must improve the division of responsibilities through transparency. In addition to promoting AI development, the white paper also stresses the protection of human rights, including human dignity, diversity, inclusion, non-discrimination, and privacy and personal data protection (Litvinets, 2020). The U.S. is also developing a series of regulations that bear on algorithmic bias. For example, California has passed the California Consumer Privacy Act (CCPA) (2018), which gives consumers the right to know what personal data companies collect about them and how it is used, providing the public with a better understanding of how their data is processed and analysed. Meanwhile, the UK Information Commissioner’s Office (ICO) has published updated guidance on AI and data protection (2023), further encouraging data processors to consider factors such as fairness, transparency, and interpretability in the design and use of algorithms.
In addition, several international organisations and agencies have begun to address algorithmic bias. For example, the United Nations has developed the Principles for the Ethical Use of Artificial Intelligence (2022), which include a principle of fairness intended to eliminate algorithmic bias and unfairness. In short, the development and implementation of policies to regulate algorithmic bias has become a common concern of countries and institutions. For the moment, from a policy perspective, mainstream governance of algorithmic bias concentrates on two areas: educating Internet companies and protecting the general public as consumers. Principles and guidelines are constantly being updated, but there is still debate about what constitutes “ethical AI” and what ethical requirements, technical standards, and best practices are needed to achieve it. Academic discussion of AI ethics has focused on concepts including transparency, justice and fairness, non-maleficence (non-harm to humans), responsibility, and privacy; while there is no single ethical principle governing the development of algorithms, more than half of current algorithmic guidelines refer to these five principles (Jobin et al., 2019). Not surprisingly, global policy development on algorithmic governance has expanded around them, with a particular emphasis on transparency, justice, and fairness (Jobin et al., 2019). The current worldwide consensus on the development of algorithmic technologies is that AI may exacerbate social inequalities and trigger more serious social conflicts if justice and equity are not adequately considered.
Conclusion
In the digital era, algorithms are used in all aspects of society, and as a new social force they have become an important driver of social change. We cannot escape the social problems brought by technological development; we can only actively respond to and prevent them to reduce their negative impact. Algorithms are created by people, and the discrimination present in human society inevitably extends, to some degree, into the algorithmic world. Unlike in the human world, algorithmic discrimination is more insidious: the social impact of data generated through repeated calculations is magnified exponentially, and once formed, it is difficult to eliminate. Encouragingly, although the negative effects cannot be ignored, the cases of algorithmic bias discussed in this post have been appropriately addressed. Both government regulators and Internet companies are paying more attention to algorithmic bias, and the social governance of algorithmic bias has begun the search for better solutions for its future development.
References:
Allyn, B. (2020, June 24). ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man. National Public Radio. Retrieved from https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig.
Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. SSRN Electronic Journal, 671–673. Retrieved from https://doi.org/10.2139/ssrn.2477899
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. PMLR, 77–91. Retrieved from https://proceedings.mlr.press/v81/buolamwini18a.html
California Consumer Privacy Act (CCPA). (2018). Retrieved 10 April 2023, from https://oag.ca.gov/privacy/ccpa
European Commission. (2020). White Paper on Artificial Intelligence: A European approach to excellence and trust. Retrieved from https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf
Information Commissioner’s Office. (2023). Guidance on AI and data protection. Retrieved 10 April 2023, from https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. Retrieved from https://doi.org/10.1038/s42256-019-0088-2
Litvinets, V. (2020, June 19). A summary of the European Commission White Paper on artificial intelligence - A European approach… Medium. Retrieved April 10, 2023, from https://medium.com/@litvinets/a-summary-of-the-european-commission-white-paper-on-artificial-intelligence-a-european-approach-d386c4b9dce8
Noble, S. U. (2018). A society, searching. In Algorithms of Oppression: How search engines reinforce racism (pp. 15–63). New York: New York University Press.
Principles for the Ethical Use of Artificial Intelligence in the United Nations System. (2022). Retrieved 10 April 2023, from https://unsceb.org/sites/default/files/2022-09/Principles%20for%20the%20Ethical%20Use%20of%20AI%20in%20the%20UN%20System_1.pdf
Weber, J., & Dickerson, M. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. International Business Times. Retrieved from https://link-gale-com.ezproxy.library.sydney.edu.au/apps/doc/A557542740/ITOF?u=usyd&sid=bookmark-ITOF&xid=d26910ef