Algorithmic discrimination in the context of Internet cultural governance

Jin Yang, 10/04/2023

In recent years, AI-driven algorithms have become the core engine of major Internet platforms, playing a vital role in improving service quality, making public life more convenient, growing the digital economy, and optimising the dissemination of digital information. However, their shortcomings in transparency, fairness, and security produce irrational outcomes that challenge Internet cultural governance and may harm the public's legitimate rights and interests as well as social equity. The potential governance challenges posed by artificial intelligence and algorithms have therefore become a focus of attention across society.

Image: Marc A Tasman

An algorithm, in this context, is a process that automatically evaluates and reasons about scattered data. Algorithmic discrimination refers to biased decision-making that emerges when machine learning rests on faulty assumptions.

As Žliobaitė (2017) said, “It seems that what we see in real life is that artificial intelligence and algorithms can often make better decisions than humans, which also leads people to trust and rely on them too much”.

Is this really the case? Obviously, the answer is no.

We have already embedded artificial intelligence and algorithms in many areas of life, such as bank loan assessment and advertising placement. Yet in some services the picture is far less reassuring.

Manifestations of algorithmic discrimination

Firstly, because machine learning often lacks an information-filtering mechanism, AI chatbots can reproduce discriminatory speech. AI chatbots are commonplace in today's increasingly developed artificial intelligence landscape. They learn from interactive conversations, store what they learn in order to simulate human dialogue, and then converse with users; they can also help users record daily tasks and search for information. But once a chatbot learns discriminatory content, it reproduces that discrimination in its own output. An AI chatbot developed in South Korea, for example, was designed to learn from interactive scenarios so that it could hold real-world conversations with users. Unexpectedly, in those interactions it began publishing remarks that discriminated against disabled people, pregnant women, homosexual people, and other groups, remarks that users themselves had taught it during those very interactions. The promotion and use of AI chatbots will only increase. Whether the speech they produce is positive bears directly on users' experience and on their physical and mental health; at the same time, the algorithmic discrimination such bots propagate can deepen discrimination in society at large.
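The failure mode above comes down to training on unfiltered user input. A minimal sketch of why an input filter matters, where the `EchoChatbot` class and the keyword blocklist are invented for illustration (a real system would use a trained toxicity classifier, not a keyword list):

```python
import re

# Hypothetical blocklist; placeholder strings stand in for slurs.
BLOCKLIST = {"slur1", "slur2"}

class EchoChatbot:
    """Toy chatbot that learns candidate replies verbatim from user input."""
    def __init__(self, filter_input=False):
        self.memory = []
        self.filter_input = filter_input

    def learn(self, utterance):
        words = re.findall(r"\w+", utterance.lower())
        if self.filter_input and any(w in BLOCKLIST for w in words):
            return False  # reject flagged training data
        self.memory.append(utterance)  # absorbed; may be echoed later
        return True

unfiltered = EchoChatbot(filter_input=False)
filtered = EchoChatbot(filter_input=True)
for msg in ["hello there", "slur1 people are bad"]:
    unfiltered.learn(msg)
    filtered.learn(msg)

print(len(unfiltered.memory))  # 2: the biased utterance was absorbed
print(len(filtered.memory))   # 1: the flagged utterance was rejected
```

Without the filter, everything users say becomes future output; the bias enters at the learning step, not the generation step.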

Second, the phenomenon of big data "killing the familiar" reflects the profit-seeking ideology of Internet platforms. A typical form of algorithmic discrimination in the big data era is platforms "killing" their old users. The term refers to platforms using big data mining algorithms to collect information about users, build "portraits" of them, and then quote different prices to different groups of consumers, in order to maximise sales or attract new users. In essence, this behaviour screens and splits users into groups through an algorithm, forming a model of multiple consumer groups served through a single interface. "The familiar" here are the loyal, established users about whom the platform's data mining algorithms hold the most complete information.
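The "portrait, then differentiated quote" mechanism described above can be sketched as follows; the field names, thresholds, and markups are all invented for illustration, not taken from any real platform:

```python
# Hypothetical pricing sketch: profile users, then quote by segment.
BASE_PRICE = 100.0
MARKUP = {"loyal": 1.10, "new": 0.95}  # assumed: loyal users pay more

def segment(profile):
    """Crude 'portrait': loyal users who rarely compare prices."""
    if profile["orders"] > 50 and not profile["uses_price_comparison"]:
        return "loyal"
    return "new"

def quote(profile):
    return round(BASE_PRICE * MARKUP[segment(profile)], 2)

old_user = {"orders": 120, "uses_price_comparison": False}
new_user = {"orders": 2, "uses_price_comparison": True}
print(quote(old_user))  # 110.0
print(quote(new_user))  # 95.0
```

The same item, through the same storefront, yields two different prices; each user sees only their own quote, which is what makes the practice hard to detect.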

Image: Ethan Wilk

There are many examples of this in China. Meituan is a Chinese Internet shopping platform. According to a survey by the China Consumers Association, the platform uses big data technology to conduct in-depth analysis of customer information. The survey found that, for the same commodity or service, the price Meituan offered to white-collar workers or young people was roughly 10% higher than the price offered to blue-collar workers or senior citizens.

Lastly, lending discrimination on financial loan platforms is an extension of discrimination in society. Even as inclusive finance is in the ascendancy, discrimination, including racial discrimination, remains widespread in the financial sector. Online lending platforms remove face-to-face contact between financial institutions and users, which ought to make lending fairer. Yet in today's era of deep integration between big data mining algorithms and the finance industry, algorithms can discover users' private information all the more easily and use it to set different loan limits and interest rates. As a result, the discriminatory behaviour of online lending platforms is nearly indistinguishable from that of offline lending institutions.


The Quicken Loans case is a good example. Quicken Loans is one of the largest online lending platforms in the United States. Yet when borrowers of colour apply for loans on the platform, the interest rate they pay is still 5.3 basis points higher than that paid by white borrowers, barely different from the 5.6 basis points more that applicants pay when borrowing from offline lending institutions (Nova, 2018).
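To put the figure in perspective: a basis point is 0.01 percentage points of annual interest, so on a loan of, say, $300,000 (an assumed figure, chosen only for illustration), the 5.3-basis-point gap works out as follows:

```python
# Worked example: what a 5.3 basis-point rate gap costs per year.
# 1 basis point = 0.0001 as a fraction of the principal.
principal = 300_000          # assumed loan size for illustration
gap_bps = 5.3                # the disparity reported in the study
extra_rate = gap_bps / 10_000
extra_per_year = principal * extra_rate
print(round(extra_per_year, 2))  # 159.0 extra dollars of interest per year
```

A small-sounding gap in rates still compounds into real money over the life of a loan, and it falls systematically on one group.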

“According to the research, this kind of phenomenon is not actually caused by the reputation of the applicant, but by the algorithmic discrimination,” said Robert Bartlett, a law professor at the University of California, Berkeley and a co-author of the study.

Causes of Algorithmic Discrimination

Given the manifestations above, this section analyses the causes of algorithmic discrimination: the flaws inherent in algorithmic reasoning, the discriminatory thinking of algorithm designers, data and technical flaws in the algorithm design process, and the information asymmetry created by algorithmic "black boxes".

To the public, an artificial intelligence algorithm is just an output tool: people see only its final output on the platform. This was the case in the examples above. Customers enter the required personal information through the platform and receive a result with one click. For the most part they are convinced by what the algorithm shows them; they can hardly detect its biases and flaws.

The logical process by which the algorithm turns input data into output decisions, however, is not disclosed to the outside world; this is the algorithmic "black box". Users can see only the results of the algorithm's execution, while key elements such as the data used and the analysis logic remain hidden inside it. This opacity makes algorithmic discrimination more covert. Moreover, algorithm designers hold an informational advantage through their command of big data, which makes it easier to treat users differently (Zuiderveen, 2018). Users, in turn, are unaware of one another's situations: in most cases each user sees only their own result, not the results other users receive, so they cannot even discover that they belong to a discriminated-against group (Favaretto et al., 2019).
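This is why detecting such discrimination requires pooled outcome data that no individual user holds. A minimal audit sketch on synthetic records (the group labels and outcomes are invented), computing the gap in approval rates between two groups, one simple fairness signal a regulator with pooled data could check:

```python
# Synthetic pooled records: (group, approved). No single user could
# assemble this view; each sees only their own outcome.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic-parity gap: difference in approval rates between groups.
parity_gap = approval_rate("A") - approval_rate("B")
print(parity_gap)  # 0.5 -- a large gap flags the system for review
```

A gap this size does not prove discrimination by itself, but it is exactly the kind of signal that stays invisible as long as outcomes are never pooled.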

An artificial intelligence algorithm relies mainly on the information and data fed into it, applying techniques such as deep learning and logical reasoning to analyse that input and produce what it computes to be the most reasonable decision.

From a mathematical point of view, an algorithm can be regarded as a computer program executing mathematical calculations, and so it appears to have a stable objectivity. Most of us would therefore consider it immune to human values or emotional factors. The reality is just the opposite.

This points to the subjective discriminatory thinking of algorithm designers. Designers carry their own cognitive biases, and prejudice remains pervasive in society today. Algorithms mirror human thought: when a designer holds biased ideas and builds that subjective will and implicit bias into the algorithm, algorithmic discrimination inevitably follows (Miron et al., 2021). Moreover, with the spread of big data science, designers can access users' private information more easily than ever, enabling more severe discriminatory behaviour.

This is an obvious conundrum for Internet cultural governance (Bigman et al., 2022). Because the process of algorithmic decision-making is extremely difficult to supervise, it is hard to judge the fairness, rationality, and reliability of the decisions an algorithm makes. We cannot even be sure whether the fault lies in the program or with its developer. Beyond the difficulty of supervising the computation itself, then, it is also difficult to assign responsibility.

In addition, some Internet platforms invoke "commercial secrets" to evade regulation. This both magnifies the hidden dangers of algorithms and further increases the difficulty of governing them.

Governance Proposals for Algorithmic Discrimination

Algorithmic discrimination seriously harms basic user rights and interests, and corresponding action is urgently needed. Building on the causes analysed above, this section proposes governance measures aimed at algorithm designers, at users, and at third parties such as governments, corporations, and the media.

The first is to strengthen risk prevention and control within the IT industry. First, practitioners' professional ethics must be strengthened. The industry should draw up a professional ethics guide and regularly organise practitioners to study, and be tested on, relevant regulations and professional ethics, deterring practitioners from devising discriminatory algorithms for profit and keeping biased ideas out of algorithm programming as far as possible (Issar & Aneesh, 2022). It should also establish employee evaluation and regular review systems, set up sound user-evaluation and complaint channels, develop reward and punishment rules, and pay particular attention to investigating and analysing user satisfaction. Second, standards must be set for data collection. Data is crucial to algorithms because of the sheer volume of information it carries. The industry should set standard rules for data collection for practitioners to follow when designing algorithms. Professionals must promptly record and describe the source, scope, and sample size of their data under these rules, with regular supervision and inspection to ensure comprehensive, consistent collection from diverse groups, so that discrimination does not arise from missing samples or uneven sampling.
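A data-collection standard of this kind could include an automated sampling-balance check before training. A sketch, where the group labels and the 20% threshold are assumed purely for illustration:

```python
from collections import Counter

def check_balance(samples, min_share=0.2):
    """Return groups whose share of the sample falls below min_share."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Synthetic sample: rural users make up only 10% of the data.
samples = [{"group": "urban"}] * 90 + [{"group": "rural"}] * 10
print(check_balance(samples))  # {'rural': 0.1} -- under-sampled group
```

Flagging under-represented groups before a model is trained is cheaper than auditing a deployed model for the discrimination that skewed sampling produces.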

Second, users' awareness of prevention must be raised. Above all, users should value the privacy of their personal information. With the rise of Internet technology, users' movement trajectories, phone brands, travel logs, and other everyday behavioural traces can be captured by big data algorithms, turning users into the "data providers" of algorithmic discrimination. Users therefore need to protect their privacy in daily life, strictly control apps' data-read permissions, and avoid having their data harvested by privacy-extracting algorithms (Cantero Gamito & Ebers, 2021).

Third, the government, the media, and other third parties must exercise joint oversight. First, the legal and regulatory system must be improved. China is governed by the rule of law, and binding legislation that restricts the IT industry is the most robust means of dealing with algorithmic discrimination. Because algorithms in the artificial intelligence era are used across many industries and fields, legal governance should regulate their scope of application, their methods, and the bottom line beyond which discrimination is prohibited. Algorithmic discrimination also involves many parties, including algorithm designers, merchants, and users; for each of these, the law should set out detailed behavioural norms and clarify responsibilities and obligations. Relevant departments should promulgate laws that regulate algorithm designers, establish a system of punishment for maliciously discriminatory algorithm design, and enforce it strictly so that users' legitimate rights and interests are protected.


References

Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2022). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General, 152(1), 4–27.

Cantero Gamito, M., & Ebers, M. (2021). Algorithmic governance and governance of algorithms: An introduction. In Algorithmic Governance and Governance of Algorithms: Legal and Ethical Challenges (pp. 1–22). Springer. https://doi.org/10.1007/978-3-030-50559-2_1

Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 1–27.

Issar, S., & Aneesh, A. (2022). What is algorithmic governance? Sociology Compass, 16(1), e12955.

Miron, M., Tolan, S., Gómez, E., & Castillo, C. (2021). Evaluating causes of algorithmic bias in juvenile criminal recidivism. Artificial Intelligence and Law, 29(2), 111–147.

Nova, A. (2018). Online lending hasn't removed discrimination, study finds.

Saurwein, F., Just, N., & Latzer, M. (2015). Governance of algorithms: Options and limitations. info, 17(6), 35–49.

Wilk, E. (2022). An old-fashioned economic tool can tame pricing algorithms.

Žliobaitė, I. (2017). Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery, 31, 1060–1089.

Zuiderveen, F. (2018). Discrimination, artificial intelligence, and algorithmic decision-making. Council of Europe.
