The challenge of walking a tightrope: how to find a balance between protecting individual privacy, optimizing algorithmic recommendations, and curbing hate speech.

Figure 1


Today, the Internet has swept across the globe. For people, especially the younger generation, it has penetrated every aspect of life: we shop on countless websites, use social media to communicate in real time, and share daily updates. Modern life without the Internet is almost unimaginable; it is convenient, fast, and vast in the information it offers. Yet alongside these benefits, the Internet age has brought some disturbing problems.

Have you ever experienced this scenario? When signing up for a new platform, you hastily scroll past the lengthy, text-heavy terms of use that dominate the screen, then tick the consent box without actually reading a word. Later, after mentioning a product to a friend in real life, you open a shopping app and find that very product on the front page, while social platforms eagerly recommend what is happening around you and surface your real-life friends who use the same service. This so-called personalized recommendation often provokes a sense of panic: where has our privacy gone? Who is taking our personal information?

On top of that, the hate speech widespread on the Internet can be equally disturbing. Discriminatory, offensive, and insulting content appears in the comments section of almost any platform, and verbal violence targeting gender, ethnicity, class, or other group identities is a dark side of the Internet that cannot be ignored.

So, who should protect our privacy and protect us from hate speech?

Before discussing this question in detail, several important concepts should be clarified:

Figure 2

What is privacy?

Privacy is a complex concept that spans many dimensions of information and has attracted a large body of theoretical work (Rössler, 2005; Nissenbaum, 2010). Although many scholars have explored how privacy is understood in different cultural contexts (Moore, 1984), to narrow the discussion this paper adopts the notion of online privacy as commonly used in the digital age: the extent to which personal data and browsing history are protected, particularly with regard to data collection, processing, and storage (Sushko, 2021).

Figure 3

What is Hate Speech?

Definitions of hate speech vary. According to the Encyclopedia of the American Constitution and several scholars, hate speech is speech directed at individuals or groups that attacks specific attributes such as ethnicity, religion, or gender (Nockleby, 2000; Davidson, 2017; Parekh, 2012, p. 40). Davidson et al. argue that its purpose is to derogate, humiliate, or insult group members, while Parekh emphasizes its potential to encourage, provoke, or incite hatred against a specific group.

The balance between personal privacy protection and algorithmic recommendations

It is undeniable that big data and artificial intelligence now occupy center stage in technology. As digital technology evolves, more and more websites and platforms use algorithms to infer users' needs in an attempt to maximize the user experience, so collecting users' personal information has become all but inevitable. Because personal data sits at the core of recommendation algorithms, its collection can damage personal privacy: algorithms use personal data as input to help make decisions, but the same practice can harm the people the data describes.

For example, recommendation algorithms may suggest specific goods or services based on a user's browsing history, search history, and purchase behavior. While such recommendations do enhance the shopping experience, the collected data may also be put to inappropriate use: some companies use it for advertising and marketing or even sell it to third parties (Acquisti, Brandimarte, & Loewenstein, 2015). Research has shown that the recommendation algorithms of online retailers such as Amazon draw on private information, including demographic data, addresses, and economic status, to make purchase recommendations, which can lead to unwanted information leakage and privacy breaches. In a 2017 study, Li et al. demonstrated that the items on Amazon wish lists can reveal private personal information: using wishlist items alone, they predicted a user's gender with more than 80% accuracy (Li et al., 2017). Similar risks arise in other recommendation systems, such as social networks and search engines.
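To make the leakage concrete, here is a purely illustrative toy sketch, not the actual method of Li et al. (2017): even a naive tally of (invented) item-to-demographic associations over a wishlist already produces a demographic guess, which is exactly the kind of inference the study warns about.

```python
# Illustrative toy only: NOT the method of Li et al. (2017).
# The item-category hints below are invented for this sketch.
CATEGORY_HINTS = {
    "makeup": "F", "handbag": "F", "yoga mat": "F",
    "razor": "M", "power drill": "M", "gaming mouse": "M",
}

def naive_gender_guess(wishlist):
    """Tally hints per class; return the majority class, or None on a tie."""
    counts = {"F": 0, "M": 0}
    for item in wishlist:
        hint = CATEGORY_HINTS.get(item.lower())
        if hint:
            counts[hint] += 1
    if counts["F"] == counts["M"]:
        return None  # no signal either way
    return max(counts, key=counts.get)

if __name__ == "__main__":
    # Two "F" hints outvote one "M" hint, so the guess is "F".
    print(naive_gender_guess(["Makeup", "Handbag", "Razor"]))
```

The point of the sketch is that no single item is sensitive on its own; the privacy risk comes from aggregating many weak signals, which is what real recommendation pipelines do at scale.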

Figure 4

In March 2018, for example, Facebook was exposed for providing user data to the analytics firm Cambridge Analytica for political marketing and election interference. Cambridge Analytica reportedly analyzed users' profiles, friendships, activity logs, and likes to predict and influence voter behavior, building personalized political advertising and advocacy strategies for its clients, among them the campaign team of U.S. President Donald Trump and the UK's pro-Brexit camp. This far-reaching and shocking incident sparked worldwide concern and discussion about data privacy and security, as well as controversy over how the use of personal data by Internet companies and data analytics firms should be regulated.

These examples suggest that we need a balance between algorithmic recommendation and personal privacy protection, one that keeps recommendations fair while safeguarding privacy. Several initiatives are possible: platform policies should set strict limits to ensure that the collection and use of personal data are ethical and legal, including restricting the scope of data collection and its retention periods, encrypting and anonymizing the data involved in the recommendation process, and strengthening regulatory oversight of recommendation algorithms.

The balance between the Problem of Hate Speech and Free Speech

As users of the Internet, we all fear hate speech, yet how regulators should define it, whether to review it with humans or algorithms, and how far to limit it all deserve careful consideration. Overly restrictive measures may be seen as undermining free speech, and this has sparked much controversy. As Flew argues, a balance should be struck between the principle of free speech and the regulation of harmful content on the Internet (Flew, 2021). Balancing online hate speech against freedom of expression therefore requires weighing multiple factors to ensure a just and reasonable approach.

On the one hand, initiatives to over-censor online hate speech may undermine freedom of expression. 

Figure 5

For example, #MeToo is a global movement, sparked on social media, to expose and combat sexual harassment and sexual assault. It spread to mainland China in 2018, surfacing a number of incidents on media platforms and triggering heated battles between netizens. In April 2018, the Chinese government ordered the #MeToo topic blocked, and related posts and topics have since been deleted or banned on Chinese social media platforms, including Weibo and WeChat. As a result, some users turned to the Chinese homophone 米兔 ("rice bunny") to keep spreading the topic.

Figure 6

Such workarounds let users bypass algorithmic censorship and avoid being automatically filtered, but the suppression of the topic itself damaged efforts to combat sex crimes and to give women the courage to protect themselves. The episode suggests that over-reliance on algorithmic censorship can lead to excessive censorship and, in turn, infringement of freedom of expression.

On the other hand, an over-reliance on freedom of expression may allow hate speech and other inappropriate expression to proliferate online. Such speech can produce social division, racial discrimination, and other harms to victims and to social justice. As Gorenc argues, online hate speech is a form of violence that persists despite efforts to combat it, and it should not be treated as synonymous with freedom of expression (Gorenc, 2022).

Combining these two aspects, we need the right perspective and approach when balancing online hate speech against freedom of expression. Policymakers can supervise platforms to ensure that moderation standards meet social justice and ethical norms. In addition, combining manual and algorithmic review is a wise choice: human gatekeepers can check the algorithm's decisions, avoiding both over-reliance on automated review and excessive restriction of free expression.
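The "algorithm flags, humans decide" pipeline proposed above can be sketched in a few lines. The word lists and thresholds here are invented placeholders for illustration; real moderation systems use trained classifiers and much richer context, not keyword lists.

```python
# Illustrative sketch of combined algorithmic + manual review.
# The term sets below are invented placeholders, not a real lexicon.
FLAG_TERMS = {"slur1", "slur2"}       # high-confidence violations
REVIEW_TERMS = {"hate", "attack"}     # ambiguous terms: route to a human

def triage(post: str) -> str:
    """Return 'remove', 'human_review', or 'allow' for a post."""
    words = set(post.lower().split())
    if words & FLAG_TERMS:
        return "remove"          # clear-cut case: removed automatically
    if words & REVIEW_TERMS:
        return "human_review"    # uncertain case: a gatekeeper decides
    return "allow"               # default: free expression preserved
```

The design point is the middle branch: instead of forcing the algorithm to choose between over-censoring and under-censoring ambiguous posts (such as "I hate Mondays"), it defers them to human gatekeepers, which is exactly the balance the paragraph argues for.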

The balance between personal privacy and hate speech

It may be surprising to hear, but an overemphasis on personal privacy can indeed allow hate speech and other inappropriate speech to flourish on anonymous or non-real-name platforms. Scholars have shown that online hate is partly motivated by the feelings of privacy, safety, and security associated with virtual communication: posters do not have to fully own their statements or face the specific people they target (Kilvington, 2021). On non-real-name platforms, those who post can hide their true identities and are more easily shielded, which makes it difficult for victims to hold them accountable and lets hate speech spread more freely.

Figure 7

The anonymous social platform Yik Yak is one example. The app, which took U.S. campuses by storm between 2013 and 2017, let users post short messages anonymously and upvote or downvote other users' posts. Because of its anonymity, Yik Yak was often used to spread hate speech and bullying, especially in and around schools, with serious consequences for students' development and their physical and mental health. In 2015, the hate speech and bullying on the platform finally attracted widespread attention, and some schools banned the app in an attempt to protect students. Although Yik Yak took steps to limit abuse, such as requiring users to enter their school's name to restrict access, these measures did not fully solve the problem. In 2016 the platform began shutting down some non-school communities, limiting anonymous replies to posts, and running a 24-hour response team to handle abuse reports. Even so, the problems persisted, and Yik Yak shut down in 2017 before the app was relaunched in 2021.

This example shows that hate speech and bullying on anonymous social platforms need stricter regulation. Platforms can take measures to limit abuse, but users will always look for loopholes in platform rules, so such measures struggle to address the root cause. Government and society should therefore develop broader measures to strengthen the regulation of online speech, curb the breeding of hate speech, and protect victims. Policies such as real-name registration of personal information can reduce the volume, falsity, and malice of hate speech while making it easier for victims to hold speakers accountable. In addition, according to Gagliardone et al., counter-speech is considered an effective way to prevent the potential harm of hate speech; the report suggests four types of initiative to combat the rise and spread of hateful messages: monitoring and engaging in discussions about hate speech, uniting civil society, influencing private companies through lobbying, and launching media and information literacy campaigns (Gagliardone, 2015). At the same time, education and public awareness of online abuse and hate speech should be strengthened.


In the age of the Internet, we face the challenge of walking a tightrope: finding a balance among protecting personal privacy, optimizing algorithmic recommendations, preserving free speech, and curbing hate speech. Just as tightrope walking demands a steady pace and good balance, these issues demand prudent policy-making, corporate responsibility, and public engagement to ensure a safe, fair, and harmonious online environment. In this high-wire act, we must weigh all of these concerns together, rather than letting excessive focus on any one of them tip the whole act over, if we are to achieve a truly harmonious online environment in which everyone wins.


Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509-514.

Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. Proceedings of the International AAAI Conference on Web and Social Media (ICWSM).

Flew, T. (2021). Regulating platforms. Polity.

Gagliardone, I. (2015). Countering online hate speech. UNESCO.

Gorenc, N. (2022). Hate speech or free speech: an ethical dilemma? International Review of Sociology, 32(3), 413–425.

Kilvington, D. (2021). The virtual stages of hate: Using Goffman's work to conceptualise the motivations for online hate. Media, Culture & Society, 43(2), 256-272.

Li, Y., Zheng, N., Wang, H., Sun, K., & Fang, H. (2017). A measurement study on Amazon wishlist and its privacy exposure. 2017 IEEE International Conference on Communications (ICC), 1–7.

Moore, B. (1984). Privacy: Studies in social and cultural history. Armonk; London: M.E. Sharpe, Inc.

Nockleby, J. T. (2000). Hate speech. In Encyclopedia of the American Constitution (Vol. 3, pp. 1277-1279).

Parekh, B. (2012). Hate speech. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2012 ed.). Stanford University.

Rössler, B. (2005). The Value of Privacy. Cambridge: Polity.

Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books.

Sushko, O. (2021). What Online Privacy Is and Why It’s Important.
