Why is finding a balance between protecting users from hate speech and fostering diverse viewpoints so challenging?

Image source: (Where’s the Line between Hate and Freedom of Speech?, n.d.)

As society continues to evolve, online platforms have become a major battleground between personal expression and the spread of harmful information. On the one hand, these platforms give people a powerful channel to express themselves and communicate; on the other, they face challenges in maintaining the public interest and protecting users from harm (Alkiviadou, 2018). With over 2.6 billion monthly active users, Facebook is currently the largest social network globally (Statista, 2022). However, it has also faced the most criticism for hate speech and online harassment. In fact, 75% of people who reported being harassed online said they experienced it on Facebook, according to the 2021 and 2022 Online Hate and Harassment: The American Experience reports. Other platforms such as Twitter, YouTube, Instagram, WhatsApp, Reddit, Snapchat, Discord, and Twitch have also drawn reports of harassment or hate speech, but to a lesser extent than Facebook (Online Hate and Harassment: The American Experience 2021, 2022). With the rapid development and globalization of online platforms, finding a balance between protecting users from hate speech and fostering diverse viewpoints has become a major issue that needs urgent attention. However, tackling this problem is complex and challenging, as it involves weighing a range of factors.

Definition of Hate Speech and Free Speech

First of all, defining hate speech and distinguishing it from other forms of speech is a complex and subjective task, and different platforms use somewhat different definitions in their moderation. For example, Facebook defines hate speech as “a direct attack against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease…” (Hate Speech | Transparency Center, n.d.). Twitter similarly states in its help centre, where it explains how it addresses hateful conduct, that “You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” (Twitter’s Policy on Hateful Conduct | Twitter Help, 2023). Free speech, on the other hand, is described in Article 19 of the United Nations’ Universal Declaration of Human Rights:

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

United Nations’ Universal Declaration of Human Rights

Definition Gaps

However, most of these definitions are contested. Both Facebook and Twitter use the words “directly attack”; does that mean indirectly attacking other people does not count as hate speech, even when it is disrespectful and harmful? These definitions simply do not cover all targets’ experiences of hateful content. Certain types of content can make people feel deeply disrespected and helpless, and they might well call it hateful, yet it may fall outside Facebook’s definition of hate speech and so remain on the platform with no action taken to address it (Sinpeng et al., 2021). If the internet is filled with hate speech, people start feeling that they cannot share their true thoughts and opinions without being attacked or harassed for them. This can produce a chilling effect on free expression, where people censor themselves and avoid online discussions altogether. As a result, we risk losing diverse perspectives and opinions in the public conversation.

Cultural differences in understanding of hate speech and free speech

Defining hate speech and differentiating it from other forms of speech presents an additional challenge, as different individuals and cultures may hold differing views on what constitutes hate speech and free speech (Sinpeng et al., 2021). This is especially true across languages: the concepts of “hate speech” and “free speech” may have no exact translation in certain languages, and their legal implications may not align with the English-language definitions. For instance, Sinpeng’s study reveals that the Bisaya language commonly uses the term “panghimaraot” to describe hate speech, which translates to ‘cursing’ in English (Sinpeng et al., 2021). This can lead to misinterpretations of what hate speech means and how it differs from free speech, making instances of it harder to identify and address, which may further exacerbate the problem and hinder efforts to combat it effectively. Similarly, while free speech is generally understood as the right to express oneself without censorship or restraint, the specific scope and limits of this right vary widely. Some people and societies view free speech as an absolute right that should be protected at all costs, even if that means allowing hate speech, incitement to violence, or other forms of harmful speech. Others see it as a conditional right that must be balanced against other important values, such as preventing harm to individuals or communities. Furthermore, hate speech and free speech are regulated very differently across countries (Sinpeng et al., 2021). Where legal definitions of hate speech are inconsistent there is a lack of clarity, compounded by the fact that legal provisions related to hate speech and free speech are not always specific. This can create confusion about what content is considered harmful and legally punishable as hate speech, and what content can be encouraged as free expression of diverse viewpoints.

The lack of universally agreed-upon definitions of hate speech and free speech can make it difficult to navigate issues of online speech and content moderation. It can also lead to disagreements and tensions between groups and individuals who hold differing views on the appropriate scope and limits of each. People may find themselves walking a fine line in what they can and cannot say, as they may not be entirely sure what constitutes hate speech.

Moderation Challenges

Therefore, Facebook implemented a multi-faceted hate speech detection and moderation process involving several parties: its internal content regulation teams, reports from users, local trusted partners, and page administrators. However, according to the study, reports from users and page administrators are often ignored by Facebook’s classifiers. People grow tired of reporting inappropriate content, a phenomenon described as “reporting fatigue”, because they feel their reports will not actually change how Facebook moderates content. This undermines the integrity and utility of the content-flagging process (Sinpeng et al., 2021).

Also, page administrators, the key gatekeepers, are largely volunteers with no professional expertise in moderation or community management; most were not aware of either Facebook’s community standards or its training resources. Furthermore, in some developing nations, Facebook’s community standards, the guidelines users are required to follow, were not even translated into the local language (Zakrzewski et al., 2021). This overly broad and inconsistent moderation can disproportionately harm marginalized communities in those countries. Facebook was already well aware of this situation; as its civic integrity lead Samidh Chakrabarti put it: “The painful reality is that we simply can’t cover the entire world with the same level of support.” (Zakrzewski et al., 2021)

The painful reality is that we simply can’t cover the entire world with the same level of support.

Samidh Chakrabarti

Challenges for social media in balancing free speech and hate speech

We need to be aware that the weaker the moderation, the more vulnerable the platform becomes to abuse by bad actors and authoritarian regimes (Zakrzewski et al., 2021). But excessive content moderation can also cause problems, such as limiting freedom of speech. The controversy surrounding Twitter’s decision to ban former US President Donald Trump from its platform in January 2021 reflects this tension (Clayton, 2021). The ban was based on Trump’s alleged incitement of violence and spreading of misinformation on the platform. While many supported Twitter’s decision as a necessary measure to combat hate speech and disinformation, others criticized it as an infringement on Trump’s freedom of speech. The case sparked a wider debate about the balance between content moderation and free speech on social media platforms. Some types of speech are more subjective still: some people may argue that certain political views or opinions are hate speech, while others argue that such speech is protected under the First Amendment, which safeguards freedom of speech while allowing certain limitations, including on hate speech that directly incites violence or poses a serious risk of harm (The White House, n.d.). So how do we draw the line between hate speech and free speech?


Prioritising transparency

According to the study, laws designed to regulate hate speech, including those related to cybersecurity or religious tolerance, can be used by governments to silence political dissent and limit freedom of expression (Sinpeng et al., 2021). In other words, instead of protecting individuals from hate speech, these laws can be misused to suppress criticism of the government or to limit the expression of certain views or ideas. Hence the growing need to prioritise transparency. In December 2018, Facebook and Google agreed to pay $455,000 to settle a lawsuit alleging that they failed to keep records of who paid for political advertising on their platforms, as required by state law (Sanders, 2018). The case highlights the issue of political advertising on these platforms and the need for transparency in identifying who is paying for such ads. In another controversial move, Facebook banned new political ads solely during the week leading up to the November 2020 US presidential election (Overly, 2020). The social media giant’s decision was widely criticized for not extending the ban to individual users’ posts or content, allowing false and misleading political information to continue circulating on the platform. In response, a number of organizations and advocacy groups called for greater transparency and accountability from Facebook regarding its content moderation policies and procedures (Overly, 2020). Some argued that the company should be required to publicly disclose its algorithms and decision-making processes, while others called for independent audits of Facebook’s content moderation practices.

Legal Restrictions

To balance hate speech and free speech, legal restrictions on hate speech must be narrowly tailored to address specific harms and must be enforced in a way that is consistent with constitutional protections for free speech. In countries that carry a heavy burden of hate speech, such as some developing nations, specific laws addressing hate speech are often lacking. Instead, some governments rely on other laws that address it indirectly, such as cybercrime law, telecommunications law, and safe spaces laws. Yet even these laws are sometimes employed by local governments to suppress political opposition and inhibit freedom of speech (Sinpeng et al., 2021).

CSR & User Empowerment

At the same time, balancing hate speech and free speech on online platforms requires considering the roles of corporate social responsibility and user empowerment. Social media platforms such as Facebook and Twitter may act to improve their business image and strengthen their regulatory frameworks. However, achieving this is challenging, as corporate social responsibility can be undermined by conflicts of interest and regulatory gaps, because in the end these are still businesses that aim to make profits. User empowerment, meanwhile, may rely too heavily on users’ ability and willingness to respond to malicious behaviour: if “reporting fatigue” sets in, the balance between hate speech and free speech collapses.

Broader Social Factors

Furthermore, the issue of balancing freedom of speech with restrictions on hate speech is not limited to content moderation, transparency and accountability, legal measures, CSR, and user empowerment; it also involves broader social factors. Society should raise awareness of the harm caused by hate speech and take steps to promote inclusivity, respect for diversity, and civil discourse. One strategy society could implement is education: teaching people about the harm caused by hate speech and how it can perpetuate discrimination and violence, through public campaigns, school curricula, and online resources. Society could also encourage counter-speech as a means of combating hate speech and as a positive expression of free speech. This could involve promoting positive messages, challenging hateful rhetoric, and providing additional support to those targeted by hate speech.


Ultimately, the key to balancing the need to protect users from harm with the need to foster open dialogue and diverse viewpoints is a nuanced and multi-faceted approach. Online platforms must be willing to invest in moderation, prioritize transparency and accountability, empower users, encourage civil discourse, and continuously evolve their policies and practices to adapt to changing circumstances. Through in-depth analysis and discussion, we can hopefully find the best solutions to make the online community a safer, fairer, and more welcoming place for everyone.


Alkiviadou, N. (2018). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35. https://doi.org/10.1080/13600834.2018.1494417

Clayton, J. (2021, January 9). Twitter “permanently suspends” Trump’s account. BBC News. https://www.bbc.com/news/world-us-canada-55597840

United Nations. (2019). Hate speech versus freedom of speech. United Nations. https://www.un.org/en/hate-speech/understanding-hate-speech/hate-speech-versus-freedom-of-speech

Online Hate and Harassment: The American Experience 2021. (2022, May 3). Www.adl.org. https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2021

Overly, S. (2020, September 3). Facebook bans new political ads in the week before Election Day. POLITICO. https://www.politico.com/news/2020/09/03/facebook-bans-political-ads-election-day-408255

Sanders, E. (2018, December 18). Facebook and Google pay $455K to settle political ad lawsuits in Washington State. The Stranger. https://www.thestranger.com/news/2018/12/18/37206156/facebook-and-google-pay-nearly-450000-to-settle-political-ad-lawsuits-in-washington-state

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.play

Statista. (2022, April 28). Facebook users worldwide 2020. Statista. https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#:~:text=How%20many%20users%20does%20Facebook

The White House. (n.d.). The Constitution. The White House. https://www.whitehouse.gov/about-the-white-house/our-government/the-constitution/#:~:text=The%20First%20Amendment%20provides%20that

Twitter’s policy on hateful conduct | Twitter Help. (2023, April). Help.twitter.com. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy#:~:text=We%20consider%20hateful%20imagery%20to

Zakrzewski, C., Vynck, G., Masih, N., & Mahtani, S. (2021, October 24). How Facebook neglected the rest of the world, fueling hate speech and violence in India. Washington Post. https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/

Where’s the Line Between Hate and Freedom of Speech? (n.d.). St Paul’s Girls’ School. Retrieved March 30, 2023, from https://spgs.org/ipaulina/wheres-the-line-between-hate-and-freedom-of-speech/
