The Challenges of Balancing Freedom of Speech and Online Harms

It is now difficult to imagine a world without social media. The ubiquity of the internet and computing technology has enabled us to interact in a virtual world. We no longer need to meet physically to connect instantaneously: the digital world gives us real-time responses and feedback to the thoughts, queries, and interactions we share online. Although the digital environment has made human interaction easier by shrinking the world into a global village, this advancement has come with unprecedented challenges (Sinpeng, Martin, Gelber, & Shields, 2021). One of the main challenges is hate speech and online harm. Platforms such as Facebook face an ongoing hate speech problem of considerable scope and scale. In a society that values free speech and enshrines this right in its laws and constitutions, a conversation about hate speech on online platforms is long overdue as we try to strike a delicate balance between freedom of speech and protecting people from hate speech and other online harms.

Image 1: Online interactions on Facebook

The Internet and Freedom of Speech

Looking back at the advent of the internet, its creation promised society a platform where people could freely communicate and express their opinions and views. Even though mass media such as TV, radio, and print engaged millions at a time, the communication was one-way and controlled by those with the means and resources to disseminate information: governments, media houses, and corporate entities. In the internet age, by contrast, information moves far more freely, as anyone with internet access can be an author or an audience member, depending on the context.

John Perry Barlow's 1996 Declaration of the Independence of Cyberspace gives perhaps the best description of the motivation behind the early internet. The declaration's central vision of free speech online is a world where individuals can express their personal beliefs and ideas without being censored, coerced into silence, or forced to conform to particular views. Social media platforms such as Facebook have tried to maintain this spirit by giving individuals the opportunity to communicate and interact freely with the masses. However, this has proved challenging, since the limits societies place on freedom of speech remain relevant online.

Although constitutions vary across countries, limitations are generally placed on freedom of speech to ensure that the right is not misused or abused. Australia is no different: its law promotes free expression but also imposes limitations to protect the public from hate speech and other online harms. Regulating the online space has nevertheless been challenging, as governments seek to preserve freedom of speech while protecting people from online harm.

Free speech is a double-edged sword, particularly when everyone has the power to create and share content. In the internet era, people can publish on public platforms under their real identities, or they can set up pseudonymous and anonymous accounts that are effectively untraceable. In either case, a user may choose to express negative opinions, and such expression would still be protected under freedom-of-speech provisions. Positive opinions affect their audience positively, while negative ones have the opposite effect. Rubbing someone the wrong way or offending people with personal opinions is therefore common in digital spaces.

In addition, people can spread misinformation to influence or change the opinions of others on the internet. Events such as the American elections and the COVID-19 pandemic were a litmus test for freedom of speech on the internet and its contribution to misinformation. When audiences fail to fact-check a claim or opinion made in digital spaces, they risk being misled. The comfort that those looking to abuse free speech draw from being able to push an agenda through fake accounts has exacerbated the problems of hate speech and other online harms. Even though the internet has been instrumental in helping us achieve the goal of free speech, it has also created complex challenges, such as the prevalence of hate speech, misinformation, and other ills with devastating consequences.


Digital platforms such as Facebook continue to host a full range of public conversations in the form of broadcasts, videos, text, pictures, and even emojis. These discussions have been instrumental in reflecting human diversity in a world that interacts more freely. Online conversations represent a range of human experiences, from the humorous to the entertaining. People can also engage in serious and sensitive discussions, such as political or religious discourse, on open platforms such as Facebook. Such highly dynamic engagements can produce ugly and hateful responses and content. Malicious content remains difficult to track and regulate despite the efforts of platforms such as Facebook: although some universal forms of hate speech exist, others are deeply nuanced, depending on cultural, social, or religious contexts.

In 2015, Jack Dorsey, the then-Twitter CEO, stated that the platform stands for freedom of expression and speaking truth to power (Goggin, n.d.). However, such platforms must deal with trolls and abuse, and they have been ineffective at countering these harms. To address the challenge, Dorsey suggested that people engaging in misdeeds on the platform should be kicked off so that they are not visible to the public (Goggin, n.d.). Despite such suggestions, online harassment and other ills remain prevalent. For example, Pew Research Center statistics indicate that 41 percent of Americans have been victims of some form of online harassment, and a further 20 percent of those victims were harassed for their political views (Goggin, n.d.). Differences in ethnic, gender, and racial background were also cited as reasons for the online harassment suffered by Americans.

Unfortunately, platforms that were created to celebrate our diversity are being used to tear us down based on our differences. Instead of being a utopia for diverse opinions and ideas, the internet is fast becoming a dystopia in which some people are bullied and censored. Instead of allowing people to share and celebrate their unique ideas and beliefs, it has been turned into a hosting space for lynch mobs that shut down dissenting voices. The anonymity and lack of physical contact in online interactions also encourage ills such as cyberbullying and hate speech: research finds that anonymity increases aggression in interpersonal interactions and has similar effects on antisocial behavior in online communication (Woods & Ruscher, 2021). Such challenges have threatened the spirit of free speech envisioned by the pioneers who brought internet technology into civilian use.

Online Harassment

Although online harassment is a broad concept, hate speech has emerged as a severe threat to freedom of speech on the internet. Online users are subjected to a wide range of ills: from sexual harassment, stalking, and physical threats to purposeful embarrassment, people are exposed to all manner of hate speech and online harm once they enter digital spaces. Social media has been ripe ground for online abuse, with 75% of people who experience online abuse experiencing it on social media platforms. Statistics also indicate that 79% of social media users feel that companies are failing to prevent online harassment on their platforms (Goggin, n.d.).

Online Hate Speech

Online communication has expanded significantly, and there are over 4 billion internet users globally (Sinpeng, Martin, Gelber, & Shields, 2021). To understand the challenge of hate speech on online platforms, it is essential to define its boundaries. Hate speech can be addressed from a universal standpoint or within a specific national context. On platforms such as Facebook, people come to share their opinions and experiences on sensitive issues such as gender, ethnicity, religion, sexuality, and politics, among other dynamic topics. As a result, it is common for people to hold dissenting opinions on issues such as religious morality, gender politics, or even a nation's foreign policy, and they naturally want to express and debate those views on online platforms.

Hate speech is an offensive communication mechanism that uses stereotypes to express hateful ideologies, targeting protected characteristics such as gender, religion, race, and disability (Chetty & Alathur, 2018). Given how passionate people are about these topics, there is always a chance that a post will cross the line into hate speech (Flew, 2021). However, there is no universal threshold for determining when that line has been crossed. In the US, for example, the First Amendment protects speech that would be considered vile in other parts of the world. In addition, people differ in their tolerance for speech about their protected characteristics, and something offensive to one person might not be to another. For example, the recent special by comedian Dave Chappelle about trans people was considered offensive by some sections of the affected community, while others thought it a piece of comedic genius.

Image 2: Dave Chappelle's response to the trans controversy

This issue was divisive across online platforms as people shared their opinions on whether the comedian had crossed the line. Such scenarios expose the complexity of hate speech and the challenge of establishing consensus on its threshold. Despite such debates, there are numerous cases where the social consensus was clear. For example, the Christchurch massacre, in which 51 Muslim worshipers were slaughtered in two mosques, was live-streamed on Facebook in 2019, and there was no question that the attack was motivated by religious prejudice.

Image 3: Memorial for the victims of the Christchurch mosque shootings

Such events have put social media platforms in the spotlight over preventing hate speech and its potential consequences. Looking back at the definition of hate speech, it is clear that the live streaming of these acts risked inspiring further violence against the Muslim community, as well as retaliatory speech and attacks.

Government inquests, such as the 2017 inquiry by the House of Commons, note that social media companies have taken a laissez-faire approach to countering extremist content on their platforms. Such complacency contributed to outcomes like the Christchurch massacre and fueled debate about the role of social media companies in regulating hate speech. Companies like Facebook have since been forced into action to protect the public against hate speech. For example, Taylor (2019) reports that Facebook was forced to reevaluate who could go live in order to reduce the likelihood of a repeat of the massacre. The company is also employing artificial intelligence to remove hate groups from the platform. However, even when offending content is removed, it may already have caused psychological harm to its recipients, and removal-based moderation is often criticized for restricting free speech, as audiences demand that service providers develop and enforce censorship (Ullmann & Tomalin, 2020).
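One alternative proposed in the literature is "quarantining" (Ullmann & Tomalin, 2020), in which a suspect post is held back and flagged for review rather than silently deleted or freely published. A minimal sketch of the idea is below; the keyword scorer is a toy stand-in for a real classifier, and all terms, names, and thresholds are illustrative assumptions rather than any platform's actual policy:

```python
# Toy sketch of a "quarantine" moderation pipeline: instead of deleting
# flagged posts outright, posts whose toxicity score exceeds a threshold
# are held for human review before reaching their audience.

HATE_TERMS = {"slur1", "slur2"}  # placeholder terms, not a real lexicon
QUARANTINE_THRESHOLD = 0.5      # illustrative cutoff, not a tuned value


def toxicity_score(text: str) -> float:
    """Fraction of words matching the placeholder hate lexicon."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in HATE_TERMS for w in words) / len(words)


def moderate(text: str) -> str:
    """Return 'publish' or 'quarantine' for a candidate post."""
    if toxicity_score(text) >= QUARANTINE_THRESHOLD:
        return "quarantine"  # held for human review, not deleted
    return "publish"


print(moderate("hello world"))  # → publish
print(moderate("slur1 slur2"))  # → quarantine
```

The appeal of this design is that borderline speech is delayed rather than destroyed, which softens the free-speech objection to outright removal; in practice the scoring function would be a trained classifier rather than a word list.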

Platforms such as Facebook also have community standards to regulate speech and content, with restrictions based on safety, privacy, dignity, and authenticity. Speech that encourages violence or criminal behavior, or that threatens the protection of individuals and groups, is restricted on the platform. Judging from the continued existence of hate speech, however, Facebook's mitigating actions and policies remain insufficient, and more must be done to weed out harmful accounts. At the same time, platforms need to account for the complex nexus between freedom of speech and internet content regulation. I believe the internet should continue to promote free speech, but platforms such as Facebook should be more proactive in closing accounts that promote hate and violence against groups based on their diversity.

Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and violent behavior, 40, 108-118.

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91-96.

Goggin, G. (n.d.). Internet Cultures and Governance: Hate Speech, Online Harms & Moderation. The University of Sydney.

Taylor (2019). Exclusive: Facebook to clamp down on hate in response to Christchurch mosque attack. NZ Herald.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.

Ullmann, S., & Tomalin, M. (2020). Quarantining online hate speech: technical and ethical perspectives. Ethics and Information Technology, 22, 69-80.

Woods, F. A., & Ruscher, J. B. (2021). Viral sticks, virtual stones: addressing anonymous hate speech online. Patterns of Prejudice, 55(3), 265-289.
