From Tragedy to Action: How the Christchurch Mosque Shooting Inspired New Digital Policies

Photography by Charlton, 2019

In today’s society, the rapid development and broad adoption of social media provide a platform for people to communicate, share, and connect. Online shopping, digital payment, and similar services have become an indispensable part of daily life. But this growth has been accompanied by the rise of online violence and hate speech. Anonymity, broad dissemination, and algorithmic recommendation allow hate speech and cyberviolence to spread rapidly on social media and widen their reach. At the same time, problems such as information leakage, cyberbullying, and false information can directly affect users’ daily lives and rights. These problems involve malicious attacks, harassment, and racial, gender, and religious discrimination, and they threaten the wellbeing of individuals and the stability of society. This article therefore discusses how digital policy can ensure online safety by curbing dangerous speech and online harm without infringing on free speech, drawing on Terry Flew’s Regulating Platforms and Sinpeng et al.’s Facebook: Regulating Hate Speech in the Asia Pacific for definitions. The following sections examine the phenomenon, causes, and effects of cyberbullying and hate speech on social media through the case of the Christchurch mosque shootings in New Zealand, and consider how digital policy should respond to similar hate speech and online victimization.

Balancing the need for free speech and cybersecurity

First, it helps to understand the definitions of free speech, hate speech, and online harm, and the connections between them. Barlow’s manifesto on cyberspace declared, ‘We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity’ (Barlow, 2018). In a free and democratic society, freedom of expression on social platforms should be promoted to encourage the existence of different views, opinions, and ideas. Everyone has the right to express their views freely, orally or in writing, in an environment free from racial, religious, gender, and orientation discrimination, so as to pursue truth and form their own judgments. At the same time, free speech and the media can provide the public with diverse information and opinions, monitor government actions, and promote public debate and social progress.

What is Free Speech, Hate Speech, and Online Harm

The existence of free speech promotes diversity and the expression of different views. In this context, hate speech may be seen as a negative side effect of free speech, but it is not its inevitable consequence. According to Sinpeng et al., hate speech occurs when a speaker with a degree of authority, in a public setting, uses insulting or offensive speech that discriminates against or harms others, for example on the basis of gender, age, race, or disability, and that incites hatred and hostility, leading to social antagonism and violent behavior (Sinpeng et al., 2021, pp. 11–12). Because of the harm it causes, hate speech is recognized as speech that requires a policy response (Brown, 2015, 2017), and such responses may in turn reduce the space for free speech. This potential conflict between the right to free speech and the prevention of hate speech requires digital policies and controls that strike a balance. Excessive tolerance or condoning of hate speech can lead to social instability, group confrontation, and the erosion of individual rights.

“Online harm” is a broad, loosely defined concept. It covers a range of negative behaviors and consequences facilitated by digital platforms, such as online abuse, intentional embarrassment, invasion of privacy, stalking, physical danger, harassment (including sexual harassment), and bullying. These harms can have far-reaching psychological, social, and political consequences (Vogels, 2021).

Source: Pew Research Center.

According to the report, about 40 percent of adults in the United States, regardless of gender, age, or race, have experienced online harassment, and many believe they were targeted because of their political views. People targeted for online abuse today face more types and more severe forms of abuse than in 2017 (Vogels, 2021). There are cases on every continent: those inclined toward racism, misogyny, or homophobia find public platforms that reinforce their views and can spur them into niches of violence.

Because of the anonymity and wide reach the Internet provides, online harm is by nature concealed and diffuse, which allows harmful behavior to spread quickly and makes it difficult to control.

When the Internet violence ends: hate still lingers on social media

In 2019, 51 people were killed in shootings at two mosques in Christchurch, New Zealand. After the incident, hate speech and online harm proliferated. The gunman’s manifesto and livestream spread quickly across major platforms, amplifying the violence. When the suspect was revealed to be an Australian self-described white supremacist and far-right extremist, the news caused an uproar online. While the Australian government unequivocally condemned the attack, online conversation turned to racial slurs. The gunman expressed hostility to Islam and multiculturalism and sought to convey his extremist views through the attack (Mao, 2019).

The Christchurch attacks have sparked a debate in Australia about race hate

“A far-right senator issued a statement blaming Muslim immigrants for the attacks.” The move sparked outrage among Australian citizens, with 1.4 million people signing a petition calling for the senator to resign for “supporting right-wing terrorism” (Mao, 2019). A 17-year-old boy who launched a protest against the statement was physically assaulted by four adult men who supported the senator.

Social media breeds terrorists

Before the attack, the gunman told viewers to subscribe to the YouTube channel of a former right-wing activist. Mass retweets and spam posts sowed chaos on the Internet, using political violence to push for political change. Terrorist groups use social platforms and networks to recruit and radicalize potential fighters. Recruits are often easily manipulated and exploited; they may come into contact with recruiters through financial hardship or other circumstances, are radicalized by plotters, and eventually become terrorists. The shooter got what he wanted by provoking white supremacists to radicalize further online. Lee Jarvis, co-editor of the journal Critical Studies on Terrorism, said the Internet provides a space for people with minority beliefs to connect with like-minded others, thereby normalizing their worldview (Marsh & Mulholland, 2019).

Are digital policies and media fulfilling their mission?

In the aftermath of these incidents of hate speech and violence, the media companies and governments responsible for regulating hate speech have been criticized for allowing social unrest. The Christchurch Call remains a critical initiative born of these events. Hate speech is recognized as speech that requires a policy response because of the harm it causes (Brown, 2015, 2017). Justice Minister Kris Faafoi stated at a press conference that “Protecting our right to free speech while balancing that right with protections against ‘hate speech’ requires careful consideration and extensive input” (Al Jazeera, 2021). Yet the handling of the incident did not curb the spread of similar speech on social media in a timely and comprehensive way, undermining the effectiveness of content regulation. Although the government pledged to counter hateful comments while protecting freedom of expression, to suppress the “dark web” spaces devoted to far-right extremism, and to strengthen digital policies and algorithms against hate speech and cyberviolence, the problems and challenges facing society, government, and the media remain. The New Zealand government has announced stronger laws and controls on hate speech, but these laws have not been as effective as they could be in protecting religious and minority groups (Al Jazeera, 2021). The weak crackdown on extremist hate speech has upset the balance between free speech and hate speech.

Secondly, algorithms may not make socially acceptable or morally sound decisions when tasks involve personal privacy, moral judgment, or ethics. Social media relies on algorithms to deliver content to users, but the inability of algorithms to reliably identify violent far-right content has allowed hate speech to keep spreading virally or, worse, to generate online violence. For example, Facebook did not take decisive action against incidents of religious hate speech. When reader Holly West saw a graphic Facebook post proclaiming that “the only good Muslim is a fucking dead one,” she used the social network’s reporting system to flag it as hate speech (Tobin et al., 2017). Facebook, however, responded that the statement did not violate community standards, and merely deleted one anti-Muslim comment without taking further action. By failing to stop the spread of hate speech and extremist content and to fulfill its social responsibility, the platform increased the level of online harm.

Nonetheless, digital policies and laws have been somewhat improved and enforced in the wake of the New Zealand mosque shootings. Regulation of social media platforms has increased content censorship, clarified the types of content that are prohibited, and sought to prevent the spread of extremism and hate speech. The Australian Parliament passed the Counter-Terrorism Legislation Amendment (2019 Measures No. 1) Bill 2019 (Commonwealth Parliament, 2020), which gives law enforcement agencies more powers and strengthens the monitoring and countering of extremism and hate speech. Following the incident, the Commonwealth Parliament also enacted the Online Safety Bill 2021, which regulates content restrictions for live streaming (Commonwealth Parliament, 2021).

This legislation comes in the wake of the Christchurch massacre, which was broadcast live, and it regulates a number of related types of harm, such as cyberbullying of children and live streams that may promote or incite extreme violence. The incident has also had a global impact: the New Zealand government has called for greater international cooperation to suppress hate speech and ensure online safety.

How can digital policy balance free speech while curbing hate speech and online harm?

Content regulation and transparency

Digital policymaking needs to be open, transparent, and accountable. The law should clarify the responsibilities and obligations of online platforms, establish sound content-review mechanisms, and combat the spread of false information and harmful content. Digital policies must define the boundaries of speech while upholding freedom of expression, protecting the public’s right to information and physical safety, and respecting fundamental rights regardless of race, religion, or gender. Clear definitions of what counts as incitement help social platforms and legal authorities enforce the law. Publishing the standards and criteria used in speech review safeguards freedom of speech, improves policy transparency, increases credibility, and protects the public interest.

Technical management and upgrading

Strengthening technical means and regulatory tools is essential. As technology develops, network platforms can use artificial intelligence, big data, and other techniques to identify and filter published content intelligently. Social media companies should adapt and upgrade their algorithms to reduce the frequency of extremist and hateful content in user feeds and recommendations, lowering the likelihood that users are exposed to such harmful content.
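The kind of automated screening described above can be sketched in a few lines. The pattern list and function names below are illustrative assumptions, not any platform’s actual system; real moderation stacks layer machine-learned classifiers, user reports, and human review on top of simple rules like these.

```python
# Minimal sketch of one rule-based layer in a content-moderation pipeline.
# The term list here is a hypothetical placeholder, not a production policy.
import re
from dataclasses import dataclass, field

# Hypothetical flagged patterns; a real deployment would use curated,
# regularly updated lexicons plus trained classifiers.
FLAGGED_PATTERNS = [
    r"\bkill\s+all\b",
    r"\bsubhuman\b",
]

@dataclass
class ScreeningResult:
    flagged: bool
    matches: list = field(default_factory=list)

def screen_post(text: str) -> ScreeningResult:
    """Flag a post for human review if it matches any pattern."""
    matches = [p for p in FLAGGED_PATTERNS
               if re.search(p, text, re.IGNORECASE)]
    return ScreeningResult(flagged=bool(matches), matches=matches)
```

A key design choice is that flagged posts are queued for human moderators rather than deleted automatically, which reduces false positives against legitimate speech while still slowing the spread of likely hate speech.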

Strengthen user education and security awareness

On media platforms, organizations can promote cybersecurity awareness through public talks, advertising, and offline activities. Social media users should actively participate in legitimate online-safety courses that promote Internet and digital literacy, resist hate speech and online harm, and jointly safeguard the healthy development of free speech on the Internet.

Governments and organizations are involved

National governments should establish legal frameworks for the Internet that safeguard freedom of expression while providing for fines and other legal sanctions. Relevant policies and measures need to be assessed regularly for their effectiveness and adjusted according to actual conditions. Governments and organizations around the world should cooperate internationally, develop new technological solutions, and work together to create a secure cyber environment.


This article has examined the importance and effectiveness of digital policies in addressing online violence and hate speech on social media. With the rapid development of social media, online violence and hate speech have become increasingly severe, threatening individuals and social stability. To better balance free speech and cybersecurity, digital policy can contain online harms through content regulation, technological upgrades, user education, and cross-border cooperation. However, digital policies face implementation and technical challenges. As the New Zealand case study shows, digital regulation has achieved some success, but shortcomings remain. Digital policies must continue to be refined to balance freedom of expression and cybersecurity, promote the healthy development of social media, and build a safe and inclusive digital environment. Through sustained effort and cross-border cooperation, a stable and inclusive cyberspace can be created.


Al Jazeera. (2021, June 25). New Zealand beefing up hate speech laws after Christchurch attack.

Barlow, J. P. (2018, April 8). A declaration of the independence of Cyberspace. Electronic Frontier Foundation.

Brown, A. (2015). Hate speech law: A philosophical examination. Routledge.

Brown, A. (2017). What is hate speech? Part 1: The myth of hate. Law and Philosophy, 36, 419–468.

Charlton, A. (2019). Media statement: Hate speech laws are already strong enough [Photograph].

Commonwealth Parliament. (2020, February 24). Counter-Terrorism Legislation Amendment (2019 Measures No. 1) Bill 2019. Parliament of Australia.

Commonwealth Parliament. (2021, June 2). Online Safety Bill 2021. Parliament of Australia.

Laub, Z. (2019, June 7). Hate speech on Social Media: Global Comparisons. Council on Foreign Relations.

Mao, F. (2019, March 20). Christchurch shooting: Australia’s moment of hate speech reckoning. BBC News.

Marsh, J., & Mulholland, T. (2019, March 16). How the Christchurch terrorist attack was made for social media | CNN business. CNN.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.

Tobin, A., Varner, M., & Angwin, J. (2017, December 28). Facebook’s uneven enforcement of hate speech rules allows vile posts to stay up. ProPublica.

Vogels, E. A. (2021, January 13). The state of online harassment. Pew Research Center: Internet, Science & Tech.  
