Disconnecting from online harm: What else can we do besides moderation?

(Figure 1. The poster of Black Mirror. Image from: IMDb)

Introduction: What is online harm?

Broadly speaking, online harm refers to violent or abusive incidents that happen online. Online harm takes more forms than offline harm: beyond physical contact, digital technology enables abuse through video, email, pictures, emoji, and more.

According to the Australian government's eSafety Commissioner website (2023), there are four types of online harm: cyberbullying (of children and young people), adult cyber abuse, image-based abuse, and illegal and restricted online content.

Although the classification criteria overlap in places, it is a great website that will offer help if you encounter online harm in Australia.

As we all know, online harm is a severe and thorny problem that comes along with digital technology. As long as they are connected to social media, anyone can be a potential victim of online harm.

In a 2022 survey, researchers reported that “75% of Australian adults have negative experiences online” (eSafety Commissioner, 2022).

Fuss? It really hurts!

The health impact of online harm.

You may think it is not a big deal because there are no physical scars. In fact, online harm can have a devastating effect on mental health.

In the same 2022 survey, just under one in three of those Australians reported an impact on their mental wellbeing, while about one in six reported an impact on their physical health (eSafety Commissioner, 2022).

You won’t realize how fragile your psychological firewall is until you experience online harm yourself.

The targets of online hate speech live with fear and harassment (Parekh, 2012, as cited in Flew, 2021).

What’s worse, there are many cases worldwide of suicide caused by online harm. In Spain, a girl called Melania Capitan was abused online over legal hunting posts on Instagram and took her own life in 2017. In China, a girl called Linghua Zheng was abused online over rumors sparked by a pink-hair post on Weibo and took her own life in 2023. In the UK, a girl called Charlotte Cope was abused online over a littering post on Facebook and took her own life in 2020.

Why can’t they escape?

Internet addiction as a mediator.

A study found that victims of online harm often show internet addiction, which was shown to mediate the link between cyber victimization and psychological and physical symptoms (Lin et al., 2020). People cannot disconnect themselves from online harm. This is not hard to understand: run an experiment on yourself. Hide your phone and see how long you can stand living without it. The result is obvious. We are all so spoiled by the convenience of digital technology that it is hard for us to leave it. Connection, the feature of the internet that its founders and users are proud of, is increasingly being used as a toxic and powerful weapon by abusers.

When victims can’t cut off their connection with the online world, they may choose to cut off their connection with the real world. For them, that may seem like the only way to escape.

What should the platform and government do?

Clearly, we should take more action to prevent this. Flew (2021) pointed out that the dispute is about the roles of platforms and governments in addressing online harm. In what way, and to what extent, should each of them engage in the governance of online harm?

Case of China: 

In China, online harm issues such as cyberbullying have received unprecedented attention. In 2022, China introduced a series of related policies, such as the “Notice on Effectively Strengthening the Governance of Cyber Violence” issued by the Cyberspace Administration of China. Platforms in China are pushed by these policies to change: following government requirements, social media platforms began taking corresponding measures in 2022.

The mainstream measure: Moderation.

  • The background of moderation. 

Moderation is the most conventional and controversial measure. It has a long history as the enemy of press freedom, once serving as a centralized tool of manipulation and regulation. When we mention moderation nowadays, we often think of the dark world described in George Orwell's novel 1984, in which people have no freedom of expression because of the government's extreme moderation (Orwell, 2020).

(Figure 2. The book cover of 1984. Image from: Amazon)

  • The debates around freedom of expression 

Cyberspace was originally imagined as an independent world with freedom of expression. Its pioneers believed they were “creating a world where anyone, anywhere may express his or her beliefs” (Barlow, 1996).

Insisting on this utopian ideal, platforms have not been eager to change.

But more and more vicious incidents indicate that the existing mechanisms against online harm need updating. The amplification of online hate speech “promotes mistrust and hostility in society and negates the human dignity” (Flew, 2021).

In my opinion, online harm, especially online hate speech, should not be protected by freedom of expression. Freedom of expression means expressing rational ideas and thoughts with respect; hate and offence can be excluded from it. We have the ability to shape a rational and dignified cyber public sphere.

To some extent, China shows a governance tendency of prioritizing online safety over freedom of expression.

(Figure 3. The poster of V for Vendetta. Image from: IMDb)

  • Does the moderation system work well?

Moderation on platforms is carried out by both humans and algorithms, and the rules are well designed and transparent. The moderation system on Douban (a Chinese counterpart of IMDb) is extremely strict: even impolite expressions may be deleted by moderators.

However, it may not work as well as expected. The idea is ideal, while the reality is complicated. Although the rules are transparent, actual practice can be confusing. Moderation algorithms often misidentify prohibited words, while vaguer words carrying prohibited meanings are hard to identify for both human moderators and algorithms. Hate speech can be even more difficult to detect: the language can be dressed up as ‘scientific’ or presented as jokes (Flew, 2021).
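A toy sketch makes the misidentification problem concrete. The word list and examples below are hypothetical, not any platform's actual rules; the point is that naive keyword matching simultaneously over-blocks innocent text and misses simple evasions:

```python
# Hypothetical sketch of naive keyword moderation.
# Substring matching both over-blocks innocent text and misses evasions.

BANNED = {"kill", "die"}  # illustrative word list, not a real platform's

def naive_flag(text: str) -> bool:
    """Flag a post if any banned word appears as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

# False positive: "skill" contains "kill", so an innocent post is flagged.
print(naive_flag("I want to improve my skill"))   # True (over-blocking)

# False negative: a spelling trick ("k1ll") slips past the exact match.
print(naive_flag("you should k1ll yourself"))     # False (evasion)
```

This is why real systems combine word lists with human review and context-aware models, and why both still struggle with coded or joking hate speech.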

A strict moderation system may simply push online harm into vaguer, more underground forms.

Four more directions for countering online harm in China.

  • 1. Set up a complaint channel: Report

The report function is widely used. Reports must be anonymous to protect reporters from fear and revenge. Reporting works for small-scale problems such as misinformation and illegal content, but it cannot be the main solution to online harm: we should not rely on users as digital labor when faced with large-scale cases such as cyberbullying and online abuse. Also, people apply different criteria to offensive speech; language is an art of symbols with multiple meanings. For example, ‘Da Ye’ can be a respectful term for elderly men in Chinese, but it can also be abusive in certain situations. There is thus an undeniable grey zone in the report function. “Hate speech is very context dependent, and intimate local knowledge is required to understand and address it fully” (Sinpeng et al., 2021).

  • 2. Semi-anonymization attempt: User IP location display

Anonymity is the foundation of free expression in cyberspace. It dampens offline power hierarchies and thus makes online expression more equal; however, it also shelters viciousness. Some users call for a real-name system as if it would solve everything. But that would reduce social media to little more than a social telephone directory, and people might simply abandon the platform along with its other valuable functions. What's more, online harm would likely migrate to other spaces where it is even harder to tackle.

In 2022, most Chinese social media platforms began displaying users’ locations based on their IP addresses. This semi-anonymization measure intrudes on privacy to some extent but preserves the platforms’ anonymity. Its original purpose was to curb region-based misinformation and rumors; the core idea is to give users clues with which to identify rumors.

It is also helpful for tackling regional online harm, which sometimes grows out of intolerance of difference. The problem is that “if there is less tolerance of difference and if the constraints on that intolerance are not watched, then intolerance and hate will find expression” (Keen and Georgescu, 2014, as cited in Flew, 2021).

In my opinion, for cyberbullying and online abuse, location display may help by enabling a larger group to confront offensive groups: locations serve as clues for understanding differences and for uniting a larger community whose opinions are more diverse and less extreme.

  • 3. Disconnection function: A button for protection

On Chinese social media platforms such as Xiaohongshu, Douyin, and Weibo, a protection button bans all comments, messages, and reposts from unfollowed accounts for seven days, while those comments remain visible to other users. It works like a shield, providing an escape from online harm without full disconnection from the internet. However, it is also misused as another offensive weapon: some people mock victims for using the button to parade their own ‘justice’ and then incite more people to attack, and the harm may continue under other accounts.
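The button's behavior as described above can be modeled in a few lines. This is a hypothetical sketch of the logic, not any platform's real implementation, which is not public:

```python
# Hypothetical model of the seven-day "protection button":
# while the shield is active, only followed accounts may interact;
# the account itself stays online and visible.

from datetime import datetime, timedelta
from typing import Optional

class Account:
    def __init__(self, name: str):
        self.name = name
        self.followed: set = set()              # accounts this user follows
        self.shield_until: Optional[datetime] = None

    def enable_protection(self, days: int = 7) -> None:
        """Block interactions from unfollowed accounts for `days` days."""
        self.shield_until = datetime.now() + timedelta(days=days)

    def accepts_comment_from(self, sender: str) -> bool:
        shielded = (self.shield_until is not None
                    and datetime.now() < self.shield_until)
        # While shielded, only followed accounts get through; afterwards,
        # everyone can interact again.
        return (not shielded) or (sender in self.followed)

victim = Account("victim")
victim.followed.add("friend")
victim.enable_protection()
print(victim.accepts_comment_from("friend"))    # True
print(victim.accepts_comment_from("stranger"))  # False
```

The design choice worth noting is that the shield filters *incoming* interactions only; it does nothing about mockery posted elsewhere, which is exactly the misuse described above.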

The protection function could be updated further. On Instagram, a similar function works as a filter under the name Hidden Words: users set keywords, and matching comments on their accounts are hidden so that they never see them.
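A per-user keyword filter of this kind can be sketched as follows. The data model is hypothetical (Instagram's actual Hidden Words implementation is not public), and the simple word-level matching inherits the limitations discussed in the moderation section:

```python
# Hypothetical sketch of a per-user "hidden words" comment filter:
# each user keeps their own keyword set, and matching comments are
# hidden from that user only (not deleted platform-wide).

from dataclasses import dataclass, field

@dataclass
class UserFilter:
    hidden_words: set = field(default_factory=set)

    def add_word(self, word: str) -> None:
        self.hidden_words.add(word.lower())

    def is_hidden(self, comment: str) -> bool:
        # Hide the comment if any hidden word appears as a whole token.
        # (Whole-token matching avoids the "skill"/"kill" false positive,
        # but misses punctuation variants like "loser!".)
        tokens = comment.lower().split()
        return any(word in tokens for word in self.hidden_words)

f = UserFilter()
f.add_word("loser")
print(f.is_hidden("what a loser"))     # True: hidden from the account owner
print(f.is_hidden("have a nice day"))  # False: shown normally
```

Compared with platform-wide moderation, this shifts the decision to the individual user: the comment still exists, but the person it targets never has to read it.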

  • 4. Punishment mechanism: Legislation

There is no dedicated legal definition of online harm in the law; final judgments usually rest on other legal provisions. This may be because it is nearly impossible to define online harm, such as online hate speech, precisely.

In China, a representative proposed dedicated cyberbullying legislation at the Two Sessions in 2023. “Up to now, the provisions on cyber violence can only be seen in the Civil Code, Criminal Law, Public Security Management Punishment Law, Cyber Security Law, and other legal norms” (Hui, 2022).

The future and effects of cyberbullying legislation cannot yet be predicted. But legislation can be seen as protection for potential victims: the threat of legal punishment will make users more cautious about what they type online and more attentive to online harm issues.


What to expect in the future governance of online harm?

Countering online harm is a never-ending war, because the enemy is human nature. Despite updates to the mechanisms against online harm, there are still many distressing cases online. It is a good sign that more and more countries and platforms are focusing on online harm issues, but continuous focus is what matters most. The visible forms of online harm are shifting into obscure methods, hiding in the corners of phrases and words. Governments and platforms should stay alert and keep updating in response to ever-changing online harm.

We can see a gradual shift from centralized power to empowerment in platform governance policy. In past mechanisms, in the ‘court’ of online harm, the judgment was ultimately made by the platform or the government, which usually took a long time and vast energy. Facing a dynamic situation with vague definitions, that mechanism is obviously not always effective, so online harm finds space to grow stronger. Platform policy now tends to rely more on users by providing them with shields and swords: users are empowered to solve problems themselves. Social media creates a cyberspace whose rules, based on anonymity, differ from the real world's, yet it can still be used to reproduce power hierarchies and exacerbate unequal power relations (Carlson & Frazer, 2018).

Therefore, making policy from the user's position is a new direction. Users need the rights and the ability to exit and to protect themselves instantly when facing online harm.


References

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation.

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University.

Cyberspace Administration of China. (2022, November 4). Notice on Effectively Strengthening the Governance of Cyber Violence. Office of the Cyberspace Administration of China.

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91–96.

Hui, H. (2022). Special legislation is at the right time to curb cyber violence. The National People’s Congress of the People’s Republic of China.

Lin, L., Liu, J., Cao, X., Wen, S., Xu, J., Xue, Z., & Lu, J. (2020). Internet addiction mediates the association between cyber victimization and psychological and physical symptoms: Moderation by physical exercise. BMC Psychiatry, 20, 144.

Office of the eSafety Commissioner. (2022). Australians’ negative online experiences 2022. eSafety Commissioner.

Office of the eSafety Commissioner. (2023). Home page. eSafety Commissioner. https://www.esafety.gov.au

Orwell, G. (2020). 1984. Obooko Publishing.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific.
