What are online harms?
In the digital era, the scope of online harm has expanded to include any negative experience or consequence resulting from the use of the internet or online platforms. It covers a wide range of behaviors, such as cyberbullying, online harassment, hate speech, defamation, pornography, disinformation, and phishing. For a variety of reasons, a clear and consistent definition of online harms is difficult to establish.
According to Parekh (2012), hate speech:
Expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, or sexual orientation.
Hate speech rarely targets a single trait, simply because human beings cannot be reduced to a single marker of identity such as race, class, or gender, but are mediated by the fabric of experience (Edwards & Esposito, 2019). It dehumanizes and delegitimizes specific or easily identifiable individuals by attributing undesirable and negative qualities to them (Parekh, 2012).
Smartt (2020) defines defamation as:
Untrue statements that harm someone’s reputation, character, or community status.
On January 23, 2023, Linghua Zheng, a 24-year-old woman who suffered from depression after being cyberbullied for dyeing her hair pink, passed away. Millions of people in China mourned her on social media, with some women changing their hairstyles to pink in tribute.
On July 13, 2022, Zheng visited her 84-year-old grandfather to share her master's admission offer from East China Normal University and took pictures to record the moment. She posted the photos on multiple platforms, including Douyin (China's version of TikTok), The Little Red, and Weibo.
Zheng’s (Nickname: Jidan Ji) post on The Little Red
On July 14, 2022, she discovered that her photos had been stolen and circulated on various platforms. A number of educational accounts on Douyin had stolen Zheng's photos and impersonated her to sell courses for profit. Within one week, the content she posted on The Little Red was screenshotted and reposted on Douyin, Baijiahao, Kuaishou, and other platforms. Under the reposted content on Baijiahao, she received a flood of insulting and defamatory comments about her pink hair. Some spread rumors that she had sexual relations with her grandfather, or with a man merely claiming to be her grandfather; some attacked her identity as a teacher and the school that admitted her; and others labeled her a “nightclub girl”, an “unserious person”, a “demon”, a “red-haired monster”, and other derogatory terms because of her pink hair. These vicious comments even drew support from other netizens, with the top comment receiving more than 2,000 likes.
Within two months, Zheng was subjected to doxxing, slut-shaming, gender discrimination, and group prejudice, along with all manner of accusations delivered from the moral high ground. These hurtful comments were often stripped of context, based on misreadings, and driven by bias. Whenever such incidents occur, public opinion splinters in many directions. The question worth pondering, then, is: what caused this tragedy, and others like it, to happen?
1. Affordances of the Internet
In the virtual world, users can easily hide their identities behind anonymity or pseudonyms, creating opportunities to interact with strangers. The flexibility of online communication leads users to display psychological patterns different from those they show in the real world (Luarn & Hsieh, 2014). Online platforms give users distance and few social contextual cues, so people are more willing to express opinions online than offline (Ho & McLeod, 2008). In other words, the degree to which the medium provides social symbols to communicators is low, given the absence of body language, facial expressions, voice quality, and status cues (Liu, 2002). If they met a girl with pink hair in real life, would-be cyberbullies might gossip privately or mutter to themselves and let it go. On the internet, however, anonymity makes it easier for people to vent hurtful, preconceived biases at others without facing direct consequences. They may not even realize that insulting and defaming others is cruel, or that they themselves may be behaving terribly.
Sunstein (2002) discusses “group polarization” in the Journal of Political Philosophy, defining it as the predictable movement of group members toward a more extreme point in the direction indicated by their pre-existing tendencies. When people with similar views come together to discuss the same topic, their views tend to become more polarized. Individuals in the group try to assert their identity as group members and to differentiate themselves from outsiders. Some therefore actively seek out malicious comments so as to feel that their views are being validated. Group polarization can reinforce like-minded individuals' beliefs and attitudes, reduce the willingness to compromise or consider alternative views, and lead to the expression and sharing of ever more extreme opinions.
Social media platforms run on algorithms, and recommendation algorithms can precisely serve tailored content to groups of like-minded people based on their browsing history or topics of interest. Given the breadth and depth of digital dissemination, toxic comments can quickly be amplified, recorded, and spread, causing repeated harm to victims. An interesting finding is that if the initial comments under a post are positive, subsequent comments are more likely to be positive as well, and vice versa. Why do people praise when they see others' positive comments and insult when they see others' negative ones?
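The amplification dynamic described above can be sketched in a few lines of code. This is a hypothetical toy model, not any platform's actual algorithm: posts carry illustrative topic tags, and the ranking simply favors overlap with a user's browsing history, which is enough to show why engaging with one kind of content keeps surfacing more of it.

```python
# Toy sketch of interest-based recommendation (hypothetical tags and
# scoring, not a real platform's algorithm): posts are ranked by how
# many topic tags they share with the user's browsing history.

def recommend(history_tags, posts, top_n=2):
    """Return the top_n posts with the most tags in common with history."""
    def overlap(post):
        return len(set(post["tags"]) & set(history_tags))
    # Higher overlap ranks first, so similar content keeps resurfacing.
    return sorted(posts, key=overlap, reverse=True)[:top_n]

posts = [
    {"id": 1, "tags": ["pink hair", "teacher", "gossip"]},
    {"id": 2, "tags": ["cooking", "recipes"]},
    {"id": 3, "tags": ["gossip", "celebrity"]},
]

# A user who has engaged with gossip-style content gets served more of it.
feed = recommend(["gossip", "pink hair"], posts)
print([p["id"] for p in feed])  # the gossip-tagged posts rank first
```

Even this crude similarity ranking produces a feedback loop: the more a user dwells on a topic, the more exclusively that topic fills the feed, which is how toxic discussions can snowball for the users most inclined to join them.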
The Spiral of Silence theory holds that people tend to remain silent when they perceive their opinions to be in the minority on controversial public issues; conversely, they become increasingly vocal and confident when they believe their opinions are shared by the majority (Noelle-Neumann, 1973). The anonymity of digital media can remove the fear of isolation and encourage the expression of unpopular views. The Broken Windows theory (Kelling & Wilson, 1982) can also explain this phenomenon: some people merely stumble upon negative comments while browsing, yet join in with even more uncivil and unrestrained behavior to follow the trend or stir up trouble. The clustering of such small groups then extends individual malice into a collective siege.
2. Lack of Effective Platform Governance
The cost of spreading rumors on major online platforms is far lower than the cost of dispelling them. When Zheng discovered people insulting, slandering, and infringing upon her, she wrote complaint letters and private messages to the platforms' support channels overnight. However, many complaint buttons on online platforms are hidden, and filing a complaint required submitting photos of the front and back of her ID card as well as letters stamped with official seals. Despite her efforts, the platforms' response was “complaint failed”; some users who stole her pictures even blacklisted her, while others simply hid the infringing content.
Social media is the most common venue for online harm (Jhaver et al., 2018), yet few platforms fulfill their “duty of care”. Apart from Douyin, platforms such as The Little Red, Weibo, and Bilibili do not display warning text in their comment boxes reminding users to watch their language and behavior.
Major mainstream social media platforms claim to be improving measures to prevent and manage abusive comments and to maintain safety and integrity, such as one-click protection features, psychological care teams, message interception, and account penalties. However, there is a tension between protecting users from harmful content and upholding the principle of free speech (Roberts, 2019). In most cases, platforms use a combination of human moderation and automated content moderation tools to screen online content. To honor their commitment to a free-speech environment, they generally rely on reactive, post-hoc moderation: moderators review content only after users report it or after harmful content has already been published in public online spaces.
However, hiring moderators around the clock is expensive, and some companies cannot afford it. Review therefore often lags behind the moment of harm, and because large-scale online bullying typically erupts within a short window, an incident may already have peaked before any review takes place. Some companies may even exploit the traffic and attention that online harm attracts and deliberately ignore or condone such behavior.
On the other hand, while we focus on the efficiency of artificial intelligence, we must also consider the ethical issues it raises. Machine filters can identify and flag only obviously inappropriate content, but online bullying is often presented subtly, through abbreviations, homophones, Photoshopped images, and the like, which machines struggle to recognize. In more complex cases, AI may fail to grasp cultural sensitivities, context, or inappropriate content carrying special meanings. Platform content moderation is thus full of moral and practical challenges. Will the moderation system treat cases differently based on gender, race, class, and other factors? Will different moderators reach consistent decisions on the same case? Is the platform responsible for the negative consequences of online harm?
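The blind spot of machine filters is easy to demonstrate. The sketch below is a minimal, hypothetical keyword filter (the blocklist words are illustrative, not drawn from any real moderation system): it catches a slur written plainly but misses the same slur disguised with leetspeak or other obfuscation, which is exactly how abusive comments evade automated screening.

```python
# Minimal sketch of keyword-based content filtering (hypothetical
# blocklist), showing why obfuscated abuse slips past machine filters.

BLOCKLIST = {"demon", "monster"}  # illustrative banned terms

def is_flagged(comment: str) -> bool:
    """Flag a comment if any of its words exactly matches a blocked term."""
    words = comment.lower().split()
    # strip() removes trailing punctuation so "monster!!" still matches
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_flagged("She is a demon"))        # exact match is caught
print(is_flagged("She is a d3mon"))        # leetspeak variant evades
print(is_flagged("red-haired m0nster!!"))  # obfuscation evades
```

Defeating such evasion requires normalization tables, fuzzy matching, or context-aware models rather than literal lookups, and even then sarcasm, homophones in other languages, and doctored images remain out of reach for simple filters.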
3. Incomplete Legislation
The harm of collective speech far outweighs the simple sum of individual statements. The law cannot hold an individual accountable for speech until it violates the Civil Code, the Public Security Administration Punishments Law, or the Criminal Law; the negative, biased online environment itself is not actionable. “No snowflake is innocent when an avalanche falls” and “everyone gives a shove to a falling wall”, yet it is difficult to identify which snowflake, or which individual, ultimately caused the tragic outcome. Because actual perpetrators are hard to identify and the law does not hold the masses collectively responsible, claims of online harm lose both a clear subject and practical feasibility at the legal level.
Several statutes stipulate that no organization or individual shall insult or slander others in any way:
- Article 1024 of the Civil Code of the People’s Republic of China;
- Article 42 of the Public Security Administration Punishments Law of the People’s Republic of China; and
- Article 246 of the Criminal Law of the People’s Republic of China.
However, insult and defamation are privately prosecuted offenses under the criminal law, and the parties involved must “report before processing”. The law therefore cannot impose criminal liability unless the victims themselves bring the case to court.
Zheng called the police as soon as she discovered she was being attacked online. The police advised her to complain to the platforms, and she then sought justice through social media. She had some of the evidence notarized at a notary office and issued a lawyer's letter. But does collecting evidence and reporting to the police promptly actually solve the problem? Zheng completed every procedure and step available to her, yet after more than six months there was still no substantial progress in the case. In reality, the road to justice for targets of online violence is often long: many cases drag on for half a year or a year and still come to little in the end. When online harm goes unrestrained and unsanctioned by law, it can become normalized and accepted as part of online discourse, creating a toxic environment in which illegal and unethical behavior online is treated as an acceptable form of self-expression.
Finally, each of us may face pressure or anger in the real world, but that is no excuse for venting without restraint in the online world. When we do not know the truth or the whole picture, let us try to extend goodwill and restrain our inner demons.
Civil Code of the People’s Republic of China. (2020). The National People’s Congress Committee of the People’s Republic of China. http://www.npc.gov.cn/npc/c30834/202006/75ba6483b8344591abd07917e1d25cc8.shtml
Criminal Law of the People’s Republic of China. (1979). The National People’s Congress Committee of the People’s Republic of China. http://www.npc.gov.cn/wxzlhgb/gb2021/202104/3a338df89b0a415481a9bf0571588f88/files/3d9248e01141484ead7d01b58958e0ae.pdf
Edwards, E. B., & Esposito, J. (2019). Introduction. Intersectional Analysis as a Method to Analyze Popular Culture: Clarity in the Matrix. (pp. 1-25). Routledge.
Ho, S., & McLeod, D. M. (2008). Social-Psychological Influences on Opinion Expression in Face-to-Face and Computer-Mediated Communication. Communication Research, 35(2), 190–207. https://doi.org/10.1177/0093650207313159
Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online Harassment and Content Moderation: The Case of Blocklists. ACM Transactions on Computer-Human Interaction, 25(2), 1–33. https://doi.org/10.1145/3185593
Kelling, G. L., & Wilson, J. Q. (1982). Broken windows: The police and neighborhood safety. Atlantic Monthly, 249(1), 29–38. https://doi.org/10.4324/9781315087863-11
Liu, Y. (2002). What does research say about the nature of computer-mediated communication: task-oriented, social-emotion-oriented, or both? Electronic Journal of Sociology, 6(1).
Luarn, P., & Hsieh, A. Y. (2014). Speech or silence. Online Information Review, 38(7), 881–895. https://doi.org/10.1108/oir-03-2014-0076
Noelle-Neumann, E. (1973). Return to the concept of powerful mass media. Studies of Broadcasting, 12(9), 67-112.
Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech (pp. 37–56). Cambridge University Press. https://doi.org/10.1017/CBO9781139042871.006
Public Security Administration Punishments Law of the People’s Republic of China. (2005). The National People’s Congress Committee of the People’s Republic of China. http://www.gov.cn/ziliao/flfg/2005-08/29/content_27130.htm
Roberts, S. T. (2019). Understanding Commercial Content Moderation. Behind the Screen. (pp.33-72). Yale University Press.
Smartt, U. (2020). Defamation. In Media Law for Journalists (p. 78). Routledge.
Sunstein, C. R. (2002). The Law of Group Polarization. Journal of Political Philosophy, 10(2), 175–195. https://doi.org/10.1111/1467-9760.00148