The Negative Impact of Hate Speech on Social Media Platforms, and Attempts at Regulation and Governance


Social media platforms have become the main tools people use to share their lives and obtain information. With this come new problems, one of which is hate speech posted on these platforms. Such speech can harm the platform environment and even society at large. In response, the companies that operate social media platforms have formulated countermeasures, and researchers have offered further suggestions. This article analyzes the negative impact of hate speech through several cases, and discusses effective attempts at governance and supervision based on research findings and actual practice.

Hate Speech

Flew defines hate speech in Regulating Platforms (2021): hate speech is the promotion and expression of hatred through remarks aimed at the characteristics or identity labels of a specific group of people, including gender, creed, skin color, nationality, and even sexual orientation. Hate speech is widespread on social media platforms because the characteristics and functions of these platforms offer convenience to those who post it. According to Matamoros-Fernández (2017), the functions and features of social media platforms allow them to be used as tools to distribute and promote racist discourse. Posters can use a platform’s privacy features to hide their identity, which helps them escape blame and retaliation. They can also quickly find like-minded companions and targets through the platform, and form groups to organize hate speech against those targets.

[Figure: Share of Facebook-listed white supremacist groups with a presence on the platform (Statista Research Department, 2022)]

As the graph above shows, nearly half of the hate groups flagged by Facebook maintain a presence on the platform. The publisher also provided further data: these groups run 57 pages and 10 groups on Facebook. Clearly, the problem of hate speech on social media platforms has become very serious.

People post hate speech for a variety of reasons: conflicts rooted in traditional beliefs and stereotypes, the venting of real-life pressures, or even irrational bandwagon behavior. Whatever the reason, hate speech causes harm and serious negative effects. Like sharp knives, it inflicts serious and continuous harm on victims. It not only humiliates them but can also affect their real lives. Victims come to doubt themselves and feel mental pressure over the very characteristics or identity labels that are attacked. Because they cannot tell who is posting the hate speech, perhaps even someone close to them, they begin to doubt both themselves and those around them, becoming sensitive and nervous. When the mental pressure becomes unbearable, they grow disappointed in life and society, and an eventual emotional breakdown can lead to serious mental illness and even suicide. The issue of hate speech therefore cannot be ignored and must be properly regulated and governed.

Cases of Hate Speech

Two high-profile cases of hate speech had serious consequences; both were reported by the BBC. The first is that of Korean K-pop star Sulli. In 2019, she was found dead by suicide at her home in Seoul at the age of 25 (BBC, 2019). Before 2015, she had been active in show business and had a large following on major social media platforms. Rejecting the shackles of traditional concepts, she advocated for and encouraged Korean women to pursue individuality and freedom. According to BBC News (2019), she participated in the “No Bra” women’s movement in South Korea, expressing love and confidence in women’s bodies by not wearing a bra, and she repeatedly posted such photos on social media to encourage Korean women to free themselves. Her break with traditional concepts and stereotypes created a fierce conflict with traditional Korean culture, and supporters of that culture launched a constant stream of abuse and hate speech against her on social media platforms.

[Image: BBC News, 2019]

These experiences hurt her deeply and forced her to put her love of acting on hold. However, her withdrawal did not stop the attacks; instead, the posters intensified their hate speech and even made death threats against her and her family. After a long struggle against this online violence, Sulli finally broke down emotionally and chose suicide to end her short life.

Another case happened in China. Xuezhou Liu was another young person who lost his life to hate speech on the Internet. In 2022, he died by suicide at the age of 17 after prolonged cyber violence. Compared with Sulli’s, his life was even more of a tragedy. As a baby he was sold by his parents to another family because of poverty. He understood and forgave their choice, and wanted to find them and rejoin his family through his own efforts, turning to social media platforms for help. In 2021, he posted his story on several major Chinese platforms, such as TikTok and Weibo, hoping to get help from netizens. With the powerful information-gathering capability of social media, he finally found his parents. He thought this was a happy ending, but it was in fact the beginning of a nightmare. His parents had divorced, each had formed a new family, and neither was willing to take him in. He wanted help from social media once again while people were still interested in the topic.

[Image: BBC News, 2022]

However, his parents got one step ahead of him, posting false statements on social media to steer public opinion. They claimed that Xuezhou Liu’s purpose was not family but his parents’ property. Many netizens, not knowing the truth, believed they had been deceived by him and began posting hate speech under his accounts on various platforms. He felt the power of the Internet once again, but this time it brought accusation and insult instead of help. Although he clarified the facts, no one was willing to believe him. Under the double blow from his family and the Internet, he ultimately took his own life. One interviewee told the BBC that Xuezhou Liu, as a minor, was subjected to hate speech that even an adult could not bear (BBC News, 2022). After his suicide, some netizens posted their disgust at and resistance to hate speech under his social media accounts, but no amount of sympathy and regret could bring back his young life.

Both cases demonstrate the negative impact of hate speech. They not only record past tragedies but also serve as a reminder to all netizens: although social media brings convenience to people’s lives, it should still be used cautiously. A single action or rumor on the Internet can trigger hate speech attacks. This dangerous situation can also limit the development of social media platforms and web technologies. Consequently, the issue must be governed.

Attempts at Regulation and Governance

Regarding hate speech, the major social media platforms have all proposed governance methods. The most common is to monitor sensitive words through big data and algorithms, and to delete hate speech once it is found. Platforms warn users who post hate speech and, when repeated warnings fail, block and label their accounts. However, these solutions are not truly effective.
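The monitoring flow described above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not any platform's actual system: the blocked terms, the warning threshold, and all function names are hypothetical, and real platforms combine such keyword matching with machine-learning classifiers and human review.

```python
# Hypothetical sketch of keyword monitoring with a warn-then-ban
# escalation. BLOCKED_TERMS and WARNING_LIMIT are illustrative.
from collections import defaultdict

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder sensitive words
WARNING_LIMIT = 3                   # warnings before an account is banned

warnings = defaultdict(int)  # user -> number of warnings issued
banned = set()               # users whose accounts are blocked

def moderate(user: str, post: str) -> str:
    """Return the action taken for a post: 'ok', 'warned', or 'banned'."""
    if user in banned:
        return "banned"
    words = set(post.lower().split())
    if words & BLOCKED_TERMS:  # post contains a blocked term
        warnings[user] += 1
        if warnings[user] >= WARNING_LIMIT:
            banned.add(user)   # repeated warnings failed: block the account
            return "banned"
        return "warned"
    return "ok"
```

Even this toy version makes the weaknesses discussed below visible: exact-match filtering is trivially evaded by obscure spellings of a blocked term, and a banned user can simply reappear under a new account name with a fresh warning count.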

[Figure: Actioned hate speech content items on Instagram worldwide (Dixon, 2023)]
[Figure: Actioned hate speech content items on Facebook worldwide (Statista Research Department, 2023)]

As the two graphs above clearly show, despite Facebook’s and Instagram’s continuous removal of hate speech, new hate speech has not stopped appearing; removal alone does not solve the problem. Banning and labeling have no significant impact on posters, who can simply register a new account after a ban and continue posting. Restricted sensitive words can also be replaced with more obscure substitutes. The root cause is that the current regulation of social media platforms is imperfect. Aim et al. (2021) state that Facebook’s existing monitoring procedures are insufficient to deal with rapidly expanding hate speech, and that procedures for identifying hate speech and its triggers need further improvement and development. Flew (2021) likewise argues that regulatory approaches to social media platforms should consider multiple fields, such as economics and politics, rather than treating them as a single factor when formulating policy. What these platforms need, therefore, is to clarify the rules and improve the chain of responsibility (Matamoros-Fernández, 2017). In other words, the industry needs complete, comprehensive rules that every platform implements, with each party positioned along a clear chain of responsibility. Clear standards make platforms more aware of what to do, and the chain of responsibility makes them more active in completing and improving the corresponding work.

Beyond the platforms themselves, users should improve their digital literacy. Every social media user should be aware of the serious consequences hate speech can have, and organized discussions and publicity on the topic can deepen that understanding. Users can also try to help when they encounter hate speech on social media platforms. For example, in 2022 the Call It Out campaign was launched by UTS in partnership with the National Justice Project. It created an online registry to help people who have experienced racism share and document their experiences. Many non-Indigenous people participated as witnesses, offering encouragement and help to those who were discriminated against. Allison (2022) states that this witness participation played an important role, and that their help and encouragement contributed significantly to exposing racism. The experiences and support shared by victims and witnesses of hate speech may be more effective in reducing its serious consequences.

Government legislation is also an effective support for the regulation and governance of hate speech. As noted above, many people avoid real-life punishment and retaliation by posting hate speech on social media; improved legislation can effectively combat this. Dealing with offenders through the law, with corresponding punishment, is more effective than merely banning their online accounts. Australia’s Online Safety Act, released in 2021, clearly sets out relevant rules and penalties for such content. Its description of enforcement measures shows that if a poster still fails to act after a final notice is issued, coercive measures will be taken, up to and including court injunctions, civil penalty orders, and financial fines (Australia’s Online Safety Act, 2021). Stiffer penalties may deter some people from posting hate speech on social media platforms, and they also improve the efficiency with which platforms handle related issues.


Hate speech that exploits the features and functionality of social media platforms continues to evolve into a serious online safety concern, one that negatively affects users’ real lives and even their health. Strengthening its supervision and governance has become a challenge that major platforms and users must face. Existing methods of governance and regulation are not enough to fully solve the problem; platforms, users, and governments should all make new adjustments and attempts to improve it. Upgraded algorithms and regulatory procedures on the platforms, clear industry rules, improved digital literacy among users, and relevant government legislation are all effective attempts. If social media platforms, users, and governments all contribute to this issue and cooperate with one another, hate speech on these platforms may gradually decrease or disappear, and social media platforms and Internet technologies will have a better environment in which to develop.


Allison, F. (2022, December 6). Call it out. Antar.

Australian Government. (2021, November). Cyberbullying Scheme Regulatory Guidance. eSafety Commissioner.

Aim, S., Fiona R, M., Katharine, G., & Kirril, S. (2021). Facebook: Regulating hate speech in the Asia Pacific [Report].

BBC News. (2019, October 14). K-pop star Sulli found dead aged 25. BBC News.

BBC News. (2022, January 25). Liu Xuezhou: Outrage over death of “twice abandoned” China teen. BBC News.

Dixon, S. (2023, March 9). Actioned hate speech content items on Instagram worldwide from 4th quarter 2019 to 4th quarter 2022. Statista.

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms. Polity.

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946.

Statista Research Department. (2022, August). Share of Facebook-listed white supremacist groups that have a presence on the platform worldwide as of June 2022. Statista.

Statista Research Department. (2023, March 9). Actioned hate speech content items on Facebook worldwide from 4th quarter 2017 to 4th quarter 2022. Statista.
