A critical analysis of hate speech


In today’s era of extremely rapid information dissemination, the Internet has become ubiquitous, and the lives of countless ordinary people are now linked to it. The development of the Internet does bring convenience to our lives, but the problems that accompany it cannot be ignored. Online hate speech occurs regularly, touches our daily lives, and has become a problem that ordinary people cannot overlook. A flood of posts appears on social media at every moment, and none of it is rigorously vetted. Among them, countless discriminatory and hateful remarks are amplified without limit on social platforms, seriously harming innocent individuals and groups in the real world. Whether the target is gender, race, sexual orientation, or something else, no one can escape the invisible attacks of the network.

Hate speech is fundamentally at odds with our values of freedom, civility, and equality. Yet although hate speech is widely recognized as wrong, its governance online remains contested around the world. Differences in cultural background, legal systems, religions and organizations, and social attitudes across countries make the formulation and implementation of specific policies difficult (Citron, 2014).

This article analyzes the harm caused by hate speech on the Internet and, through a specific case, reveals the deeper causes of this phenomenon and its impact on ordinary people and society. It then explores what policies or social actions could reduce hate speech and build a more inclusive online environment.

Background and significance

Online hate speech is speech posted or disseminated online that contains offensive, discriminatory, hostile, or insulting content (Waldron, 2012). Such speech is not only a form of verbal violence but also a harm to society. Hate speech is especially prevalent around topics such as race, gender, religion, and sexual orientation, and because of the sensitivity of these topics, the social conflicts it provokes are especially serious. Online speech has therefore become an issue that demands worldwide attention.

In today’s globalized, information-saturated world, we can see what people on the other side of the ocean post without leaving home. The speed and reach of hate speech on the Internet far exceed those of traditional hate speech. Communication between people has fundamentally changed: people can find any information they want on the web, and they can also share information themselves. Online, people can choose to post anonymously, which leads some users to feel free to express any opinion (Woods & Ruscher, 2021), sometimes crossing moral and legal boundaries by posting discriminatory and offensive comments. The impact of such verbal attacks on victims is enormous and far-reaching.

The impact of hate speech on individuals is mainly psychological. People who suffer online abuse for a long time develop anxiety, depression, and other symptoms, and in more serious cases their lives and work are affected. Moreover, the Internet can be deceptive: many people who do not know the truth are easily misled by false statements and become unwitting accomplices of the perpetrators. In recent years there have been several incidents in which people initially branded as wrongdoers later turned out to be victims. Ordinary netizens have grown more vigilant and no longer believe every online rumor, but truly preventing such incidents from recurring depends mainly on the formulation of laws.

First, public awareness of online hate speech should be raised. Second, all sectors of society must work together to publicize its harms and teach people how to respond to these problems and protect their legitimate rights and interests. Online platforms should also shoulder their share of responsibility, conducting stricter reviews and formulating stricter rules for the speech published on them. By these means, we can build a healthier and freer Internet environment.

Case study

In August 2017, a deadly incident occurred in Charlottesville, Virginia, when a rally called “Unite the Right” turned violent. Rallies like this are not unusual in the US, but in this case the repercussions were severe: a right-wing extremist drove his car into a crowd of counter-protesters, killing one person and seriously injuring several more. The episode laid bare the depth of political division and racial intolerance in the US, and the confrontation was fueled above all by the hate speech that radicals had posted online.

The incident began as a protest against the removal of a statue of Confederate General Robert E. Lee. Unfortunately, the demonstration swiftly descended into violence as right-wing organizations, including neo-Nazis and white supremacists, clashed with counter-protesters.

In the beginning, the two sides mainly traded hate speech on Facebook and Twitter, and ultimately organized and coordinated their actions through these same platforms, including setting the time and place of the rally. It is not hard to see that social media played an important role in the event: it spread information about the rally, inflamed ordinary people’s emotions, and mobilized supporters.

After this incident, social media companies faced enormous public pressure, and the reasonableness of their platform rules was questioned: why did the hate speech spread widely instead of being detected and dealt with properly in the first place? Facebook and Twitter eventually deleted posts and information related to the incident, but no deletion can bring back the lives that were lost, and this must command our attention. The influence of social media is so great that the platforms have had to re-examine their responsibilities and role in cyberspace. How to balance freedom of speech against the regulation of hate speech, and whether special policies should be formulated to prevent similar incidents, are questions the platforms must consider.

Charlottesville is a cautionary tale about the complex role of online platforms in modern social conflict. It’s not just a technical problem, it’s a cultural and political challenge. Social media can become a tool for extremism, which also means that social media has a huge social responsibility in the public sphere. In the future, platforms need to be more proactive in working with various stakeholders to develop and implement strategies aimed at mitigating and preventing hate speech and other online harm.

Digital policy and governance analysis

Nowadays, the Internet is woven into people’s lives, and online platforms have become the main venues for publishing information. The rapid development of the Internet has brought convenience to people’s lives, but it has been accompanied by a growing number of hate speech and online-harm incidents. Platforms have so far failed to develop strong policies to prevent such incidents, and governments and international organizations are still seeking solutions. At the same time, social platforms must recognize their growing regulatory responsibilities.

Current digital policies vary across the globe, but they share a common goal: to reduce hate speech and the harm it causes (Park et al., 2023). For example, the European Union’s Digital Services Act (DSA) requires large online platforms to strengthen content moderation and promptly remove illegal content, including hate speech and disinformation. The United States, by contrast, grants social platforms greater content freedom through Section 230 of the Communications Decency Act. However, the passage of these laws has caused considerable controversy, and many blank spaces remain in digital policy.

Within the existing system, the responsibilities of platforms are becoming clearer. Platforms have a duty of self-monitoring: they should remove harmful speech, improve transparency, and make their content-management rules public. Facebook, for example, has implemented a combination of sophisticated algorithms and human review to identify and remove hate speech, but its effectiveness and bias are often questioned, and it clearly still faces huge challenges (Sinpeng et al., 2021). To make real progress against hate speech, there must also be clarity at the legal level.
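Facebook’s actual moderation systems are proprietary, so as a purely illustrative sketch, a pipeline of the general shape described above (automated scoring plus a human-review queue) might look like the following. All names, keyword lists, and thresholds here are hypothetical assumptions, not the platform’s real values, and a real system would replace the toy keyword score with a trained classifier:

```python
# Hypothetical sketch of automated flagging plus a human-review queue.
# Thresholds, keywords, and function names are illustrative only.

AUTO_REMOVE_THRESHOLD = 0.9   # high-confidence hits: removed automatically
REVIEW_THRESHOLD = 0.5        # ambiguous cases: routed to human reviewers

# Stand-in for a trained ML classifier: a toy keyword score.
SLUR_KEYWORDS = {"<slur-1>", "<slur-2>"}  # placeholder tokens

def hate_score(text: str) -> float:
    """Return a score in [0, 1]; a real system would use an ML model."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SLUR_KEYWORDS)
    return min(1.0, hits / len(words) * 5)

def triage(posts: list[str]) -> dict[str, list[str]]:
    """Split posts into auto-removed, human-review, and published buckets."""
    buckets: dict[str, list[str]] = {"removed": [], "review": [], "published": []}
    for post in posts:
        score = hate_score(post)
        if score >= AUTO_REMOVE_THRESHOLD:
            buckets["removed"].append(post)
        elif score >= REVIEW_THRESHOLD:
            buckets["review"].append(post)
        else:
            buckets["published"].append(post)
    return buckets
```

The two-threshold design is the key idea: automation handles the clear cases at scale, while the ambiguous middle band, where the bias concerns raised by Sinpeng et al. (2021) are most acute, is deferred to human judgment.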

Taking the Charlottesville incident as an example, the platforms failed to control and manage the release of hate speech in its early stages, which ultimately contributed to the violence. The convenience and anonymity of platforms such as Facebook have expanded freedom of expression to some extent, but they have also made it difficult to hold people accountable for inappropriate speech. Although Facebook removed the hate speech after the incident and strengthened its management in this area, it remains difficult to fundamentally prevent such speech from being published.

In short, there is a need to strengthen international cooperation to develop uniform regulations, and to advance the relevant technologies so that hate speech can be identified and addressed more accurately. But we must also mind the boundaries: regulation should not interfere with legitimate freedom of speech or infringe on people’s lawful rights and interests.

Public attitudes and platform policies

Public attitudes are now a major factor shaping the policies platforms adopt. As the Internet’s influence on people grows, the public is increasingly concerned about the boundary between platform regulation of online hate speech and freedom of speech. There is no doubt that the public strongly opposes online hate speech. Most people favor freedom of speech, but hold that it should stay within moral and legal limits and should not include offensive and insulting speech. Public opinion has undoubtedly influenced the rules of the major social platforms: Facebook and others have strengthened their online policing, improved their censorship technology, and expanded their human-review efforts (Schoenebeck et al., 2023).

However, a minority hold a different view. They argue that platform rules on hate speech violate their freedom of speech by preventing them from expressing their views freely (Quinn, 2019). Excessive content monitoring can hurt innocent people and inhibit reasonable public discussion, especially around sensitive issues. These two opposing viewpoints force platforms to continuously enhance their technical capabilities and institutional design. Instead of erasing all such material, a platform might, for instance, implement a more fine-grained content-tagging system that lets users select content according to their preferences. At the same time, it should promote openness and candidly disclose the criteria and details of how its policies are enforced.
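The tagging alternative described above can be made concrete with a small sketch. This is a hypothetical data model, not any real platform’s API: borderline posts are labeled rather than deleted, and each user’s feed is filtered against the tags that user has chosen to hide:

```python
# Hypothetical sketch of preference-based content filtering: instead of
# deleting borderline posts, the platform tags them and each user decides
# which tags to see. Tag names and the data model are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    tags: set[str] = field(default_factory=set)  # e.g. {"sensitive"}

@dataclass
class UserPrefs:
    hidden_tags: set[str]  # tags this user has opted out of seeing

def visible_feed(posts: list[Post], prefs: UserPrefs) -> list[Post]:
    """Show a post only if it carries none of the user's hidden tags."""
    return [p for p in posts if not (p.tags & prefs.hidden_tags)]
```

The design choice here is that the post itself is never removed, so the free-expression objection is blunted: the speech stays available to users who have not opted out, while those who object never see it.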


Conclusion

Hate speech on the Internet is becoming a serious problem, so we need to address it head-on and take proactive steps to change the current situation. First, legal reforms are needed to improve international collaboration and legislative coherence. Next, platforms must enhance their algorithmic technology for detecting hate speech and enable third-party screening, so that the right to free expression is preserved as far as possible while the law and moral principles are respected. Finally, public awareness campaigns should be improved, and people should be given a clear way to report offensive comments. Through these measures, we can reduce hate speech and make the Internet safer.


References

Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.

Park, A., Kim, M., & Kim, E.-S. (2023). SEM analysis of agreement with regulating online hate speech: Influences of victimization, social harm assessment, and regulatory effectiveness assessment. Frontiers in Psychology, 14, 1276568.

Quinn, P. (2019). Stigma, state expressions and the law: Implications of freedom of speech. Routledge.

Schoenebeck, S., Lampe, C., & Triệu, P. (2023). Online harassment: Assessing harms and remedies. Social Media + Society, 9(1). https://doi.org/10.1177/20563051231157297

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.   

Waldron, J. (2012). The harm in hate speech. Harvard University Press.

Woods, F. A., & Ruscher, J. B. (2021). Viral sticks, virtual stones: addressing anonymous hate speech online. Patterns of Prejudice, 55(3), 265–289. https://doi.org/10.1080/0031322X.2021.1968586  
