With the rapid development of the Internet, we can communicate instantly with people worldwide through social media. In many ways, the advent of social media has benefited society: the rapid dissemination of information helps us share ideas and access knowledge more effectively. However, the rise of social media has also changed how we interact with politics and public affairs (O’Regan, 2022), and it has increased the Internet’s potential to harm society. Harmful messages such as hate speech and incitement to violence are spreading across social media, and the amplification of hate speech through digital platforms has become a global problem.
Survey data suggests that in 2017, 41% of internet users had experienced online harassment, with 18% experiencing severe forms such as physical threats and persistent harassment (Flew, 2021). At the same time, we place great importance on freedom of expression, which is widely regarded as an expression of human development and freedom of thought.

Moreover, the widespread use of social media has facilitated the exchange and development of human culture. However, the imbalance between promoting freedom of expression and enforcing moderation has allowed hate speech to flood every corner of digital platforms. Hate speech and other online harms destabilize society, breeding mistrust and animosity. International law already recognizes this danger:
“Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law” (International Covenant on Civil and Political Rights, 1966).
Many people therefore argue that, because all of this content is posted on digital platforms and social media, the platforms have regulated hate speech poorly and should be held responsible for everything their users post. But is it really the digital platforms that are responsible for the spread of hate speech?
What are big tech companies currently doing to regulate ‘hate speech’ online?
Facebook’s approach, for example, relies on artificial intelligence to flag keywords in the content users post. Content that violates community standards and contains hate speech is removed immediately, and warnings are issued to the posters. Yet even for a technology company of Facebook’s size, hate speech cannot be regulated solely by filtering keywords and flagged phrases with AI: such an approach inevitably restricts some legitimate expression, and these companies struggle to find a balance between freedom of expression and restricting hate speech.
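To see why keyword filtering alone both over-blocks and under-blocks, consider this deliberately simplified sketch. It is not Facebook’s actual pipeline; the blocked-term list, function name, and example posts are invented purely for illustration.

```python
# Illustrative only: a naive keyword filter, NOT Facebook's real system.
# The blocked-term list is a hypothetical placeholder.
import re

BLOCKED_TERMS = {"offensiveterm1", "offensiveterm2"}

def moderate(post: str) -> str:
    """Return a crude moderation decision for a single post."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    if words & BLOCKED_TERMS:
        return "remove_and_warn"  # keyword matched; context is ignored
    return "allow"

# The essay's point in miniature: the filter cannot see context.
print(moderate("A news report quoting offensiveterm1"))    # removed (false positive)
print(moderate("A coded insult using no listed keyword"))  # allowed (false negative)
```

Real systems layer machine-learned classifiers on top of lists like this, but as the report discussed below shows, even those models struggle with context and local knowledge.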

Facebook’s Community Standards, for example, make clear that Facebook expects users to talk openly and honestly about issues that concern them, even if some of the content is offensive to some people. In some cases, newsworthy content in the public interest is allowed to remain even though it violates the Community Standards (Facebook Community Standards, 2022). Therefore, to preserve freedom of expression as far as possible, Facebook applies a strict process for reviewing content. Since most hate speech must be judged in context, Facebook has designed a content-policing ecosystem in which three departments – the Public and Content Policy team, the Community Integrity team, and the Global Operations team – jointly monitor content and make review decisions (Sinpeng et al., 2021).
These measures have gone some way toward limiting the spread of hate speech. After the attack on the US Capitol, for example, several tech companies banned President Trump’s accounts from their platforms and cut ties with Parler, whose app had spread violent rhetoric before the riot; Google, Apple, and Amazon then removed Parler from their app stores and hosting services (Arntsen, 2021). The incident showed how quickly and uniformly the major technology companies can react to hate speech. So if these digital platforms have taken such solid initiative to curb the spread of hate speech, why does ‘hate speech’ continue to exist? Are the platforms simply not taking enough responsibility?
Why can’t digital platforms control the spread of all ‘hate speech’?
Consider the censorship and regulation of hate speech in the Asia-Pacific. The report “Facebook: Regulating Hate Speech in the Asia Pacific” examines how hate speech is moderated across the region and identifies the following reasons why the platform has been unable to control its spread.
1. Facebook cannot effectively capture hate speech with its existing classifiers, because much of the content requires local knowledge to be identified and judged correctly.
2. There is little hate speech legislation in the Asia-Pacific region, and much of it remains at the bill stage, making it difficult for Facebook to set enforcement standards.
3. Hate speech against LGBTQ+ people often arises in countries where LGBTQ+ issues are politicized at the national level.
4. Some Facebook page administrators lack the knowledge of local politics, language, and culture needed to make sound judgments (a simplified sketch of this routing problem follows this list).
5. “Reporting fatigue” also discourages administrators from reporting offending content (Sinpeng et al., 2021).
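Reasons 1 and 4 can be made concrete with a small, hypothetical sketch: a moderation queue that routes flagged posts to reviewers who know the post’s locale, and falls back to a global queue when none is available. Nothing here reflects Facebook’s real system; the class, locales, and reviewer names are invented to illustrate the report’s point that judging hate speech requires local context.

```python
# Hypothetical sketch: routing flagged posts to locale-aware reviewers.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    # Reviewers keyed by locale, e.g. "fil-PH", "hi-IN", "en-AU".
    reviewers: dict[str, list[str]] = field(
        default_factory=lambda: defaultdict(list)
    )

    def add_reviewer(self, locale: str, name: str) -> None:
        self.reviewers[locale].append(name)

    def route(self, post_locale: str) -> str:
        """Prefer a reviewer who knows the post's language and politics."""
        if self.reviewers.get(post_locale):
            return f"assign to {self.reviewers[post_locale][0]} ({post_locale})"
        # Fallback: a reviewer without local knowledge -- exactly the
        # failure mode the report describes.
        return "assign to global queue (no local context available)"

queue = ModerationQueue()
queue.add_reviewer("en-AU", "reviewer_a")
print(queue.route("en-AU"))   # local reviewer found
print(queue.route("fil-PH"))  # falls back; judgment quality suffers
```

The design point is simply that accurate review is a staffing and context problem as much as a technical one, which is why the report’s findings vary so much from country to country.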
In an analysis of different Asia-Pacific countries, the LGBTQ+ community in the Philippines reported the highest exposure to hate speech. This is because the country has no legislation that directly addresses hate speech, and its constitution tends to be strongly protective of free speech.
In India, by contrast, the Supreme Court legalized homosexuality in a September 2018 ruling, which has reduced some of the hate speech against LGBTQ+ people. Much of the remaining hate speech against LGBTQ+ people in India targets Muslims, driven largely by historical legal discrimination and religious and political tensions. Even so, the country has controlled the incidence of hate speech relatively well, thanks to its large number of page administrators.
Protections for the LGBTQ+ community in Australia are among the strongest in the Asia-Pacific region. They include not only the Racial Discrimination Act 1975 but also 2018 changes to the Crimes Act that made overt threats and discrimination based on sexual orientation and gender identity illegal. Australia not only has a relatively robust legislative framework but also its own social media laws, with provisions that specifically target online hate speech. So even though Australia produced the highest number of comments in the case study, there was no significant indication of LGBTQ+ hate speech. Even though the only reviewer interviewed was not professionally trained, the level of LGBTQ+ hate speech in Australia remained extremely low (Sinpeng et al., 2021).
The findings of this report show that although social media platforms are the central managers of hate speech, technology companies alone cannot review it effectively, because hate speech depends so heavily on the culture and context of each country. Another important reason is that countries legislate hate speech differently, and some have no specific laws governing it at all; in those countries hate speech proliferates, and censorship by technology companies alone cannot stop its spread. The underlying causes are the rapid growth of global information technology, which has made freedom of expression especially important on social media, and the imperfect state of hate speech legislation across countries and regions, which makes it difficult for platforms to impose substantial restrictions on users. Cultural and political factors sit at the heart of the problem.
What are the keys to controlling the spread of ‘hate speech’? What should we do?
The root of the problem is the difficulty of balancing the regulation of hate speech with freedom of expression. The fundamental reason many countries have struggled to legislate against hate speech is that different ideologies and governments disagree on where the baseline lies. Responding to the violent riot at the US Capitol, David Lazer, a professor of political science and computer and information science at Northeastern University, noted in an interview that most existing laws were written to deal with hate crimes, and that laws addressing hate speech and bullying on the Internet need updating. However, leaving regulation entirely to government is not wise either: it is doubtful whether rules set by the appointees of a president like Donald Trump could credibly police misinformation (Arntsen, 2021).

Legislation to regulate hate speech should rest on an environment that genuinely protects freedom of expression, and where necessary, multiple countries should negotiate shared standards for regulating hate speech. More important still, curbing the spread of hate speech must start from a country’s cultural environment, using policy guidance to change the views of the small number of extremists in society. Politicians should not restrict freedom of expression according to their own far-left or far-right positions.
As Zuckerberg once suggested, platforms should create an independent body to which users can appeal content decisions, work closely with governments to ensure the effectiveness of review systems, and let third-party agencies set the standards for harmful content, establishing a baseline of what can be banned and keeping harmful content to a minimum (Crews, 2019). This is a sound suggestion: social media platforms should never abandon the protection of freedom of expression, and having third-party agencies jointly set standards for harmful content is critical both to protecting free expression and to managing hate speech in line with each country’s culture.
So responsibility for the spread of ‘hate speech’ cannot rest with technology companies or legislation alone. It must fundamentally account for each country’s cultural and political climate, combining technological regulation by the platforms with national legislation and standards for ‘hate speech’ negotiated jointly with third-party organizations. Only in this way can we balance preserving freedom of expression with managing hate speech on the Internet.
The challenges and future of ‘hate speech.’
With the rapid development of technology and the Internet, language and hate speech are no longer confined by region. In this highly globalized and interconnected environment, we must develop minimum rules for the entire Internet to regulate hate speech, extending international law into the governance of the Internet. In today’s social media boom, technology companies need to regulate hate speech in ways better tailored to each country’s context. This calls for a three-way partnership between governments, partner organizations, and technology companies.
Moreover, civil society should also work in partnership with governments to help tackle hate speech and violence. In this era of globalization, we cannot rely on any one sector or group to take sole responsibility for the spread of ‘hate speech’; every stakeholder must work together to govern our Internet. It is the responsibility of each of us to regulate our own speech and to spread ideas of love and peace.
References:
Arntsen, E. (2021) Hate thrives on social media – but who should police it?, Northeastern Global News. Available at: https://news.northeastern.edu/2021/01/12/hate-thrives-on-social-media-but-who-should-police-it/ (Accessed: April 11, 2023).
Facebook Community Standards (2022) Transparency Center. Available at: https://transparency.fb.com/en-gb/policies/community-standards/ (Accessed: April 11, 2023).
Flew, T. (2021) Regulating Platforms. Cambridge: Polity, pp. 91-96.
Crews, C.W., Jr. (2019) How regulation of “harmful speech” online will do the real harm, Forbes. Available at: https://www.forbes.com/sites/waynecrews/2019/12/02/how-regulation-of-harmful-speech-online-will-do-the-real-harm/?sh=38774dca5e96 (Accessed: April 11, 2023).
O’Regan, C. (2022) Hate speech regulation on social media: An intractable contemporary challenge, Research Outreach. Available at: https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/ (Accessed: April 11, 2023).
Sinpeng, A. et al. (2021) Facebook: Regulating Hate Speech in the Asia Pacific, Asia Pacific Centre for the Responsibility to Protect. Available at: https://r2pasiapacific.org/ (Accessed: April 11, 2023).