Introduction
With the development of social networks, the Internet provides a space for the public to speak freely. Under Article 19 of the UN International Covenant on Civil and Political Rights, citizens' freedom of expression on the Internet is a right to be defended. However, this right currently lacks sufficient regulation, because freedom of speech does not in itself limit the subject matter of what people say. Although the Covenant also clearly states that citizens should not publish hate speech or rumours against individuals or groups (Flew, 2021), freedom of speech conflicts with restricting the content of speech, so the online harm caused by hate speech has always been difficult to monitor.
At the same time, the range of Internet users is vast. As Flew (2021) notes, hate speech does not by itself incite public violence, but the two are often associated. Because hate speech targets groups or individuals with particular characteristics, and users hostile to those characteristics often exist at scale, the result can be collective hate speech and online harm. Since such speech usually revolves around sensitive topics such as gender, race and sexual minorities, Internet participants and social platforms have been looking for appropriate regulatory methods to intervene, regulate platform content, protect vulnerable groups and reduce the production of such speech. To govern hate speech and online harm, both the United Nations and national legislatures have enacted laws on Internet use. Alongside lawmakers, platforms have also made their own efforts: social platforms such as Facebook can flag posts containing sensitive information, and these platforms have designed algorithms to detect such material and break links to it (Matamoros-Fernández, 2017).
Why digital platforms empower users to regulate
Regulations formulated by governments and algorithms developed by platforms to screen out inappropriate speech are the main governance options for hate speech and online harm at this stage, but both have flaws. First, given the diversity and complexity of online speech, it is difficult for legislation to cover every detail precisely. Moreover, policy, as a governance instrument with a very broad scope, needs to be highly inclusive, not least because some political campaigns themselves rely on digital platforms. In Canada, for example, social platforms have been used to advance the interests of Aboriginal people (Carlson & Frazer, 2018). Since most countries with colonial histories, as well as immigrant nations, pay attention to race-related demands, they preserve the public's right to discuss racial issues on social media. To regulate hate speech and online harm around these issues, governments therefore also need to rely on platform algorithms to screen relevant content. In this sense, policy-making groups are themselves users of social platforms, depend heavily on the platforms' public functions, and thus rely on the platforms' capacity for self-regulation to deal with hate speech (Flew, 2021). Beyond building algorithms to screen content, there is also invisible work that must be done by professional teams: Internet companies hire moderators to screen, review and judge platform content, and these review teams may even have to monitor the platform around the clock (Roberts, 2019). Developing algorithms and employing moderation teams create additional overhead for Internet companies, so social media platforms have developed features that hand over some of this governance to users.

Take Twitter as an example. The platform allows users to report posts they believe contain inappropriate content: a user can report a Tweet as offensive or misleading and ask Twitter to remove it for the benefit of individuals or specific groups. The same feature exists on other digital platforms, meaning that users can report offensive material at any point while browsing, and the content will be removed once the platform deems it unfit for publication. If a user is found to post too much inappropriate content, other users can report that account directly for inappropriate behaviour and ask the platform to restrict it.
This function has been widely accepted by Internet users, mainly because the algorithms of digital platforms cannot accurately identify all inappropriate content. Matamoros-Fernández (2017) describes a controversy over racism on online platforms in which the platforms' moderation mechanisms were seen as disrespectful of cultural diversity: the algorithms, calibrated to the value system of Western society, marked Indigenous cultural content as sensitive, while failing to accurately identify speech that caused online harm, especially racist content attacking Aboriginal people. One reason for this debate may be that current Internet culture is dominated by Western culture. Platform algorithms lack understanding of and sensitivity to minority cultures, and may at times even amplify or produce racist discourse. Moreover, platform reviewers from different cultural backgrounds are subjective when reviewing content and so cannot always identify racist speech promptly and accurately. In this situation, if Aboriginal users can act as regulators and report racist content as they encounter it, the platform has a chance to re-examine and evaluate that content from a different cultural perspective. The status quo of racism on digital platforms is then more likely to improve, which can also strengthen the loyalty of users from different cultural backgrounds to the platform.
Nevertheless, the regulatory powers that such platforms confer on users are still liable to be abused. A noteworthy aspect of Facebook's governance of hate speech in the Asia-Pacific region is its reliance on third-party reviewers to moderate hate speech and online harm against LGBTQ+ groups, yet only a small number of these reviewers have received professional training, and some have little or no expertise in moderating Internet content (Sinpeng et al., 2021). The situation is very similar when regulatory powers are handed to Internet users: the vast majority of users have never been trained in content moderation, and the right to police Internet content can become a weapon with which some users attack opposing groups or individuals. This defeats the platform's original purpose in empowering users to govern, and may even produce more online harm.
A relevant case from Thailand

A case that occurred in Thailand in January 2023 well reflects the impact of giving Internet users governance rights. Thai actor Jakapan Puttha, also known as Build, was accused by Ms Patchayamon Theewasujaroen, also known as Poi, of plagiarizing project inspiration, demanding expensive gifts and behaving violently in their intimate relationship, and Poi provided a series of pictures and screenshots as evidence (Ichchha, 2023). Although some of this evidence was later confirmed to have been falsified, social and cultural conditioning leads the public to assume by default that women are the more likely victims of violence in intimate relationships, perhaps because the accuser, Poi, is female. The allegation that Build might have plagiarized was quickly ignored, and most users began attacking him over the allegation of violence against women. As the party involved, Build and his agency did not respond immediately. After public opinion erupted, a great deal of hate speech and harmful online commentary attacked him in the name of defending women's rights, and some social media users raised new accusations against him. Some fans, however, chose to believe Build, reported the attacks on him and kept posting Build-related content on social media to maintain his popularity, while other fans continued to post hate speech against Poi. Subsequently, Build filed a lawsuit against the publishers of the hate speech and rumours, and a Thai court ordered the rumourmongers to issue a formal public apology and clarification. Notably, some of the users spreading rumours were minors. The incident is still awaiting further hearings and a final ruling by the court.
It is true that violence against any gender is wrong under any circumstances, and both hate speech and online harm damage the Internet environment. The well-known online platform Reddit is regarded as a platform with an anti-feminist culture: even though the platform compresses the space available to women and racial minorities, participants in its anti-feminist technoculture events still believe their behaviour is acceptable and unproblematic. This toxic culture embodies the character of Reddit's platform politics (Massanari, 2017). A similar toxic culture exists across the wider Internet, for instance the use of hate speech and online harm to attack any behaviour that appears anti-feminist. In the case of Build and Poi above, the first reaction of many users was to attack and report users or groups they thought might be harming women's rights, even though they did not yet know the court's ruling and could not be sure whether the evidence Poi provided was true. This behaviour was undoubtedly public violence against Build. At the same time, as a celebrity Build has a large fan base with its own code of conduct: the fan group used the reporting function of social media to report whatever it considered unfavourable to Build, and also directed hate speech and public violence at Poi. This too is part of the toxic culture of social media. While large-scale public violence and hate speech of this kind fuels a toxic Internet culture, it also burns through the public attention available to sensitive topics, so that topics supporting minority groups, such as feminism, come to be deemed meaningless by social networks and are further ignored.
Conclusion
Users who actively or passively participate in hate speech and online harm act according to their own subjective judgment, and their main code of conduct is to defend the party they support. In the case above, neither side correctly exercised its right to govern and regulate speech on the platform; instead, both used their supervisory power to promote online harm, with a negative impact on the Internet environment. Based on the cases discussed, therefore, platforms should not give users too much regulatory power. Users are more sensitive to inappropriate remarks related to their own cultural background, but it is undeniable that, as a non-professional group, they cannot set aside that background, exclude their own subjective ideas, and review hate speech and online harm neutrally. The "reporting" function designed for the public should nonetheless be retained, and platforms should pay more attention to the content users report and use it to improve their algorithms. Excessive reliance on digital platforms to regulate hate speech and online harm may further consolidate the platforms' hegemony in the digital environment. Compared with broad policy supervision and highly subjective user supervision, however, platform supervision is still the most effective option at present, and the approach to supervising inappropriate online speech that may best protect the interests of most people.
References
- Carlson, B., & Frazer, R. (2018). Online political activism. In Social Media Mob: Being Indigenous Online (pp. 17–20). Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online
- Flew, T. (2021). Issues of concern. In Regulating Platforms (pp. 72–103). Cambridge: Polity.
- Ichchha. (2023, January 25). Explained: What happened to Build Thai actor Jakapan Puttha? Genius Celebs. Retrieved April 10, 2023, from https://geniuscelebs.com/what-happened-to-build-thai-actor-jakapan-puttha/
- Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi-org.ezproxy.library.sydney.edu.au/10.1177/1461444815608807
- Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
- Roberts, S. (2019). Introduction: Behind the Internet. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 1–19). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300245318-001
- Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Department of Media and Communication, University of Sydney, and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf