Hate Speech and Online Harm: Opportunities and Challenges for Internet Governance


Hate speech and online harm are unavoidable topics in any discussion of the Internet and its culture. While there is no denying that this open platform provides a public space for free expression, the question remains whether and how that expression should be governed. Because the Internet allows people to publish their opinions freely across an essentially unbounded domain, Carlson and Frazer (2018) argued that appropriate measures need to be implemented to ensure that online speech is used responsibly and does not result in harm to particular individuals or groups. This blog analyses, through specific examples, how Internet governance addresses both the opportunities and the challenges posed by hate speech and online harm.

I. Opportunities

a. Existing Opportunities on the Internet
The Internet offers a wealth of opportunities across many areas. According to Jenkins (2006), one of its key benefits is that it enables individuals to share their opinions freely, without being limited by geographical location, social status or cultural background, so that people can exchange information and ideas and offer diverse perspectives that conventional media cannot easily accommodate. Moreover, social networking platforms make communication simpler than ever before while breaking down barriers between different groups. This kind of open expression can bring unconventional views on culture, politics and society into public debate, fostering innovation in thinking (Papacharissi, 2010). For example, the #MeToo movement, which spread from the United States across social media platforms, raised awareness of sexual assault and harassment. Through these digital channels, individuals found a powerful collective voice that allowed them to connect with others who shared their experiences and critiques, and to press for meaningful change on a broader scale. Used in this way, online tools can promote positive social progress and stimulate lasting reform. Nevertheless, the same freedom becomes dangerous when it is exploited to propagate malicious content and harm others online. The opportunities generated by information sharing on the Internet therefore carry with them the challenge of managing hate speech and online harm.

b. Opportunities for Accelerating Internet Governance
Previously, insufficient attention appeared to be given to addressing hate speech and online harm. However, with growing awareness of the need for self-protection in digital spaces, resistance to such harmful behavior has increased significantly. In response to this trend, Facebook has implemented policies aimed at preventing users from posting racist or hateful comments on its platform in the Asia-Pacific region (Sinpeng et al., 2021). Social media platforms have also been developing technologies and tools that help identify and filter out instances of hate speech and other forms of online harm, while simultaneously strengthening their regulatory mechanisms. Roberts (2019) noted that some companies use artificial intelligence and machine learning techniques to detect inappropriate content quickly. Platforms have likewise implemented protective measures such as blocking and reporting features to safeguard users against online harm (Flew, 2021). Alongside these efforts, governments are also stepping up their actions against hate speech and online victimization. For instance, the Chinese government has regulated Internet content through initiatives such as the real-name system for online accounts and stricter censorship of website material. The Regulations on the Management of Internet Live Streaming Services, issued in 2016, explicitly require compliance with national laws and prohibit disseminating unlawful information or propagating messages that promote violence or hatred based on ethnicity or religion (State Administration for Market Regulation, 2019).
The pervasive presence of hateful speech and harmful incidents online has thus spurred both social media platforms and governments into action, accelerating demands for governance structures and cybersecurity measures that offer better protection against such threats.


II. Challenges

a. Challenges for Internet Platforms
Controlling hate speech effectively is a complex challenge for Internet platforms. These online spaces give users a wide range of channels to voice their opinions, yet the same openness allows harmful content such as hate speech to proliferate, and any attempt at intervention may be met with resistance from users who value that openness.
Managing hate speech and online harm on social media platforms is a challenging task because of the complexities of content review and management. Although these platforms serve as basic channels for information sharing, balancing the interests of multiple stakeholders while managing content efficiently can prove intractable. Flew (2021) noted that social media sites face multiple pressures during content review, including safeguarding freedom of expression, protecting user privacy, and preventing platform abuse. To handle such situations, platforms have applied both machine learning algorithms and manual review, yet they still encounter problems such as algorithmic errors and the high cost of human moderation (Nahmias & Perel, 2021). Furthermore, the blocking functions and reporting mechanisms implemented by social media platforms have their own limitations. As Roberts (2019) showed, some individuals exploit these tools for improper purposes, using them to target other users. Moreover, Massanari (2017) demonstrated how the algorithms used on popular sites such as Reddit can actually exacerbate hate speech by amplifying its reach to a much larger audience.
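The hybrid review process described above can be illustrated with a minimal sketch. The Python snippet below is purely illustrative: the score_toxicity helper (a stand-in for a trained classifier) and the threshold values are hypothetical assumptions of mine, not any platform's actual moderation system. It shows only the general pattern of automated scoring with a human-review fallback for uncertain cases.

```python
from dataclasses import dataclass

# Hypothetical word list; real systems use trained classifiers, not keyword matching.
FLAGGED_TERMS = {"slur1", "slur2", "threat"}

AUTO_REMOVE_THRESHOLD = 0.9   # high confidence: remove automatically
HUMAN_REVIEW_THRESHOLD = 0.5  # gray zone: escalate to a human moderator


@dataclass
class ModerationDecision:
    action: str   # "remove", "review", or "allow"
    score: float


def score_toxicity(text: str) -> float:
    """Toy stand-in for a machine learning model: share of flagged terms in the text."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, hits / len(words) * 5)


def moderate(text: str) -> ModerationDecision:
    """Combine an automated score with a human-review fallback for uncertain cases."""
    score = score_toxicity(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("review", score)  # queued for manual moderation
    return ModerationDecision("allow", score)


if __name__ == "__main__":
    print(moderate("an ordinary comment about the weather"))
    print(moderate("threat threat slur1"))
```

The gray zone between the two thresholds is where the costs and errors discussed above concentrate: widening it increases the human-review workload, while narrowing it shifts more mistakes onto the algorithm.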

b. Challenges of Online Culture
Certain aspects of online culture create openings for hate speech and other harmful activities. For example, Rheingold (1993) showed that the anonymity and community characteristics of online spaces can facilitate the spread of such negative behavior. A study by Carlson and Frazer (2018) found that Australian Indigenous communities on social media suffered attacks not only from individuals but also from large groups of anonymous online mobs spreading hate speech; this group-based, spontaneous or incited hate speech has even evolved into a negative Internet culture of its own. These groups exploit the Internet's communication functions to attack their victims, causing them considerable distress.


c. Challenges from the Impact of Political Stances
Imposing policies on Internet platforms and enforcing regulations can be used to manage hate speech and online harm, but doing so risks curtailing free expression and opens the door to abuse of censorship authority (Flew, 2021). In China, for example, WeChat and Weibo dominate mainstream social media, yet both channels impose limitations that prohibit users from sharing content related to sensitive issues. WeChat and Weibo have, for instance, issued regulations requiring users to verify their identities before posting on sensitive topics (Lanjinger, 2017). The implementation of such policies evidently helps to reduce some irresponsible speech, but it may also suppress legitimate criticism. Another way in which the Chinese government has exerted control over online discourse is by employing teams of designated individuals known as "Internet commentators" (Chinanewscenter, 2020). According to that report, some of these commentators were recruited from within prisons to post messages supporting policy promotion on various forums and social media platforms. The aim, it appears, is to reinforce a political position and ideology, using the instruments of Internet governance to promote political stances, which in turn poses challenges for Internet governance itself.


Furthermore, the content review mechanisms employed by social media platforms in China are prone to political interference because of the government censorship that governs these platforms. The 2016 Cybersecurity Law of the People's Republic of China (The State Council, 2016), for example, requires a comprehensive content review mechanism aimed at stemming the spread of malicious, false, or otherwise undesirable information. Yet while hate speech directed at foreign countries is typically subjected to rigorous scrutiny under prevailing regulations and laws, the same scrutiny may not be applied to speech targeting specific groups within the country's borders, and this can produce skewed outcomes from content review mechanisms that operate without transparency on the part of the platforms. Under such conditions it is difficult to identify the most fitting censorship measures or to establish whether legal provisions are being correctly enforced. Since Internet platforms are not law enforcement agencies but act in a supervisory role, a natural question arises: who ultimately decides whether laws have been violated? A pertinent example occurred during the Hong Kong protest movement in 2019 (Zhao & Xiao, 2019), when numerous mainland Chinese netizens spread hate speech on social media directed at Hong Kong protestors. These posts appeared to be inadequately managed by Internet platforms and served instead to stir up nationalistic sentiment; the resulting remarks and actions harmed those targeted while escalating tensions between the two groups of netizens. In conclusion, the increasing influence of political factors on social platforms has intensified an already contentious debate about how to regulate hate speech and online harm while preserving free speech, safeguarding political interests, and maintaining social order.

Addressing the Internet governance challenges that stem from hate speech and online harm cannot be treated as a short-term fix; it calls for a long-term approach and collaboration across disciplines. The key to success lies in engaging all stakeholders – government, social media platforms, regulators, and civil society alike – in the Internet governance process, with an emphasis on ensuring both freedom of expression and personal safety and security in public online spaces.

III. Summary

Despite the challenges posed by technology and by varying legal regulations across countries, it is imperative that Internet platforms take a proactive approach to regulating hate speech and online harm. While identifying and blocking such content through technology remains challenging, maintaining self-regulation and neutrality can play an instrumental role in building comprehensive measures against these issues. Finally, establishing clear guidelines for acceptable content along with robust review mechanisms will enable platforms to uphold ethical and legal standards while avoiding political biases or influences. Through such concerted efforts, user behavior on digital media can be monitored effectively to create a safer online environment for the public.

It is crucial that platform companies, public institutions, and user-established regulatory bodies all play a significant role in Internet governance together. These entities should take an active approach to strengthening the supervision and management of platforms so as to protect users' interests, freedom of expression, and a common-sense standard of social justice. Administrative departments should prioritize individual civil rights rather than focusing primarily on political ideologies, and should regard it as their responsibility to eliminate the negative influence of hate speech and online harm as far as possible through active control measures.

Additionally, regulatory organizations could act to increase awareness among Internet users by promoting effective ways to regulate their own use of the Internet with regard to hate speech and harmful content. Sharing applicable laws, rules, and protocols with the public, promoting an understanding of online norms, and raising public consciousness of the importance of tolerance and courtesy can all raise users' vigilance when they encounter such issues and serve as preventative measures against hate speech and Internet-related dangers.

Finally, it is important to recognize that Internet governance remains in a constant state of flux, owing to ongoing technological advancement and continuing exploration of digital culture and governance measures. It is therefore essential to review Internet regulations on hate speech and online victimization regularly, while leveraging better technical tools to adapt effectively to the diverse environment of the Internet. Viewed dialectically, the challenges that lie behind these opportunities can themselves be regarded as opportunities.


Reference List

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online

Chinanewscenter. (2020). The 50-cent army in China: Internet commentators in prison – those who curse in comments and under news posts are death-row inmates or prisoners serving sentences, different from ordinary people. https://news.chinanewscenter.com/archives/983

Flew, T. (2021). Regulating Platforms. Cambridge: Polity.

Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. https://dlc.dlib.indiana.edu/dlc/bitstream/handle/10535/18/The_Virtual_Community.pdf

Jenkins, H. (2006). Convergence Culture: Where Old and New Media Collide. NYU Press.

Lanjinger. (2017). Weibo: Users must complete real-name verification before posting and commenting by September 15 this year. https://baijiahao.baidu.com/s?id=1577954731911451374&wfr=spider&for=pc

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's Algorithm, Governance, and Culture Support Toxic Technocultures. New Media & Society, 19(3), 329–346.

Nahmias, Y., & Perel, M. (2021). The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations. Harvard Journal on Legislation, 58. https://harvardjol.com/wp-content/uploads/sites/17/2021/02/105_Nahmias.pdf

Papacharissi, Z. (2010). A Private Sphere: Democracy in a Digital Age. Cambridge: Polity.

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating Hate Speech in the Asia Pacific Region. Final report submitted to Facebook under the Content Policy Research Initiative. Department of Media and Communications, University of Sydney and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf

State Administration for Market Regulation. (2019). Regulations on the Management of Internet Live Streaming Services. https://gkml.samr.gov.cn/nsjg/ggjgs/201906/t20190603_302008.html

The State Council, The People's Republic of China. (2016). The Cybersecurity Law of the People's Republic of China. http://www.gov.cn/xinwen/2016-11/07/content_5129723.htm

Zhao, I., & Xiao, B. (2019). Pro-Beijing protests broke out around the world. ABC News. https://www.abc.net.au/chinese/2019-08-19/pro-beijing-protests-broke-out-around-the-world/11427468
