Challenges Faced by Meta: A Case Study of Online Hate Speech and Harm in the Maria Ressa Incident

Introduction

The digital frontier has brought society an array of exciting opportunities and posed equally substantial challenges, and Australia is no exception. The role of social media players such as Meta, TikTok, and X in facilitating online hate speech, and the extent of harm such speech can cause, is thus a significant question for policymakers and regulators. The purpose of this blog is therefore to conduct an in-depth analysis of the threats of hate speech and online violence on digital media platforms, based on a case study of the hate speech and online violence experienced by journalist Maria Ressa on Meta. The article explores the causes of hate speech and online violence, the limitations of AI and human moderation in addressing them, and the challenges these methods face in real-world application. Finally, it proposes approaches to regulating and governing hate speech and online violence on social media platforms.

Figure 1: Hate speech post on social media associating the Aboriginal flag with the Soviet flag in relation to the Voice Referendum (Latimore, 2023)

Hate Speech And Online Abuse

Hate speech online refers to expressions that incite hostility and discrimination against specific groups based on characteristics such as race, gender, or religion (Parekh, 2012). The widespread dissemination of such speech on digital platforms has sparked an intense debate about balancing free expression with the protection of individuals from harm. Online hate speech has roots in the internet's founding vision of unfettered free speech (Barlow, 1996). As the Web 2.0 era emerged, cultural participation, sharing, and communication became key features of digital platforms, and platforms like Meta and X quickly grew in size and influence. The sheer volume of content has pushed these platforms to adopt a commercial moderation model blending manual review with AI and algorithms. Nonetheless, issues like verbal abuse and cyber harassment persist for many users (Roberts, 2019).

In 2021, 41% of Americans reported experiencing online harassment, with political views, religion, appearance, race, gender, and sexual orientation among the common triggers (Pew Research Center, 2021; Flew, 2021). Similar issues exist in Australia, starkly highlighted in March 2019 when an extremist radicalized in far-right online communities killed 51 people in New Zealand mosques, illustrating how harmful online content can fuel real-world violence (Flew, 2021).

Social media platforms face scrutiny over inconsistent policy enforcement and vague rules, both of which contribute to the spread of hate speech. Meta, for instance, bans nudity but has allowed racist pages targeting ethnic groups to remain (Matamoros-Fernández, 2017). This inconsistency highlights deeper platform governance problems, and it matters because hate speech can have significant mental health impacts on its targets, including stress, depression, and, in severe cases, self-harm or suicidal tendencies.

Figure 2: Meta leads platforms in the occurrence of hate speech (O'Driscoll, 2021).

Critical Analysis of Existing Solutions: Key Issues in Digital Policy and Governance

In Australia, the government has strengthened legal measures to combat online hate speech and harms, notably the Online Safety Act 2021 and Section 93Z of the New South Wales Crimes Act 1900. The Online Safety Act empowers the eSafety Commissioner to enforce rules against cyberbullying, image-based abuse, and illegal or restricted content on digital platforms (eSafety Commissioner, 2021). This represents a "duty of care" approach that holds platforms accountable for minimizing the risk of harm through their design choices and content moderation (Woods & Perrin, 2021). Meanwhile, Section 93Z prohibits vilification and threats of violence based on race, religion, sexual orientation, gender identity, and intersex or HIV/AIDS status (Sinpeng et al., 2021). In addition, Australia's Communications Minister plans to strengthen the Online Safety Act and expand the basic online safety expectations that social media companies must meet, including prioritizing the interests of children, combating harmful AI-generated content, and detecting hate speech, with violators facing fines of up to $787,000. The government clearly recognizes the growing seriousness of online harms such as hate speech and is determined to strengthen supervision. However, the effectiveness of these laws remains limited. They overlook many nuances in how hate speech manifests in specific cultural contexts online: LGBTQ+ individuals, for example, still frequently experience identity-based harassment that may not meet a legal threshold (Guerin & Robards, 2021). Critics also argue that broadly written laws could chill free speech (Flew, 2021). While legal prohibitions on hate speech are important, they are insufficient without complementary efforts by platforms to improve content moderation, transparency, and engagement with affected communities.

Platform Responsibility and Content Moderation

Social media platforms, including Meta, have come under scrutiny for enabling the spread of hate speech and harm, as seen in Figure 1, and are hence largely responsible for ensuring that their users are safe. With over 2.7 billion monthly active users, Meta faces a major content moderation challenge arising from the sheer volume of daily uploads, and it must review content using both human and automated systems. Automated algorithmic moderators are proficient at identifying and deleting clear breaches of community standards, but they struggle to understand, evaluate, and react effectively to subtle cases of hate speech (Sinpeng et al., 2021). The necessary complement is human moderation by teams dedicated to examining the content these systems flag. However, past cases, such as the removal of a picture of Aboriginal elders following human review, show that the subjectivity of human judgment can perpetuate racism and other forms of hate on Meta (Matamoros-Fernández, 2017). A balance between decentralized human moderation and centralized moderation is thus necessary, and Meta adopts a hybrid model, leveraging user-reported content alongside automated filters such as its sensitive media filter. Platforms must also grapple with edge cases that defy conventional moderation parameters: the elders' picture case shows that platforms must balance protecting users from harm against respecting diverse cultural expressions. Social media companies such as Meta thus face immense moderation challenges that they must navigate to meet their responsibility for moderating content.
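
To make this division of labor concrete, here is a minimal sketch of how such a hybrid pipeline might route content. It assumes a hypothetical classifier score between 0 and 1, a placeholder vocabulary, and invented thresholds; it is an illustration of the model described above, not Meta's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def classifier_score(post: Post) -> float:
    """Stand-in for an automated hate-speech classifier (assumed, not real)."""
    flagged_terms = {"slur_a", "slur_b"}  # placeholder vocabulary only
    words = set(post.text.lower().split())
    return 0.95 if words & flagged_terms else 0.2

def route(post: Post, remove_above: float = 0.9, review_above: float = 0.5) -> str:
    """Clear violations are removed automatically; ambiguous cases go to humans."""
    score = classifier_score(post)
    if score >= remove_above:
        return "auto_remove"         # confident breach of community standards
    if score >= review_above:
        return "human_review_queue"  # subtle case: defer to a human moderator
    return "allow"

print(route(Post("p1", "ordinary holiday photo caption")))  # -> "allow"
```

The point of the middle band is exactly the trade-off discussed above: automation handles unambiguous volume, while subjective, culturally loaded cases are deferred to people.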

Case Study

Figure 3: New study exposes the brutality of online violence against Maria Ressa (Posetti et al., 2021).

On February 13, 2019, prominent Filipino journalist Maria Ressa was arrested by Philippine authorities on cyber libel charges, accused of publishing false news about businessman Wilfredo Keng on Rappler. As a vocal critic of then-President Rodrigo Duterte and an exposer of governmental misconduct and misinformation, Ressa was widely seen in the opposition and international community as the target of politically motivated action by the Duterte government (Posetti et al., 2021). Meta, as one of the primary social media platforms in the Philippines, served as a channel for spreading online violence and hate speech, leaving Ressa all the more exposed to attacks and abuse.

According to an analysis by Posetti and colleagues of some 57,000 comments about Maria Ressa on Meta, 60% of the online violence was designed to damage Ressa's credibility as a journalist, including attacks spreading false information and accusations that she dealt in "fake news" (Posetti et al., 2021). The remaining 40% of the attacks were personal, characterized by gender discrimination, misogyny, and explicitly violent language, including attacks on Ressa's appearance and manipulated images featuring male genitalia. This violence extended to threats of rape and murder, with 40% of the violent language targeting Ressa's skin color and sexual behavior. These attacks indicate that Meta, as a major social media platform, became one of the main channels for the online violence and defamation that severely damaged Maria Ressa's personal and professional image.

Analysis of the causes of online violence and harm on Meta

Unclear rules

Social media platforms do not clearly define the boundaries of acceptable speech, such as what constitutes a malicious insult or racial discrimination. This ambiguity allows users to exploit loopholes to spread racist speech. In addition, the responsibility chain for content management on these platforms is unclear, spread across the platform, its users, and its algorithms, which blurs accountability and increases the likelihood that racist speech will spread (Matamoros-Fernández, 2017).

Automated Algorithms

Automated algorithms may filter information based on users' preferences and behavioral habits, so that users encounter ever more content that aligns with their existing views, intensifying the echo chamber effect (Andrejevic, 2019). Furthermore, these automated media systems carry a commercial bias: content selection and recommendation can be driven by the interests of advertisers or sponsors rather than by users' objective information needs.
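
The toy sketch below illustrates the echo-chamber dynamic under a deliberately simple assumption: a feed ranked purely by word overlap with the user's past engagement. No real platform works exactly this way, but the effect is the same in kind: posts that echo prior views float to the top, while dissonant material sinks.

```python
from collections import Counter

def rank_feed(candidates: list[str], engagement_history: list[str]) -> list[str]:
    """Rank candidate posts by word overlap with the user's past engagement."""
    history = Counter(w for post in engagement_history for w in post.lower().split())
    def affinity(post: str) -> int:
        return sum(history[w] for w in set(post.lower().split()))
    return sorted(candidates, key=affinity, reverse=True)

past = ["the referendum is a plot", "stop the referendum now"]
feed = [
    "independent fact check of referendum claims",
    "the referendum is a plot against us",  # echoes prior views, so it ranks first
]
print(rank_feed(feed, past))
```

Every iteration of this loop narrows the feed further, since each click adds more weight to the same vocabulary.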

Community Management and Transparency

One reason violence occurs on Meta is a lack of transparency in community management. Community management is an approach in which users and page administrators assist in managing and monitoring user-generated content. Transparency is a crucial component of responsible community management because it helps users understand the nature of community standards and their rights of appeal. The opacity of algorithms, the ambiguity of advertising content, and the filtering mechanisms of information streams, combined with the platform allowing users to create their own names and identities, can all contribute to the spread of online hate speech and derogatory remarks.
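
As a sketch of what greater transparency could look like, the hypothetical schema below lists the fields a moderation decision log might expose to affected users: the rule applied, who (or what) made the call, and the status of any appeal. The field names are assumptions for illustration, not an actual Meta data structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    post_id: str
    rule_cited: str                  # which community standard was applied
    decided_by: str                  # "algorithm" or "human_moderator"
    action: str                      # e.g. "removed", "restricted", "kept"
    appeal_available: bool           # the user is told they can contest the call
    appeal_outcome: Optional[str] = None  # recorded once an appeal is resolved
```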

Figure 4: Snapshot of Meta notifications showing an admin page that allows administrators to review automatically flagged posts (Jenkins, 2021).

Challenges in Combating Hate Speech and Harm Globally

Meta and governments around the world confront several shared challenges in their efforts to combat hate speech and harm. Governments often lack access to sufficient data. Social media platforms, including Meta, hold extensive user data that, analyzed algorithmically, could help counter hate speech. Unfortunately, access to this data is often restricted for platforms and researchers alike due to concerns over data protection. Without such access, it is difficult for governments to grasp the severity of hate speech or to develop effective policies. For example, Sinpeng et al. (2021) noted that Meta is reluctant to give hate speech researchers and regulators access to data due to privacy concerns.

Another challenge is the increasingly complex nature of hate speech, with new forms rapidly evolving and spreading across platforms (Jahan & Oussalah, 2023). This complexity challenges governments and Meta alike. For governments, it becomes difficult to design robust laws that adapt to the constantly changing forms of hate speech. For Meta, the challenge is equally significant: moderators, both human and automated, must deal with diverse cultural backgrounds, making it hard to act preemptively against hate content.
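
A toy example makes this moderation difficulty concrete. Assuming a simple exact-match blocklist with an illustrative stand-in term, trivial obfuscation is enough to slip past a static rule, which is why detection systems must be continually retrained as new forms emerge.

```python
BLOCKLIST = {"vermin"}  # stand-in term; real lists are far larger

def exact_match_filter(text: str) -> bool:
    """Flag text only when a blocklisted word appears verbatim."""
    return any(word in BLOCKLIST for word in text.lower().split())

print(exact_match_filter("they are vermin"))   # True: caught by the rule
print(exact_match_filter("they are v3rm1n"))   # False: leet-speak evades it
print(exact_match_filter("they are ver-min"))  # False: so does a hyphen
```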

Meta also faces challenges arising from a lack of regulatory literacy among its moderators and users (Matamoros-Fernández, 2017). This lack of regulatory literacy not only hinders users' understanding of Meta's community standards but also impairs their ability to effectively appeal content decisions.

Recommendations

Firstly, policymakers must ensure that any regulation of social media platforms does not unduly jeopardize freedom of expression. Given the complexity of the issue, new legislative initiatives around the world need close monitoring to assess whether a good balance is being struck between protecting freedom of expression and prohibiting hate speech. To enable such monitoring, social media companies need to be transparent about the content they remove and make their data available for scrutiny by researchers and the wider public.

Secondly, to effectively address the challenges of online hate speech, regulators and Meta should collaborate in several key areas. To begin with, it is essential to ethically gather and share data to understand the nature and prevalence of hate speech on the platform while respecting user privacy. This might include developing guidelines and protocols that ensure data is anonymized before collection and use. Such data will help in understanding hate speech patterns and shaping appropriate regulations.
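
As a minimal sketch of that anonymization step, assuming hypothetical field names, the snippet below drops direct identifiers and replaces the user ID with a salted one-way hash before a record is shared with researchers:

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # assumed field names for illustration

def anonymize(record: dict, salt: str) -> dict:
    """Drop direct PII and replace the user ID with a salted one-way hash."""
    out = {k: v for k, v in record.items() if k not in PII_FIELDS}
    digest = hashlib.sha256((salt + str(record["user_id"])).encode()).hexdigest()
    out["user_id"] = digest  # records stay linkable without exposing the user
    return out

print(anonymize({"user_id": "u42", "name": "A. User", "comment": "..."},
                salt="research-release"))
```

The salted hash lets researchers study patterns across a user's comments without the platform ever disclosing who that user is.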

Lastly, continuous monitoring and updating of policies are vital to adapt to the evolving nature of online hate. Updates to Meta's policies should also incorporate feedback from diverse cultural groups, ensuring that the platform remains inclusive. This is essential, particularly given concerns from Indigenous communities about increasing online hate (Carlson & Frazer, 2018). Research suggests that including Indigenous staff in social media companies can make platforms safer for these communities.

Conclusion

In conclusion, regulating hate speech on social media platforms poses formidable challenges. Even with platform moderation and regulatory oversight, hate speech cannot be entirely prevented or contained, underscoring the need for robust policies and proactive community management. Enhancing transparency for moderators and users is crucial, as is striking a balance between curbing hate speech and protecting freedom of speech. The prudent use of algorithms and artificial intelligence in moderation is equally important in today's digital media landscape. With the advent of Web 3.0 and the rapid evolution of the internet, finding a regulatory approach that balances national interests and user freedoms will be a defining challenge for the digital realm.


References

Amnesty International. (2023, October 31). Meta’s failures contributed to abuses against Tigrayan community during conflict in northern Ethiopia. Retrieved from https://www.amnesty.org/en/latest/news/2023/10/meta-failure-contributed-to-abuses-against-tigray-ethiopia/

Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Sydney: Macquarie University.

Gubhaju, L., Williams, R., Jones, J., Hamer, D., Shepherd, C., McAullay, D., Eades, S. J., & McNamara, B. (2020). “Cultural Security Is an On-Going Journey…” Exploring Views from Staff Members on the Quality and Cultural Security of Services for Aboriginal Families in Western Australia. International Journal of Environmental Research and Public Health, 17(22), 8480. https://doi.org/10.3390/ijerph17228480

Jahan, M. S., & Oussalah, M. (2023). A systematic review of hate speech automatic detection using natural language processing. Neurocomputing, 546, 126232. https://doi.org/10.1016/j.neucom.2023.126232

Jenkins, L. D. (2021, October). Facebook Admin Assist: Streamline your Facebook group content management. Social Media Examiner. https://www.socialmediaexaminer.com/facebook-admin-assist-streamline-your-facebook-group-content-management/

Latimore, J. (2023). Meta rules online racism against Indigenous people meets community standards. In The Sydney Morning Herald. https://www.smh.com.au/national/meta-rules-online-racism-against-indigenous-people-meets-community-standards-20230815-p5dwqt.html

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130

NSW Government. (2023, November 21). Prosecution of threats and incitement to violence set to be streamlined. Retrieved from https://www.nsw.gov.au/media-releases/threats-and-incitement-to-violence

O’Driscoll, A. (2021, February 18). 20+ Online Hate Crime Statistics and Facts for 2021. Comparitech. https://www.comparitech.com/blog/information-security/online-hate-crime-statistics/

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press.

Pew Research Center. (2021, January 13). The state of online harassment. Retrieved from https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award.

Suzor, N. P., West, S. M., Quodling, A., & York, J. (2019). What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation. International Journal of Communication, 13, 18.

Urman, A., & Makhortykh, M. (2023). How transparent are transparency reports? Comparative analysis of transparency reporting across online platforms. Telecommunications Policy, 47(3), 102477. https://doi.org/10.1016/j.telpol.2022.102477

Woods, L. and Perrin, W. (2021) ‘Obliging Platforms to Accept a Duty of Care’, in M. Moore & D. Tambini (eds.) (2021), Regulating Big Tech: Policy Responses to Digital Dominance. Oxford: Oxford University Press, pp. 93-109.

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press. https://doi.org/10.12987/9780300245318-003

Flew, T. (2021). Hate speech and online abuse. In Regulating Platforms (pp. 115-118). Cambridge: Polity.

Posetti, J., Maynard, D., Bontcheva, K., Hapal, K., & Salcedo, D. (2021). Maria Ressa: Fighting an Onslaught of Online Violence. International Center for Journalists.

