Combating Online Harm: Soulful Collaboration

Introduction

The rise of digital platforms has brought great convenience to our lives: we can share our opinions and communicate freely with one another. At the same time, however, many people exploit that same convenience to attack others with impunity, with serious consequences for their victims' daily lives.

A tragic meeting

On February 16, 2024, BBC News reported on a meeting between Esther Ghey and Ian Russell. Esther's daughter Brianna, a transgender teenager, had been brutally murdered a year earlier, while Ian's daughter Molly had taken her own life in despair after being exposed to harmful content and comments online.

Image caption: Brianna, left, was murdered in 2023; Molly took her own life six years ago.

Esther and Ian met for the first time in an emotional encounter, empathizing with each other's loss and expressing their hope that such tragedies never happen again. During the meeting they discussed the damage that unmoderated online content can do to teenagers, and both parents agreed that digital platforms should take responsibility for the psychological safety of their users.

Their conversation touched on the many dark sides of digital platforms, which Molly's father described as "the darkest of worlds." Both parents spoke of how their daughters had been made vulnerable by what they encountered online: Brianna's vulnerability was exploited by her killers, while Ian, as a father, only discovered the full extent of what Molly had suffered after her death.

The purpose of the meeting was to make people realize that online abuse has a far greater impact on its targets than we tend to assume, and to press authorities and platforms to take appropriate measures against such speech so that no one else loses their life to online violence.

Analysis of Digital Platform Failures

Take Reddit and Facebook as examples. Both are social platforms with large, active user bases where people post text and images to express themselves or make friends, yet even platforms of this scale have major weaknesses in content moderation. These weaknesses expose users to harmful content and contribute to a toxic environment that can have serious psychological and physical effects.

In a study of Reddit, Massanari (2017) found irrationalities in the platform's governance. Reddit's front page is determined by how recently a post was made and how many upvotes it receives: the more people who upvote a post, the more prominently it is displayed. Some posts deliberately use shorthand or coded language that is neither respectful nor friendly, and the moderation system fails to recognize these terms accurately; if such a post attracts enough upvotes, it is still pushed to the front page, where it can do widespread harm. This upvote-driven promotion mechanism can also be exploited by people who deliberately spread and amplify extreme viewpoints to increase their own visibility.
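
To make the mechanism concrete, the following sketch (not Reddit's actual formula; the scoring function, field names, and example posts are invented for illustration) ranks posts purely by upvotes and recency, so a heavily upvoted abusive post outranks a benign one regardless of what it says.

```python
import math
from datetime import datetime, timezone

def engagement_score(upvotes: int, posted_at: datetime, gravity: float = 1.5) -> float:
    """Hypothetical 'hot' score: more upvotes push a post up, age pulls it down.
    Nothing in the formula looks at what the post actually says."""
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
    return upvotes / (age_hours + 2) ** gravity

# Two recent posts: one benign, one abusive but heavily upvoted by a brigade.
posts = [
    {"text": "Photos from our community garden", "upvotes": 40, "posted_at": datetime.now(timezone.utc)},
    {"text": "Coded slur targeting a minority group", "upvotes": 900, "posted_at": datetime.now(timezone.utc)},
]

# Ranking by engagement alone puts the abusive post first on the "front page".
for post in sorted(posts, key=lambda p: engagement_score(p["upvotes"], p["posted_at"]), reverse=True):
    print(round(engagement_score(post["upvotes"], post["posted_at"]), 2), post["text"])
```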

Woods (2021) criticizes Facebook's shortcomings in dealing with hate speech. The platform's automated systems and community standards are in many cases unable to identify subtle linguistic malice; they lack understanding of local cultures, are insensitive to certain discriminatory messages, and cannot reliably interpret regional slang and context (Carlson & Frazer, 2018). As a result, they fail to recognize and act on content that may incite racial violence or discrimination, and malicious speech regularly escapes the system's controls and is pushed to users, causing real harm.

In fact, Facebook is far from alone: many digital platforms rely on automated systems for content review. Although these systems can learn and update their models over time, they are not human, and they frequently fail to grasp subtle nuances in text, missing disguised forms of hate speech such as attacks hidden behind abbreviations or homophones. Furthermore, to generate more advertising revenue, these platforms need to increase user activity; to achieve this, they design algorithms and interfaces, such as reward systems that resemble game leaderboards (Woodcock & Johnson, 2018), to encourage interactions such as likes, comments, and shares. This engagement-oriented design can have negative consequences: some users deliberately post extreme, divisive, or inflammatory content to attract attention, and the spread of such content often harms society.
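
A minimal sketch of the evasion problem described above, using an invented placeholder term in place of a real slur: a literal blocklist check misses a disguised spelling, while a simple normalization pass (stripping punctuation and mapping common character substitutions) catches it. Real moderation systems are far more sophisticated, but the failure mode is the same in kind.

```python
import re

BLOCKLIST = {"badword"}  # placeholder term standing in for a real slur

def naive_filter(text: str) -> bool:
    """Flags a post only if a blocklisted word appears verbatim."""
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    """Adds a simple normalization pass: strip punctuation and map common
    character substitutions before checking the blocklist."""
    substitutions = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})
    cleaned = re.sub(r"[^a-z0-9@$]", "", text.lower()).translate(substitutions)
    return any(term in cleaned for term in BLOCKLIST)

post = "you are such a b@dw0rd"
print(naive_filter(post))       # False - the disguised spelling slips through
print(normalized_filter(post))  # True  - normalization recovers the blocklisted term
```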

Discussion of Regulatory and Policy Challenges

The regulation of online platforms faces a complex set of challenges, exacerbated by the global nature of digital media and the rapid development of new technologies. Developing effective policies for regulating digital platforms across different legal and cultural environments therefore requires multifaceted consideration and analysis.

Inconsistent global legislation

A major challenge in regulating online platforms is the inconsistency of legal standards for online speech across countries. This disparity creates a significant obstacle for international platforms, which may face different legal requirements for the same piece of speech: content considered hate speech in one country may be protected as free speech in another. This inconsistency makes it difficult for platforms such as Facebook and Reddit to develop globally applicable governance policies, and they must strike a balance between protecting free speech and curbing harmful content.

Advances in digital technology

In addition, the rapid development of digital technologies often outpaces lawmaking in most countries. As digital platforms continue to expand their functionality, regulators struggle to keep up, and new problems surface before new regulatory policies are in place, leaving gaps in which harmful behavior continues. In such cases regulators are left with reactive rather than proactive measures, and over time this lag produces ever greater regulatory delay, to the detriment of the digital platform environment.

Implementation challenges

Carlson and Frazer (2018) emphasize that enforcing digital policies is also a daunting challenge. Effective regulation requires not only explicit and comprehensive legal provisions but also a strong regulatory system to monitor user behavior and punish violations. This is not easy, mainly because of the sheer volume of content generated on platforms every day and the many methods people use to hide harmful content, which makes enforcement extremely difficult. Automated systems can help process large amounts of data, but they also make frequent mistakes, either deleting harmless content or missing harmful content.

The need for international cooperation

Platforms should also cooperate internationally to develop harmonized standards that protect users from harmful speech while supporting freedom of expression. These standards must be developed jointly by governments, platform companies, and citizens to ensure that they can be applied across different races and cultures. Establishing such cooperation would create a more coherent regulatory environment and enable effective cross-border enforcement.

Balancing freedom of expression and harm prevention

Finally, striking a balance between protecting freedom of expression and regulating harmful speech is itself a challenge. On digital platforms, determining what counts as protected speech and what counts as harmful content is very difficult because it depends on the specific situation and cultural context; in many cases the line between the two is blurred and judgments are inevitably subjective. Platforms must therefore balance users' freedom of expression against the need to prevent the spread of harmful content, and this is a demanding task.

Strategies for Improvement

To effectively reduce online violence, platforms need to put in place a system of strong digital governance that protects users from harmful online content.

Enhanced Content Moderation

To improve the accuracy of content review, platforms need to adopt advanced artificial intelligence techniques. These AI systems should have strong language-understanding and recognition capabilities so that they can better detect harmful or abusive content; for example, models can be trained specifically to recognize the language of online abuse.
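
As one possible direction, a pre-trained toxicity classifier could score comments before they are published. The sketch below assumes the Hugging Face transformers library and the publicly shared unitary/toxic-bert checkpoint; the 0.8 threshold and the screen() helper are illustrative choices, not a method prescribed by the cited sources.

```python
# A minimal sketch of AI-assisted toxicity screening.
# Assumes the `transformers` package and the publicly shared
# `unitary/toxic-bert` checkpoint; the 0.8 threshold is an arbitrary choice.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def screen(comment: str, threshold: float = 0.8) -> str:
    """Hold a comment for review if the model is confident it is toxic."""
    result = toxicity(comment)[0]          # e.g. {"label": "toxic", "score": 0.97}
    if result["label"].lower() == "toxic" and result["score"] >= threshold:
        return "hold for review"
    return "publish"

print(screen("Hope you have a great day!"))
print(screen("Nobody wants you here, just disappear."))
```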

Critical role of human moderators: While AI can handle most of the data, the human element remains crucial, especially for reading emotional tone in context and recognizing malicious abbreviations or homophones. Increased investment in a diverse team of human moderators can therefore improve the accuracy of cultural and linguistic judgments during moderation, which is critical on a global platform. Training for these moderators should include cultural sensitivity, ethical considerations, and crisis management so that they are prepared to review a wide variety of complex content (Massanari, 2017).

Balancing automation and human insight: To create a more balanced review system, platforms can combine the efficiency of AI with the judgment of human reviewers, with the AI system flagging potential issues and a human reviewer making the final assessment. This double-insurance approach reduces the human reviewers' workload while ensuring that decisions are made fairly, take cultural context into account, and retain a human touch.
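
One way to picture this division of labor in code: the model auto-actions only the cases it is nearly certain about and queues the wide middle band for human review. The thresholds, the ReviewQueue class, and the triage() helper below are hypothetical, intended only to illustrate the workflow.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewQueue:
    """Items the model is unsure about wait here for a human moderator."""
    items: List[str] = field(default_factory=list)

def triage(post: str, toxicity_score: float, queue: ReviewQueue) -> str:
    """Route a post based on a model's toxicity score (0.0-1.0).

    Only near-certain cases are handled automatically; the wide middle band
    goes to humans, who can weigh context, culture, and intent.
    """
    if toxicity_score >= 0.95:        # model is almost certain: remove immediately
        return "removed automatically"
    if toxicity_score <= 0.10:        # model is almost certain it is benign
        return "published"
    queue.items.append(post)          # everything else: human decision
    return "sent to human review"

queue = ReviewQueue()
print(triage("Lovely sunset tonight", 0.02, queue))
print(triage("Go back to where you came from", 0.62, queue))
print(triage("<explicit threat>", 0.99, queue))
print(f"{len(queue.items)} post(s) awaiting human moderators")
```

Keeping the automatic bands narrow is the design choice that preserves the "human touch" the paragraph describes: the model saves labor on the obvious cases, while culturally sensitive or ambiguous posts always reach a person.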

Algorithm transparency and accountability

Disclosure of algorithmic processes: For accountable content moderation, the rationale behind a platform's content-promotion algorithms is critical. As Massanari (2017) suggests, platforms should be required to disclose how their algorithms work. This openness not only promotes user trust but also allows experts in the field to scrutinize the algorithms and ensure that they neither reinforce bias nor inadvertently promote harmful content.

Reflective work: Platforms should regularly engage with relevant stakeholders, for example in meetings with users, partner companies, and regulators, to discuss whether their recommendation algorithms need further refinement. Such discussions bring in different perspectives and help identify flaws and weaknesses in the platform's algorithms, so that they better align with social norms and the relevant laws and regulations.

Digital Literacy and Public Awareness

Global education campaigns: Improving users' digital literacy helps them use online platforms safely. As Esther Ghey and Ian Russell have argued, platforms have an obligation to teach users how to recognize harmful information; they can, for instance, offer guidance on how to block and report speech that makes users uncomfortable.

Work with educational institutions: Digital platforms should work with educational institutions to incorporate how to respond to cyberattacks into school curricula. Through early education, young people can build a strong foundation for safe online practices, which greatly reduces the likelihood of them being victimized online as they grow older.

Regulatory and legal frameworks

Enforce strict legal standards: Advocating for stronger legal measures is crucial. The law should compel digital platforms to follow clear, strict standards of user safety and data protection, including regular inspections by independent organizations, heavy fines for violations, and rewards for platforms with the best governance environments (Massanari, 2017).

Together, these strategies form a governance program for tackling online abuse. By implementing them, we can significantly reduce the risks associated with harmful speech, create a more secure and respectful online environment, and ensure a safer online community for all users.

Conclusion

As the cases of Molly and Brianna show, reducing the harm caused by online attacks requires all of us to act. Only when platforms, governments, and users work together against harmful speech can we create an inclusive, open, and harmonious digital environment, and through these joint efforts we can hope to prevent similar tragedies from happening again.

References

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Macquarie University.

Massanari, A. (2017). Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Woodcock, J., & Johnson, M. R. (2018). Gamification: What it is, and how to fight it. The Sociological Review, 66(3), 542–558.

Woods, L. (2021). Obliging platforms to accept a duty of care. In M. Moore & D. Tambini (Eds.), Regulating big tech: Policy responses to digital dominance (pp. 93–109). Oxford University Press.
