Deep Harm in Virtual Worlds: Hate Speech and the Management of Online Harm

Posted by Lena Dunham

“Without third-party regulatory bodies, relationships between humans can only descend into a state of war where everyone is against everyone else.”

Hobbes, 1651

Hell is other people?

Hate Speech: From Ancient Concept to Global Problem

Hate speech has existed in Western society since ancient times. As early as the Greek classical period, hate speech was regarded as an irrational act of violence that harmed public society, intensified social opposition and conflict, endangered the democratic process, and damaged civic virtue. The modern concept of “hate speech”, however, dates back to the middle of the last century. After the end of World War II, confronted with the racial hatred unleashed by Nazi Germany and discrimination against post-war immigrants, European countries recognized the urgency of curbing racial and religious hatred and began to legislate against it.

Online Hate: An Increasing Global Reality

The spirit of a global ecumene was first offered in communication studies in the 1960s by Marshall McLuhan, with his concept of a “global village” (Flew and Suzor, 2018; McLuhan, 1968). With the advent of the Internet, the global village he predicted has become a reality. Yet as Internet technology has developed, the global spread of hate speech and online harm has filled this “global village” with crises. “Verbal assaults” can lead to “psychic wounds” that result in ongoing, long-term injury (Brison, 1998, 42, 44). Hate speech and online harm have quickly become some of the most prominent issues erupting around the world, leaving people in the “global village” feeling hurt and panicked.

Lena Dunham, an American actress and writer, is known for creating the successful HBO comedy series Girls.

In 2013, she was named one of the “Time 100,” an annual list of the world’s most influential people. Suffering from endometriosis, Lena announced she would undergo a total hysterectomy in hopes of ending her chronic pain. On October 17, 2018, she posted a photo on Instagram. In the photo, she was lying on a hospital bed wearing a hospital gown and transparent underwear, revealing the sutured surgical wounds on her abdomen. She mentioned having difficulty walking and urinating.

The post has more than 192,000 “likes” and 9,000 “comments.” Alongside supportive words from fans, the comments also contain phrases such as “you are not healthy”, “a waste of human fat”, and “fat pig”, as well as hateful emojis, animal emojis, and mocking jabs like “How do you have a boyfriend, lol?” Hate speech targets a particular individual or group, often identifiable by arbitrary and normatively irrelevant characteristics. It stigmatizes the target group by suggesting or outright stating that they possess highly undesirable qualities, and treats them as an unwelcome presence and a legitimate target of hostility (Parekh 2012, 40-41).

Celebrities cannot escape the harm of “cyber violence”, and ordinary people may also become targets. Katia Katerina shared her weight-loss progress and experience online, only for Internet trolls to comment, “Stop wearing Gymshark (a clothing brand), you are too fat.” Zheng Linghua, a Chinese girl, was bullied online because of her pink hair and died by suicide. When Imogen Lennon, a 5-year-old Australian girl, died, her grieving parents, already in enormous pain from losing their child, also had to deal with online abusers demanding, “Why didn’t you take good care of your child?”

According to a survey released by the Pew Research Center in 2021, young people suffer more online violence: 64% of people under the age of 30 said they have experienced online violence, and 48% said they have experienced more serious forms of harassment, including physical threats, stalking, and sexual harassment; among people aged 30 to 49, about half have experienced online violence, and 32% have experienced more severe harassment. These numbers have been rising since 2017. When platforms are dominated by hate speech, the vulnerable voices of women, ethnic minorities, religious minorities, and LGBTQ people are drowned out and they become increasingly “silent”. “Assaultive racist speech” is akin to “being slapped in the face” and is perceived as a “blow” that, once dealt, diminishes the chance for dialogue and thus hinders engagement in free speech (Lawrence, 1993, 68).

Hate speech on the Internet is enveloped in a vast sea of massive, anonymous, and cross-border content. It is multilingual and diverse in form, incorporating emojis, images, audio, and video, which makes the content increasingly opaque, abstract, and symbolic. This presents a significant challenge for platforms trying to identify and moderate it. Facebook’s machine learning models were only tracking hate speech content in 40 languages (Perrigo 2019), though the company was exploring more effective comparative language translation models (Hao 2020). The cross-border nature of the Internet also enables individuals and organizations to more easily distance themselves from, and evade oversight by, their home country’s government. In recent years, a large number of hate websites have emerged worldwide. Hate speech violates human rights norms, and its spread is growing exponentially in the online world (Flew 2021).

Technology Companies at the Crossroads: Continuously Adjusting and Improving Censorship and Content Management

In the Internet era, governments have changed their strategies, shifting from direct supervision of citizens’ speech to supervision of Internet companies, and increasingly censoring and managing hate speech by regulating platforms, thereby achieving indirect control of online hate speech. The state’s actions now target technology companies. For example, in 2018 Germany brought into force the Network Enforcement Act (NetzDG) to regulate social platforms that improperly handle false news and hate speech, with fines of up to 50 million euros, further pushing technology companies to take responsibility. Technology companies thus find themselves at a “crossroads” and grappling with a “prisoner’s dilemma.” Given the varying standards and legal differences surrounding hate speech in different countries, they need to put in more effort to establish a “precise definition of hate speech” that can be technically implemented. This includes enhancing algorithm accuracy, defining the scope of identification, and reducing disputes and conflicts.

As the largest social media platform in the world, Facebook has been striving to define hate speech clearly in its company policies from the very beginning. In practice, the concepts defined by technology companies are continually supplemented and revised, and their scope is expanded in response to new controversial events and public opinion. For example, Facebook’s 2019 Community Standards state: “We define hate speech as a direct attack on people based on a protected characteristic: race, ethnicity, national origin, religious belief, sexual orientation, caste, sex, gender identity, and serious illness or disability. We also provide some protections for immigration status.” But this definition has still been criticized by outsiders as incomplete, biased, and neglectful of vulnerable groups. Under pressure from all sides, Facebook has revised its definition of hate speech since 2020, and a year later it extensively revised its rules to include protection for “refugees, immigrants, and asylum seekers.”

Even with these additions and expansions, the definitions still cannot cover every controversial event that takes place in reality. To keep up, Facebook must continually update its policies to adapt to external changes. In March 2019, Facebook announced that it would regulate and delete “white supremacist content”; in 2020, it broadened its guidelines to prohibit comparing Black faces to animals, spreading racial or religious stereotypes, denying historical events, and objectifying women, among other things. While responding reactively, Facebook faces an ever faster expansion of online hatred. The complexity of online language, the limitations of algorithms, and constantly emerging vocabulary and symbols mean that automated detection can never be perfect. In 2020, Facebook and Instagram saw a significant year-on-year increase in the amount of hate speech detected and removed: in the second quarter of 2020 alone, 22.5 million instances of hate speech were removed from Facebook, while Instagram deleted 3.3 million. Addressing hate speech has become one of the most pressing challenges for Facebook, which currently employs at least 15,000 content reviewers around the world to manually review relevant content.

As a newer short-video platform, TikTok has faced criticism from various sectors for its lack of clear guidance on hate speech and for its content moderation practices. Some studies have highlighted that terrorist organizations and extremist groups are using TikTok to disseminate hate speech, posing a risk to easily influenced minors. Amid the criticism, TikTok has accelerated the pace of regulating hate speech: it has made corresponding changes, formulated community guidelines, applied algorithms to flag and delete harmful content, and hired more than 10,000 people worldwide to work on trust and safety, responsible for reviewing content uploaded to the platform. In September 2020, TikTok announced that it would join the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online, marking a milestone in its hate speech review efforts.

What’s next?

The creation, perception, and spread of hatred are primarily social constructs, shaped by socio-cultural, political, and socio-economic contexts. While the language expressed in hate speech is often violent or emotional, the language itself is not the primary defining characteristic of hate speech (Flew 2021). Language is a carrier of discourse and an indicator of group identity in different social contexts. Given this complexity, addressing hate speech and online harm requires a concerted effort across society, including individuals and platforms. Platforms need to leverage an understanding of these socio-cultural, political, and economic contexts to develop effective solutions.

Today, Instagram has built a relatively complete anti-cyberbullying system at the product level, organized into three main categories of tools. The first category automatically identifies content that may constitute online abuse and handles it accordingly, such as the “Comment Warning” function (see the sketch below). When a user attempts to post a provocative or rude comment, Instagram automatically pops up a reminder: the content you entered may violate our Community Guidelines, please think twice; if you insist on posting this comment, it may be hidden; and if the comment does violate the Community Guidelines, your account may be deleted.
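To make the logic of this kind of pre-posting warning concrete, here is a minimal Python sketch. It is not Instagram’s actual implementation: the `toxicity_score` stand-in, the thresholds, and the warning text are all hypothetical placeholders for whatever classifier and policy a real platform uses.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real platform would tune these against its own classifier.
WARN_THRESHOLD = 0.6   # likely offensive: show a "think twice" warning
HIDE_THRESHOLD = 0.85  # very likely violating: hide the comment pending review

@dataclass
class Decision:
    action: str          # "allow", "warn", or "hide"
    message: str = ""

def toxicity_score(text: str) -> float:
    """Stand-in for a trained toxicity classifier; here just a crude wordlist check."""
    insults = {"fat pig", "waste of human fat"}  # illustrative phrases only
    return 0.9 if any(phrase in text.lower() for phrase in insults) else 0.1

def check_comment_before_posting(text: str) -> Decision:
    score = toxicity_score(text)
    if score >= HIDE_THRESHOLD:
        return Decision("hide", "This comment may violate our Community Guidelines "
                                "and will be hidden pending review.")
    if score >= WARN_THRESHOLD:
        return Decision("warn", "Are you sure you want to post this? "
                                "It may violate our Community Guidelines.")
    return Decision("allow")

print(check_comment_before_posting("you fat pig"))    # Decision(action='hide', ...)
print(check_comment_before_posting("get well soon"))  # Decision(action='allow', ...)
```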

The second category provides users with convenient tools against online abuse that is already happening. Online abuse often involves a rapid surge of insulting content in a short period of time, making it impossible to delete and block each item individually. For this, Instagram launched a function called “limit” (sketched below). Once turned on, it automatically filters out comments and private messages from accounts that do not follow you, or that only recently started following you, which is equivalent to putting up a high gate. When the storm passes, you can turn the “limit” function off again without affecting daily use.
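A minimal sketch of that filtering rule, under assumed data structures (an in-memory map of follower timestamps and an assumed seven-day “recent follower” window, neither of which comes from Instagram’s documentation):

```python
from datetime import datetime, timedelta
from typing import Dict, Optional

def should_auto_hide(sender: str,
                     follower_since: Dict[str, datetime],
                     limits_enabled: bool,
                     now: Optional[datetime] = None) -> bool:
    """While the limit mode is on, hide comments/DMs from non-followers and very recent followers."""
    if not limits_enabled:
        return False
    now = now or datetime.utcnow()
    followed_at = follower_since.get(sender)
    if followed_at is None:
        return True  # the sender does not follow this account
    return now - followed_at < timedelta(days=7)  # assumed "recent follower" window

# Usage with made-up accounts: a long-time follower gets through, a stranger does not.
followers = {"old_friend": datetime.utcnow() - timedelta(days=400)}
print(should_auto_hide("old_friend", followers, limits_enabled=True))  # False
print(should_auto_hide("stranger", followers, limits_enabled=True))    # True
```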

Through user research, Instagram also learned that many victims of cyberbullying are actually unwilling to directly block their harassers, because blocking may anger them and provoke more extreme behaviour. Moreover, once someone is blocked, the private chat history with that person may disappear, making it harder to retain evidence of the abuse. Instagram therefore designed a quiet, low-key form of “blocking”: the “restrict” function.

When you restrict a user, they can still leave you comments, but by default those comments are visible only to themselves, and your approval is required before they become publicly visible. They can still send you private messages, but the messages are moved to a separate, less visible inbox that you can check when you are completely ready, and they cannot see whether you have read them. The feature is quite cleverly designed. It came entirely from ideas provided by users, especially teenagers who had been abused online; it is the kind of idea product managers would struggle to come up with while sitting in an office.
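Here is a minimal sketch of that restrict behaviour. The `Account` and `Comment` classes and their fields are hypothetical; the point is only to show the two rules the text describes: restricted users’ comments stay private until approved, and their messages are routed to a hidden folder.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Comment:
    author: str
    text: str
    approved: bool = False  # a restricted user's comment needs approval to go public

@dataclass
class Account:
    username: str
    restricted: Set[str] = field(default_factory=set)
    inbox: List[str] = field(default_factory=list)            # normal DM inbox
    hidden_requests: List[str] = field(default_factory=list)  # low-visibility inbox, no read receipts

    def comment_is_public(self, comment: Comment) -> bool:
        """Comments from restricted users stay visible only to their author until approved."""
        return comment.approved if comment.author in self.restricted else True

    def receive_dm(self, sender: str, text: str) -> None:
        """DMs from restricted users are quietly routed to the hidden requests folder."""
        target = self.hidden_requests if sender in self.restricted else self.inbox
        target.append(f"{sender}: {text}")

# Usage with made-up users: the restricted troll's comment and DM stay out of public view.
me = Account("victim", restricted={"troll42"})
print(me.comment_is_public(Comment("troll42", "mean words")))  # False
me.receive_dm("troll42", "more mean words")
print(me.hidden_requests)  # ['troll42: more mean words']
```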

The third category consists of preventive tools offered to users. You can use a series of settings, such as making your entire account private so that only people you approve can follow you, and controlling who can @-mention you in comments or posts, to prevent online abuse as far as possible. In addition, you can set your own filter words. These words may not violate community guidelines and may not be automatically recognized as abuse by the AI, but you can add custom terms, such as specific words used to attack you, and any comment containing these words will be automatically hidden (a minimal sketch follows).
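The custom filter-word mechanism is essentially keyword matching. The sketch below shows one plausible way to do it in Python; the function names and the example filter words are illustrative, not part of any real platform’s API.

```python
import re
from typing import List

def build_filter(custom_words: List[str]) -> re.Pattern:
    """Compile a case-insensitive pattern from the user's personal filter words."""
    escaped = [re.escape(w) for w in custom_words if w.strip()]
    if not escaped:
        return re.compile(r"(?!x)x")  # matches nothing when no filter words are set
    return re.compile("|".join(escaped), re.IGNORECASE)

def is_hidden(comment: str, pattern: re.Pattern) -> bool:
    """Hide any comment that contains one of the custom filter words."""
    return bool(pattern.search(comment))

# Usage: the filter words here are illustrative, not defaults of any real platform.
pattern = build_filter(["fat pig", "waste of space"])
print(is_hidden("what a FAT PIG", pattern))             # True  -> hidden from the user
print(is_hidden("congrats on your recovery", pattern))  # False -> shown normally
```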

CONCLUSION

Going forward, platforms need to prioritize transparency, accountability, and fairness in their review processes to create a democratic, safe, and respectful community environment. The destructiveness of hatred does not stop at speech and dissemination; hatred is continually created, remembered, and passed on. This highlights the urgency and complexity of developing unified and standardized content moderation rules. Heidegger believed that technology is the driving force of the times, but only those with subjective consciousness can harness it and shape the course of the era (Heidegger, 1977). By proactively using technology and guiding its development, we can hopefully create a more inclusive, friendly, and equal digital community in the near future.

Reject subjective assumptions!

Reject so-called “reasonable speculation”!

Refuse to insult and slander!

Say no to stigmatizing remarks!

Say no to hate speech!

References

Atske, S. (2021, January 13). The state of online harassment. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. https://apo.org.au/node/234656

Flew, T. (2021). Regulating platforms. John Wiley & Sons.

Heidegger, M. (1977). The question concerning technology. Readings in the Philosophy of Technology, 9-24.

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130

Ng, B. K. (2023, March 26). Online trolls are taking a toll in China. BBC. https://www.bbc.com/news/world-asia-china-64871816

Nygaard, T. (2013). Girls just want to be “quality”: HBO, Lena Dunham, and Girls’ conflicting brand identity. Feminist Media Studies, 13(2), 370-374. https://doi.org/10.1080/14680777.2013.771891

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award. https://appap.group.uq.edu.au/files/1779/2021_Facebook_hate_speech_Asia_report.pdf
