“They Didn’t Commit Suicide, They Were Killed by Others” – Hate Speech and Online Harm

ARIN6902: Digital Policy and Governance
LIAOLIAO LI
540309146

Figure 1.0 Hate Speech

The Dark Side of the Internet: Hate Speech and Online Harm
On 24 January 2022, a 17-year-old boy named Xuezhou Liu ended his life with a medicine overdose after posting his last words on social media, expressing his despair. His story began when he discovered that he had been sold by his biological parents as an infant and raised by adoptive parents. After his adoptive parents died in an accident, Liu managed to find his biological parents with the help of social media, and for a moment the reunion looked like a happy ending. Unfortunately, his biological parents soon abandoned him again. Instead of consolation, his posts then attracted a wave of intense cyberbullying. In the comments section of his Weibo account, some netizens questioned his motives in the most malicious terms, accusing him of exploiting the search for his parents to gain publicity and of fabricating a backstory to manipulate public sympathy. Liu tried to respond to every comment, rejecting these allegations as defamatory. Sadly, he could not endure the malice from social media and tragically took his own life.
Undoubtedly, the internet has brought about a revolution in communication. We reach friends, family members, and acquaintances overseas in seconds, exchange information instantly, and have knowledge at our fingertips. Yet Bilewicz and Soral (2020) note that this digital revolution also harbours a dark undercurrent: hate speech and online harm. From the perspective of digital policy and governance, these are not simply words or phrases; they can damage our well-being and, in some cases, even put lives at risk. This blog looks into the complexities of hate speech and online harm, highlighting their impact and the challenges of addressing them. We will discuss the issues and cases as plainly as we can, so that readers unfamiliar with digital policy can follow along.

Figure 1.1 Hate Speech

Is Your Online Space Safe According to Digital Policy and Governance?
Picture for a moment someone mocking you, being cruel and disrespectful, because of where you are from, your faith, your sexual orientation, or simply because, to other people, you belong to a different group. According to Ullmann and Tomalin (2020), hate speech can be delivered through emails, tweets, and other online messages. From a digital policy and governance perspective, hate speech disparages people or groups on the basis of attributes they cannot control. It is the expression of insults in words, symbols, or depictions, and it can be direct and blatant or filtered through sarcasm, wit, or subtle irony. The internet amplifies it: social platforms, gaming communities, and comment sections can all become breeding grounds for this negativity.

How Online Hate Speech Can Poison Your Life

Figure 1.2
Hate speech promotes conflict, hatred, and violence among communities. It may appear to be nothing more than words on a screen, but it has tangible consequences. Here is why it is so harmful:
1. Psychological impact
Victims of hate speech may experience depression and a loss of self-esteem (Waisbord, 2020). Hate speech can cloud judgment and lead to severe problems such as obsessive rumination. Safeguarding individuals’ well-being and mental health sits at the core of digital policy and governance. Hateful slurs and targeted attacks wound their victims, and the resulting trauma is psychological in nature. Digital platforms, as virtual public spaces, should be governed in a way that protects the health of the public they serve.
2. Normalization of violence
When spewing hate in public goes unpunished, bigotry is positioned as acceptable, and this in turn fuels real-life violence. History teaches us that hateful rhetoric can escalate from seemingly innocent words into discrimination and, at its worst, genocide (Salminen et al., 2020). Allowing the digital environment to become an acceptable venue for spreading hate speech threatens both the essence of digital governance and the integrity of society. Left unchecked, hateful rhetoric spreads and makes quieter voices fearful, normalising discrimination and intolerance. Digital policy recognises how closely online exchanges are tied to offline realities, and therefore strict measures to counter the propagation of hate speech cannot be spared.
3. Silencing voices
Perhaps the most terrifying result of hate speech is that it pressures individuals to ‘censor’ their true feelings and hide their identities, a stark contrast to the diversity a healthy society needs. A mainstay of digital governance is enshrining inclusiveness and defending freedom of expression (Kiela et al., 2020). Hate speech threatens these fundamental values, the bedrock on which a free society is built, by frightening individuals into silence and muzzling whole communities. If policy instruments fail, marginalised voices may eventually be drowned out by intolerance. Digital governance frameworks that promote freedom of speech must be equally available to individuals and groups, enabling them to express their opinions without fear while refusing to promote online hate content.

“You are so Ugly” – When Words Become Weapons

Figure 1.3 Online harm

Hate speech is only one aspect of online harm. Online harm extends beyond hateful speech to a wide range of hostile online behaviours that can cause significant psychological and physical damage. These include:
1. Cyberbullying
Cyberbullying is the act of persistently harassing, humiliating, or intimidating people online, often while remaining anonymous (Castaño-Pulgarín et al., 2021). It is one of the most debilitating types of online hazard: aggressors target an individual through electronic means, and victims can suffer extreme emotional trauma that can develop into anxiety, depression, and even suicidal thoughts.
2. Doxing
Doxing is the deliberate public release of someone’s private, personal, or identifying information without their explicit permission (Ullmann & Tomalin, 2020). Exposing details such as home addresses violates both the privacy and the safety of individuals. Doxing can lead to physical violence against victims and to mental torture once their private lives are dragged into the public domain. Digital policy and governance should address doxing at the legislative level, with a focus on protecting personal data and penalising violators.
3. Misinformation and disinformation
Fake news, if not taken seriously, can jeopardize the democratic process. Misinformation is false information spread unintentionally, while disinformation is deliberately fabricated and disseminated to achieve a specific effect. Spread through fake news articles and doctored photographs, these online falsehoods travel widely; they can worsen relations between communities and erode trust in established institutions and democratic processes. An approach to combating misinformation and disinformation therefore needs to be built into the information and communication technology (ICT) policy framework.
These online harms can severely affect individuals’ well-being at the micro level, as well as public health and social development as a whole. They also play a starring role in the emergence of social unrest, breeding distrust and sometimes even physical danger.

Free Speech Versus Safety
Resolving harm caused by the internet remains a complicated matter that does not have a simple solution. Here is why:
1. Freedom of speech
It is a fine line: striking a perfect balance between freedom of expression and the need to protect people from hate speech is difficult. Freedom of expression is a vital pillar of democracy, a deeply valued right embodied in many countries’ constitutions (Hartvigsen et al., 2022). That freedom is not in question, but it cannot be merely enabling; it must also account for the need to protect individuals from harm. Digital policy has become the backbone for determining what may be said and debated while balancing users’ safety and free speech.
2. Anonymity
Because there is often no way of knowing the real identity of users online, people will do things they would never do in person. The anonymity provided by the internet cuts both ways: it offers individuals freedom of expression, but it also shields those who seek to hurt others (Bilewicz & Soral, 2020). Online anonymity is therefore one of the hardest problems digital policy has to deal with: it must balance protecting privacy against holding perpetrators accountable.
3. Global nature of the internet
Because the internet is a global ecosystem, harmful content crosses borders as easily as any other data. The virtual world knows no national boundaries, and multinational platforms create jurisdictional conflicts (Hartvigsen et al., 2022). This poses a formidable problem for policymakers aiming for better online regulation and trying to curb polarization.


Can We Fix the Internet? Strategies for a Safer Online Space

Figure 1.4 Say no to hate speech

There is not a one-size-fits-all solution, but here are some potential strategies:

  • Platform responsibility
    Social media and other web-based platforms must step up their control of harmful content. Digital strategy can push these corporations to invest in reliable moderation mechanisms that combine technology with human judgment to recognise and remove content quickly (Matamoros-Fernández & Farkas, 2021). Establishing specific regulations and indicators will increase transparency and accountability, making platforms answerable to society and respectful of their users. A minimal sketch of what such a combined pipeline could look like appears after this list.
  • Law enforcement
    Law enforcement against cyberbullying and hate speech is essential for criminalising and punishing abuse in cyberspace. However, digital policy must be drafted carefully, since overly prohibitive rules can infringe on freedom of expression (Castaño-Pulgarín et al., 2021). Adequate, clearly stipulated laws give authorities the instruments to bring offenders to justice while preserving human rights and freedoms. Collective measures, including intelligence sharing among nations, also strengthen cross-border enforcement. Such legislation can make defining, prosecuting, and punishing online harassment and hate speech possible, but it must be framed carefully so as not to curb freedom of expression.
  • Digital literacy
    Educating internet users and giving them the tools to manage their digital lives is key to mitigating online hate and harm. Educational programmes informed by digital policy can make people aware of the social effects of their individual actions online (Yin & Zubiaga, 2021). By equipping people to recognise and oppose hate speech, such policies can foster a culture of tolerance and goodwill in the digital realm. This kind of education teaches users what an accountable online life looks like, how to use reporting tools, and the critical thinking skills needed to push back against hate speech online.
  • Individual action
    Every one of us has a part to play. Through retweets, comments, likes, and dislikes, we have the power to spread messages of tolerance online and in the wider public sphere (Paz et al., 2020). We can denounce racist and abusive content, and all of us should contribute regularly to counteracting hate speech and online violence. It is up to users to speak out against cyberbullying, promote good conduct on the internet, and report threats in order to create a safe and inclusive online space. Digital policy and governance can underpin these initiatives by making reporting facilities accessible and by fostering a culture of accountability among internet users.
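
To make the platform-responsibility point concrete, here is a minimal sketch of how a platform might combine an automated score with a human review queue. It is an illustration only, assuming an invented keyword list, thresholds, and function names; real platforms rely on trained classifiers such as those surveyed by Salminen et al. (2020) rather than hand-written word lists.

```python
from dataclasses import dataclass

# Hypothetical abuse lexicon for illustration; a production system would use
# a machine-learned hate-speech classifier, not a hand-written word list.
ABUSIVE_TERMS = {"ugly", "worthless", "kill yourself"}

@dataclass
class ModerationDecision:
    post_id: str
    score: float   # 0.0 (benign) to 1.0 (clearly abusive)
    action: str    # "allow", "human_review", or "remove"

def score_post(text: str) -> float:
    """Toy scorer: fraction of known abusive terms found in the text,
    standing in for a trained classifier's confidence score."""
    lowered = text.lower()
    hits = sum(term in lowered for term in ABUSIVE_TERMS)
    return min(1.0, hits / 2)

def moderate(post_id: str, text: str,
             remove_threshold: float = 0.9,
             review_threshold: float = 0.4) -> ModerationDecision:
    """Auto-remove only when the score is very high; send borderline
    content to a human review queue instead of deciding automatically."""
    score = score_post(text)
    if score >= remove_threshold:
        action = "remove"
    elif score >= review_threshold:
        action = "human_review"
    else:
        action = "allow"
    return ModerationDecision(post_id, score, action)

if __name__ == "__main__":
    for pid, text in [("p1", "Congratulations on finding your family!"),
                      ("p2", "You are so ugly and worthless.")]:
        print(moderate(pid, text))
```

The design choice worth noticing is the two thresholds: automation handles clear-cut cases quickly, while ambiguous posts are routed to human moderators, which is the technology-plus-human-judgment combination discussed above.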

Conclusion
Dealing with online hate speech and online harm remains a great challenge, but that does not mean it cannot be overcome. By coming together as a community, we can establish an environment that respects people’s different views and cultures. With the aid of digital literacy, platform accountability, and a willingness to stand against negativity, we can build an internet that is safe and enjoyable for all. This blog post is only the start of the discussion, and we would love to hear what you think. Let us keep the conversation going: share your own experiences of online harm in the comments below. Together, we can build a better online life.

References
Bilewicz, M., & Soral, W. (2020). Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41, 3-33. https://doi.org/10.1111/pops.12670
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech: Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Hartvigsen, T., Gabriel, S., Palangi, H., Sap, M., Ray, D., & Kamar, E. (2022). Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509. https://arxiv.org/abs/2203.09509
Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Ringshia, P., & Testuggine, D. (2020). The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33, 2611-2624. https://proceedings.neurips.cc/paper/2020/hash/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205-224. https://doi.org/10.1177/1527476420982230
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. Sage Open, 10(4), 2158244020973022. https://doi.org/10.1177/2158244020973022
Salminen, J., Hopf, M., Chowdhury, S. A., Jung, S. G., Almerekhi, H., & Jansen, B. J. (2020). Developing an online hate classifier for multiple social media platforms. Human-centric Computing and Information Sciences, 10, 1-34. https://link.springer.com/article/10.1186/s13673-019-0205-6
Ullmann, S., & Tomalin, M. (2020). Quarantining online hate speech: technical and ethical perspectives. Ethics and Information Technology, 22(1), 69-80. https://link.springer.com/article/10.1007/s10676-019-09516-z
Waisbord, S. (2020). Mob censorship: Online harassment of US journalists in times of digital hate and populism. Digital Journalism, 8(8), 1030-1046. https://doi.org/10.1080/21670811.2020.1818111
Yin, W., & Zubiaga, A. (2021). Towards generalisable hate speech detection: a review on obstacles and solutions. PeerJ Computer Science, 7, e598. https://doi.org/10.7717/peerj-cs.598
