Status quo of Twitter’s regulation of hate speech

Hate speech is speech that “expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation” (Parekh, 2012, p. 40). Such speech is not merely offensive; it inflicts both immediate and lasting harm on the people it targets (Sinpeng et al., 2021, p. 6).
According to a survey conducted by the Pew Research Center, 75% of people who have experienced online harassment said their most recent incident took place on social media (Vogels, 2021; see Image 1). As social media expands rapidly across the globe, regulating hate speech has become a complex and urgent problem. Although most social media platforms have established community rules to govern content, hate speech still runs rampant online. What makes regulating hate speech on platforms so difficult? Do platforms have other priorities that compete with regulation? Should platforms bear the responsibility of dealing with hate speech? Since Elon Musk took over Twitter, his management of content has caused controversy. In this post, we will dive into the status quo of Twitter to analyze the three questions above and consider what should be done about online hate speech in the future.

Image 1. Places where online harassment happened. Source: https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Controversy between free speech and hate speech on Twitter

One of the biggest challenges of regulating online content is the tension between combating hate speech and protecting freedom of expression (Flew, 2021). Article 19 of the United Nations International Covenant on Civil and Political Rights states that everyone has the right to hold opinions without interference and to express them orally or through any media (International Covenant on Civil and Political Rights, 1966). So, who defines the scope of freedom on social media platforms?

According to Duffy (2023), Elon Musk claimed that he wanted to enable users to speak freely on the new Twitter. Of free speech, he said, “Is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech.” He added that Twitter would “be very reluctant to delete things” and “be very cautious with permanent bans,” and that the platform would aim to allow all legal speech (Duffy, 2023). To demonstrate his pursuit of ‘freedom’ on the platform, he restored nearly all of the accounts that had previously been banned for violating the old Twitter’s rules on abuse and hate speech (Malik, 2022). He also laid off almost 50% of the company’s employees, including those responsible for monitoring and moderating hate speech. Users in some countries, such as Brazil, found that there was no longer anyone to respond to their warnings and requests (Knight, 2022; Ray & Anyanwu, 2022).

Moreover, data showed an immediate surge in the use of the N-word and of antisemitic, misogynistic, and transphobic language during the first 12 hours after ownership shifted to Musk (Ray & Anyanwu, 2022). There is little doubt that this series of actions acquiesced to the spread of hate speech: Musk’s standard of free speech tolerated attacks on minority groups. He bought Twitter in the first place because he believed the platform had a left-wing bias that needed correcting (Malik, 2022). So, once he became the owner, his personal intentions became the governing criteria of an international public platform.

However, his criteria were not applied consistently. In 2022, Twitter temporarily banned the accounts of several high-profile journalists because they had reported on Twitter’s removal of an account that posted the updated location of Musk’s private jet (Malik, 2022). Musk himself seemed to violate his own vision of free speech: someone you don’t like is allowed to say something you don’t like. This brings us back to the question: who defines the scope of freedom on social media platforms? Although many people may be involved in regulating platform content, it is always the leadership, and ultimately the owner, who makes the final decisions. But if the hundreds of millions of users on Twitter may only post under one person’s standard of freedom, are they still free?

It is genuinely difficult to draw the line between free speech and hate speech, because everyone understands the word ‘freedom’ differently. A platform’s policy can only be relatively free and fair to most people. Even so, banning hate speech should take priority over maximizing free speech, because social media platforms are becoming ever more influential globally. If platforms, especially the tech giants, become sanctioned communities for extremists, online hate speech is more likely to turn into real-world hate crimes, with serious consequences for society.

Twitter’s real attitude towards hate speech

According to Twitter’s official policy on hateful conduct, users “may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease” (Twitter, 2023a). However, research in June 2023 by the Center for Countering Digital Hate (2023) found that Twitter did not act on hateful tweets posted by Twitter Blue subscribers. Researchers collected 100 tweets that clearly violated the hateful conduct policy and reported them using Twitter’s own flagging tool. Four days later, 99% of the posts had not been hidden or deleted, and 100% of the accounts were still active (CCDH, 2023).

What is a Twitter Blue subscriber? Users need only pay 8 dollars a month to get a verified account with a blue checkmark on their profile. Although Twitter’s official page said that violating the rules may lead to the loss of the checkmark and even suspension (Twitter, 2023b; see Image 2), the company neither acted on the tweets nor punished the accounts. Notably, the word ‘may’ in this description suggests the policy was never meant to be strict. Is Twitter’s failure to control hate speech from paying users telling the public that they enjoy privileges over ordinary users?

Image 2. Reasons for losing blue checkmark. Source: https://help.twitter.com/en/managing-your-account/about-x-verified-accounts

Similarly, an investigation by Global Witness and the South African public interest law firm Legal Resources Centre revealed that Twitter approved advertisements containing extreme misogynistic hate speech directed at women journalists in South Africa (Marcuson, 2023). The researchers had submitted 10 such ads in four languages: English, Afrikaans, Xhosa, and Zulu; when they later tried to contact Twitter, they received no response (Marcuson, 2023). Thanks to these researchers, we can now see what Twitter is actually doing on the platform. The same issue has occurred on other platforms such as YouTube, which could remove copyright-infringing videos quickly but failed to take prompt action against hateful or illegal content (House of Commons Home Affairs Committee, 2017).

Regrettably, despite the policies announced on social media companies’ official websites, the public has no way to check whether they are actually enforced. These facts suggest that the platforms’ real attitudes may be far from what we imagine. We assume they will provide a friendly, harmonious, and equal online community, but this may not be what they really care about. #ShePersisted (2023) argued that hate is part of social media platforms’ monetization model: because controversial content boosts engagement and brings profit, platforms even end up promoting gender discrimination and abuse (#ShePersisted, 2023). Sadly, we forget that social media platforms are private companies that are ultimately built to make money. Revenue always comes first.

Hate speech may, to some extent, boost engagement on platforms, but can this kind of engagement bring profits in the long run? Within a month of Musk’s takeover, half of Twitter’s top 100 advertisers, who had spent nearly 2 billion dollars on the platform over the previous four years, had announced or apparently ceased their advertising (Kann & Carusone, 2022). Moreover, a Pew survey conducted in March 2023 showed that 60% of Americans who had used Twitter in the previous year had taken a break from the platform of several weeks or more, with women and Black users reporting breaks at higher rates (Faverio, 2023; see Image 3).
The loss of advertisers and users is difficult to reverse. Even from a purely profit-driven perspective, what the new Twitter is doing is unwise. The establishment and prosperity of a social media platform rely not only on its founders and administrators but also on its users’ contributions. Tolerating harmful speech amounts to betraying ordinary, loyal users.

Image 3. Twitter usage among U.S. adults in the past 12 months. Source: https://www.pewresearch.org/short-reads/2023/05/17/majority-of-us-twitter-users-say-theyve-taken-a-break-from-the-platform-in-the-past-year/

Are social media platforms responsible for regulating hate speech?

The current problem is that no specific law stipulates how social media companies should deal with hate speech online. But this does not mean they have no duty to regulate it. Woods and Perrin (2021) suggested that those who create and manage a platform should be responsible for, or at least foresee, the potential risks arising from the design choices that shape user behavior. This notion derives from the duty of care in common law (Woods & Perrin, 2021). It means social media platforms should evaluate risks when they formulate community rules: if they allow hate speech, they need to consider the possible negative consequences; if they prohibit it, they need to consider how best to keep hateful content off the platform. Woods and Perrin (2021) also pointed out that the enforcement of community rules should be visible, because visibility is crucial to building a normative community. The process should be transparent to the public, but such transparency is currently missing from social media regulation.

What should be done in the future?

Laws should be enacted to clarify the penalties for failing to prevent online harm, and third-party regulators should be brought in to scrutinize how community policies are developed and enforced on social media platforms worldwide. Only when the platforms’ own interests are at stake will they start to pay real attention to the problem. In addition, the leadership of both the platforms and the regulators should include more women, people of color, and transgender people, among others, to guarantee maximum fairness throughout the process.

Meanwhile, more severe punishments should be imposed on those who post hate speech. Current measures are too light compared to the harm inflicted on victims of hateful conduct, and they come too late, because by then the harm has already been done. The most serious penalty on Twitter is account suspension (Twitter, 2023a; see Image 4), which barely affects the people behind the screen: they can easily create a new account. The victims, however, continue to suffer. For example, female politicians face especially vicious online abuse merely because they are women. While Manuela d’Ávila was running for president of Brazil in 2018, photographs of her five-year-old daughter were posted online alongside a rape threat. She ultimately withdrew from electoral politics in 2021 because she and her family kept being attacked online (Meco, 2023). Likewise, many young women no longer consider speaking publicly or engaging in politics. Online misogyny and disinformation not only destroy their reputations but also curtail women’s equal rights in society (Meco, 2023). Hate speech is a form of violence: victims’ lives are ruined while the perpetrators pay no price. What should be done is not just to delete hate speech after it is posted but to reduce how often it occurs in the first place. Those who spread hate speech online must face consequences both on the platform and under the law, so that they come to realize that social media is not a place where they can escape legal sanctions.

Image 4. Twitter’s penalties for violation of community rules. Source: https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Conclusion

Regulating hate speech is no longer a problem for any single platform or private company. It has inevitably become a global, societal problem, one that is tricky but must be solved, or at least mitigated, in the future. Laws must be established to define the regulatory responsibilities of social media platforms before things get worse.

References

#ShePersisted. (2023, February). Monetizing Misogyny. https://she-persisted.org/wp-content/uploads/2023/02/ShePersisted_MonetizingMisogyny.pdf

Center for Countering Digital Hate (CCDH). (2023, June 1). Twitter fails to act on 99% of Twitter Blue accounts tweeting hate. https://counterhate.com/research/twitter-fails-to-act-on-twitter-blue-accounts-tweeting-hate/

Duffy, C. (2023, May 29). Elon Musk says Twitter has ‘no actual choice’ about government censorship requests. CNN. https://edition.cnn.com/2023/05/29/tech/elon-musk-twitter-government-takedown/index.html

Faverio, M. (2023, May 17). Majority of U.S. Twitter users say they’ve taken a break from the platform in the past year. Pew Research Center. https://www.pewresearch.org/short-reads/2023/05/17/majority-of-us-twitter-users-say-theyve-taken-a-break-from-the-platform-in-the-past-year/

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91-96). Polity Press.

House of Commons Home Affairs Committee. (2017). Hate Crime: Abuse, Hate and Extremism Online (No. HC 609). https://publications.parliament.uk/pa/cm201617/cmselect/cmhaff/609/609.pdf

Kann, S., & Carusone, A. (2022, November 22). In less than a month, Elon Musk has driven away half of Twitter’s top 100 advertisers. Media Matters. https://www.mediamatters.org/elon-musk/less-month-elon-musk-has-driven-away-half-twitters-top-100-advertisers

Knight, W. (2022, November 22). Here’s Proof Hate Speech Is More Viral on Elon Musk’s Twitter. Wired. https://www.wired.com/story/heres-proof-hate-speech-is-more-viral-on-elon-musks-twitter/

Malik, N. (2022, November 28). Elon Musk’s Twitter is fast proving that free speech at all costs is a dangerous fantasy. The Guardian. https://www.theguardian.com/commentisfree/2022/nov/28/elon-musk-twitter-free-speech-donald-trump-kanye-west

Marcuson, M. (2023, December 7). Facebook, X/Twitter, YouTube and TikTok approve violent misogynistic hate speech adverts for publication in South Africa. Global Witness. https://www.globalwitness.org/en/press-releases/facebook-xtwitter-youtube-and-tiktok-approve-violent-misogynistic-hate-speech-adverts-publication-south-africa/

Meco, L. D. (2023, February 17). ‘Gender trolling’ is curbing women’s rights – and making money for digital platforms. The Guardian. https://www.theguardian.com/global-development/2023/feb/17/gender-trolling-women-rights-money-digital-platforms-social-media-hate-politics

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (Eds.), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge University Press.

Ray, R., & Anyanwu, J. (2022, November 23). Why is Elon Musk’s Twitter takeover increasing hate speech? Brookings. https://www.brookings.edu/articles/why-is-elon-musks-twitter-takeover-increasing-hate-speech/

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf

Twitter. (2023a). Hateful Conduct. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Twitter. (2023b). How to get the blue checkmark on X. https://help.twitter.com/en/managing-your-account/about-x-verified-accounts

United Nations (General Assembly). (1966). International Covenant on Civil and Political Rights. Treaty Series, 999, 171.

Vogels, E. (2021, January 13). The State of Online Harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Woods, L., & Perrin, W. (2021). Obliging Platforms to Accept a Duty of Care. In M. Moore & D. Tambini (Eds.), Regulating Big Tech: Policy responses to digital dominance (pp. 93–109). Oxford University Press.
