Governance of content on social media – How Sina Weibo manages hate speech about feminism

Introduction

Since hate speech proliferates alongside the growth of online content (MacAvaney et al., 2019), the Internet is becoming increasingly difficult to govern. Whether on online forums, video apps, or music apps with commenting functions, hate speech seems to be everywhere. In China, there is no unified regulation governing online hate speech; governance relies mainly on the platforms themselves, which also set their own rules for punishment. Taking Weibo, the biggest social media platform in China, as an example, we can see how Chinese social networks currently manage hate speech and where the gaps lie. These governance issues are to some extent shared across web platforms, so methods for improving them could be somewhat universal.

Governance on Sina Weibo

Whether it is content returned by their search engines or the content of a user’s chat box, all Chinese service providers are legally responsible for any content that appears on their websites (Brussee, 2019). Therefore, Sina Weibo, as a microblogging website, has been refining its censorship of user content for over a decade since its launch. Sina has developed many technical means to restrict the dissemination of sensitive information, and it currently relies on machine review based on artificial intelligence combined with human review. All content posted on the platform is first examined by an artificial intelligence system built on a massive database: the system extracts the keywords associated with offending information, compares the content posted by the user against the existing offending keywords, and rates the degree of similarity. When this score exceeds a set threshold, the content is judged to be a violation and is blocked. Content that the system cannot judge accurately is handed over for manual review. For this purpose, Sina has established a special department whose personnel’s only duty is to police users and censor content (MacKinnon, 2011).
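The keyword-similarity review described above can be sketched roughly as follows. This is an illustrative reconstruction, not Weibo's actual system: the keyword list, the similarity measure (here Python's `difflib`), and the two thresholds (`BLOCK_THRESHOLD`, `REVIEW_THRESHOLD`) are all assumptions made for the sake of the example.

```python
from difflib import SequenceMatcher

BANNED_KEYWORDS = ["badword", "slur"]  # stand-in for the platform's keyword database
BLOCK_THRESHOLD = 0.8                  # similarity above this -> automatic block
REVIEW_THRESHOLD = 0.5                 # gray zone -> handed to human reviewers

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how closely two strings match."""
    return SequenceMatcher(None, a, b).ratio()

def machine_review(post: str) -> str:
    """Return 'block', 'manual_review', or 'pass' for a post."""
    best = max((similarity(word, token)
                for word in BANNED_KEYWORDS
                for token in post.lower().split()),
               default=0.0)
    if best >= BLOCK_THRESHOLD:
        return "block"
    if best >= REVIEW_THRESHOLD:
        return "manual_review"
    return "pass"
```

The two-threshold design mirrors the division of labor the paragraph describes: clear violations are blocked automatically, ambiguous matches fall into the gray zone and go to manual review, and everything else passes.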

At the user level, Weibo started in 2022 to display IP-based locations on users’ homepages, tweets, and comments: provinces for domestic Chinese users and countries for overseas users. While some see this as an invasion of privacy, it also acts as a monitor to some extent; for example, it reminds users to be careful about what they post and not to create rumors or hate speech. In addition, when a user submits a comment containing offending words, the system warns them before blocking the content: this comment may affect others, please reconsider. When a user opens a chat box, it prompts the user to speak in a friendly manner, warning that malicious statements will draw penalties from the platform, such as muting or account blocking. Tweets containing offending information may be deleted by the system immediately after they are sent, or silently changed to be visible only to their author, leaving the user to believe the content was published normally.

According to Parekh (2012), hate speech is speech that expresses, promotes, inflames, or incites hatred towards a group of people characterized by one or more attributes, such as race, ethnicity, gender, religion, nationality, or sexual orientation. However, different social media platforms may adopt their own definitions of hate speech and maintain their own community guidelines and punishments (Gelashvili, 2018). On Weibo, the definition of hate speech is summarized in the seventh section of the “Weibo Community Convention” under “promoting hatred”, which it defines as labeling a specific group of people with particular physical, psychological, regional, cultural, or other attributes, and spreading the related content in an attempt to legitimize and normalize the exclusion, discrimination, attack, and harm of that group. The platform categorizes these acts into six types: organizing, inciting, or directing large numbers of users to discriminate against, defame, or insult a specific individual or group; organizing, inciting, or directing large numbers of users to disrupt public order in some way (such as the order of government offices; the publication, performance, or broadcast of literary and film works, video games, and exhibitions; or the normal holding and broadcast of sports and e-sports events); organizing, inciting, or directing large numbers of users to submit complaints in some way; using indecent words to curse others or attack the deceased; exposing others’ personal information and calling on others to conduct human flesh searches; and intensifying conflicts by excerpting screenshots or quotations of others, thereby instigating cyber violence.
The platform disposes of offending content or accounts as appropriate: restricting, altering, blocking, or deleting the relevant content; limiting the account’s posting, liking, following, and other functions; deducting the user’s credit points; terminating the user’s right to use Weibo; and reporting to the relevant regulatory departments. Although Weibo specifies what content constitutes “promoting hatred”, its rules for governing and punishing hate speech are the same as those for other undesirable content, such as pornography, disinformation, and graphic violence.

Hate Speech Case on Sina Weibo

According to Sinpeng et al. (2021), hate speech can be divided into three forms: discrimination, inferiorization, and deprivation. All three forms can be seen in the discussion of feminist issues on Weibo, and the most common is “deprivation”, which refers to content that deprives target audiences of the ability to voice their own opinions (Sinpeng et al., 2021).

Social media platforms give the public a better opportunity to express themselves freely, which has led to more social issues being discussed openly and frequently by more people. Inevitably, however, more polarized effects occur because the internet acts as an amplifier, reflecting and reinforcing available discourses (Chetty & Alathur, 2018). Discussions about feminism on Sina Weibo have been going on for several years, but unfortunately hate speech has been spreading on the site at the same time, and gradually on more social media platforms. Hate speech is found not only in the statements of some male users who oppose feminism but also in content posted by radical feminist users, which greatly hinders the spread and development of genuine feminism in China. Weibo has become an important forum for feminism, yet the spread of hate speech makes it impossible to discuss feminist issues on the platform in depth and in a friendly manner. Extreme feminist users and extreme anti-feminist users often stir up gender antagonism by posting statements that attack each other, such as “All dangers are caused by men; if they disappeared from the world then everything would be fine” or “Women are not entitled to and do not need to demand equal rights because their contribution to society is far less than that of men”. Both sides also tend to deprive each other of the right to a voice: users who oppose feminism post that if other women are not complaining about unequal rights, then you should not either, while extreme feminist users attack male users who support feminism, claiming that men are not qualified to speak up for feminism and questioning their motives for doing so.

Another form of hate speech, inferiorization, can also be seen frequently on Sina Weibo. Users who post hate speech about women’s rights issues tend to replace the words “male” and “female” with insulting and discriminatory terms through homophones or semantic substitution. A Chinese character meaning “locust” is often used to replace “male” because the two sound the same, while “female” is often replaced with the character used to describe the sex of animals. Unlike English, Chinese describes the gender of humans and that of animals with entirely different words, so the substitution is offensive to women. Both men and women are thus ranked as inferior through dehumanization, labeled as insects and sub-human (Sinpeng et al., 2021).

In 2022, Weibo officially banned 265 accounts for publishing extreme speech, with penalties ranging from 15-day suspensions to permanent bans; accounts publishing content that instigates gender antagonism accounted for a large share of the total. However, such belated management is more like a cover-up. In recent years, Weibo has used “extreme feminism” to attract attention and stir up emotions in order to make up for its loss of users, and it may even recommend more incendiary hate speech to users through its algorithm simply to raise user stickiness (Mo, 2022).

Problems and Solutions

The emergence of Weibo has undoubtedly broken the traditional media’s control over speech and greatly promoted freedom of expression, but ‘free speech must be balanced when its demands conflict with other normative commitments’ (Howard, 2019). How to protect users’ right to free speech while accurately identifying, managing, and deleting hate speech is the key problem Weibo currently faces in speech governance.

Weibo’s current censorship algorithm has “zero tolerance” for profanity-laden content. On the one hand, this treatment tacitly assumes that all strong expressions are hate speech, which curbs users’ emotional expression; on the other hand, users can still post essentially the same content undetected by using alternative combinations of Chinese characters or Hanyu Pinyin acronyms. This leaves the platform unable to detect and block the spread of hate speech at the outset, while also degrading the user experience. Meanwhile, Weibo’s censorship system assumes that all published content is in Chinese: it probes only for Chinese sensitive words and lacks control over inappropriate words in other languages. Moreover, beyond offensive words and profanity, other terms such as “homosexual” are set as sensitive words and cause content to be removed regardless of whether the discussion is positive, depriving users of the right to discuss certain topics on the platform. Finally, hate speech that contains no sensitive words is largely undetectable by the system, so such content never enters the manual review step and is published directly on the platform. Unless the content suddenly gains a lot of attention, the platform will not review or intervene in it again.
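One way to close the evasion gap described above is to normalize known variant spellings, homophone characters or Hanyu Pinyin acronyms, back to a canonical form before keyword matching. The sketch below is illustrative only: the variant table, the placeholder English tokens, and the function names are all invented for the example, not drawn from Weibo's system.

```python
# Map known evasive variants back to their canonical sensitive word.
# A real table would hold homophone Chinese characters and Pinyin
# acronyms; plain English stand-ins are used here for illustration.
VARIANT_MAP = {
    "h8": "hate",
    "h@te": "hate",
}

SENSITIVE = {"hate"}  # stand-in for the platform's sensitive-word list

def normalize(token: str) -> str:
    """Lowercase a token and collapse known variants to canonical form."""
    token = token.lower()
    return VARIANT_MAP.get(token, token)

def is_flagged(post: str) -> bool:
    """True if any normalized token matches a sensitive word."""
    return any(normalize(t) in SENSITIVE for t in post.split())
```

A literal string filter would miss "h8" entirely; normalizing first means each newly observed variant only needs one new table entry rather than a new detection rule.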

According to Brown (2020), there are three levels of governance: the moderation level, the oversight level, and the regulatory level. The moderation level of online hate speech governance concerns content judged offensive under the community standards or content policies of the platforms (Brown, 2020). However, while Weibo gives its own understanding and definition of hate speech in the Community Convention, it fails to reflect this fully in its censorship, leaving the censorship system detached from the rules. Since hate speech can also be transmitted through ambiguous jokes, innuendo, and visuals, and can be subtle, mild, unemotional, or even boring (Parekh, 2012), platforms face additional challenges in detecting it. Sinpeng et al. (2021) find that page administrators can be the key actors in hate speech regulation for pages and groups, which underlines the importance of manual review. Improving the existing system’s algorithm to probe for key sensitive words in languages other than Chinese, for Chinese abbreviations of sensitive words, and for the endless stream of newly coined undesirable words would help Weibo improve its governance of hate speech. More importantly, Weibo should expand manual censorship and provide uniform, standardized training for its employees to improve the identification and management of implied hate speech that the algorithm struggles to detect. The governance of hate speech on Weibo can also be improved at the regulatory level, where national governments or governmental organizations frequently become involved in Internet governance (Brown, 2020).
The government should introduce strict laws regulating the behavior of platforms, stopping them from deliberately conniving in the spread of hate speech to attract users’ attention, or from intentionally giving high exposure to topics likely to provoke hate speech.

Conclusion

Weibo has begun managing hate speech, but there is no separate set of rules or censoring system for it; the same methods and regulations are applied to other undesirable content, such as pornography and rumors. Hate speech that stirs up gender antagonism is among the most common forms on Weibo, where ‘deprivation’ and ‘inferiorization’ appear frequently. To reduce the emergence and spread of hate speech, the ‘algorithmic-then-manual’ censorship system Weibo has adopted should be optimized on both fronts, especially manual review. At the same time, the relevant departments should regulate the platform to prevent it from intentionally inflaming and spreading hate speech in pursuit of user stickiness and attention.

Word Count: 2100

References

Brown, A. (2020). Models of governance of online hate speech. Council of Europe. https://rm.coe.int/modelsof-governance-of-online-hate-speech/16809e671d (Retrieved November 26, 2021).

Brussee, V. (2019). Comparing censorship regulations of Sina Weibo and Facebook: Not quite as different? https://www.politicseastasia.com/wp-content/uploads/2019/01/MA-Digital-EA-2018-Research-Paper-Brussee-for-publication.pdf

Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118. https://doi.org/10.1016/j.avb.2018.05.003

Gelashvili, T. (2018). Hate Speech on Social Media: Implications of private regulation and governance gaps.

Howard, J. W. (2019). Free Speech and Hate Speech. Annual Review of Political Science, 22(1), 93–109. https://doi.org/10.1146/annurev-polisci-051517-012343

MacAvaney, S., Yao, H.-R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PLOS ONE, 14(8), e0221152. https://doi.org/10.1371/journal.pone.0221152

MacKinnon, R. (2011). Liberation Technology: China’s “Networked Authoritarianism”. Journal of Democracy 22(2), 32-46. doi:10.1353/jod.2011.0033.

Mo, W. (2022, December). A Study of the Phenomenon of “Extreme Feminist” Groups on Sina Weibo. In 2022 6th International Seminar on Education, Management and Social Sciences (ISEMSS 2022) (pp. 1332-1339). Atlantis Press.

Parekh, B. (2012). Is There a Case for Banning Hate Speech? In M. Herz & P. Molnar (Eds.), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37-56). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139042871.006

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. https://doi.org/10.25910/j09v-sq57
