Reddit’s battle with hate speech – Has r/The_Donald really been deleted?

Image From Kieran Walsh

By the time Reddit banned r/The_Donald, the community had long been overrun by alt-right users, conspiracy theorists, and trolls, and was filled with high volumes of hate speech, making it one of the most notorious communities in the platform’s history. While it was not the most destructive community on the site, it certainly had the most visibility and influence among toxic communities.

Islamophobia is common in r/The_Donald memes

Reddit is positioned as a platform for social news, entertainment, and community exchange, where users share links, comments, memes, and stories in a way that fosters community connections. Because Reddit’s users post under pseudonyms and accounts are not tied to a person’s real social circle in the way that Instagram or Facebook accounts are, Reddit’s online atmosphere tends to be more playful and blunt, and it is filled with a variety of discriminatory material in the form of spoofed memes or videos. People tend to associate hate speech with vehemence of expression, when it can also be subtle and not overtly aggressive in its framing (Sinpeng, 2021). It has been argued that there is a continuum running from statements that are discriminatory but do not constitute hate speech to statements that openly advocate violence against minorities (Cortese, 2005).

In other words, hate speech is not exclusively violent or emotional; hate speech disguised as “popular knowledge,” jokes, and satirical comments can also have a corrosive effect on people’s self-perceptions, digital community environments, and user behaviour. The language itself is not the primary defining feature of hate speech (Flew, 2021). Rather, it is the toxic viewpoints expressed and the incitement to violence that cause the greatest harm to marginalized and discriminated groups.

Memes are powerful tools that, when used to spread hate speech, can have global reach (Hermida, 2023). For example, the “Remove Kebab” meme often appeared in r/The_Donald threads calling for violence and open hatred against Muslims. In March 2019, the gunman who shot 50 people at a Christchurch, New Zealand, mosque described himself in a document he had drafted as a “Kebab Removalist,” a self-designation that originates from the “Remove Kebab” meme. In the wake of the Christchurch mosque shooting, users of r/The_Donald posted threads addressing the incident and violence against Muslims. When asked whether users from r/The_Donald were posting hate speech on the r/newzealand subreddit after the Christchurch attack, the moderator of r/newzealand pointed out that these users rely on suggestive wording and images, making it difficult to tell which posts should be acted upon and deleted.

With the example of the memes on r/The_Donald, it is easy to see how hate speech can take shelter within Reddit’s seemingly relaxed community atmosphere.

Reddit’s main moderation methods

Reddit’s main moderation methods involve a combination of community-driven governance and administrator-enforced platform policies. Over time, Reddit’s approach to moderation changed significantly, culminating in the decision to ban r/The_Donald as a statement of its new moderation requirements.

Reddit positions itself as a platform for creating community and fostering a sense of belonging, rather than as a space for attacking marginalized or vulnerable groups. However, Reddit has faced prolonged criticism for its lax management of hate speech, and its neutral platform governance, coupled with minimal top-down moderation, has led to Reddit being labeled “the dark corner of the internet.”

Reddit is known as a community-driven digital platform: it is primarily a collection of user-created and user-managed communities, and users can jointly determine the rules and direction of a subreddit. As a result, many Reddit communities have rules specifically tailored to their content. For example, in a humour community everyone must try to be funny, and in a feminist community everyone must respect women. This community-driven nature gives a subreddit’s own rules more control over user participation and contributions than Reddit’s site-wide content policy.

Reddit has long operated under a “user-run, not platform-run” model, and this reliance on the unpaid efforts of users has clearly contributed to the persistence of hate speech on the platform. Volunteer moderators on Reddit can enforce community rules, promote content creation, and resolve disputes by issuing bans, which certainly reduces the workload of Reddit’s administrators. Meanwhile, Reddit benefits from these moderators’ unpaid work while shifting responsibility for curating content and handling administrative tasks onto users themselves.

Additionally, Reddit’s underdeveloped moderation tools leave moderators dependent on third-party plugins to remove offensive content and ban users, which is often a tedious process. Since moderators are also unpaid, few are willing to take on this time-consuming work, which undoubtedly contributes to the proliferation of hate speech on Reddit.

Also worth mentioning is Reddit’s quarantine feature, rarely seen on other digital platforms, which aims to limit user access to flagged subreddits. Quarantine, however, only stops some hate speech from spreading; it does not remove the content completely.

Image From Policyreview

Platforms can use low-cost design solutions like quarantining as a compromise between containing antisocial activities and preserving freedom of speech (Chandrasekharan, 2022). But for users of a quarantined subforum, Reddit is not encouraging them to change their hateful behaviour so much as pushing them to leave Reddit for platforms with fewer regulations and restrictions. While the policy reduces access to hateful material on Reddit, it does not necessarily do so across the internet; it may instead push that material into less regulated spaces where it can continue unchecked, arguably making quarantine a double-edged sword. The migration of r/The_Donald’s users is a good example of this.

r/The_Donald user migration

Sadly, for digital platforms, moderating hate speech is like playing whack-a-mole. Destroy one gathering place and others will spring up, and there will always be people who create sites where speech is not restricted at all, precisely to avoid bans and moderation.

By the time Reddit removed r/The_Donald, the community was no longer at its peak of activity; many users had already fled to TheDonald.win. In fact, ever since r/The_Donald was quarantined, the community’s moderators had been promoting their new website, TheDonald.win, through pinned posts on r/The_Donald. In their posts on TheDonald.win, they spoke of fleeing Silicon Valley’s censorship system and starting anew.

TheDonald.win, like r/The_Donald, was rife with violent political and racial rhetoric and was one of the platforms used to plan the 2021 attack on the U.S. Capitol. However, an internal power struggle over the TheDonald.win domain led to the creation of Patriots.win and the closure of TheDonald.win.

Image From Patriots.win

To this day, Patriots.win still has a large number of active users. Despite the migration of users to the new platform, hate speech has not been eliminated. Private domains such as Patriots.win lack sufficient administrators and algorithmic moderation compared to larger platforms such as Reddit, and as a result Patriots.win has become an orgy of hate speech. Migration did lead to a decrease in posting activity, number of posts, active users, and new users on the new platform, but that reduction in activity may come at the expense of a more toxic and radical community (Horta Ribeiro, 2021).

While the users who fled r/The_Donald are still spewing hate in other corners of the internet, Reddit’s own community environment has certainly improved.

The Changing Face of Reddit’s Review Policy

Over the years, many critics have openly questioned why r/The_Donald was not banned sooner, even though it consistently violated Reddit’s policies on racism, bigotry, and the spread of disinformation. Being a toxic community itself, r/The_Donald had moderators who did little content management, so enforcement had to fall to Reddit’s platform administrators. But Reddit’s administrators long hesitated to get involved in content disputes, insisting that Reddit is a fair and unbiased space for discussion. Remaining “neutral” in such cases, however, valorizes the rights of the majority while often trampling over the rights of others (Massanari, 2017).

As hate speech festered and toxic communities grew, Reddit’s longstanding practice of neutrality and non-interference shifted.

In 2012, the subreddit r/Creepshots was boycotted by other users for posting suggestive or revealing photos of women taken without their consent. As a result, Reddit enacted a content policy that prohibits any suggestive or pornographic content involving minors; assaults on and harassment of individuals; incitement to violence; and assaults on and harassment of the broader community.

In 2015, Reddit introduced its quarantine policy, making it more difficult to access certain subreddits. Accessing or joining a quarantined subreddit requires bypassing a warning prompt. Additionally, to prevent accidental viewing, quarantined subreddits do not appear on the front page of r/all, their user counts are not visible, and they do not generate any revenue. A quarantined subreddit cannot be found via search or recommendations, meaning users need to find it via Google or already know its URL to access it. Quarantine resulted in a significant decrease in traffic to toxic communities on the Reddit platform. To demonstrate its commitment to free speech and transparency, Reddit also allows users to appeal a quarantine.

For a long time, r/The_Donald was a breeding ground for racism, misogyny, homophobia, and other forms of prejudice. The first step towards addressing this came in 2019, when Reddit quarantined r/The_Donald after its users repeatedly harassed and threatened public figures following the Oregon Senate Republican walkout. Although posts from r/The_Donald were then hidden from Reddit’s front page, users who insisted on visiting could still access the subforum, so while hate speech on r/The_Donald decreased, it did not stop.

In June 2020, Reddit’s content policies were updated, influenced by the COVID-19 pandemic, the Black Lives Matter movement, and heightened advertiser pressure, resulting in a more stringent stance against hate speech. For instance, the updated Rule 1 states, “Communities and users that incite violence or that promote hate based on identity or vulnerability will be banned.” This rule focuses more on banning hate speech and aims to protect vulnerable people on Reddit from bullying and marginalization.

Reddit Content Policy: https://www.redditinc.com/policies/content-policy

Image from Pingwest

At the same time, Reddit decided to completely ban r/The_Donald, which had around 800,000 subscribers, along with roughly 2,000 other communities deemed to be in violation of Reddit’s policies. Dramatically, the ban came on the same day that President Donald Trump’s Twitch channel was suspended.

Joe Mulhall, a senior researcher at the political action group Hope Not Hate, said: “They’re not so big that they can’t vet, they just don’t prioritize vetting. These platforms must make moderation a centrepiece of their policies, which means they have to be constantly on the lookout for emerging groups, rather than waiting until there’s public pressure and conversations about advertisers to moderate.”

Some have argued that Reddit’s removal of toxic groups in 2020 was more of a reputation management exercise than an ethical clean slate. Undoubtedly, though, Reddit’s content moderation methods are gradually being updated and upgraded, and the platform is slowly establishing a comprehensive content moderation framework that encompasses individual actions, system improvements, appeals processes, and enforcement measures.

What else can Reddit do about its hate speech?

I think moderation is always crucial and can be approached in several ways.

  1. Encouraging users to self-regulate: this involves establishing basic community norms and fostering a sense of responsibility for one’s own words, actions, and contributions to the community. Promoting “netiquette” can further encourage self-regulation among users.
  2. Closing loopholes on the Reddit platform: Reddit users can post links to harassing material hosted elsewhere, and because that content is not hosted on Reddit it may be treated as permissible; loopholes like this make it difficult for Reddit administrators to detect hidden hate speech.
  3. Providing professional training for platform content moderators: this would better recognize the role of moderators as key gatekeepers against hate speech. Platforms should also invest in diverse review teams that can understand and interpret content in different cultural contexts.

Finally, I believe that enforcement and sanctions from third parties are also very important for governing hate speech on digital platforms. In the short term, digital platforms can enter into partnerships with governments and with bodies such as the Global Network Initiative and the Global Alliance for Responsible Media, working within current legislation, to introduce proportionate content review policies and accountability measures; all parties should be urged to enhance the transparency of the regulatory process and to enforce it efficiently, so that users are held accountable for their policy violations. In the long term, diverse research can be supported by creating new kinds of globalized institutions grounded in different cultural backgrounds, which can take into account groups with different languages, cultures, and politics and help attend to the nuances of interactions on various digital platforms. These measures could result in digital platform governance that is relatively modest and healthy for users around the world.

Reference:

  • Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
  • Cortese, A. J. Paul. (2005). Opposing hate speech. Praeger Publishers.
  • Flew, T. (2021). Regulating platforms. Polity Press, pp. 91–96.
  • Hermida, P. C. de Q., & Santos, E. M. dos. (2023). Detecting hate speech in memes: a review. Artificial Intelligence Review, 56(11), 12833–12851. https://doi.org/10.1007/s10462-023-10459-7
  • Chandrasekharan, E., Jhaver, S., Bruckman, A., & Gilbert, E. (2022). Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit. ACM Transactions on Computer-Human Interaction, 29(4), 1–26. https://doi.org/10.1145/3490499
  • Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
  • Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., Stringhini, G., De Cristofaro, E., & West, R. (2021). Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–24. https://doi.org/10.1145/3476057
