Who decides what you can post online?

Pulling Back the Curtain on Social Media Content Moderation

Introduction 

Did you know that the decisions about which content gets removed from your feeds each day are made by workers all over the world, many of them paid only minimum wage? On social media platforms such as Facebook, Twitter, YouTube and TikTok, content moderation is the process that decides what billions of users are allowed to post. Despite its significance, the process is largely invisible. And as an industry, it is a disorganized business characterized by unclear rules, human bias and a lack of public accountability.

Privacy & Digital Rights

Tarleton Gillespie argues that moderation is one of the most important things platforms do: curating material to keep users engaged, scrolling and consuming advertisements (Gillespie, 2010). Yet most platforms keep their moderation practices hidden behind the scenes to preserve the impression that they are merely neutral “conduits.” The public usually only glimpses the process when a famous person’s controversial post is taken down or activists’ accounts are shut down, sparking instant outrage across the internet. So who determines what we can and cannot publish on the web? In the early phases of the social media companies’ growth, moderation was handled by ad hoc means, with young programmers and senior executives in Silicon Valley offices making snap decisions (Burgess et al., 2017). As the platforms expanded rapidly, they outsourced the dirty work of evaluating content and enforcing constantly changing guidelines to a multitude of contractors, typically based overseas in regions such as India and the Philippines. These moderators work under difficult conditions, making hundreds of high-stakes decisions every day, often within a single second each, about breaches of the platforms’ complex policies, with little guidance, no psychological support and no room for nuance (Suzor, 2019).

Nor are moderation standards objective, even when moderators try to approach politically charged material neutrally. Moderators’ own beliefs and cultural preferences become the lens through which hazy, unreliable policies are interpreted. The result is shifting lines and strange double standards, such as Facebook allowing Playboy centerfolds while blocking photos of mothers breastfeeding, or YouTube demonetizing LGBTQ creators’ content while leaving homophobic slurs online (Rizos, 2012).

Inconsistencies like these expose the flawed premise behind attempts to write global guidelines that define “acceptable speech” in a universally applicable way. Context and intention matter, but they are routinely overlooked by moderators who have only a couple of seconds to assess each post. Irony, in-group jokes and satire regularly trip up both algorithms and overworked human reviewers, so counter-speech and anti-hate content is often removed in error. Without stable standards, moderation becomes a perpetual reactive race: platforms keep expanding their guidelines and lists of banned material in response to public relations disasters and “please explain” demands from legislators and journalists. Yet policies made on the fly in this way tend to entrench and aggravate existing inequalities and power disparities in society (Ali et al., 2022).

Take Indigenous Australian activist Celeste Liddle, whose photo was removed from Instagram because it showed women without shirts wearing traditional body paint (Carlson & Frazer, 2018). The incident highlighted Instagram’s failure to accommodate non-Western cultural practices. Similarly, Rupi Kaur’s photographs depicting menstruation were removed as inappropriate, reinforcing the patriarchal shame and stigma that surround women’s bodies (Prasetyani et al., 2023), while countless sexualized bikini photos uploaded by white models remain online. Removing posts without just cause is deeply frustrating for users, who receive no explanation and have few options to challenge the decision, but that is not the only problem. At a more fundamental level, vague rules advantage those who conform to dominant cultural norms, at the expense of marginalized communities and the political dissent that democracy depends on.

LGBTQ people and members of ethnic minorities are disproportionately censored under prohibitions on vague concepts like “graphic violence” and “sexually suggestive content.” Most moderators are men in their twenties, steeped in Western cultural and gender norms, and posts drawing attention to police violence against African Americans are routinely removed (Cubbage, 2014). Neo-Nazi and men’s rights groups, by contrast, often operate with little fear of sanction.

Artificial Intelligence (AI), Automation, Algorithms & Datafication 

With billions of pieces of content created each day and decisions to be made within seconds, platforms rely on crude and imprecise techniques, including algorithms and keyword filters, as well as low-wage human labour, to sift through the sea of information. The human brain can make sophisticated judgments about meaning, cultural context and intent (Chen, 2020); overworked, undertrained moderators and automated systems cannot. Vague guidelines and heavy reliance on blunt automated flagging mechanisms encourage over-moderation and let users have content restricted at the press of a button. From Rohingya activists documenting ethnic cleansing in Myanmar to Egyptian anti-torture campaigners and Black Lives Matter groups, there are numerous instances of trolls banding together to abuse reporting systems and get content and accounts they dislike taken down (Kaplan, 2015).
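The weakness of keyword filtering is easy to illustrate. The sketch below is a hypothetical, minimal example, not any platform’s actual system: a simple blocklist filter of the kind these crude automated techniques resemble. It flags a breast cancer awareness post, documentation of violence and historical research, while a post that disguises abuse with spelling tricks slips straight through, because the filter matches strings rather than meaning, context or intent.

```python
# Hypothetical sketch of a naive keyword filter, for illustration only.
# Real platform systems are proprietary; this merely shows why plain
# string matching cannot capture context or intent.

BLOCKED_TERMS = {"breast", "kill", "nazi"}  # invented example blocklist


def flag_post(text: str) -> bool:
    """Return True if any blocked term appears as a word in the post."""
    words = text.lower().split()
    return any(term in words for term in BLOCKED_TERMS)


posts = [
    "Join our breast cancer awareness walk this weekend",     # benign: flagged
    "Documenting attacks meant to kill Rohingya villagers",   # documentation: flagged
    "Reading nazi propaganda archives for a history thesis",  # scholarship: flagged
    "u r worthless, k!ll yourself",                           # abuse: not flagged
]

for post in posts:
    print(flag_post(post), "-", post)
```

A human reader instantly separates awareness-raising, documentation and scholarship from abuse; a filter like this cannot, which is one reason over-removal of counter-speech and under-removal of disguised harassment happen at the same time.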

Given the sheer volume of content flowing through platforms designed to prioritize engagement and virality above all else, mistakes are inevitable and enforcement is inconsistent. But the consequences of those mistakes are not evenly distributed across social groups (Lim & Sng, 2020). When marginalized and oppressed people are silenced, the harm is not only to free expression and their ability to participate in the public square; it also threatens the health of our digital ecosystem and our democracy. Most users never learn why their content was flagged or their account deactivated. Only the highest-profile cases, involving heads of state or celebrities, attract enough news coverage to force platforms into public explanations. For the ordinary user the process remains opaque, with no visibility into how decisions are made and no way to challenge the outcome.

Technology companies do not want us looking too closely at this messy process. They give moderators instructions that are deliberately vague and incomplete, and forbid them from telling users exactly why their content was removed. Exposing the chaos of moderation would shatter the comforting notion that platforms are neutral conduits and reveal their enormous power over what counts as acceptable expression in the digital public sphere. Rather than blaming individual moderators, who are underpaid and forced to make hard calls in difficult circumstances, we should recognize this as a systemic problem rooted in business structures and platform priorities geared toward capturing attention and raising venture capital rather than serving the common good (Crawford, 2021). Left to themselves, platforms have little reason to fundamentally change course toward greater transparency and accountability, even when that means failing to stem the flow of hateful propaganda, misinformation, extremism and harassment.

This is why it is not enough for platforms simply to hire more outsourced moderators and tinker with their rules at the margins (Crawford, 2021). There must be a real change in how online speech is governed and monitored, including legally mandated transparency standards, robust due process safeguards and regular government-required human rights impact assessments. Where platforms fail to meet clear, consistently applied content standards grounded in international human rights law, penalties such as substantial fines should be considered.

Hate Speech and Online Harms

Much of the discussion about content moderation has focused on false positives: content removed even though it does not violate any policy. But under-enforcement is just as serious a problem, leaving harassment campaigns, hate speech and incitement to violence unchecked. Extremists have repeatedly used social media platforms to attack minorities and vulnerable groups: the Myanmar military used Facebook to promote violence against Rohingya Muslims, neo-Nazi networks organized on Twitter to harass Jewish journalists, and “Plandemic” COVID conspiracy videos spread widely (Nazar & Pieters, 2021).

These online harms spill over into the real world, damaging reputations and undermining targets’ ability to participate fully in civic and social life. Society’s biases mean the threshold for recognizing hate speech against minorities is often set unjustly high, while controversial influencers and elected officials are given wide latitude in the name of newsworthiness. Meanwhile, context-blind algorithms and reactive policies have left Black activists, sex educators and other vulnerable users barred from discussing injustice and racism. Automated content filtering disproportionately restricts LGBTQ creators and creators of color (Lucero, 2017), and frequently misreads discussions of lived experience of racism and homophobia as hate speech itself.

To better safeguard marginalized groups, platforms must proactively enforce clear hate speech policies under human oversight and strengthen due process and avenues of appeal. Policy teams should include members of affected communities, seek input from outside experts such as anti-racism organizations, and conduct human rights impact assessments to identify and minimize the harmful effects of moderation policies. Defining workable rules for hate speech and online harm is an enormous challenge at the speed and scale of the internet (Lucero, 2017), and we cannot expect perfection. But tech companies should be far more transparent about how these crucial judgments are made and enforced, and should deepen their cooperation with civil society, researchers and affected communities to refine and improve them.

Conclusion 

In conclusion, we as citizens have an obligation to push government officials, civil society watchdogs, academics and the platforms themselves to work together to rethink content moderation so that it becomes more democratic, accountable, transparent and grounded in human rights values. Without that collaboration, we will keep getting the same flawed, biased and unbalanced results. To build a system of platform governance worthy of democratic values and the digital public sphere, we must look behind the curtain at how expression online is controlled.

References

Ali, A., Tasawar Abdul, H., Rana Tahir, N., Irfan, S., Hyungseo Bobby, R., & Heesup, H. (2022). Preparing for the “Black Swan”: Reducing Employee Burnout in the Hospitality Sector Through Ethical Leadership. Frontiers in Psychology, Article 1009785. https://doi.org/10.3389/fpsyg.2022.1009785

Burgess, J., Poell, T., & Marwick, A. E. (2017). The SAGE Handbook of Social Media. SAGE Publications Ltd. http://digital.casalini.it/9781473995802


Carlson, B., & Frazer, R. (2018). Yarning circles and social media activism. Media International Australia, 169(1), 43-53. https://doi.org/10.1177/1329878X18803762 

Chen, S. (2020). Social Concern, Government Regulation, and Industry Self-Regulation: A Comparison of Media Violence in Boonie Bears TV and Cinematic Creations. Sage Open, 10(4), 2158244020963136. https://doi.org/10.1177/2158244020963136 

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1-21). Yale University Press.

Cubbage, J. (2014). African Americans and social media. Social Media: Pedagogy and Practice, 103-127. 

Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347-364. https://doi.org/10.1177/1461444809342738 

Kaplan, A. M. (2015). Social Media, the Digital Revolution, and the Business of Media. International Journal on Media Management, 17(4), 197-199. https://doi.org/10.1080/14241277.2015.1120014 

Lim, F. K., & Sng, B. B. (2020). Social media, religion and shifting boundaries in globalizing China. Global Media and China, 5(3), 261–274. https://doi.org/10.1177/2059436420923169 

Lucero, L. (2017). Safe spaces in online places: social media and LGBTQ youth. Multicultural Education Review, 9(2), 117-128. https://doi.org/10.1080/2005615X.2017.1313482 

Nazar, S., & Pieters, T. (2021). Plandemic Revisited: A Product of Planned Disinformation Amplifying the COVID-19 “infodemic” [Original Research]. Frontiers in Public Health, 9. https://doi.org/10.3389/fpubh.2021.649930 

Prasetyani, N. Y., Nugi Fitriafi, T., & Maisarah, M. (2023). Mental Process in Milk and Honey Poems Collection by Rupi Kaur. Journal of English Language Teaching, Literatures, Applied Linguistic (JELTLAL), 1(2), 34-53. https://merwinspy.org/journal/index.php/jeltlal/article/view/85 

Rizos, D. (2012). Lad magazines, raunch culture and the pornification of South African media. Agenda, 26(3), 38-49. https://doi.org/10.1080/10130950.2012.718541 

Suzor, N. P. (2019). Lawless: The secret rules that govern our digital lives (pp. 10-24). Cambridge University Press.
