Addressing Hate Speech and Online Harms: A Call for Legislative Reform and Global Accountability

Oliver Wen

Cyberbullying, hate speech and online harms have become a major concern for individuals and lawmakers around the world. The tragic story of Mia Janin, a 14-year-old girl who took her own life after being bullied online, highlights the urgent need for new regulatory laws that hold online platforms accountable and protect individuals. Mia’s father, Mariano Janin, has been speaking out, saying laws against online harms are crucial to keeping people safe.

This article looks at what happened after Mia’s death, the principles and background of hate speech and online harms, and potential solutions to the issue.

Mariano Janin said his daughter Mia was adored by her close friends. (Photo: BBC)

North London schoolgirl Mia Janin took her own life after being bullied by boys at her school. She was just 14, still a child and still maturing into adulthood.

The former deputy head teacher at her school mentioned that there were rumours circulating around the school: he was made aware of a boys’ WhatsApp group in which members rated the “attractiveness” of female students.

One of Mia’s TikTok videos was shared to the group chat, which had been created by boys at the school. The boys used the group chat to share faked nude photos of girls, and Mia was among the victims.

“Tomorrow’s going to be a rough day, I’m taking deep breaths in and out. I’m currently mentally preparing myself to get bullied tomorrow,” said Mia in her voice note to a friend on 10th March 2021, the night before she returned to school.

The day after Mia returned to school, her mother Marisa discovered her lifeless body in her bathroom at the family home. “Suddenly I heard a horrible scream from my wife. This time everything stopped for me and my wife. Life changed completely,” Mia’s father told the BBC.

Three years later, on 3rd March 2024, Mia’s father Mariano Janin told the BBC that the lack of social media regulation had contributed to these problems. To tackle an issue affecting young people globally, he called for fully functioning regulation and greater public awareness of hate speech and online harms.

It was time to be “more alert and I would like the system to work,” said Mr Janin.

Mia’s death showed why it’s important for people to develop a fundamental understanding of what social media algorithms are, how tech companies control them, and what could go wrong.

But before we can look into that, we need to talk about how the internet started and how it grew. This foundational stage provides crucial context for understanding the causes of issues such as hate speech and online harms.

The Background of Social Media Platforms and the Internet

The Internet. (Photo retrieved from Pixabay)

At first, many believed the Internet would be a place where anyone could share information freely, without borders or restrictions, an outlook often called “Internet libertarianism”. According to Shapiro (1999), Internet libertarianism is a philosophy or ideology that advocates minimal government intervention in, or regulation of, the Internet.

Shapiro emphasises individual freedom, freedom of speech, free markets, and limited government involvement in cyberspace. This outlook paved the way for the later commercialisation and platformisation of the internet with capitalist characteristics. However, the lack of government involvement and structured regulation created serious risks and set the stage for illegal use of, and harmful information on, the internet.

In the late 2000s and early 2010s, platforms like Facebook, X (formerly Twitter) and other social media networks gained widespread popularity worldwide, leading to the dominance of a few major online platforms. Keskin (2018), reviewing Van Dijck, Poell and de Waal’s The Platform Society, provided a comprehensive analysis of the rise of digital platforms and their implications for society, concluding that this phenomenon, known as platformisation, occurred as a few platforms became the centre of online activities, including socialising, content sharing and e-commerce.

Social media platforms. (Photo retrieved from Pixabay)

Companies such as Google, Meta (formerly Facebook) and X (formerly Twitter) generated significant profits primarily through advertising and e-commerce. These platforms recognised that the longer users spent on them, the more revenue they could generate from advertising. Zuboff (2019) discussed the implications of this economic model for privacy, democracy and human autonomy, providing insights into the “broader socio-political consequences of digital platforms’ profit-driven practices.”

According to Zuboff (2019), tech giants began collecting extensive data on users’ preferences, behaviours and interests to develop highly targeted advertising algorithms.

In March 2016, Instagram announced an important change to how its feed works. Until then, Instagram had shown posts in the order they were posted, known as a chronological feed. Under the new algorithm, users would instead see the posts Instagram predicted they would be most interested in, rather than simply the most recent ones.

“To improve your experience, your feed will soon be ordered to show the moments we believe you will care about the most,” Instagram explained on its official site. This is a significant example of personalising feeds based on individual user data: Instagram aimed to increase the effectiveness of product sales and user engagement.
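The difference between the two feed designs can be illustrated with a toy sketch. This is not Instagram’s actual algorithm; the scoring formula and weights below are invented purely for illustration of the general idea of ranking by predicted interest rather than by time.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float  # how long ago the post was published
    likes: int
    comments: int

def chronological_feed(posts):
    # Pre-2016 behaviour: newest posts first.
    return sorted(posts, key=lambda p: p.age_hours)

def ranked_feed(posts):
    # Post-2016 behaviour (simplified): order by an engagement score.
    # The weights here are invented for illustration only.
    def score(p):
        engagement = p.likes + 2 * p.comments
        recency_decay = 1 / (1 + p.age_hours)
        return engagement * recency_decay
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("friend", age_hours=1, likes=2, comments=0),
    Post("celebrity", age_hours=12, likes=900, comments=150),
]
print([p.author for p in chronological_feed(posts)])  # friend first
print([p.author for p in ranked_feed(posts)])         # celebrity first
```

Even in this toy version, the ranked feed systematically favours high-engagement content over recent content, which is the commercial logic the article describes.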

These algorithms became important tools for social media platforms to make more money and keep users on their platforms for longer, a strategy that worked well commercially. However, there were serious consequences. The algorithms contributed to the rise of hate speech and online harm, as platforms prioritised profit over regulation and privacy. This led to polarisation, extreme viewpoints, the spread of harmful information and the creation of filter bubbles, where users only see content that aligns with their existing beliefs.

“Free Market and Individual Freedom” Dream and Reality

The notion of the “free market” and “individual freedom” within Internet libertarianism has expanded far beyond its original expectations. According to Malhotra and Van Alstyne (2014), a profit-driven ideology with limited regulation has brought significant negative impacts to individuals, such as the following:

  1. Spread of explicit information

Teenagers on their phones. (Photo retrieved from Pixabay)

This happens through the algorithm’s tendency to prioritise content that generates high engagement, such as likes, comments and shares. Because pornographic and explicit content is sensational by nature, it can often gain significant engagement, leading the algorithm to spread it to a wider audience, including teenagers.

This brings us back to Mia’s case. Once teenagers such as Mia’s fellow students engage with explicit content by liking or interacting with a post, social media algorithms often continue to show them similar content, even if they did not intend to see more of it. This can gradually change teenagers’ attitudes towards harmful content, so that attitudes such as sexism and racism become more accepted over time.

  2. Spread of misinformation

Fake News. (Photo retrieved from Pixabay)

The focus of social media algorithms on making money rather than regulating content can contribute to the spread of sensational misinformation. Fake news and AI-generated deepfake videos or images, often designed to be eye-catching and explicit, can easily spread across the internet as a result.

In the context of the 2023 Australian referendum, targeted messaging from the No campaign worsened polarisation and undermined constructive dialogue between opposing sides. “A key piece of misinformation underpins the slogan of the ‘No’ campaign: ‘If you don’t know, vote no’. This mantra is premised on the constant claim put forward by News Corp that there are no details about the Voice,” said Victoria Fielding in an article in Independent Australia.

The faked nude photos of Mia and other female students, shared by her fellow students, are an example of spreading sexist content, misinformation and doxxing material, which caused serious mental health problems for the victims.

  3. Filter Bubble

Poor kitty exploring the world through the cage. (Photo retrieved from Pixabay)

Social media algorithms have played a significant role in creating filter bubbles, which “encapsulate widespread public fear that the use of social media may limit the information that users encounter or consume online” (Kitchens, Johnson, & Gray, 2020, p. 1619). Filter bubbles restrict users’ exposure to diverse perspectives and foster the adoption of increasingly extreme ideological positions.
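The feedback loop behind a filter bubble can be made concrete with a minimal simulation: each time the user engages with a recommended topic, the recommender raises that topic’s weight, so the same topic is more likely to be shown again. This is a toy rich-get-richer model, not any real platform’s system; the topic names and weights are invented for the example.

```python
import random

def recommend(weights):
    # Pick a topic in proportion to its current weight.
    topics = list(weights)
    return random.choices(topics, [weights[t] for t in topics])[0]

def simulate(steps, seed=0):
    random.seed(seed)  # fixed seed so the run is reproducible
    weights = {"sport": 1.0, "news": 1.0, "politics": 1.0}
    for _ in range(steps):
        topic = recommend(weights)
        # The user engages with whatever is shown, and the
        # recommender reinforces that topic.
        weights[topic] += 1.0
    return weights

print(simulate(100))
```

Starting from equal weights, whichever topic happens to get early engagement tends to snowball and dominate the feed, which is the narrowing effect the filter-bubble literature describes.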

Former U.S. President Donald Trump’s social media usage also highlighted how effectively sensational content can be leveraged to build a filter bubble. “Of his 100 most popular posts, 36 contained election-related falsehoods,” wrote Nate Rattner in an article on CNBC. This statistic demonstrates how Trump leveraged social media to spread misinformation, garnering significant engagement and amplifying his messages within his supporter base.

Make America Great Again. (Photo retrieved from Pixabay)

Trump successfully cultivated an environment where his supporters were less exposed to alternative viewpoints and more inclined to adopt extreme ideological positions. An example could be the “Chaos in Washington” back in 2021. “Thousands of supporters of President Trump have stormed the US Capitol building, venting their anger at the victory of Joe Biden in the presidential election. They forced the evacuation and lockdown of Congress, where lawmakers were preparing to approve the election result,” said Huw Edwards on BBC News.

Trump supporters were surrounded by like-minded individuals, making them less open to considering alternative viewpoints.

  4. Polarisation

Opposing countries. (Photo retrieved from Pixabay)

The influence of filter bubbles has contributed to a more divided and oppositional society in which “people become more set in their beliefs and less open to considering different perspectives”, a phenomenon known as polarisation (Encrypt, 2019).

An example of polarisation is the change in U.S. citizens’ opinions of China over time. Gallup has tracked China’s image in the U.S. at least once a year since 1996. According to their data, “China was viewed in a positive light by 53% in 2018. However, it fell to 41% in 2019, 33% in 2020, and 20% in 2021 and 2022 during the height of the COVID-19 pandemic. Before 2023’s 15% rating, 20% was the lowest on record.” Social media algorithms, through the dissemination of misinformation and the creation of filter bubbles, played an important role in this shift.

This decline in favourable opinions of China among U.S. citizens underscores polarisation in society. As attitudes towards China became increasingly negative, people became more entrenched in their beliefs and less open to considering alternative perspectives.

How do we get it right?

To effectively tackle the negative impacts of social media algorithms and online harms, it is crucial to put in place comprehensive legislative reforms and global accountability measures. These steps should balance individual freedom with the need for regulation that safeguards users from harmful content and malicious activity online. One major obstacle is that the current owners of social media platforms will do everything possible to prevent new regulations that might reduce their profits. They profit from the engagement that misinformation and harmful content generate, so they resist any changes that could affect their income.

One strategy is to establish mandatory and robust regulatory frameworks that hold social media platforms responsible for the content shared on them. This might involve setting clear guidelines for acceptable content, implementing systems for content moderation, and imposing penalties for violations of these rules.
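The moderation component of such a framework can be sketched in a few lines: automated flagging feeds a human review queue. Real platforms use machine-learning classifiers rather than word lists; the blocklist terms below are placeholders, and the whole pipeline is a simplified illustration of the idea, not a production design.

```python
# Placeholder terms standing in for a real policy-defined list.
BLOCKLIST = {"slur1", "slur2"}

def flag_for_review(posts):
    """Return the posts that match the blocklist, in order,
    so a human moderator can review them."""
    queue = []
    for post in posts:
        words = set(post.lower().split())
        if words & BLOCKLIST:  # any blocklisted word present
            queue.append(post)
    return queue

print(flag_for_review(["hello world", "slur1 example"]))
# → ['slur1 example']
```

Keeping a human in the loop after automated flagging is one way to impose accountability without handing every removal decision to an algorithm.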

Additionally, promoting transparency in algorithmic processes and data collection practices can increase user awareness and enable individuals to make informed decisions about their online interactions.

Moreover, educating users, especially young people like Mia’s fellow students, about the potential risks of online activity and equipping them with the skills to critically evaluate information can help mitigate the spread of misinformation and reduce the impact of filter bubbles.

Governments, tech companies, civil society organisations, and academia need to collaborate in developing effective solutions to these complex challenges. By working together to identify and tackle the root causes of online harms, we can create a safer and more inclusive digital environment for everyone.

Reference list

BBC News. (2021, January 7). Chaos in Washington as Trump supporters storm Capitol and force lockdown of Congress [Video file]. Retrieved from https://www.youtube.com/watch?v=UXR_bqyAy4E

BBC News. (2022, August 1). How algorithms affect what you see on social media. Retrieved from https://www.bbc.com/news/articles/cn0nd1gnj4lo

BBC News. (2024, March 3). Mia Janin took own life after bullying – inquest. Retrieved from https://www.bbc.com/news/uk-68456057

Duggan, M. (2017, July 11). Online harassment 2017. Pew Research Center. https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017

Encrypt, S. (2019, February 26). What are filter bubbles & how to avoid them. Search Encrypt Blog. Retrieved from https://choosetoencrypt.com/search-engines/filter-bubbles-searchencrypt-com-avoids/

Fielding, V. (2023, November 12). No campaign built on lies and misinformation. Independent Australia. Retrieved from https://independentaustralia.net/politics/politics-display/no-campaign-built-on-lies-and-misinformation-,17867

Gallup. (n.d.). Record low: Americans’ view of China. Retrieved from https://news.gallup.com/poll/471551/record-low-americans-view-china-favorably.aspx

Instagram. (2016, March 1). Shedding more light on how Instagram works. Retrieved from https://about.instagram.com/blog/announcements/shedding-more-light-on-how-instagram-works

Keskin, B. (2018). Van Dijck, Poell, and de Waal, The Platform Society: Public Values in a Connective World. Markets, Globalization & Development Review, 3(3). https://doi.org/10.23860/MGDR-2018-03-03-08

Kitchens, B., Johnson, S. L., & Gray, P. (2020). Understanding echo chambers and filter bubbles: The impact of social media on diversification and partisan shifts in news consumption. MIS Quarterly, 44(4), 1619. https://doi.org/10.25300/MISQ/2020/16371

Malhotra, A., & Van Alstyne, M. (2014). The dark side of the sharing economy … and how to lighten it. Communications of the ACM, 57(11), 24-27. https://doi.org/10.1145/2668893

Rattner, N. (2021, January 13). Trump’s tweets: A legacy of lies, misinformation and distrust. CNBC. Retrieved from https://www.cnbc.com/2021/01/13/trump-tweets-legacy-of-lies-misinformation-distrust.html

Shapiro, A. L. (1999). A political economy of the internet: Libertarian and communitarian perspectives. The Information Society, 15(2), 71–79. https://doi.org/10.1080/019722499128616

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. https://www.hbs.edu/faculty/Pages/item.aspx?num=56791
