Social media is deeply integrated into everyday life. In Australia, Aboriginal people use social media to connect with family, to work, and to seek help, and the Aboriginal way of life has been reshaped through it. However, the rise of social media has also created problems, and Aboriginal online identities have given rise to online activism. Social media activism is a movement in which users deploy hashtags to voice their opinions, protest, and advocate for and defend their rights online (Baldy et al., 2021). Research suggests that almost all Indigenous people have suffered traumatic or disruptive experiences on social media (Bronwyn & Ryan, 2018). The internet can both express racial identity and amplify racism (Matamoros-Fernández, 2017). On Twitter, anyone can create and join communities of like-minded users, so extreme racist communities that spread hate also exist on digital platforms. As a result, Aboriginal people can come under direct attack from speech on these platforms: on Twitter and Facebook, where hate speech and racism run rampant, Indigenous people often encounter abusive comments on the pages they visit. Indigenous people's use of social media can therefore have a detrimental effect on their health.
The Harm of Hate Speech
For example, a Western Australian Aboriginal community page featured a story about a 14-year-old Aboriginal boy who was killed by a non-Aboriginal driver. The page was subsequently accused of stirring up racial tensions between Aboriginal and non-Aboriginal people and of openly inciting violence against Aboriginal people (Bronwyn & Ryan, 2018). This community page alone featured a great deal of racial rhetoric and hate speech. In this way, social media promotes the reproduction of violence and of the hierarchical power, repression, and discrimination suffered by Aboriginal people (Bronwyn & Ryan, 2018). In addition, Aboriginal people are vulnerable to stereotyping in their activities on social media. Digital platforms host various memes that ridicule Indigenous people, and Indigenous people who are seen as "too white" have their Aboriginality questioned (Bronwyn & Ryan, 2018). At the same time, Indigenous identity itself may be threatened. As a result, while some Indigenous people may try to remain invisible and avoid engaging with such content, their silence is just as burdensome and traumatic (Bronwyn & Ryan, 2018). I will use the social media activism campaign #Indigenousdads as a case study.
Case Study: #Indigenousdads Campaign
Figure 2: A cartoon by Bill Leak, posted by ABC Radio Melbourne, 2016. Source: ABC News
The collective rage of Indigenous peoples lies behind the social media activism of #Indigenousdads. They are victims of racist and discriminatory behaviour as well as collective trauma (Carlson et al., 2017). Specifically, a cartoon by Bill Leak, editorial cartoonist for an Australian newspaper, depicted Indigenous fathers as negligent and careless, sparking the hashtag campaign (Carlson et al., 2017). Once published, the cartoon ignited a heated public debate, and Aboriginal people saw it as racist toward their community. Indigenous Australians experienced a profound sense of collective trauma and used public media spaces to process and cope with it (Carlson et al., 2017). In the cartoon, a police officer returns an Aboriginal boy to his beer-can-holding father, who has to ask the officer what the boy's name is; this is followed by racist and derogatory remarks directed at the Aboriginal father (Carlson et al., 2017). Indigenous anger was further fuelled by public statements from Prime Minister Malcolm Turnbull and former Prime Minister John Howard, who claimed the cartoon was not racist (Carlson et al., 2017). Feeling deeply offended and discriminated against, Aboriginal people began using #Indigenousdads to post photos of proud Aboriginal fathers on social media. Twitter is a medium that Indigenous peoples and their non-Indigenous allies use to organise and hold conversations (Baldy et al., 2021), which is why users adopted the hashtag #Indigenousdads on Twitter to raise awareness and destigmatise Indigenous fathers. Indigenous Australians have used this campaign to express their political stance (Baldy et al., 2021). Moreover, through #Indigenousdads, Twitter users can effectively raise awareness of the discrimination Indigenous people still face in the national mainstream media.
Australia's Aboriginal people have been subjected to hatred and discrimination for a long time; a recent report by the charity Mission Australia shows that Aboriginal Australians suffer psychological distress at twice the rate of non-Aboriginal people and are more likely to be admitted to hospital because of self-harm (Carlson et al., 2017). The stresses placed on this subjugated group are therefore lethal at both a mental and a physical level (Carlson et al., 2017). According to Carlson et al. (2017), the rise of social media has brought unprecedented grief to Indigenous Australians.
Figure 3: An example of an #Indigenousdads post, 2016. Source: Twitter
According to survey research (Carlson & Kennedy, 2021), Aboriginal people attribute the hate speech they receive online to the anonymity of the internet: anonymity shields racists from any harm or consequence for their speech. For Aboriginal people, by contrast, presenting themselves as Aboriginal online and joining Aboriginal communities makes their personal information easy to expose (Carlson & Kennedy, 2021). Moreover, negative and harmful content festers easily on social media; in the case of #Indigenousdads, the racist cartoon spread at great speed and generated extensive discussion. Twitter users shared images of their own Indigenous fathers and openly expressed their condemnation of Bill Leak's cartoon and their dissatisfaction with the government's attitude. This social media activism helped dispel Indigenous stereotypes; the campaign did not resolve any major inequality on its own, but it formed part of a broader fight against racism (Petray, 2015). Social media can therefore be regarded both as a tool through which Indigenous people are hurt and as a source of new forms of action for breaking down discrimination (Baldy et al., 2021). #Indigenousdads united Indigenous people by allowing them to express their love and anger and to show their emotions (Baldy et al., 2021). A study by McNair Ingenuity Research found that Indigenous Australians use Facebook at a rate 20% higher than the national average (Montgomery, 2014). This demonstrates that Aboriginal people are happy to use Facebook to communicate, yet once they identify themselves, Aboriginal communities on sites such as Facebook are often flooded with racist comments (Montgomery, 2014).
Regulating Online Hate Speech
Do social media companies and Australian regulators have a duty to regulate online hate speech in this situation? Because users can be anonymous online, hate speech is difficult to control and often falls beyond the reach of legal regulation, which is why the internet has become a tool for racists to incite hatred. Indeed, regulating online hate speech is hard precisely because the internet is, to a large extent, unregulated by law. The Australian regulator, the Australian Communications and Media Authority (ACMA), has therefore introduced rules to regulate internet content (Mason & Czapski, 2017); under the law, it can remove or filter objectionable speech (Mason & Czapski, 2017). At the same time, digital platforms have their own provisions regulating user behaviour. Facebook, for example, is seen as an important site for the spread of hate speech, and it has introduced rules stating that posts containing hate content or bullying comments will be removed directly by the platform (Jakubowicz et al., 2017). #Indigenousdads is an instance of Indigenous activism in which social media served as a platform to voice anger and break stereotypes. But it also carries a warning for Aboriginal people: Indigenous identity can be threatened on social media, and through #Indigenousdads they are directly exposed in online public space. Twitter and Facebook do remove racist comments, although not always promptly (Matamoros-Fernández, 2017). A major challenge for regulation, however, is reaching consensus on whether particular speech constitutes racism. Social media sites frequently apply narrow definitions of what counts as a racist or hateful act (Montgomery, 2014), because some users argue that a post was just a joke and use ambiguous words to exploit that grey area (Jakubowicz et al., 2017).
According to an Australian Human Rights Commission (AHRC) report, complaints of online racial hatred have increased by 41% since the introduction of anti-discrimination laws in Australia and racial vilification laws in some states (Montgomery, 2014). Racial vilification legislation therefore has an important role to play in addressing hate speech (Montgomery, 2014), and regulators have both a duty and the capacity to safeguard the physical and mental health of Indigenous people. However, there are limits to both social media regulation and government legislation. Specifically, once a user is anonymous or located overseas, there is no practical way to enforce rules against them. Additionally, Facebook's ban is vague, suggesting that the platform does not provide extensive or substantial regulation of online speech (Montgomery, 2014). For instance, many Australian pages hosting memes about Indigenous people have gone unremoved for months (Montgomery, 2014).
Figure 4: The Aboriginal Memes page on Facebook, 2012. Source: The Sydney Morning Herald
Social media platforms are growing in popularity and becoming hubs for the public. Behaviour on social media must adhere to a particular set of standards, and online rules are becoming stricter (Flew et al., 2019). Social media platforms bear responsibility for policing hate speech because, by enforcing rules, they can protect Indigenous people from racism while maintaining their own brand reputation (Flew et al., 2019). Media companies can keep Indigenous people safe from harm by filtering out inappropriate speech and deleting hate speech, but not everyone favours such regulation; balancing freedom and regulation is a difficult task. According to research (Jakubowicz et al., 2017), users want a certain level of regulation, but not one that affects their free speech. Some users feel they need freedom of expression and that an internet under state and media control would make them feel spied on and expose their privacy (Nemes, 2002). Others do not believe that online racism can cause substantial harm (Nemes, 2002), and some people do not believe their own statements are discriminatory, just as Bill Leak did not believe his cartoon discriminated against Indigenous people, stating that it was only a truthful depiction. As a result, media companies and governments have not given regulation their full attention or made substantial progress on it.
In conclusion, Indigenous Australians can actively access communication and achieve their goals through social networks, using social media activism (#Indigenousdads) to fight back. In doing so, they have been able to seek help online and unite the Indigenous community. Nevertheless, social media activism can also be a double-edged sword: by participating in the movement, Indigenous Australians unwittingly reveal their Aboriginality, which can attract racists who post hate speech against them and even threaten them, contributing to psychological distress and a rising suicide rate among Aboriginal people. In this context, media companies and government institutions have issued regulations to curb online hate speech; while these regulations have not entirely stopped Aboriginal people from being victimised, they have had an important effect in reducing the harm done.
Jakubowicz, A., Dunn, K., Mason, G., Paradies, Y., Bliuc, A.-M., Bahfen, N., Oboler, A., Atie, R., & Connelly, K. (2017). Cyber racism and community resilience: Strategies for combating online race hate. Springer International Publishing.
Baldy, C. R., Belarde-Lewis, M., Berglund, J., Carlson, B., Cote-Meek, S., Duarte, M. E., Dutta, M., Elers, P., Elers, S., Farrell, A., . . . Zheng, C. (2021). Indigenous peoples rise up: The global ascendency of social media activism. Rutgers University Press. https://doi.org/10.36019/9781978808812
Leak, B. (2016). Bill Leak cartoon in The Australian an attack on Aboriginal people, Indigenous leader says [Photo]. ABC News. Retrieved from: https://www.abc.net.au/news/2016-08-04/cartoon-an-attack-on-aboriginal-people,-indigenous-leader-says/7689248
Bronwyn, C., & Ryan, F. (2018). Social media mob: Being Indigenous online. https://doi.org/APO-234656
Carlson, B., & Kennedy, T. (2021). Us Mob Online: The Perils of Identifying as Indigenous on Social Media. Genealogy (Basel), 5(2), 52. https://doi.org/10.3390/genealogy5020052
Carlson, B. L., Jones, L. V., Harris, M., Quezada, N., & Frazer, R. (2017). Trauma, shared recognition and Indigenous resistance on social media. Australasian Journal of Information Systems, 21. https://doi.org/10.3127/ajis.v21i0.1570
European Economic and Social Committee. (2022). Freedom of expression is not a licence to engage in hate speech [Photo]. Retrieved from: https://www.eesc.europa.eu/en/news-media/news/freedom-expression-not-licence-engage-hate-speech
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1
Griffen, R. (2016). Twitter [Photo]. Retrieved from: https://twitter.com/RyanJGriffen/status/761846500968148992/photo/1
Mason, G., & Czapski, N. (2017). Regulating cyber-racism. Melbourne University Law Review, 41(1), 284-340.
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130
Mores, A., & Lowe, A. (2012). Contents removed from racist Facebook page [Photo]. The Sydney Morning Herald. Retrieved from: https://www.smh.com.au/technology/contents-removed-from-racist-facebook-page-20120808-23tr1.html
Montgomery, H. (2014). The internet: the benefits, problems and legal difficulties for Indigenous Australians. Indigenous Law Bulletin, 8(14), 19-23.
Nemes, I. (2002). Regulating hate speech in cyberspace: Issues of desirability and efficacy. Information & Communications Technology Law, 11(3), 193-220.
Petray, T. (2015). Taking back voice: Indigenous social media activism. AQ: Australian Quarterly, 86(1), 24-27.