LGBTQ and Hate Speech on Social Media Platforms


Hate speech is defined as the incitement or expression of hatred towards a group of people or their characteristics, such as "race, ethnicity, gender, religion, nationality and sexual orientation" (Flew, 2021, p. 115). It grows out of hostility towards a particular object or group, much as some parents have long held the conventional wisdom that video games are bad for children's development. In recent years, with the growth of the internet, more and more hate speech has been published on social media platforms such as Twitter and Facebook. Unlike most trending topics, hate speech is fundamentally unacceptable; there is no doubt about that. So why is it still so widespread?

Because social media users often post under anonymous IDs, the boundaries of their ethics become highly blurred, and they are far less likely to fear reprisal for saying something offensive online than in person. Someone who insulted a Black stranger on the street might get beaten up, whereas the same words on social media have little impact on their life in the real world; the most they risk is online backlash or suspension of their account. Since ethics and morality cannot restrain these people, legal and platform regulation is needed. But how can hate speech be regulated effectively, and how can it be distinguished from freedom of expression?

The effects of hate speech

There is no doubt that, like any violent act, verbal violence such as online hate speech harms mental health, provoking reactions ranging from anger to depression and even suicide. The worst affected group is teenagers, whose age makes them inherently more dependent on social media than adults and whose moral values are still immature and easily influenced by the outside world. Victims can struggle to digest their negative emotions, which leads them to doubt themselves. According to research published by the American Psychological Association, attempted suicides decreased overall by 16% annually in states that passed hate crime laws protecting LGBTQ individuals (American Psychological Association, 2023). These numbers show that hate speech about LGBTQ people can be deadly. Such harm is not limited to the LGBTQ community: racial minorities, women and other groups are also hurt by hate speech. In a study of anti-Muslim hate speech discussed in Ștefăniță and Buf's paper, the anonymity of the harmful words left victims worrying that anybody could assault them, which made them consider withdrawing from society; victims also felt that the threats they read online could become reality at any time (Ștefăniță & Buf, 2021, pp. 50-51).

Case Study

The LGBTQ community has long been a controversial presence on the internet, with a large section of the population resisting understanding or accepting it. Since Elon Musk's acquisition of Twitter, there has been a spike in hate speech against LGBTQ people on the platform; the graph below compares the data.

[Figure: anti-LGBTQ slur usage on Twitter before and after Musk's takeover (Silberling, 2022)]

The reason for this is that, after taking over Twitter, Musk posted a poll asking whether there should be an amnesty for some suspended accounts, and he reduced restrictions on speech in the name of freedom of expression (Montclair State University, 2022). This has led users to post hate speech with greater impunity.

[Figure: increase in "groomer" hate speech on Twitter (Montclair State University, 2022)]

Grooming is when a person engages in predatory conduct to prepare a child or young person for sexual activity at a later time (Department of Education, n.d.). On Twitter, however, the term "groomer" is currently used to defame LGBTQ people: by conjuring fictitious predators who harm children, it serves as a dehumanising label to target, criminalise and isolate an already marginalised population (Montclair State University, 2022). Countless innocent users have been vilified as groomers simply for admitting they are LGBTQ, which is undoubtedly hate speech. Before Elon Musk took over Twitter, calling LGBTQ people groomers was explicitly covered by Twitter's hate speech policy. So why would Musk condone such defamation?


Researchers at the Center for Countering Digital Hate (CCDH) analysed five accounts that share large volumes of anti-LGBTQ conspiracy theories and found that they generate up to $6.4 million a year in advertising revenue for Twitter (Perry, 2023). This makes one wonder whether Musk's loosening of Twitter's speech restrictions was driven by profit. It is perhaps unavoidable that businessmen put profit first, and advocating freedom of speech is legitimate, but Twitter has blurred the boundary between free speech and the condoning of hate speech, and the harm done to the defamed and vilified LGBTQ community is ignored by the platform.

Alyssa MacKenzie is a runner and singer/songwriter from Orlando, and an LGBTQ activist. She was called a groomer and a paedophile simply for being transgender, and she highlighted the harm being done to the LGBTQ population. Yet some of the comments on her tweet told her to ignore it; some users who had themselves been called groomers said it was sad but that one needs to learn to ignore it, and others said they would not be affected unless they were convicted. These comments show that some targets of hate speech do not realise they are being hated, do not understand that hate speech on social media can have serious personal consequences, or feel there is nothing they can do but learn to live with it. The impact of hate speech is not limited to its intended targets; it also affects bystanders, who may become desensitised and come to view it as less offensive and more acceptable (Gorenc, 2022, p. 417).

Imran Ahmed, CEO of the Center for Countering Digital Hate, said, "Twitter must decide if they believe in the fundamental rights and freedoms of LGBTQ+ people, or if they want to continue profiting from and normalising hate" (Perry, 2023). Twitter cannot simultaneously profit from hate speech against LGBTQ people and advocate for LGBTQ rights; it has to choose. If it chooses to ignore the hate speech, it will quickly attract large numbers of LGBTQ haters who can express their hatred with impunity, which will undoubtedly generate more substantial profits. It is no coincidence that Ahmed echoes Alyssa's point, saying "that Elon Musk sends out homophobic, transphobic, racist and disinformation actors of all kinds 'bat-signals' that encourage them to flock to Twitter" (Perry, 2023). The consequences of continuing down this path are horrific: perpetrators go even further because they face no legal liability, while victims' voices go unheard and they are forced to endure abuse that can lead to psychological damage and even radical behaviour such as suicide.

How, then, can hate speech on social media be regulated while preserving freedom of expression?

The European Court of Human Rights (ECtHR) has clearly stated that anti-gay speech is hate speech, not free speech, and that the law protects LGBTQ people and grants them equal rights (Barakou, n.d.). In the US, however, there is no legal definition of "hate speech", and hate speech is protected by the First Amendment; under current First Amendment jurisprudence, it can only be criminalised if it directly incites imminent criminal activity or contains a specific threat of violence against an individual or group (American Library Association, 2023). As a result, hate speech targeting LGBTQ people, racial minorities, women and others does not constitute a crime so long as it stops short of a specific threat of violence, which allows social media users to be all the more reckless. Since the focus of this article is Twitter, how does the platform itself regulate hate speech?

According to Twitter's hateful conduct policy, "You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease" (Twitter, 2023). The penalties for violating this policy are limiting the tweet's visibility, deleting the tweet or, in the worst case, suspending the account. Such penalties serve little purpose, as the user is perfectly free to register a new account and continue posting abusive comments.

Countries should follow the ECtHR in treating hate speech as illegal, regardless of the form it takes and whether or not it contains a specific threat of violence. When victims report hate speech, the relevant authorities should take measures proportional to the trauma suffered: verbal education or warnings in less serious cases, fines or prison sentences in serious ones. Governments should also work with social media platforms to issue clearer rules on hate speech, with sharper definitions and corresponding penalties, to highlight its dangers and seriousness. Platforms should adapt their rules to the circumstances of users in different regions and to local policies, and should offer psychological counselling services so that users hurt by abusive words can seek counselling and a psychological assessment, which can then inform the level of punishment.

Platforms should also strengthen both manual and automated moderation. AI systems should monitor hate speech rather than merely blocking keywords or deleting posts, since users can keep posting in more subtle terms; manual review should therefore be linked in to deal with sensitive topics and wording in a timely manner. Suspensions are not a permanent solution: in addition to legal action, IP addresses should be blocked so that banned users cannot simply re-register. Platforms should also run awareness and education campaigns at certain times of the year to highlight the dangers of hate speech, and offer rewards to users who take part so as to encourage wider participation.
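To illustrate why naive keyword blocking is insufficient and why it needs to feed a manual-review queue, the sketch below shows a minimal filter that normalises common character substitutions (so an obfuscated spelling like "gr00mer" still matches) before checking a blocklist. All names and the term list here are hypothetical; a real moderation system would use a maintained lexicon, a trained classifier and human reviewers rather than this illustrative list.

```python
import re

# Hypothetical blocklist for illustration only; real platforms maintain
# large, curated lexicons and pair them with ML classifiers.
BLOCKED_TERMS = {"groomer"}

# Undo common leetspeak substitutions so obfuscated spellings
# (e.g. "gr00mer") normalise to the blocked term.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo simple character substitutions, strip punctuation."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z\s]", "", text).strip()

def flag_for_review(post: str) -> bool:
    """Return True if the post should enter the manual-review queue."""
    words = normalize(post).split()
    return any(word in BLOCKED_TERMS for word in words)

# Flagged posts go to human moderators rather than being silently deleted.
review_queue = [p for p in ["have a nice day", "you gr00mer!"]
                if flag_for_review(p)]
```

The point of the design is that automated matching only triages: it catches obvious and lightly obfuscated slurs cheaply, while the ambiguous cases the regex cannot resolve are exactly the ones routed to human moderators.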

Freedom of speech & Hate speech

Freedom of speech is important, but it does not justify the unbridled use of words to hurt others, and large numbers of people post hate speech on platforms under the banner of free expression. Freedom does not mean the absence of boundaries or principles; in layman's terms, freedom does not license killing or arson, and it does not excuse all behaviour. Still, balancing the need to safeguard people's right to express themselves and advocate their ideas against the need to protect others from abuse as equal members of society is difficult indeed (Gorenc, 2022, p. 418). Platforms should publish clearer guidelines to help users distinguish speech that is acceptable and within the bounds of free expression from unacceptable hate speech. A public poll could be launched asking which words and expressions make respondents uncomfortable, and how uncomfortable, and guidelines regulating speech could then be drafted from the results.

In conclusion, hate speech is an issue every social platform should take seriously, and its psychological and physical impact on users should not be underestimated. Platforms should not abandon the regulation of hate speech for the sake of profit, as Twitter has with speech targeting LGBTQ people. At the same time, platforms should regulate hate speech more vigorously through a range of measures: improving policies, strengthening moderation, setting penalties and raising awareness. Internet users themselves should recognise the seriousness of hate speech and monitor themselves and those around them so that they do not become part of online violence.


References

American Psychological Association. (2023, February 8). Hate speech and hate crime. Advocacy, Legislation & Issues. Retrieved April 16, 2023.

AutisticJAC. (2023). "you learn to ignore it" [Tweet].

Barakou, S. (n.d.). ECtHR confirms hate speech against LGBTI people is not freedom of expression. LGBTI Equal Rights Association for Western Balkans and Turkey. Retrieved April 16, 2023.

Department of Education. (n.d.). Child sexual exploitation and grooming. Retrieved April 16, 2023.

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (p. 115). Polity.

Gorenc, N. (2022). Hate speech or free speech: An ethical dilemma? International Review of Sociology, 32(3), 413–425.

MacKenzie, A. (2023). "I have been called a groomer" [Tweet].

Montclair State University. (2022, November 29). Study: Use of 'groomer' hate speech increased on Twitter after Colorado Springs nightclub shooting. Montclair State University Press Room. Retrieved April 16, 2023.

Perry, S. (2023, April 1). Twitter makes millions from groomer slur after 'Elon Musk sends bat signal'. PinkNews. Retrieved April 16, 2023.

Silberling, A. (2022, December 13). Anti-LGBTQ slur takes off on Twitter after Elon Musk's takeover. TechCrunch. Retrieved April 16, 2023.

Ștefăniță, O., & Buf, D. (2021). Hate speech in social media and its effects on the LGBT community: A review of the current research. Romanian Journal of Communication and Public Relations, 23(1), 47–55. doi:10.21018/rjcpr.2021.1.322

Takiyon836. (2023). "I mean" [Tweet].

Twitter. (2023). Twitter's policy on hateful conduct. Twitter Help. Retrieved April 16, 2023.
