When Deepfake Strikes, Are We Accomplices?

There is growing concern around the world about the use of digital communication platforms to disseminate “disinformation, misinformation and ‘fake news’”, where “misinformation” refers to unintentionally disseminated falsehoods and “disinformation” refers to falsehoods spread deliberately for manipulative purposes (Flew, 2021). In recent years, Deepfake, an innovative tool based on artificial intelligence, has sparked change in the film and television industry. However, as the technology has evolved, its abuse has produced large volumes of disinformation, misinformation and ‘fake news’, raising concerns about the protection of individuals’ privacy and digital rights, and especially about sexual violations.

What is Deepfake?

The term Deepfake combines “deep learning” and “fake” (Wang & Kim, 2022). It is a synthetic media technology based on AI deep learning, trained on large amounts of real data to build a model capable of generating fake media content. At its core are GANs (Generative Adversarial Networks), which pit a generative network against a discriminative network. By analysing many authentic images, such as photos or videos of people, GANs learn to create realistic fakes. In short, these networks can be trained to produce highly realistic fake video and audio: the characters look and sound authentic, even though they never said or did those things in the real world.
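The adversarial push-and-pull behind GANs can be illustrated with a deliberately tiny sketch. This is not a real deepfake model: the “generator” and “discriminator” below are single-neuron functions, the “data” is a one-dimensional Gaussian rather than images, and the gradients are written out by hand; but the training dynamic, in which the generator learns to fool a discriminator that is simultaneously learning to catch it, is the same idea.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Generator g(z) = a*z + b, starts far from the real data.
# Discriminator d(x) = sigmoid(w*x + c), outputs P(x is real).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.005

for step in range(20000):
    x_real = random.gauss(4.0, 1.0)        # one sample of "real" data
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b                     # one generated sample

    # Discriminator update: push d(real) -> 1 and d(fake) -> 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator update (non-saturating loss): push d(fake) -> 1.
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * (d_fake - 1.0) * w * z
    b -= lr * (d_fake - 1.0) * w

# The generator's samples should now cluster near the real mean of 4.0.
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(gen_mean, 2))
```

The generator never sees the real data directly; it improves only through the discriminator’s gradient, which is what lets full-scale GANs produce faces they were never explicitly told how to draw.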

Legal and Illegal Applications and Challenges of Deepfake

Deepfake has shown its potential in the entertainment and creative fields. Among its legal applications, many special effects in the film industry are produced with Deepfake-style techniques. For example, face-replacement technology allowed Paul Walker to appear in Furious 7 after his death; similar methods are used to restore historical footage or recreate a deceased actor’s performance on screen.

However, Deepfake can also be abused in illegal applications, resulting in the creation of fake news, political manipulation, and personal defamation, and most worryingly, it is being used to produce pornographic content. The abuse of this technology not only threatens the privacy and reputation of individuals but also has the potential to cause confusion and misinformation at the societal level.

Deepfake poses difficulties at the technical and socioethical levels as well as at the level of digital rights and governance. At the technical level, as AI technology develops, it has become increasingly difficult to differentiate between real content and Deepfake-generated content. At the socioethical level, Deepfake challenges authenticity, trust, privacy, and security. At the digital rights and governance level, how to balance technological innovation against individual privacy protection, and how to ensure the authenticity and reliability of information in the public sphere, are urgent issues to be resolved.

Deepfake abuse cases – sexual violation

AI face-swapping is the most severe Deepfake problem, and it has caused a great deal of sexual violation. An estimated 96% of Deepfake video content is non-consensual pornography, and the number of victims may run into the millions (Fido & Harper, 2020). Some “face-swapping” apps can easily create fake nude photos of women and have become a new tool for bullying and harassment, with young women hit hardest (Wang & Kim, 2022). This practice can be read as a form of slut-shaming: retaliation that damages women’s professional reputations (Vitis & Naegler, 2021). Through technologies such as Deepfake, the internet has made such violations easier to commit, more insidious, and able to reach a wider audience (Powell et al., 2021).

As reported, in January 2024, a nude photo of Taylor Swift, generated by Deepfake, spread across the internet, attracting widespread attention and discussion. On social media platform X (formerly Twitter), delays in the platform’s online speech governance resulted in the photo being viewed 47 million times before it was removed.


In October 2023, Francesca Mani, a 14-year-old high school girl from New Jersey in the United States, was informed that students on Snapchat had widely circulated nude photographs of her and some of her female classmates. The photos were fake pornographic images generated by the artificial intelligence face-swapping app “ClothOff.” Francesca came forward in March 2024, calling for legislation to hold the producers legally accountable.

The sharing of victims’ pornographic images without their consent can easily have a severe impact on their mental health and create a sense of constant, existential threat (Henry et al., 2021). From Hollywood stars to teenage girls, AI ‘face-swapping’ is becoming a tool for sexual violence. With one-click generation and one-click distribution, it has never been easier to create fake pornographic images. It is not just celebrities who face this problem; almost everyone is at risk. Violence against women must be seen as a societal issue rather than an individual one (Vitis & Naegler, 2021). Thus, as women, and more importantly as people living in today’s technological environment, we must wake up to the fact that our indifference and negligence can become weapons against us, and we must take such cases seriously. When AI becomes an accomplice to sex crimes, how far are we from becoming victims?

Platform Issues and Suggestions in Cases

Social Media Platform-X

In the case of Taylor Swift, despite having a non-consensual nudity policy, the social platform X (formerly Twitter) faced significant enforcement challenges after Musk laid off around 80 per cent of the software engineers focused on “trust and safety” issues. This is a significant reason why the Deepfake-related pornography could not be removed in time after it went viral. At the time, X was no longer able to control the distribution of the fake Taylor Swift images and had to mitigate the incident by blocking all searches for Taylor Swift. Today, X has become one of the most “powerful and prominent” channels through which fake photos and videos go viral in a low-censorship environment.

X’s Policy Suggestion

Public concern may also stem from questions about the slowness of digital platform companies to act on hate speech against individuals (Flew, 2021). X has shown significant shortcomings in handling “trust and safety” issues around pornographic content and online abuse, particularly in identifying and dealing with Deepfake-related content. The platform needs to further strengthen its policies and technical capabilities in these areas to address the challenges posed by Deepfake more effectively. Currently, AI labelling of platform content relies solely on user feedback, so content that does not attract many views often carries no label or hint that “this content was generated by AI.”

Platforms can add AI-recognition technology. When users publish content, the review process should not only identify bot spam but also attach AI tags to the content, helping users judge AI-generated material and reducing violations. Social platforms should also strengthen governance of the online environment. In Australia, platforms are expected to take responsibility for what they communicate: in 2023, Australia entered the second phase of defamation reforms, covering the publication of defamatory material on “digital intermediaries” such as websites, social media platforms, and search engines (Tant, 2023).
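The publish-time tagging idea can be sketched as a small moderation step. Everything here is hypothetical: `detect_ai_probability` stands in for a real trained deepfake classifier, and the 0.7 threshold is an invented value that a production system would tune against labelled data.

```python
from dataclasses import dataclass, field

# Hypothetical threshold above which content is tagged as AI-generated.
AI_TAG_THRESHOLD = 0.7

@dataclass
class Post:
    author: str
    media_url: str
    labels: list = field(default_factory=list)

def detect_ai_probability(media_url: str) -> float:
    """Stand-in for a real deepfake detector (e.g. a trained CNN).
    A production system would download and analyse the media itself."""
    return 0.92  # simulated score for a synthetic image

def moderate_upload(post: Post) -> Post:
    """Run detection at publish time, not only after user reports."""
    score = detect_ai_probability(post.media_url)
    if score >= AI_TAG_THRESHOLD:
        post.labels.append("ai-generated")  # visible tag for viewers
    return post

post = moderate_upload(Post("user123", "https://example.com/img.png"))
print(post.labels)  # ['ai-generated']
```

The point of running this at upload rather than on report is exactly the gap the essay identifies: low-view content never accumulates enough user feedback to be labelled.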

Deepfake Production Platform

For platforms such as X, Facebook, Instagram, or Snapchat, action can be taken against the offending user, but this does not address the underlying problem: the production of the offending Deepfake itself.

In the case of Francesca Mani, the developers of “ClothOff,” the Deepfake-based platform used to create the images, made it clear that they hold the creator fully responsible. However, the website also states that no data will be saved, which increases users’ anonymity and the risk of abuse.

Deepfake Production Platform Issue and Suggestions

The existence of websites hosting such pornographic content depends partly on national policy. In China, for example, any website linked to pornography is banned. Countries without an explicit ban on pornographic content should strengthen their laws against this type of Deepfake platform. Technology itself is neutral, so technology developers should bear more responsibility. Deepfake programs should therefore embed hidden watermarks or blockchain-based provenance records in their outputs, to deter users who maliciously break the law and to reduce offending. When a violation occurs, investigators can then trace the content back to its originator and protect the victim’s legal rights.
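As a minimal illustration of the traceability idea, the sketch below hides a creator ID in the least significant bit of each pixel value. This is the simplest possible scheme and is easily destroyed by compression; real provenance systems use far more robust techniques (spread-spectrum watermarks or cryptographically signed metadata such as C2PA), and the pixel data here is an invented stand-in.

```python
def embed_watermark(pixels, creator_id, bits=32):
    """Write the creator ID, bit by bit, into the least significant
    bit of the first `bits` pixel values. Changes each pixel by at
    most 1, so the mark is invisible to the eye."""
    out = list(pixels)
    for i in range(bits):
        bit = (creator_id >> i) & 1
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, bits=32):
    """Read the hidden creator ID back out of the pixel LSBs."""
    creator_id = 0
    for i in range(bits):
        creator_id |= (pixels[i] & 1) << i
    return creator_id

image = [200] * 64                    # stand-in for grayscale pixel data
marked = embed_watermark(image, creator_id=123456)
print(extract_watermark(marked))      # 123456
```

An investigator holding the output can recover the ID without any cooperation from the uploader, which is precisely what platform-side takedown tools currently lack.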

In 2019, China adopted the Notice on Preventing and Counteracting Online Rumour Information, which requires non-authentic content produced using Deepfake to be prominently marked (Reuters, 2019). In the same year, the U.S. Congress introduced the DEEP FAKES Accountability Act, which aims to punish the use of Deepfake technology to produce and disseminate false audio and video (Congress, 2019). In December 2023, the EU’s Artificial Intelligence Act required Deepfake content to disclose its AI-generated sources (European Parliament, 2024). In 2024, the UK Online Safety Act introduced new offences criminalising the sharing of, or threatening to share, intimate images, including deepfakes (Lords Chamber, 2024).

Public awareness is important

While strengthening technical governance, it is also necessary to raise public awareness and vigilance about Deepfake risks. Deepfake-prevention education and the development of digital literacy are both significant in combating Deepfake abuse (Naffi, 2024). Victims of Deepfake pornography may be reluctant to report abuse for fear that their complaint will be perceived as trivial (Dahlstrom, 2024). There is a general lack of public awareness of the Deepfake technique and its potential for abuse, which increases people’s vulnerability to misinformation and disinformation and leaves them without the courage to confront the issue. We should combine education and media technology to strengthen awareness of Deepfake, and make those who show, share, or distribute such images aware that they are complicit in the “crime.” Networking in cyberspace can amplify voices, build solidarity, and mobilize a wider public; hashtags encourage active participation by audiences who share their experiences with others (Vitis & Naegler, 2021).

For example, the following strategies can be adopted:

1. Creation of multimedia educational content: combine video, audio, and interactive games to vividly explain Deepfake technology and identification methods.
2. Social media outreach: utilise the wide reach of social media to post easy-to-understand educational materials and increase public awareness of Deepfake.
3. Public lectures and seminars: collaborate with technology companies and educational institutions to organise lectures and seminars for the public, providing up-to-date information and a platform for discussion.
4. Relevant education in school curricula: incorporate knowledge of Deepfake into secondary school and university curricula to educate the younger generation.
5. Popularisation and application of technological tools: promote apps and tools that can assist in identifying Deepfake content and improve the public’s capacity for self-protection.


The series of problems caused by Deepfake technology ultimately reflects the conflict between technological development and social ethics. Faced with the loopholes of new technologies, the key question is how we regulate and guide them. Technological advances have magnified both the good and the evil in human nature, and some people abuse the technology for malicious attacks or profit. Technical governance and legal discipline alone cannot solve the problem at its root; it is even more necessary to strengthen moral education and awaken the goodness in human nature. In the era of information explosion, we should maintain independent thinking and make rational judgments based on a full view of the information available. In addition, the information people share on the internet, such as text, images, and videos, may become Deepfake’s “data feed,” so we must look at personal privacy and data rights in the AI era from a new perspective. Finally, we need to realize that when people abandon the pursuit of “truth,” any rumour or falsehood can pass as “true.” If we fail to guard the social order in this new wave of technology, we may become “victims” of the next incident at any time, or even “perpetrators” in the process of spreading false information.


Congress. (2019, June). H.R.3230 – DEEP FAKES Accountability Act. Congress.gov. https://www.congress.gov/bill/116th-congress/house-bill/3230

Dahlstrom, F. (2024, February 6). Deepfake pornography and the law. Go To Court. https://www.gotocourt.com.au/deepfake-pornography-and-the-law/

European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

Fido, D., & Harper, C. A. (2020). Non-consensual image-based sexual offending: Bridging legal and psychological perspectives (1st ed.). Springer International Publishing. https://doi.org/10.1007/978-3-030-59284-4

Flew, T. (2021). Regulating platforms. Polity.

Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2021). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge.

Lords Chamber. (2024, February 13). AI: “Nudify” apps – Hansard. UK Parliament. https://hansard.parliament.uk/lords/2024-02-13/debates/2795CA37-82A5-48CC-B2A7-FF075715DF02/AI%E2%80%9CNudify%E2%80%9DApps

Naffi, N. (2024, January 8). Deepfakes: How to empower youth to fight the threat of misinformation and disinformation. The Conversation. https://theconversation.com/deepfakes-how-to-empower-youth-to-fight-the-threat-of-misinformation-and-disinformation-221171

Powell, A., Flynn, A., & Sugiura, L. (2021). Gender, violence and technology: At a conceptual and empirical crossroad. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-83734-1_1

Reuters. (2019, November 29). China seeks to root out fake news and deepfakes with new online content rules. https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-anddeepfakes-with-new-online-content-rules-idUSKBN1Y30VU/

Tant, J. (2023, November 27). Moving on from Voller? Stage 2 defamation reforms on the horizon. HWL Ebsworth Lawyers. https://hwlebsworth.com.au/moving-on-from-voller-stage-2-defamation-reforms-on-the-horizon/

Vitis, L., & Naegler, L. (2021). Public responses to online resistance: Bringing power into confrontation. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-83734-1_34

Wang, S., & Kim, S. (2022). Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Computers in Human Behavior, 134, 107305. https://doi.org/10.1016/j.chb.2022.107305
