Truth or Trickery: Deepfake is threatening your privacy and digital security

Deepfake is all around you

Are you afraid that someday, while you’re lying on the couch swiping through videos as usual, you’ll find your face pasted onto the body of a porn star? This fear is not unwarranted. In today’s digital age, where technology advances at an astonishing rate, a new phenomenon has emerged that blurs the line between reality and fiction: the Deepfake. Deepfakes are a product of advances in artificial intelligence: with just a single portrait photo, the technology can mimic a person’s facial features, expressions, mannerisms, and voice, and then generate a highly realistic face-swapped video (Westerlund, 2019). Deepfake technology can swap the faces of different people without the consent of the person whose image and voice are used, making that person appear to say and do whatever the creator wants (Nas & de Kleijn, 2024).

The term “Deepfake” can be traced back to 2017, when a Reddit user named “deepfakes” shared videos made with an algorithm that swapped the faces of celebrities onto the bodies of porn actresses, including Wonder Woman’s Gal Gadot, Scarlett Johansson, Maisie Williams, and many others, making “Deepfake” synonymous with “AI face swap” (Cole, 2018). The spread of Deepfake videos on social media platforms was phenomenal, and within a few months the newly formed Deepfake enthusiast community had 90,000 members (Westerlund, 2019). In 2019 the technology was still out of reach for the general public, but today you can find countless detailed tutorials on AI face-swapping on YouTube.

There are multiple commercial applications for Deepfake technology, such as generating post-production special effects for movies and dubbing films into other languages while preserving the original actor’s voice. However, the proliferation of this technology has raised serious concerns about privacy, digital security, and misinformation. Deepfake productions can be made for malicious purposes, including sexual harassment, dissemination of false information, and extortion. These AI-generated videos, images, and recordings are easy to produce and highly realistic, making them difficult to detect and often leaving viewers questioning what is real and what is not.

Concerns About Deepfake

A porn actor could be anyone

According to a 2019 report by Deeptrace Labs, a Deepfake detection software company, 96% of all Deepfake videos found online were pornographic. Notably, 99% of these videos targeted women; almost none featured a male subject. The most frequent targets were actresses from the United States or the United Kingdom, followed by South Korean K-pop singers (Wang & Kim, 2022).

In January 2024, pornographic Deepfake content depicting Taylor Swift went viral on several social media platforms, attracting particular attention on X (formerly Twitter). As the incident escalated, X temporarily blocked all searches for Taylor Swift: typing the pop superstar’s name into the search bar returned only the line, “Something went wrong. Try reloading.” Despite the swift response, the image that sparked the uproar had been viewed 47 million times before it was deleted (Brandon, 2024).

Imagine a scenario where AI places the portrait of a popular actress into a pornographic scene, making it appear as if she is engaging in explicit sexual behavior. Viewers may assume that the actress voluntarily participated, when in fact she was completely unaware of and uninvolved in creating the content. Not only does this damage the actress’s reputation, it also violates her autonomy and her right to control her own image. The Deepfake controversy centers on consent: the technology’s ability to superimpose one person’s likeness onto another’s body, or to clone their voice, without their knowledge or permission poses a serious threat to personal privacy.

Concerns about this kind of pornography don’t only involve pop stars, but any ordinary person – a crush at work, a friend from high school, or even a stranger. Given advances in artificial intelligence, anyone can be a target. Creating a fake picture or video requires only enough images of the intended person, which is rarely a problem these days thanks to social media: anyone who has shared a single selfie can unknowingly become the subject of an AI-generated sex video. For as little as $65, it is even possible to commission a five-minute sex video of anyone from Deepfake creators (Tenbarge, 2023). The perpetrators may be our friends, acquaintances, coworkers, classmates – the ordinary men and boys around us. For example, students at a high school in New Jersey were found using artificial intelligence to turn ordinary, everyday photos of female classmates from social media into nudes and sharing them in school group chats (McNicholas, 2023).

Deepfake pornography can have a devastating impact on the victims involved. In addition to the initial shock and embarrassment of having one’s likeness used in this way, individuals may face long-term psychological damage, including anxiety, depression, and post-traumatic stress disorder. The ubiquity of the Internet means that once Deepfake porn videos are created and shared online, it is nearly impossible to completely remove them, leaving victims vulnerable to ongoing harassment and exploitation.

Is what I’m seeing real?

The proliferation of AI-generated Deepfake content also fuels disinformation. By manipulating audiovisual evidence, bad actors can create confusion and muddy the waters with falsehoods. While Deepfake has so far been used primarily to produce pornographic videos, in the future it may increasingly be used in more serious, high-impact areas such as financial fraud, kidnapping for ransom, political sabotage, and fake news (Westerlund, 2019). In early 2024, a finance employee at a multinational company in Hong Kong was scammed out of $25 million during a video conference call in which the perpetrators used Deepfake technology to impersonate the CFO of the company’s UK headquarters. It is one of the first known fraud cases built on this technology, and the Hong Kong police have made its details public. In this carefully planned scam, the employee was invited to a multi-party video conference; he believed the other participants were fellow staff members, but in reality every one of them was a Deepfake-generated image (Chen & Magramo, 2024).

As remote communication becomes more prevalent in the digital age, the risk of impersonation and fraud in video conferencing poses a significant challenge to businesses and individuals alike. Deepfake technology enables fraudsters to commit convincing identity theft. In the Hong Kong case, the finance officer was tricked into believing he was interacting with real company representatives, when in fact they were images generated by Deepfake technology – highlighting how easily Deepfakes can manipulate perception and deceive unsuspecting victims. Without robust security measures and authentication protocols, video conferencing remains vulnerable to manipulation and exploitation by malicious actors, posing a significant threat to citizens’ digital security.

Deepfake videos have also been used to manipulate political narratives and influence public opinion. Imagine, for example, a Deepfake video of a political candidate making inflammatory or controversial statements. Even if the video is completely fabricated, it could still be widely distributed and trusted by many on social media platforms, thereby manipulating public opinion and undermining the democratic process. In a 2018 Deepfake video, then-U.S. President Donald Trump appears to offer advice on climate change to the people of Belgium. The video fueled online outrage over American interference in Belgium’s climate policy, but Trump never said those words; the video was fabricated by a Belgian political party to press the Belgian government into more urgent climate action (Schwartz, 2018).

Amplified by the rapid spread of information on social media platforms, Deepfake technology has become a powerful tool for people with ulterior motives. What may be most damaging about these videos is not the false information itself, but the way Deepfakes undermine journalistic integrity and public trust in institutions. If even videos featuring both a person’s voice and likeness can be faked, what is left to trust? This seed of skepticism poses a new kind of threat to cybersecurity and digital rights.

What can be done to combat Deepfake?

Anti-Deepfake Technology

First, technology must play a key role in detecting and mitigating Deepfakes. While detection methods are still maturing, AI can help distinguish real videos from fake ones. Deepfake-generated videos show subtle differences from real footage: sudden jerky movements, overly smooth skin, unnatural shadows, missing hair detail (current face-swapping technology struggles to reproduce realistic hair texture), inconsistent clarity, and so on. These artifacts are the key signals that anti-Deepfake techniques use to judge authenticity (Westerlund, 2019). Researchers and developers are working tirelessly on advanced algorithms and tools that can recognize and validate digital content, helping to differentiate real videos from AI-generated fakes.
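To make one of these cues concrete, here is a toy sketch of how the “inconsistent clarity” artifact could be measured. It is purely illustrative – real detectors are trained neural classifiers, not hand-written heuristics – and the frame data, thresholds, and function names below are all assumptions for the demo:

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: a crude per-frame clarity score."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return lap.var()

def clarity_inconsistency(frames):
    """Coefficient of variation of per-frame sharpness.

    A high value means clarity jumps around between frames, which is
    one of the artifacts associated with face-swapped video.
    """
    scores = np.array([sharpness(f) for f in frames])
    return scores.std() / (scores.mean() + 1e-9)

# Synthetic demo: a "real" clip with stable detail vs. a "tampered" clip
# in which every other frame is smoothed (simulating frame-wise blending).
rng = np.random.default_rng(0)
real = [rng.random((64, 64)) for _ in range(10)]
tampered = []
for i in range(10):
    f = rng.random((64, 64))
    if i % 2:  # blur alternate frames with a crude box filter
        f = (f + np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 5.0
    tampered.append(f)

print(clarity_inconsistency(real) < clarity_inconsistency(tampered))  # True
```

The tampered clip alternates between sharp and smoothed frames, so its sharpness scores fluctuate far more than the stable clip’s – the kind of statistical signal a production detector would learn rather than hard-code.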

Statutes and regulations for Deepfake

But technology alone cannot solve Deepfake’s woes. As Deepfake technology continues to evolve, our legal and regulatory frameworks must adapt to hold perpetrators accountable and protect individuals’ digital rights. Governments and policymakers must work together to enact legislation that prohibits the creation and distribution of Deepfake content without consent and imposes severe penalties on those who violate these laws. In Australia, most states and territories criminalize taking, sharing, or threatening to share private photographs of others without their consent, including digitally altered images such as Deepfakes (“The Laws in Australia,” 2024).

In addition, social media platforms and tech companies have a responsibility to implement strong policies and safeguards to prevent Deepfakes from spreading on their platforms, including AI-driven content review and fact-checking mechanisms. Suzor notes that platforms are “trying to find rules that align with what their users expect” (Suzor, 2019). In 2020, Twitter (now X) banned users from sharing synthetic media that could deceive, confuse, or harm people, while TikTok updated its policies to prohibit Deepfakes, expand fact-checking, and flag election misinformation (Perez, 2020; Twitter, 2020).

Personal Awareness

Perhaps the most powerful weapon in the fight against Deepfakes, however, is personal awareness. By educating ourselves about the dangers of Deepfake technology and looking critically at the information we encounter, we can become more resistant to manipulation and deception. Instead of blindly believing what you see or hear online, try to assess the reliability of the source: does it appear genuine? Is the originating account verified? If something feels suspicious, a myriad of resources can help you navigate the murky waters of fake news and misinformation – cross-check the claim against multiple corroborating sources, or run a reverse image search to see whether a similar original image exists. Personal awareness and judgment are the last line of defense in protecting our privacy and digital rights.


The rise of Deepfakes presents new ethical, legal, and technological challenges. As we navigate these deep waters, let us keep our eyes open and remain vigilant. When our privacy and digital security are at stake, vigilance and determination can still ensure that the digital age remains a force for good, not a tool for deception and manipulation.


Brandon, A. (2024, February 1). Taylor Swift Deepfakes: New technologies have long been weaponised against women. the solution involves us all. The Conversation. 

Chen, H., & Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with Deepfake “chief financial officer.” CNN. 

Cole, S. (2018, January 24). We are truly fucked: Everyone is making AI-generated fake porn now. VICE.

The laws in Australia. (2024, February 10). The Image-Based Abuse Project.

Maras, M.-H., & Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof, 23(3), 255–262.

McNicholas, T. (2023, November). New Jersey high school students accused of making AI-generated pornographic images of classmates. CBS News.

Nas, E., & de Kleijn, R. (2024). Conspiracy thinking and social media use are associated with ability to detect deepfakes. Telematics and Informatics, 87, 102093.

Perez, S. (2020, August 10). TikTok updates policies to ban deepfakes, expand fact-checks and flag election misinfo. TechCrunch.

Schwartz, O. (2018, November 12). You thought fake news was bad? Deep fakes are where truth goes to die. The Guardian.

Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our lives (pp. 10–24). Cambridge University Press.

Tenbarge, K. (2023, March). Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy. NBC News.

Twitter. (2020). Our synthetic and manipulated media policy. X Help Center.

Wang, S., & Kim, S. (2022). Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Computers in Human Behavior, 134, 107305.

Westerlund, M. (2019). The emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11), 39–52. 

