The double-edged sword of the digital age—Deepfake

In the digital age, Artificial Intelligence (AI), automation, algorithms, and data mining have become central to technological development, driving social progress and innovation. Algorithms are rules and processes established for activities such as computation, data processing, and automated reasoning: computational procedures in which user inputs interact with data sets to generate outputs. Well-designed algorithms evolve continually so that they can better predict the outputs users seek from the large data sets they work with (Fetzer, 1990). AI refers to systems that mimic human intelligence, imitating various human capabilities and drawing on external data (e.g., big data) to excel at a given task. By processing large amounts of data through algorithms, AI automates decision-making and task execution, increasing efficiency and accuracy (PK, 1984). Datafication, in turn, is the process of converting real-world information into a digital format so that it can be processed by computer systems. It is the convergence of these technologies that has not only spawned countless innovations but also created new challenges and risks, as exemplified by the emergence of deepfake technology.

What is Deepfake?

With the development of AI technology, especially deep learning, a technology called "deepfake" has emerged. In everyday entertainment, many of us are familiar with the "face swap" features in software applications, which let users swap their faces with those of others, such as the protagonist of a favorite movie or TV clip, for humorous effect. Another example is FaceApp's aging filter, which alters a photo to show what the user might look like decades from now (Kaushal, 2021). The term "deepfake" combines "deep learning" and "fake" (Chadha et al., 2021). Maras and Alexandrou describe it as the product of an artificial intelligence application that merges, combines, replaces, and superimposes images and video clips to create fake videos that appear real (Maras & Alexandrou, 2019). Anyone can replace or mask another person's face in an image or video. Deepfake technology can also alter the original voice (Chadha et al., 2021). For example, David Beckham collaborated with an AI company to produce a video in which deep learning was used to model his face and alter his mouth movements, making him appear to speak nine languages (Global News, 2019). Such applications not only demonstrate the great potential of AI but also trigger extensive discussion about personal privacy and information security.

Revolutionary application of deepfakes in the industry

Deepfake technology generates or manipulates video and audio content using AI algorithms and deep learning, and it is actively used in many industries. In film and television, for example, deepfake technology is used to digitally reconstruct actors. In Fast & Furious 7, it was used to complete the remaining scenes of the late actor Paul Walker. In China's film and television industry, when the lead actors of a drama are blacklisted over personal misconduct, the technology has been used to replace them with other actors so that the drama can still air in full.

Recently, researchers at the University of Bath conducted two experiments to test positive applications of deepfake technology in education: one on fitness training and one on public-speaking training. In both experiments, deepfake technology was used to paste each viewer's own face onto a fitness instructor or an experienced speaker. The results showed that when participants watched a training video featuring a deepfake of themselves, they exercised more efficiently, and the public-speaking trainees' confidence and self-perceptions improved markedly (University of Bath, 2023).

This research provides an empirical basis for the positive application of deepfake technology and highlights its innovative use in education. Educational videos are usually centered on third-party or generic characters, which may not adequately capture learners' attention and emotional engagement. Deepfake videos that use self-representation allow learners to better understand and emulate skills as they watch themselves perform complex tasks: seeing themselves succeed enhances engagement and personal connection, which in turn improves learning and motivation. Clark notes that although deepfake technology is often criticized for undesirable uses, these projects show that, when used to support and enhance people's performance, it can also bring positive value to people's lives (University of Bath, 2023).

Threats and crises behind technological advances

Technological advances amplify both the good and the bad in human nature; they can propel us forward or set us back. Deepfake technology is no exception: like any other technology, it has a light side and a dark side (Baccarella et al., 2018). It is controversial because of its negative effects. Its most dangerous aspect is that it applies deep learning to produce fake images and to create video and audio content that is almost indistinguishable from the real thing, making it difficult for humans to detect that the content has been manipulated (Chadha et al., 2021).

A major fraud case that occurred in Hong Kong in February 2024 used deepfake techniques to modify publicly available video and other imagery. An employee at a multinational company was defrauded of $25 million during a video conference call by fraudsters who used deepfake technology to impersonate the company's chief financial officer (CFO). According to the Hong Kong police, the elaborate scam involved a video call in which the other "employees" were also created with deepfake technology, convincing the worker that he was talking to real colleagues (Chen & Magramo, 2024).

As this case demonstrates, deepfake technology can be used for nefarious activities such as identity forgery and evidence tampering, which not only cause financial losses but can also undermine the credibility of individuals and organizations and even threaten national security. False but realistic images, videos, or audio recordings make it extremely difficult to distinguish authentic content from forgeries. Deepfakes can attribute statements or actions to a real person without that person's consent, an act that inherently violates the individual's moral rights and dignity. A person's face and voice can be copied and altered without permission and used to create inaccurate content, a serious violation of the rights to privacy and likeness.

A worrying trend

According to the latest data from the Hong Kong Police Force, from January to November 2023 technology crimes in Hong Kong surged to nearly 32,000 cases, accounting for 40% of all crimes, up 53% year-on-year, and involved more than HK$5 billion, up 72% year-on-year. The single largest case was an email fraud involving HK$78.2 million. Internet investment fraud rose sharply to 4,703 cases, 1.8 times the figure for the same period of the previous year, accounting for 15% of all technology crimes and involving HK$3 billion (Chen, 2022). Meanwhile, one study suggests that between 2022 and 2023 the share of fraud cases involving deepfakes rose from 0.2% to 2.6% in the U.S. and from 0.1% to 4.6% in Canada (Incode, 2024).

These figures reflect a dramatic increase in the prevalence of deepfake technology in fraud cases. With the advancement and popularization of technology, deepfake technology has become easier to access and manipulate, enabling fraudsters to create false information, documents, videos, etc. more easily, which poses a serious challenge to global information security.

Technology Detection

Advanced detection techniques can be effective in identifying content that has been fabricated or manipulated with deepfake technology. Early deepfakes were detected by recognizing visual patterns (Chadha et al., 2021), such as unnatural flickering or color distortion in an image. Another approach to detecting deepfakes is based on physiological signals (Chadha et al., 2021). It exploits the subtle physiological behaviors that humans exhibit in their natural state, which are often missing or unnatural in deepfake-generated videos. The core of this approach is to analyze whether the people in a video display normal physiological signals, flagging anomalies such as incoherent facial expressions, abnormally low blink rates, and breathing patterns that are out of sync with the rhythm of speech. David Güera and Edward Delp investigated a method for detecting modified videos that combines two techniques: a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM). The CNN analyzes each frame of the video, and the LSTM detects temporal inconsistencies between frames. Their method was tested on 600 videos with over 97% accuracy (Chadha et al., 2021). Currently, most content platforms, such as YouTube, Facebook, and Twitter, use state-of-the-art technology to detect and remove unethical, illegal, or malicious content, and both governments and private platform operators have begun to develop deepfake detection technology (Meskys et al., 2020). While deepfake detection technology continues to advance, the field still faces significant challenges, and ongoing research is needed to keep detection capabilities competitive with the evolving technology.
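To make the physiological-signal idea concrete, here is a minimal sketch in plain Python. It assumes blink events have already been extracted by some upstream eye-landmark detector (the timestamps below are hypothetical stand-ins for that output) and simply flags clips whose blink rate falls well below the typical human range of roughly 15-20 blinks per minute; a real detector would combine many such signals rather than rely on one.

```python
# Toy illustration of physiological-signal screening for deepfakes.
# Assumption: blink timestamps (in seconds) come from a hypothetical
# upstream eye-landmark detector; they are hard-coded here for clarity.

def blink_rate_per_minute(blink_timestamps, clip_seconds):
    """Return the number of blinks per minute observed in the clip."""
    if clip_seconds <= 0:
        raise ValueError("clip length must be positive")
    return len(blink_timestamps) * 60.0 / clip_seconds

def looks_suspicious(blink_timestamps, clip_seconds, min_rate=8.0):
    # Humans typically blink about 15-20 times per minute; a markedly
    # lower rate is one weak signal that a face may be synthetic.
    return blink_rate_per_minute(blink_timestamps, clip_seconds) < min_rate

# A 60-second clip with only 2 detected blinks is flagged; a clip with
# 16 blinks in the same span is not.
print(looks_suspicious([12.0, 41.5], 60))                   # True
print(looks_suspicious([t * 3.75 for t in range(16)], 60))  # False
```

The threshold `min_rate` is an illustrative assumption, not a published cutoff; production systems fuse this kind of cue with frame-level and temporal models such as the CNN+LSTM approach described above.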

Legal regulation

While technology can be used to constrain technology, legal measures are also needed. Although some countries have begun to explore legal responses to the negative impacts of deepfake technology, in most countries the use of deepfake technology and the creation or use of deepfake images, audio, or video are not specifically regulated, for example by specialized AI legislation (Iris & Alec, 2024). The General Data Protection Regulation (GDPR), a broadly encompassing data protection law introduced in Europe in 2018, not only sets strict rules on how businesses may process personal data but also strengthens protection against potential privacy violations by giving individuals more control over their data. Its implementation shows how effectively law can respond to the challenges posed by technology, especially in the area of personal data privacy. Developing and refining laws that specifically target AI technologies along the lines of the GDPR framework, especially technologies such as deepfakes that affect public information security and individual privacy, would not only provide concrete regulatory guidance but also help prevent misuse of the technology and protect individuals and society from its negative impacts.


Artificial intelligence is a double-edged sword. In the current digital age, advances in AI have brought convenience to humanity, but they are also accompanied by risks and challenges that cannot be ignored. Deepfake technology has driven innovation in areas such as entertainment and education, but it has also been used to create misleading information, violate personal privacy, and even commit crimes, malicious behaviors that pose a serious threat to individuals and society. Existing technical detection and legal measures can, to an extent, address the threats posed by deepfake technology. However, given the rapid development of AI, governing future risks requires long-term thinking about how to regulate them, along with a more comprehensive legal framework and technology-oversight mechanisms, so that the technology can be used more effectively to drive innovation and improve productivity.


Baccarella, C. V., Wagner, T. F., Kietzmann, J. H., & McCarthy, I. P. (2018). Social media? It's serious! Understanding the dark side of social media. European Management Journal, 36(4), 431–438.

Chadha, A., Kumar, V., Kashyap, S., & Gupta, M. (2021). Deepfake: an overview. In Proceedings of Second International Conference on Computing, Communications, and Cyber-Security: IC4S 2020 (pp. 557-566). Springer Singapore.

Chen & Magramo. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’. CNN.

Chen. (2022, February 6). “Changing faces” posing as CFOs and swindling away ¥200 million! Details of Hong Kong’s biggest AI scam revealed. STCN.

Fetzer, J. H. (1990). What is artificial intelligence? (pp. 3–27). Springer Netherlands.

Global News. (2019, April 10). David Beckham ‘speaks’ nine languages in call to end malaria. [Video]. YouTube.

Incode. (2024, February 13). $25 million stolen using deepfakes in Hong Kong: Incode's passive liveness technology, shield against advanced fraud.

Iris & Alec. (2024, January 24). Rolling in the deepfakes: Generative AI, privacy and regulation. LexisNexis.

Kaushal. (2021, October 10). 8 best face swap apps for Android and iOS. TechWiser.

Maras, M. H., & Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. International Journal of Evidence & Proof, 23(3), 255–262.

Meskys, E., Kalpokiene, J., Jurcys, P., & Liaudanskas, A. (2020). Regulating deep fakes: Legal and ethical considerations. Journal of Intellectual Property Law & Practice, 15(1), 24–31.

PK, F. A. (1984). What is artificial intelligence? Success is no accident. It is hard work, perseverance, learning, studying, sacrifice and most of all, love of what you are doing or learning to do, 65.
