Safeguarding Reality: Deepfakes and the Imperative for Stronger Digital Governance

As artificial intelligence and automation technologies continue to advance, we have stepped into a digital era in which generative artificial intelligence (GAI) is prevalent. In this era, it seems that nothing is beyond subversion: even the maxims “seeing is believing” and “pictures don’t lie” that people have long trusted have gradually become questionable. Notably, as the barrier to using generative tools falls, more and more videos and images produced with deepfake techniques appear on major social media platforms, and new application scenarios are constantly being developed. Deepfake technology allows the creation of surreal but fraudulent material whose defects cannot easily be caught by the human eye and which looks very real (Sharma & Kaur, 2021). Unfortunately, this major advance in artificial intelligence is being used to create unauthorised digital replicas of real people, including spoofed audio, video, and pictures; doing so is becoming ever easier, while telling the real from the fake is becoming ever harder.

The negative impacts of deepfakes exemplify the broader ethical and social dilemmas brought about by the rapid deployment of generative AI (Singh, 2024), as the technology is widely used in illegal scenarios such as pornography and internet fraud (Sharma & Kaur, 2021). Just in January this year, a large number of AI-generated fake pornographic and violent photos of the famous singer Taylor Swift went viral on multiple social platforms, drawing millions of views, shocking social media, and attracting the attention of the White House (Simon, 2024). In fact, this was not the first time Taylor had become a victim of AI deepfakes. Not long before this incident, a fake Taylor Swift promotional advertisement generated by artificial intelligence circulated widely on Facebook. In the advertisement, “Taylor Swift” sits in front of a piano, promoting the French high-end cookware brand Le Creuset and stating that she will give fans a set of cookware for free. In the deepfake clip using her likeness, the AI-generated voice says: “Hey y’all, it’s Taylor Swift here. Due to a packaging error, we can’t sell 3,000 Le Creuset cookware sets. So, I’m giving them away to my loyal fans for free. If you’re seeing this ad, you can get a free cookware set today – but just a heads-up, there are a few rules” (Haynes, 2024). Victims directed to the bogus websites are asked to fill in their personal details and pay a shipping fee of $9.96 for the “free” cookware. The New York Times reported that the supposedly free cookware sets are never actually delivered, and that those who provide bank card details are subsequently charged repeatedly (Haynes, 2024).

AI-generated video of Taylor Swift promoting Le Creuset cookware.

The Taylor Swift in the video advertisement is not real. Instead, she was composited using AI-based “deepfake” technology, which combined her voice and appearance with Le Creuset advertising clips. Fraudsters use machine learning software to create such synthetic content, drawing on real video and audio clips of public figures, which are ubiquitous online and easily accessible (Cerullo, 2024). Haynes (2024) states that this AI-generated scam advertisement was created as a deepfake using several clips from Taylor Swift’s 2016 video interview with Vogue, along with various images of Le Creuset products. Furthermore, even though Taylor Swift may genuinely be a fan of Le Creuset, the brand having once appeared in a scene of a documentary she was filming, the company quickly responded that she has no official marketing relationship with it, that all product promotions come through its official channels, and that it apologises to cookware fans who were deceived by the deepfake (Cerullo, 2024). It is particularly noteworthy how misleading deepfake video advertisements can be once they infiltrate Facebook and other social media platforms. The increasing sophistication of deepfakes and the growing potential for misinformation to spread through social media have undermined citizens’ trust in online media content (Jones, 2020). Today’s advanced internet has made social media a breeding ground for disinformation: with just a sample of someone’s voice or a single image, criminals can fabricate material that is hard to distinguish from the real thing, making everyone a potential victim of malicious AI operations.

The production of deepfake videos is inseparable from AI technology, and AI technology in turn rests on machine learning and deep learning, which allow computers to learn from real data before generating statistically similar fake instances, imitating voices and images and thus creating alternative realities (Sharma & Kaur, 2021). Deepfakes are synthetic media generated with artificial intelligence and deep learning, encompassing the tampering, forgery, and automatic generation of images, videos, and audio, which cybercriminals digitally manipulate to produce misleading content. The terrifying aspect of deepfakes is that they can depict people saying or doing things they have never done, and they can spread false information that appears to come from trustworthy sources, with highly realistic effects that are difficult to discern. Moreover, because the video, audio, and image material of public figures is openly available, it provides abundant material for AI training; as a result, well-known figures are frequent victims of deepfakes. Crawford (2021) argues that AI is neither artificial nor intelligent: without large-scale datasets, predefined rules, and intensive computational training, AI systems cannot discern or generate anything autonomously and rationally.

In technical terms, deepfakes create and refine fake content using two algorithms: a generator and a discriminator. The generator creates the initial fake digital content from the desired output specification and a training dataset, while the discriminator analyses whether that initial version is authentic (Barney, 2024). Repeating this process many times lets the generator continually improve and create highly realistic fake content, while the discriminator likewise becomes more proficient at identifying the generator’s flaws so they can be corrected. The implementation of deepfakes relies primarily on deep neural networks, specifically generative adversarial networks (GANs), which pair the generator and discriminator algorithms (Barney, 2024). GANs use deep learning to identify patterns in real images and then use those patterns to create fake images or videos. In other words, training on a large amount of facial data allows the model to learn the latent relationships between facial features and to generate fake features highly comparable to real ones. When creating a deepfake photo, the GAN system views the target’s photos from multiple angles to capture every detail and perspective; similarly, when creating a deepfake video, it analyses the subject’s behaviour, movement, and speech patterns from all angles (Barney, 2024). To fine-tune the realism of the final image or video, all of this information must be run through the discriminator multiple times.
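The adversarial loop described above can be illustrated with a deliberately tiny, self-contained sketch. To be clear, this is not a deepfake system and is not drawn from the cited sources: the “generator” here is a single learnable offset that tries to make noise samples look like data from a target distribution, and the “discriminator” is a one-feature logistic classifier; all names, parameters, and the toy data are illustrative assumptions. What it does share with a real GAN is the alternating update scheme in which each side is trained against the other.

```python
import math
import random

random.seed(42)

def sigmoid(a):
    """Logistic function: maps a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

# Toy "real data": samples from a normal distribution centred on REAL_MEAN.
REAL_MEAN = 4.0

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

# Generator: a single learnable offset mu added to random noise (starts at 0).
mu = 0.0

def fake_sample():
    return mu + random.gauss(0.0, 1.0)

# Discriminator: logistic classifier D(x) = sigmoid(w * x + b).
w, b = 0.0, 0.0
LR, BATCH = 0.05, 8

for step in range(1000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    dw = db = 0.0
    for _ in range(BATCH):
        xr, xf = real_sample(), fake_sample()
        gr = 1.0 - sigmoid(w * xr + b)   # gradient of log D(real) w.r.t. the score
        gf = -sigmoid(w * xf + b)        # gradient of log(1 - D(fake)) w.r.t. the score
        dw += gr * xr + gf * xf
        db += gr + gf
    w += LR * dw / BATCH
    b += LR * db / BATCH

    # Generator update (non-saturating): push D(fake) toward 1.
    dmu = 0.0
    for _ in range(BATCH):
        xf = fake_sample()
        dmu += (1.0 - sigmoid(w * xf + b)) * w  # chain rule through x = mu + z
    mu += LR * dmu / BATCH

print(f"learned offset mu = {mu:.2f}, target mean = {REAL_MEAN}")
```

As the two sides co-adapt, the generator’s offset drifts toward the real mean; replacing the scalar offset with a deep convolutional network and the one-dimensional samples with face images is, conceptually, what separates this toy from the face-synthesis pipelines the essay describes.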

Admittedly, technology itself is neutral, and in the internet age, advanced AI and open data flows have brought huge dividends to society, making our lives more convenient and efficient. As Taylor Swift’s case shows, deepfakes have a serious dark side, but the technology also has the potential to be used for human benefit, for example in education and the arts. When addressing the potential harm of deepfake technology, then, it should be effectively regulated rather than its development and application banned outright. As technology advances, it can itself be used to combat deepfakes. The concept of co-evolution helps explain how media technologies and internet governance change together (Just & Latzer, 2019). Technology companies and research organisations should increase their research into deepfake technology and develop corresponding technical means to detect and prevent the dissemination of deepfake content, for instance AI-based deepfake detection algorithms that improve the accuracy of identifying false content. Algorithms are both a tool and a result of governance: with highly developed information-processing capabilities and the ability to adapt through trial-and-error learning, they are part of the process of co-evolution (Just & Latzer, 2019). Furthermore, algorithms on the internet can be viewed as governance mechanisms, tools used to exert power, capable of promoting interests at the individual as well as the public and collective level (Just & Latzer, 2019). Algorithmic choices on the internet influence not only what we think but also how we behave. At the same time, they shape the construction of individual consciousness and influence the culture, knowledge, norms, and values of society, namely the collective consciousness, thus further shaping the social order of modern societies (Just & Latzer, 2019). This makes algorithms a highly strategic element in governing the internet as an information society.

Considering the increasing role of artificial intelligence, especially generative software, in the media field, technical issues and algorithms can be seen as policy issues (Just & Latzer, 2019). Essentially, societal sensitivities create a need for standards governing the scope and use of AI technologies. Technological innovation is political in nature and combines and interacts with co-evolving political, legal, and economic markets to jointly establish a public-order framework (Just & Latzer, 2019). The government should therefore formulate laws and regulations that clarify the boundaries of and responsibilities for the use of deepfake technology, while strengthening supervision of the technology and cracking down on its use in illegal and criminal activities. In addition, Andrejevic (2019) points out that the media through which we express cognition and ideas are external and social, resisting our complete control, since they inevitably convey information different from our intentions; this rests on the fantasy that automated technology and media can transform stubborn human language into an actionable version of itself. It is also important to strengthen public education about deepfakes and improve people’s ability to identify false content, for example by providing education and training that raise technical literacy and heighten awareness of and vigilance toward deepfake technology.

In general, artificial intelligence is a double-edged sword: it brings convenience and efficiency, but also growing information-security threats. The deepfake is a product of the “deep learning” branch of artificial intelligence. It has many valuable applications, yet deepfakes are also put to sinister purposes, as increasingly sophisticated AI techniques are used to generate false information for fraud, and deepfake videos, audio, images, and generated text are abused by criminals. How to steer technology toward legitimate uses is a top priority for today’s society. The rapid development of artificial intelligence and the ease of use of deepfake tools pose serious challenges to the trust and security of online information. We need a comprehensive response strategy, including stronger laws and regulations, technical prevention, and greater public awareness, to deal with the possible negative impacts of deepfake technology. Only in this way can we ensure the healthy development of science and technology and protect personal privacy and social stability.

Reference List

Andrejevic, M. (2019). Automated culture. In Automated media (pp. 44–72). Routledge.

Barney, N. (2024). What is deepfake AI? TechTarget.

Cerullo, M. (2024, January 16). AI-generated ads using Taylor Swift’s likeness dupe fans with fake Le Creuset giveaway. CBS News.

Crawford, K. (2021). The Atlas of AI: Power, politics, and the planetary costs of Artificial Intelligence. Yale University Press.

Haynes, T. (2024, January 11). Watch: Taylor Swift used in deepfake Le Creuset ad. The Telegraph.

Jones, V. A. (2020). Artificial intelligence enabled deepfake technology: The emergence of a new threat. Utica College.

Just, N., & Latzer, M. (2019). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258.

Sharma, M., & Kaur, M. (2021). A review of deepfake technology: An emerging AI threat. Advances in Intelligent Systems and Computing, 605–619.

Simon, B. (2024, January 28). The dark side of AI: How Taylor Swift became a victim of deepfake pornography. LinkedIn.

Singh, S. (2024, February 8). And how marketers can battle deepfakes to protect their brands. LinkedIn.
