How should we deal with the harm caused by deepfakes?

Introduction

In an era where digital media permeates every aspect of our lives, the emergence and use of artificial intelligence (AI) is undoubtedly a hot topic. “Deepfake” is a portmanteau of “deep learning” and “fake.” It refers to synthetic media in which a person in an existing image, video, or recording is replaced with someone else’s likeness (Coleman, 2023). At the core of the deepfake phenomenon is generative AI, a branch of AI that produces new outputs based on the data it was trained on. Unlike traditional AI systems, generative AI can creatively generate images, videos, and more. No technology is absolutely good or bad, so how should we deal with the range of impacts brought by deepfakes?

Although generative AI can be an effective tool to help people create, the misuse of deepfakes also highlights the enormous risks that constantly evolving AI technologies can pose to individuals and society. Therefore, we should use generative AI tools such as deepfakes critically, learn to recognize misinformation, and strengthen our awareness of prevention. Technology companies and communication platforms should take responsibility, and the government should strengthen governance by developing a comprehensive regulatory framework and ethical standards to create a secure digital environment, thereby safeguarding citizens’ privacy, security, and rights.

The advantages of deepfakes

It is undeniable that generative AI can be applied widely across multiple fields, bringing convenience to people. Deepfakes use AI and facial-mapping techniques to replace a face in an image or video with another person’s, and the technology has positive uses. In education, synthetic media can bring historical figures back to life in front of students, making the classroom more engaging and interactive and thereby enhancing students’ enjoyment of learning. It is also widely used in the entertainment industry; for example, Marvel Studios has used deepfake-style techniques to de-age characters in its films, and deepfake videos have long been popular on YouTube.

Concerns raised by deepfakes

From creative content generation to controversial deepfake technology, generative AI has many powerful capabilities. However, it is a double-edged sword: while enjoying its convenience and practicality, we must confront its negative impact. Its development has raised concerns about violations of personal privacy and social surveillance.

Generative AI can be used to create fake news or content that can inflame public opinion (Wach et al., 2023). Social media platforms are the main target of deepfakes, as misinformation and rumors spread more easily there and users often follow the crowd. Recently, deepfakes have received negative attention due to their abuse. On January 25, 2024, AI-generated explicit pornographic images of Taylor Swift circulated on X. The most prominent post reached 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before it was deleted, after roughly 17 hours on the platform (Weatherbed, 2024). However, as the incident spread, many of the images persisted and many copies appeared, seriously damaging her reputation and image and attracting widespread social attention. This incident is a typical example of a social issue caused by deepfakes, involving privacy, ethics, and government regulation.

Harms to women

The problems deepfake technology can cause affect not only public figures such as international superstars; ordinary citizens also cannot count on sufficient security in the AI era. This is not the first such case: women and girls around the world have faced similar abuse. A 14-year-old female high school student from New Jersey, USA, was a victim whose image was used to create nude photos. Similar incidents have never stopped happening; it is only because of the fame of celebrities such as Taylor Swift that the issue has entered the public eye. With the increasing accessibility of generative AI tools, more and more software can create deepfakes, so creators no longer need technical skills or strong financial resources to generate realistic synthetic images, videos, and audio of real people. When it comes to deepfake pornography, women are the primary target: according to DeepTrace, 96% of deepfake videos on the internet are non-consensual fake videos of women (Brandon, 2024). This figure is terrifying. Such content not only infringes on privacy but is also a form of digital violence that exacerbates the risks and challenges women face both online and offline. The non-consensual nature of deepfakes strips women of dignity and of autonomy over how their bodies are depicted, and places that power in the hands of perpetrators.

Deepfake technology is not neutral; it amplifies the biases and flaws in AI systems. There is already ample evidence that AI programs are biased against women: a well-known example is Amazon’s AI recruitment tool, which favored male candidates over female ones. Biased data leads directly to biased algorithms and outputs. Deepfakes exploit these existing gaps in AI, amplifying gender bias in dangerous ways through the models they build and the flawed training data they use (Samani, 2024). This has led to more extreme distortions and abusive depictions of women. Research shows that image-based abuse can cause significant harm to victims, including anxiety, depression, suicidal ideation, social isolation, and reputational damage (Brandon, 2024). We all have social media accounts and generate large amounts of data every day; it is frightening to realize that anyone could become a target of a deepfake.

Privacy issues

Driven by deepfake technology, the misuse of personal images and videos poses a significant threat to citizens’ privacy, and privacy leakage has become one of our most pressing concerns. We do not know where, when, or for what purpose our images might be used. Moreover, deepfakes can be manipulated for personal and financial gain, such as impersonating individuals to commit financial fraud or other illegal activities. According to Hong Kong police, a finance officer at a multinational company was scammed during a video conference call into paying $25 million to fraudsters who used deepfake technology to impersonate the company’s CFO (Chen & Magramo, 2024). Most reported examples of deepfakes paint a pessimistic picture, showing how people currently use them to deceive and potentially exploit others (Kietzmann et al., 2020).

Ethics and morality

The use of AI-generated imagery raises significant ethical issues around the ownership and control of one’s likeness. This new synthesis technique generates images that closely resemble real people without their knowledge or consent. The moral impact on personal image and the loss of control over portrait rights have caused widespread concern. All publicly accessible digital material can be used to build training datasets for AI models (Crawford, 2021), and because algorithms are not neutral, this reinforces existing social biases and leads to unequal and discriminatory outcomes. In addition, the reputational harm caused by deepfakes is enormous. Taking Taylor Swift as an example, disseminating false material can damage the influence, credibility, and authority of well-known and authoritative women. More worrying still, beyond spreading false information, constant exposure to fakes can lead people to distrust genuine information and dismiss it as false (Westerlund, 2019). This erodes citizens’ trust in digital media and harms everyone involved.

Platform responsibility

Generative AI is a crucial driver of future profitability and growth for media platforms. Companies such as Facebook and YouTube are not just distribution channels; they should also be held accountable for the content posted on their sites. Social media platforms played an essential role in the Taylor Swift incident. As the main channel for information dissemination, social media is responsible for strictly reviewing and managing the content posted on its platforms, and the wide circulation of the explicit images of Swift shows that there are loopholes in platform content review. Therefore, social media platforms must strengthen their content-review mechanisms, build sufficient capability and technology to identify and filter fake content, and genuinely take on the responsibility of protecting user privacy and information security. In addition, researchers must prioritize ethical considerations and develop systems that put user privacy and security first (Wach et al., 2023). Strict data-protection measures, clear rules on collecting and using personal information, and regular evaluation and updating of security systems are absolutely necessary.

Government regulation

Spreading false information is easy, but correcting, limiting, and regulating the use of deepfakes is a challenging task. The more capable generative AI becomes, the more risks and concerns it poses. Regulation and legislation are both effective means of combating deepfakes, and increasingly complex AI technology requires new legal and regulatory frameworks (Westerlund, 2019). There is no federal law in the US regulating deepfakes, but some states have begun enacting deepfake legislation. For example, California’s deepfake law, which came into effect in 2019, not only criminalizes non-consensual deepfake pornography but also grants victims the right to sue those who use their likeness to create such images (Owen, 2024). Australia, by contrast, has not yet implemented specific regulations on the use of deepfake technology or the creation and use of deepfake images, audio, and video. Bringing AI under the law, whether through accountability systems or strict rules on conditions of use and privacy protection, is an effective regulatory measure for any country and government.

Personal awareness

We need to understand that, with the rapid development of technology, any photos we post on social media are at risk of being deepfaked. Deepfakes are hard to detect because they build on real footage and real voices, and more and more fake photos are being created and rapidly spread online. Therefore, beyond platform and government action, we must also consciously protect our personal privacy and raise our own awareness of prevention. When a website presents a request we have doubts about, we should consider it carefully rather than simply agreeing: it is better to spend some time verifying security than to become a victim. We also need to educate older people and those unfamiliar with emerging technologies, since middle-aged and elderly people have always been the primary targets of fraud; using deepfaked portraits, videos, and voices of friends or family to commit fraud is already common today. We must learn to critically evaluate the authenticity and credibility of images and videos on the internet and minimize the harmful effects of false information.

Conclusion

As our lives move ever further into digital space, emerging technologies and their applications may significantly affect privacy and governance. The growing prominence of deepfakes has triggered discussions about authenticity on the internet and the boundary between fact and fiction. It is undeniable that deepfakes offer opportunities for positive change in areas such as education and entertainment. However, as the technology spreads, generative AI such as deepfakes brings increasing risks. In the future, this technology will inevitably be used more and more for damaging purposes, such as revenge pornography, bullying, fake video evidence in court, extortion, political sabotage, terrorist propaganda, and fake news (Westerlund, 2019). The harm caused by deepfakes will not diminish on its own. Platforms must take responsibility, strictly review published content, and develop systems that protect user privacy. Governments need to develop clear regulatory approaches; strict regulations will help constrain the use and dissemination of AI capabilities such as deepfakes and effectively govern this growing malicious use. As users, we need to be able to identify misinformation and manipulative behavior in order to mitigate the harm caused by generative AI. The development of deepfake technology poses significant challenges, and in the future citizens, organizations, and governments will need to work together to ensure individuals’ dignity, rights, and security in the digital age while making use of AI technology.

References

Brandon, A. (2024, February 1). Taylor Swift deepfakes: New technologies have long been weaponised against women. The solution involves us all. The Conversation. https://theconversation.com/taylor-swift-deepfakes-new-technologies-have-long-been-weaponised-against-women-the-solution-involves-us-all-222268

Chen, H., & Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’. CNN. https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

Coleman, K. (2023, October 21). How deepfakes are impacting culture, privacy, and reputation. Status Labs. https://statuslabs.com/blog/what-is-a-deepfake

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (1st ed.). Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t

Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. https://doi.org/10.1016/j.bushor.2019.11.006

Owen, A. (2024, February 2). Deepfake laws: Is AI outpacing legislation? Onfido. https://onfido.com/blog/deepfake-law/

Samani, A. (2024, February 7). Gender equity in AI: Are deepfakes a distorted reflection of society’s bias? https://aartisamani.com/gender-equity-ai-deepfake-challenge/

Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30. https://doi.org/10.15678/EBER.2023.110201

Weatherbed, J. (2024, January 26). Trolls have flooded X with graphic Taylor Swift AI fakes. The Verge. https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending

Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11), 39–52. https://doi.org/10.22215/timreview/1282

