Artificial intelligence (AI) is currently one of the most discussed topics in society, and it is gradually reshaping people’s lives. AI applications are becoming more and more powerful and have already made significant contributions to human health, the world economy, and social security, helping us solve many of the world’s important problems. However, ensuring that AI maintains its role of “benefitting humanity” has become a challenge (Townsend, n.d.), and the technology has generated a great deal of negative ethical discourse, a representative example being AI Face Swap.
AI Face Swap is an emerging field of artificial intelligence that is largely based on a powerful AI technology, Deepfake, which uses image and audio processing together with machine learning algorithms to combine images into new, never-before-seen footage; the videos created this way are “fake videos” (Townsend, n.d.). Even though AI Face Swap has been widely used to create movie special effects, reconstruct crime scenes, and synthesize e-books, the “fake videos” created with Deepfake have raised ethical issues as their use has grown rapidly, because they provide a huge opportunity for unscrupulous people to cheat.
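Deepfake models are generally described as training autoencoders on two identities with a shared encoder, so that one person’s expression can be decoded as another person’s face. The sketch below is far simpler than that: a naive pixel blend, purely to illustrate the underlying idea of combining two images. All names and parameters here are hypothetical, not Deepfake’s actual API.

```python
def naive_face_swap(target, source, box, alpha=1.0):
    """Paste a rectangular 'face' region from source into target.

    Illustrative only: a real deepfake *generates* the swapped face
    with a trained autoencoder rather than copying pixels. Images are
    lists of rows of grayscale values; box is (top, left, height, width).
    """
    top, left, h, w = box
    out = [row[:] for row in target]  # deep copy so target is untouched
    for dy in range(h):
        for dx in range(w):
            y, x = top + dy, left + dx
            # Linear blend: alpha=1.0 copies the source pixel outright.
            out[y][x] = round(alpha * source[y][x] + (1 - alpha) * target[y][x])
    return out

# Toy 6x6 "images": a dark target and a bright source face.
target = [[0] * 6 for _ in range(6)]
source = [[200] * 6 for _ in range(6)]
swapped = naive_face_swap(target, source, box=(1, 1, 3, 3))
```

Even this toy version shows why the result can be seamless to a casual viewer: the output is an ordinary image with no trace of which pixels were replaced.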
Therefore, using Deepfake technology as an example, we will discuss the ethical issues raised by AI Face Swap, namely Deception, Privacy, and Accountability, and through critical examination suggest better options for its future development so that it can better serve people’s daily lives.
The Connection between AI Face Swap and Ethics
On March 5, 2019, James, the founder of Deepfakes (Parkin, 2019), uploaded a video to YouTube of a three-year-old scene from NBC’s Tonight Show in which host Jimmy Fallon and comedian Dion Flynn played Donald Trump and Barack Obama calling each other. James used Deepfakes to replace Fallon’s and Flynn’s faces directly with those of the real Trump and Obama and re-posted the clip on YouTube three years later (Parkin, 2019). The technique of seamlessly transferring one face onto another, unrelated face instantly shocked many observers and suggested that the technology could be used by unscrupulous people to humiliate politicians and even sway elections (Parkin, 2019). At the time, James said he did not post the video to fool anyone, but “purely for a laugh” (Parkin, 2019). However, AI Face Swap is far more powerful than James could have imagined.
After Deepfake emerged, misuse of AI face swapping surged. The first appearance of the term “Deepfakes” on the social news site Reddit (SONTAG, 2019) already had an extremely bad impact: an account called “deepfakes” used the technology to put the face of famous actress Scarlett Johansson into a pornographic video (SONTAG, 2019). More and more people then used Deepfake to create “fake videos” with similar content, which led Reddit to shut down the community and explicitly prohibit any user from distributing Deepfake “fake videos” on the platform (Zucconi, 2018). However, Deepfake’s algorithm had already been open-sourced on the code hosting platform GitHub (CCTV, 2019), so the number of users spreading similar “fake videos” on Reddit still reached 90,000.
The harm caused by these inappropriate videos is not limited to the female celebrities whose faces were swapped; it also reflects a serious moral corruption of the user culture. The relationship between AI Face Swap and ethics must therefore be one of coexistence. As Alan Zucconi (2018) argued, just as one must pass a test before driving, one should know what can and cannot be done before acting; before using Deepfake, users should know how to avoid the ethical issues it brings and minimize the risk.
Ethical Issues about AI Face Swap – Deepfake
Even though Deepfake is essentially a form of “visual deception”, it has still been successfully integrated into some aspects of our daily lives as one of the new areas of artificial intelligence.
What if you could choose your favorite actor to play the lead role in any movie you watch?
Shamook, a well-known deepfake creator (Foley, 2019), remade the trailer for Spider-Man: No Way Out, using Deepfake technology to replace Tom Holland’s face with that of the previous film’s protagonist, Beemer Quill. This seemingly absurd stunt was brilliantly executed and boosted the trailer’s viewership significantly (Foley, 2019).
In addition, Data Grid, a Japanese artificial intelligence company, has developed software that automatically generates virtual models to help brands promote their products (Townsend, n.d.). Brands use Deepfake to offer virtual try-ons, letting users see products on themselves before deciding to buy. This has improved customer shopping satisfaction and increased sales for merchants (Townsend, n.d.).
However, even with these advantages, it is difficult to avoid the ethical problems caused by Deepfake abuse. In January 2018, the AI face-swapping software FakeApp was officially launched (CCTV, 2019), and independent developer Alan Zucconi focused on the ethical requirements of Deepfake in his own FakeApp tutorial: the technology behind Deepfake is driven by machine-learned artificial intelligence algorithms, and the machine has no capacity to judge whether things are good or bad; it is inherently neutral, a tool that can be used for good or ill.
Hao Li (CCTV, 2019), co-founder of Pinscreen, also said that Deepfake can indeed be used to do very bad, immoral things, but that this is not its main purpose; it is meant to be a fun tool for entertainment (Zucconi, 2018). We should consider ethics when using Deepfake, effectively avoid ethical issues, and let AI face-swapping applications bring us more and better experiences.
As mentioned earlier, Deepfake is fundamentally a form of “visual deception”. The technology can easily be weaponized and can easily affect society as a whole (CCTV, 2019). People usually cannot distinguish true from false content at a technical level, and Deepfake itself offers no features to help them do so, which enhances its deceptive effect.
Pindrop, a cybersecurity company, conducted a survey in 2018, after the emergence of FakeApp (Stupp, 2019). It found that fraud cases increased by 350% between 2013 and 2017, with 1 in 638 fraud cases carried out using artificial-intelligence face- or voice-swapping technology (Stupp, 2019).
Deepfake allows any user to swap faces and voices at will, which greatly reduces the cost of crime for criminals. Synthesized “fake videos” can easily increase the incidence of social fraud, and people usually lack the technical ability to distinguish real from fake, especially vulnerable groups such as the elderly, infants, and children. Deepfake’s algorithmic pipeline includes no mechanism for content review: it allows users to upload any content and replace it with any other content. In most cases, people readily believe what they see with their own eyes; in the first moment of watching a clip, it does not occur to them that it is false content synthesized by artificial intelligence. If users abuse Deepfake and spread synthesized “fake content” arbitrarily, viewers are likely to forget that what they are watching is an AI-generated fabrication rather than reality.
While anyone can be a victim, the physical and mental health of the elderly, infants, and children, as well as their interests, are especially vulnerable if large numbers of Deepfake-synthesized “fake videos” are used as fraud tactics by unscrupulous individuals. Such videos also tend to erode people’s perception of reality, leading to less trust in videos and images in the future (CCTV, 2019).
The algorithms behind Deepfake do not collect identity information such as name, age, or home address, the kind of data whose leakage people readily notice. However, Deepfake can collect personal photos, videos, and audio without the consent or even the knowledge of the user; this is information closely related to the individual in a broad sense (LIU, 2020) and is the kind of personal information people are most likely to overlook.
The Deepfake algorithm itself includes no real-name authentication, privacy-risk warnings, or similar safeguards. Personal photos and videos that people post on public media platforms carry a potential privacy risk, because we do not know whether publicly posted content will be saved and fed into Deepfake by unscrupulous people. Anyone can be a victim of a Deepfake privacy breach, especially famous public figures.
As mentioned earlier, photos of actress Scarlett Johansson were saved by strangers and inserted into pornographic videos using Deepfake technology (SONTAG, 2019). This was a major privacy breach: by the time Scarlett learned about it, the video had already spread, and until then she had no knowledge of it at all.
Imagine how bad it would be to suddenly discover on the Internet one day that a photo of yourself had been inserted into an indecent video and made public.
As Deepfake is increasingly used for inappropriate purposes, there is a great need to regulate its algorithms and the software built on them. In 2019, the House introduced H.R. 3230, the DEEP FAKES Accountability Act (116th Congress, 2019), in an attempt to criminalize undisclosed synthetic media created with Deepfake technology. On a practical level, however, its provisions seem overly optimistic (Coldewey, 2019).
Introducing laws to regulate the technology is certainly a good way forward, but given the extent of Deepfake abuse, more practical methods of regulation are also needed, such as real-name authentication. If users fabricate fake videos of others for harmful purposes, real-name authentication would allow the situation to be contained quickly, since the creator could be traced as soon as the content was posted.
On the technical side, the Act would require any user posting a created video to embed an “unremovable” digital watermark (Coldewey, 2019); failing to do so would itself be a crime (Coldewey, 2019). AI face-swapping software makers should continue research along these lines so that Deepfake can be used responsibly and AI face swapping can be pushed, to the best of its ability, to benefit society and people.
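The idea of an embedded digital watermark can be illustrated, very loosely, by hiding bits in the least-significant bit of each pixel. The Python below is a hypothetical sketch, not Deepfake’s actual scheme or anything the Act prescribes; notably, an LSB mark like this is easily destroyed by re-encoding the video, one reason such requirements are considered hard to enforce in practice.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least-significant bit of each pixel.

    Hypothetical illustration: real provenance watermarks use far more
    robust schemes, since a plain LSB mark is stripped by any re-encode.
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b  # clear the lowest bit, then set it to b
    return out

def read_watermark(pixels, n):
    """Recover the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

frame = [128, 127, 64, 255, 10, 11]  # toy grayscale pixel values
mark = [1, 0, 1, 1]                  # e.g. a synthetic-media flag
stamped = embed_watermark(frame, mark)
```

Because each pixel value changes by at most 1, the mark is invisible to viewers, which is exactly the property a disclosure watermark needs; its fragility is the trade-off regulators would have to confront.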
All in all, AI face swapping is becoming more and more popular as a new field, and the Deepfake code behind it is open source and directly available to users. People can now use the technology very easily, and it brings a lot of convenience to daily life, such as helping brands promote their products to increase sales and helping films create special effects. At the same time, we should pay attention to the ethical issues AI face swapping raises: Deception, Privacy, and Accountability. First, we should guard against the fraud enabled by Deepfake; second, we should protect users’ personal privacy from infringement; and finally, we should keep regulating AI face swapping so that a technology meant to make life more convenient does not bring us more harm. The development of artificial intelligence is proceeding apace, but how to make it provide convenience and security in our daily lives while reducing the ethical problems it brings along is a question worth thinking about at this stage.
- Townsend, M. (n.d.). Beneficial artificial intelligence. Www.givingwhatwecan.org. https://www.givingwhatwecan.org/cause-areas/long-term-future/artificial-intelligence
- Parkin, S. (2019, June 22). The rise of the deepfake and the threat to democracy. The Guardian; The Guardian. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
- Flew, T. (2021). Regulating platforms. John Wiley & Sons.
- Loon, B. van. (2020). The Ethics of Digital Face-Swapping. Luc.edu; LUC. https://www.luc.edu/digitalethics/researchinitiatives/essays/archive/2019/theethicsofdigitalface-swapping/
- CCTV. (2019, September 9). “Face-swapping” truth “Deep Forgery” online orgies and security threats. Tech.sina.com.cn. https://tech.sina.com.cn/i/2019-09-09/doc-iicezzrq4476750.shtml
- SONTAG, S. (2019, August 7). What is a deepfake? The Economist. https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake
- Zucconi, A. (2018, March 14). The Ethics of DeepFakes. Alan Zucconi. https://www.alanzucconi.com/2018/03/14/the-ethics-of-deepfakes/
- Foley, J. (2019, November 15). 9 deepfake examples that terrified and amused the internet. Creative Bloq; Creative Bloq. https://www.creativebloq.com/features/deepfake-examples
- Stupp, C. (2019, August 30). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. WSJ; Wall Street Journal. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
- LIU, C. Q. (2020, December 20). What does the Personal Information Act protect. Www.yicai.com. https://www.yicai.com/news/100883787.html
- 116th Congress. (2019, June 28). H.R. 3230 – DEEP FAKES Accountability Act. https://www.congress.gov/bill/116th-congress/house-bill/3230
- Coldewey, D. (2019, June 14). DEEPFAKES Accountability Act would impose unenforceable rules — but it’s a start. TechCrunch. https://techcrunch.com/2019/06/13/deepfakes-accountability-act-would-impose-unenforceable-rules-but-its-a-start/