Introduction
A number of AI face-swapping and dress-up incidents have occurred in recent years, sparking widespread public concern. As AI technology grows more sophisticated and more widely adopted, the governance of Internet culture faces serious challenges. Transformative AI technologies such as Deepfake can not only simulate and reconstruct people and scenes, but also fabricate images, voices and videos. Deepfake has given rise to scene-based, immersive interactions that can distort public perception and pollute the online cultural environment, and can also lead to problems such as online violence and privacy leaks, damaging users' interests. There are, however, two sides to the coin: while the illegal use of new AI technologies such as Deepfake makes Internet governance more difficult, sensible governance of the technology and of the online environment can in turn help keep AI technologies viable for human use. This blog post uses Deepfake as a case study to examine Internet culture and Internet governance and to analyse the key issues that arise in the governance process.
What is Deepfake?
Deepfake refers to the application of artificial intelligence-based synthesis techniques (Filimowicz, 2022). Most deepfake activity on the web is currently based on DFL, the most commonly used algorithm for this purpose (Young, 2019). Deepfakes can generally be divided into four categories: reenactment, replacement, editing, and synthesis (Filimowicz, 2022). In simple terms, Deepfake allows users to synthesise images, sounds and videos using technology. AI face swapping and video synthesis are the two most common forms of Deepfake; it is also used for message tampering and voice synthesis (Filimowicz, 2022). A number of individuals and organisations are using Deepfake technology to commit cyber violence and financial fraud on the Internet.

Deepfake: Jordan Peele plays Obama again
Cyberbullying against women
Hao (2021) argues that deepfake pornography is destroying women's lives. Mort, a broadcaster in the United Kingdom, is one victim of the non-consensual distribution of deepfake pornography. She discovered that her likeness was being used in deepfake pornographic videos and that the account posting them was constantly being updated (Hao, 2021). Because Mort was regarded as an influencer, she also experienced cyberbullying: the fake composite images and videos of her were watched and commented on by many people, which she found very difficult to accept (Hao, 2021).
You may think that Deepfake is far removed from you, or even irrelevant to you. Yet statistics show that between 90 and 95 per cent of deepfake videos on the Internet are non-consensual pornography, and about 90 per cent of that pornography targets women (Hao, 2021). Deepfake's impact on online culture is also evident in the way synthetic pornography returns women, even those who appear to hold higher social status, to the position some men assign them: that of sexual objects. In the well-known "Xiaoyu deepfake" incident in Taiwan, Taiwan's president Tsai Ing-wen was among the protagonists of a synthetic pornographic video (Global Voices, 2021). Within a patriarchal social structure, the fact that individual women gain personal power or a platform to speak about gender power at other levels of society does not remove them from a position of subordination (Kalpokas & Kalpokiene, 2022). Under a patriarchal system, individual women, even if they achieve political power or social status, do not really change the structure of gender power; in the eyes of some men they remain sexual objects, still objects of domination (Kalpokas & Kalpokiene, 2022).
While cyber violence caused by Deepfake is indeed difficult to govern, countries have introduced policies and regulations targeting deepfake synthetic faces. The United States was the first country to legislate on deepfakes and has the largest number of bills directly related to them, with action taken at both the federal and state levels (U.S. Federal Government, 2018). The Malicious Deep Fake Prohibition Act strictly defines the boundaries of a "deep fake": an audiovisual record created or altered so that it would falsely appear to be an authentic record of a person's actual speech or conduct. "Audiovisual records" covers digital content such as images, video and voice. In addition, a Texas bill that came into force in 2019 makes it a criminal offence to use technology such as Deepfake to create deceptive videos with the intention of interfering with elections (Texas State Government, 2019).

S.3805 – Malicious Deep Fake Prohibition Act of 2018-USA
In addition to the United States, China is also actively addressing the infringement of portrait rights and the online violence caused by Deepfake. The Civil Code, which came into force in 2021, prohibits any organisation or individual from using information technology to forge another person's likeness and thereby infringe their portrait rights, responding to the need for stronger protection of portrait rights amid technological development (Government of the People's Republic of China, 2021).

Civil Code of the People’s Republic of China-China
Privacy Leakage
Behind Deepfake's rapid rise in popularity and the controversy around it lie users' concerns about their privacy and security (Crawford, 2021). The information Deepfake obtains from users is mainly portrait images (Rathgeb et al., 2022). From a purely technical point of view, uploading a photo is no riskier here than on any other platform or app. However, the Deepfake database stores a large number of face exemplars, which exposes users to greater risk. So-called AI face swapping produces a fake face generated by AI-based human image synthesis (Rathgeb et al., 2022). There are two key steps: face extraction and face transformation. Face extraction outlines the face in the photo and labels key areas such as the eyes and nose, so that the object to be replaced in the image is clearly delimited (Rathgeb et al., 2022). Biometric information such as fingerprints, irises and faces is arguably more important to personal privacy than an address or a mobile phone number (Schick, 2020). A password can be replaced if it is lost, but biometric information is non-renewable: once it is compromised, you cannot have a second face.
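To make the face-extraction step more concrete, here is a minimal sketch that uses OpenCV's bundled Haar cascade detector to locate candidate face regions in a photo. The function name, file name and parameters are illustrative assumptions for this post, not the actual pipeline of any deepfake tool.

```python
# Minimal sketch of the "face extraction" step described above, using OpenCV's
# stock Haar cascade face detector. The image path is a placeholder assumption.
import cv2

def extract_faces(image_path: str):
    """Detect faces in an image and return their bounding boxes as (x, y, w, h)."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    # Each box marks a candidate face region that a swap pipeline would then
    # align and annotate with landmarks (eyes, nose, mouth) before replacement.
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in boxes]

if __name__ == "__main__":
    print(extract_faces("portrait.jpg"))  # e.g. [(120, 80, 96, 96)]
```

Even this toy detector shows why face data is so easy to harvest at scale: locating and cropping faces from ordinary photos requires only a few lines of freely available code.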
Such important facial information is being collected and disseminated on the Internet by Deepfake. In 2018 China introduced the Personal Information Security Specification, which makes clear that personal biometric information, such as facial recognition features, is sensitive personal information (Government of the People's Republic of China, 2018). Users' express consent must be obtained before such information is collected; in storage, it must be de-identified so that individual characteristics cannot be disclosed; and a high level of security must be maintained while it is transmitted (Government of the People's Republic of China, 2018). The European Union likewise released its Ethics Guidelines for Trustworthy AI in 2019. The guidelines list privacy and data governance as one of the seven requirements AI must meet and prohibit the spread of users' private data on the Internet, strictly delineating the boundary between AI and personal privacy (European Union, 2019).
Deepfakes affect politics
Crawford (2021) argues that AI, as a tool of state power, can shape politics. In recent years, political disinformation has flooded social networks, and the emergence and development of Deepfake once again provides a more convenient way to create false information (Giansiracusa, 2021). According to research by Princeton University professor Jacob Shapiro, a total of 96 political influence campaigns were run on social media between 2013 and 2019, most of them aimed at defaming politicians, misleading the public and inflaming debate (Mazarr, 2019). Of these campaigns, 93 per cent created original content, 86 per cent amplified existing content, and 74 per cent distorted objectively verifiable facts (Mazarr, 2019). On YouTube, fake face-swapping videos of Donald Trump, Barack Obama, Elon Musk and other celebrities and politicians have become commonplace. Although many of these videos are spoofs made by netizens, there are already precedents showing what deepfake forgery can do in the political arena. In 2018, Gabonese President Ali Bongo disappeared from public view for several months due to illness. To dispel conspiracy theories, the government released a video of Ali Bongo's New Year address; the video, widely suspected of being a deepfake, helped spark an attempted military coup (Washington Post, 2020). In 2019, a video allegedly showing Malaysian Economic Affairs Minister Azmin Ali having sex with a man went viral. Although Azmin Ali claimed the video was forged using deepfake technology and was a political conspiracy, international experts said they could find no signs of forgery (CNA, 2019).
Germany was the first Western country to enact dedicated legislation against harmful speech on the Internet, and its current laws also crack down hard on false video information as a way of curbing AI-enabled fraud. Germany's Network Enforcement Act (NetzDG), which took full effect in 2018, requires social media companies to set up procedures for reviewing illegal content on their platforms and to delete manifestly illegal information within 24 hours. Offenders can face fines of up to 50 million euros.
Key issues
Faced with a huge flood of online content and increasingly strict content moderation policies in various countries, it is ever more difficult for traditional review models to accurately judge the meaning of content and respond in time to the problems caused by the explosion of information (Gillespie, 2021). Moderation mechanisms for synthetic content and algorithmic review are not yet mature: although machine moderation can effectively relieve the pressure on human reviewers, imperfect algorithms can still produce discriminatory results (Gillespie, 2021).
A second issue is the use of artificial intelligence, driven by political or economic interests, to produce fake news and to spread it widely or target it at specific groups. Deepfake is an upgraded form of content forgery in the age of artificial intelligence: a deep-forgery technique based on generative adversarial networks that can be used in video production to "change faces" and "change voices" and to mimic behaviour, and that may in future even be able to fabricate entire environments and scenarios (Filimowicz, 2022; Young, 2019). The main difference from ordinary fake news is that people are largely unable to judge the authenticity of the content itself, which makes governance more difficult.
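As a rough illustration of the generative adversarial setup that such deep forgery relies on, the sketch below pairs a tiny generator and discriminator in PyTorch. The network sizes, training data and hyperparameters are placeholder assumptions for illustration only, not a working face-swap model.

```python
# Toy generative adversarial setup in PyTorch, only to illustrate the
# generator-vs-discriminator idea behind deepfakes. All sizes and the
# "real" data are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Stand-in for real image features, scaled to match the generator's Tanh range.
real_batch = torch.rand(32, data_dim) * 2 - 1

for step in range(100):
    # Train the discriminator to tell real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The governance difficulty follows directly from this adversarial design: the generator is explicitly optimised until its forgeries can no longer be distinguished from authentic material, which is why ordinary viewers struggle to judge authenticity by eye.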
In a smart era where everything is connected and everything is media, any smart terminal can become both a source of content and a window for receiving it, storing large amounts of data that can be mined. The issue of personal privacy leakage has therefore jumped to the forefront of risk in online content governance (Waldo et al., 2007). Deepfake relies on a powerful database of facial data (Rathgeb et al., 2022). This information is private, unique and unchangeable, and its leakage or misuse would seriously harm citizens' rights.
Conclusion
In the digital age, people will face both the more complex problems inherent in online content and new issues arising from the security and misuse of AI itself. At the same time, the continued maturing and optimisation of AI technology, and its potential application in various governance scenarios, will provide more efficient technical support and fresh ideas for online governance. The enabling effect of AI technology on online content thus develops in a spiral, and opportunities and challenges will co-exist for a long time to come.
Reference List
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Filimowicz, M. (2022). Deep fakes: Algorithms and society. Routledge, Taylor & Francis Group.
Giansiracusa, N. (2021). How algorithms create and prevent fake news: Exploring the impacts of social media, deepfakes, GPT-3, and more. Apress.
Gillespie, T. (2021). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Hao, K. (2021, February 16). Deepfake porn is ruining women's lives. Now the law may finally ban it. MIT Technology Review. Retrieved April 14, 2023, from https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/
Kalpokas, I., & Kalpokiene, J. (2022). Deepfakes: A realistic assessment of potentials, risks, and policy regulation. Springer.
Mazarr, M. J. (2019). The emerging risk of virtual societal warfare: Social manipulation in a changing information environment. RAND.
Rathgeb, C., Tolosana, R., Vera-Rodriguez, R., & Busch, C. (2022). Handbook of digital face manipulation and detection: From deepfakes to morphing attacks. Springer.
Schick, N. (2020). Deep fakes and the infocalypse: What you urgently need to know. Monoray.
Text – S.3805 – 115th Congress (2017-2018): Malicious Deep Fake Prohibition Act of 2018. (n.d.). Congress.gov. Retrieved April 14, 2023, from https://www.congress.gov/bill/115th-congress/senate-bill/3805/text
Global Voices. (2021, October 21). Taiwan: Deepfake pornographic video victims call for new laws against sexual violence in cyberspace. The News Lens International Edition. Retrieved April 14, 2023, from https://international.thenewslens.com/article/157928
Waldo, J., Lin, H., & Millett, L. I. (2007). Engaging privacy and information technology in a digital age. National Academies Press.
Young, N. (2019). Deepfake technology: Complete guide to deepfakes, politics and social media. Nobert Young.
Washington Post. (2020, February 13). The suspicious video that helped spark an attempted coup in Gabon | The Fact Checker [Video]. YouTube. Retrieved April 15, 2023, from https://www.youtube.com/watch?v=F5vzKs4z1dc
CNA. (2019, June 18). Third batch of sex videos implicating Malaysian minister released [Video]. YouTube. Retrieved April 15, 2023, from https://www.youtube.com/watch?v=r_SUwxmrVbQ
Government of the People's Republic of China. (n.d.). 中华人民共和国民法典 [Civil Code of the People's Republic of China]. The National People's Congress of the People's Republic of China. Retrieved April 14, 2023, from http://www.npc.gov.cn/npc/c30834/202006/75ba6483b8344591abd07917e1d25cc8.shtml
Government of the People's Republic of China. (n.d.). 个人信息保护有法可依 [Personal information protection now has a legal basis]. The National People's Congress of the People's Republic of China. Retrieved April 15, 2023, from http://www.npc.gov.cn/npc/c30834/202108/fff5b54882e6484299fc95db30bdba44.shtml