When Seeing Can No Longer Be Believed

INTRODUCTION

There have been instances where celebrities’ and politicians’ images were used to generate fake videos through an advanced artificial intelligence (AI) algorithm. This technology, Deepfake, has posed a new challenge to authenticity and trust. The emergence of Deepfake is not only a milestone in technological advancement but also a significant risk to individual privacy, legal ethics, and public safety. This blog post starts with a detailed case study of how Deepfake may undermine justice, setting the background for a discussion of its potential negative impact; it then explores how the technology is used to influence public opinion and the challenges of governing it, and ends with a consideration of possible governance solutions.

It’s Getting Harder to Spot a Deep Fake Video (Bloomberg Originals, 2019)

DEEPFAKE IN THE COURTROOM

One of the negative impacts that Deepfake has on social and public safety is its intrusion into the judicial system. A striking example in the United States involved a cheerleader’s mother who was charged with harassment for allegedly creating fake videos and photos of her daughter’s rivals (Lenthang, 2021). In the content, initially believed to be generated by Deepfake, the girls appeared naked and engaged in behaviours that could get them kicked off the team, such as vaping and drinking alcohol. However, after technical experts concluded that the videos appeared to be authentic, and due to a lack of other supporting evidence, the charges were dropped (Delfino, 2023). Nevertheless, the mother received a large amount of negative attention over the case, suffered damage to her reputation, and faced threats (Delfino, 2023). Mere doubt about the authenticity of evidence can already inflict reputational and psychological trauma on a victim and complicate legal proceedings, and this is just the tip of the iceberg of the harm that Deepfake technology and its imitative power bring to litigation.

According to Delfino (2023), Deepfake poses three major challenges for judicial proceedings: the authenticity of evidence, the “Deepfake defence” (p. 293), and growing distrust and doubt among jurors. Technological advancements have significantly enhanced the power and efficiency of computing, making algorithms increasingly effective (Flew, 2021, p. 108). Under the paradigms of “Dataism” (van Dijck, 2014, p. 201) and “Surveillance Capitalism” (Zuboff, 2015, p. 83), Deepfake continues to evolve, driven by profit motives and using personal data as raw material. This evolution has led to growing scepticism about the credibility of courtroom evidence. Such scepticism blurs the line between real and fake, causing even authentic evidence to be questioned and defended against claims that it might have been produced by Deepfake, adding extra complexity to legal proceedings. In this process, jurors, who rely on emotional resonance and subjective judgment, may increasingly doubt the authenticity of all digital image and audio evidence.

In addition to interfering with the judicial system itself, these challenges may also threaten social stability and disrupt public order if not addressed promptly. For instance, doubts about the authenticity of evidence might result in the wrongful imprisonment of innocent individuals; evidence that cannot be verified as authentic may be dismissed, allowing criminals to escape legal punishment; and the uncertainty of jurors may affect their judgments, leading to unfair verdicts. Therefore, managing and governing the use of Deepfake has become a critical issue both within the judicial system and in society at large.

HOW IS DEEPFAKE MISUSED?

The previous case is just one example of how Deepfake can affect legal proceedings; in fact, its implications extend far beyond the courtroom. Gaur (2022) identified two growing trends in the use of Deepfake (p. 3): the shift from targeting famous people to the general public, as with the cheerleader’s mother; and the rise in criminal uses such as scams and fraud, as when Elon Musk’s image was used to generate a fake video for stealing cryptocurrencies (Ongweso Jr, 2022). Moreover, Deepfake is increasingly put to other improper uses such as political manipulation and interference, the creation of fake news and misleading information, and non-consensual pornography (Al-Khazraji et al., 2023).

In this digitalized era, social media algorithms tend to “prioritize the distribution of polarizing, controversial, false, and extremist content and the use of social media for unaccountable forms of ‘dark’ targeted ads” (Andrejevic, 2020, p. 45). This curation of content is considered harmful to social and political structures and is employed to encourage conspiracy theories and to manipulate social conflicts for financial and political advantage (Andrejevic, 2020). As a result, such content prioritization not only leads to increasing misuse of Deepfake but also amplifies its potential negative effects, fostering significant consequences such as social division, distorted public trust, damage to the credibility of journalism and media, and legal and ethical challenges (Al-Khazraji et al., 2023).

As a product of data and algorithms, Deepfake should be inherently neutral, yet it is deeply shaped by its users’ intentions. Despite the misuses just discussed, it has demonstrated significant potential in various fields. For example, experiential industries that rely heavily on visual effects, such as the tourism sector, may use the technique to enhance visitors’ engagement and enrich their overall experience (Picazo & Moreno-Gil, 2019). Additionally, Deepfake has made great contributions to the advertising industry, significantly reducing post-production time and cutting production costs (Kietzmann et al., 2021). Essentially, Deepfake reflects technological progress and creative artistic expression; it can revolutionize traditional practices and make vital contributions to society when used properly and responsibly. However, while the technical skills needed to produce a Deepfake may deter average computer users, they are much less challenging for dedicated enthusiasts or gamers (Ice, 2019). Owing to this accessibility, almost anyone can create convincing fake content at minimal cost.

CHALLENGES OF REGULATING DEEPFAKE

Differentiating this algorithm-generated content is usually challenging: Deepfake draws on Deep Learning, a subfield of AI (Gaur, 2022), excels at synthesizing audio, video, and images, and generates “convincing forgeries that can be indistinguishable from authentic recordings” (Al-Khazraji et al., 2023). In a test by Meyer (2023), none of the 100 participants scored higher than 85% when asked to recognize fake content; similarly, the University of Waterloo (2024) reported in more recent research, which tested 260 people, that only 61% of respondents were able to identify AI-generated faces correctly. As the technology evolves, Deepfake is becoming more sophisticated, producing content that is increasingly indistinguishable from reality.

Figure 1: Images used in the University of Waterloo’s study (2024).
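Part of what makes these forgeries so convincing lies in how classic face-swap Deepfakes are built: a single shared encoder is trained together with two person-specific decoders, and a swap is produced by encoding a face of person A and decoding it with person B’s decoder, so that B’s likeness is rendered with A’s pose and expression. Below is a minimal, illustrative PyTorch sketch of that architecture; the layer sizes, image resolution, and variable names are assumptions chosen for brevity, not a real production pipeline.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face into a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Person-specific decoder: reconstructs a face from the latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, two decoders: each decoder learns to reconstruct
# one person's face from the shared latent space.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

# The "swap": encode a face of person A, decode with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)   # stand-in for a real photo of person A
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                # torch.Size([1, 3, 64, 64])
```

In practice, the encoder and both decoders are first trained jointly as ordinary autoencoders on many aligned face images of each person; only then does the cross-decoding trick yield a convincing swap.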

In addition to the difficulty people face in identifying Deepfake content, the effectiveness of artificial intelligence detection tools is also diminishing. Many universities and scholars are actively developing tools to detect synthetic content, yet there is widespread concern that these tools may never be adequate and “may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths” (Bodi, 2021, p. 148). This worry arises from the way machine learning algorithms can be retrained: experts fear that Deepfake technology, evolving rapidly in this virus-antivirus pattern, will eventually outperform existing forensic instruments (Bodi, 2021).
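The “arms race” Bodi describes mirrors the adversarial training loop behind generative adversarial networks (GANs), a common family of techniques for producing synthetic media: each improvement in the detector hands the forger a fresh training signal, and vice versa. The following is a minimal, hypothetical PyTorch sketch of that feedback loop on toy vector data; the network sizes, learning rate, and random data are placeholders for illustration only.

```python
import torch
import torch.nn as nn

# Toy "forger" (generator) and "detector" (discriminator) working on flat
# 32-dimensional vectors standing in for images.
forger = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
detector = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt_f = torch.optim.Adam(forger.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(8, 32)           # stand-in for authentic samples
    fake = forger(torch.randn(8, 16))   # forgeries made from random noise

    # Detector update: learn to label real samples 1 and forgeries 0.
    d_loss = (loss_fn(detector(real), torch.ones(8, 1))
              + loss_fn(detector(fake.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Forger update: adjust forgeries so the detector labels them 1 (real).
    # Every improvement in the detector yields a sharper gradient here:
    # the "virus-antivirus" escalation described above.
    f_loss = loss_fn(detector(fake), torch.ones(8, 1))
    opt_f.zero_grad()
    f_loss.backward()
    opt_f.step()
```

This dynamic also suggests why purely technical detection may never be a settled solution: any widely deployed detector can, in principle, be folded back into the forger’s training objective.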

The rapid pace of technological advancement often outstrips public awareness of the risks it may involve. Tech companies and institutions possess far deeper insight into AI technology than the general public, and the logic of algorithms is not always transparent (Zuboff, 2019). Such asymmetry makes it challenging for the public to fully understand the risks associated with AI technologies. Likewise, there is insufficient technological education from the media and the educational system, which may prevent the public from being promptly informed of the possible threats of emerging technologies like AI and Deepfake. The positive effects of technological development, such as its applications in healthcare, education, and entertainment, typically receive more attention from the media and the market, while the potential negative impacts are often overlooked or underestimated. These factors collectively contribute to a lag in the public’s understanding of the changes that technology brings to society.

Regulating Deepfake on social media platforms poses difficulties too. “Filter bubbles”, a by-product of algorithms, limit the diversity of information users encounter (Pariser, 2011) and entrench and amplify potentially harmful content, leading specific user groups to continuously receive misleading information and thus facilitating further misuse of Deepfake technology. However, completely prohibiting the dissemination of such synthetic content is unfeasible and may amount to an overreaction, since it would violate the free speech rights of people who use the technology responsibly. Consequently, achieving a balance between regulation and freedom on platforms is particularly challenging, as existing detection instruments struggle to capture the latest Deepfake content and, as discussed above, public awareness of its harmful influence lags behind. Current regulations and policies often fail to keep pace with rapid technological advancement, which makes them ineffective at managing the spread of Deepfake.

IMPERFECTIONS IN CURRENT LAWS AND REGULATIONS

Although governments and relevant agencies around the world have acknowledged the potential role of technology in spreading misinformation and have initiated some actions, there is still room for improvement, especially in the realm of Deepfake technology. For instance, Singapore’s Protection from Online Falsehoods and Manipulation Act 2019 targets the governance of fake news and misinformation, playing a crucial role in rapidly addressing their dissemination; however, Meskys et al. (2020) critique it as a reactive measure that does not solve the essential problem. The European Union’s General Data Protection Regulation (GDPR) is another case in point: focused mainly on personal data protection, it is applicable to a certain extent in managing Deepfake content, which often utilizes unauthorized personal images. Nonetheless, it reveals limitations and gray areas (van der Sloot & Wagensveld, 2022), particularly when the synthetic content features fictitious characters or deceased individuals, rendering the regulation ineffective. Although a few specific laws in the U.S. aim to prevent the misuse of Deepfake, such as California’s Assembly Bill No. 730 and Texas’s Senate Bill No. 751, which prohibit the use of Deepfakes to interfere in political elections, they are considered potentially to conflict with freedom of speech (Bodi, 2021).

AREAS TO IMPROVE

As stated above, existing laws and regulations demonstrate various limitations in effectively regulating Deepfake technology. These limitations reflect not only insufficient coverage but also jurisdictional constraints: such rules largely apply only within specific nations or regions, making it challenging to manage content produced or spread abroad. Furthermore, considering that Deepfake content is generated and disseminated by algorithms, it is crucial to control the technique at its source. Thus, more refined technological policies are also needed.

Comprehensive governance measures are required at the national, platform, and individual levels. Just as the co-regulation of social media platforms involves states, firms, and NGOs (Flew, 2021, p. 164), the governance of Deepfake also needs collaborative effort. From the national and governmental perspective, enacting more effective regulations is an essential baseline for protecting society, and internationally coordinated laws or regulations are needed to reduce jurisdictional ambiguities. Social media platforms must regularly update their content moderation standards and technologies to detect and prevent the spread of harmful content while preserving the right to free speech. Most significantly, raising individual understanding and awareness of the risks of technological misuse and the dissemination of false information is essential for better managing and mitigating these concerns; supporting public education is therefore also imperative. By adopting this co-governance approach, we can mitigate the negative consequences of Deepfakes and make use of their beneficial potential, ensuring that the advancement of technology does not disrupt societal stability and public trust.

REFERENCES

Al-Khazraji, S. H., Saleh, H. H., Khalid, A. I., & Mishkhal, I. A. (2023). Impact of Deepfake technology on social media: detection, misinformation and societal implications. The Eurasia Proceedings of Science Technology Engineering and Mathematics, 23, 429–441. https://doi.org/10.55549/epstem.1371792

Andrejevic, M. (2020). Automated culture. In Automated Media (1st ed., pp. 44–72). Routledge. https://doi.org/10.4324/9780429242595-3

Bloomberg Originals. (2019). It’s getting harder to spot a deep fake video [Video]. YouTube. https://www.youtube.com/watch?v=gLoI9hAX9dw

Bodi, M. (2021). The First Amendment implications of regulating political Deepfakes. Rutgers Computer & Technology Law Journal, 47(1), 143-.

Delfino, R. A. (2023). Deepfakes on trial: A call to expand the trial judge’s gatekeeping role to protect legal proceedings from technological fakery. Hastings Law Journal, 74, 293–.

Flew, T. (2021). Regulating platforms. Cambridge: Polity.

Gaur, L. (2022). DeepFakes: creation, detection, and impact (1st ed., Vol. 1). Routledge. https://doi.org/10.1201/9781003231493

Ice, J. (2019). Defamatory political Deepfakes and the First Amendment. Case Western Reserve Law Review, 70(2), 417-.

Kietzmann, J., Mills, A. J., & Plangger, K. (2021). Deepfakes: perspectives on the future “reality” of advertising and branding. International Journal of Advertising, 40(3), 473–485. https://doi.org/10.1080/02650487.2020.1834211

Kirchengast, T. (2020). Deepfakes and image manipulation: criminalisation and control. Information & Communications Technology Law, 29(3), 308–323. https://doi.org/10.1080/13600834.2020.1794615

Lenthang, M. (2021). Cheerleader’s mom created deepfake videos to allegedly harass her daughter’s rivals. ABC News. https://abcnews.go.com/US/cheerleaders-mom-created-deepfake-videos-allegedly-harass-daughters/story?id=76437596

Meskys, E., Liaudanskas, A., Kalpokiene, J., & Jurcys, P. (2020). Regulating deep fakes: legal and ethical considerations. Journal of Intellectual Property Law & Practice, 15(1), 24–31. https://doi.org/10.1093/jiplp/jpz167

Meyer, D. W. (2023). Find the real: a study of individuals’ ability to differentiate between authentic human faces and artificial-intelligence generated faces. In HCI International 2022 – Late Breaking Posters (pp. 655–662). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-19682-9_83

Ongweso Jr, E. (2022). Scammers use Elon Musk Deepfake to steal crypto. Vice. https://www.vice.com/en/article/v7d5y9/scammers-use-elon-musk-deepfake-to-steal-crypto

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Viking.

Picazo, P., & Moreno-Gil, S. (2019). Analysis of the projected image of tourism destinations on photographs: A literature review to prepare for the future. Journal of Vacation Marketing, 25(1), 3–24. https://doi.org/10.1177/1356766717736350

University of Waterloo. (2024). Can you tell AI-generated people from real ones? In NewsRx Science (pp. 93–). NewsRX LLC.

van der Sloot, B., & Wagensveld, Y. (2022). Deepfakes: regulatory challenges for the synthetic society. The Computer Law and Security Report, 46, 105716-. https://doi.org/10.1016/j.clsr.2022.105716

van Dijck, J. (2014). Datafication, dataism and dataveillance: big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208.

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.

Zuboff, S. (2019). The age of surveillance capitalism: the fight for a human future at the new frontier of power. New York: Public Affairs.
