Under the Silicon Sky: How Will Deepfakes and Other Generative AI Rule the World by Manipulating Imaginations?

Short videos have become the most popular form of entertainment worldwide. People can share their thoughts and experiences in ways that were never possible before. Some focus simply on sharing their lives, while others have turned to live commerce to profit from their audience. “April Anna” is one channel that profits from such commerce on the Chinese short-video platform Xiaohongshu.

In the channel’s videos, “Anna” expresses how much she loves Russia and China and openly says she wants to marry a Chinese man. She then tries to sell goods exported from Russia, convincing her viewers that the money will be used to save her homeland, Russia.

It seems like a beautiful story, and her audience feels the same way. By January 2024, she had over three thousand followers, and her videos on the platform had gained over one hundred thousand views. She even opened an online shop on Taobao, one of the major online shopping platforms. However, the growth stopped suddenly. On January 29th, a YouTuber named Olga Loiek posted a video titled “Somebody Cloned Me in China…”. In it, she expressed her confusion and fury at channels such as “April Anna” and “Natasha in China” stealing her face and identity in an attempt to sell Russian goods to Chinese viewers. She stated that, as a Ukrainian, she would never participate in such activities while the war between Russia and Ukraine is ongoing.

She identified at least five accounts using her face as training material to fake her identity with AI and sell Russian products (Loiek, 2024). After the video, most of those fake accounts were taken down by the platforms, but some remain on Taobao, selling “Russian” goods that actually come from China. The incident, which could have severe economic and political implications, ignited a public outcry, with many demanding immediate legislation and regulation of AI-generated content to prevent such misuse. Some even speculated that it was a form of propaganda by the Chinese government, further heightening the stakes.

How do these deepfakes work? Could this happen to anyone? To understand these problems, we need to know what AI-generated content (AIGC) is and what has happened in the field in recent years.

What Exactly Is AIGC?

According to Cao et al., AIGC refers to using generative AI (GAI) algorithms, guided by human instructions that teach and steer the model, to produce content that satisfies those instructions (Cao et al., 2023). In this blog, I will focus only on image and video generation and the concerns it brings to society. After many iterations and refinements, these AI tools have reached the point of near-perfection. In image generation, Stable Diffusion and Midjourney are the tools artists and creators use most. For video generation, the most common tools are the functions integrated into video-industry software such as After Effects and DaVinci Resolve. However, a new model called Sora (meaning “sky” in Japanese), trained by OpenAI, has achieved astonishing visual results and will be released to the public shortly. Sora may well become the primary tool for future video generation.

Video generated by Sora (OpenAI)
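For readers curious to experiment, the sketch below shows roughly how an image can be generated locally with Hugging Face’s diffusers library. It is a minimal example, not any channel’s actual workflow; the checkpoint name, precision, and GPU assumption are illustrative and may need adjusting for your setup.

```python
# A rough sketch of running a Stable Diffusion checkpoint locally via diffusers.
# The model id and settings are illustrative examples, not the article's method.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # an example public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a CUDA GPU is assumed here

# The text prompt is turned into an image by the denoising process described below.
image = pipe("a watercolour painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```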

These generation tools’ features vary, but at their core they rely on the same mechanics: starting from a blurry, noisy image, the model uses the input text as guidance and passes the image through a sequence of denoising steps, each of which removes some noise and refines the result, until a final image that matches the prompt emerges (Dehouche & Dehouche, 2023).
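To make that loop concrete, here is a deliberately simplified toy sketch of the idea. In a real diffusion model, a large neural network conditioned on the text prompt predicts the correction at each step; here a fixed target array merely stands in for that prediction, so the code only illustrates the “start from noise, refine step by step” structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the prompt describes"; a real model has no such array
# and instead predicts the correction with a neural network at every step.
target = rng.uniform(0.0, 1.0, size=(8, 8))

# Step 0: pure noise.
image = rng.normal(0.0, 1.0, size=(8, 8))

num_steps = 50
for step in range(num_steps):
    # "Denoiser": estimate the direction towards a cleaner, prompt-matching image.
    predicted_correction = target - image
    # Remove a little noise each step; later steps refine finer detail.
    image = image + (1.0 / (num_steps - step)) * predicted_correction

print(np.abs(image - target).max())  # near zero: the noise has been removed iteratively
```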

The news story mentioned above is a typical case of profiting from AI-generated content (AIGC). The technology has developed rapidly over the last few years and has now reached a level where it can blend effortlessly into different artwork styles without being detected. This has led to a significant surge in the AIGC community’s growth and prosperity. Currently, over 500,000 models are available for AIGC applications on Hugging Face, and even more on another platform called Civitai.
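For a sense of that scale, the snippet below uses the huggingface_hub client to list a few of the publicly hosted text-to-image models; the filter value and attribute names are assumptions based on recent versions of the library and are not part of the original article.

```python
# Browse a handful of publicly hosted text-to-image models on Hugging Face.
# Parameter and attribute names reflect recent huggingface_hub releases and
# may differ in older versions.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(filter="text-to-image", limit=5):
    print(model.id)
```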

Given these responsive and convincing results, many industries, such as film and games, have integrated AI tools into their workflows: the Chinese film ‘The Wandering Earth 2’ used AI tools to complete the leading actor’s role after he passed away unexpectedly, and deepfake technology has been used widely on Chinese media platforms such as Bilibili and TikTok. These applications in industry and on social media have brought fresh experiences to audiences, and some of them have been huge successes.

The Manipulated Imagination

Due to the capital required to build AI at scale and the ways of seeing that it optimizes for, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.

Crawford, K. (2021). Atlas of AI, Yale University Press.

Although these technologies have brought great benefits to various industries, ordinary people feel differently. Crawford argues that artificial intelligence is a registry of power: it is neither artificial nor intelligent. She contends that AI depends entirely on existing power and resources and will ultimately serve dominant interests (Crawford, 2021). The current state of AI-generation utilities matches this description:

  • AI consumes natural resources and uses the power generated to compute and produce images or videos.
  • The generated results exist to fulfil a specific purpose.
  • The trained models are built on existing images and videos; the AI can only recombine what is already in its training set and cannot create anything genuinely new.

Based on this summary, AI-generated images and videos primarily cater to the user’s interests and serve as a tool rather than as an intelligent creator of novel content. This brings plenty of opportunities to industry as well as risks to the public.

Concerns over the Rise of AIGC

The rapid expansion of applications built on generative AI raises public concerns about how it is used. Many believe that cases involving political manipulation, impersonation, or financial fraud, such as Olga’s case, will occur more often than expected without proper regulation and governance. Countries and blocs are now drafting, or have already passed, laws to regulate AI-generated content:

  • China plans to legislate on AI governance by 2025 (Roberts et al., 2021).
  • The European Union has already passed the EU AI Act (Edwards, 2021).
  • The United States has published a Blueprint for an AI Bill of Rights (Office of Science and Technology Policy, 2024).

Despite all these legislative efforts to regulate AI and the content it generates, there is still a long way to go. The lightning speed of AI tool development makes detection systems harder and harder to build. Rehaan et al. argue that existing deepfake detection systems have become obsolete as AI techniques advance, and that more effective detection systems need to be developed (Rehaan et al., 2024).

It is not just a technological issue but also a political one. Reaching agreement between countries and organisations is a challenging task, and plenty of evidence shows that AI-generated content can be used as a political weapon. Former US President Trump shared an image of himself praying on Truth Social, a platform he created, which was later revealed to be AI-generated (Novak, 2023); President Joe Biden’s voice was cloned by an unknown organisation and used in fake robocalls telling the public to stay at home during the voting period (Khalid, 2024). Some have even called the 2024 US election an “AI-generated hell” (Jeong, 2024).

AI-generated image of Trump praying in a church (Novak, 2023)

Some of these cases are domestic, and some are international. In 2023, the US Department of Commerce banned the sale of advanced AI chips to China to slow the development of the next generation of frontier models, and more chips are set to be banned this year over military concerns (Leswing, 2023). All these restrictions and bans make global governance and regulation even harder.

A $25 Million Financial Fraud: Hong Kong Worker Scammed by a Deepfake CFO

As AI technology develops, the content it generates will inevitably be used for illegal activities such as financial fraud and impersonation. A multinational firm fell victim to financial fraud and lost about 25 million dollars. A finance worker at the firm received an email claiming that a secret transaction needed to be carried out. At first, the worker ignored it, suspecting it was a scam.

However, after a video call, the worker became convinced that the person on the call was the company’s chief financial officer: the person on the other side not only looked like him but also sounded like him. Without hesitation, the worker transferred 200 million Hong Kong dollars, about 25 million US dollars. He did not realise it was a fraud until he reported the transaction to the firm (Todd, 2024).

Hong Kong police arrested six people connected to similar scams, but it is still unclear whether the money was recovered, and the police did not reveal further details about the case (Chen & Magramo, 2024). Security experts recommend taking precautions before such incidents happen again, such as integrating authentication steps into online meetings, but how to do so, and which detection techniques to rely on, remains an open problem.

Another similar case happened in politics during the 2024 election. A group of Trump supporters shared a series of ten photos on Facebook that purported to showcase Trump’s support for the Black community. However, it was later revealed that these photos were generated by AI tools and were not real photographs. Some people initially believed the images were authentic until media outlets clarified that they were AI-generated.

The purpose of creating these AI-generated images was to help Trump win six key swing states and the 2024 election. However, according to the BBC, such behaviour may inflame election tensions again, as happened in the 2020 election (Spring, 2024).

AI-generated pictures showing Trump with African-American supporters (Thurston, 2024)

The Unknown Path Towards The Future: Can People Still Trust What They See?

Despite the rapid development of AI technology, governments and organisations are trying to regulate it as best they can. For instance, while China has no established laws dedicated to artificial intelligence, an interim regulation was introduced and took effect in August 2023. It sets out guidelines for governing the use of AI-generated content (AIGC) and measures for safeguarding personal information and privacy in the context of AI (Cyberspace Administration of China, 2023). The EU’s AI Act takes a risk-based approach and prohibits uses of artificial intelligence that pose an unacceptable risk. In the United States, states such as New York have also set up laws to regulate the technology after the Blueprint came out (Kiesow Cortez & Maslej, 2023).

Under these regulations and restrictions on AI-generated content, most major social media platforms have taken action:

  • YouTube plans to introduce a feature that informs viewers when a video is synthetic, and it may remove the video or suspend the account if synthetic content touches on elections, ongoing conflicts, public health crises, or public officials in a harmful way (Flannery O’Connor & Moxley, 2023).
  • TikTok asks its content creators to label synthetic videos as AI-generated. It also prohibits synthetic content involving public figures and ongoing conflicts or incidents; such content may be removed entirely and the creator’s account suspended.
  • Bilibili, one of the biggest video platforms in China, requires its content creators to label AI-generated content, along with five other specific tags such as risky behaviour and personal opinion, in their videos. The policy took effect in September 2023 (TechNode Feed, 2023).

Though many platforms have set rules for regulating AIGC under national laws and acts, it is still noticeable that many platforms continue to use AIGC widely to attract users. The Xiaohongshu platform mentioned at the beginning still promotes videos and posts related to AI-generated content and currently has no regulation on AIGC. There have even been reports that the platform uses artwork from active creators to train its models without notice or consent.

As models like Sora continue to develop, and given the financial and political interests they serve, we may be entering an era in which we cannot even believe our own eyes. Because laws and regulations inevitably lag behind the technology, we still have a long way to go.

References

Cyberspace Administration of China. (2023). Interim regulation on the management of generative artificial intelligence (AI) services. Retrieved from https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392

Dehouche, N., & Dehouche, K. (2023). What’s in a text-to-image prompt? The potential of stable diffusion in visual arts education. Heliyon, 9(6).

Edwards, L. (2021). The EU AI Act: a summary of its significance and scope. Artificial Intelligence (the EU AI Act), 1.

TechNode Feed. (2023). Bilibili requires users to tag AI-generated content. TechNode. https://technode.com/2023/09/15/bilibili-requires-users-to-tag-ai-generated-content/

Jeong, S. (2024). The AI-generated hell of the 2024 election. https://www.theverge.com/policy/24098798/2024-election-ai-generated-disinformation

Khalid, A. (2024). Two Texas companies were behind the AI Joe Biden robocalls. https://www.theverge.com/2024/2/6/24063885/biden-robocalls-ai-fcc-cease-desist-lingo-telecom-life-new-hampshire

Kiesow Cortez, E., & Maslej, N. (2023). Adjudication of Artificial Intelligence and Automated Decision-Making Cases in Europe and the USA. European Journal of Risk Regulation, 14(3), 457-475. https://doi.org/10.1017/err.2023.61

Leswing, K. (2023). U.S. curbs export of more AI chips, including Nvidia H800, to China. https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html

Loiek, O. (2024). Somebody Cloned Me in China… https://www.youtube.com/watch?v=3FQSFnZpsqw

Chen, H., & Magramo, K. (2024). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’. CNN. https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

Flannery O’Connor, J., & Moxley, E. (2023, November 14). Our approach to responsible AI innovation. YouTube Official Blog. https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/

Novak, M. (2023). Donald Trump Shares Fake AI-Created Image Of Himself On Truth Social. https://www.forbes.com/sites/mattnovak/2023/03/23/donald-trump-shares-fake-ai-created-image-of-himself-on-truth-social/?sh=1a4054c571f6

Office of Science and Technology Policy. (2024). Blueprint for an AI Bill of Rights. Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Rehaan, M., Kaur, N., & Kingra, S. (2024). Face manipulated deepfake generation and recognition approaches: a survey. Smart Science, 12(1), 53-73. https://doi.org/10.1080/23080477.2023.2268380

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. In L. Floridi (Ed.), Ethics, Governance, and Policies in Artificial Intelligence (pp. 47-79). Springer International Publishing. https://doi.org/10.1007/978-3-030-81907-1_5

Spring, M. (2024). Trump supporters target black voters with faked AI images. https://www.bbc.com/news/world-us-canada-68440150

Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. https://doi.org/10.48550/arXiv.2304.06632

Thurston, J. (2024). AI images of Donald Trump with black voters spread before election. The Times. https://www.thetimes.co.uk/article/ai-images-of-donald-trump-with-black-voters-spread-before-election-p3fhfc8wl

Todd, D. (2024). Hong Kong clerk defrauded of $25 million in sophisticated deepfake scam. SecureWorld. https://www.secureworld.io/industry-news/hong-kong-deepfake-cybercrime
