AI tools provide many conveniences across different areas, including drawing, writing, and video production. When AI entered the online world, many of us welcomed it and kept testing how complex the tasks it could handle might be. However, as AI participates more frequently in daily life, something has begun to change: AI has started to disturb the digital world (Crawford, 2021). People worry that such powerful technology will intensify insecurity on the internet, and fake news, which AI makes easy to create, is a good example of the resulting chaos. Because it happens online every day and causes political and privacy problems, this blog will focus on two cases: a photo of Pope Francis wearing a stylish white puffer jacket, and a Chinese girl whose clothes were 'taken off' by AI. These two cases illustrate the problems caused by AI-generated fake news and why people believe it.
Recently, Pope Francis, the head of the Catholic Church, the bishop of Rome, and the sovereign of the Vatican City State, attracted public attention through his 'fashion sense' rather than Catholic doctrine. A picture of him wearing 'Balenciaga' (a famous fashion brand) was identified as fake; according to analysis by technical staff, the photo was made by AI. Even after the clarification, people still fear how convincingly AI can pass fakes off as real: when we look at such pictures, only small clues allow the naked eye to spot the artificially synthesized parts. For non-Catholics, the photos were a shock, suggesting that the Pope cared not only about God but also about fashion, though after the clarification non-believers seemed unaffected by the prank. For the faithful, however, it was a disaster. They saw their leader being mocked, which is disrespectful to their faith. Moreover, if AI can fabricate an even more realistic 'Pope' in fake news and put misleading instructions in his mouth, the outcome for the faithful could be far worse.
As a public figure, Pope Francis had photos all over the internet before AI appeared, and his portrait had been spoofed many times before as well. AI, however, lowers the technical barrier and makes the fakes look more 'real.'
Figure 1: AI-generated image of the Pope wearing 'Balenciaga'
Although public figures constantly face this kind of infringement of portrait rights because of their high exposure, this is a classic internet problem, and ordinary people are not in a better position. Even though some laws protect privacy, there is still huge room for improvement (Karppinen, 2017). In this case, the Pope was merely dressed in trendy clothes and was not made to deliver instructions asking congregants to commit illegal acts. Three years ago, however, political and religious figures were already being exploited by AI to make fake political statements. The image of former United States president Barack Obama was used by AI to create fake news: realistic footage of him making political statements that caused panic in American society. Fortunately, the video was quickly denied by officials, but AI technology remains a cause for concern.
Figure 2: The fake Obama video statement
In another case, an ordinary young Chinese girl posted a photo of herself on RED (a Chinese social media platform similar to Instagram). In the photo she is standing in the subway, showing off her new clothes. After a while, along with the 'Likes,' she received a naked version of her photo. She then found that the 'naked' her had become the main character of a news story, which described the photo as a 'naked challenge' on public transportation arranged by her 'sugar daddy.' She immediately sued over the story and issued a clarification. Even so, some people still believed the fake news and left awful comments, and worse, some shared the 'naked' photo in private chat groups.
Figure 3: AI 'took off' her clothes, and some people shared the images in chats
Worst of all, a simple search for 'AI nude' still turns up many AI websites offering this 'undressing' service. This is not the first case of AI 'taking off' people's clothes; similar cases include swapping faces into pornographic videos. Ironically, though, it is the first to attract the attention of the public and the relevant legal departments. I also noticed that many legal departments described the situation as caused by 'technology' in general rather than as a specific AI issue, and this lack of focus on AI may trigger more problems.
Moreover, if netizens had not discovered that the nude photos were fake, the fake news would have continued to spread across the internet. The platform that published the story did not take immediate action, nor did it recognize the photo as an AI composite, so it did nothing to stop the news from spreading. This lack of awareness of AI technology is detrimental to online governance, both for news platforms and for government departments.
Why AI-generated fake news can deceive people
The most obvious reason is the advancement of AI technology. Progressive technology is indeed what dressed the 'Pope' in fashionable clothes and left the Chinese girl standing 'naked' in the subway, and it is more mature and convenient than Photoshop, the long-popular tool for composite photos. Technology, though, is only part of the concern. AI companies may bear more responsibility, because the whole fake-news creation process seems unrestricted: the easy operation of AI websites and the free use of strangers' photos make the tools available to everyone. The goal of AI is to offer digital services to humans; for that reason, AI is just a data-processing tool and cannot distinguish right from wrong (Crawford, 2021). Since the generator and the platform provide the service, users do not know whether those companies intervene in the data processing. Platform companies strictly protect information about how they moderate content, and the rules and the data we would need to evaluate their performance are kept secret (Suzor, 2019). I prefer to believe they leave the issue to the legal departments. This raises a question: 'If decisions are seen as being made by machines, then who is responsible for the resulting outcomes?' (Flew, 2021).
Besides the AI itself, the lack of relevant legal protection leaves room for such fake news to circulate online. In the Chinese 'naked' girl case, as mentioned above, some legal departments defined the case as a technological crime. 'The Cybersecurity Law of the People's Republic of China' (2015) is the key legal instrument for internet governance in China, yet some of its terms are blurred in this case. For example, Article 27 of the law was used to explain the crime.
Article 27 of 'The Cybersecurity Law of the People's Republic of China' reads:
Article 27: Any individual and organization shall not engage in illegal intrusion into other people's networks, interference with the normal functions of other people's networks, theft of network data, or other activities that endanger network security; shall not provide programs or tools specifically designed to engage in network intrusion, interference with normal network functions and protective measures, theft of network data, or other activities that endanger network security; and, knowing that others are engaged in activities that endanger network security, shall not provide technical support, advertising and promotion, payment settlement, or other assistance.
However, it is unclear who should be blamed in this case: the user who generated the 'naked' images, the AI platform, or the social media site where the girl posted her photos. The AI platform's position is especially confusing, so how to govern AI remains a challenge for the government. Until now, most attempts to regulate the internet through intermediaries have been relatively straightforward, with lawmakers worldwide generally not paying enough attention to this complicated and complex situation (Suzor, 2019).
Improving the law is still the most important part. States need to cooperate with other stakeholders, including the AI platforms, to draft more detailed legal terms. The fast development of internet technology constantly raises new issues, and such cooperation is also an effective way to supervise platforms' own regulatory work. Considering that regulation is expensive and time-consuming, the role of the state, compared with traditional command-and-control legislation, is relatively limited to informal supervisory and mentoring roles (Gorwa, 2019). AI platforms therefore play a crucial role in dealing with fake news. In the Pope's case, for example, he is a celebrity attached to a sensitive religious topic; when users try to use such photos or names, platforms could set limitations and track how the material travels across the media. However, limitation and tracking may restrict creative work, even betraying AI's original goal, and they carry a massive cost in both money and human resources. Hence, there is a need for third parties, since they can help platforms save regulation budgets (Gorwa, 2019), and third parties can have a better effect on internet governance than other departments (Flew, 2021). In the Chinese girl's case, the naked photos used in the fake news were exposed as synthetic by ordinary users of social media. They lack the technical proficiency of the platforms, but they use the platforms frequently and are familiar with cyberspace. These users belong to digital civil society groups, which are part of non-governmental organizations (NGOs). Nevertheless, NGO regulation is informal governance: it lacks legal backing and organizational regulatory experience, which is why NGOs need to cooperate with platform companies and governments. Negotiation among them, however, is far more challenging. Who funds NGO training? How much responsibility must the platforms take? How do we balance creativity against limits on the use of AI?
Whether in the case of the Pope or the Chinese girl, the emergence of artificial intelligence, while making technological creation more convenient and accessible, has also brought new governance challenges to the relevant parties online. Public figures, especially those involved in politics and religion, have significant influence, and social networks already give information about them a faster way to spread. Effective AI governance requires a multidisciplinary approach that combines expertise from fields such as computer science, law, ethics, and social science. It also requires ongoing collaboration and dialogue among stakeholders so that rules and guidelines are updated and adapted to changing circumstances. However, this collaborative process faces multiple challenges. For AI platforms, whose original purpose is to provide low-cost technology to users, complying with collaborative terms and restricting AI features to avoid infringement and criminal behavior would defeat that purpose. For the government, whether the legal department or the network governance department, the governance terms of AI platforms are too unclear, and the management process lacks transparency (Nooren et al., 2018), so the government cannot offer targeted governance advice and legal support. NGOs share the same problem as the government, and in addition their recommendations and actions are not formally recognized and lack authority. Nevertheless, tripartite cooperation remains an effective way to manage AI platforms.
1. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392
2. Flew, T. (2021). Regulating Platforms. Polity. https://bookshelf.vitalsource.com/books/9781509537099
3. Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
4. Nooren, P., van Gorp, N., van Eijk, N., & Fathaigh, R. Ó. (2018). Should we regulate digital platforms? A new framework for evaluating policy options. Policy & Internet, 10, 264-301. https://doi.org/10.1002/poi3.177
5. Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108666428
6. Tumber, H., & Waisbord, S. (Eds.). (2017). The Routledge Companion to Media and Human Rights (1st ed.). Routledge. https://doi.org/10.4324/9781315619835