Balancing Innovation and Ethics: The Reign of AI and Human Rights

Deepfake can mimic your face.

In today’s digital world, AI is almost everywhere and plays an increasingly important role in our daily lives. From Face ID to voice assistants on phones, from self-driving cars to smartwatches, from generated text to generated videos, AI technology already permeates our lives, bringing convenience and innovation.

As AI is introduced across industries, issues of all kinds have been raised: concerns about privacy protection, unemployment, ethical challenges and copyright disputes, sparking contemplation and apprehension about the development and application of artificial intelligence.

Can we still trust?

The development of AI is a double-edged sword, bringing both opportunities and potential risks. Concerns such as algorithmic bias and data privacy are increasingly debated, while the intersection with copyright challenges adds further complexity to the discussion.

One of the most abused technologies is deepfake, which has enabled a range of crimes as well as online harassment.

Deepfake technology has fuelled online fraud and sexual harassment. A proactive exploration of solutions to properly regulate AI is therefore needed, including governance policies, ethical guidelines and stronger interdisciplinary collaboration, to maximize the potential of AI while protecting individual rights and collective values.

AI and human rights: protection or violation?

In January 2024, AI-generated pornographic images of Taylor Swift flooded the internet, created with deepfake technology. X, the platform on which they spread, reacted slowly; one post drew 47 million views before it was finally taken down.

Image 1: Deepfake Explicit Images of Taylor Swift Spread on Social Media.

Taylor Swift was not the only victim, but she was the most famous one. Her fans pushed X to delete the content, and the case provoked widespread public anger. She survived this episode of online sexual abuse, but many other women are not so lucky: ordinary women do not have an army of fans working to protect their rights and take such content down (Saner, 2024).

The incident raised so much public concern that even the White House responded.

White House: AI is alarming.

How can we protect ordinary individuals?

With the progress of artificial intelligence, creating and disseminating pornography has become far easier. This technological advancement has led to the widespread circulation of deepfake content on the internet, resulting in serious violations of victims’ human rights. And women bear the brunt of this scourge.

People cannot tell whether content is generated by AI, and this is not something that can be apologised for and explained away after the fact. Once the harm has occurred, some consequences cannot be undone. If even a powerful woman like Taylor Swift can suffer so much from online harassment, ordinary people stand little chance.

My fake naked body: the story of an everywoman’s online sexual abuse

Noelle Martin was an ordinary law student when she discovered that her face had been taken from her social media accounts and edited into pornographic videos. She was only 18 at the time. She went to the police, but there were no laws then to stop the dissemination of her fake naked images, and no way to delete the generated videos and pictures.

The perpetrator did not stop there, and made things worse by creating more deepfake pornographic videos of her.

Even after 10 years, she has still not fully recovered from the online abuse.

She said this was something she could never escape, because the harm is a permanent, lifelong form of abuse (Scanlan, 2023).

Cyberbullying and online sexual abuse can inflict severe psychological harm on victims. Those subjected to it may face serious mental health issues such as anxiety, low self-esteem, depression and even post-traumatic stress disorder. The persistent abuse and humiliation can lead to social withdrawal, affecting their social life and interpersonal relationships.

Martin (2024) said that the takedown and removal of fake content is a futile, uphill battle.

“You can never guarantee its complete removal once something is out there,” she said.

“From your employability to your future earning capacity to your relationships. It’s an inescapable form of abuse that has consequences that operate in perpetuity,” Martin (2024) said.

Victims of online harassment may also experience social rejection and isolation, losing confidence and social skills as they feel unsafe and helpless, which leads them to avoid social interaction and participation.

Everything that happens to these victims began simply because a perpetrator clicked a button in a generation app.

No one in the chain of online abuse is innocent: the developers of the AI application, the users who generate the harmful content, the disseminators who spread it, and the platforms that allow such content to remain online.

The spread of malicious attacks and rumours can have a long-term negative effect on the victim’s daily life, academic pursuits and career. Victims may also face actual violence and even become targets in the real world, resulting in not only mental harm but also physical harm.

What caused these harms? Bias and discrimination in AI algorithms

In incidents like these, the targets are, sadly, overwhelmingly women and girls. Such incidents are closely tied to misogyny.

AI systems are typically trained on, and learn from, data; the content they generate depends on what they are fed. If the training data contains gender bias or misogynistic tendencies, AI-generated content may reflect those biases in its decisions and behaviour. Even a system with spectacular training can make terrible predictions when presented with novel data (Crawford, 2021).
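The point that a model reproduces the skew in its training data can be illustrated with a deliberately toy sketch. This is not any real production system: the “model” below is just label counts per word, and the hypothetical training set over-associates occupations with one gender, so the predictions inherit that association.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: occupation words
# paired with the gender label they co-occur with in a biased corpus.
training_data = [
    ("nurse", "female"), ("nurse", "female"), ("nurse", "female"),
    ("engineer", "male"), ("engineer", "male"), ("engineer", "male"),
    ("nurse", "male"), ("engineer", "female"),
]

def train(data):
    """The 'model' is nothing more than per-word label counts."""
    model = {}
    for word, label in data:
        model.setdefault(word, Counter())[label] += 1
    return model

def predict(model, word):
    """Predict whichever label was most frequent for the word in training."""
    return model[word].most_common(1)[0][0]

model = train(training_data)
print(predict(model, "nurse"))     # the skewed counts dominate the output
print(predict(model, "engineer"))
```

Nothing in the algorithm is malicious; the bias enters entirely through the data, which is exactly why curating training corpora matters.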

Generally speaking, the problem of transparency and fairness arises from the imbalance between the institutions that design algorithms and control data and the people who are subject to decisions based on those algorithms and data (Flew, 2021).

A study in 2019 from the cybersecurity company Deeptrace found that 96% of online deepfake video content was nonconsensual pornography (Saner, 2024).

So regulation of the use of AI is vital if we want to make full and healthy use of artificial intelligence.

Individuals should recognise the impact of their behaviour in cyberspace. Strengthening online education can help people better understand the dangers and consequences of AI-related online abuse.

However, protecting one’s personal information does not mean retreating from a world where AI prevails. Saying no to online sexual abuse and protecting data privacy does not equal putting someone back in her box. Since existing and potential victims have done nothing wrong, we must find a way to guarantee individuals’ right to express and showcase themselves on social media while protecting them from AI-enabled online harassment.

There is therefore also a need to strike a balance between preventing AI from abusing privacy and preserving the freedom of the online world. We need comprehensive policies and measures that take personal data security into account.

Exploiting deepfakes: AI abuse goes beyond online sexual harassment

Just as gender bias can influence the content generated by AI, it can also be employed by fraudsters. As AI becomes increasingly sophisticated, it can be used by malicious actors to deceive and manipulate individuals online.

Image 2: CBS Reports | Deepfakes and the Fog of Truth.

In some deepfake fraud cases, scammers have fooled banks and security systems. As incredible as it sounds, this happens in everyday life. In simpler cases, scammers use AI-generated voices and images to extract money from victims’ families or companies.

Generating someone’s voice is far easier than we might imagine; even an iPhone can do the job. Go to the settings and you can set up your own Personal Voice, a feature that mimics your timbre and tone. Although setting it up requires reading around 150 phrases and entering a password, which makes it relatively safe, scammers have other ways to obtain your personal information, even at a biometric level such as your voice or appearance.

In February 2024, a finance worker at a multinational firm was tricked into paying out 25 million dollars to fraudsters using deepfake technology (Chen, 2024).

The fraudsters stole facial and vocal data from the worker’s colleagues and invited him into an online meeting in which all 40 colleagues present were generated by deepfake technology. The massive fraudulent transfer took place right in that online meeting room.

Even a multinational corporation, with highly skilled staff and strict operating controls, fell for the deception, highlighting the alarming realism that deepfake technology has achieved.

How to protect human rights?

In the era of big data, we never know when our personal information might be stolen. Our phones use facial recognition, and social media platforms also hold our facial data. Even our voices can be secretly recorded during phone calls. We have no idea when our information is leaked, fed to AI and then abused.

It feels as though we live in a transparent world where our personal information floats freely around us.

The advancement of AI adds considerable complexity to this issue. AI technologies can process vast amounts of data to generate highly realistic simulations or deepfakes that can deceive individuals or ruin reputations, threatening both financial security and personal safety.

The breakdown of social trust

If deepfake technology cannot be regulated properly, society will face a serious crisis of trust. We can no longer be certain whether the images, videos or audio we see and hear are authentic and credible, since any of them could have been maliciously created or manipulated.

This uncertainty can lead people to lose trust in social media, news outlets and other institutions, severely undermining the foundations of social trust and leading to instability and chaos.

In such a landscape, it is crucial to adopt proactive measures to safeguard our personal and financial data. This requires efforts from governments, technology corporations and society as a whole to develop and enforce appropriate governance.

Governments are taking actions


In 2019, the Chinese government introduced rules requiring individuals and organizations to disclose when they use deepfake technology in videos and other media, and prohibiting its use without a clear disclaimer that the content has been generated.

In January 2023, regulations issued by the Cyberspace Administration of China came into effect, requiring users and providers of deepfake technology to establish clear procedures across the whole lifecycle from creation to distribution (Lawson, 2024).

The provisions also require companies and individuals who use deepfake technology to create, copy, publish or transmit information to obtain consent, verify identity, register records with the government, report illegal deepfakes, offer recourse mechanisms, provide watermark disclaimers and more (China Network Information Network, 2022).
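The disclosure idea behind such rules can be sketched in code: attach a machine-readable provenance record (generator, consent flag, content hash) to any piece of synthetic media. This is a minimal illustration only; the field names below are hypothetical and are not taken from the Chinese provisions or any real standard.

```python
import datetime
import hashlib
import json

def disclosure_record(content: bytes, generator: str, consent_obtained: bool) -> str:
    """Build a JSON provenance label for a synthetic media file.

    The SHA-256 hash ties the label to the exact bytes it describes,
    so any later edit to the content invalidates the label.
    """
    record = {
        "ai_generated": True,
        "generator": generator,
        "consent_obtained": consent_obtained,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Hypothetical usage: label some synthetic video bytes before distribution.
fake_video_bytes = b"\x00\x01synthetic-frame-data"
label = disclosure_record(fake_video_bytes, "example-deepfake-app", True)
print(label)
```

Real-world schemes embed such records in file metadata or as cryptographically signed watermarks rather than a plain JSON sidecar, but the principle is the same: the disclosure travels with the content.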


The EU has also taken a proactive approach to deepfake regulation, calling for increased research into deepfake detection and prevention, and also regulations that would assure clear labelling of AI-generated content.

The EU has also passed legislation allowing online social media platforms to remove deepfakes and other disinformation.

In June 2022, the EU’s updated Code of Practice on Disinformation addressed deepfakes, with fines of up to 6% of global revenue for violators (European Commission, 2022).


However, Australia does not have a dedicated, detailed law targeting deepfake technology. In its Final Report: Human Rights and Technology (2021), the Australian Human Rights Commission argued that new technology should be built on ethical AI that protects human rights, and must come with robust human rights safeguards. The development and use of new and emerging technologies should include practical measures focused on regulation, education, training and capacity building.

Boost the good future of AI

In the face of the threat posed by the abuse of deepfakes and unethical AI, social trust is under serious challenge. Effective governance is key to ensuring that technological development aligns with the public interest. Governments, corporations and civil society should work together to establish sound laws and regulations.

The proper use of AI technology can bring many benefits. As Crawford (2021) put it, an AI system is like a container into which various things are placed and from which they are removed.

Keep the good aspects of AI and get rid of the bad aspects.


Chen, H., & Magramo, K. (2024). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer.’ CNN.

China Network Information Network, (2022). The State Internet Information Office and other three departments issued the “Regulations on the Management of In-depth Synthesis of Internet Information Services”.

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

European Commission. (2022). The 2022 Code of Practice on Disinformation.

Flew, T. (2021). Regulating platforms. Polity Press.

Lawson, A. (2024). A look at global deepfake regulation approaches. Responsible AI.

Saner, E. (2024). Inside the Taylor Swift deepfake scandal: ‘It’s men telling a powerful woman to get back in her box.’ The Guardian.

Scanlan, R. (2023). Explicit photo woman will ‘never escape.’ News.

Technology and human rights. (2024). Australian Human Rights Commission.


Feature photo: Bank work with fintechs to counter ‘deepfake’ fraud

Image 1: Deepfake Explicit Images of Taylor Swift Spread on Social Media.

Image 2: CBS Reports | Deepfakes and the Fog of Truth.
