Privacy leaks are everywhere in the era of AI: online scammers' tactics are constantly upgrading!

People often say that technology changes lives, and artificial intelligence has indeed made our lifestyles more efficient and intelligent through data and algorithms. But as AI continues to develop and advance, its security problems have become increasingly prominent. Online scammers have now set their sights on AI technology as a way to strengthen their scams and cybercrime, and news of AI-powered scams emerges one story after another.

The concept and current state of AI scams

As is well known, the main branches of AI include machine learning, deep learning and natural language processing, which simulate human thinking and behaviour through machines and become more natural through self-learning (Gherheş, 2018). Each iteration of AI technology makes scammers' strategies more powerful. AI scams use advanced capabilities such as data analysis, algorithmic prediction and automated decision-making to carry out scam activities through intelligent strategies that are more sophisticated and harder to detect.

In this digital era, people's personal information is preserved in digital form. Everything we do on the Internet and on online media platforms is collected and stored as big data. In this environment, everyone's personal privacy and information face the threat of being leaked or abused. Does our personal privacy really still belong to us online? Unfortunately, it does not. Many of today's privacy leaks originate in digital networks, where platforms collect and store highly detailed user data such as personal preferences, location data and social connections. People's personal privacy can never be completely hidden on the Internet.

Because of this, people's personal information is stolen and collected without their awareness. AI technology has been abused by scammers for online scams and cyberattacks, posing great challenges to network security and personal privacy. Scammers use AI to clone voices, generate images and even produce deepfake videos, expanding both the scale and the sophistication of online scams. The hidden dangers in the privacy and security issues surrounding AI scams cannot be underestimated.

Emerging threats from the weaponization of AI

According to Vaithianathasamy (2019), the online scammer is no longer a lone figure in a hoodie but an operator armed with the most advanced AI technology for intelligent scamming. Online scammers can use these technologies to hide in dark corners while they work. The advancement and diversity of AI technology increase scammers' success rates and make their scams difficult to prevent. Among these developments, the emerging threats caused by the malicious use of AI are frightening.

Online scammers weaponize AI technology to attack personal privacy and undermine network security. Through machine learning, they constantly devise new scams (Babich, Birge & Hilary, 2022), and they use the power of data and algorithms to carry out scam activities remotely with ease, for example through deepfakes, phishing messages and automated network attacks. The images and videos that AI now generates are increasingly realistic, and synthesized text and voices are increasingly fluent, making authenticity hard to judge. It is a chilling thought: what you see is no longer necessarily true.

How is AI technology used by scammers in online scams? (Case studies of two common AI scams)

AI technology is data-driven, and the Internet and digital content greatly increase the amount of information available to it (Andrejevic, 2019). Data can, in essence, be shared online, but extracting data without its owners' permission raises serious privacy and security issues (Baker-Eveleth et al., 2022). Most of the data used in AI scam activities is extracted without the person's knowledge.

For a simple example: have you ever answered a call from an unknown number where the other party says nothing, or an AI robot does the talking? Be careful, because it may be an AI scam call collecting samples of your voice! Personal data and privacy are often stolen and collected just this inadvertently.

Deepfake: AI face swapping and speech synthesis

AI face swapping and speech synthesis have become popular online. People only need to upload a photo of themselves and a short audio clip to receive, within a few seconds, a version of themselves "created" by an AI generator. The technology performs deep learning and natural language processing on the data input provided by users, after which the interactive AI responds quickly and effectively to generate its output (Flew, 2021).

How does deepfake work (BasuMallick, 2022)

Nowadays, online scammers only need to collect or steal a target's photos and audio. With AI image processing, speech synthesis and the underlying algorithms, they can clone the target's facial and vocal features and then carry out deepfake scam activities.

Case study: Finance worker pays out $25 million after video meeting with deepfake

This is a $25 million online scam enabled by AI deepfake technology. It happened at a multinational company in Hong Kong, and the victim was a member of the company's finance staff who received an email about a financial transaction from the company's UK-based chief financial officer.

At first, the finance worker suspected a phishing email, but a subsequent video conference with the CFO and other employees dispelled his doubts: besides the "CFO", the meeting included other colleagues whose appearances and voices he recognised. According to the police investigation, however, every other participant in that video conference had been digitally reconstructed with AI deepfake technology (Chen & Magramo, 2024).

Online scammers now exploit victims' trust in familiar faces and voices to disguise scams aimed at specific individuals or groups. The deep learning behind AI technology makes it easy for scammers to deceive victims through deepfakes.

Automated attacks: phishing SMS

Have you ever received a phishing email or text message? They disguise themselves as trusted sources to trick recipients into handing over personal information, such as account names and passwords, credit card details and family information.

Some scammers add AI to their phishing messages, using natural language processing to craft texts that imitate human phrasing and tone, then sending them automatically and at random to large numbers of potential victims.

The most common lure in phishing messages is a malicious link: scammers induce people to click by creating a sense of urgency or promising rewards (Stojnic, Vatsalan, & Arachchilage, 2021). Many phishing messages push recipients to click a link to fill in information or download software. Once the system grants authorisation to these phishing websites or apps, all of the victim's personal information and privacy can be stolen and used by the scammers.
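The urgency-or-reward pattern described above can be sketched as a simple scoring heuristic. This is only an illustrative toy, not a real spam filter; the keyword lists and the example message are invented for demonstration:

```python
import re

# Hypothetical keyword lists -- illustrative only, not a production filter.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "final notice"}
REWARD_WORDS = {"prize", "winner", "free", "reward", "gift card"}

def phishing_score(sms: str) -> int:
    """Return a rough suspicion score for an SMS message.

    One point each for: urgency language, promised rewards,
    and an embedded link -- the three lures described above.
    """
    text = sms.lower()
    score = 0
    if any(word in text for word in URGENCY_WORDS):
        score += 1
    if any(word in text for word in REWARD_WORDS):
        score += 1
    if re.search(r"https?://\S+", text):  # any embedded link
        score += 1
    return score

print(phishing_score("URGENT: parcel suspended, pay at http://fix.example"))  # 2
print(phishing_score("See you at lunch"))                                     # 0
```

Real phishing detectors combine many more signals (sender reputation, link redirection, language models), but the basic idea of scoring lure patterns is the same.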

How frightening this is: online scammers use AI and algorithms to fabricate a series of seemingly credible sources and phone numbers. Yet these are nothing but misleading props created to defraud people of their personal privacy and property.

Case study: AusPost's phishing messages

Have you ever received a text message from "AusPost" telling you that there is a problem with your delivery, that you must pay a courier fee as soon as possible before redelivery can be scheduled, with a link that looks like the official AusPost website attached at the end? Unfortunately, if you happen to be shopping online at the time, you may well click on this fake AusPost phishing message created by online scammers.

How to spot an SMS Phishing Scam (AusPost, 2019)

Online scammers spoof the alphanumeric sender ID "AusPost" when sending fraudulent phishing messages. Because smartphones group text messages by sender ID, the fraudulent messages appear in the same conversation thread as legitimate AusPost messages. As you can imagine, they are practically indistinguishable.
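The grouping behaviour described above can be simulated in a few lines: phones thread SMS conversations purely by sender ID, so a spoofed ID lands in the same thread as genuine messages. The inbox contents below are invented examples:

```python
from collections import defaultdict

# Simulated inbox: (sender ID, message text) pairs.
inbox = [
    ("AusPost", "Your parcel is out for delivery."),         # legitimate
    ("AusPost", "Redelivery fee due: http://fake.example"),  # spoofed scam
]

# Phones group conversations by sender ID, nothing more.
threads = defaultdict(list)
for sender_id, text in inbox:
    threads[sender_id].append(text)

print(len(threads))         # 1 -- both messages share one conversation thread
print(threads["AusPost"])   # legitimate and scam texts interleaved
```

Nothing in the thread itself tells the recipient which message is genuine, which is why the defence has to come from the message content and links instead.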

AusPost will never SMS customers asking for…(AusPost, 2019)

According to AusPost's official website, AusPost will never ask customers to pay a courier fee or fill in personal information via text message or phone call; any message or call making such requests is a phishing scam disguised as a delivery notice. Once you click the link in a phishing message, you are taken to a fake AusPost website that risks not only stealing your personal information and privacy but also causing heavy financial losses.
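One practical self-defence check implied here is verifying that a link's hostname really belongs to AusPost's domain, auspost.com.au, rather than a lookalike. A minimal sketch (the lookalike URLs below are invented examples):

```python
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "auspost.com.au"  # AusPost's real registered domain

def looks_official(url: str) -> bool:
    """Check whether a link's host is the official domain or a subdomain of it.

    Lookalike hosts such as auspost.com.au.pay-fee.example fail, because
    the registered domain must come at the *end* of the hostname.
    """
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

print(looks_official("https://auspost.com.au/track"))              # True
print(looks_official("https://track.auspost.com.au"))              # True
print(looks_official("https://auspost.com.au.pay-fee.example/x"))  # False
```

Scammers rely on readers scanning the left side of a URL; checking the end of the hostname defeats the most common lookalike trick.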

The security and supervision of AI technology have become problems we must pay attention to now!

With the rapid improvement of AI technology, the existing legal and regulatory framework has struggled to keep pace. This regulatory lag leaves exploitable space for online scammers, who ignore governance restrictions and rules and steal and infringe on others' personal privacy and information through deeply immoral means.

Concerns about AI technology often centre on data security and privacy protection, which are also the core issues in digital policy and governance. Today's legal and regulatory frameworks cannot deliver secure governance, and people's personal privacy and data remain exposed on the Internet. The potential security problems brought by AI technology, such as privacy leaks, scam attacks and illegal abuse, all demand attention.

Some existing data regulations (Dooley, 2023)

The current lack of transparency, interpretability and fairness in digital policy and governance is unfair to the public. Although current regimes (such as the GDPR and the CCPA) contain mechanisms to protect personal privacy and digital rights, people can never be sure that their data and privacy will not be leaked or abused. Complete transparency in AI decision-making is an impossible goal (Crawford, 2021), and people do not know how their data will be collected and stored; most digital platforms hide their rules and auditing processes precisely because they want to stay ahead of hackers, scammers and phishing senders (Suzor, 2019).

To solve the security and supervision problems of AI technology at the root, we should work with the relevant authorities to crack down on the abuse of AI and establish a strict legal and regulatory framework that protects people's privacy and data security. In digital policy and governance, this means continually adjusting and updating the existing framework, covering AI scams comprehensively in law and regulation, improving the transparency and interpretability of AI decision-making, and emphasising the lawful and reasonable application and management of AI technology. Only then can we effectively combat and reduce the security risks caused by AI scams.

Rethinking online scams driven by AI

For scammers, AI technology is the icing on the cake of their fraudulent methods. Do you feel worried after reading this blog? With AI developing so rapidly, can the existing digital policies and governance really protect our personal privacy and data? Undoubtedly, the fraudulent use of AI poses a major challenge to digital policy and governance.

AI makes deepfakes and phishing messages more subtle and harder to detect, and scammers use machine learning and deep learning to improve their success rates, making today's defensive work ever more difficult. According to Pasquale (2015), data has become one of the most important things in people's lives: invisible and intangible, yet capable of revealing everything private about a person. In this digital age, a scammer's deepfake or phishing message can see our privacy and property abused and stolen within minutes or even seconds. In the future, AI scams may become even more personalised and refined, and thus harder to detect and discover. Please stay vigilant, because AI scams are always being updated and improved.


In the era of AI, privacy leaks are everywhere, and online scammers use AI to mount scams that are ever harder to defend against. From deep learning to deepfakes: can we really see through counterfeit faces and voices in time? From phishing messages to malicious links: will we be manipulated by scammers into surrendering our personal information? The challenge of AI scams confronts not only individuals but also digital policy and governance. What we must do now is strengthen existing network security governance, protect everyone's privacy and security, and ensure that AI technology is no longer abused by online scammers.


Andrejevic, M. (2019). Automated Media. Routledge.

Babich, V., Birge, J. R., & Hilary, G. (2022). Artificial Intelligence and Fraud Detection. In Innovative Technology at the Interface of Finance and Operations (Vol. 11, pp. 223-247). Springer International Publishing AG.

Baker-Eveleth, L., Stone, R., & Eveleth, D. (2022). Understanding social media users’ privacy-protection behaviors. Information and Computer Security, 30(3), 324-345.

Chen, H., & Magramo, K. (2024). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer.’ In CNN Wire Service. CNN Newsource Sales, Inc.

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (1st ed.). Yale University Press. 

Flew, T. (2021). Regulating platforms. Polity Press.

Gherheş, V. (2018). Artificial Intelligence: Perception, expectations, hopes and benefits. Romanian Journal of Human-Computer Interaction, 11(3), 220-231.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

Suzor, N. P. (2019). Lawless: the secret rules that govern our digital lives. Cambridge University Press.

Stojnic, T., Vatsalan, D., & Arachchilage, N. A. G. (2021). Phishing email strategies: Understanding cybercriminals’ strategies of crafting phishing emails. Security and Privacy, 4(5).

Vaithianathasamy, S. (2019). AI vs AI: fraudsters turn defensive technology into an attack tool. Computer Fraud & Security, 2019(8), 6-8.

Image reference

Australia Post. (2019, November 7). How to spot an SMS Phishing Scam. [Video]. YouTube.

Aurielaki. (2016). Hacker phishing infographic 3D flat isometric people design concept. Spam phishing attack risk threats for computer systems vector illustration. iStock.

Brandwayart. (2023). Ai Generated Hacker Computer royalty-free stock illustration. Free for use & download. Pixabay.

Boo, B. (2022). Scam Alert Cyber Attack Hack royalty-free stock illustration. Free for use & download. Pixabay.

BasuMallick, C. (2022). What Is Deepfake? Meaning, Types of Frauds, Examples, and Prevention Best Practices for 2022. Spiceworks.

Dooley, B. (2023). Navigating Data Privacy Regulations: Comparative Insights into GDPR, CCPA, LGPD, PDPA, and Privacy Act. Infocepts.

Studio, M. (2024). Robotic hand holding handset with a human profile made of audio wave. Faking voice, robocall problem, scam concept. Vector illustration. iStock.

WION. (2024, February 7). Gravitas | CFO digitally cloned, $25 million stolen using deepfakes | WION. [Video]. YouTube.

XH4D. (2022). Metaverse 3d rending. iStock.
