
Image source: https://www.springboard.com/blog/data-science/is-ai-hard-to-learn/
Today’s digital age is considered the “Fourth Industrial Revolution”: the Internet and artificial intelligence have become the mainstream of development, and algorithms increasingly shape how people communicate with others and understand the world around them (Velarde, 2020). The development of AI is best known through the brilliant apps and smart devices built by the high-tech industry, with data and algorithms at its core, underpinned by an extraordinary ability to process vast amounts of personal information.
Artificial intelligence applications can be seen everywhere: speech recognition, automated driving, AI-assisted medicine, content analysis, and more. In the interaction between humans and artificial intelligence, AI is unique in its ability to extract information from large amounts of unstructured data (Manheim & Kaplan, 2019). AI’s technological achievements promise bright prospects for human well-being, but what cannot be ignored is that AI has shown a dangerous propensity for privacy and data breaches. In addition, a series of incidents has shown that AI systems can replicate and amplify human biases, further entrenching discrimination, exacerbating social inequality, and raising public concerns about AI safety due to a lack of accountability.
ChatGPT: a new threat to data protection?

Image source: https://www.bangkokpost.com/tech/2482460/openai-creator-of-chatgpt-casts-spell-on-microsoft
ChatGPT is a natural language processing model based on artificial intelligence technology that uses large-scale neural networks trained for semantic understanding to answer user-typed questions intelligently. ChatGPT is used in a wide range of scenarios, and its user growth has been remarkable: it attracted one million users in just five days, and it now has roughly 100 million users around the world (Duarte, 2023). While everyone marveled at ChatGPT’s powerful interactive capabilities, its exposure of private data raised widespread concerns about artificial intelligence.
Samsung’s introduction of ChatGPT led to the leak of confidential data
Less than 20 days after it introduced ChatGPT, Samsung Electronics leaked confidential data, including semiconductor equipment measurement data and product yields.

Image source: https://tw.news.yahoo.com/samsung-%E4%B8%89%E6%98%9F%E5%B0%8E%E5%85%A5-chatgpt-%E4%B8%8D%E5%88%B0-20-141352144.html
According to Economist Korea (Jeong, 2023), three cases of ChatGPT misuse and abuse occurred within Samsung. In each case, internal staff asked ChatGPT questions and verified its answers, but in doing so their inputs became part of ChatGPT’s training data, resulting in the leakage of corporate secrets.
The report (Maddison, 2023) states that two of the incidents were related to semiconductor equipment and one to a meeting. In the first, an employee in the Device Solutions (DS) department copied faulty source code from the semiconductor equipment measurement database into ChatGPT and asked it for a solution; that source code thereby became part of ChatGPT’s training data. In the second, an employee of the same department entered code into ChatGPT to obtain information such as device yields and asked ChatGPT to optimize the code. In the third, an employee of the same department used ChatGPT to write up the minutes of a meeting.
Samsung is now implementing precautionary measures to prevent similar breaches. According to reports, if such incidents recur, internal access to the ChatGPT service may be shut down. Samsung has cautioned its employees to be mindful of how they use ChatGPT, warning that content entered into ChatGPT is transmitted to an external server and cannot be recalled by Samsung; if ChatGPT absorbs Samsung’s internal data in this way, sensitive information could leak.
Privacy risks caused by artificial intelligence

Image source: https://www.red-gate.com/simple-talk/development/data-science-development/the-social-impact-of-artificial-intelligence-and-data-privacy-issues/
AI is increasingly monitoring the private sphere
Scholars have pointed out that intelligent robots may provide more services for human beings in the future, such as care, companionship, conversation, and cooking (Tonkin et al., 2017), and fulfilling these functions requires access to more private data. Moreover, as individuals interact with social robots, they tend to unconsciously reveal psychological attributes that are more private than surface-level personal information (Shank et al., 2019). Based on algorithms, artificial intelligence can analyze human beings in depth through massive information collection and learning. Whether in private or public space, the everyday scenarios in which artificial intelligence operates all touch users’ personal information, including phone unlocking, mobile payment, voice navigation, photo beautification, and health management. Artificial intelligence therefore inevitably collects users’ behavioral characteristics, personal preferences, physical traits, and living habits, and then issues instructions through algorithms. In other words, the widespread use of artificial intelligence has extended surveillance into personal private space.
Artificial intelligence creates an “information cocoon”
An “information cocoon” describes a process of information transmission in which, given the public’s limited access to information, people prefer information that pleases them and, over time, confine themselves within a “cocoon” like a silkworm (Du, 2023). Technology companies collect vast amounts of information from users, such as the pages visited and the date or duration of Internet access, and use machine-learning algorithms to process it, building profiles of individual users or of groups with similar interests. In essence, personalized recommendation means that users do not actively choose information but passively accept it. For example, when consumers open a shopping app, the home page always pushes the goods or services they want to buy; when they open a music app, it always recommends the kind of music they like. Inside the “information cocoon”, Internet users are tracked across websites, personal autonomy is threatened, and personal choices are predetermined (Grafanaki, 2016). Algorithm-based personalized recommendations fill mobile applications and cater to users’ preferences, but they simultaneously undermine users’ autonomy in selecting information, which is itself considered a form of privacy violation.
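To make this feedback loop concrete, here is a minimal sketch of topic-frequency recommendation in Python. The click log, candidate items, and scoring rule are all hypothetical; real recommender systems use far richer behavioral signals and models, but the narrowing effect is the same.

```python
# Minimal sketch of how a profile-based recommender narrows what a user
# sees. All data here is hypothetical; real systems use far richer
# signals (pages visited, dwell time, purchases).
from collections import Counter

# Hypothetical click log: each past click is tagged with a topic.
click_log = ["pop-music", "pop-music", "celebrity-news", "pop-music",
             "sports", "pop-music", "celebrity-news"]

# Candidate items the platform could show next, tagged by topic.
candidates = {
    "New pop single review": "pop-music",
    "Election analysis": "politics",
    "Celebrity interview": "celebrity-news",
    "Science breakthrough": "science",
    "Match highlights": "sports",
}

# Build a simple interest profile: topic -> click frequency.
profile = Counter(click_log)
total = sum(profile.values())

# Rank candidates purely by how often the user already clicked that topic.
ranked = sorted(candidates.items(),
                key=lambda item: profile[item[1]] / total,
                reverse=True)

for title, topic in ranked:
    print(f"{title:24s} topic={topic:15s} score={profile[topic] / total:.2f}")
# Topics the user never clicked (politics, science) score 0.00 and are
# effectively filtered out: the feedback loop behind the "cocoon".
```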
More areas are reinforcing concerns about AI
AI has exhibited gender and racial biases
Human beings are a collection of biases derived from existing knowledge, ways of thinking, positions, emotions, and habits, whereas AI seems to be their exact, rational opposite. In the real world, however, AI that is created by humans to learn from human data often exhibits human-like biases, from sexism in hiring to racial misjudgment in face recognition.
As far back as 2018, it was reported that machine learning specialists at Amazon had found that their experimental AI recruiting engine discriminated against women (Dastin, 2018). When the system saw gendered words such as “women’s” in a resume, it automatically lowered the applicant’s score; after becoming aware of this systemic bias, Amazon ended the program. The engine had been trained on a decade of resumes submitted to the company, most of which came from men, a reflection of the male dominance of the tech industry, and it learned that pattern as its review criterion.
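To illustrate how such a pattern can be learned, here is a deliberately tiny, hypothetical sketch (explicitly not Amazon’s actual system) in which a text classifier trained on historically skewed hiring outcomes assigns a negative weight to a gendered token, even though the token says nothing about ability.

```python
# Toy illustration (not Amazon's actual system) of how a resume screener
# trained on historically skewed outcomes can learn a gendered signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: past resumes and whether they were hired.
# Because most past hires were men, the token "women's" co-occurs with
# rejections even though it says nothing about skill.
resumes = [
    "software engineer python java",        # hired
    "software engineer distributed systems",# hired
    "captain women's chess club python",    # rejected
    "women's coding society java",          # rejected
    "backend developer java sql",           # hired
    "women's hackathon winner python",      # rejected
]
hired = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()          # default tokenizer keeps "women", drops the trailing s
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the gendered token. A negative weight
# means the model penalizes resumes that contain it.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```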
Algorithmic bias also exists in the more widely used field of face recognition. Lensa, an AI model developed by Prisma Labs in the United States, is an application that generates virtual avatars from users’ selfies. When one journalist, an Asian woman, tried it, the avatars Lensa generated imitated the exaggerated female figures of anime and were sometimes overtly sexualized, depicting her scantily clad or even nude; when she was identified as a white woman instead, the avatars showed far less skin (Heikkilä, 2022).
The gender and racial biases of algorithms not only cause psychological discomfort but may also trigger social conflict. More importantly, they reinforce, to some extent, the intolerance and prejudice that already exist in society, hindering humanity’s pursuit of fairness, equality, and civilization.
Accountability: who is responsible for the decisions made ‘by machines’?
The debate on accountability in AI mainly boils down to who should be responsible for the results AI produces (Shin & Park, 2019). So far, there is no consensus on who is liable for the negative outcomes of artificial intelligence. As algorithmic decision-making plays a growing role in allocating limited resources, human beings increasingly rely on artificial intelligence to make decisions, and their behavior is increasingly susceptible to its influence (Shin, 2020).
Concerns about the safety of artificial intelligence are on the rise, as news reports have shown that AI-generated messages can be disturbingly seductive. Edwards (2023) suggests that advances in artificial intelligence may facilitate romance fraud: ChatGPT is thought capable of writing more convincingly than humans, which becomes an advantage for scammers.
Chai is an AI chat application released in March 2021. The software is popular in the United States and Europe because its conversations feel authentic; many users develop real emotions during their chats and come to regard the AI as a trusted friend. Pierre, a man suffering from anxiety, reportedly turned to the app for help, naming his AI companion Eliza and frequently confiding his troubles to her. In the end, Pierre took his own life, encouraged by Eliza (Atillah, 2023). It was later discovered that Eliza had never tried to stop Pierre, even when he explicitly expressed his intention to commit suicide.
Although people recognize the advantages of artificial intelligence, the number of negative incidents caused by AI and algorithms has steadily increased. AI products are supposed to be sufficiently safe before they are launched onto the market, but whether they have actually passed an adequate examination of these problems needs to be ensured by a more comprehensive and effective regulatory system.
However, AI is not entirely the enemy of human beings

Image source: https://www.adp.com/spark/articles/2020/11/ai-and-data-ethics-privacy-by-design.aspx
As artificial intelligence advances, expectations for data accuracy rise steadily. To bring intelligent devices closer to users’ needs, artificial intelligence must rely on data and algorithms to process large amounts of user data and improve the user experience. However, the large volume of data and personal information stored and analyzed by smart devices may be illegally used and leaked. Yet while the use of artificial intelligence has raised widespread privacy concerns, AI is also a powerful tool for protecting privacy. For instance, by capturing behavioral features, AI can quickly detect malware, and through machine learning it can spot abnormal network traffic in time and warn of intrusions, raising the level of network security defense. Faced with this dilemma, it is necessary to explore effective strategies that balance AI development against privacy protection.
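To make the defensive side concrete, here is a minimal sketch of machine-learning anomaly detection on network traffic, in the spirit of the paragraph above. The feature choices, numbers, and the isolation-forest model are illustrative assumptions, not a description of any particular security product.

```python
# Minimal sketch of ML-based network anomaly detection. Feature values
# are synthetic; production systems use real flow statistics (packet
# rates, ports, byte counts, timing) and more elaborate models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical traffic features: [packets/sec, mean packet size in bytes].
normal_traffic = rng.normal(loc=[100, 500], scale=[10, 50], size=(500, 2))

# Train only on traffic assumed to be benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: two ordinary flows and one flood-like burst.
new_flows = np.array([[105, 480],
                      [ 95, 520],
                      [900,  60]])  # suspiciously many tiny packets

# predict() returns 1 for inliers, -1 for anomalies to flag for review.
print(detector.predict(new_flows))  # e.g. [ 1  1 -1]
```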
Conclusion
Around the world, the privacy disclosures, information security risks, and algorithmic biases brought by artificial intelligence applications are drawing ever more attention and suspicion toward AI. Each case mentioned above raises alarm and concern about the technology’s rapid development. Recall that artificial intelligence was created to solve problems, benefit mankind, and serve mankind; yet its powerful features still create serious problems for enterprises and individuals alike. It is clear that greater regulation and scrutiny of AI is required following this series of security incidents. As with any new technology, AI comes with challenges, issues, and fears. We must prepare for the future: as these technologies permeate our lives, we can try to adapt as we always have, and steer AI toward maximizing the public good and promoting diversity and inclusiveness in human society.
References
Du, Y. R. (2023). Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of AI-Powered News App Users. Journal of Broadcasting & Electronic Media, 1-28.
Grafanaki, S. (2016). Autonomy challenges in the age of big data. Fordham Intell. Prop. Media & Ent. LJ, 27, 803.
Manheim, K., & Kaplan, L. (2019). Artificial intelligence: Risks to privacy and democracy. Yale JL & Tech., 21, 106.
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256-266.
Tonkin, M., Vitale, J., Ojha, S., Clark, J., Pfeiffer, S., Judge, W., … & Williams, M. A. (2017). Embodiment, privacy and social robots: May I remember you? In Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings 9 (pp. 506-515). Springer International Publishing.
Velarde, G. (2020). Artificial intelligence and its impact on the Fourth Industrial Revolution: A review. arXiv preprint arXiv:2011.03044.
Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277-284.
Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541-565.
Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Heikkilä, M. (2022, December 12). The viral AI avatar app Lensa undressed me—without my consent. MIT Technology Review. https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
Edwards, C. (2023, January 21). BAD ROMANCE: Warning over seductive AI that wants to catfish you and then steals your life savings. The U.S. Sun. https://www.the-sun.com/tech/7187350/warning-artificial-intelligence-romance-scam/
Jeong, D. (2023, March 30). Concerns turned into reality… As soon as Samsung Electronics unlocks ChatGPT, ‘misuse’ continues. Economist Korea. https://economist.co.kr/article/view/ecn202303300057?s=31
Maddison, L. (2023, March 30). Samsung workers made a major error by using ChatGPT. TechRadar. https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt
Duarte, F. (2023, March 31). Number of ChatGPT users (2023). Exploding Topics. https://explodingtopics.com/blog/chatgpt-users
Atillah, E. I. (2023, March 31). Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. Euronews. https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-