
Artificial Intelligence…revolutionary or just plain scary?

The term ‘Artificial Intelligence’ is widely used in today’s society. It’s considered revolutionary, the idea that technology can now be “intelligent”. Artificial Intelligence (AI) itself is where “a reliability and competence of codification can be produced which far surpasses the highest level that the unaided human expert has ever…attain” (Michie, 1978, as cited in Crawford, 2021, p. 7). In other words, AI refers to machines that can perform tasks that usually require human intelligence (Allen, 2020, p. 5). In today’s society, we typically associate AI with our friends Siri and Alexa, who respond to us humans so easily that it feels like we’re simply having a conversation. It is no longer a surprise when Siri greets you good morning or when Alexa automatically switches your lights off at a certain time. These are tasks that once required human intelligence to perform; now they can be done through AI. Another, more recent example is ChatGPT, which can write essays for you. Again, things that once required human intelligence can now easily be done through technology. The development of Artificial Intelligence has truly been a game changer, but have we stopped to consider the concerns that come with this invention? Let’s first discuss how AI actually works before diving into those concerns. We will then discuss how these concerns about AI and algorithms show up in our day-to-day lives, especially in this digital media era. You might not notice them now, but you will start to question just how much our choices, biases and decisions can be shaped by an algorithm, something that doesn’t even have human intelligence.
How AI Works
Look, let’s just admit it: we’ve all always had an idea of what AI is, but never how it actually works or how it came to be, so let’s explore that together. Essentially, Allen (2020), Chief of Strategy and Communications at the Joint Artificial Intelligence Center (JAIC), explains that AI systems are developed through various approaches, including Handcrafted Knowledge AI and Machine Learning AI (pp. 5-7). The difference between the two lies in where the AI gets its knowledge: Handcrafted Knowledge AI is built with the help of people or users, while Machine Learning AI learns from data and examples that already exist instead of being explicitly programmed (Allen, 2020, pp. 5-7). To further understand the difference, here are some examples of the two.
Handcrafted Knowledge AI (Allen, 2020, p. 6)
- Tax preparation software: requires users to enter their tax information in set data formats, which the software then processes
- Deep Blue: a chess-playing AI developed by software engineers together with several chess grandmasters, which beat the human world chess champion in 1997
Machine Learning AI (Donnery, 2021)
- Image recognition: assigning a label to what’s in a picture
- Speech recognition: Alexa knowing who’s talking
Through these examples, we can clearly see the difference between the two approaches: the first needed help from humans to develop its “intelligence”, while the second develops its intelligence from data that already exists. Allen (2020) notes that we interact more often with Machine Learning AI (p. 7), so let’s look more closely at how it works.
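First, to make the contrast between the two approaches concrete, here is a minimal, hypothetical Python sketch. The first function is a handcrafted rule written by a person (the made-up thresholds stand in for the kind of logic a tax program encodes); the second derives its own rule from example data. None of the names or numbers come from a real system; they are assumptions for illustration only.

```python
# Handcrafted Knowledge AI: a person writes the rules explicitly.
# (Illustrative tax-bracket rule; the thresholds are made up.)
def handcrafted_tax_rate(income):
    if income <= 20000:
        return 0.10
    elif income <= 80000:
        return 0.25
    return 0.40

# Machine Learning AI: the "rule" is derived from example data instead.
# Here we fit a single threshold that best separates two labelled groups.
def learn_threshold(examples):
    """examples: list of (value, label) pairs with labels 0 or 1."""
    best_threshold, best_correct = None, -1
    for threshold in sorted(v for v, _ in examples):
        correct = sum((v > threshold) == bool(label) for v, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# The "model" learns where the boundary is purely from the examples.
data = [(5, 0), (12, 0), (20, 0), (35, 1), (50, 1), (61, 1)]
print(learn_threshold(data))  # prints 20: the boundary learned from the data
```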
For Machine Learning AI, data is the most important commodity in the creation process. These systems are built from data, algorithms and computing hardware, but having the right data tends to be the key (Allen, 2020, p. 7). Using the speech recognition example above, let’s break it down. For those of you who own an Amazon Alexa, remember when you first had to state your name and speak random sentences to Alexa? That becomes the data that is gathered and studied, so whenever Alexa hears that voice, it knows it’s talking to you instead of your sibling or housemate. Alexa can also gather data on your housemate by getting them to do the same thing. Now, Alexa is intelligent enough to tell the difference between the two voices. How cool is that? The machine is basically learning by itself through a pre-written set of code that humans created.
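As a rough, hypothetical sketch of what “learning your voice” could look like under the hood (this is not Amazon’s actual method), the toy Python below reduces each voice sample to a couple of made-up features, averages them per speaker, and then matches a new sample to the closest speaker. All feature names and numbers are invented.

```python
# Toy speaker identification: learn from labelled examples, then classify.
# Each "voice sample" is reduced to two made-up features (say, average
# pitch and speaking rate); real systems use far richer representations.

def train(samples):
    """samples: list of (speaker_name, [feature, ...]) pairs.
    Returns the average feature vector per speaker."""
    totals, counts = {}, {}
    for name, features in samples:
        acc = totals.setdefault(name, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[name] = counts.get(name, 0) + 1
    return {name: [v / counts[name] for v in acc] for name, acc in totals.items()}

def identify(model, features):
    """Return the speaker whose average profile is closest to the new sample."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda name: distance(model[name], features))

# Enrolment: the sentences you spoke when setting up the device.
enrolment = [
    ("you",       [210.0, 4.1]),
    ("you",       [205.0, 4.3]),
    ("housemate", [120.0, 3.2]),
    ("housemate", [125.0, 3.0]),
]
model = train(enrolment)
print(identify(model, [208.0, 4.2]))  # -> "you"
```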
Now, picture Alexa learning your voice, your housemate’s voice and your brother’s voice, and, because you’re not the only person in the world who owns an Amazon Alexa, picture your neighbor’s Alexa learning their voice, their children’s voices, their friends’ voices, and so on. And it’s not just voices: Alexa also knows your address and can tell you when you’ve received a package; it can even know what kind of package it is, assuming you connect your email or buy your things from Amazon. That’s A LOT of data that Amazon now holds. A lot of data that it can then commodify and sell to third parties who might want it, such as shopping sites that need information on their target audience, which might just happen to be you!

Accessing YOUR Data
For artificial intelligence to work properly, it needs HEAPS of data. We’re talking an almost endless amount of data, and two problems can arise from this, both of which involve invading YOUR privacy.
Now, as discussed above, for Machine Learning AI to work, it has to be provided with a set of data it can generate a “solution” from, which means that we, as data givers, are giving our data away for free. Let’s take TikTok and your For You Page (FYP) as an example. Have you ever noticed just how accurate your FYP is? How it just knows the type of content you’ll enjoy? This is because TikTok has a powerful algorithm that analyzes user preferences through various signals, such as the videos a user has liked and the dwell time on a certain video (Yan & Zhang, 2019, p. 61). TikTok also uses machine learning algorithms that analyze facial features, products, and traits of people and objects in a video to understand the content and optimize categorization (Mage, 2022). This helps TikTok further understand the different categories its users, meaning you, are into. TikTok also states in its privacy policy that the app collects user location data, down to a granularity of about three square kilometres (Dowling, 2023), which is why, when you are traveling, you also see content more niche to that particular city or country. Along with location data, TikTok is also able to collect user contact lists, access calendars and scan hard drives on an hourly basis (Touma, 2022). These are things users usually “willingly” provide, because access is either part of the terms and conditions they must accept in order to use the app, or it is disguised as a way to connect you with friends who are also on TikTok when the app asks to access your contacts. So now all your private data, including your name, birth date, location and contacts, is available to TikTok, and I think we already know how risky it could be if location data were exposed and used by other parties to track users.
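TikTok has not published its actual ranking code, but a toy sketch of engagement-based ranking, using the kinds of signals mentioned above (likes and dwell time), might look like the Python below. The topics, weights and numbers are invented assumptions, not TikTok’s real system.

```python
# Toy engagement-based ranking: score candidate videos for one user
# from signals like past likes per topic and average dwell time.
# The topic labels, weights and numbers are invented for illustration.

user_profile = {
    "likes_per_topic": {"cooking": 14, "politics": 1, "travel": 6},
    "avg_dwell_seconds_per_topic": {"cooking": 22.0, "politics": 3.5, "travel": 11.0},
}

candidates = [
    {"id": "v1", "topic": "cooking"},
    {"id": "v2", "topic": "politics"},
    {"id": "v3", "topic": "travel"},
]

def score(video, profile, like_weight=1.0, dwell_weight=0.5):
    """Combine a user's past engagement with a video's topic into one score."""
    topic = video["topic"]
    likes = profile["likes_per_topic"].get(topic, 0)
    dwell = profile["avg_dwell_seconds_per_topic"].get(topic, 0.0)
    return like_weight * likes + dwell_weight * dwell

ranked = sorted(candidates, key=lambda v: score(v, user_profile), reverse=True)
print([v["id"] for v in ranked])  # ['v1', 'v3', 'v2']: cooking first, politics last
```

Notice that nothing in this sketch needs to “understand” the videos; the ranking falls straight out of the data you hand over.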
Moving on to the second consequence of giving away your data: what do you think all this data is being used for, other than creating a curated For You Page? If you thought it’s being sold, you are on the right track! This is called data commodification: massive amounts of data collected on an individual are traded between data brokers and purchased by businesses to better understand consumer needs (Bridge et al., 2021, p. 361). So now, not only does TikTok have a massive amount of your data, other companies and businesses do too! More often than not, your data is commodified and sold to third parties for marketing purposes such as targeted advertisements. Businesses value consumer needs because meeting those needs brings profit; however, an even scarier use of these targeted ads is the “radicalization of vulnerable people…to change one’s opinion of politics” (Bridge et al., 2021, p. 376). Isn’t that scary? Now, knowing that apps such as TikTok essentially control our narrative through curated FYPs and targeted ads, have you stopped to think how much the algorithm, or artificial intelligence, might have to do with what we see online? Controlling the videos or content we’re exposed to? Prioritizing one type of content over another?
TikTok and Algorithmic Bias

Let’s stick with TikTok as an example, since I feel it’s one of the most used social media platforms of this generation. Back in 2020, during the Trump and Biden election, I remember watching videos on my FYP of Black creators raising concerns that they had been shadowbanned, or had their videos removed, for educating people about the election or speaking about Black Lives Matter (BLM). I also remember some heavily right-wing political content popping up on my FYP when I knew for sure its values didn’t align with mine, so what was happening here? It struck me as strange, because TikTok’s algorithm had always been very accurate before the election, so why had it suddenly changed? In 2021, Media Matters for America published a report showing that TikTok’s algorithm did push users towards accounts with far-right values and increasingly favored far-right content (Binder, 2021). Around the same time, creators of color posting BLM content were having their videos taken down, muted or hidden from their followers (McCluskey, 2020).
Now, as we’ve discussed above, the “intelligence” of AI is made: it is highly formalized through various processes that require heaps of data (Crawford, 2021, p. 6). We tend to assume that AI is intelligent and unbiased because it acts on rational, existing data, but have we stopped to think about WHO wrote the code for a particular AI to work? WHO programmed the machine? What kinds of data are fed to the system? Algorithms might not be as neutral as we thought, considering that PEOPLE are quite literally the programmers of these AI systems, and we know for sure that people can carry biases. This is what’s known as algorithmic bias. Heilweil (2020) wrote an article on how algorithms can be racist and sexist because of the set of data they are fed. One study showed that when an AI was “taught” from the internet, the system automatically developed biases against Black people and women. Even though engineers who check for bias exist, it’s also important to understand that this is an extremely male-dominated field, and therefore, however unbiased a program claims to be, it can still carry the prejudices of its creators. So stop and think: what kind of content are you being pushed to consume? Are particular types of content prioritized over others? What do you think this says about AI? Could it be a new form of control over us?
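To see how bias can creep in even when nobody writes a “be biased” rule, here is a deliberately simplified, hypothetical Python sketch: a “model” that only learns frequencies from skewed historical decisions ends up reproducing that skew in its recommendations. The groups, records and threshold are all fabricated for illustration.

```python
# Toy illustration of algorithmic bias: a frequency-based "model" trained on
# skewed historical decisions reproduces the skew. All data is fabricated.

historical_decisions = [
    # (group, was_promoted_in_the_past)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train_rates(records):
    """Learn the past promotion rate for each group."""
    totals, positives = {}, {}
    for group, promoted in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(promoted)
    return {g: positives[g] / totals[g] for g in totals}

def recommend(rates, group, threshold=0.5):
    """Recommend promotion if the learned rate for that group is high enough."""
    return rates.get(group, 0.0) >= threshold

rates = train_rates(historical_decisions)
print(rates)                        # {'group_a': 0.75, 'group_b': 0.25}
print(recommend(rates, "group_a"))  # True  -> the old skew is repeated
print(recommend(rates, "group_b"))  # False
```

Nobody typed anything prejudiced here; the skew came entirely from the data the system was trained on.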
Basically…
I’m not here to scare you about mindlessly scrolling through your FYP, nor am I trying to stop you from using AI altogether. I love my Amazon Echo; the fact that I can simply tell Alexa to play Taylor Swift while I cook, without having to get my slimy hands on my phone to press play, is good enough for me. What I’m trying to say is that despite the amazing technological growth AI has brought, it’s important to know just HOW MUCH it actually demands from us to work, including HEAPS of data collected in ways that can violate our right to privacy. On top of that, it’s important to understand that artificial intelligence isn’t at all neutral. Somewhere out there a white male programmer is creating his own version of AI and claiming it’s neutral, but no one can really be neutral; we all have subconscious prejudices or biases that can be reflected in our work, including mine here. Moral of the story: I love my Amazon Echo, and TikTok keeps me scrolling for ages, but now I’m also more aware of the content being pushed onto my feed, and I’ve started to question why Amazon keeps asking me if I’d like to make other kitchenware purchases.
References
Allen, G. (2020). Understanding AI Technology [Report]. Retrieved April 6, 2023, from
https://apps.dtic.mil/sti/pdfs/AD1099286.pdf.
Binder, M. (2021, October 28). TikTok’s algorithm is sending users down a far-right extremist
rabbit hole. Mashable. Retrieved April 6, 2023, from https://mashable.com/article/tiktok-recommendations-far-right-wing
Bridge, J., Kendzierskyj, S., McCarthy, J., & Jahankhani, H. (2021). Commodification of
consumer privacy and the risk of data mining exposure. Strategy, Leadership, and AI in the Cyber Ecosystem, 361–380. https://doi.org/10.1016/b978-0-12-821442-8.00009-4
Crawford, K. (2021). Introduction. In Atlas of AI: Power, politics, and the planetary costs of
Artificial Intelligence (pp. 1–21). Yale University Press. Retrieved April 6, 2023, from https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300252392.
Donnery, L. (2021, September 9). Machine learning: 6 real-world examples. Salesforce.
Retrieved April 6, 2023, from https://www.salesforce.com/eu/blog/2020/06/real-world-examples-of-machine-learning.html#:~:text=1.,white%20images%20or%20colour%20images
Dowling, B. (2023, March 15). TikTok bans: What the evidence says about security and privacy
concerns. The Conversation. Retrieved April 6, 2023, from https://theconversation.com/tiktok-bans-what-the-evidence-says-about-security-and-privacy-concerns-200608
Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Vox. Retrieved April
6, 2023, from https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency
Mage. (2022, February 17). How does Tiktok use machine learning? DEV Community.
Retrieved April 6, 2023, from https://dev.to/mage_ai/how-does-tiktok-use-machine-learning-5b7i
McCluskey, M. (2020, July 22). These TikTok Creators Say They’re Still Being Suppressed for
Posting Black Lives Matter Content. Time. Retrieved April 6, 2023, from
https://time.com/5863350/tiktok-black-creators/
Touma, R. (2022, July 19). TikTok has been accused of ‘aggressive’ data harvesting. Is your
information at risk? The Guardian. Retrieved April 6, 2023, from https://www.theguardian.com/technology/2022/jul/19/tiktok-has-been-accused-of-aggressive-data-harvesting-is-your-information-at-risk
Yan, X., & Zhang, Z. (2019). Research on the Causes of the “Tik Tok” App Becoming Popular and the Existing Problems. Journal of Advanced Management Science, 7(2), 59–63.