
Yunke (Susan) Zhang
SID: 490044944
Imagine this:
You start your day as usual by opening your favourite music app on your phone and hitting play on a recommended track. You like the song so much that you tap the like button and add it to your personal playlist.
Just then, a notification from Instagram pops up, suggesting a user you might know. You unlock your phone, and although you don’t recognise the person, you follow them anyway, since Instagram tells you that you have a few mutual friends.
During your lunch break, you browse Facebook reels and short-video recommendations and stumble upon a clip about fitness. It inspires you, and you think maybe you should start exercising too, you know, just to achieve that fit and energetic state.
So you hop onto Amazon wanting to buy a pair of dumbbells. You spend quite a while browsing but can’t decide which pair to get. When you return home and continue shopping, you notice that Amazon not only remembers the products you viewed during your lunch break but also kindly recommends other fitness products for you.
The story above might not match your daily routine exactly, but these moments will certainly feel familiar.
According to Statista, there are 6.9 billion smartphone users worldwide, equivalent to 86% of the world’s population. Interestingly, it is hard to name a popular smartphone activity that does not involve algorithms (Ericsson, 2022; Statista, 2023).

Among countless moments like these, have you ever wondered why your phone seems to see right through you?
Or let’s broaden the question a little:
Why do the websites and platforms you engage with appear to know you so well? What role do they play in your daily decision-making process?
Algorithms are changing our lives. But what are they?
The prevalence of the Internet and mobile devices (e.g. computers and smartphones), along with increasingly sophisticated capabilities for processing large amounts of data, has brought about the era of Big Data (Dean, 2014; Flew, 2014). One of the most significant features of this age is that people’s behaviour and information can be recorded and collected in the form of data, then categorised and correlated for further analysis to generate predictive outcomes. Throughout this process, algorithms serve as the instructions and rules that define how sets of data are processed to obtain the results a given task requires (Flew, 2021).

Let’s take Netflix as an example to see how algorithms work:
[Diagram: a simplified view of Netflix’s recommendation loop, in which viewing behaviour is collected as data, analysed by the recommender system, and returned as personalised suggestions]
Of course, Netflix’s real recommendation system is far more complex than the diagram above suggests. As a platform, Netflix has more than just basic recommendation tasks to execute. For instance, when you land on its homepage, you will probably see a large video image at the top, with sections such as “continue watching” and “popular on Netflix” below. The algorithms serving these features all differ from one another (Steck et al., 2021).
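To make that loop concrete, below is a deliberately tiny sketch in Python. The catalogue, the genres, and the simple counting rule are all invented for illustration; Netflix’s actual system relies on far richer signals and machine-learned models (Steck et al., 2021).

```python
# A toy sketch of a recommendation step. This is NOT Netflix's actual
# algorithm; titles, genres and the counting logic are purely illustrative.
from collections import Counter

CATALOGUE = {
    "Stranger Things": "sci-fi",
    "Black Mirror": "sci-fi",
    "The Crown": "drama",
    "Our Planet": "documentary",
    "Dark": "sci-fi",
}

def recommend(watch_history, n=2):
    """Recommend unseen titles from the genres this user watches most."""
    # Step 1: behaviour becomes data - count which genres were watched.
    genre_counts = Counter(CATALOGUE[title] for title in watch_history)
    # Step 2: rank unseen titles by the user's affinity for their genre.
    unseen = [t for t in CATALOGUE if t not in watch_history]
    ranked = sorted(unseen, key=lambda t: genre_counts[CATALOGUE[t]], reverse=True)
    # Step 3: the predictive outcome - the top-n personalised suggestions.
    return ranked[:n]

print(recommend(["Stranger Things", "Dark"]))
# -> ['Black Mirror', 'The Crown']
```

Even at this toy scale, the core pattern of the Big Data era is visible: behaviour becomes data, data becomes a ranking, and the ranking becomes what you see next.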

Algorithms are merely a technical tool? Stop thinking that way!
It is reasonable to say that algorithms are already an inseparable part of this automated society; without them, we would have to process and analyse these massive amounts of data manually (Andrejevic, 2019). The existence of algorithms makes it seem as if we have the power to summon the vast amounts of data in the world and make them work for us. Just take the apps on our smartphones as an example. Open up the app store and you will soon realise that it offers functionality covering nearly every aspect of our lives: watching videos and listening to music we might like, discovering interesting new products on shopping websites, or meeting people nearby through social media.
With so many conveniences, it is not easy to examine the effects of algorithms objectively and critically. We use them so fluently that they blend seamlessly into our lives, like tools, helping us make more efficient choices every day. But is this really the case?
The answer is “no.”
While the idea that “algorithms are just neutral technology” is widespread and attributes algorithmic output solely to user input, the process of generating algorithmic recommendations is not as simple as chopping up the data we feed in and tossing it together like salad leaves (Pasquale, 2015).
Algorithms are far more complex than the general public realises. For years, scholars have worked to break through the barriers of technical complexity and secrecy to open the black box of algorithms.
For example, only a few years ago, Google’s search engine still prioritised search results carrying racist signals, catering to prevalent public stereotypes of minority groups (Noble, 2018).

The filter bubble effect is considered another often-unnoticed impact of algorithms (Just & Latzer, 2016). Through personalisation, platforms create an isolated chamber around each user by recommending only content that aligns with their existing opinions or favoured topics. As the information people receive becomes monotonous, their values and consciousness shift as well, creating new trends in society. One example is the prevalence of “body ideals” on social media and the gradual diffusion of body dissatisfaction (Beckett, 2022). By continually feeding users related content, the algorithm directly fuels rising interest in photo editing and cosmetic surgery.
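To see how quickly this narrowing can happen, here is a deliberately simplified sketch. The topics and starting counts are hypothetical, and the rule “always surface the most-clicked topic” is a caricature of real personalisation, but the feedback loop it produces is the filter bubble in miniature.

```python
# A minimal simulation of the filter-bubble feedback loop described above.
# Hypothetical topics; the starting counts give "fitness" a slight edge.
clicks = {"fitness": 2, "cooking": 1, "politics": 1, "travel": 1}

feed = []
for _ in range(10):
    # Personalisation: surface the topic with the most past clicks.
    recommended = max(clicks, key=clicks.get)
    feed.append(recommended)
    # Users tend to click what they are shown, reinforcing the ranking.
    clicks[recommended] += 1

print(feed)
# -> ['fitness', 'fitness', ...]: the feed collapses to a single topic
```

Real recommenders mix in exploration and many other signals, but the underlying reinforcement dynamic, where clicks shape recommendations which in turn shape clicks, is the same.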
In addition, several studies have found that many popular platforms use algorithmic mechanisms grounded in cognitive psychology (e.g. the mere-exposure effect, the Zeigarnik effect) to encourage users to spend more time on the platform (Martin, 2021; Montag et al., 2019). Siles and his colleagues (2019) referred to the relationship between users and algorithmic recommendation systems as “mutual domestication”: while we train our platforms by feeding them data about our preferences, the systems also influence us, specifically by encouraging or inhibiting certain actions and decisions.
“If something is a tool, it genuinely is just sitting there, waiting patiently. If something is not a tool, it’s demanding things from you. It’s seducing you, it’s manipulating you, it wants things from you. We’ve moved away from a tools-based technology environment to an addiction- and manipulation-based technology environment. Social media isn’t a tool waiting to be used. It has its own goals, and it has its own means of pursuing them by using your psychology against you.”
Tristan Harris, The Social Dilemma
Many similar events and phenomena have been exposed, leading people to doubt the neutrality of algorithms. Yet research shows that people still tend to view algorithms as mere instruments and to underestimate algorithms’ impact on themselves, particularly those who know less about how algorithms work (Dogruel et al., 2022; Martens et al., 2023).
Looking into the outcomes of algorithms: the example of Cambridge Analytica

Flew (2021, pp. 84-85) categorised the challenges algorithms pose to modern society into the following groups:
- Legal and ethical issues
- Bias, fairness and transparency
- Accountability
- Governance of data and privacy
- Impact on human behaviour
This classification provides a relatively comprehensive framework for considering the various effects of algorithms.
Take the 2016 U.S. election as an example. Cambridge Analytica was accused of illegally misappropriating the data of a large number of Facebook users to develop targeted advertising intended to influence their voting decisions (Detrow, 2018). The collected data included, but was not limited to, users’ favourite groups and pages, their interpersonal relationships, location, and education. Cambridge Analytica gathered this information and applied psychographic profiling algorithms to analyse it; based on the results, it served users advertisements aimed at their psychology and values. For example, a Republican supporter who, according to the analysis, scored high on neuroticism and conscientiousness might see an ad promoting gun rights as a way to protect themselves and their families (Nix, 2016).
There is no direct evidence that Cambridge Analytica played a significant role in the election outcome. But its algorithmic methods, and their potential to influence group behaviour, are worth examining.
The premise of an election we have all consented to is that citizens vote autonomously for the candidates they wish to support. However, influencing voters’ mindsets, and thus shaping their behaviour without their knowledge, infringes on their autonomy and privacy, as well as on the fairness of the election (Isaak & Hanna, 2018). Moreover, a decision made under the emotional stimulus of advertising can hardly be counted as a rational one. Throughout the process, we can also see a starkly unequal power relation between users and the institutions that control their data: we are easily seen through, yet we know little about those powerful agencies (Pasquale, 2015).
In his speech, Nix explained the OCEAN model used by Cambridge Analytica. This algorithmic model analyses user data to create a personality profile for each individual user, and, as Nix himself claimed, it has huge potential to influence society, from markets to political elections. It means that platforms and institutions can easily generate messages that trigger the expected emotions or behaviours in users, and users may not even be able to tell whether these messages genuinely serve their interests or merely try to steer their decision-making towards the goals of those powerful entities (Just & Latzer, 2016). To most people, algorithms are like a black box: we know they exist, but we don’t know what’s inside. This combination of extremely low transparency and high complexity is precisely what enables algorithms to escape supervision by citizens and governments, and it makes their ethical implications even harder to verify (Pasquale, 2015).
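As an illustration of the idea only, and not of Cambridge Analytica’s actual proprietary models, here is a hedged sketch of psychographic targeting. The trait names are the real Big Five (OCEAN), but the profile scores, thresholds, and ad copy are entirely hypothetical.

```python
# An illustrative sketch of psychographic ad targeting in the spirit of
# the OCEAN model Nix describes. The scores, rules and ad copy below are
# made up for illustration only.

# A hypothetical user profile: each Big Five trait scored from 0.0 to 1.0.
profile = {
    "openness": 0.2,
    "conscientiousness": 0.9,
    "extraversion": 0.4,
    "agreeableness": 0.5,
    "neuroticism": 0.8,
}

def pick_ad(ocean):
    """Choose the ad framing predicted to resonate with this personality."""
    if ocean["neuroticism"] > 0.7 and ocean["conscientiousness"] > 0.7:
        # Fear- and duty-based framing for anxious, dutiful users
        # (echoing the gun-rights example from Nix's talk).
        return "Protect your family. Defend your rights."
    if ocean["openness"] > 0.7:
        # Novelty-based framing for highly open users.
        return "Discover a new way of thinking."
    # Default: social-proof framing.
    return "Join millions of people like you."

print(pick_ad(profile))
# -> Protect your family. Defend your rights.
```

The unsettling part is not the simplicity of this sketch but the asymmetry it hints at: the targeted user never sees the profile, the rule, or the alternative messages shown to others.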
What about regulation? Who should be held accountable? Facing the secrecy and complexity of algorithms, there is no perfect answer to these questions. The law tends to protect whatever can generate profit, including platform algorithms (Flew, 2021). In this scandal, the Federal Trade Commission was only able to classify what Cambridge Analytica did as unfair and deceptive business practices (Hu, 2020). As for Facebook, the platform was required to obtain user consent before disclosing information to third parties. But is that really effective? The terms users consent to are often lengthy and confusing, so it is hard to know how well users understand the rights they have just granted to the platform when they tick those boxes. And since refusing means being denied access to the service, users have little say in how these terms are formulated (Suzor, 2019).
Conclusion: What can we do as users to preserve our self-awareness and autonomy when we interact with the results of algorithms?
The discussion above focuses mainly on the personal level without diving deeply into algorithms’ impact on society. It is clear, however, that when applied at a large scale, algorithms can change the way people perceive the world, thereby creating new social orders (Just & Latzer, 2016).
The ultimate goal of this blog is not to oppose algorithms but to call for a critical perspective on their influence over our mindsets and behaviour. So, on a personal level, what can we do? First and foremost, we can develop digital and information literacies (Andrejevic, 2019; Jones-Jang et al., 2021). These entail not only knowing how to use digital platforms and tools but also the awareness needed to make sensible decisions within platform communities. We can also objectively and critically assess the information clusters algorithms bring to us, for example by recognising biased or false information. Continually underestimating the impact of algorithms is a major factor that exacerbates it (Hinds et al., 2020). Acknowledging their existence and influence is the crucial first step for all of us.
References:
Andrejevic, M. (2019). Automated culture. In Automated media (pp. 44-72). Routledge.
Beckett, S. (2022). Social media-related body dissatisfaction, beauty ideal internalization, and state self-objectification. Journal of Research in Gender Studies, 12(1), 69-83. https://doi.org/10.22381/JRGS12120225
Dean, J. (2014). Big data, data mining, and machine learning: Value creation for business leaders and practitioners. Wiley.
Detrow, S. (2018, March 20). What did Cambridge Analytica do during the 2016 election? NPR. https://www.npr.org/2018/03/20/595338116/what-did-cambridge-analytica-do-during-the-2016-election
Dogruel, L., Facciorusso, D., & Stark, B. (2022). ‘I’m still the master of the machine.’ Internet users’ awareness of algorithmic decision-making and their perception of its effect on their autonomy. Information, Communication & Society, 25(9), 1311-1332. https://doi.org/10.1080/1369118X.2020.1863999
Ericsson. (2022). Number of smartphone mobile network subscriptions worldwide from 2016 to 2022, with forecasts from 2023 to 2028 (in millions) [Graph]. In Statista.
Flew, T. (2014). Twenty key concepts in new media. In New media (4th ed., pp. 18-36). Oxford University Press.
Flew, T. (2021). Regulating platforms. Polity Press.
Hinds, J., Williams, E. J., & Joinson, A. N. (2020). “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, 102498. https://doi.org/10.1016/j.ijhcs.2020.102498
Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2), 205395172093809. https://doi.org/10.1177/2053951720938091
Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56-59. https://doi.org/10.1109/MC.2018.3191268
Jones-Jang, S. M., Mortensen, T., & Liu, J. (2021). Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. American Behavioral Scientist, 65(2), 371-388. https://doi.org/10.1177/0002764219869406
Just, N., & Latzer, M. (2016). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258. https://doi.org/10.1177/0163443716643157
Martens, M., De Wolf, R., Berendt, B., & De Marez, L. (2023). Decoding algorithms: Exploring end-users’ mental models of the inner workings of algorithmic news recommenders. Digital Journalism, 11(1), 203-225. https://doi.org/10.1080/21670811.2022.2129402
Martin, E. (2021). Experiments of the mind: From the cognitive psychology lab to the world of Facebook and Twitter. Princeton University Press. https://doi.org/10.1515/9780691232072
Montag, C., Lachmann, B., Herrlich, M., & Zweig, K. (2019). Addictive features of social media/messenger platforms and freemium games against the background of psychological and economic theories. International Journal of Environmental Research and Public Health, 16(14), 2612. https://doi.org/10.3390/ijerph16142612
Nix, A. (2016, September 28). The power of big data and psychographics | 2016 Concordia Annual Summit [Video]. YouTube. https://www.youtube.com/watch?v=n8Dd5aVXLCc&ab_channel=Concordia
Noble, S. U. (2018). A society, searching. In Algorithms of oppression: How search engines reinforce racism (pp. 15-63). New York University Press.
Pasquale, F. (2015). Introduction: The need to know. In The black box society: The secret algorithms that control money and information (pp. 1-18). Harvard University Press.
Siles, I., Espinoza-Rojas, J., Naranjo, A., & Tristán, M. F. (2019). The Mutual Domestication of Users and Algorithmic Recommendations on Netflix. Communication, Culture and Critique, 12(4), 499-518. https://doi.org/10.1093/ccc/tcz025
Statista. (2023). Mobile Internet usage worldwide. Statista. https://www-statista-com.ezproxy.library.sydney.edu.au/download/MTY4MDg2NTEzOSMjMzE1MTg3NSMjMjEzOTEjIzEjI3BkZiMjU3R1ZHk=
Steck, H., Baltrunas, L., Elahi, E., Liang, D., Raimond, Y., & Basilico, J. (2021). Deep learning for recommender systems: A Netflix case study. AI Magazine, 42(3), 7-18. https://doi.org/10.1609/aaai.12013
Suzor, N. P. (2019). ‘Who Makes the Rules?’. In Lawless: the secret rules that govern our lives (pp. 10-24). Cambridge University Press.