In today’s digital age, the internet is an essential tool ingrained in the habits of billions of people. As society modernises, our reliance on the internet for daily tasks and activities has only increased. For me, waking up in the morning involves unlocking my phone with a face scanner, then taking photos of my breakfast whilst curating the perfect upload from my latest adventure to the beach, newest purchases and memories with friends.
There is a lot of information uploaded in that short ten-minute browse on my phone whilst I eat my food. Oftentimes we dismiss concerns about the security and privacy of the data we upload to the internet. That is, until the bi-annual data breach reaches the newspapers and becomes the hottest topic for a week. From TikTok being banned on Australian Government devices over claims of spyware, to the Optus data breach, there is no shortage of examples of data security being compromised, and citizens, businesses and governments alike have raised concerns about the ethics and limits of data collection.
With the internet being a system where every action is tracked and analysed, we assume that any information about us and our actions remains with the platform, secure and out of sight of people who should not have access to it. Personal data falling into the wrong hands can result in financial loss, identity theft and the compromise of sensitive information, all of which risk damage to reputation, whether for an individual, organisation or government (Chen, Beaudoin & Hong, 2017).
Privacy is seen as one of the fundamental rights and freedoms of individuals, with Article 12 of the Universal Declaration of Human Rights (1948) stating: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks”. However, when privacy issues are translated to internet activities, individuals disregard many online privacy behaviours compared with offline privacy behaviours (Barnes, 2006). In other words, “in online contexts we don’t lock our doors” (Bartsch & Dienlin, 2016).
Data privacy, whilst a universally agreed upon human right, is subjective and cannot be handled with a one-size-fits-all approach. Different platforms have different purposes, and thus differing standards of privacy. This blog post will focus on how platforms balance privacy with the user experience of their platforms.
As platforms entangle users’ data into the online experience they create (Cassanova, 2018), I will outline how simply following the instructions posed to the user when they create an account can result in online harms. I will then analyse the privacy laws that confirm the legality of these practices: platforms regularly push the boundaries of privacy law in an attempt to create unique interface experiences and to earn income through personalised adverts. I will conclude with the importance of online privacy literacy.
Now, you are not required to share personal information with Facebook. However, due to the nature of social media, the experience is far more underwhelming when you try to remain hidden and decide not to share information like the city you live in, the schools you attended and your favourite restaurants.
You might miss out on information like the next big sporting game, an upcoming high school reunion, or the new ‘happy hour’ menu. The problem arises when this information comes from unsuspecting people who may not be aware of the scope and publicity of their online actions; this opens opportunities for others to take advantage of freely available information.
For instance, a common ‘secret question’ to recover a forgotten password is ‘What is your mother’s maiden name?’ or ‘What is the first suburb you lived in?’ These are the same questions Facebook asks when you first set up an account.
Facebook has a section that allows any former names to be publicly added to your profile, alongside your place of birth and the names of family members, all publicly visible. Obviously this is not mandatory, and there are settings that restrict who can see this information, but for many users it goes unnoticed, as hiding these answers is not the default option. This leaves personal information exposed to people with malicious intent, who may need only the answer to a secret question to reset someone else’s password.
The same applies to Spotify: perhaps your ‘secret question’ is a favourite song or favourite artist. By default, Spotify allows followers to see what track you are listening to, as well as your playlists and most-played artists. With no approval system for Spotify followers, there is potential for personal information to be used against an individual as a result of the user simply following the interface of the platform. This is especially true for new internet users who lack the digital literacy to adjust a platform’s privacy settings to their liking.
And all of this is legal because you simply pressed ‘accept’ on the terms and conditions, right? Perhaps. It is true that in order to use a platform, the user must agree to its terms and conditions (Australian Competition and Consumer Commission, 2018). Yet I honestly can’t say I have ever read the terms and conditions for a platform, and I don’t think I’m the only one. Terms and conditions are lengthy, written in dense legal language and small text. This makes them difficult to understand, which in turn makes people less likely to read and comprehend the terms of service (Flew, 2021). This is an intentional design feature, one that serves companies’ legal obligations rather than protecting the privacy of users.
So you’re saying that if I agree to a platform’s terms of service and simply follow the instructions posed to me by the interface, my data could be leaked? Well, probably not. If you can read this far into this blog post and understand the terminology used, don’t worry: there is a good chance your internet literacy is sufficient to change your privacy settings and to realise that posting photos of your credit card is not very smart.
However, as we discussed before, privacy is subjective, and everyone’s definition of privacy has been shaped by personal experience. These platforms have, for the most part, been coded and constructed by university-educated males in developed, democratic nations, and their own conceptions of privacy have been set as the universal standard for the platform, despite the wide range of jurisdictions in which platforms are available (Marwick & boyd, 2019). Placing the onus on the user to understand privacy terms and conditions whilst subjecting them to Western understandings of privacy can lead to negative outcomes.
For example, suppose a platform’s terms of service state that all content a user produces is publicly available for anyone to see, because in the platform’s country of origin, freedom of speech laws allow anything besides hate speech and threats of violence. However, in many nations, criticising the government is illegal, and in 2018, Turkish police detained human rights activist Nurcan Baysal for tweets condemning the Turkish government’s military incursion in the Syrian enclave of Afrin (Sinclair-Webb, 2018). I am not saying that Baysal is at fault and deserved to be arrested, nor that Twitter is at fault for not specifying the legality of tweets in Turkey. Rather, I am focusing on the need for: A) more specific and concise descriptions of a platform’s terms of service that guarantee informed consent (for example, through a short quiz); and B) increased digital literacy among vulnerable populations and people who have only recently joined social media platforms.
So why is there an imbalance between privacy protection and user information sharing? Like most things, it boils down to money. Simply put, the more a platform can algorithmically tailor its content to a user’s interests, tastes and personal beliefs, the more time that person is likely to spend engaging with the platform’s content, thus increasing their exposure to adverts tailored to their interests (Germanakos & Belk, 2016). This is a lucrative strategy, as targeting users’ specific interests reaches a product’s target audience more efficiently, increasing the likelihood of a purchase (Lai, Cheng & Lansley, 2017). Platforms argue that this mode of operation has the user’s best interests at heart. Platforms need capital to maintain the interface, and since many are privately owned firms that are nonetheless free for users, advertising provides a large percentage of their revenue (Holvoet et al., 2022). Platforms believe this form of advertising benefits both user and advertiser, as promoting products related to the user’s interests is perceived more positively than non-targeted spam adverts.
These two reasons align with Nissenbaum’s theory of contextual integrity, as well as the revised Principle of Respect for Context (Nissenbaum, 2018). In short, platforms have a right to use personal data if the use matches the social context and norms under which the data was collected.
It is appropriate for platforms to collect data on a user’s food preferences for the purpose of advertising restaurants. Neither a user nor a business would benefit from advertising a steakhouse to a vegan, and whilst it seems a bit dodgy, publicising someone’s dietary preferences does not directly threaten their security or safety in most jurisdictions. However, it would be inappropriate for a social media platform to access someone’s personal finances to determine whether to advertise a sports betting company to them. Personal finances are not relevant to the context of social media platforms, and are socially understood to be a vulnerable and sensitive topic that deserves confidentiality.
Just because a platform tells you not to ‘lock your online privacy doors’ in order to get a better online experience does not mean you should invite everyone in with open arms, nor should you barricade yourself in an attempt to prevent privacy leaks. Privacy is subjective, and different needs will arise for individuals and platforms alike. The consequences of data leaks can be severe, and data security can be compromised by simply following the instructions posed by platforms. With platforms assuming that users have adequate privacy literacy, and with minimal regulation enforcing informed consent, the advent of the social media platform has brought about widespread misunderstanding of the public scope of the content we produce.
Ultimately, businesses and platforms benefit from the minimal privacy laws in place: using personal data curates an online experience tailored to an individual’s likes, which keeps them using the platform, which in turn exposes them to targeted adverts for products that align with their interests. Whilst ethically debatable, platforms are entitled to continue this practice, so long as the data collected is socially and legally acceptable and relates to the purpose of the platform. There is no ‘one size fits all’ solution to balancing privacy protection and information sharing, and the issue requires serious discussion among multiple stakeholders. Regardless, I will conclude by emphasising the need for policymakers, educators and individuals to promote online privacy literacy, whether through law, community workshops or simply by discussing the ways privacy can be compromised and the simple steps that reduce the risk of a breach.
Australian Competition and Consumer Commission. (2018). Digital platforms inquiry: Preliminary report. Australian Competition and Consumer Commission.
Bartsch, M., & Dienlin, T. (2016). Control your Facebook: An analysis of online privacy literacy. Computers in Human Behavior, 56, 147-154.
Cassanova, J. (2018). Balancing privacy and platform quality in social media: A proposal.
Flew, T. (2021). Regulating platforms. Cambridge: Polity, pp. 72-79.
Germanakos, P., & Belk, M. (2016). Human-centred web adaptation and personalization: From theory to practice (1st ed.). Springer International Publishing.
Holvoet, S., Vanwesenbeeck, I., Hudders, L., & Herrewijn, L. (2022). Predicting parental mediation of personalized advertising and online data collection practices targeting teenagers. Journal of Broadcasting and Electronic Media.
Lai, J., Cheng, T., & Lansley, G. (2017). Improved targeted outdoor advertising based on geotagged social media data. Annals of GIS, 23(4), 237-250.
Marwick, A., & boyd, d. (2019). Understanding privacy at the margins: Introduction. International Journal of Communication, 1157-1165.
Nissenbaum, H. (2018). Respecting context to protect privacy: Why meaning matters. Science and Engineering Ethics, 24(3), 831-852.
Sinclair-Webb, E. (2018). Activist detained in Turkey for tweets.
Universal Declaration of Human Rights. (1948). United Nations.