By Samantha Lay
In a world where everyone is seemingly always connected, sharing and organising their lives through online platforms, the concept of ‘privacy’ seems to have been discarded. More and more, we find ourselves struggling to keep our personal information to ourselves, as we mindlessly agree to terms and conditions without a second thought.
Have you ever stopped to consider what private information digital platforms have access to and what they’re doing with it?
The answer is likely yes, but what are we supposed to do when the sign up process for every service and product involves divulging private information?
Does privacy really exist in this digital age?
Let’s take a look at the classic case of how platforms manage the idea of ‘privacy’ and how these corporate conglomerates make it intentionally difficult for you to keep your information to yourself.
What is privacy in this digital age?
Generally speaking, privacy is a human right which includes the right:
- to be free from interference and intrusion
- to associate freely with whom you want
- to be able to control who can see or use information about you (Office of the Australian Information Commissioner, n.d., para. 2).
Given the technological changes in the world, we have had to adapt how the concept of privacy applies in this digital age. In 2012, the UN recognised that human rights in the digital space must be protected in the same way they are in the physical world (Office of the High Commissioner for Human Rights [OHCHR], 2013).
However, this has clearly been difficult in practice, given that the scope of human rights has extended into a whole other dimension, one which alters and advances daily.
Let’s break the key issues of governance of personal information down by looking at how platforms are attempting (or not attempting) to protect our data.
How do platforms regulate the exchange of ‘personal’ information?
Platforms have privacy policies to inform users of how their data and personal information is gathered, utilised and shared. This is generally included in the terms and conditions which a user must agree to before accessing the platform (Australian Competition and Consumer Commission, 2018).
So in theory, we all understand what we are willingly signing up for, right? I’m sure you’re guilty of mindlessly hitting that ‘agree’ button. But have you ever considered that perhaps this behaviour is encouraged by platforms, which are making it increasingly difficult for their users to understand what they’re signing up for?
Terms of service agreements are complex, vague and final, as users must accept them in order to access the platform. Because of this, users struggle to provide informed or free consent: these agreements are hard to comprehend, can be altered without notifying users, and cannot usually be declined without losing access to the service (Flew, 2021).
There is also a strong power imbalance in the relationship between users and the operators of these platforms, as operators have complete power to decide on the rules and how to enforce them. Rather than protecting their users, these agreements are designed with the platforms’ best legal interests at heart (Suzor, 2019).
What is ‘personal’ information?
The data collected and utilised by digital platforms is widespread and often highly detailed. This information can provide intel on a user’s interests, commonly frequented areas and spending habits to name a few (Goggin et al., 2017).
A more recent look into Google and how it manages the privacy of its users reveals how the current concerns of platform users are being brought into the spotlight.
What information is Google collecting from you?
Google reports that its mission is “to organise the world’s information and make it universally accessible and useful” (Google, n.d., para. 1). How often are users accessing information through the platform? Well, Google’s influence is clear from the fact that it still holds the position of the most utilised search engine in the world, outdoing its competitors by miles (Bianchi, 2023).
Think about how many times you ‘Google’ something you need answers to in a day. Now multiply this by the billions of users worldwide. Clearly, Google has access to a continuous supply of data but how exactly are they handling the power they hold with all this personal information?
Given its dominance and billions of users, as well as the plethora of applications in its suite, Google must have measures in place to protect the privacy of its users, right?
The company certainly tells us it does. Google reports that it has multiple measures in place to protect its users’ data, including the ability for users to control the data the company can access, alongside a claim that personal information is never sold to third parties. The company also offers a suite of privacy features such as Safe Browsing and the scanning of applications and emails, to name a few (Google Safety Center, n.d.-a). It claims that ‘sensitive’ information is not used to personalise ads (Google Safety Center, n.d.-b), though it acknowledges utilising data about user activity to inform its advertisements for a better experience (Google Safety Center, n.d.-c).
The company has also seemingly recognised growing concerns around privacy and, in 2021, pushed forward the agenda of ensuring its users’ privacy by claiming to create a “privacy-first web” (Temkin, 2021, para. 1) through removing support for third-party cookies. These were to be replaced with the Federated Learning of Cohorts (FLoC) approach, in which users are clustered together based on shared interests. This way, individuals can be hidden within a larger group, allowing personalised ads and content while keeping individual user activity private (Temkin, 2021; Bindra, 2021).
This is all well and good; however, perhaps we need to consider this through the lens of what Google does to protect us from Google itself, rather than how the company safeguards us from external, potentially dangerous criminals trying to steal our data.
Events in recent years paint a very different picture to how Google has portrayed its approach to protecting user privacy. The company has made headlines for the sheer amount of data its products collect, as well as how it uses this information.
Is Google really protecting your data?
In 2021, Google Chrome was found to be harvesting data from its users after Apple required app developers to disclose how much data they collect. Google Chrome was found to collect more data than three other major browsers, and to link all of this information to user identities (Doffman, 2021).
Within a year of this incident, Google was involved in a record-breaking US$391.5 million settlement with 40 states, after it continued to retrieve location data despite users switching this setting off. This was said to have occurred between 2014 and 2020, and the lawsuit was brought against Google for its violation of consumer protection laws (Kang, 2022).
This was discovered after a story by the Associated Press, which found that Google was still tracking the location of more than 2 billion users despite them opting out. This type of data was reported to be particularly sensitive, given its ability to identify a user and their routines.
Was this intentional? Perhaps, given that Google benefits greatly from access to this data. Location tracking allows companies to sell ads targeted at nearby customers, and feeds the resources behind the upwards of 200 billion dollars Google generates yearly in ad revenue (Associated Press, 2022).
Does the right to privacy really exist in this digital age?
It’s clear that, as users of digital platforms and services, we agree to countless terms and conditions in order to communicate with others and feel a sense of belonging and connectedness. We willingly give these corporations access to personal information about what we like, how we live and the places we visit. However, the long and convoluted nature of these agreements makes it easier for us, and almost encourages us, to sign these agreements without proper consideration of the personal information we are giving away.
I’m sure many of you are also on multiple digital platforms, meaning that our data is stored across multiple companies, which may each utilise this information for different purposes. Revenue generation for these platforms in the form of targeted advertisements is one of the main motives for this data collection. I don’t necessarily mind the personalisation of advertisements; however, the accuracy of these is becoming increasingly unnerving.
These platforms use the ‘all or nothing’ approach with their users. If one wants to be a part of these online communities, they have no choice but to accept the conditions of these agreements.
So does that mean that the right to privacy no longer exists if we want to be a part of this digital age?
If we reference the definition at the beginning of this article, perhaps we could argue that it does exist, although only partially. These platforms can be used without interference and allow their users to connect with whomever they please. However, the last point, controlling who can view or utilise information about an individual, is something that arguably either no longer exists or is very difficult to exercise.
Companies like Google do seemingly give you the ability to control the information they can view and utilise, yet have still been shown to collect data despite users enabling controls to prevent this behaviour. A quick search online uncovers a plethora of articles on other companies tracking their users and collecting their personal information to use to their advantage.
The way I see it, the right to privacy will and should always exist.
We cannot fight the digitalisation of our personal information, or our need and desire to utilise the services which collect this data. Our information is the cost of access to everyday services, but that should not mean we forfeit control over how this data is accessed and used. Legislation will need to continue to adapt to ongoing changes in technology and what they mean for our personal information. Companies should have privacy at the forefront of their agenda if they wish to retain their users, and should be held accountable for their actions.
We cannot deny the risk to our data that comes with agreeing to use these services; however, we should not have to live in fear of this information falling into the wrong hands.
Associated Press. (2022, November 15). Google will pay $392m to 40 states in largest ever US privacy settlement. The Guardian. https://www.theguardian.com/technology/2022/nov/14/google-settlement-40-states-user-location-tracking
Australian Competition and Consumer Commission. (2018). Digital platforms inquiry: Preliminary report. Australian Competition and Consumer Commission. https://apo.org.au/sites/default/files/resource-files/2018-12/apo-nid209641.pdf
Bianchi, T. (2023). Worldwide desktop market share of leading search engines from January 2015 to January 2023. Statista. https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/
Bindra, C. (2021). Building a privacy-first future for web advertising. Google Ads. https://blog.google/products/ads-commerce/2021-01-privacy-sandbox/
Doffman, Z. (2021, March 20). Why you shouldn’t use Google Chrome after new privacy disclosure. Forbes. https://www.forbes.com/sites/zakdoffman/2021/03/20/stop-using-google-chrome-on-apple-iphone-12-pro-max-ipad-and-macbook-pro/?sh=c06bd6a4d084
Firmbee.com. (2021). [Photograph of man using laptop in a cafe]. Unsplash. https://unsplash.com/photos/31OdWLEQ-78
Flew, T. (2021). Regulating platforms. Polity.
Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Privacy, profiling, data analytics. In Digital rights in Australia. University of Sydney. https://ses.library.usyd.edu.au/bitstream/handle/2123/17587/USYDDigitalRightsAustraliareport.pdf?sequence=7&isAllowed=y
Google Safety Center. (n.d.-a). Overview. Google. https://safety.google/intl/au/
Google Safety Center. (n.d.-b). Data practices. Google. https://safety.google/intl/au/privacy/data/
Google Safety Center. (n.d.-c). Ads and data. Google. https://safety.google/intl/au/privacy/ads-and-data/
Google. (n.d.). About. Google. https://about.google/
Graham, S. (2015). [Photograph of man writing on documents]. Unsplash. https://unsplash.com/photos/OQMZwNd3ThU
Kang, C. (2022, November 14). Google agrees to $392 million privacy settlement with 40 states. The New York Times. https://www.nytimes.com/2022/11/14/technology/google-privacy-settlement.html
Office of the Australian Information Commissioner. (n.d.). What is privacy? Office of the Australian Information Commissioner. https://www.oaic.gov.au/privacy/your-privacy-rights/your-personal-information/what-is-privacy
Office of the High Commissioner for Human Rights. (2013). The right to privacy in the digital age. United Nations. https://www.ohchr.org/en/stories/2013/10/right-privacy-digital-age
Suzor, N. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press. https://doi.org/10.1017/9781108666428
Temkin, D. (2021). Charting a course towards a more privacy-first web. Google Ads. https://blog.google/products/ads-commerce/a-more-privacy-first-web/