“Free” may be the most expensive: when convenient apps become a cover for privacy leaks

People often say that technological development and innovation have brought great convenience to society. But when you use free apps to enrich your life, have you ever wondered how their developers make a profit? Are they really so generous, letting users enjoy everything for free? Or is "free" simply a way to draw users in deeper, with something else in mind? If you have ever wondered about this, the truth may be beckoning you. There is no doubt that most app developers are not non-profit organisations; they are not so friendly, and some have boldly laid hands on your digital privacy. Take an example: you have your eye on a laptop but are not sure whether it is worth buying, so you search for reviews on social media. For some time afterwards, every time you browse your feed you see discounts on that laptop, along with promotions at well-known electronics chain stores nearby. At this point you should realise that your search history and geo-location have been leaked, and that there are thousands of users just like you whose digital privacy has been betrayed.

I believe most people regard the right to privacy as an integral part of human rights and hold that it should be sacrosanct. Flew (2021, p. 76) states that the right to privacy is vital because it guarantees our security, freedom, and dignity, such as freedom from discrimination. However, the right to privacy does not hold in all situations; it is justified by its particular social and legal context (Flew, 2021, p. 76). The reason is that it may conflict with other rights: the right to information, for instance, under which citizens are entitled to know the details of what the government is doing, or the right to justice, under which the judiciary may access citizens' private information for court trials. So, surprisingly, the right to privacy is not, strictly speaking, an absolute human right (Flew, 2021, p. 76). Even so, it is obvious that app developers are not entitled to collect and use user information at will without the user's knowledge; they are exploiting an information gap to entice and pressure users into bad choices.

Perhaps you are deeply concerned about privacy leaks, yet you cannot cut yourself off from these apps because they have become part of your lifestyle. This explains why we gladly accept "free" services even when we know the risks involved. But it is time to say no to this brazen banditry.

Now that we have learned that "free" apps are not really free, how do they confuse and deceive users, god forbid, for their own profit? Since the rise of the internet, people have been curious about digital privacy and security. According to a survey by Goggin et al. (2017, p. 1), a large share of social media users want to know how platforms handle their personal data: 78% of those surveyed. Pessimism about data breaches, however, dates back to the last century, and with the passage of time the number of people with such concerns has only grown. A 2018 Pew Research Center survey found that 91% of people believe they can no longer stop internet companies from collecting their data, and only 9% believe those companies keep user data safe (Rennie, 2018, as cited in Flew, 2021, p. 75). This is exactly what is happening: the vast majority of internet companies are monetising their users' private information. Take the well-known social media platform Facebook, which has been deeply involved in the scandal of trading in users' privacy. In 2018, the firm Cambridge Analytica was accused of harvesting the data of 87 million Facebook users and sharing it privately with third-party app companies, and in July of the following year Facebook was fined $5 billion by the Federal Trade Commission (Isaac and Kang, 2019, as cited in Flew, 2021, p. 75).

In general, the theft of users' privacy on the internet is not crude plunder but quiet collection, and it forces users into a choice. Flew (2021, p. 76) argues that rather than hacking directly into users' devices, app developers ask for information in the guise of conditions of use and then sell it packaged together with the rest of the user's digital information. Users are thus forced to choose: refuse to provide the information and lose access to the service, or consent and risk having that information compromised. Even if users read the terms carefully and want to grant apps only limited access to their devices, developers have a way around this too: online terms of service agreements are often obscure and confusing, and they offer limited options (Flew, 2021, p. 77). It is therefore difficult for users to gain the upper hand in this game, and the balance of power has been upset. Suzor (2019, p. 17, as cited in Flew, 2021, p. 77) notes that app developers, as rule makers, hold the final right of interpretation and have secured many rights for themselves in the agreement.

The reason users' private information is the forbidden fruit that app developers want to steal is not simply a penchant for voyeurism, but that it serves as raw material for tools that predict and intervene in human behaviour. Google was the first to analyse user behaviour extensively: it not only monetised users' privacy but also used it to optimise its search engine, and nowadays a large number of app developers, such as Facebook, are among the players (Flew, 2021, p. 81). User information is training data for machine learning about human behaviour, and the results can be used to infer patterns of behaviour, or even to steer how a situation develops, which Shoshana Zuboff calls "surveillance capitalism" (Zuboff, 2019, p. 8, as cited in Flew, 2021, p. 80).

Nowadays the digitisation of information is commonplace, and a large amount of it can be found in the digital archives of the platforms users rely on (Friedewald et al., 2017, as cited in Goggin et al., 2017, p. 9). If the trend of apps amassing user data is unstoppable, can they at least protect that data? Even setting aside cases where app developers actively share user information with third-party companies, there are numerous cases of privacy leakage caused by cyber attacks. Goggin et al. (2017, p. 9) raise a variety of concerns about such issues, including, but not limited to, financial loss caused by the leakage of personal property information and safety risks caused by the leakage of geographical location.

The hacking of Medibank, which resulted in a breach of customer privacy, is a case in point. Medibank was hit by a costly cyber attack in 2022 in which the personal information of about 9.7 million customers was stolen. Kost (2024) reports that the customer data, which included names, dates of birth, passport numbers, and health insurance information, was eventually posted on the dark web and publicly sold, with the hackers demanding roughly $10 million from Medibank to prevent the information from spreading. The incident is deeply worrying: it suggests that companies cannot effectively protect users' privacy, and that a leak can happen in the blink of an eye.

Such platform data breaches are not unique, and the problem can roughly be divided into two parts. One is that users, forced to choose between privacy and convenience, acquiesce to the platform's unauthorised collection of their information because it genuinely improves the service experience. Francis and Francis (2017, p. 46, as cited in Flew, 2021, p. 77) argue that users who emphasise privacy while condoning the platform's wilfulness exhibit a privacy paradox. Of course, this is not solely the user's fault; the platform must also bear some responsibility, perhaps even most of it. The deeper source of the problem is that platforms refuse to disclose how user information is collected and used, citing reasons such as internal company secrets, leaving the data flow opaque and users, as the disadvantaged party, unable to defend their rights. Suzor (2019, p. 24) believes that the confusion and controversy surrounding platform rules are the fundamental reasons they cannot be made public. Medibank, for all its promises of strict data protection, may likewise never have published clear data protection rules, or the rules it published may have lacked substance. The leakage of user information leads to a crisis of trust between users and the platform. For users, their personal and privacy security is under constant threat; for the platform, the loss of user trust hinders future development and can even affect the stability of the entire industry.

No one wants their privacy circulating on the black market, so why can platforms still trample on users' privacy with impunity? This points to gaps and deficiencies in the regulation of privacy protection. For example, during the controversy over Facebook's privacy policy, its chief executive, Mark Zuckerberg, announced that Facebook would allow users to make suggestions on its terms of service, and that any subsequent rule change would require a user vote. This seemed a sign of democratising governance within Facebook's virtual community, but things did not turn out well, and the initiative gradually lost steam. Facebook changed the details of the voting process, including requiring that more than 30% of active users approve any change to the existing rules, a participation rate that is nearly impossible to achieve (Suzor, 2019, p. 10).

From a government regulatory perspective, privacy laws can suffer from lag: they are often updated only after a major incident, in order to prevent a repeat. Take Australia's Privacy Act 1988, for example, which has undergone several updates since its inception, most recently on 10 December 2024. Mellis et al. (2025) summarise the main changes in this amendment, such as that individuals can now take legal action to defend their rights after a privacy breach, and that organisations now have an obligation to show users the automated processes they use. But the new amendments still need time to be implemented, which means that much of the fine print will not come into effect until later. So how do we, as part of the online community, prevent privacy breaches and abuses from happening in the first place? The Office of the Australian Information Commissioner (n.d.) offers advice on this, including but not limited to:

  • Be cautious about interacting with social media, such as sharing and commenting.
  • Be cautious about sharing personal information on social media.
  • Read privacy terms carefully and challenge organisations and agencies if you do not understand them.

By now we should have a comprehensive understanding of privacy breaches and abuses. It is important to remember that companies act so "generously" because our personal information is so valuable. Major breaches such as those at Facebook and Medibank show that crooks and unscrupulous merchants will always find ways to exploit loopholes in the law. And because of gaps in the law and lags in updating it, governments are often unable to anticipate and prevent incidents. It is therefore important for us first to protect our own online privacy, second to advise our family and friends, and last to monitor companies' conduct and the implementation of the law.

Flew, T. (2021). Issues of Concern. In Regulating Platforms (pp. 72–103). Cambridge, UK: Polity.

Goggin, G. (2017). Digital Rights in Australia. The University of Sydney. https://ses.library.usyd.edu.au/handle/2123/17587

Kost, E. (2024, November 18). What Caused the Medibank Data Breach? UpGuard. Retrieved April 4, 2025, from https://www.upguard.com/blog/what-caused-the-medibank-data-breach

Mellis, V., Kallenbach, P., Richardson, M., & Beach, A. (2025, January 29). Privacy and other Legislation Amendment Act 2024 now in effect. MinterEllison. https://www.minterellison.com/articles/privacy-and-other-legislation-amendment-act-2024-now-in-effect

Office of the Australian Information Commissioner. (n.d.). Ways to protect your privacy. Office of the Australian Information Commissioner. https://www.oaic.gov.au/privacy/your-privacy-rights/ways-to-protect-your-privacy

Suzor, N. P. (2019). Who Makes the Rules? In Lawless: The Secret Rules That Govern our Digital Lives (pp. 10–24). Cambridge: Cambridge University Press.
