WHETHER PLATFORMS HAVE A ‘DUTY OF CARE’ TO THEIR USERS

Is the online content shared or posted on social media platforms regulated?

Overview of the topic

The rise of the Internet and its many benefits are undeniable, but it has also brought concerns about the potential harm caused by online services. This is particularly true of social media platforms such as Twitter, Facebook, WhatsApp, and other online services, which have exacerbated the issue. While online platforms could have done something about these concerns, they have largely ignored them, and the regulatory environment has encouraged this failure.

Image 1. A signpost pointing towards ‘Duty of Care’ (source: Facebook, 2023).

The consolidation of the market into a small number of super-dominant providers has also contributed to the lack of competition in terms of safety. As a result, governments around the world are beginning to consider regulation to address growing user concerns about harmful conduct and behaviours. However, the best approach to regulation remains a question that needs to be addressed. One key issue to consider is whether platforms have a duty of care to their users (Moore et al., 2021). Some argue that they do, as platforms often have control over the content that is shared on their platform and therefore have a responsibility to ensure that users are not exposed to harmful content or behaviours. Others argue that platforms are simply intermediaries, and should not be held responsible for the actions of their users.

There are certainly challenges to regulating online harm, particularly given the scale of the main social media platforms and the transnational nature of these businesses. Creating a regulatory framework that is effective, while not creating a complex and unworkable set of rules, will require careful consideration. One potential solution is to create a set of broad principles that platforms must adhere to, which outline their responsibilities to their users (Nissenbaum, 2018). These principles could include requirements to take proactive steps to identify and remove harmful content, to have mechanisms in place for users to report harmful behaviour, and to ensure that users have access to support services if they are affected by harmful content or behaviours.
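To make these principles more concrete, the short Python sketch below models how a platform might take in user reports of harmful content, route them to a review queue, and point the reporter to support services. It is a minimal illustration only: the harm categories, queue names, and support links are invented assumptions and do not come from any of the sources cited or from any actual platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto


class HarmCategory(Enum):
    """Illustrative harm categories; a real taxonomy would be set by policy."""
    BULLYING = auto()
    HATE_SPEECH = auto()
    MISINFORMATION = auto()
    OTHER = auto()


@dataclass
class UserReport:
    """A user-submitted report about a piece of content."""
    reporter_id: str
    content_id: str
    category: HarmCategory
    description: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Hypothetical support resources a platform might surface to affected users.
SUPPORT_RESOURCES = {
    HarmCategory.BULLYING: "Link to anti-bullying support and blocking tools",
    HarmCategory.HATE_SPEECH: "Link to reporting guidance and counselling services",
    HarmCategory.MISINFORMATION: "Link to fact-checking resources",
    HarmCategory.OTHER: "Link to the general safety centre",
}


def triage(report: UserReport) -> dict:
    """Route a report to a review queue and attach support information.

    The priority rules here are placeholders: a real platform would combine
    user reports with proactive detection signals and human review.
    """
    high_priority = report.category in {HarmCategory.BULLYING, HarmCategory.HATE_SPEECH}
    return {
        "content_id": report.content_id,
        "queue": "priority_review" if high_priority else "standard_review",
        "support_for_reporter": SUPPORT_RESOURCES[report.category],
        "received_at": report.created_at.isoformat(),
    }


if __name__ == "__main__":
    example = UserReport("user-123", "post-456", HarmCategory.BULLYING, "Targeted insults")
    print(triage(example))
```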

Image 2. An unidentified person reviewing online content (source: NSW Government Community Website, 2023).

Another potential solution is to shift the liability for harmful content or behaviour from the user to the platform. This could encourage platforms to take a more proactive approach to preventing harm, as they would be directly responsible for any harm caused by their platform. However, this approach would need to be carefully balanced to ensure that it does not stifle innovation or freedom of expression.

One of the challenges in regulating online harm is the difficulty of defining what constitutes harm. Harm can take many different forms, including cyberbullying, hate speech, and the spread of misinformation (Polynczuk-Alenius, 2019). Each of these types of harm may require a different approach to regulation, and there is a risk that a one-size-fits-all approach could be ineffective. To address this challenge, it may be necessary to create a more flexible regulatory framework that can adapt to changing circumstances and evolving forms of harm (Top et al., 2021). This could involve creating a regulatory body responsible for identifying emerging forms of harm and developing appropriate responses. Goggin & Vromen (2017) note that while Australia has an adequate legal framework for controlling online content perceived to be harmful to users, such as defamatory, discriminatory, or bullying material, the Government has not created an effective policy to deter harmful online content directed at people connected to social platforms.

Exploring A Statutory Duty of Care Model for Media Regulation

The proposed statutory duty of care model takes inspiration from the UK’s approach to regulating workplace health and safety. The model is designed to be flexible and broad, and it recognises that workplaces are public spaces where various types of activities occur and different people interact. The Health and Safety at Work Act (HSWA) mandates that employers ensure, so far as is reasonably practicable, the safety, health, and welfare of their employees, as well as any other person who may be affected by their business.

The duty of care is not without limits, and it does not require complete protection from harm. The key consideration is whether reasonable care has been exercised, and the term “reasonably practicable” does not demand every conceivable precaution (Acquisti et al., 2020). Professionals are held to the standards customary in their industry, which may entail performing a risk assessment. The foreseeability of risk is also a relevant factor, and an employer would not be deemed to have breached their duty of care if they failed to take precautions against an unforeseeable threat.

The HSWA stipulates certain specific issues that employers must address concerning the safety of their employees, in addition to the general duty of care. These include providing safe machinery, training relevant personnel, and maintaining a safe work environment. The HSWA also mandates employers to prepare and review a written statement of their general policy regarding health and safety at work, which marks the start of a preventative approach based on risk assessment (Jozani et al., 2020). The proposed statutory duty of care model is broad and flexible enough to be applicable to the regulation of online platforms, which can be viewed as quasi-public spaces where diverse activities occur, and different people interact. The duty of care would require online platforms to take measures to reasonably ensure the safety of their users when utilising the platform for its intended purposes.
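The preventative, risk-assessment-based approach can be illustrated with a small sketch. The Python example below, a hypothetical adaptation rather than anything mandated by the HSWA or the proposed model, records hazards in a simple register and scores them by likelihood and severity so that the most serious risks are addressed first; the fields, scales, and example hazards are all assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class RiskAssessmentEntry:
    """One line of a written risk assessment, adapted from workplace practice
    to an online-platform setting. All fields and scales are illustrative."""
    hazard: str           # e.g. "coordinated harassment in comment threads"
    likelihood: int       # 1 (rare) to 5 (almost certain)
    severity: int         # 1 (negligible) to 5 (severe)
    mitigation: str       # the "reasonably practicable" control in place

    @property
    def risk_score(self) -> int:
        """Simple likelihood x severity score used to prioritise controls."""
        return self.likelihood * self.severity


if __name__ == "__main__":
    register = [
        RiskAssessmentEntry("Harassment in comment threads", 4, 4,
                            "Keyword filters plus human review of reports"),
        RiskAssessmentEntry("Exposure of minors to adult content", 2, 5,
                            "Age gating and default-restricted settings"),
    ]
    # Review the highest-scoring hazards first, mirroring the preventative,
    # risk-assessment-based approach described above.
    for entry in sorted(register, key=lambda e: e.risk_score, reverse=True):
        print(entry.risk_score, entry.hazard, "->", entry.mitigation)
```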

The duty of care would not require online platforms to monitor all content posted on their platforms. Instead, it would require online platforms to take steps to ensure that their users are reasonably safe. This could include measures such as developing policies and procedures for the removal of harmful content, providing users with information about how to use the platform safely, and working with law enforcement to identify and prosecute those who use the platform to commit crimes.

In summary, the proposed statutory duty of care model is a broad and flexible approach to regulating online platforms. Rather than requiring platforms to monitor all content posted on their services, it would require them to take steps to ensure that their users are reasonably safe when using the platform for the purposes for which it is intended (Flew, 2018). The model draws on the UK’s approach to regulating workplace health and safety, which has been found to be flexible and future-proof.

Recent scandals exposing platforms

Recent controversies involving social media and other digital platforms have generated worldwide discussion about whether these sites have a “duty of care” to their users. Questions about the trustworthiness of these platforms have arisen in the wake of the Cambridge Analytica data scandal, in which the private information of millions of Facebook users was harvested and used for political marketing (Bygrave, 2015). Concerns over fake news, online abuse, and alleged election interference have all contributed to the growing techlash against Silicon Valley’s corporate titans, and this is not the first time Facebook has been accused of misusing personal data.

Public inquiries and parliamentary hearings have taken place in a number of countries as governments and authorities begin to take note. They include the inquiry into digital platforms conducted by the Australian Competition and Consumer Commission and the investigation into Russian meddling in the 2016 US presidential election conducted by the Senate Commerce and Judiciary Committees. The Select Committee on Communications of the House of Lords of the United Kingdom is also investigating how to regulate the Internet.

With all of these probes and inquiries, the need for greater oversight of online hubs is becoming more urgent. Senator Dianne Feinstein put it bluntly during a Senate Intelligence Committee hearing: “The platforms you have built are being used, and it is your fault. And it is on you to take action, because if you do not, we will.” The recent testimony of Mark Zuckerberg, CEO of Facebook, before US Congressional committees underscores the seriousness of the situation.

The issue of whether online services owe a “duty of care” to their users is thorny. On the one hand, these online forums can sway public opinion and affect political outcomes, so companies must take precautions to prevent malicious use of their algorithms and to safeguard the privacy of their customers (Hubbard et al., 2009). On the other hand, as private businesses, these platforms are not subject to the same rules as public utilities. Companies have an obligation to their stockholders to maximise profits and may be wary of making changes that could reduce profitability. While this subject lacks a simple solution, it is evident that digital platforms can no longer operate as they have in the past. It is possible that governments and regulators will step in to fill the regulatory void left by the end of the self-regulation era. To recover users’ trust, digital platforms would do well to adopt preventative measures to deal with these problems; if they do not, further regulatory or antitrust action is likely to follow.

Self-regulation and selective stakeholder engagement

Based on the information supplied, it is clear that self-regulation and selective stakeholder involvement are insufficient governmental answers to the problems caused by digital platforms. The reading contends that the hazards involved in the exploitation of platforms, along with public interest concerns about privacy, the future of news, and the misuse of personal data by third parties, outweigh any possible benefits.

The article proposes other solutions, such as the development of digital platforms and business models that do not require users to hand over their personal information in exchange for free access to services. ‘Exit’, a market-based response to the dominance of digital platforms through competition measures or the promotion of alternative platforms, is considered within the framework of exit, voice, and loyalty, as are proposals for increased regulation of digital platforms (McKenzie et al., 2022).

Leaving Twitter for Gab, an ad-free social network for those who “cherish liberty,” is used as an illustration of the risk that switching platforms could end up being worse than staying put. The essay contends, however, that there is still value in advocating for alternatives, such as the establishment of a non-profit organisation or the introduction of a less expensive and more civically minded social media platform (Flew, 2018).

In sum, the evidence demonstrates that the problems caused by digital platforms cannot be adequately addressed by self-regulation or selective stakeholder participation. To reduce the dangers of platform misuse and safeguard the public interest, it is preferable to look into other options, such as the promotion of different digital platforms and business models.

Proposed legislation

The introduction of the Consumer Privacy Bill of Rights by the Obama White House in February 2012 was a landmark in privacy protection. The report, titled “Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy,” detailed a multi-stakeholder approach, the building blocks for efficient enforcement, new privacy legislation, and the intention to increase interoperability with international efforts (Nissenbaum, 2018). The report was written in response to widespread concerns about out-of-control information practices, both in the open and behind the scenes.

The report was one of several commissioned by the United States government and other countries in response to rising public awareness of the need to protect personal data when using digital and information technology. The report acknowledges that the privacy issue is broad and that its origins lie in socio-technical systems. While privacy has been on the agenda of governments in the United States and Europe, the text draws attention to the fact that a wide variety of information practices are carried out under the cover of legality, and that outrage over these legal practices lends credence to the argument that something is wrong with the relevant bodies of law and regulation (Top et al., 2021).

The paper contends that the success of the Consumer Privacy Bill of Rights hinges on the interpretation of the Principle of Respect for Context (PRFC), which mandates that organisations handle personal information in light of the circumstances in which it was originally gathered (Nissenbaum, 2018). The paper stresses the importance of correctly interpreting this principle, as it places constraints on data collection and analysis. To ensure that personal data is used in ways that respect the social norms, values, and expectations of individuals, and the social settings in which they participate, the paper proposes that the interpretation of the PRFC should be grounded in a philosophy of contextual integrity.
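To show what respect for context could mean in practice, the Python sketch below tags each item of personal data with the context in which it was collected and checks whether a proposed use falls within the purposes a person could reasonably expect in that context. The context labels and purpose lists are invented assumptions; the sketch is a simplification and not a statement of the Bill’s requirements or of Nissenbaum’s contextual integrity framework.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataRecord:
    """A piece of personal data tagged with the context in which it was collected."""
    subject_id: str
    attribute: str
    collection_context: str  # e.g. "health", "social", "commerce"


# Illustrative norms: for each collection context, the purposes a data subject
# could reasonably expect. Real norms are social and contested, not a lookup table.
EXPECTED_PURPOSES = {
    "health": {"treatment", "billing"},
    "social": {"friend_recommendations", "content_display"},
    "commerce": {"order_fulfilment", "fraud_prevention"},
}


def respects_context(record: DataRecord, proposed_purpose: str) -> bool:
    """Return True if the proposed use matches the norms of the collection context."""
    allowed = EXPECTED_PURPOSES.get(record.collection_context, set())
    return proposed_purpose in allowed


if __name__ == "__main__":
    record = DataRecord("user-123", "heart_rate", "health")
    print(respects_context(record, "treatment"))             # True: within context
    print(respects_context(record, "targeted_advertising"))  # False: context violation
```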

References

Acquisti, A., Brandimarte, L., & Loewenstein, G. (2020). Secrets and likes: The drive for privacy and the difficulty of achieving it in the digital age. Journal of Consumer Psychology, 30(4), 736-758.

Bing, J. (2009). Building Cyberspace: A Brief History of Internet.

Bygrave, L. A. (2015). Internet governance by contract.

Flew, T. (2018). Platforms on trial. Intermedia, 46(2), 24-29.

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Executive summary and digital rights: What are they and why do they matter now? In Digital Rights in Australia. Sydney: University of Sydney.

Hubbard, A., & Bygrave, L. A. (2009). Internet Governance Goes Global.

Jozani, M., Ayaburi, E., Ko, M., & Choo, K. K. R. (2020). Privacy concerns and benefits of engagement with social media-enabled apps: A privacy calculus perspective. Computers in Human Behavior, 107, 106260.

McKenzie, G., Romm, D., Zhang, H., & Brunila, M. (2022). PrivyTo: A privacy-preserving location-sharing platform. Transactions in GIS, 26(4), 1703-1717.

Moore, M., & Tambini, D. (Eds.). (2021). Regulating big tech: Policy responses to digital dominance. Oxford University Press.

Nissenbaum, H. (2018). Respecting context to protect privacy: Why meaning matters. Science and Engineering Ethics, 24(3), 831-852.

Polynczuk-Alenius, K. (2019). Algorithms of oppression: how search engines reinforce racism: by Safiya Umoja Noble, New York, New York University Press, 2018, 256 pp., $28 (paperback), ISBN: 9781479837243.

Top, C., & Ali, B. J. (2021). Customer satisfaction in online meeting platforms: Impact of efficiency, fulfillment, system availability, and privacy. Amazonia Investiga, 10(38), 70-81.
