AI Ethics: Implications for Regulation and Governance

Artificial intelligence (AI) is the ability of machines to complete tasks that otherwise require human intelligence, such as learning, making decisions, and reasoning. In the context of internet cultures and governance, AI has become one of the most important tools for managing online communities, particularly on social media. AI shapes internet cultures largely through its role in content moderation: companies use it to detect and respond to harmful content, including hate speech, graphic violence, misinformation, fake news, and harassment, and thereby keep online spaces safer (Andrejevic, 2019). Through these capabilities, AI eases the workload associated with governing the internet. However, there are legitimate concerns about the use of AI in governance that justify a close examination. The ethical implications of AI development and use are becoming more complex and urgent as the technology advances, which underpins the need for effective regulatory and governance frameworks that promote transparency and accountability.

AI and its Impact on Society

Recent technological developments indicate that AI has tremendous potential to transform society as we know it. The technology has applications across different sectors of the global economy. For instance, healthcare systems can use AI to improve the accuracy of diagnoses and treatments by minimizing errors and informing decision-making. In the transport sector, governments can apply machine learning to manage traffic flow and prevent accidents. In the tech sector, companies can use AI to defend against cyberattacks and promote online safety (Candelon et al., 2021). AI tools have the capacity to deter, detect, and stop attacks such as Distributed Denial of Service (DDoS) attacks, malware, and phishing. More importantly, AI can complete many tasks with high accuracy and does not suffer from error or fatigue in the way human workers do. That is why it is popular across different industries, as can be seen from the diagram below.

Diagram 1. The level of popularity of AI in different sectors (Candelon et al., 2021)
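
To make the cybersecurity example above concrete, the sketch below shows one simple building block behind automated attack detection: flagging an abnormal spike in request traffic. The traffic figures and the three-sigma threshold are invented for illustration; production systems rely on far richer features and learned models.

```python
# Sketch of rate-based anomaly detection, one simple building block behind
# AI-assisted DDoS detection. Traffic counts are invented for illustration.
import statistics

requests_per_minute = [120, 130, 118, 125, 122, 127, 950]  # last value: sudden spike

baseline = requests_per_minute[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

current = requests_per_minute[-1]
z_score = (current - mean) / stdev
if z_score > 3:  # classic three-sigma rule for "abnormal"
    print(f"Possible attack: {current} req/min (z = {z_score:.1f})")
```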

One of the main challenges associated with AI is its ability to cause unintended harm. AI systems that do not undergo thorough testing, or that rely on flawed data sets, can make mistakes with severe consequences. For instance, a transport system that relies on AI to direct the flow of traffic could cause fatal accidents if programmed incorrectly. The stakes are growing: according to projections, the AI-in-transportation market will reach $14 billion in 2030, up from $2.3 billion in 2020, as can be seen from the diagram below (Al-Sarawi et al., 2020).

Diagram 2. AI in transportation market size from 2020 to 2030 (in billion dollars) (Al-Sarawi et al., 2020)
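
As a rough sanity check on the projection above, the implied compound annual growth rate can be computed directly. This is a back-of-envelope illustration, not a figure from the cited source:

```python
# Back-of-envelope check of the growth rate implied by the cited projection:
# $2.3 billion (2020) growing to $14 billion (2030).
start, end, years = 2.3, 14.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 20% per year
```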

In terms of governance, another major challenge is that AI could perpetuate existing inequalities. For example, AI technology that relies on limited data sets could perpetuate discrimination against marginalized groups, especially on social media platforms (Flew, 2021). A high-profile case is Apple's credit card algorithm, which many have accused of discriminating against women (Candelon et al., 2021). AI systems built without attention to diversity may fail to acknowledge the needs of various communities, which could exacerbate their marginalization. Developers therefore need to consider factors such as race, gender, and ethnicity, and how their algorithms could affect the people who use or are judged by these systems. This is particularly important for AI systems with far-reaching implications, such as those used in the criminal justice system.

AI in Regulating Misinformation and Disinformation

One of the main ways AI helps regulate the internet is through content moderation. AI algorithms can identify and flag potentially false or misleading information on social media platforms (Flew, 2021). They work by analyzing patterns and trends in shared content and assessing its authenticity (Just & Latzer, 2017). For instance, AI systems can flag a post making a claim that credible sources have debunked. This work matters in contemporary society because the volume of content produced by billions of internet users overwhelms the companies that run online platforms. Without AI, the workload involved in moderating content would be excessive and would take a heavy toll on the workers involved. According to Crawford (2021), moderation work involving graphic violence, hate speech, and online cruelty can leave lasting forms of psychological trauma. There is therefore a strong incentive to develop AI for this purpose.
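
As a rough illustration of how such flagging can work, the sketch below trains a toy text classifier that scores posts for review. The posts, labels, and threshold are invented; real moderation pipelines are far larger and combine many additional signals, such as source credibility, share networks, and user reports.

```python
# Toy sketch of a text classifier that scores posts for human review.
# Training posts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_posts = [
    "Fact-checkers confirmed the photo is authentic",
    "Officials released the verified report today",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret plot revealed, share before it's deleted",
]
labels = [0, 0, 1, 1]  # 0 = likely fine, 1 = flag for review

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_posts), labels)

new_post = ["Share this secret cure before they delete it"]
score = model.predict_proba(vectorizer.transform(new_post))[0, 1]
print(f"flag score: {score:.2f}")  # a score above some tuned threshold goes to a human
```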

Artificial intelligence concept picture (Image by: Forbes)

AI can also conduct sentiment analysis, a process used to evaluate the sentiments expressed in social media posts and other online content. Intelligent systems can analyze the emotions and attitudes in a piece of content and decide whether to flag it. For instance, sentiment analysis can identify a trend of negative sentiment toward an individual, group, or issue that is based on fake or misleading information (Candelon et al., 2021). Companies also use AI to analyze the networks of people and groups that spread harmful content. By evaluating the connections between accounts and their sharing patterns, AI can identify key actors and sources of misinformation, which facilitates targeted efforts to combat harmful content.
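
A minimal sketch of the network-analysis idea follows, assuming a hypothetical share graph: accounts that seed many reshares surface as candidate key actors. A real platform would build such a graph from its own share or retweet logs at a vastly larger scale.

```python
# Sketch of share-graph analysis to surface likely key spreaders of a story.
# The share events below are invented for illustration.
import networkx as nx

shares = [  # (account that posted, account that reshared)
    ("acct_A", "acct_B"), ("acct_A", "acct_C"), ("acct_A", "acct_D"),
    ("acct_B", "acct_E"), ("acct_C", "acct_E"), ("acct_D", "acct_F"),
]
G = nx.DiGraph(shares)

# Accounts that seed the most reshares are candidates for closer review.
seeders = sorted(G.out_degree, key=lambda pair: pair[1], reverse=True)
print("Top seeders:", seeders[:3])
```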

Nonetheless, there are challenges and limitations to using AI to govern internet cultures. The first is flawed algorithms, which can absorb biases present in the data on which they are trained. For instance, a flawed algorithm may exhibit bias against a group or an individual, especially if it struggles to understand the nuances of language and context (Candelon et al., 2021). Misinformation and disinformation tactics also evolve rapidly, which means algorithms must adapt quickly to keep up. This requires continuous development and refinement of the systems, which can consume significant money and human resources (Andrejevic, 2019). Further, AI algorithms can be opaque, which makes it challenging for regulators to comprehend how the systems reach their decisions. This lack of transparency may result in complacency and an inability to make accurate evaluations.

The Importance of Ethics in AI Use and Development

Given the many applications of AI in contemporary society and its potential to cause harm, several ethical considerations are vital to its responsible development and use: accountability, fairness, security, privacy, and transparency. Fairness refers to the idea that AI systems used in governance should not discriminate against any person or group. It requires system developers to formulate algorithms that are both inclusive and sensitive to the needs of various parties (Andrejevic, 2019), and that AI systems be accessible to everyone affected by their use, regardless of status. Accountability requires the individuals and organizations that develop AI systems to answer for the decisions their technologies make (Candelon et al., 2021). Developers must build technologies that are transparent and explainable so that third parties can understand how they function. Accountability also involves establishing a clear line of responsibility in decision-making, which ensures users can seek recourse if an AI system harms them.
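
One way to make the fairness idea measurable is a demographic-parity check, which compares the rate of favorable decisions a system gives each group. The sketch below uses invented groups and outcomes, and demographic parity is only one of several (debated) fairness metrics:

```python
# Sketch of a demographic-parity check: compare the rate of favorable
# decisions a system gives each group. Groups and outcomes are invented.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = {}
for group, outcome in decisions:
    favorable, count = totals.get(group, (0, 0))
    totals[group] = (favorable + outcome, count + 1)

for group, (favorable, count) in totals.items():
    print(f"{group}: favorable-decision rate {favorable / count:.2f}")
# A large gap between groups is a signal to audit the data and model.
```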

Privacy means ensuring that AI systems do not infringe on internet users' privacy. This consideration involves designing technologies that respect people's data rights by not collecting or using their private data without their knowledge and express permission (Candelon et al., 2021). For instance, social media companies should not program their AI systems to access and share private information exchanged between users in private messages. Another ethical consideration is security, which means that AI systems should be safe from manipulation or unauthorized access (Andrejevic, 2019). AI security threats are diverse and can be divided into three groups, as can be seen from the diagram below.

Diagram 3. AI security threats by the level of danger (Andrejevic, 2019)

Espionage, sabotage, and fraud can lead to the loss of data and threaten the long-term stability of online platforms, including social media websites. Under such conditions, the security consideration is particularly important, since AI systems often have system-wide access in order to perform their assigned tasks effectively. Developers should build systems that respond robustly to cyberattacks through proactive detection and response, which means adhering to industry best practices and established standards.

Incorporating Ethics into AI Regulation and Governance

It is important to incorporate ethical considerations into AI regulation and governance. This process rests on establishing clear, industry-wide standards, guidelines, and best practices to govern the development and use of AI technologies. Governments play a significant role in promoting ethical AI by creating legislative frameworks for the tech industry. For instance, governments may require AI developers to use diverse data sets to avoid perpetuating discrimination in online spaces (Candelon et al., 2021). They may also require developers to create systems that operate transparently, ensuring accountability in case of adverse outcomes. A good example of government intervention in AI regulation is the National Artificial Intelligence Initiative Office, established by the U.S. government to coordinate research and development efforts across multiple industries and agencies. Organizations also play a role by creating internal policies and procedures that prioritize ethical practices.

The tech industry also promotes ethical AI through self-regulation. A good example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, whose mission is to ensure that everyone involved in developing intelligent systems has the education, training, and capacity to make ethical decisions. Such organizations prioritize transparency, human rights, and accountability in their standards and guidelines. Another route to ethical AI is public-private partnerships involving civil society, academia, the tech industry, and government (Andrejevic, 2019). These stakeholders collaborate to develop and use AI in ways that align with the ethical considerations outlined above.

What has already happened…

One recent case study that illustrates the ethical implications of AI is the use of facial recognition algorithms by law enforcement. In 2020, Robert Julian-Borchak Williams, an African American man in Michigan, was arrested and accused of shoplifting because of a facial recognition algorithm's mistaken match (Hill, 2020).

A surveillance camera still of the actual suspect; the police mistook Mr. Williams for the man in the red hat. (Image by: CBS News)

Mr. Williams knew he had not committed the crime, but the police officers insisted that the facial recognition algorithm's output was accurate. Although the mistake was eventually acknowledged, the credibility of the technology remains in doubt. Mr. Williams's unwarranted 30-hour detention over a technical error left an enduring mark of humiliation and trauma.
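
A brief sketch helps show why such a "match" is not proof of identity: face recognition systems typically compare embedding vectors and declare a match above some similarity threshold, so a loose threshold invites false positives. The vectors and threshold below are invented for illustration and do not represent the system used in this case.

```python
# Sketch of embedding-based face matching and why it can misfire.
# Vectors and threshold are invented; this is not the system from the case.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)                          # embedding from camera footage
lookalike = probe + rng.normal(scale=0.35, size=128)  # a different but similar face

score = cosine_similarity(probe, lookalike)
THRESHOLD = 0.9  # a loose threshold invites false matches
print(f"similarity {score:.2f} ->", "MATCH" if score > THRESHOLD else "no match")
```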


From the New York Times article: "the Wayne County prosecutor's office said that Robert Julian-Borchak Williams could have the case and his fingerprint data expunged. 'We apologize,' the prosecutor, Kym L. Worthy, said in a statement, adding, 'This does not in any way make up for the hours that Mr. Williams spent in jail.'"

Feeling aggrieved, he told investigators, "This is not me. You think all Black men look alike?"

Mr. Williams with his family at home (Image by: New York Times)

Another ethical concern associated with facial recognition AI is mass surveillance. Some law enforcement agencies use the technology to monitor public spaces such as train stations, public parks, and streets. This raises an ethical concern because the people being monitored and recorded may be unaware of it and have not given their consent (Crawford, 2021). Such surveillance also raises concerns about the chilling effect the technology could have on important rights such as free speech and assembly. Several high-profile legal cases have highlighted these implications, with private citizens suing police departments over warrantless searches, seizures, and arrests.

AI facial recognition (Image by: Wirecutter)

In response to these ethical concerns, various jurisdictions have restricted or prohibited government use of the technology. For instance, cities such as San Francisco and Portland have banned facial recognition technology, and Massachusetts has restricted police use of it, over concerns about racial profiling, privacy, and accuracy. Similarly, the European Union (EU) has proposed new regulatory frameworks to restrict or prohibit the use of facial recognition AI in some public spaces; these frameworks also seek to regulate the development of certain types of AI and subject them to oversight. Notably, some developers of facial recognition AI have imposed strict guidelines before licensing their software to third parties, and some companies have declined to sell their facial recognition AI to law enforcement agencies over fears that it would be misused.

Conclusion

To conclude, artificial intelligence is vital to regulating internet cultures, particularly through content moderation. Its capabilities allow it to detect and flag harmful content such as hate speech, misinformation, fake news, graphic violence, and harassment. However, its development and use raise multiple ethical concerns. Chief among them is the risk of unintended harm, especially when systems rely on flawed algorithms or biased data; for instance, AI could perpetuate existing inequalities and discrimination against minority groups. It is therefore essential to create clear regulatory and governance frameworks that promote transparency and accountability. Law enforcement offers a cautionary example, where facial recognition technology poses numerous ethical concerns related to privacy and mass surveillance. In the long run, stakeholders in the private and public sectors should collaborate to create the oversight required for the responsible use of AI.




References

Al-Sarawi, S., Anbar, M., Abdullah, R., & Al Hawari, A. B. (2020, July). Internet of things market analysis forecasts, 2020–2030. In 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4) (pp. 449-453). IEEE.

Andrejevic, M. (2019). Automated media. Routledge.

Candelon, F., di Carlo, C., De Bondt, M., & Evgeniou, T. (2021). AI Regulation Is Coming. Harvard Business Review. Retrieved April 3, 2023, from: https://hbr.org/2021/09/ai-regulation-is-coming.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Cooper, A. (2021). Man in the red hat [Photo]. CBS News. https://www.cbsnews.com/news/facial-recognition-60-minutes-2021-05-16/

Darbinyan, R. (2022). The growing role of AI in content moderation [Photo, Getty]. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/06/14/the-growing-role-of-ai-in-content-moderation/?sh=50a3955c4a17

Flew, T. (2021). Regulating platforms. John Wiley & Sons.

Hill, K. (2020, June 24). Wrongfully accused by an algorithm. The New York Times. Retrieved April 7, 2023, from: https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

Hill, K. (2020). Mr. Williams with his family [Photo]. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238-258.

Klosowski, T. (2022). Facial recognition [Photo]. Wirecutter. https://www.nytimes.com/wirecutter/blog/how-facial-recognition-works/

Steinacker, S. (2022). Cover background [Photo]. PharmaBoardroom. https://pharmaboardroom.com/articles/ethics-in-artificial-intelligence-better-prepare-now/

Wayne County Prosecutor's Office. (2020, June 24). WCPO statement in response to New York Times article "Wrongfully accused by an algorithm." The New York Times. https://int.nyt.com/data/documenthelper/7046-facial-recognition-arrest/5a6d6d0047295fad363b/optimized/full.pdf#page=1
