The rapid growth of artificial intelligence (AI) has produced striking advances, from self-driving cars to intelligent devices. Whether morality can be written into code, however, is worth pondering: can artificial intelligence decide whether humans live or die? Advances in AI have raised moral questions, most visibly in driverless vehicles. Autonomous driving is selected as a case study because it is currently the AI technology the public is most likely to encounter. This article focuses on the ethical dilemmas posed by self-driving cars and other forms of AI automation, as well as the controversies surrounding AI's involvement in Internet governance. As AI achieves success in many areas, Internet governance and AI will become closely integrated. The article also discusses why a solid ethical framework is needed to regulate AI's continued development in Internet governance.
II. The Moral Challenges of Self-Driving Cars
The increasing adoption of artificial intelligence has produced ethical difficulties, notably the trade-off between safety and ethics. On the night of March 18, 2018, in Tempe, Arizona, a pedestrian was struck and killed by an Uber test vehicle operating in autonomous mode, the first recorded pedestrian fatality involving a self-driving car. The disaster quickly made news across the United States and ignited an academic debate about artificial intelligence's ethical quandary. In the vehicle involved, automatic emergency braking had been disabled during autonomous operation because abrupt braking can make the vehicle unstable. In other words, not braking would strike the pedestrian, yet hard emergency braking would endanger the passengers' safety (Noble, 2018a). This is not a technical issue but an ethical decision.
Every time an autonomous vehicle faces an imminent collision, the artificial intelligence system makes a judgment call. Programming these machines to make ethical decisions in a split second is an enormous problem, even though reducing accidents caused by human error is the fundamental purpose of self-driving automobiles (Just & Latzer, 2017). For instance, how should a self-driving car choose between striking a pedestrian and endangering its passengers by swerving? According to Pasquale (2015), AI systems should be developed in the public interest, establishing ethically sound autonomous technologies.
Meanwhile, some consumers believe that, having purchased the car, the driver's safety should be the first consideration. Marketing that caters to this belief will undoubtedly boost sales and serve the commercial interests attached to autonomous driving. According to Flew (2021), however, platform administration should be driven by a sense of social duty: companies should prioritize ethical AI decision-making over profit or market domination. After all, even car owners are sometimes pedestrians, and few would want a driver-first morality to become mainstream. Since passengers cannot be protected without limit, another solution suggests itself: a moral judgment derived from utilitarianism, which seeks to minimize the harm of harmful actions and maximize good outcomes (Can, 2022). However, can utilitarianism be genuinely fair and appropriate in AI governance?
If utilitarian judgments such as prioritizing the protection of drivers are written into autonomous driving standards, they too will create a moral dilemma. Ethical decision-making must therefore be built into the design and development of autonomous vehicles to guarantee that they put the well-being of everyone involved in an accident first. Ethical qualities such as empathy, compassion, and respect for human dignity should be incorporated into the design of autonomous systems (Noble, 2018a). There is thus a rising need for a comprehensive ethical framework making clear that AI systems must embed ethics if they are to support Internet governance. This ethical dilemma is the biggest challenge for AI technology in Internet governance.
III. The Tesla Case Study
The "moral gap" that repeatedly arises in Internet governance around AI decisions refers to the moral disagreement that can emerge between machines and humans. This part therefore uses Tesla Autopilot as a case study of the ethical dilemmas of autonomous driving, which are equally applicable to Internet governance. It is argued that ethical responsibility and accountability must be built into the design of all Internet-managed AI technologies (Lin, 2018).
The fatal 2016 collision between a Tesla Model S and a tractor-trailer is one of the best-known incidents involving Tesla's Autopilot feature. The driver allegedly ignored multiple warnings to retake control while Autopilot was engaged. Goodall and Goodman (2020) point out that Tesla has been criticized for its lack of transparency in handling data gathering and analysis after Autopilot accidents, which underscores the importance of businesses being accountable for the technology they create and ensuring that the process by which it is created and used is open and public.
More openness and responsibility on the part of developers of autonomous vehicles is needed, as demonstrated by the Tesla case study. At the same time, Tesla has been accused of avoiding responsibility for its technology's shortcomings and downplaying its Autopilot system's role in these tragedies. Businesses working on autonomous vehicles have a responsibility to put the safety of everyone on the road first and to be held liable for accidents caused by their products (Sparrow & Howard, 2017). If self-driving cars are to be used safely and ethically, a complete moral framework must be developed, outlining firms' obligations in creating and implementing autonomous technology.
Consequently, the Tesla case demonstrates the importance of making ethical judgments about self-driving cars. The governance of artificial intelligence should emphasize deontology, that is, ethics and responsibility, more than utilitarianism. In a society where AI applied to Internet governance reshapes human life, AI technology should be empathetic, just, and prudent (Sparrow & Howard, 2017). Tesla needs to weigh these factors holistically so that the self-driving technology it develops will be safer and more ethical.
IV. Balancing the controversies posed by AI technologies in Internet governance
Autonomous driving is one branch of artificial intelligence, and it remains controversial. This section gives examples of current AI technologies in society, outlines the moral challenges they would face if applied to Internet governance, and discusses how to balance their application.
Firstly, there is machine learning, one of the core technologies of artificial intelligence. There are three main types of machine learning: supervised, unsupervised, and reinforcement learning. The autonomous driving case discussed in this paper falls mainly under reinforcement learning, meaning that the computer learns how to make the best decisions in a dynamic environment (Gogoll & Müller, 2017). Even so, some ethical dilemmas remain that the computer cannot compute. Applying this technique to Internet governance is likely to raise the risk of algorithmic bias (Just & Latzer, 2017): if the algorithms that create and disseminate information embed non-neutral viewpoints, incorrect and misleading information may proliferate, harming Internet governance.
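The idea of "learning the best decisions in a dynamic environment" can be illustrated with a minimal tabular Q-learning sketch. This is a hypothetical toy, not an actual driving system: the road, rewards, and parameters below are all invented for illustration.

```python
import random

# Hypothetical toy: an agent on a 5-cell road must learn to reach
# cell 4 (safe stop) while avoiding cell 0 (collision).
random.seed(0)

n_states, actions = 5, [-1, +1]          # move left / move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def step(state, action):
    """Return (next_state, reward); crashing (state 0) is penalised."""
    nxt = max(0, min(n_states - 1, state + action))
    if nxt == 0:
        return nxt, -10.0                # collision: large penalty
    if nxt == n_states - 1:
        return nxt, +10.0                # safe goal reached
    return nxt, -1.0                     # small cost per move

for _ in range(500):                     # training episodes
    s = 2                                # start mid-road
    for _ in range(20):
        # epsilon-greedy action choice, then the Q-learning update
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: q[(s, a)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt
        if s in (0, n_states - 1):
            break

# After training, the greedy policy from the start state moves right,
# away from the collision cell.
print(max(actions, key=lambda a: q[(2, a)]))
```

The point of the sketch is that nothing here is a moral rule: the agent's "decision" is entirely a product of the reward numbers a designer chose, which is precisely why encoding ethics this way is contentious.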
Secondly, there is deep learning, which uses neural networks to learn from and process data. Artificial neural networks are made up of multiple layers, each of which detects different features, for example edges, colors, and shapes in an image (Awad, 2018). Deep learning has been hugely successful in areas such as image recognition, speech recognition, and natural language processing, but it has also raised concerns about invasions of privacy (Nyholm & Smids, 2016). For its use in Internet governance, it is worth focusing on data privacy and security and improving the interpretability of models.
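The layered structure described above can be sketched as a minimal forward pass. This is a hypothetical illustration with random, untrained weights; the layer sizes and the feature interpretations in the comments are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Nonlinearity applied between layers."""
    return np.maximum(0.0, x)

# A 3-layer network: 8-dim input -> 16 hidden -> 8 hidden -> 2 output scores.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h1 = relu(x @ w1 + b1)   # early layer: low-level features (e.g. edges)
    h2 = relu(h1 @ w2 + b2)  # middle layer: combinations (e.g. shapes)
    return h2 @ w3 + b3      # output layer: class scores

x = rng.normal(size=(1, 8))  # one dummy input vector
print(forward(x).shape)      # one row of 2 class scores
```

The interpretability concern mentioned above is visible even in this toy: the output is a chain of matrix products, so no single weight explains why one class score beats the other.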
Additionally, there is a hugely controversial goal in the field called autonomous thinking, or Artificial General Intelligence (AGI). The development of AI has generated disputes about safety and ethics, particularly where AGI or autonomous decision-making is concerned (Goodall, 2014). The underlying reason people are wary of AI and consider it dangerous is the fear that machines able to think for themselves might behave uncontrollably and threaten human survival and security (Crawford, 2021). Deploying AGI in Internet governance is likely to erode trust and fuel negative sentiment online unless the technology is regulated. These are all things Internet governance must balance when using the various technologies of AI.
There is no denying that autonomous thinking is indeed one of the key directions in AI development, and its realization would change how the Internet is governed. However, these technologies bring problems with them, such as unfairness and bias in AI and the reshaping of the human-machine relationship. It is essential to weigh these concerns and implement a solid ethical framework to guide the development and governance of AI on the Internet.
V. An ethical framework as a prerequisite for using AI in Internet governance
Any development of artificial intelligence in Internet governance should be human-centered, which is why an ethical and legal framework is necessary. For example, self-driving cars must rest on a strong ethical basis to ensure safe, ethical, and socially responsible deployment (Bonnefon et al., 2016). That requires ethical considerations in AI algorithms, transparency and responsibility in AI development, clear ethical rules and legislation, and open dialogue and stakeholder collaboration (Just & Latzer, 2017). By addressing these difficulties, self-driving cars can make both ethical and technological progress.
It is not just autonomous driving that must be brought under ethical regulation; other AI technologies also need regulation if they are to help govern the Internet. Online disinformation poses a serious challenge. Transformative new AI systems such as ChatGPT can generate convincing human-like language, while deepfake techniques increase the fidelity of fabricated images, audio, and video (Crawford, 2021). Without ethical regulation, such AI could not only distort public perception but could also be used to commit online fraud, compromising users' interests and even endangering public safety.
However, there are two sides to every coin. While emerging AI such as ChatGPT may exacerbate the spread of disinformation on the Internet, the judicious use of AI can also help curb it. "Human-machine cooperation in governance" has become a hot topic. Firstly, AI, as an information processing tool, can automatically scan massive amounts of content through machine learning and natural language processing and quickly and efficiently flag possible false information (Noble, 2018a). Secondly, as information consumers, humans can review and correct the items AI flags as potentially false, improving the accuracy of disinformation detection (Pasquale, 2015). On this basis, large language models such as ChatGPT should be effectively regulated to meet ethical and legal requirements.
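The two-stage human-machine loop described above can be sketched as follows. This is a hypothetical toy: the keyword heuristic stands in for a real machine-learning classifier, and all names and example posts are invented.

```python
# Stage 1 (AI) flags suspicious posts cheaply; stage 2 (human) confirms
# or overrides each flag, so the final verdict is always human-reviewable.
SUSPICIOUS_TERMS = {"miracle cure", "guaranteed profit", "secret they hide"}

def machine_flag(post: str) -> bool:
    """Stage 1: a stand-in classifier that flags possible disinformation."""
    text = post.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

def human_review(post: str, flagged: bool, reviewer_verdicts: dict) -> bool:
    """Stage 2: a reviewer's recorded verdict overrides the machine's flag."""
    return reviewer_verdicts.get(post, flagged)

posts = [
    "This miracle cure ends all disease overnight!",
    "City council meets Tuesday to discuss bus routes.",
]
verdicts = {}  # e.g. map a post to False if a reviewer clears a false flag
results = [human_review(p, machine_flag(p), verdicts) for p in posts]
print(results)  # the first post is flagged, the second is not
```

The design choice matters for the essay's argument: the machine only proposes, and the human verdict table always has the last word, which is one concrete reading of keeping the "human in the driver's seat."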
Overall, the use of AI in Internet governance will face a wide range of ethical issues. A variety of measures are needed: improving AI algorithms, strengthening AI-assisted information review so that content is easier to manage, and reinforcing AI laws and policy regulations to improve the efficiency of Internet governance (Flew, 2021). Emerging AI technologies such as ChatGPT must be used ethically to help address false and incorrect information online so that they can support Internet governance. Whether with Tesla's Autopilot or OpenAI's ChatGPT, it is always the 'human' who sits in the driver's seat of collaborative human-machine governance of the Internet, rather than machines driving everything; this is the central idea repeatedly highlighted in this article.
In conclusion, the rapid development of artificial intelligence and its integration into Internet governance pose significant ethical challenges. This article used the case of Tesla's AI-driven autonomous cars as an entry point to explore the ethical dilemmas faced by artificial intelligence, then analysed the controversies in applying various AI technologies to Internet governance. The article highlights the urgent need for a robust ethical framework to address these challenges. Internet governance combined with artificial intelligence is the future trend, and AI should be developed in line with human interests and values and used appropriately in that role.
- Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., … & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
- Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
- Flew, T. (2021). Regulating platforms. John Wiley & Sons.
- Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5-15.
- Gogoll, J., & Müller, J. F. (2017). Autonomous cars: in favor of a mandatory ethics setting. Science and engineering ethics, 23, 681-700.
- Goodall, N. J. (2014). Machine ethics and automated vehicles. In Road vehicle automation (pp. 93-102). Springer.
- Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, culture & society, 39(2), 238-258.
- Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
- Noble, S. U. (2018a). A society, searching. In Algorithms of oppression (pp. 15-63). New York University Press.
- Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem?. Ethical theory and moral practice, 19(5), 1275-1289.
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
- Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80, 206-215.