
(Cowan, One day the kids could be taking a driverless car to school, 2017)
Artificial Intelligence (AI) is revolutionising our society in countless ways, making tasks that were once impossible for us mere mortals now achievable with the power of complex algorithms. From virtual assistants like Siri setting our alarms to Netflix suggesting our next binge-watch, AI is making our lives easier and more convenient.
However, as incredible as AI is, it is still a new technology, and there are concerns about its potential impact in the future. As AI continues to advance, it will shape not only our lives but also our thoughts and actions. This raises ethical and social questions that we need to consider as we navigate the world of AI.
One area where AI is making significant advancements is in driverless cars. These autonomous cars have the potential to revolutionise transportation, but they also raise questions about liability and regulation. Who is responsible if something goes wrong with a driverless car? How should we regulate this emerging technology? Let us explore these issues and reflect on the challenges of liability and the regulatory difficulties of driverless cars.

(Kadry, Road to 2030: the Future of Autonomous Vehicles (AVs), 2021)
The Different Views on Determining Tort Liability
Let us not forget about the big question: Who is responsible if something goes wrong?
One view holds that the human “driver” should be held liable, just like in a regular car accident. Even though the car drives itself, there is still a human who is ultimately responsible for the behaviour of the autonomous vehicle. In fact, legislation in some states, such as California, still considers everyone in the car to be the driver and holds them liable (Schroll, 2015).
There are two reasons for this perspective. First, drivers knowingly accept the risk that a technical fault may cause an accident when they choose to use autonomous driving technology for their own benefit. Second, it may be easier to resolve an accident with the few people in the car than with the manufacturer (Schroll, 2015).
Another view holds that the car manufacturer should be responsible for accidents involving driverless cars. In fact, there are already laws, such as the Automated Vehicles Act (2022), that outline the apportionment of liability for accidents involving driverless cars (Prez, 2022).
According to the law, the manufacturer must prove that it took all necessary steps to ensure the safety and legality of the driverless car. If an accident occurs while the autonomous driving system is engaged, the manufacturer may be held responsible for failing to detect the danger or to take the necessary action in time to prevent the accident (Prez, 2022).
But wait, there is more! Some jurisdictions say that car owners should take responsibility for their driverless cars.
According to the Shenzhen Special Economic Zone Smart Networked Vehicle Management Regulations in China, if a driverless vehicle is involved in an accident and there is no one in the car to take control, the car’s owner may be held responsible (Evinchina, 2022).
This is because the owner has a duty to ensure the safety of the vehicle and to take necessary precautions to avoid an accident, like being responsible for a pet even if it misbehaves when the owner is not around (Evinchina, 2022).
And here is where things get wild: what if we gave driverless cars their own legal personality? They would become electronic persons with rights and responsibilities, much like humans.
It sounds crazy, but some scholars have proposed exactly this. The European Parliament’s 2017 resolution on Civil Law Rules on Robotics recommended:
“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause.” (European Parliament, 2017).
This recommendation also applies to unmanned vehicles. Its implementation requires the establishment of a legal personality for robots, which should be able to confer rights and responsibilities like those of humans. Imagine a world where autonomous vehicles have their own legal identities and are responsible for their actions!
We have yet to set a universal law for driverless cars, so it is up to us to ensure they are safe and regulated. That is why we need to strengthen the regulation and supervision of driverless cars. One day we can sit back and let our driverless cars do all the work for us.
The Regulatory Difficulties of Driverless Cars

(Bang, Black Box, 2019)
The term “black box” refers to the hidden layers of machine-learning neural networks, a maze that makes it challenging to decipher how the algorithm arrives at its decisions. It is like observing a child learn independently, read newspapers, and demonstrate genius, while you are left scratching your head, wondering how their brain works. It is a mystery that is hard to crack, and the same goes for these algorithms. Decoding these algorithmic black boxes is a complex task, but it is crucial for ensuring transparency and accountability in AI systems.
As Bathaee (2018) explains, one obvious reason why AI can be a black box is that it uses an algorithm that stores data in a database that is not easily audited or understood by humans. This lack of transparency poses a problem when it comes to regulating AI and preventing algorithmic discrimination.
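To see why even full access to a model’s parameters explains so little, here is a toy sketch (my own illustration, not from the sources cited above): a tiny hand-wired neural network that computes XOR. Even at this miniature scale, the model’s “knowledge” is just a handful of opaque numbers; production networks contain millions of them.

```python
def step(z):
    """Threshold activation: fires (1) when its input is positive."""
    return 1 if z > 0 else 0

# A hand-wired 2-2-1 neural network that computes XOR.
# The entire "knowledge" of the model is these nine numbers --
# nothing in them reads as a rule like "true when inputs differ".
W_hidden = [[1.0, 1.0], [1.0, 1.0]]   # weights into the two hidden units
b_hidden = [-0.5, -1.5]               # hidden-unit biases
W_out = [1.0, -1.0]                   # weights into the output unit
b_out = -0.5                          # output bias

def predict(x1, x2):
    h = [step(W_hidden[i][0] * x1 + W_hidden[i][1] * x2 + b_hidden[i])
         for i in range(2)]
    return step(W_out[0] * h[0] + W_out[1] * h[1] + b_out)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{x1} XOR {x2} = {predict(x1, x2)}")
```

The network answers correctly for all four inputs, yet inspecting the weights alone would never tell an auditor *why*. Scale this opacity up to a driving model and the regulatory problem becomes clear.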
For example, researchers have warned that the computer-vision algorithms used in self-driving cars can be markedly worse at detecting pedestrians with darker skin, because the image datasets used to train them under-represent those pedestrians; a car that cannot reliably recognise someone is a car that may fail to brake for them (Kim, 2021). This highlights the danger of AI algorithms failing to recognise certain races or ethnicities accurately, which can have serious consequences in real-life scenarios.
Algorithms used to create user profiles can also be biased and unfair. The most common approach is to label people based on characteristics like age, gender, and race, but this can lead to serious problems.
For example, if an algorithm is led to believe that people of a certain age or race are more likely to behave in a certain way, it will make systematically incorrect predictions and may cause harm. We need better ways of creating user profiles that do not reinforce social biases and put people at risk.
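The mechanism can be made concrete with a deliberately oversimplified sketch (the rule, names, and numbers below are invented for illustration, not taken from any real system): once a blanket group label is baked into a profiling rule, two people with identical records receive different treatment.

```python
def risk_score(age, past_incidents):
    """Hypothetical, oversimplified driver-risk profile (illustrative only)."""
    score = past_incidents * 10
    if age < 25:          # blanket label: "young drivers are risky"
        score += 30       # penalises the whole group regardless of behaviour
    return score

# Two drivers with identical spotless records get different scores
# purely because of the group label, not anything they did.
print(risk_score(age=22, past_incidents=0))   # 30
print(risk_score(age=45, past_incidents=0))   # 0
```

A learned model does the same thing less visibly: if its training data encodes the stereotype, the penalty is buried in the weights rather than in a single readable `if` statement.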

(Sokhach, 4 Ways to Preserve Privacy in Artificial Intelligence, 2023)
Researchers like Shokri et al. (2021) have expressed concerns about disclosing AI algorithms, citing potential risks to user privacy. Milli et al. (2019) have also found the risk of hackers stealing entire algorithmic models by interpreting AI algorithms, which could lead to breaches of intellectual property and trade secret confidentiality.
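The model-stealing risk can be sketched with the simplest possible case (everything here is invented for illustration; real attacks in the literature need many more queries): if a service exposes gradient-based “explanations” of a linear model, the gradient *is* the weight vector, so a couple of queries leak the whole model.

```python
# The provider's secret linear model: f(x) = w.x + b
SECRET_W = [0.7, -1.2, 3.0]
SECRET_B = 0.5

def api_predict(x):
    """Public prediction endpoint (hypothetical)."""
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

def api_explain(x):
    """Public 'explanation' endpoint: the gradient of f at x.
    For a linear model the gradient is constant -- it is the
    weight vector itself, so the explanation reveals the model."""
    return list(SECRET_W)

# An attacker who never sees the model reconstructs it in two queries:
stolen_w = api_explain([0.0, 0.0, 0.0])   # gradient query leaks the weights
stolen_b = api_predict([0.0, 0.0, 0.0])   # f(0) = b leaks the bias
print("reconstructed model:", stolen_w, stolen_b)
```

Real explanation methods such as saliency maps are noisier, and deep models need far more queries, but the sketch captures why transparency and the confidentiality of a model trade off against each other.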
On the other hand, Stalla-Bourdillon et al. (2019) propose that machine learning can be regulated throughout the process to avoid any leaks and protect privacy. Mercado et al. (2016) have found that increased transparency in AI systems can enhance efficiency and trust in Intelligent Agents (IA).
The ethical guidelines of the European Commission’s High Level Expert Group on Artificial Intelligence (AI HLEG) emphasise the importance of informing users about AI systems and explaining the system’s capabilities and limitations in a way that is understandable to everyone (European Commission, 2022). This can help avoid potential risks and misjudgements caused by AI algorithms while enhancing productivity.
When it comes to disclosing AI algorithms, there are various factors to consider, including privacy, intellectual property rights, security, and interpretability. Striking a balance between protecting the trade secrets and intellectual property of algorithms, and being transparent and understandable where it matters, is essential in navigating the complexities of AI regulation.
In conclusion, the disclosure of AI algorithms poses difficulties involving the black box, privacy, and intellectual property, but transparency and accountability are essential to building trustworthy AI. By carefully managing these considerations, we can further harness the potential of AI in the service of humanity.
The Chicago Convention, signed in 1944 and establishing the International Civil Aviation Organization (ICAO), laid the foundation for pilotless aircraft with Article 8. It is incredible to see how far we have come since then! Today, unmanned aircraft systems are used worldwide, thanks to the development of policies on pilotless aircraft (International Civil Aviation Organization, 2011).

(Air Force, MQ – 4C Triton Unmanned Aircraft System, n.d.)
But here is a question that comes to mind: why is it that, while we have had solid policies for autonomous aircraft for decades, driverless cars are still navigating a somewhat uncertain regulatory landscape with no clear laws or policies?
I think there may be a few reasons for this disparity. First, autonomous aircraft technology has matured over the years, becoming a reliable and safe means of flying. The widespread adoption of unmanned aircraft systems globally reflects the acceptance and support of this technology by governments, aviation industry stakeholders, and the public.
Second, autopilot technology for aircraft has undergone extensive testing and certification processes, along with strict regulations and guidelines governing its operation. These factors have contributed to the safe and reliable operation of unmanned aircraft systems, making them valuable tools in various industries.
These two factors may be exactly what driverless cars still lack. By acknowledging these limitations, we can work towards addressing them and driving the progress of driverless cars. So, let us consider the question: who should push the policy and legislation around driverless cars?
I think it is up to all of us – governments, manufacturers, technology companies, and the public – to get involved and drive the development of policies and legislation for driverless cars.
For governments: As mentioned in Part 1, the US, UK, EU, and China have all developed policies for driverless cars, but these policies are still evolving, and governments need to keep focusing on improving legislation related to driverless cars to ensure safety and proper regulation.
Manufacturers like Tesla and Waymo have a responsibility to push for improved legislation that addresses the initial difficulties driverless cars face, while also promoting their benefits. They can do this by developing measures to strictly control the quality and safety of their vehicles, and by proposing suitable rules to governments for the development and use of driverless cars.
Technology companies like Google and Microsoft can help by providing technical and professional advice to the government, helping to improve relevant laws and regulations, and providing timely feedback on issues related to unmanned vehicle technology.
Finally, as users and beneficiaries of driverless cars, we also have a role to play in expressing our suggestions and needs for driverless car legislation to the government. We can attend hearings or provide suggestions through online media to contribute to the legislative process.
In a nutshell, many regulatory issues and difficulties still need to be resolved for driverless cars. If we can fix them, our society will become smarter. Therefore, let us collaborate to ensure that driverless cars are safe and beneficial for everyone in our society. We hope that one day, we will have appropriate laws and regulations for driverless cars that address any concerns, meet our needs, and move us forward.
Related Video:
- I took a ride in Waymo’s fully driverless car
- Seniors React to Driverless Cars
- Why You Should Want Driverless Cars On Roads Now
References
Air Force. (n.d.). Mq-4C Triton Unmanned Aircraft System. Air Force. Retrieved April 1, 2023, from https://www.airforce.gov.au/aircraft/mq-4c-triton
Bang, J. (2019). Black Box. Investopedia. Retrieved April 1, 2023, from https://www.investopedia.com/terms/b/blackbox.asp
Bathaee, Y. (2018). AI, Machine-Learning Algorithms, and the Causes of the Black Box Problem. In The Artificial Intelligence Black Box and the Failure of Intent and Causation (Vol. 31, Ser. 2, pp. 890–938). essay, Harvard Journal of Law & Technology.
Cowan, J. (2017). One day the kids could be taking a driverless car to school. ABC News. Retrieved April 1, 2023, from https://www.abc.net.au/news/2017-03-11/everything-you-need-to-know-about-driverless-cars/8336322
European Commission. (2022, November 17). Ethics guidelines for Trustworthy Ai. Shaping Europe’s digital future. Retrieved April 7, 2023, from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Evinchina. (2022).一图看懂“深圳经济特区智能网联汽车管理条例“(附条例全文下载. [A Chart to Understand the “Shenzhen Special Economic Zone Smart Networked Vehicle Management Regulations” (with the full text of the regulation for download)]. Evinchina. Retrieved April 7, 2023, from http://www.evinchina.com/articleshow-400.html
European Parliament. (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), paragraph 59. Retrieved April 7, 2023, from https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
International Civil Aviation Organization. (2011). Appendix: Examples of State/Regional UAS initiatives. In Unmanned Aircraft Systems (UAS) (p. 35). essay, International Civil Aviation Organization.
International Civil Aviation Organization. (2011). ICAO Regulatory Framework. In Unmanned Aircraft Systems (UAS) (pp. 3–6). essay, International Civil Aviation Organization.
Kadry, M. (2021). Road to 2030: the Future of Autonomous Vehicles (AVs). Cubictelecom. Retrieved April 1, 2023, from https://www.cubictelecom.com/blog/self-driving-cars-future-of-autonomous-vehicles-automotive-vehicles-2030/
Kim, T. (2021, October 7). Op-ed: Ai flaws could make your next car racist. Los Angeles Times. Retrieved April 7, 2023, from https://www.latimes.com/opinion/story/2021-10-07/op-ed-ai-flaws-could-make-your-next-car-racist
Mercado, J. E., Rupp, M. A., Chen, J. Y., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human – agent teaming for multi-uxv management. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
Milli, S., Schmidt, L., Dragan, A. D., & Hardt, M. (2019). Model reconstruction from model explanations. Proceedings of the Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3287560.3287562
Prez, M. de. (2022, January 26). Car makers face legal responsibility for self-driving car crashes. AM Online. Retrieved April 7, 2023, from https://www.am-online.com/news/car-manufacturer-news/2022/01/26/car-makers-face-legal-responsibility-for-self-driving-car-crashes.
Schroll, C. (2015). Splitting the bill: Creating a national car insurance fund to pay for accidents in Autonomous Vehicles. Northwestern Pritzker School of Law Scholarly Commons. Retrieved April 7, 2023, from https://scholarlycommons.law.northwestern.edu/nulr/vol109/iss3/8
Shokri, R., Strobel, M., & Zick, Y. (2021). On the privacy risks of model explanations. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3461702.3462533
Sokhach, D. (2023). 4 Ways to Preserve Privacy in Artificial Intelligence. Landbot. Retrieved April 1, 2023, from https://landbot.io/blog/preserve-privacy-artificial-intelligence
Stalla-Bourdillon, S., Leong, B., Hall, P., & Burt, A. (2019). Warning Signs. Future of Privacy Forum. Retrieved April 7, 2023, from https://fpf.org/wp-content/uploads/2019/09/FPF_WarningSigns_Report.pdf
#AI #AI ETHICS #Algorithmic bias #algorithms #Chatbots #datafication #DataPrivacy #DigitalRights #digital rights#data privacy #facebook #FREE SPEECH #Google #HATE SPEECH #Healthcare #LGBTQ #moderation #Online Harms #OnlineThreats #Platform Governance #Privacy breaches #PrivacyLaw #PrivacySecurity #racist speech #Social Media Community #Twitter AI AI Ethics algorithms ARIN6902 data collection decision-making Deepfake FRT Google governance of platform Hate Speech Internet LAW privacy privacy imbalance privacy policy rights social media social media platforms user data