
The Ethical Dilemma of Self-Driving Cars

Disclaimer: The views expressed are those of the individual author. All rights are reserved to the original authors of the materials consulted, which are identified in the footnotes below.


The current situation


Presently, while fully autonomous vehicles are not available in the UK or EU, partially autonomous vehicles are already on the market. Current safety regulations require a human driver to be ready to take over control of the vehicle when needed. The Department for Transport has stated that the UK is ‘on track to meet its commitment to have fully self-driving vehicles on UK roads by 2021’.[1]




Legal complications with self-driving cars


Tort and insurance liability


On 18 March 2018, Elaine Herzberg was struck and killed by a driverless car. As the first pedestrian fatality involving a fully autonomous vehicle, the case presents an unprecedented liability challenge.[2] The crash was not caused by a defective sensor or software; rather, it occurred because the judgment of a human driver had been replaced by that of a machine.[3] Normally, a claim for vehicular negligence would be brought against the driver and their insurance provider. However, because the self-driving car itself caused the outcome, a claim for product liability might instead be brought against the car manufacturer.

With a driverless car, therefore, any ‘fault’ of the driver becomes irrelevant, since the driver no longer controls the car. Legislators must be cautious about how readily the law imposes ‘fault’ on driverless technology. Liability rules would convert what was once the human failing of a vehicle operator into fines or even criminal consequences,[4] so if liability is set too high, innovation will be hindered and businesses will pull out of the market.


Current tort law has not kept pace with the innovation of driverless technology, so partially autonomous cars are still treated under the traditional vehicular negligence claim (though in the future, when completely autonomous cars are involved, product liability claims are more likely to be brought). That innovation, however, is a gradual process which may last decades, and throughout it the social and legal status of self-driving cars will conflict with existing legal standards.[5] Road laws will need to be updated, and it remains an open question what the requirements should be for owning a completely self-driving car.


Morally murky decisions


What if a self-driving car has to make a split-second decision between fatally colliding with a pedestrian or making a swerve that will kill the driver? A human’s response would be a ‘reaction’ based on instinct, but a programmer making the same choice would arguably be ‘premeditating’ that action. If that action results in death, should that programmer be liable for homicide?

Driverless technology sits uncomfortably alongside murky questions of morality. A self-driving car of the future will be able to assess the people around it – should it, for instance, be programmed to favour the life of a law-abiding motorcyclist wearing a helmet over that of a reckless one without a helmet? It is difficult to imagine cars meting out a standard of ‘fairness’ devised by tech companies. Should the car save its passenger at any cost? Or should it adopt a utilitarian approach and minimise the total number of casualties? It will be interesting to see how legislation evolves in this area.


Data privacy and cyber-security


Self-driving cars will rely on collecting their drivers’ personal information – including home addresses, places of work and the schools their children attend – which poses a security threat. Given that the right to erasure under Article 17 of the EU GDPR[6] has been implemented in the Data Protection Act 2018,[7] any passenger can ask the car company that holds data about them to delete that data and, in some circumstances, the company must then do so. This may create difficult barriers for self-driving car companies to overcome. Furthermore, there is a cybersecurity risk that hackers could target the driverless system and gain control over any vehicle the company has manufactured, with potentially disastrous consequences.


In conclusion, the emergence of the self-driving car has affected a number of legal areas. From new forms of liability to questions of morality, the issues the law will inevitably have to address are challenging indeed. It will be most interesting to see how the law evolves alongside the innovation of completely driverless technology.


Su-Ann Cheong (Technology and Media)


SOURCES

[1] Lianne Korilin, ‘Driverless cars will be on UK roads by 2021, says government’ (CNN, 6 Feb 2019) <https://edition.cnn.com/2019/02/06/uk/driverless-cars-scli-gbr-intl/index.html> accessed 28 Nov 2019.


[2] Daisuke Wakabayashi, ‘Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam’ (The New York Times, 19 March 2018) <https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html> accessed 28 Nov 2019.


[3] Ian Bogost, ‘Can You Sue a Robocar?’ (The Atlantic, 20 March 2018) <https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/> accessed 28 Nov 2019.


[4] ibid.


[5] ibid.


[6] General Data Protection Regulation (EU) 2016/679, Article 17.


[7] Data Protection Act 2018.



