The Ethics of Crash Optimisation Algorithms

Patrick Lin started it. In an article entitled ‘The Ethics of Autonomous Cars’ (published in The Atlantic in 2013), he considered the principles that self-driving cars should follow when they encounter tricky moral dilemmas on the road. We all encounter these situations from time to time. Something unexpected happens and you have to make a split-second decision. A pedestrian steps onto the road and you don’t see him until the last minute: do you slam on the brakes or swerve to avoid him? Lin made the obvious point that no matter how safe they were, self-driving cars would encounter situations like this, and so engineers would have to design ‘crash-optimisation’ algorithms that the cars would use to make those split-second decisions.

In a later article, Lin explained the problem by using a variation on the famous ‘trolley problem’ thought experiment. The classic trolley problem asks you to imagine a trolley car hurtling out of control down a railroad track. If it continues on its present course, it will collide with and kill five people. You can, however, divert it onto a sidetrack. If you do so, it will kill only one person. What should you do? Ethicists have debated the appropriate choice for the last forty years. Lin’s variation on the trolley problem worked like this:

‘Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another. But what would be the ethically correct decision? If you were programming the self-driving car, how would you instruct it to behave if it ever encountered such a case, as rare as it may be?’ (Lin 2016, 69)

There is certainly value to thinking about problems of this sort.
But some people worry that, in focusing on individualised moral dilemmas such as this, the framing of the ethical challenges facing the designers of self-driving cars is misleading. There are important differences between the moral choice confronting the designer of the crash-optimisation system (whether it be programmed from the top down with clearly prescribed rules, or from the bottom up using some machine-learning system) and the choices faced by drivers in particular dilemmas. Recently, some papers have been written drawing attention to these differences. One of them is Hin-Yan Liu’s ‘Structural Discrimination and Autonomous Vehicles’. I recently interviewed Hin-Yan for my podcast about this and other aspects of his research, but I want to take this opportunity to examine the argument of that paper in more detail.
