By June 2022, automakers reported almost 400 crashes of vehicles with partially automated driver-assist systems, including 273 Teslas.
Consider a situation in which an AV carrying a family of four notices a ball rolling across the road. As the AV approaches, a child runs out onto the road to get the ball. Should the vehicle endanger its passengers’ lives by swerving off the road? Or should it continue on its path, ensuring its passengers’ safety at the expense of the child’s life?
Carmakers, car buyers, and policymakers must resolve several intricate issues before vehicles can be given full autonomy. In a research project by Bonnefon, most of the tested participants indicated that they believed vehicles should be programmed to collide with something rather than run over pedestrians, even at the cost of killing the vehicles’ passengers. However, many of the same study participants were reluctant to purchase such vehicles. In fact, they would rather ride in an AV that prioritizes their own safety above that of pedestrians.
The researchers concluded that people will be less likely to buy AVs if regulations require these vehicles to prioritize pedestrians over passengers. Although research indicates that autonomous vehicles could potentially reduce traffic, diminish air pollution, and save thousands of lives each year, a limited market for AVs would slow their development.
The ethics of autonomous vehicles
While AVs have been designed to reduce traffic conflicts, they may sometimes have to choose between two evils: knocking pedestrians down, or sacrificing themselves and their passengers to spare pedestrians. But how will AVs make these critical decisions? This is where ergonomists and safety experts are helping to pave the way for automation technology.
Carmakers are now cooperating with ergonomists and safety experts, alongside ethicists and specialists from other disciplines. These researchers are considering algorithms that would let autonomous vehicles simulate the decisions made by human drivers. Such algorithms could assess alternative crash outcomes and then rank them. However, as the example of the child and the ball illustrates, even the ‘least bad’ option could result in casualties.
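To make the idea of ranking crash outcomes concrete, here is a minimal sketch in Python. The `Outcome` class, the risk estimates, and the weighted cost function are all hypothetical illustrations, not the method of any real AV system; the weights simply encode the ethical trade-off discussed above (raising the pedestrian weight prioritizes pedestrians over passengers).

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A candidate maneuver and its predicted consequences (illustrative)."""
    maneuver: str
    passenger_risk: float   # estimated probability of passenger injury, 0..1
    pedestrian_risk: float  # estimated probability of pedestrian injury, 0..1


def rank_outcomes(outcomes, pedestrian_weight=1.0, passenger_weight=1.0):
    """Rank maneuvers from least to most harmful under a weighted cost.

    The weights are the ethically loaded part: they decide whose risk
    counts for more when no option is free of harm.
    """
    def cost(o):
        return (pedestrian_weight * o.pedestrian_risk
                + passenger_weight * o.passenger_risk)
    return sorted(outcomes, key=cost)


candidates = [
    Outcome("brake hard", passenger_risk=0.1, pedestrian_risk=0.4),
    Outcome("swerve off road", passenger_risk=0.6, pedestrian_risk=0.05),
    Outcome("continue", passenger_risk=0.0, pedestrian_risk=0.9),
]

# Weighting pedestrians twice as heavily favors swerving off the road.
ranked = rank_outcomes(candidates, pedestrian_weight=2.0)
```

Note that even the top-ranked maneuver here carries nonzero risk, which is exactly the ‘least bad’ problem the text describes.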
AI vs humans vs autonomous vehicles
Ragunathan Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University, said: “AI does not have identical cognitive capabilities with humans.”
Instead, AVs will probably decide based on speed, weather, road conditions, distance, and other data gathered by various sensors and cameras. Autonomous vehicles will most likely detect obstacles on their path and calculate the best course of action based on the speed at which they are moving.
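As a simple illustration of a speed-based calculation of this kind, the sketch below uses the standard stopping-distance formula, d = v·t + v²/(2a), to check whether braking can avoid a detected obstacle. The reaction time and deceleration values are illustrative assumptions, not figures from any real AV system.

```python
def can_stop_in_time(speed_mps, obstacle_distance_m,
                     reaction_time_s=0.1, deceleration_mps2=6.0):
    """Return True if braking avoids the obstacle.

    Uses the kinematic stopping distance:
      distance = speed * reaction_time + speed**2 / (2 * deceleration)

    reaction_time_s and deceleration_mps2 are hypothetical values
    chosen only for the example.
    """
    stopping_distance = (speed_mps * reaction_time_s
                         + speed_mps ** 2 / (2 * deceleration_mps2))
    return stopping_distance <= obstacle_distance_m
```

At city speeds the margin is comfortable, but it shrinks quadratically with speed, which is why the sensing-to-decision loop must run fast enough to act before the margin disappears.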
A major challenge is collecting and processing data quickly enough to prevent dangerous situations before they occur.
We are just a few years away from seeing AVs on our roads. Beyond acknowledging the value of research and consulting relevant organizations, it is encouraging to hear companies cite ergonomics and other essential elements as priorities. For ergonomists and safety experts, this is an exciting time as they work closely with automakers to design these vehicles for all users.
Conclusion: who’s to blame?
It is time to return to our original question: who is responsible for accidents involving autonomous vehicles? Car manufacturers, algorithm programmers, or pedestrians? All of these parties play a role in preventing accidents, so all of them have to be considered.
It will ultimately be up to the autonomous vehicle to decide who survives and who dies. It is important to ensure that philosophers, engineers, ethicists, programmers, ergonomists, health and safety professionals, and other experts are involved in this decision.
