Is There An Acceptable Level of Fatalities for Autonomous Vehicles?

March 7, 2016  |  By Michael O'Shea

As a practical matter, and regardless of today’s hype surrounding autonomous vehicles, a number of major obstacles must be overcome before they gain wide acceptance. These range from general consumer acceptance and a host of regulatory challenges to the most daunting of all: questions of consumer versus commercial liability. Nevertheless, these technologies are evolving apace, so to set the scene, here are some initial facts regarding the history and development of self-drive systems.

  • In 2004, the U.S. Defense Advanced Research Projects Agency (DARPA) challenged high-technology developers and university scientists to produce a first-generation fully autonomous vehicle, spurred on by a $1 million prize in an event entitled the Grand Challenge. The first group of entrants consisted largely of tech-retrofitted vehicles based on consumer brands such as HUMMER, Chrysler, VW, and Nissan, along with an odd collection of one-off cars such as Carnegie Mellon’s “Sandstorm”. The competition produced no clear winner, as all of the vehicles either broke down or crashed before completing a purpose-built obstacle course laid out in California.

  • By 2007, self-drive vehicles were successfully completing complex mazes of obstacles, suggesting that ‘07 marked the beginning of the true privatization of the self-drive “revolution”.

  • By 2013, four U.S. states (Nevada, Florida, California, and Michigan) had passed laws allowing limited public highway test programs, primarily associated with Google’s effort. In all cases, the rules required a driver to occupy the vehicle, and all test vehicles had to offer the driver an immediate ability to take control if the vehicle strayed off course.

  • In the 2014/2015 timeframe, several small Bay Area self-drive innovators, most notably Cruise Automation, along with five serious brand-manufacturer initiatives from Audi, BMW, Nissan, Mercedes-Benz, and Delphi Automotive, launched their own self-drive development programs.

  • And in January 2016, the National Highway Traffic Safety Administration (NHTSA) outlined a series of self-drive roll-out initiatives as part of the Obama Administration’s $4 billion last-year-in-office regulatory effort. Even though self-drive technologies were still navigating various practical beta phases, the administration’s “full steam ahead” approach suggested new concerns that could quickly make the technology more of a threat than a boon to the driving public.

The HMI Problem

From a pure vehicle-dynamics perspective, the creation of an efficient Human-Machine Interface (HMI) may represent one of the most daunting challenges the industry has ever faced. This is largely due to emotion rather than science: cars, trucks, and vehicles in general are not so much “ridden in” as “driven”, and the physical human experience of going from one place to another in one’s own vehicle, under one’s own control, offers a constellation of sensations ranging from pure satisfaction to outright road rage.

Consequently, where digital systems are cool and disciplined, people are neither, creating an intrinsically unstable friction between human and machine.

To illustrate the point, consider this excerpt from an accident report associated with Google test cars and human-driven vehicles taken directly from California DMV accident documents:

"Lexus RX450H (the "AV") in autonomous mode heading southbound on California St. in Mountain View, was stopped for a red light in the right lane (lane 2) at the Intersection of California St. and Shoreline Blvd. A Toyota Camry immediately behind the AV attempted to pass the AV on its right in the bike lane, in order to make a right turn onto Shoreline Blvd. While passing the AV, the Toyota's driver's side mirror brushed one of the AV's sensors located on the passenger's side of the AV. Neither vehicle was damaged."

Note that in this case, the self-driving vehicle was operating on urban surface streets at speeds between 0 and 5 mph. As a result, no one was injured, and body damage associated with the event was minimal.

However, the experience raises an obvious question: if human- and machine-driven vehicles can’t cooperate efficiently at that speed in a highly limited traffic environment, what is the likelihood of serious injury occurring at speeds of 35-50 mph on frontage or rural roads, and 55-80 mph on interstate highways?
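As a back-of-envelope illustration (an addition here, not from the accident report): crash energy grows with the square of speed, so the stakes rise far faster than the speedometer suggests. A minimal Python sketch:

```python
# Back-of-envelope physics: kinetic energy is proportional to speed
# squared (E = 1/2 * m * v^2), so when comparing two speeds the mass
# cancels and the energy ratio is simply (v_high / v_low) ** 2.
def energy_ratio(v_high_mph: float, v_low_mph: float) -> float:
    """Ratio of crash kinetic energies at two speeds."""
    return (v_high_mph / v_low_mph) ** 2

# A 65 mph interstate collision carries 169x the energy
# of the 5 mph parking-speed brush described above.
print(energy_ratio(65, 5))  # prints 169.0
```

Mass cancels out of the ratio, which is why the comparison holds for any vehicle regardless of size.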

Nevertheless, current statistics suggest a mixed bag of opinions when it comes to the threat of accidents and the consequent liabilities. For example, according to a 2015 self-drive accident analysis produced by the University of Michigan Transportation Research Institute, “...the current best estimate is that self-driving vehicles have a higher crash rate per million miles traveled than conventional vehicles, and similar patterns were evident for injuries per million miles traveled and for injuries per crash.”

However, in the same study the Institute also concluded that, “....the analysis compared the safety record of these vehicles with the safety record of all conventional vehicles in the U.S. for 2013 (adjusted for underreporting of crashes that do not involve a fatality).

Two important caveats should be considered when interpreting the findings. First, the distance accumulated by self-driving vehicles is still relatively low (about 1.2 million miles, compared with about 3 trillion annual miles in the U.S. by conventional vehicles). Second, self-driving vehicles were thus far driven only in limited (and generally less demanding) conditions (e.g., avoiding snowy areas). Therefore, their exposure has not yet been representative of the exposure for conventional vehicles.”
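The low-mileage caveat is, at bottom, a small-sample problem: with only about 1.2 million self-driven miles, a handful of crashes produces a very wide uncertainty band around any per-million-mile rate. The sketch below uses a hypothetical crash count (the exact figure is not reproduced here) and a rough normal approximation to the Poisson count to show just how wide that band is:

```python
import math

def crash_rate_ci(crashes: int, miles: float, z: float = 1.96):
    """Crash rate per million miles with an approximate 95% CI.

    Uses a normal approximation to the Poisson count (std dev of
    sqrt(k) crashes), which is crude for small counts -- precisely
    the study's point about limited exposure."""
    rate = crashes / miles * 1e6
    half_width = z * math.sqrt(crashes) / miles * 1e6
    return rate, max(rate - half_width, 0.0), rate + half_width

# Hypothetical: roughly a dozen crashes over the ~1.2 million
# self-driven miles the study cites.
rate, lo, hi = crash_rate_ci(crashes=11, miles=1.2e6)
print(f"{rate:.1f} per million miles (95% CI {lo:.1f}-{hi:.1f})")
```

With so few events, the interval spans several-fold in either direction, so a point comparison against the conventional-vehicle fleet says little on its own.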

The Liability Question

Obviously, these metrics represent early days in the technology’s evolution. Nevertheless, as more and more self-drive vehicles begin to operate in urban settings, the current number of incidents is likely to trend upward, at least for the foreseeable future, and some of those incidents may be more serious than we have witnessed so far. In turn, this consideration goes directly to the title of this paper: “Is There An Acceptable Level of Fatalities for Autonomous Vehicles?”

Since there are no statistics that currently apply to the question, perhaps a better way to consider the problem is to use analysis provided by insurance-risk professionals such as the well-regarded Insurance Information Institute (iii.org), and in that case the results are decidedly mixed. For example: “...The risk of an accident is unlikely to be completely removed since events are not totally predictable and automated systems can fail. In addition, the transition from hands-off driving to hands-on promises to be tricky.

The need for drivers to control the car in an emergency is fraught with questions, not just those involved in the automotive technology. What kind of training will people need to safely handle these semi-autonomous vehicles? How well prepared will drivers be to handle emergencies when the technology returns control to the driver? How will early-stage drivers gain the necessary experience and how will experienced drivers stay sharp enough when they are only infrequently called upon to react?”

On top of those questions, in the event of an accident a matter of commercial liability looms over the entire discussion, including who will be at fault: the self-driven “rider”, the driver of a third-party vehicle, or the autonomous vehicle’s manufacturer?

Although these considerations are still coalescing, in general it appears that the insurance industry is ready to accept that there will be a fair risk of injury or death when riding in driverless vehicles for the foreseeable future.

For example, consider these points from a February 2015 report compiled by the Insurance Information Institute:

  • “Google, the company that has been the public face of self-driving cars in the United States for the past few years, announced in May 2014 that it is building a fleet of vehicles without a steering wheel or role for a driver because its technology has not been able to successfully switch control back and forth from automated driving to the driver in an emergency and does not expect to be able to accomplish that soon. The prototype will have a top speed of 25 mph and will be summoned by a smartphone, in effect serving as an automated taxi service.

  • Other companies building autonomous cars said that they will continue to work on vehicles that will be able to safely make that switch. Volvo says that it expects to have its cars tested on city streets by consumers by 2017. However, experts say the size and the cost of the laser-powered sensors used to steer the driverless cars must be reduced before such cars can be put into mass production.

  • In a survey by IEEE, respondents were asked to specify the year by which some of today’s commonplace equipment will be removed from mass-produced cars; the majority said that rear-view mirrors, horns and emergency brakes will be removed by 2030, and that steering wheels and gas/brake pedals will follow by 2035.

  • In February 2014, federal agencies approved vehicle-to-vehicle (V2V) communications systems that will allow cars to “talk” to each other so that they know where other vehicles are and can compensate for a driver’s inability to make the right crash avoidance decisions because of blind spots or fast moving vehicles. V2V communication uses a very short range radio network that, in effect, provides a 360-degree view of other vehicles in close proximity. The Department of Transportation estimates that safety systems using V2V communications will be able to prevent 76 percent of crashes on the roadway.

These comments suggest that self-driving initiatives are going to emerge whether or not the public is ready for that eventuality. The industry will look to companies pushing the bar on connected-car technology for innovation and for next steps toward the future.

At the end of the day, while the totality of current incident reports and analytical statistics suggests that the public is likely to experience a higher-than-normal rate of self-drive vehicle incidents for some time yet, a complete understanding of the ultimate impacts posed by human- versus self-driven vehicle accidents will have to remain an open question for the time being.

So, will there be an “acceptable” level of fatal incidents in self-drive cars? One should hope that ultimately the answer will be “zero.” However, efficient self-drive safety programs won’t exist as a series of theoretical snapshots; instead, they will be based on lessons learned from real events, involving real people, over time.

What is your take on this issue within the automotive industry?

Topics: Autonomous/Driverless Cars

Michael O'Shea

Michael O’Shea is the Founder and CEO of Abalta Technologies. He is responsible for all aspects of executive management of Abalta and a direct participant in many client engagements, particularly in management advisory projects.
