September 10, 2019
In July, the California Public Utilities Commission granted Waymo (formerly the Google self-driving car project) the state’s first permit to test its driverless vehicles without safety drivers on public roadways. And, by the end of this year, the company plans to launch a driverless taxi service in Phoenix. Ford has promised the public a “fully autonomous vehicle in commercial operation” by 2021. Tesla, which has blazed a bumpy trail in semi-autonomous vehicles, has forecast the introduction of as many as a million Tesla “robo-taxis” on the road by the end of this year.
The generally accepted wisdom that driverless cars are the future – and the future is now – has presaged an influx of investor dollars and ambitious plans to scale up fleets of vehicles with no driver controls. The optimism has been as unfettered as the regulatory landscape – which is to say that this Wild West has no sheriff.
In 2016, the National Highway Traffic Safety Administration announced that its approach to this technological transition would be the light hand of guidance, and issued a list of vague conceptual statements. Last October, the Department of Transportation released its third iteration, Preparing for the Future of Transportation: Automated Vehicles 3.0. This brightly-colored, substance-free piece of public relations affirms NHTSA’s Orwellian view that although it has the authority to regulate automated driving systems, it shouldn’t. Check out some examples of NHTSA Newspeak in AV 3.0:
“The right approach to achieving safety improvements begins with a focus on removing unnecessary barriers and issuing voluntary guidance, rather than regulations that could stifle innovation.” (Fewer regulations equal more safety.)
“ADS developers are encouraged to use these safety elements to publish safety self-assessments to describe to the public how they are identifying and addressing potential safety issues.” (Industry-promulgated transparency is truth.)
“Delaying or unduly hampering automated vehicle testing until all specific risks have been identified and eliminated means delaying the realization of global reductions in risk.” (You must put yourself at risk to reduce risk.)
Indeed, the document is remarkable for its lack of attention to the most basic mandatory protections for the motoring public.
This summer, the agency sought comments for a rulemaking to exempt driverless vehicles that lack traditional human-machine interfaces, such as steering wheels or brakes, from the crash-avoidance (100-series) Federal Motor Vehicle Safety Standards that regulate those components. Specifically, NHTSA sought public comment on the near- and long-term challenges of testing and verifying the compliance of automated driving systems (ADS).
The Advance Notice of Proposed Rulemaking (ANPRM) grew out of requests by Google and GM to reconcile safety standards written for traditional motor vehicles with driverless vehicles (DVs). On February 4, 2016, NHTSA responded to several of Google’s concerns about how it could certify a vehicle that does not include manual controls, such as a steering wheel, accelerator pedal, or brake pedal. The response also provided a table listing those standards under which NHTSA could interpret Google’s ADS as the “driver” or “operator,” and a table listing those standards under which NHTSA could interpret the human occupant seated in the left front designated seating position as the ‘‘driver.’’ The agency interpreted the term “driver” as applying to the ADS.
In January 2018, GM filed a petition seeking an exemption so it could run 2,500 Zero Emissions Automated Vehicles on some undisclosed roads, and still meet FMVSSs, despite having no driver or driver controls. GM categorized the FMVSSs as those designed to interface with a human driver, such as manual controls; those that provide human drivers with information, such as telltales and indicator lamps; and features to protect human occupants, such as air bags, and argued that “its ADS–DVs without traditional manual controls require only the third category of requirements.”
Based on the issues Google and GM raised, the agency noted in the ANPRM that it was considering four different regulatory approaches: keep an FMVSS if the control is necessary for the safety of all vehicles, even if that means redundancies on DVs; axe a requirement that is no longer necessary for any vehicle; keep some FMVSSs for traditional vehicles only; and write separate control or equipment requirements for ADS–DVs. NHTSA also asked a series of questions about the pros and cons of these approaches and about compliance testing.
The docket attracted nearly 100 commenters, including the usual suspects – industry groups, such as the Alliance of Automobile Manufacturers and Global Automakers; individual manufacturers, such as Ford and GM; and safety advocates, such as the Center for Auto Safety. At the same time, a bipartisan, bicameral Congressional group has been seeking input from various stakeholders in advance of writing autonomous vehicle legislation.
Safety Research & Strategies, with its long history of safety advocacy, has submitted comments to both groups covering three topics: the lack of functional safety standards for critical vehicle controls; the lack of updated standards on the human-machine interface (HMI) of vehicle controls; and the lack of accessible data and interpretation tools to adequately monitor and identify vehicle systems for potential malfunctions.
We argue that the problems that more vehicle autonomy will bring can already be seen in current vehicle electronic failures and automakers’ poor human-machine interface designs. For example, the advent of keyless ignition vehicles with push-button Start/Stop has resulted in unintended consequences: carbon monoxide poisoning, rollaway crashes and easy thefts – hazard scenarios that were previously eliminated under the FMVSS 114 Theft Protection and Rollaway Prevention requirements applicable to traditional metal keys.
The lack of a functional safety standard for electronic controls results in scenarios in which a critical system intended to save lives can actually create a new hazard that can take lives. For example, in May, Fiat-Chrysler recalled 4.8 million 2014 to 2018 Chrysler, Dodge and Jeep models because of an electrical short circuit that prevents the driver from manually shutting off the cruise control or disengaging it with the brakes, resulting in the vehicle maintaining its current speed or even accelerating.
The current opacity of vehicles’ internal diagnostic and operational data is another huge problem, because it hinders the ability of outside entities, such as NHTSA or consumers, to independently examine, document and identify potential vehicle-related failures.
As a vehicle takes over most of the operational functions, the amount of data it must gather, assess, and store, and the speed at which it must process this information, will increase exponentially. Currently, autonomous test vehicles “typically generate between 5TB and 20TB of data per day, per vehicle.” Even in current Level 2 vehicles (defined by NHTSA as “partial automation,” in which the vehicle has combined automated functions, like acceleration and steering, but the driver must remain engaged in the driving task and monitor the environment at all times – think Tesla’s Autosteer feature), the amount of data transmitted between modules, which is stored to widely varying degrees among vehicles, is extraordinary, and the tools available to the public, law enforcement and diagnosticians are generally limited to OBD II diagnostic scans and Event Data Recorders.
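To put those volumes in perspective, a quick back-of-the-envelope calculation (a sketch in Python; the decimal unit convention and the active-hours assumption are ours, not part of the cited figure) converts 5–20 TB per day into the sustained data rates a vehicle would have to produce:

```python
# Back-of-the-envelope check on reported AV data volumes (5-20 TB/day/vehicle).
# Assumes decimal units (1 TB = 10**12 bytes); the 8-hour figure assumes
# data is generated only while the vehicle is actually driving.

TB = 10**12  # bytes

def sustained_rate_mb_s(tb_per_day: float, active_hours: float = 24.0) -> float:
    """Average rate in MB/s needed to produce tb_per_day terabytes."""
    seconds = active_hours * 3600
    return tb_per_day * TB / seconds / 10**6

low = sustained_rate_mb_s(5)        # ~58 MB/s around the clock
high = sustained_rate_mb_s(20)      # ~231 MB/s around the clock
shift = sustained_rate_mb_s(20, 8)  # ~694 MB/s over an 8-hour test shift
print(f"{low:.0f} MB/s, {high:.0f} MB/s, {shift:.0f} MB/s")
```

Even the low end of that range dwarfs what an OBD II scan tool or an Event Data Recorder download can capture.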
This has already led to motorists being charged civilly and criminally for at-fault crashes without the ability to properly defend themselves. Despite the plethora of data circulating in a vehicle, it may not be recorded unless a preset active fault is flagged. Further, the publicly available tools used to examine vehicle and driver behavior, which include scan tools to extract data from the Event Data Recorder, can access only a fraction of what may be needed, or of what is available to the manufacturer.
For sure, NHTSA’s hands-free approach to steering the revolution has its fans. Global Automakers argued that a safety standard, such as FMVSS 103 (Windshield Defrosting and Defogging Systems), could be dropped because it “is intended to address forward visibility for the human driver, as measured from a specified eye point at the “driver’s” designated seating position. In this case, the availability of defroster/defogger systems becomes more of a customer satisfaction issue in these vehicles, which should be left to manufacturer discretion to address.” Yeah, everyone wants to ride in a vehicle where you can’t see out of the front windows. And if the ADS’s steering system fails and the vehicle is headed for a tree, who wants to see that?
But many more commenters to the NHTSA docket pointed out that the agency’s desire to forge ahead left wide gaps in implementation. There has been little to no discussion of many aspects of this complex evolution – among them, the interconnectedness of autonomous vehicles and the roadways they traverse, raised by the American Association of State Highway and Transportation Officials (AASHTO), and state drivers’ licensure laws, raised by the American Association of Motor Vehicle Administrators (AAMVA).
They also offered pointed criticisms of the current safety assurance process of test drives. Former Lockheed Martin systems engineer Michael DeKort, who, in 2008, won IEEE’s Carl Barus Award for Outstanding Service in the Public Interest for his efforts to expose safety and security problems with the U.S. Coast Guard Deepwater Project, chimed in to “strongly urge NHTSA to look past the hype and to the aerospace, DoD and the FAA regarding proper development and testing due diligence of not only the autonomous vehicle system but the use and qualification of proper simulation. This as a tenable and safe alternative to the untenable and reckless process of public shadow and safety driving being used now by most driverless vehicle makers.”
It is impossible to drive the one trillion miles or spend over $300B to stumble and restumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. This occurs two ways. The first is through handover or fall back – a process that cannot be made safe for most complex scenarios, by any monitoring and notification system, because they cannot provide the time to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios. In order to do that, AV makers will have to run thousands of accident scenarios thousands of times. That will cause thousands of injuries and deaths. The solution is to replace 99.9% of that public shadow and safety driving with aerospace/DoD/FAA simulation technology and systems/safety engineering practices. (Not the gaming architecture-based systems most are using now. That technology has critical real-time, model fidelity and loading/scaling issues. These will cause improperly trained systems, false confidence and real-world tragedies.)
Strange bedfellows Advocates for Highway and Auto Safety and the National Automobile Dealers Association challenged NHTSA’s premise that regulations are barriers, and that driverless cars shouldn’t be subject to certain safety standards because they reference actions by human drivers. From NADA:
Proposed changes to various FMVSS to accommodate ADS-DVs should preserve the safety purpose of those FMVSS. For example, while an ADS-DV with fully automated steering may not need a steering wheel to safely navigate the roads, the ADS-DV should be able to maintain at least the same level of steering performance as an experienced and well-trained human driver operating a vehicle with a steering wheel.
Science-for-hire behemoth Exponent made a similar point: “To ensure demonstrable equivalence between any new certification approach to current performance requirements, there must be a scientific or engineering linkage to assure vehicle level performance characteristics equivalent to existing FMVSS requirements for conventional vehicles driven by a human.”
Beyond the docket, embedded systems experts have raised concerns about the rapid adoption of autonomous technology outside of any required safety protocol. For example, where is the discussion of fail-safes? When autonomous technology goes awry, will the human occupants have an intervention mechanism? Dr. Philip Koopman, co-founder and CTO of Edge Case Research, an autonomous systems safety consulting company, and a professor at Carnegie Mellon University, regularly discusses the issues and conflicts presented by vehicle autonomy, including the unrealistic expectations of human drivers in driverless cars. From his blog Safe Autonomy:
High-end driver assistance systems might be asking the impossible of human drivers. Simply warning the driver that (s)he is responsible for vehicle safety doesn't change the well-known fact that humans struggle to supervise high-end autonomy effectively, and that humans are prone to abusing highly automated systems.
One doesn’t have to move far beyond the happy promises of industry and its primary cheerleader, NHTSA, to see that their confidence in a carefree transition to driverless vehicles is not really based on anything. Maybe that’s why public confidence in automotive technology is low. A 2019 Ipsos/Reuters poll found that “half of U.S. adults think automated vehicles are more dangerous than traditional vehicles operated by people, while nearly two-thirds said they would not buy a fully autonomous vehicle.”