New Analysis Challenges Bold Tesla Claims

On May 7, 2016, Joshua Brown, a Tesla enthusiast, died when his 2015 Model S, in Autopilot mode, collided with a tractor trailer crossing a highway near Williston, Florida. A month later, the National Highway Traffic Safety Administration (NHTSA) opened an investigation to throw open the hood of Tesla’s technology and probe the design and performance of its Automatic Emergency Braking (AEB) system, human-machine interface issues, modifications Tesla had made to its Autopilot and AEB systems, and Tesla crash data.

Six months later, the Office of Defects Investigation closed the Preliminary Evaluation saying it could find nothing wrong – in fact, the agency’s examination of the crash data showed that Tesla’s Autopilot system, beefed up with Autosteer, was a god-damned miracle! By the agency’s calculations, airbag deployments in Tesla vehicles dropped by 40 percent after the installation of Autosteer – either as original equipment or through an over-the-air software update.

But a new analysis of the original data by Randy Whitfield of Quality Control Systems (QCS) Corp. actually shows the opposite. For the subset of vehicles in which all of the relevant data are known – those with reported mileage before and after Autosteer was installed, and the exact mileage at which the technology was installed – Whitfield found that the airbag deployment crash rate increased by 59 percent after Autosteer was added. (NHTSA’s Implausible Safety Claim for Tesla’s Autosteer Driver Assistance System)

He concluded: “Our replication of NHTSA’s analysis of the underlying data shows that the Agency’s conclusion is not well-founded.”

Ahem. That understatement doesn’t begin to capture all of the ridiculous but troubling elements of this story. Sean Kane, president and founder of Safety Research & Strategies and a frequent collaborator with Whitfield, says the agency’s bad math, coupled with its resistance to transparency, bodes ill for public safety amid the push for unregulated autonomous vehicles.

“NHTSA has shown its unwillingness to regulate the safety of the electronics that control modern vehicles and to properly assess potential safety defects in increasingly complex vehicles. This once-storied public health agency, built on epidemiological principles, now resorts to hiding data and promoting the business interests of companies they were entrusted with regulating as it promotes autonomous vehicles and surrenders its oversight role in favor of industry ‘guidance,’” Kane says.

NHTSA Exponent-izes the Tesla Data

In the report NHTSA issued along with the Closing Resume of its 2016 Tesla investigation, the agency claimed that Tesla’s addition of Autosteer had a significant and measurable effect on safety:

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to [fn. 21] and after Autopilot installation. [fn. 22] Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.

That NHTSA decided to feature airbag deployments as a determinant of the efficacy of autonomous technology was odd, because in its Information Request to Tesla, the agency never asked Tesla for airbag deployment numbers. Nonetheless, for the ensuing year and a half, Tesla dined out on this claim, trotting out the 40 percent reduced crash rate whenever another one of its vehicles crashed. By May 2018, the agency was forced by the journalistic clamor for the basis of this startling statistic to walk it back. But only Whitfield persisted – and succeeded – in obtaining the underlying data, revealing how statistically weak the figure was.

Crash rates are born of numerators and denominators. In this case, the numerator was the number of airbag deployments. The denominator was vehicle miles travelled – which represented the vehicle’s exposure to the risk of a crash, and therefore to a scenario in which the airbag might deploy. To pin down the denominator, NHTSA needed to know the vehicle mileage at the time Autosteer was installed, but for most of the vehicles in the study, Tesla didn’t provide that data.

Whitfield found that “the actual mileage at the time the Autosteer software was installed appears to have been reported for fewer than half the vehicles NHTSA studied.” Of the 43,781 vehicles studied, only 5,714 vehicles – or 13 percent – had complete mileage data and driving experience before and after installing Autosteer.

For vehicles missing the exact Autosteer installation mileage, “NHTSA treated the exposure mileage that could not be classified as either before or after the installation of Autosteer as if it were zero mileage,” Whitfield says. “This results in an undercount of the denominators. The problem is the under-count affected the ‘before’ category much more than it did the ‘after’ category.”
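To see how this kind of denominator undercount can manufacture an apparent improvement, consider a minimal sketch in Python. The numbers are invented for illustration – nothing here is Tesla’s actual data – and the fleet is assumed to have an identical true deployment rate before and after installation; the only thing that changes is that half of the “before” mileage is dropped as unclassifiable, the way Whitfield says NHTSA’s treatment effectively did.

# Hypothetical illustration (invented numbers, not Tesla's data) of how
# treating mileage that cannot be classified as "before" or "after"
# Autosteer as zero can manufacture an apparent safety improvement.

MILLION = 1_000_000.0

def rate_per_million(deployments, miles):
    """Airbag-deployment crash rate per million vehicle miles travelled."""
    return deployments / (miles / MILLION)

# Assume the true crash rate is identical before and after installation:
# 1 deployment per million miles in both periods.
before_deployments, true_before_miles = 300, 300 * MILLION
after_deployments, after_miles = 300, 300 * MILLION

# Suppose the exact installation mileage is unreported for many vehicles,
# so half of the pre-installation exposure cannot be classified. Treating
# that mileage as zero shrinks the "before" denominator while the
# deployments counted against it stay put.
unclassified_before_miles = 150 * MILLION

reported_before_rate = rate_per_million(
    before_deployments, true_before_miles - unclassified_before_miles)
reported_after_rate = rate_per_million(after_deployments, after_miles)

print(f"True rate, both periods: {rate_per_million(300, 300 * MILLION):.2f}")
print(f"Reported 'before' rate:  {reported_before_rate:.2f}")  # inflated to 2.00
print(f"Reported 'after' rate:   {reported_after_rate:.2f}")
improvement = 1 - reported_after_rate / reported_before_rate
print(f"Apparent improvement:    {improvement:.0%}")           # 50%, from nothing

In this toy example a fleet whose safety never changed appears to get 50 percent safer, purely because the missing mileage lands disproportionately in the “before” denominator.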

The Nearly Two-Year Battle for the Data

It took Whitfield – plus a lawyer – 641 days to get the data.

Whitfield was suspicious of NHTSA’s findings and the lack of back-up data from “one of the most incessantly self-professed data-driven government agencies,” and sought to replicate its analysis. On February 24, 2017, QCS filed a Freedom of Information Act (FOIA) request for “all of the mileage and airbag deployment data supplied by Tesla analyzed by ODI to calculate the crash rates shown in Figure 11…” He also asked for any “statistical formulas, models, adjustments, sample weights, and/or any other data or methods relied upon to calculate the crash rates.”

At the end of March, the agency promised to respond by mid-April. Three months later, when no response seemed forthcoming, Whitfield filed a FOIA lawsuit for the data in U.S. District Court in Washington D.C. On July 21, 2017, NHTSA notified Whitfield that it had denied his request, based on two exemptions to the FOIA – Exemption 4, which shields information that could cause competitive harm, and Exemption 5, which shields an agency’s “deliberative process” from public view. NHTSA tends to hand these out like after-dinner mints. In the past, the agency has misused Exemption 5 to deem any piece of information – a photograph, a number – a critical part of its deliberative process, and deny a FOIA request. (DOT Settles Lawsuit over Toyota UA Documents, New Congressional Inquiry Raises More Questions)

On September 30, 2018, U.S. District Judge Dabney L. Friedrich denied motions for summary judgment (a favorable ruling) by both QCS and DOT. However, in her 13-page ruling ordering the parties to prepare for further proceedings, Judge Friedrich made it abundantly clear that she found both Tesla’s claims of competitive harm and NHTSA’s claims of deliberative process to be less than persuasive.

On the matter of competitive harm, Judge Friedrich scratched her head over Tesla Director of Field Performance Engineering Matthew L. Schwall’s lengthy December 20, 2017, declaration describing the various ways the data could reveal “proprietary secrets.” She methodically eviscerated Schwall’s five arguments – a shorter version might be: You wrote many, many words. None support your position.

Then, she swept aside NHTSA’s arguments that the Office of Defects Investigation’s Jeffrey Quandt used some super-secret deliberative methods that could not be exposed to the light of day:

It thus appears that Quandt performed a straightforward mathematical calculation involving categories of data clearly identified in Figure 11. Based on Figure 11 and his declaration, it appears that Quandt simply divided the total number of airbag activations by the total number of miles driven to determine the average crash rate (per million miles) for select Tesla vehicle models (both before and after Autosteer installation).

Following this judicial beat-down, NHTSA told Tesla that it was rescinding its grant of Confidential Treatment for the data QCS requested and turned it over in late November. (The data provided to QCS by NHTSA is available here.)

The Moral of the Story

Last May, the American Automobile Association (AAA) released the results of its latest survey tracking consumer trust in automotive autonomous technology, and found that it has “quickly eroded. Today, three-quarters (73 percent) of American drivers report they would be too afraid to ride in a fully self-driving vehicle, up significantly from 63 percent in late 2017. Additionally, two-thirds (63 percent) of U.S. adults report they would actually feel less safe sharing the road with a self-driving vehicle while walking or riding a bicycle.”

And stories like this aren’t going to move the numbers upward.

NHTSA owes its public health mission and the driving public due care and transparency in guiding the transition. Instead, the agency has let the industry auto-steer us toward its next big business model, while cheering from the sidelines.

We can only hope the burns sustained from the exploding 40-percent claim will discourage NHTSA in the future from throwing out statistical spitballs, providing automakers with marketing copy and trying to hide from independent investigators.

 

Quality Control Systems Corp. Sues DOT for Tesla Data

Quality Control Systems (QCS) Corp. has filed a Freedom of Information Act (FOIA) lawsuit in the U.S. District Court for the District of Columbia in pursuit of Tesla airbag deployments data that the National Highway Traffic Safety Administration (NHTSA) has withheld from public view.

R. A. Whitfield, the company’s director, said that the company wanted to test the validity of a claim made by NHTSA that airbag deployments in Tesla vehicles dropped by almost 40 percent after the installation of a component of Tesla’s Autopilot system, Autosteer. NHTSA asserted this decrease in a report accompanying the Closing Resume of Preliminary Evaluation 16-007. The investigation was prompted by the May 2016 death of Joshua Brown, a Tesla enthusiast who was driving his Tesla Model S in Autopilot mode when it crashed into an 18-wheel tractor-trailer truck that was turning left in front of it on US 27A, west of Williston, Florida. The report stated:

“ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.”

Whitfield says he wants to know whether the methodology NHTSA used is scientifically valid and whether its results can be replicated. Other questions include whether the reduction in crash rates is actually due to Autosteer itself and whether the claimed crash reductions could be expected to continue over a longer period of time.

“The surprising improvement in crash safety that NHTSA associates with Autosteer would be very welcomed if the dramatic safety claims prove to be scientifically sound. But it is concerning that the crash reductions are associated with the installation of Autosteer, rather than the actual use of Autosteer,” Whitfield says. “And NHTSA’s analysis is just as astonishing for the fact that it lacks the most basic information necessary for reaching well-founded conclusions about the claimed crash rate reductions. It is very remarkable that the published description of the Agency’s findings do not meet long-established scientific standards that would allow for an assessment of statistical confidence intervals or of statistical significance. Even the numerators and the denominators of the calculated crash rates are AWOL.”

NHTSA’s Office of Defects Investigation opened the Tesla probe on June 28, 2016, focusing on whether the Automatic Emergency Braking (AEB) or Autopilot systems had functioned as designed, increasing the risk of a crash. It closed six months later with no defect finding, saying that the system performed as designed, and blaming Brown for the crash. Tesla’s four responses submitted to the public investigation file were almost wholly redacted. For more information about PE16007 and its lack of transparency, read Autonomous Vehicles, the Beta Test Coming to a Roadway Near You.

Safety Research & Strategies has long advocated for NHTSA transparency. For example, in February 2014, SRS submitted comments in advance of the agency finalizing its 2014 – 2018 Strategic Plan, highlighting its concern with NHTSA’s lack of transparency. SRS founder and President Sean Kane wrote: “Access to NHTSA’s investigations and data are increasingly difficult and expensive for the public and researchers as the agency assigns significant costs to provide information in response to FOIA requests. In some cases they have also refused to release information that should be public, requiring FOIA litigation that has cost the Agency thousands of tax-payer dollars to settle.”

Since 2010, SRS has sued the Department of Transportation six times seeking public records on everything from child safety seats to unintended acceleration. All of these cases have been settled to our satisfaction. The four against NHTSA have ended with the agency agreeing to turn over more records and paying our fees, before a court judgement was rendered. You can read about our latest FOIA lawsuit here.

Whitfield says such an important conclusion by the agency should not be based on data that the government is withholding from researchers who want to examine NHTSA’s results.

“If the safety benefits of Autosteer are as positive as the Agency claims, why wouldn’t they want independent scientists to have the data in order to replicate these extraordinary results?” Whitfield asked.

 

Autonomous Vehicles – The Beta Test Coming to a Roadway Near You

The National Highway Traffic Safety Administration’s Automated Vehicles web page breathlessly forecasts: “Vehicle safety technologies signal the next revolution in roadway safety. We see great potential in these technologies to save lives—more than 30,000 people die on our roads every year and we can tie 94 percent of crashes to human choice—transform personal mobility and open doors to communities that today have limited mobility options.”

Sounds like an amazing marvel of tomorrow – and none too soon in the wake of recent news that traffic fatalities may have risen 6 percent over 2015. The National Safety Council estimated last month that 40,200 people died in motor vehicle crashes in 2016 – the highest annual death toll since 2007. The increase comes on the heels of an 8 percent rise in traffic deaths in 2015. The two-year jump would be the largest in 53 years, according to the safety group.

But will autonomous vehicles – already driving among us with little to no oversight, regulation, or data collection – really deliver on their promise?

Already, two Tesla drivers have died in crashes that occurred while the vehicle was in Autopilot mode. In January 2016, a 23-year-old Chinese man died when his Tesla Model S crashed into a road-sweeper on a highway south of Beijing, according to a report on a Chinese government news channel. Five months later, Joshua Brown, a 40-year-old Tesla enthusiast died when his Tesla Model S crashed into a semi-trailer that had made a left-hand turn in front of it. NHTSA later said that the Tesla’s sensing system failed to distinguish the white tractor trailer against the sky and did not apply the brakes.  

By September, Tesla Motors Chief Executive Elon Musk announced that Tesla would impose limits on the vehicle’s semi-autonomous Autopilot function – limits that potentially would have prevented Mr. Brown’s death. According to news reports, the improvements were delayed by fears of the system over-reacting to information in the driving environment.

Musk said he had wanted to improve Autopilot’s capabilities last year but was told it was impossible to do so without incurring more “false positives.” Two months after the Brown crash, Musk tweeted publicly that supplier Bosch had given him encouraging news about improvements to its radar.

“I wish we could have done it earlier,” Musk said in an interview. “The perfect is the enemy of the good.”

Feds Bypass Regulation

The federal government, for the most part, has responded to the advent of automotive autonomy by vigorously shaking its pom-poms, consistently declaring its intention to “speed the deployment of lifesaving technology.”

From The Federal Automated Vehicle Policy: “At each level, the safety potential grows as does the opportunity to improve mobility, reduce energy consumption and improve the livability of cities. To realize these tremendous benefits, NHTSA believes it should encourage the adoption of these technologies and support their safe introduction.”

From the preamble of NHTSA Enforcement Guidance Bulletin 2016–02: Safety-Related Defects and Emerging Automotive Technologies: “As the world moves toward autonomous vehicles and innovative mobility solutions, NHTSA is interested in facilitating the rapid advance of technologies that will promote safety.”

From NHTSA’s Automated Vehicles website: “The DOT Federal Automated Vehicles Policy sets the framework for the safe and rapid deployment of these advanced technologies.”

To hurry things along, NHTSA has issued two guidance bulletins, accompanied by requests for comments, and held two public meetings billed as an exploration of related topics, such as adapting the Federal Aviation Administration’s pre-market approval approach as a replacement for self-certification.

Enforcement Bulletin

First, NHTSA re-asserted its powers under The Safety Act to regulate autonomous vehicles. In April, the agency published in the Federal Register Enforcement Guidance Document 2016-02, a brief declaration of the agency’s “broad enforcement authority, under existing statutes and regulations, to address existing and emerging automotive technologies—including its view that when vulnerabilities of such technology or equipment pose an unreasonable risk to safety, those vulnerabilities constitute a safety-related defect—and suggests guiding principles and best practices for motor vehicle and equipment manufacturers in this context.”

The five-page Federal Register Notice walked readers through some of the seminal court cases of the 1970s that established enforcement standards: The 1975 “Wheels” decision, involving broken wheels in GM pickup trucks, which defined a safety defect; and the 1977 Pitman Arms Case, involving the failure of the steering mechanism in GM Cadillacs, which defined the concept of “unreasonable risk to safety.”

Then, the agency defined automated vehicle technologies, systems, and equipment – including software, even code that enables devices not located in or on the vehicle – as motor vehicle equipment, whether original equipment or after-market.

The agency warned would-be violators to iron out the kinks in their systems before bringing their products to market and to promptly follow the requirements of The Safety Act. At the same time, the agency was clear that it was not establishing a binding set of rules, or implementing a one-size-fits-all enforcement policy. In fact, it rallied its all-purpose, case-by-case approach to the cause:

“NHTSA’s statutory enforcement authority is general and flexible, which allows it to keep pace with innovation.” (To the contrary, the agency’s history has repeatedly shown that it trails innovation badly, that it will ignore deaths and injuries caused by changes in automotive technology until Congress forces it to act, or regulates after a particular technology – warts and all – is already in widespread use, and codifies bad designs. For example, the agency has yet to address, via regulation, the rollaway and carbon monoxide poisoning hazards introduced by keyless ignitions.)

And, the agency signaled that its approach would be expansive:

“NHTSA considers the likelihood of the occurrence of a harm (i.e., fire, stalling, or malicious cybersecurity attack), the potential frequency of a harm, the severity of a harm, known engineering or root cause, and other relevant factors. Where a threatened harm is substantial, low potential frequency may not carry as much weight in NHTSA’s analysis.”

In the case of an unprotected network that hackers might access, the agency said that it would weigh several factors in determining the probability of a malicious cyber-attack: the amount of time that had elapsed since the vulnerability was discovered; how hard it would be to breach the system; the level of expertise and equipment needed; whether the public had access to information about how the system works; and the window of opportunity to exploit the system.

NHTSA offered the following example of a foreseeable vulnerability that might trigger agency action, even if no incidents have occurred:

“If a cybersecurity vulnerability in any of a motor vehicle’s entry points (e.g., Wi-Fi, infotainment systems, the OBD–II port) allows remote access to a motor vehicle’s critical safety systems (i.e., systems encompassing critical control functions such as braking, steering, or acceleration), NHTSA may consider such a vulnerability to be a safety-related defect compelling a recall.”

The enforcement bulletin drew 37 commenters. (Safety Research & Strategies was among them, expressing concern that the framing language in the Guidance Bulletin was contradictory and its emphasis misplaced: “More importantly, we note that the agency is, in fact, doing very little to regulate automotive software and new technology, and absent rulemaking in this area, the rapid cycle automotive defect crises will continue and potentially accelerate.”)

The Telecommunications Industry Association re-stated a position it has long held – that the agency has no jurisdiction over its products: “We are concerned that NHTSA’s proposed guidance would potentially bring a broad range of technologies under the agency’s enforcement authority beyond what is intended by the governing statute.”

Carmakers were more alarmed by the agency’s intention of enforcing The Safety Act as it relates to cyber security. Tesla, the Alliance of Automobile Manufacturers and the Global Automakers all pushed back against the agency’s intention to treat a cyber-vulnerability as a defect:

“The Alliance submits, however, that a defect related to motor vehicle safety does not automatically exist merely because the engineering or root cause of a cybersecurity “vulnerability” is known. As discussed in more detail below, a theoretical “vulnerability” to potential system failures through hacking or otherwise is not the same as an actual “risk” that is currently present in an item of motor vehicle equipment.”

NHTSA responded in the Final Notice of its enforcement bulletin, published that September, by backing off, saying that it would take up cyber-security issues in future interpretations and guidance.

The Federal Automated Vehicles Policy

In tandem, the agency published the Federal Automated Vehicles Policy. The 116-page document, also released for comment in September 2016, was “intended as a starting point that provides needed initial guidance to industry, government, and consumers.” The document outlined what the agency deems best practices in safe design, development and testing prior to sale or deployment on public roads.

NHTSA declined to undertake actual rulemaking “to speed the delivery of an initial regulatory framework and best practices to guide manufacturers.”

Instead, the Guidance is full of fine, vague language regarding system safety, cyber security, consumer privacy and education. Automakers should “follow a robust design and validation process based on a systems-engineering approach with the goal of designing HAV systems free of unreasonable safety risks.” They should “follow guidance, best practices, design principles, and standards developed by established standards organizations.” Given how automakers have introduced innovations such as keyless ignitions and drive-by-wire systems without regulations, this gives us little comfort.

The agency also noted its interest in automakers’ definitions of the Operational Design Domain (ODD) of their HAVs – meaning the vehicle’s capabilities according to roadway types, the geographic area, the speed range and the environmental conditions in which it will operate – and its Object and Event Detection and Response (OEDR) – how it detects and responds to other vehicles, pedestrians, cyclists, animals, and objects in a variety of conditions.

The Model State Policy is another aspect of the guidance document. Neither NHTSA nor automakers want a patchwork of state regulations impeding the agency’s damn-the-torpedoes strategy. In fact, it advises states to “evaluate their current laws and regulations to address unnecessary impediments to the safe testing, deployment, and operation of HAVs.” So, it has been working with the administrators of state Departments of Motor Vehicles to develop the first set of ground rules for states that allow manufacturers to test their HAVs on public roads. (See what California is doing below.) NHTSA wants each state to create a special bureaucracy for automated vehicles, including a “jurisdictional automated safety technology committee,” to establish rules and authority for regulating autonomous cars, including registrations and applications for testing vehicles and licenses for test drivers.

In the absence of federal regulations, the agency intends to rely on some new regulatory tools. In the short term, the agency wants to launch a new quasi-certification report called a Safety Assessment. Safety Assessments are used by the Nuclear Regulatory Commission and the Food and Drug Administration in various forms as a way of methodically evaluating risks to food safety and the handling of nuclear waste. NHTSA is proposing that manufacturers voluntarily provide reports about how closely they are hewing to the recommendations in the guidance document. Automakers and suppliers would submit these assessments to the Chief Counsel’s Office outlining their adherence to NHTSA’s broad outlines and their timelines for testing and deployment on public roads. They would cover data recording and sharing; privacy; system safety; vehicle cybersecurity; human machine interface; crashworthiness; consumer education and training; registration and certification; post-crash behavior; federal, state and local laws; ethical considerations; operational design domain; object and event detection and response; and fall back (minimal risk condition).

Automakers probably have plenty of time to prepare. NHTSA won’t publish a Federal Register notice implementing this reporting until it clears the Paperwork Reduction Act, a 1995 law requiring that any public information request get Office of Management and Budget approval. Given the anti-regulatory bent of the GOP Congress and the near-daily turmoil that is the Trump White House, we don’t see any disclosures happening anytime soon.

In the long term, NHTSA is looking at several options to ensure that HAVs enter the marketplace safely. One is a pre-market approval process, modeled after that used by the Federal Aviation Administration. Another is a hybrid certification and approval process, in which manufacturers could certify compliance with FMVSS, and NHTSA or a third-party expert could conduct pre-market approval for those HAV features that are not covered by an FMVSS.

NHTSA has already held two meetings on this policy, gathering panels of stakeholders – which did not seem to include too many domain experts – to engage in short discussions about topics such as Safety Assurance, Pre-Market Approval Authority, Imminent Hazard Authority, and Expanded Exemption Authority for HAVs.

But, as the guidance document tells us, this is just a first step. NHTSA needs to conduct more research and scout out new standards to govern the initial testing and deployment of HAVs, and the agency’s approach will evolve as the level of automation in HAVs rises. Or so the document promises.

The First HAV Investigation

If the agency’s first six-month enforcement investigation into the alleged failure of an automated vehicle is any measure, manufacturers shouldn’t fear that NHTSA is going to stray too far from its well-established habits.

On May 7, 2016, Joshua Brown, died when his Tesla Model S, in Autopilot mode, crashed into an 18-wheel tractor-trailer truck that was turning left in front of it on US 27A, west of Williston, Florida. According to the Florida Highway Patrol, the vehicle underrode the truck, tearing off its top before it proceeded east on U.S. 27A. The vehicle left the roadway and struck several fences and a power pole, before coming to rest about 100 feet south of the highway. Brown, a 40-year-old Tesla enthusiast from Ohio, died at the scene.

On June 21, 2016, NHTSA sent a Special Crash Investigations team to the crash site to evaluate the vehicle and study the crash environment. It concluded that Brown had increased the cruise control speed to 74 mph two minutes before the crash, and that he took no evasive action and did not apply the brakes. The tractor trailer should have been visible to the Tesla driver for at least seven seconds prior to impact. This report has not yet been made publicly accessible.

On June 28, 2016, the Office of Defects Investigation opened Preliminary Evaluation 16-007 into the crash that killed Brown. Officially, the investigation focused on Automatic Emergency Braking (AEB) or Autopilot systems that “may not function as designed, increasing the risk of a crash.”

The agency’s nine-page Information Request, issued on July 8, contained very specific questions. Question 7, for example, had 12 sub-parts, seeking – among other things – information about Tesla’s object recognition and classification process for rear-end and crossing-path collisions, how the Model S’s system detects compromised or degraded sensor/camera signals, the kinematic models the Model S used to judge collision risk, and all inhibit and override/suppression conditions.

Over the course of the seven-month investigation, Tesla filed four responses that were placed in the publicly accessible file.

On January 19, the agency closed the investigation with no defect finding, saying that the system performed as designed, and blamed Brown for the crash:

“NHTSA’s examination did not identify any defects in the design or performance of the AEB or Autopilot systems of the subject vehicles nor any incidents in which the systems did not perform as designed. AEB systems used in the automotive industry through MY 2016 are rear-end collision avoidance technologies that are not designed to reliably perform in all crash modes, including crossing path collisions. The Autopilot system is an Advanced Driver Assistance System (ADAS) that requires the continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes. Tesla’s design included a hands-on the steering wheel system for monitoring driver engagement. That system has been updated to further reinforce the need for driver engagement through a “strike out” strategy. Drivers that do not respond to  visual cues in the driver monitoring system alerts may “strike out” and lose Autopilot function for the remainder of the drive cycle.”

Yet, this investigation was full of oddities. For one, the agency played hardball in wresting information out of Tesla, and its former supplier Mobileye. Under the regime of the recently-departed Administrator Mark Rosekind, the agency did not hesitate to bring its full authority to bear on the information-gathering portion of the probe. In an unusual move, the agency actually issued a subpoena for the diagnostic log data for another apparent Autopilot failure, involving a Tesla Model X.

The agency also issued three Special Orders.

One Special Order was issued to supplier Mobileye, a technology company that makes advanced driver assistance systems using cameras and software to scout objects in a vehicle’s path. The Order sought communications between it and Tesla regarding hands-free operation in Autopilot mode. The Order also asked for an explanation of the “proper and substantial technological restrictions and limitations” Mobileye thought should have been in place before hands-free operation was permitted; and any other safety concerns, guidance or warnings that Mobileye might have communicated to Tesla about Autopilot’s limitations, intended use and potential misuse.

The agency’s first Special Order to Tesla sought information about incidents (or alleged incidents) of Autopilot failures that Tesla was receiving – especially if Tesla wanted to make a public statement about it before reporting it to NHTSA – for the duration of the investigation. An Amended Special Order, issued in October, required Tesla to continue weekly reporting of incidents – by COB every Friday – until further notice.

These Special Orders appear to be responses, in part, to public statements by Tesla and Mobileye. It is not unheard of for a manufacturer or supplier to defend its reputation during an ongoing investigation – especially one involving high-profile deaths. However, Tesla and Mobileye were particularly determined to get out in front of the probe, and direct public attention to their own analyses.

Two days after receiving NHTSA’s Information Request, Tesla issued a public statement about the crash, attributing the Autopilot’s indifference to a large object directly in its path to white-out conditions:

“Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.”

In September, Mobileye issued a press release, pushing back against any suggestion that it bore responsibility for the crash. The release asserted that it had warned Tesla that the Autopilot should not have been allowed to operate hands-free “without proper and substantial technological restrictions and limitations,” but that Tesla disregarded those admonitions, releasing it “hands-free” in late 2015, after assuring Mobileye that it would not do so.

“Mobileye has made substantial efforts since then to take more control on how this project can be steered to a proper functional safety system. Tesla’s response to the May 7 crash, wherein the company shifted blame to the camera, and later corrected and shifted blame to the radar, indicated to Mobileye that Mobileye’s relationship with Tesla could not continue. Failing agreement on necessary changes in the relationship, Mobileye terminated its association with Tesla. As for Tesla’s claim that Mobileye was threatened by Tesla’s internal computer vision efforts, the company has little knowledge of these efforts other than an awareness that Tesla had put together a small team.”

Finally, this investigation was notable for the amount of black on Tesla’s four responses submitted to the public file. In its August 8 and September 2 submissions to the questions in the IR, every answer is redacted, as are separate submissions to answer Question 7 and Question 10, about the automaker’s assessment of the alleged defect. A fifth response has a few bits of information available for public consumption, but most of the text is blacked-out.

In reviewing literally hundreds of investigations, The Safety Record can honestly say that we have never seen investigation responses in the public file so redacted – especially with no request for confidentiality on file. In fact, the IR specifically instructs Tesla to submit all confidential business information directly to the Office of Chief Counsel. In addition, the IR notes, “do not submit any business confidential information in the body of the letter submitted to this office [ODI].” Instead, Tesla addressed its responses to ODI, which presumably painted them black and put them in the public file. There are no requests from Tesla for confidentiality in the public file.

Also missing from the public file were any responses Mobileye filed in response to its Special Order.

In the end, we know very little about Tesla’s responses to the crash, except – Tesla’s first response was to blame it on a white-on-white interpretation error. Only later did Tesla argue that its Automatic Emergency Braking system was never designed to recognize a front-to-side crash. According to one answer that escaped the censor’s Sharpie:

“Like other manufacturers’ AEB systems, Tesla’s AEB is designed to mitigate front-to-rear collisions,” Tesla said.

If that was the case, why didn’t Tesla say that from the beginning?

State Legislation

In the absence of federal regulations, several states have passed legislation related to autonomous vehicles, and more states consider such legislation each year – six in 2012, rising to 20 in 2016. Currently, California, Florida, Louisiana, Michigan, Nevada, North Dakota, Tennessee, Utah, Virginia and Washington D.C. have autonomous vehicle laws on the books, while the governors of Arizona and Massachusetts have issued executive orders, according to the National Conference of State Legislatures.

California has one of the most active regulatory schemes. In 2014, the state passed regulations for the registration of autonomous vehicles with the Department of Motor Vehicles. Currently, 22 companies have registered, including Google, Bosch, Delphi, Honda, Mercedes-Benz, Tesla, Ford, Volvo, Nissan, Subaru and GM. (Transportation problem child Uber has been testing its autonomous vehicles without seeking a permit, throwing the litigation-magnet into more conflict.)

The state is now in the process of writing regulations to cover the deployment of autonomous vehicles.

California is also the only state collecting autonomous car crash data. Manufacturers are required to report all crashes involving autonomous vehicles within 10 business days. Once a year, they must also report instances when a human test driver seizes control of the vehicle from its autonomous system for safety’s sake.

To date, California has collected reports on 24 crashes since October 2014. The majority involve Google vehicles – not surprising, as the tech giant has the largest fleet of self-driving vehicles, including Toyota Prius, Lexus RX450 and other prototype vehicles, on the road in Mountain View since 2009, with testing expanded to Kirkland, WA, Austin, TX and Phoenix, AZ. This year, Google, which has spun off its autonomous car project to a company called “Waymo,” intends to add 100 new 2017 Chrysler Pacifica Hybrid minivans to the fleet.

All were minor, non-injury crashes. Oddly, more than 60 percent of the crashes involved a conventional vehicle rear-ending a Google HAV in a scenario in which the driver of the striking car clearly thought that the vehicle in front of it ought to have been moving. For example, here’s a narrative from an October 26, 2016 incident:

“A Google prototype autonomous vehicle (“Google AV”) traveling southbound in autonomous mode on Shoreline Boulevard in Mountain View was involved in an accident. The Google AV entered a slip lane in order to turn right onto El Camino Real and came to a stop to yield to westbound traffic on El Camino Real. As the Google AV began to move forward onto El Camino Real, another vehicle immediately behind the Google AV collided with the rear of the Google AV. At the time of the collision, the Google AV was traveling at approximately 3 mph, and the other vehicle was traveling at approximately 6 mph. The Google AV sustained minor damage to its rear hatch. The other vehicle sustained minor damage to its front bumper. There were no injuries reported by either party at the scene.”

NHTSA has estimated that rear-end crashes make up about 23-30 percent of all crashes, so the reason for the high number of rear-enders bears more analysis.

There are many more disengagements – incidents, as defined by the California DMV, in which “a failure of the autonomous technology is detected,” or “when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” In 2016, nine companies that tested autonomous vehicles reported 2,887 disengagements, for reasons including poor lane markings, weather and road surface conditions, construction, emergencies, and collisions.

Ensuring the Safety of Autonomous Vehicles?

Are vehicles driven by LIDAR, cameras and algorithms safer than those driven by that passé technology, people? The federal government, Tesla and other autonomous vehicle enthusiasts are insulted that you have even posed the question.

This group loves to use the numerator of crashes and the denominator of vehicle miles travelled to show that, so far, the self-driving fleet should feel very proud of its record. For example, NHTSA used this simple division to acquit Tesla’s Autopilot function as it closed PE16-007, a probe into the crash that killed Brown, in January with no defect finding.

“ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.”

(NHTSA mis-characterizes these figures as crash data, when they are actually instances of airbag deployments. There are two problems with this: there are many more crashes than airbag deployments, and it is unlikely that Tesla knows about all airbag deployments. Both of these factors affect how we interpret the comparison.)

Tesla relies on this basic calculation, as does Waymo, which, in an uplifting video, talks about the million vehicle miles travelled without incident.

Nidhi Kalra and Susan M. Paddock, researchers from the RAND Corporation, have recently challenged these assessments, and concluded that the paucity of data makes it impossible to determine whether autonomous vehicles will have a measurable effect on death and injury rates. The duo’s statistical analysis shows that “Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.” Further, “Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.”

Therefore, Kalra and Paddock note, test-driving alone “cannot provide sufficient evidence for demonstrating autonomous vehicle safety” in terms of fatalities and injuries – the entire raison d’être for pushing vehicle autonomy ahead of its oversight and regulation. They call for the development of new and innovative ways to demonstrate safety and reliability and, most importantly, for the development of “adaptive regulations that are designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.”
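To make the scale of those numbers concrete, here is a rough back-of-the-envelope sketch in Python. It uses the classic “rule of three” bound for zero observed events – not necessarily the RAND authors’ exact method – and the benchmark human fatality rate (roughly 1.09 deaths per 100 million vehicle miles) and the hypothetical test-fleet figures are assumptions for illustration only.

import math

# Back-of-the-envelope sketch, in the spirit of (but not identical to) the
# RAND analysis: how many fatality-free miles must an autonomous fleet log
# before we can say, with 95% confidence, that its fatality rate is below
# a human-driven benchmark? The benchmark and fleet figures are assumptions.

human_rate = 1.09 / 100_000_000   # fatalities per mile (assumed benchmark)
confidence = 0.95

# If zero fatalities are observed over N miles, the 95% upper confidence
# bound on the underlying Poisson rate is roughly -ln(0.05)/N (the "rule
# of three"). To push that bound below the human rate, we need about:
required_miles = -math.log(1 - confidence) / human_rate
print(f"Fatality-free miles needed: {required_miles / 1e6:.0f} million")

# How long would a hypothetical test fleet take to accumulate them?
fleet_size = 100                       # vehicles (assumption)
miles_per_vehicle_per_year = 25_000    # miles per vehicle per year (assumption)
years = required_miles / (fleet_size * miles_per_vehicle_per_year)
print(f"Years for a {fleet_size}-vehicle fleet: {years:.0f}")

Under these assumptions the answer is on the order of 275 million fatality-free miles – more than a century of driving for a 100-vehicle fleet – which is the crux of Kalra and Paddock’s argument that test-driving alone cannot settle the question.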

Big Promise, Mixed Messages

What, ultimately, do we expect from autonomous cars? That we will be passengers in our own vehicles. The informational video fronting Waymo’s website shows a couple snuggling, a grandmother and a child absorbed in an iPad screen, and teens chatting as the landscape flies by, like back-seat passengers in a cab, wholly unconcerned about how they will get to their destination. As Waymo originally envisioned it, fully autonomous cars would have no steering wheels or foot pedals. That future raises a lot of questions: do we license vehicles instead of drivers? What do public roads look like when large numbers of driverless vehicles mingle with equally large numbers of conventional vehicles? What are the fail-safes?

The present reality is the semi-autonomous vehicle, equipped with software such as Tesla’s Autopilot, which is not reliable enough to offer a hands-free experience, let alone an attention-free experience. Some Tesla drivers have found that out the hard way. A video posted earlier this month shows a Tesla operating in Autopilot mode clip the side of a highway barrier after the lanes suddenly shift to the right due to construction.

The official, explicit message is that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. And the system is designed to warn drivers who test its limits with visual and audible alerts, gradually slowing the vehicle down until the steering wheel detects the presence of the driver’s hands.

But there is another message out there – one that promises that you can ride shotgun while the car does all of the work. At an April conference in Norway, Musk told Norwegian Transport and Communications Minister Ketil Solvik-Olsen:

“The probability of having an accident is 50 percent lower if you have Autopilot on, even with our first version. So we can see basically what’s the average number of kilometers to an accident — accident defined by airbag deployment. Even with this early version, it’s almost twice as good as a person.”

In an October 19 blog post, the company wrote: “We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.”

YouTube is full of videos showing Tesla owners demonstrating Autopilot, including Musk’s two-time wife and ex-wife, Talulah Riley (the pair married in 2010, divorced in 2012, re-married in 2014 and divorced again in 2016), who turned to a camera, giggling with fluttering jazz hands, as her car zipped down a crowded highway.

And that gives the public a very different impression – one which NHTSA recognized in its automated vehicle guidance document carries some dangers:

“New complexity is introduced as HAVs take on driving functions, in part because the vehicle must be capable of accurately conveying information to the human driver regarding intentions and vehicle performance. This is particularly true of SAE Level 3 systems in which human drivers are expected to return to the task of mon­itoring and be available to take over driving responsibilities, but drivers’ ability to do so is limited by humans’ capacity for staying alert when disengaged from the driving task. Manufacturers and other entities should consider whether it is reasonable and appropri­ate to incorporate driver engagement monitoring to Level 3 HAV systems.”  

And really, what is the point of Autopilot if you have to pay constant attention – and be prepared to grab the wheel in a split second? That seems more stressful than actually driving.

After the Brown crash, Musk got roundly criticized for these contradictory messages, and updated the software to lock drivers out of the feature if they fail to heed the system’s warnings to keep their hands on the wheel.

He also back-pedaled a bit, saying that Autopilot “is by far the most advanced driver assistance system on the road, but it does not turn a Tesla into an autonomous vehicle and does not allow the driver to abdicate responsibility.”

And what about NHTSA? What is its responsibility? Musk has always been very clear that Autopilot was introduced to the marketplace as a beta test. Just to be perfectly clear, a beta test is “a trial of machinery, software, or other products, in the final stages of its development, carried out by a party unconnected with its development.” Not only is the Tesla driver participating in this beta test on public roads, but so is every vehicle, pedestrian, and cyclist that shares those roads.

How is Tesla permitted to dragoon the public into testing its automotive software?

Well, there’s no law against it.