The bloody math of Mr. Musk

This Wednesday, Tesla announced “Full Self-Driving Hardware on All Cars.” All future Model S, Model X and Model 3 vehicles will come equipped with the hardware for Level 5 autonomy, the highest designation for automated-driving technology.

And Tesla expects to demonstrate a self-driven cross-country trip next year:

Musk hopes that, by the end of 2017, a Tesla will be able to drive itself from Los Angeles to New York, drop the “driver” in Times Square, and then go park itself in a garage. He says it will be accomplished “without the need for a single touch, including charging.”

Now, my question is: what does “a safety level substantially greater than that of a human driver” mean? Substantially safer than the average driver? Safer than Lewis Hamilton? Safer than driving in the USA, in Sweden, in Libya? Under what conditions? What are we talking about?

We are all too used to technology hype… so is Elon Musk overselling? Can we, or he, prove his assertions? More specifically, how can we prove them? How difficult will it be, and how long will it take? Let’s see:

The message that Tesla and other technology companies behind self-driving cars are promoting goes along these lines: each year, more than 30,000 Americans die, and many more are injured, in car accidents. More than 1M die worldwide. The vast majority of traffic fatalities are caused by human error, including drunk driving, speeding, distraction, and fatigue. Driverless cars could eliminate 90% of these deaths and injuries. Some go as far as saying that “100-times-safer self-driving cars are not too much farther away”, and expect “a factor 10 under clear driving conditions within three years”. At some point, this technology will become so advanced that lawmakers will be forced to debate whether to outlaw manual driving.

Not everybody is comfortable with this message, of course, for a number of different reasons. German authorities have asked Tesla Motors to stop using the name “Autopilot” for its semi-autonomous feature, arguing it may mislead motorists about the true capabilities of the vehicle. Drivers cannot relax and play with their smartphones; they still have to remain alert at the wheel.

The truth is that, as cars with self-driving capabilities hit the streets, they have been involved in several collisions, including a fatal crash in Florida in May, in which Joshua D. Brown died when his car’s cameras failed to distinguish the white side of a turning tractor-trailer from a brightly lit sky, and the brakes were never applied.

After the accident, Elon Musk has been having a tough time with journalists, and he had a particularly heated exchange with Fortune:

Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.

According to the bloody math of Mr. Musk, “it is a fact” that the “better-than-human” threshold has been crossed and robustly validated internally (a back-of-the-envelope sketch of this math follows the list):

1) That Tesla Autopilot had been safely used in over 100 million miles of driving by tens of thousands of customers worldwide, with zero confirmed fatalities and a wealth of internal data demonstrating safer, more predictable vehicle control performance when the system is properly used.

2) That contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all.

3) That given its nature as a driver assistance system, a collision on Autopilot was a statistical inevitability, though by this point, not one that would alter the conclusion already borne out over millions of miles that the system provided a net safety benefit to society.
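To make that reasoning concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions taken from Tesla’s public statements at the time (roughly 130 million Autopilot miles, about one fatality per 60 million miles worldwide); treat it as an illustration of Mr. Musk’s arithmetic, not as validated data.

```python
# A back-of-the-envelope reconstruction of Mr. Musk's "bloody math".
# All figures are assumptions from Tesla's public statements and the
# text above; this is an illustration of the arithmetic, not a proof.

WORLD_DEATHS_PER_YEAR = 1_000_000   # "more than 1M die worldwide"
WORLD_MILES_PER_FATALITY = 60e6     # Tesla: ~1 fatality per 60M miles worldwide
AUTOPILOT_MILES = 130e6             # Tesla: ~130M miles driven on Autopilot
AUTOPILOT_FATALITIES = 1            # the Florida crash

autopilot_rate = AUTOPILOT_FATALITIES / AUTOPILOT_MILES   # fatalities per mile
human_rate = 1 / WORLD_MILES_PER_FATALITY

risk_ratio = autopilot_rate / human_rate   # < 1 means Autopilot looks safer
lives_saved = WORLD_DEATHS_PER_YEAR * (1 - risk_ratio)

print(f"risk ratio (Autopilot / human): {risk_ratio:.2f}")
print(f"implied lives saved per year:   {lives_saved:,.0f}")
# -> ratio ~0.46 and ~540,000 lives: roughly the "half a million saved".
# The catch: with a single observed fatality, the uncertainty on
# autopilot_rate is enormous -- which is exactly the point of this post.
```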

However, the (simple, not so bloody) math is less conclusive:

According to a study published by the Virginia Tech Transportation Institute, self-driving cars were on average involved in fewer accidents than vehicles driven by humans. The study analysed data from more than 50 of Google’s self-driving vehicles, which have travelled about 1.3 million miles in total across California and Texas without being driven by humans. The study estimates that human-driven vehicles are involved in 4.2 crashes per million miles, as opposed to 3.2 crashes per million miles for self-driving cars. The researchers stressed that additional studies are needed to say for sure that self-driving cars are indeed safer than their human counterparts.
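How inconclusive? An exact Poisson confidence interval makes the point. The sketch below is my own reconstruction from the rates quoted above (roughly four crashes in 1.3 million miles), not the study’s raw data, so take the exact numbers as illustrative.

```python
# Why the Virginia Tech comparison is not conclusive: with only
# ~1.3 million self-driving miles, the crash count is tiny, so the
# estimated rate carries a very wide confidence interval.
from scipy.stats import chi2

sd_miles = 1.3e6
sd_rate = 3.2e-6                      # 3.2 crashes per million miles
crashes = round(sd_rate * sd_miles)   # ~4 observed crashes (reconstructed)

# Exact (Garwood) 95% CI for a Poisson count k:
# [chi2.ppf(0.025, 2k)/2, chi2.ppf(0.975, 2k+2)/2]
lo = chi2.ppf(0.025, 2 * crashes) / 2
hi = chi2.ppf(0.975, 2 * crashes + 2) / 2

print(f"observed crashes: {crashes}")
print(f"95% CI for the rate: {lo / sd_miles * 1e6:.1f} "
      f"to {hi / sd_miles * 1e6:.1f} crashes per million miles")
# -> roughly 0.8 to 7.9 per million miles: the human benchmark of
#    4.2 sits comfortably inside, so no firm conclusion is possible.
```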

Not surprisingly, a similar study by the University of Michigan Transportation Research Institute in October 2015, which compared the crash rates of the self-driving cars of Google, Delphi and Audi, found the opposite of the Virginia Tech study: the self-driving cars had a higher rate of crashes than human-driven vehicles. The study highlights two important caveats when interpreting the findings. First, the distance accumulated by self-driving vehicles is still relatively low, about 1.2 million miles, compared with about 3 trillion annual miles in the U.S. alone by conventional vehicles. Second, self-driving vehicles have so far been driven only in limited (and generally less demanding) conditions (e.g., avoiding snowy areas). Therefore, their exposure is not yet representative of the exposure of conventional vehicles.

[Figure: Miles Needed to Demonstrate with 95% Confidence that the Autonomous Vehicle Failure Rate Is Lower than the Human Driver Failure Rate. Nidhi Kalra, Susan M. Paddock, RAND Corporation]

Another report, from the RAND Corporation, finds that autonomous vehicles would need to be test-driven “hundreds of millions of miles and sometimes hundreds of billions of miles” to gain enough information to compare their safety to that of human-driven automobiles. Such thorough testing would require “tens and sometimes hundreds of years”, which would make it impractical to complete before clearing the vehicles for regular consumer use, the report said.
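The core of the RAND arithmetic is easy to reproduce with the standard “rule of three” bound: after n failure-free miles, a failure rate r can be ruled out at confidence C only once (1 − r)^n ≤ 1 − C, i.e. n ≈ −ln(1 − C)/r. A minimal sketch, assuming the 2013 US fatality rate of 1.09 per 100 million miles that the report uses as its benchmark:

```python
# Sketch of the RAND zero-failure case: how many failure-free miles
# are needed before a failure rate above the human benchmark can be
# ruled out at a given confidence level.
import math

def miles_needed(rate_per_mile: float, confidence: float = 0.95) -> float:
    """Miles of failure-free driving needed to claim, at the given
    confidence, that the true failure rate is below rate_per_mile."""
    return -math.log(1 - confidence) / rate_per_mile

human_fatality_rate = 1.09 / 100e6   # US fatalities per mile (2013)
print(f"{miles_needed(human_fatality_rate):,.0f} miles")
# -> ~275 million miles just to match the human fatality rate with
#    95% confidence; demonstrating a *better* rate, or tolerating a
#    few observed failures, pushes the requirement into the billions.
```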

With such conflicting, inconclusive results, and no defined national or global safety standards for self-driving cars (beware!), we can safely predict that:

We are going to see more and more studies with difficult-to-reconcile results, and we will have to swallow a lot more sloppy statistics: what I call intellectual pollution.

I can understand the frustration of Elon Musk. A single fatality should be seen as just that: a single data point. It does not imply we are not making progress in the right direction.

Mr. Musk, we are behind you! You are a visionary, a doer, a leader. We bloody well need people like you to look ahead and move us all forward, whether to Mars, Heaven or elsewhere. But that doesn’t mean we have to swallow your bloody statistics. And yes, you are overselling, Mr. Musk, and you need not!
