Wired: Self-Driving Car Hype Crashes Into Harsh Realities

For some time, Lambert and I have been pooh-poohing the idea that self-driving cars, particularly true self-driving cars (as opposed to ones that have humans lurking in the background to take control), would be here any time soon, much less in the widely ballyhooed time frame of 2018 to 2020.

A new Wired article, describing the newfound sobriety in the self-driving car development community, confirms our long-standing views. It begins with a series of examples of how companies like Volvo and Google have not only pushed back their self-driving car delivery dates, but have also scaled back their promises of what they will be producing. More important, the piece also describes obstacles so significant that even the more cautious forecasts still sound optimistic.

The big problem is that the people engineering these systems have yet to come close to mastering basic design requirements. They think they know how to get there, but that is sort of like the difference between being able to describe what it would take to sail solo across the Pacific and actually doing it.

One set of problems is that the self-driving car creators have apparently settled on using three different types of sensors and then integrating their inputs, since no single sensor type appears able to operate at the required performance levels on its own. The Wired story uses a Medium post by Bryan Salesky, the CEO of self-driving car company Argo AI, as its point of departure. That post appeared back in October, so it is curious to see Wired taking note of it only now. From the Wired account:

First, he [Salesky] says, came the sensor snags. Self-driving cars need at least three kinds to function—lidar, which can see clearly in 3-D; cameras, for color and detail; and radar, which can detect objects and their velocities at long distances. Lidar, in particular, doesn’t come cheap: A setup for one car can cost $75,000. Then the vehicles need to take the info from those pricey sensors and fuse it together, extracting what they need to operate in the world and discarding what they don’t.

As Lambert pointed out, the data integration problem sounds an awful lot like that of a hugely expensive white elephant, the F-35 helmet. And those helmets have human pilots interpreting all the data.
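For readers wondering what “fusing” those inputs actually involves, here is a minimal sketch of one textbook technique, inverse-variance weighting of independent noisy estimates. To be clear, this illustrates the general idea only; it is not Argo AI’s (proprietary) method, real systems use far more elaborate machinery, and every number below is made up:

```python
# Illustrative sketch of inverse-variance weighting, a textbook way to
# combine noisy estimates of the same quantity from different sensors.
# All figures are invented for illustration.

def fuse_estimates(readings):
    """Combine (estimate, variance) pairs into a single estimate.

    Each sensor reports a distance to an object plus its noise variance;
    the fused estimate weights each reading by 1/variance, so more
    trustworthy sensors count for more.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * est for w, (est, _) in zip(weights, readings)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance

# Hypothetical distance-to-pedestrian readings in meters: lidar is precise,
# radar is noisier, and the camera's monocular depth estimate is noisiest.
readings = [
    (42.1, 0.04),  # lidar
    (41.5, 0.50),  # radar
    (43.8, 2.00),  # camera
]

distance, variance = fuse_estimates(readings)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

Even this toy version hints at the hard part: the weights assume you know how reliable each sensor is, and in rain, glare, or snow those reliabilities shift under you.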

As an aside, it is curious to see how the designers are framing the “how the car ‘sees’” problem. You don’t need to see in color to drive. One of the cases in an Oliver Sacks book was of a man with monochrome vision whose distance vision, it turned out, was far more acute than that of people who see in color. Similarly, you don’t need to see in three dimensions to judge distances accurately and to drive and park safely.

On top of that, a truly autonomous car needs to be able to recognize objects and interpret the behavior of other cars. Again from Wired:

Salesky cites other problems, minor technological quandaries that could prove disastrous once these cars are actually moving through 3-D space. Vehicles need to be able to see, interpret, and predict the behavior of human drivers, human cyclists, and human pedestrians—perhaps even communicate with them. The cars must understand when they’re in another vehicle’s blind spot and drive extra carefully. They have to know (and see, and hear) when a zooming ambulance needs more room.

If you can’t make the algo work, the solution is apparently to simplify the inputs. One idea is to restrict the vehicles to shuttle-type runs on fixed routes; an example cited was a retirement community.

But bus drivers do more than drive. They keep pranksters from vandalizing the vehicle, stop them from shaking down passengers, and prevent (gah!) homeless people from taking up residence. And even set routes are not static. What happens in the event of a street repair or a car accident?

The self-driving car proponents are also bizarrely eager to introduce a less than fully autonomous car, presumably to increase customer acceptance, when it is likely to backfire. The fudge is to have a human at the ready to take over in case the car asks for help.

First, as one might infer, the human who is suddenly asked to intervene is going to have to assess the situation quickly. The handoff delay means a slower response than if a human had been driving the entire time. Second, and even worse, the human suddenly asked to take control might not even see what the emergency is. Third, the car itself might not recognize that it is about to get into trouble. Recall that when one of Uber’s self-driving cars got into an accident while making a left turn, Uber tried to blame the oncoming driver, when if you parsed the story carefully, it was the Uber car that was in the wrong.

The newest iteration of the “human takeover” fudge is to have remotely located humans take over navigating the car. Help me. Unlike a driver in the vehicle, they won’t have any feel for the setting. That means an even slower reaction in what will typically be an emergency situation. This is a prescription for bad outcomes, meaning a much worse safety record than with people as drivers, fatally undermining a key claim for self-driving cars: that they’d be safer than human-operated ones.
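To put the handoff problem in concrete terms, here is a back-of-the-envelope calculation of how far a car travels while a surprised human, whether in the seat or at a remote console, gets oriented. The delay figures are assumptions for illustration, not measured takeover times:

```python
# Back-of-the-envelope: distance covered during a handoff delay.
# Delay figures are illustrative assumptions, not measured data.

MPH_TO_MPS = 0.44704  # meters per second per mile per hour

for speed_mph in (30, 65):
    for delay_s in (2, 5, 10):
        travel_m = speed_mph * MPH_TO_MPS * delay_s
        print(f"{speed_mph} mph, {delay_s}s handoff: {travel_m:.0f} m traveled")
```

At 65 mph, even a five-second handoff means the car covers roughly 145 meters, well over a football field, before the human is actually in control.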

Wired finishes its account with a sunny exhortation, but Salesky’s warning, in an article written in corporate-speak, is nowhere near as cheerful:

Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology. For those of us who have been working on the technology for a long time, we’re going to tell you the issue is still really hard, as the systems are as complex as ever.

One thing does look certain: there won’t be any Hail Mary breakthroughs before Uber’s planned IPO in 2019. Shame, that.
