Yves here. I’ve always thought this remote-controlled self-driving car concept was madness, but it is still useful to have a well-argued takedown.
By Kevin Cashman, who currently works at the Center for Economic and Policy Research (CEPR). In the past, he has been employed in climate change/environmental protection and as a bicycle mechanic. The views expressed are his own. Originally published at his website
A Phantom Auto employee remotely driving a vehicle, via Phantom Auto.
Although hype about self-driving vehicles is everywhere, the actual technology behind them is starting to disappoint. Early predictions about when fully autonomous vehicles (that is, vehicles that would never need a human driver, because a computer could handle every situation that might arise) would be available are proving wrong. Self-driving technology has also claimed its first life: in Tempe, Arizona, a test vehicle plowed into a pedestrian and killed her under what would seem like ideal conditions for a computer. Preliminary evidence suggests that the technology is, today, less safe than human operation.
As investors and others see signs of this failure, there is increasing interest in technologies that could bridge the gap between partially autonomous and fully autonomous operation. One of these is “teleoperation”: a human remotely takes over driving whenever the computer can no longer drive safely. Even though the technology is not fully autonomous, passengers would still not need to be involved in operating the vehicle at all.
The New York Times profiled one company peddling this technology, Phantom Auto. Shai Magzimof, the chief executive of Phantom Auto, says that his company “[…] want[s] to be the OnStar for the autonomous industry” to address the so-called “edge cases” where computers need help driving. To do this, Phantom Auto says it reduces latency (the time it takes for data to pass between the remote human driver and the car being driven) by combining signals from various cellular networks.
Combining signals from various providers is an interesting possible solution to situations where one network loses coverage or suffers an interruption, but it isn’t clear how it would reduce latency below the floor that each network itself imposes (and it isn’t likely Phantom Auto can change that). Network performance is generally sufficient for things like web browsing and watching videos, but it remains to be seen whether it is adequate for driving. If a car is traveling at high speed, for example, very low latency becomes even more crucial to safe operation. Any delay in response time is dangerous, which is one major reason one cannot drive safely under the influence of alcohol or other drugs.
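The stakes of even modest latency can be made concrete with a back-of-the-envelope calculation. The speed and delay figures below are illustrative assumptions, not measurements of any real network or vehicle:

```python
# Back-of-the-envelope: how far a car travels while remote-control
# data is in transit. All numbers are illustrative assumptions.

def distance_during_delay(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled during a given delay, at constant speed."""
    speed_m_per_s = speed_kmh * 1000 / 3600
    return speed_m_per_s * (latency_ms / 1000)

# At highway speed, with an assumed 200 ms round-trip delay,
# the car covers about six meters before a correction can arrive.
print(round(distance_during_delay(110, 200), 1))  # 6.1
```

Six meters is more than a full car length driven blind, and that is before accounting for the time the remote operator needs to perceive and react.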
Even setting aside the problems with Phantom Auto’s claims about latency, teleoperation cannot solve the other problems with self-driving technology if customers expect continuous operation of their vehicles. The most obvious issue is the availability of a data connection: remote operators won’t be able to take control in rural areas with no networks at all, when all networks are overloaded, or when certain kinds of technical problems arise.
Another problem is the time it would take a remote operator to connect and then become aware of the situation the computer needs help with. Certainly, there will be less serious edge cases where an operator has time to connect, assess the situation, and start operating the vehicle. However, many cases need to be addressed right away. If a computer driving a vehicle doesn’t know that it needs help until the last second before it crashes, a human will not be able to intervene in time. There are also likely edge cases where the computer is not even aware that it needs intervention before it enters a potentially disastrous situation. Phantom Auto’s technology does nothing to help with these last two types of edge cases, which are the most serious. It would not have saved the pedestrian in Tempe, for example.
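The timing argument can be sketched numerically. The hazard distance, speed, and handoff times below are assumptions chosen only to illustrate the shape of the problem: a remote handoff can easily exceed the time available before impact.

```python
# Illustrative time budget for a remote takeover.
# Hazard distance, speed, and handoff times are assumptions.

def time_to_hazard(distance_m: float, speed_kmh: float) -> float:
    """Seconds until a vehicle at constant speed reaches a hazard."""
    return distance_m / (speed_kmh * 1000 / 3600)

budget = time_to_hazard(30, 50)   # hazard 30 m ahead at city speed: ~2.2 s
handoff = 1.0 + 2.0               # assumed: 1 s to connect, 2 s to absorb the scene
print(handoff > budget)           # True: the operator arrives too late
```

Under these assumed figures, even a fast handoff misses the window entirely, which is why the most serious edge cases are out of reach for teleoperation.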
Phantom Auto’s approach to managing the staff who would handle interventions also seems to dramatically misinterpret their role. Magzimof says in another article that “[…]one remote driver at Phantom can handle five vehicles at a time, perhaps moving up to 10 in a year and eventually to a thousand as AI gets progressively better at eliminating more corner cases.” While this sounds like how a customer service call center might operate, it’s not how a teleoperation service could function. Call centers can tolerate high demand in various ways: they can put callers on hold, ask more employees to come in, reallocate resources, or simply stop responding to customers.
Teleoperation has none of these luxuries. If a snowstorm completely confounds self-driving technology, thousands of remote operators need to be immediately available to take over for those hapless vehicles, even if self-driving technology has eliminated most edge cases. These operators must also take over instantly, meaning they all need to be sitting at their desks, attentively waiting for a situation that may or may not arise. This would be a very taxing job, much like that of the backup drivers who are supposed to sit behind a wheel and do nothing while self-driving cars are tested, and likely impossible for most people to do well all the time.
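The call-center analogy can be checked with simple arithmetic. The fleet size and the shares of vehicles needing help below are assumptions picked only to show the shape of the problem: because takeovers cannot be queued, staffing must match the worst moment, not the average.

```python
import math

# Illustrative staffing math; fleet size and help rates are assumptions.
# Unlike a call-center agent, a remote driver can handle only one
# takeover at a time, and takeovers cannot wait on hold.

def operators_needed(fleet_size: int, share_needing_help: float) -> int:
    """Operators required if takeovers must be immediate and one-to-one."""
    return math.ceil(fleet_size * share_needing_help)

fleet = 10_000
print(operators_needed(fleet, 0.01))  # ordinary day: 100
print(operators_needed(fleet, 0.25))  # snowstorm: 2500, a 25x staffing spike
```

A nominal ratio of one driver to five (or a thousand) vehicles says nothing about the surge, and it is the surge that sets the payroll.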
Teleoperation is one of the latest developments meant to excuse the lack of progress on self-driving technology overall. Instead of a viable way to address that technology’s failings, it is a way to paper over the problems of an industry that is not achieving what it said it would. As a transition technology, teleoperation has more drawbacks than a self-driving car that simply allows passengers to take over. In situations where there are no passengers able or willing to drive, the technology cannot take over fast enough to avoid the worst edge cases. In those contexts, continuous operation would need to be sacrificed for teleoperation to be useful even for less serious edge cases. Noncontinuous operation, such as a driverless truck pausing whenever it needed teleoperation, would present new safety concerns and would probably require significant public investment in infrastructure to be feasible on a wide scale.
In general, fully autonomous self-driving technology has the potential to save many lives and be very useful — in certain contexts. Policymakers should focus on the steady development and safe testing of the technology until it can meet our needs, rather than rush to get it on the road for the sake of industry profits and expectations.