Sloppy Journalism about “Fully Autonomous” (Except Not) Robot Cars on (Some of) Manhattan’s Streets

By Lambert Strether of Corrente

Usage note: I’m going to start saying “robot cars,” instead of “self-driving cars” (they don’t have selves), or “autonomous vehicles” (it’s too long to type, and anyhow “auto” implies a self, too. I guess we’ve had this category error for some time, come to think of it). And by “robot car,” I mean a fully autonomous Level 5 vehicle.

Let’s start by quoting Governor Cuomo’s deceptive press release (which I’ve helpfully annotated):

Governor Cuomo Announces Cruise Automation Applying to Begin First Fully Autonomous Vehicle Testing in New York State

Governor Andrew M. Cuomo today announced General Motors and Cruise Automation are applying to begin the first sustained testing of vehicles in [1] fully autonomous mode in New York State in early 2018. Through Governor Cuomo’s recent legislation allowing the testing of autonomous technology, GM and Cruise are applying to begin testing in Manhattan, where mapping has begun in a [2]geofenced area. All testing will include [3]an engineer in the driver’s seat to monitor and evaluate performance, and a second person in the passenger seat…. Cruise’s planned testing would be the first time [4]Level 4 autonomous vehicles will be tested in New York State…

At [2] and [3] we get some detail indicating that the testing is going to be carefully circumscribed, so the autonomy is for some definition of “fully.” However, [1] and [4] are contradictory: “fully autonomous” robot cars are, in the jargon of the field, “Level 5,” not “Level 4,” as we explained here. In lay terms:

From Dr. Steve Shladover of Berkeley’s Partners for Advanced Transportation Technology:

[High Automation]: [Level 4] has multiple layers of capability, and it could allow the driver to, for example, fall asleep while driving on the highway for a long distance trip…

That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.

[Full Automation]: Level 5 is where you get to the automated taxi that can pick you up from any origin or take you to any destination… If you’re in a car sharing mode of operation, you want to reposition a vehicle to where somebody needs it. That needs Level 5.

Level 5 is really, really hard.

And the Daily Mail, amazingly enough, provides better coverage than all the stories I am about to look at, including a chart with a more technical definition of the levels:

Level Four – The system can cope with all situations automatically within defined use [for some definition of “defined use” –lambert] but it may not be able to cope with all weather or road conditions. System will rely on high definition mapping

Level Five – Full automation. System can cope with all weather, traffic and lighting conditions. It can go anywhere, at any time in any conditions

Furthermore, GM’s Cruise Automation hasn’t even developed Level 5 software. (Neither has Tesla.) Fully autonomous, despite a lot of wishful thinking, means just that: “Fully.” “Autonomous.” It doesn’t mean “autonomous only in some areas and with a human in the loop.” Fully autonomous = Level 5 ≠ Level 4. So Cuomo’s press release is deceptive.
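The distinction is mechanical enough to put in a lookup table. A toy sketch of the SAE J3016 levels (the level names are from the standard; the one-line validation helper is my own illustration, not anything GM or the SAE publishes):

```python
# SAE J3016 driving-automation levels. Only Level 5 is "Full Automation";
# Level 4 is "High Automation," restricted to a defined operational domain
# (e.g., a geofenced patch of lower Manhattan).
SAE_LEVELS = {
    0: "No Automation",
    1: "Driver Assistance",
    2: "Partial Automation",
    3: "Conditional Automation",
    4: "High Automation",
    5: "Full Automation",
}

def is_fully_autonomous(level: int) -> bool:
    """By the SAE definition, 'fully autonomous' means Level 5 and nothing less."""
    return level == 5

# Cuomo's press release claims "fully autonomous" testing of "Level 4" vehicles:
print(is_fully_autonomous(4))  # False: High Automation, not Full
print(is_fully_autonomous(5))  # True
```

Run the press release through that check and the contradiction falls out immediately, which is presumably why no one at the governor’s office did.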

* * *

Now let’s look at how some major journalistic enterprises covered the story. We’ll see that most of them fell for Cuomo’s initial “fully autonomous,” and didn’t read on to see “Level 4.” Some of them also add interesting details that aren’t in the press release, or in other stories.

Reuters:

GM to test self-driving cars in N.Y. in early 2018: Gov. Cuomo

General Motors Co (GM.N) plans to test vehicles in fully autonomous mode in New York state in early 2018, according to New York Governor Andrew Cuomo.

The planned testing by GM and its self-driving unit, Cruise Automation, will be the first by a Level 4 autonomous vehicle in the state, Cuomo said in a statement.

Reuters, to its credit, read Cuomo’s whole press release, and mentions both “fully autonomous” and “Level 4,” but without noticing they contradict each other.

Financial Times:

General Motors to test self-driving cars in New York City

The Detroit-based automaker, whose shares have risen 25 per cent in recent weeks on investor expectations that it could beat rivals to the introduction of a mass market autonomous vehicle service, will test Chevrolet Bolt fully autonomous electric cars in its most complex market so far: lower Manhattan.

The Financial Times falls for “fully autonomous,” and doesn’t mention “Level” anywhere in the story. Careless. Also, the hard problem is not Manhattan as a “complex market,” but Manhattan as a complex streetscape; exactly the sort of category error one would expect an organ like the FT — much as I love the pink paper — to make.

Dow Jones News Wire:

GM to Test Fleet of Self-Driving Cars in New York — Update

GM will deploy a fleet of self-driving Chevrolet Bolt electric cars early next year in a 5-square-mile section of lower Manhattan that engineers are mapping, said Kyle Vogt, chief executive of Cruise Automation, the driverless-car developer GM acquired last year…. [Manhattan, like San Francisco] offers a congested environment with a high concentration of hairy situations that fully automated cars must learn to navigate.

Dow Jones, like the FT, doesn’t mention levels at all, emitting instead vague terms like “self-driving” and “driverless,” and implying, without actually saying, that the Chevy Bolts will be “fully automated” (as opposed to “autonomous”). They do pick up, however, that GM’s testing will take place in a 5-square-mile area, which GM is mapping. (Existing maps won’t do, then?)

New York Times:

Self-Driving Cars Could Come to Manhattan

The driverless trials will include two passengers: an engineer sitting behind the wheel to monitor and evaluate performance, and a second person in the passenger seat, according to the governor’s announcement.

The New York Times doesn’t get into Levels either — I guess they didn’t read that far down in the press release, although, to their credit, they did interview some cab drivers[1]. Like Dow Jones, they equivocate with “driverless” and “self-driving” (though I suppose we could get into the semantics of what “self-driving” can mean with two people in the car, one of whom is an engineer).

Recode:

Self-driving Chevy Bolts will roam New York City streets next year

General Motors will operate a handful of semi-driverless Chevy Bolts within a five-square-mile area in lower Manhattan for at least a year to test the technology.

The legislation also requires state police escorts to accompany the test cars, but how that’s implemented is still being worked out. Each car being tested in New York must also have a $5 million insurance policy.

Like Dow Jones and The Times, Recode doesn’t get into levels. Unlike them, it further qualifies the already qualified “driverless” with “semi” (because of the engineer behind the wheel, I suppose). The detail on the state police escort (!) and the $5 million insurance policy per car is not in the press release, however, or in any other story I read, so kudos.

The Verge:

GM will be the first company to test self-driving cars in New York City

Cruise Automation, the self-driving unit of General Motors, announced today that it will test its autonomous Chevy Bolts in one of the most torturously congested cities in the world: New York City. According to New York Governor Andrew Cuomo, the company will be the first to test Level 4 autonomous vehicles in the state.

To its credit, the Verge mentions “Level 4,” and does not say fully autonomous, unlike Cuomo. (And most of Manhattan is a grid; it may be torture, but it is not tortuous.)

New York Daily News:

General Motors PAC donated $17G to Cuomo months before picked to test self-driving cars in New York[2]

The company will be the first to test fully automated, or “level four” vehicles in the state, Cuomo’s office said Tuesday.

Conclusion

I know robot cars are seen as a technological inevitability, so why spend time on the story? But what we’re seeing is a test taking place in a five-square-mile area that has yet to be mapped[3], in a “Level 4” vehicle that is not “fully autonomous,” at least as the Society of Automotive Engineers defines the term, with an engineer behind the wheel, a passenger with carefully undefined duties, trailed by a police car, and with a five million dollar insurance policy. Frankly, this is an impressive enough technical achievement without going all giddy. Is it really too much to ask our famously free press to read beyond the press release and get the basics right?

NOTES

[1] The lead: “Pity the poor taxi drivers. First came Uber, now comes no one.” Paging Thomas Frank!

[2] Of course.

[3] And how come GM gets to pick its own test area, anyhow?

This entry was posted in Auto industry, Guest Post, Media watch.

About Lambert Strether

Readers, I have had a correspondent characterize my views as realistic cynical. Let me briefly explain them. I believe in universal programs that provide concrete material benefits, especially to the working class. Medicare for All is the prime example, but tuition-free college and a Post Office Bank also fall under this heading. So do a Jobs Guarantee and a Debt Jubilee. Clearly, neither liberal Democrats nor conservative Republicans can deliver on such programs, because the two are different flavors of neoliberalism (“Because markets”). I don’t much care about the “ism” that delivers the benefits, although whichever one does have to put common humanity first, as opposed to markets. Could be a second FDR saving capitalism, democratic socialism leashing and collaring it, or communism razing it. I don’t much care, as long as the benefits are delivered. To me, the key issue — and this is why Medicare for All is always first with me — is the tens of thousands of excess “deaths from despair,” as described by the Case-Deaton study, and other recent studies. That enormous body count makes Medicare for All, at the very least, a moral and strategic imperative. And that level of suffering and organic damage makes the concerns of identity politics — even the worthy fight to help the refugees Bush, Obama, and Clinton’s wars created — bright shiny objects by comparison. Hence my frustration with the news flow — currently in my view the swirling intersection of two, separate Shock Doctrine campaigns, one by the Administration, and the other by out-of-power liberals and their allies in the State and in the press — a news flow that constantly forces me to focus on matters that I regard as of secondary importance to the excess deaths. What kind of political economy is it that halts or even reverses the increases in life expectancy that civilized societies have achieved? 
I am also very hopeful that the continuing destruction of both party establishments will open the space for voices supporting programs similar to those I have listed; let’s call such voices “the left.” Volatility creates opportunity, especially if the Democrat establishment, which puts markets first and opposes all such programs, isn’t allowed to get back into the saddle. Eyes on the prize! I love the tactical level, and secretly love even the horse race, since I’ve been blogging about it daily for fourteen years, but everything I write has this perspective at the back of it.

81 comments

  1. lyman alpha blob

    Semi-driverless seems to be a fairly accurate term here. My question is how is that all that much different than checking one’s mobile while driving a regular car and only paying attention to the vehicle when absolutely necessary as is currently the fashion?

    Jetpacks or bust!

    Reply
    1. subgenius

      Semi-driverless car is a great descriptor for many of the hordes of vehicles circulating hell-A’s sclerotic arterial routes.

      Reply
    2. RepubAnon

      If cab drivers are replaced by self-driving cars, who will Tom Friedman interview for pithy insights into the way the world works?
      (/snark)

      More seriously, I’m not sure anyone’s thought this through. What happens when a long line of lidar/radar-equipped cars are all sending out laser/microwaves in the same direction at the same time? Do we run a first-year physics experiment into wave frequency additions/interference?

      What happens when a nation-state (or a skilled hacker) sends out the command for the cars to misbehave – either by bricking them, or causing collisions? How about GPS jammers?

      It’s easy to be dazzled by new shiny technology – right up until it bites you. Remember how the computer industry was thought to be “clean” because there were no smokestacks? Now, we worry about the hazardous solvents those “clean” companies released into the ground water…

      Reply
      1. rusti

        What happens when a long line of lidar/radar-equipped cars are all sending out laser/microwaves in the same direction at the same time? Do we run a first-year physics experiment into wave frequency additions/interference?

        I’ve seen some buzz from radar suppliers about introducing modulation techniques (in a similar manner to how 3G works) to mitigate cross-talk, but I’m not sure how far they’ve come on implementing this and how scalable they are. No idea how LIDAR would handle this, it’s not really my area.
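        The idea behind those modulation techniques can be shown in a toy code-division sketch (this is an illustration of the principle only, not any radar supplier’s actual scheme, and real waveforms are far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096  # samples in one measurement

# Each radar modulates its transmission with its own pseudo-random +/-1 code,
# in the same spirit as CDMA in 3G cellular systems.
code_a = rng.choice([-1.0, 1.0], size=N)
code_b = rng.choice([-1.0, 1.0], size=N)

# Radar A's receiver picks up its own echo plus radar B's interference.
received = code_a + code_b

# Correlating against its own code recovers A's echo at full amplitude,
# while B's uncorrelated code averages toward zero (~1/sqrt(N)).
own_echo = np.dot(received, code_a) / N
interference = np.dot(code_b, code_a) / N

print(abs(own_echo - 1.0) < 0.1)   # True: own echo recovered
print(abs(interference) < 0.1)     # True: cross-talk suppressed
```

        The suppression scales with code length, which is one reason it is an open question how well this holds up with a whole avenue of radars transmitting at once.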

        GPS jamming and spoofing is a hot topic, especially in the defense industry after Iran tricked a US drone into landing on hostile ground in 2011. There are a lot of things you can do to identify spoofers or jammers and mitigate them, but I don’t think there’s any magic solution. Which is bad news for Pokemon Go

        Reply
      2. dee dee

        I think this is the worst idea ever. The hackers will have a field day with this. And privacy? What’s that? Everyone has lost their minds. Its just More tracking—- someday these young people will wake up and it will be too late!
        Technology is not looking out for US its just big brother watching every single thing. Everyone has lost their mind.

        Reply
  2. Disturbed Voter

    Marketing fraud, like all of AI. It is possible to have trams … even trams that look like cars, that follow fixed routes (on wheels instead of rails). Think Uber without exploited Uber drivers. And those car/trams are coordinated with the street signals. But there is a liability issue with driverless trams today … which is why there is always a driver … that way the driver can be blamed, not the tram manufacturer/owner.

    Reply
      1. WobblyTelomeres

        Actually, they handle the “people popping out between cars” situation fairly well. At least, the one I rode in did (and that was 7 years ago). It was CMU’s DARPA Urban Challenge vehicle on their test track in Pittsburgh and that was one of the specific situations they had to deal with. Was pretty DAMN cool!

        Reply
          1. WobblyTelomeres

            Game as “jump out in front to watch the overfed passenger bounce off their shoulder belt as the vehicle slammed on the brakes”??? Yep. You’re right. Safer would be to throw a beachball into traffic. Would have the same effect.

            I’m not quite sure if you are trying to make a point, though.

            Reply
            1. Chris

              agreed, which is one of the reasons this will only happen if the roads are given over to the robots and the humans are kept well away

              Reply
            2. Lambert Strether Post author

              Like that, yes. From the NC post linked to above:

              And while such hacking represents a worst-case scenario, there are many other potentially disruptive problems to be considered. It is not uncommon in many parts of the country for people to drive with GPS jammers in their trunks to make sure no one knows where they are, which is very disruptive to other nearby cars relying on GPS. Additionally, recent research has shown that a $60 laser device can trick self-driving cars into seeing objects that aren’t there. Moreover, we know that people, including bicyclists, pedestrians and other drivers, could and will attempt to game self-driving cars, in effect trying to elicit or prevent various behaviors in attempts to get ahead of the cars or simply to have fun. Lastly, privacy and control of personal data is also going to be a major point of contention. These cars carry cameras that look both in and outside the car, and will transmit these images and telemetry data in real time, including where you are going and your driving habits. Who has access to this data, whether it is secure, and whether it can be used for other commercial or government purposes has yet to be addressed.

              In other words, the better that robot cars are at reacting to the environment, the more people will be able to game them by manipulating the environment.

              As Clive points out, that logic leads to keeping robot cars on their own roadways, far away from people. So why not just build trains?

              Reply
      2. Vatch

        How about when some pigeon poop gets on the robot’s optical sensor?

        Maybe this is one of the positive aspects of the huge reduction in flying insects — there’s less likelihood of something splatting on the sensor. Still, I don’t think that robo-cars are important enough to compensate for a large scale ecological crisis.

        Reply
        1. rusti

          For a mining application where there was lots of dirt and dust everywhere, a truck I saw had a little mini washer-wiper system for the LIDARs, kind of like some trucks have for their headlights. Different sensors all have their weaknesses, and I think snow is one of the most brutal things. GNSS and inertial navigation can still help a vehicle orient itself, but lane detection cameras become worthless, lidars and radars can get confused, and cameras will have a hell of a time especially when it’s dark.

          Reply
      3. subgenius

        Take the initiative and drop ol’ papa legba’s veve on them. The guardian of the crossroads is not amused by these soulless contrivances

        Reply
    1. Juan Tootreego

      Has anyone tested robo-cabs with blotto drunks or dementia sufferers? Every human cab driver has faced these, and it is taxing even with intuition as a guide.

      Reply
      1. Mike Smitka

        As you suspected, that’s indeed a common fallacy: that car sharing / ride hailing won’t need labor. I sat in a cafe next to an Autolib’ bank of recharging stations in Paris: a constant parade of staff cleaning cars, which required that the driver be dropped off and then picked up because they couldn’t do a proper cleaning on the street. And then (electric but not autonomous cars) they had to move them from low to high demand locations, as the model is pick up anywhere, drop off anywhere. All very labor intensive. Not conducive to actually making any money. Since EVs are expensive, Autolib’ also has a slow replacement cycle, so most of the cars are visibly very well used, er, abused.

        Reply
    1. Lambert Strether Post author

      I stand corrected (I misread torturous as tortuous. English is the best language!)

      Your comment does make me wonder, however, whether Manhattan’s grid pattern is the sort of special case that might make programming/training a lot easier, and the whole testing program a special case, since most of the turns are 90 degrees. Rather like testing self-driving Indianapolis 500 cars out on the track, where they only turn left, or is it right?

      Reply
      1. ryan

        I don’t know! I am not a programmer by any means, but I can’t help but think that city congestion, in general that is, can lead to unforeseen problems. I grew up out in a more country/suburban sprawl/beach-tourist-trap hybrid area where it was relatively easy to navigate compared to the urban center I now inhabit. What I deal with now-with turning lanes that jump out of nowhere, cars lined up on the side of street at unpredictable locations (sometimes the rightmost lane just becomes a parking lane by some sort of default), not to mention a few five point intersections-and this is all in a downtown, relatively grid-like area. I think maybe a birds eye view of the situation fails to take in into account some of the on-the-ground complexity of the street.

        Also-New York traffic patterns may be a little more leveled out, compared to where I am (triangle area North Carolina), which is just exploding in population. There’s almost no public transit to top that off.

        Reply
      2. rusti

        I think Manhattan is a pretty ambitious test environment, even with the way they’ve artificially bounded the problem. It’s an urban canyon and crowded with mixed traffic (pedestrians, bicyclists).

        I’m curious to know if they plan on testing these in the dark or in the snow.

        Reply
  3. voteforno6

    I’m still trying to figure out what transportation problem is actually going to be solved by these vehicles.

    That being said, it seems like people think that these will be just like regular cars, except that they’ll be able to drive on their own. It seems like the model being implemented will require significant back-end support, just to replicate what one human driving a car can do. How is that more efficient? For that matter, how is that safer? It’s just introducing more things that can break.

    Reply
    1. Lambert Strether Post author

      > what transportation problem is actually going to be solved

      Not being with other people, especially cab drivers.

      There is the argument that 37,000 lives due to auto accidents will be saved. I should unpack that at some point. I would start by pointing out the rural states are over-represented in drunk driving deaths, and that robot cars aren’t likely to make it to rural states, if any significant infrastructure investment at all is required.

      Reply
      1. voteforno6

        Those numbers are overstated, I think. They’re assuming that these robot cars will always operate perfectly, and in perfect conditions, while comparing it to human drivers operating in all conditions.

        Reply
    2. subgenius

      what transportation problem is actually going to be solved by these vehicles

      It’s not solving a problem; it’s 3 card Monte…look at the shiny geegaw your manipulator insists is your future while he and his associates fleece you blind.

      You may recognize them by their snappy and frequently slightly reichy/bond-villainy names….Otto / Uber / Elon , etc

      Reply
    3. rd

      I can think of five major ones right off the bat:

      1. Elderly and handicapped drivers (including the blind) can be mobile at low cost;
      2. Drug/alcohol impaired driving (a major cause of accidents) would largely go away;
      3. Sleep-deprived driving (another major cause of accidents) becomes a thing of the past;
      4. Texting while driving (another major cause of accidents) would become irrelevant; and
      5. The car could go park itself away from where you are dropped off – searching for parking spots near your destination is a major source of urban traffic.

      Reply
      1. Lambert Strether Post author

        That’s a good list, but I can see problems:

        1. Elderly and handicapped (blind) drivers won’t be able to take over in emergencies, i.e., when lives are lost, so Level 4 won’t help them. And Level 5, so far, is a fantasy.

        2. Drunk driving deaths aren’t evenly distributed geographically. I doubt the flyover areas where they disproportionately occur will get the infrastructure improvements needed to make the algos work (I’m recalling the company President who complained that his robot cars didn’t work because the white lines on the road were faded or non-existent)

        3. On sleep, same argument as #1.

        4. True, with a decrement for missing an emergency in Level 4. And surely the clever engineers in Silicon Valley could disable text functions when the phone “senses” it’s in a vehicle, so we don’t have to spend squillions of dollars on robot cars?

        5. I’m not sure the driver would be comfortable with that. I suppose if the robot car is hired by the hour, who cares? But sending a very expensive piece of property I own off to park itself, I know not where, seems like a risky proposition. And that’s assuming the car seeking a parking spot isn’t gamed by, say, kids painting white lines on the road to lure it into a “robot car trap,” and then stripping it.

        Reply
    4. PlutoniumKun

      The original autonomous project, Europe’s Eureka Prometheus Project, was intended to address congestion. I recall sitting in lectures from traffic engineers back in the late 1980s being told that this was the alternative to building more highways. The promise was that autonomous cars would move in a more rational and predictable manner than human drivers, resulting in significant capacity increases for existing highways. The selling point was that the cost of implementing the project would be off-set by not having to build more road capacity. Nobody seemed to consider that one obvious result would be people choosing longer commutes. But road engineers never liked having it pointed out that increasing traffic speeds only resulted in greater numbers of cars.

      Safety was another issue that was regularly brought up, although from my memory of the lectures, it was very much secondary to getting people to work quicker and more reliably.

      Prometheus was a public sector initiative, heavily led by transport engineers, hence the extreme focus on ‘efficiency’ in a narrow sense. I think it was only in the last decade that silicon valley clued into the potential for making money from it.

      Reply
  4. jCandlish

    Flying into Instrument Meteorological Conditions requires a great deal of mandated redundant equipment, in addition to a rated pilot.

    The lack of published equipment minimums is a clear tell that nobody knows what they are doing.

    Where is the NTSB on accident and incident reporting requirements?

    Airplanes require certified mechanics. Instrumentation requires frequent calibration and certification intervals.

    So many missing details …

    Reply
      1. jCandlish

        So the standardization necessary to make this scheme work will be the sole property of whichever entities are able to withstand the liability challenges their reckless course of invention causes?

        Once proven on the public roadways it would be most efficient for the competitors to use the same open system. Otherwise they have to solve the problem of black box algorithms 2nd guessing each other.

        But maybe market conditions make the technically superior cooperative solution impossible?

        Reply
  5. Synoia

    Is there a correlation between Level 4 robot cars and the reported drop in piloting skills on airplanes with robot flying (aka auto-pilot)?

    The humans become less able to respond in an emergency (skills require practice, and decay without continuing use).

    Expecting the humans to both be able to snooze yet also requiring the human to be vigilant if the robot meets a situation it cannot handle appears to me to be such a contradiction as to make potential accidents worse.

    Or am I completely missing the point?

    Reply
      1. Juan Tootreego

        Two articles: the first is on Nvidia’s new self-driving computer from 1/2016, bragging it has the power of 150 MacBook Pros. Next up is a similar article from earlier this month on the NEW New Nvidia auto computer, bragging that it has 10 times the power of the earlier one. In essence, that’s an admission from Nvidia that, despite their expertise, they underestimated the difficulty of the job by at least a factor of 10.

        The article also mentions that “Most car companies have said they will probably skip Level 3 and 4 because it’s too dangerous, and go right to Level 5.” Meaning the levels are hogwash.

        Better: Level 1: vehicle operating in a cooperative environment. Amazon warehouse robots might be a good example of this, but note lots of room for improvement. Level 2: vehicle operating in a benign environment. Your example in the OP is an excellent example of this:

        That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.

        The fact there are not multiple commercial deployments of such systems speaks volumes on where development actually stands. Level 3: Operation in the natural world. That is to say, the vehicle is driving on crowded icy roads right after a football game, surrounded by drunks, dogs and revelers, while at the same time Ukrainian mobsters are hacking into your car’s computer to mine Bitcoin… :-)

        Reply
        1. Synoia

          That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.

          Isn’t that a train?

          Reply
          1. Lambert Strether Post author

            Not really because the scale is different.

            Nevertheless, it would be funny if GM bet the company on what turns out to be a niche market: Self-driving golf carts in retirement communities, driving very slowly.

            Reply
    1. JerryDenim

      Great point, but I’m sure the greed-head/techno-utopians behind this robot car push are unfazed. Humans with severely degraded or never-cultivated driving skills means ‘TINA’ when it comes to level 5 cars. Widespread adoption of level 4 cars creates an impetus for level 5 automation as humans lose their driving skills. The interesting question is how many people die in automation related accidents while the technology for safe, functioning autonomous cars is developed? Does society have the tolerance for a sizable number of deaths attributable to buggy automation and end-user automation screw-ups? Can the court system or our corrupt elected officials stop the Waymos and Musks of the world? Sounds like we are going to find out.

      As an airline pilot flying new model aircraft equipped with the latest automation technology the aviation world has to offer, I must admit I am skeptical. After decades on the market and constant refinements, the automation technology in airliners is quite buggy and frequently requires human intervention. The automation is just good enough to lull a trusting or lazy person into not paying attention, which is precisely where the danger lies. Based on my experience with automation and human factors I see a very painful rollout of this new technology. Mix in some shameless advertising and grandiose marketing promises, lax regulation, a poorly understood complex system embraced by a distracted and self-medicated public and it’s easy to imagine a bloodbath. So, yes, count me among those skeptical of this entire experiment.

      Reply
      1. Lambert Strether Post author

        > The interesting question is how many people die in automation related accidents while the technology for safe, functioning autonomous cars is developed?

        As many as necessary, Jerry! What’s wrong with you?

        Reply
  6. Tom

    The press release points out that:

    “New York City is one of the most densely populated places in the world and provides new opportunities to expose our software to unusual situations, which means we can improve our software at a much faster rate.”

    I say bring those robot cars out to rural Michigan for some real novel situations. How about trying to navigate a severely pot-holed dirt road that requires slalom-like steering, while encountering a large combine coming at you from the opposite direction, just after a momma deer has leapt across the road but before its trailing fawns have made the crossing. That’s not unusual – that’s a morning commute.

    1. Lambert Strether Post author

      One of my more cynical scenarios for infrastructure spending is that much of it goes for improving roadways in big cities, so we can have lots of robot Ubers (and not take the subway or buses with smelly proles). Your unusual situations in Michigan would then remain exactly the same. No robot cars for the flyover states!

      1. Juan Tootreego

        My own cynical suspicion is that this is all an immense propaganda campaign to get trillions of public dollars spent on infrastructure that will allow the less-than-autonomous vehicles, that can actually be produced (eventually), to operate and make scads of $$$ for their makers.

        1. rusti

          I think you attribute far too much competence to the people putting out press releases. My experience in the industry is that there are a whole lot of people trying to climb the ladder and be “visionaries” who aren’t all that interested in learning about the technology itself, and they really believe the absurd stuff they say at conferences or in press releases.

          There’s a bit of an interesting dynamic with “robot driving” technology, in that it’s super easy to build demonstrators that show something cool (2-3 engineers with the right competences could implement something like Mercedes’ autonomous runway clearing system in less than a year) but it is INCREDIBLY difficult to build robust systems that will operate on public roads under many different conditions, and harder yet if malicious actors are going to be taken into consideration. I always grit my teeth when I read press releases about stuff I’ve worked on, because the caveats are inevitably missing and no one sees all the ugly workarounds that went into getting the demo working.

      2. subgenius

        And in what is becoming a predictable pattern, those damn russkies will manage to negate all that perfectly executed planning, technology, insight, dedication and funneling of billions to the deserving few through the use of some small band of underfunded trolls armed with all the surplus street paint they could acquire.

        I call for the immediate banning of all white materials that could threaten the right to be driven by an infallible autonomous agent the Creator and owner of which you have waived any rights to sue (see section 3200.12 sub sections g-z of the end user loser license agreement you agreed to when purchasing this service) before a single one of your Betters is delayed in their urgent and important business of running you down.

  7. Michael Fiorillo

    No, you’re right: Air France flight 447, which crashed off the coast of Brazil a number of years ago, went down for pretty much exactly the reasons you describe.

      1. DonCoyote

        I used to have a link to a nicely written article about the Air France crash discussing autopilot/autodriving design. (And the AF pilots had 2+ minutes to figure out the right thing to do {dive} and still did the wrong thing. I don’t think autodriving “failures” will allow the same leeway.) Of course I can’t find it, but while searching the web for it, I came across this one:

        Artificial Stupidity: Fumbling The Handoff From AI To Human Control, with some nice “money” quotes:

        “That human-machine handoff is a major stumbling block for the Pentagon’s Third Offset Strategy, which bets America’s future military superiority on artificial intelligence.” Boy I feel *so* much safer now.

        “…the combination of human and artificial intelligence is more powerful than either alone. To date, however, human and AI sometimes reinforce each other’s failures.” Mutually assured destruction?

        “Handing off to the human in an emergency is a crutch for programmers facing the limitations of their AI, said Igor Cherepinsky, director of autonomy programs at Sikorsky: ‘For us designers, when we don’t know how to do something, we just hand the control back to the human being… even though that human being, in that particular situation (may) have zero chance of success.’”

        “You can get lulled into a sense of complacency because you think, ‘oh, there’s a person in the loop,’” said Scharre. When that human is complacent or inattentive, however, “you don’t really have a human in the loop,” he said. “You have the illusion of human judgment.”

        “The inherent difficulty of integrating humans with automated components has created a situation that has come to be known as the ‘dangerous middle ground’ of automation – somewhere between manual control and full and reliable automation.” It’s the worst of both worlds.

        So yeah level 4 is great…until it’s level 0 and you’re not paying attention and have forgotten how to drive. How many lives is this supposed to save again? Illusion of judgement = delusion of benefit.

        1. flora

          Great comment.

          “Handing off to the human in an emergency is a crutch for programmers facing the limitations of their AI, said Igor Cherepinsky, director of autonomy programs at Sikorsky: ‘For us designers, when we don’t know how to do something, we just hand the control back to the human being… even though that human being, in that particular situation (may) have zero chance of success.’”

          Yep, program computers to navigate the routine tasks. Train human users to expect situational expertise/”awareness” from the computers. Combine. Yikes!

          1. flora

            adding: If companies called this Advanced Automated Switching (A2S) instead of Artificial Intelligence (AI), they would be more accurate, even if the more accurate name had less PR woo. Greater accuracy in naming would lead to clearer thought about both deployments and human (pilots, in this instance) training in the use of the machinery, imo.

        2. Lambert Strether Post author

          So robot cars will make us stupider drivers by degrading our driving skills, while simultaneously handing us control during sudden emergencies. Surprise! What could go wrong?

        3. XXYY

          Artificial Stupidity: Fumbling The Handoff From AI To Human Control

          Some good things in here, though the problems being described are generally familiar. E.g.:

          “Tesla was like, ‘oh, it’s not our fault….The human driving it clearly didn’t understand,’” said Scharre. Given normal human psychology, however, it’s hard to expect Brown to keep watching Autopilot like a hawk, constantly ready to intervene, after he had experienced it not only driving well in normal conditions but also preventing accidents. Given Brown’s understandable inattention, Autopilot was effectively on its own — a situation it was explicitly not designed to handle, but which was entirely predictable. “You [the system developer] can get lulled into a sense of complacency because you think, ‘oh, there’s a person in the loop,’” said Scharre. When that human is complacent or inattentive, however, “you don’t really have a human in the loop,” he said. “You have the illusion of human judgment.”

          This is a good point, rarely heard:

          “One of the most common myths about automation is that as a system’s automation level increases, less human expertise is required,” he wrote. The opposite is true: “Operators often must have a deep knowledge of the complex systems under their control to be able to intervene appropriately when necessary.”

          I think the point here is that *automated* systems are necessarily *more complex* systems, not only in terms of parts count but also in terms of the variety and difficulty of interactions expected of the operator, and the (explicit–see above) assumption that the operator is expected to be the backstop for all manner of system faults and situations that the system designers could not automate well. So an automated system requires *more and deeper* training. This runs exactly contrary to the normal capitalist assumption that automation allows dumber/cheaper humans to man the system, or ideally no humans at all.

          This is really insightful:

          “How do you keep the human involved, keep the human creativity, judgment, compassion?” asked Matt Johnson, a Navy pilot turned AI researcher, during a recent conference at the Johns Hopkins University Applied Physics Laboratory. “A lot of times, we think about the goal as (being) to make autonomous systems. We want autonomous cars, we want autonomous drones, whatever the case may be. We want to take a machine that’s dependent on people and make it independent.” “I would suggest that’s not the right goal,” Johnson said. “What I want is an interdependent system that works with me to do what I want to do.”

          You can read far and wide in automation literature and not find this point: The proper goal for automation is to *complement* the human operator and thus make the overall system *better*, not haphazardly *displace* the behavior of the human operator along with his or her strengths. This obviously requires deep and creative thinking, and a willingness to be honest about what both humans and machines do best.

          1. DonCoyote

            Excellent analysis XXYY.

            I finally found the original article I was searching for: Crash: how computers are setting us up for disaster. Just a few revisits of the themes:

            “This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.” ”

            So, summarizing the paradox of automation:
            1) Helps the less-skilled still do the task (under normal conditions)
            2) Removes the need for practice–so the less skilled do not become more skilled and the more skilled become less skilled
            3) Fails in “unusual” situations, precisely when a more skilled/most skilled response is needed.
            4) Reliance on algorithms blunts our efforts to solve the problem other ways (since it solves some of the problems/part of the problem), which leads to even greater reliance on algorithms, etc.

            “We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. This is not to say that we should call for death to the databases and algorithms. There is at least some legitimate role for computerised attempts to investigate criminal suspects, and keep traffic flowing. But the database and the algorithm, like the autopilot, should be there to support human decision-making. If we rely on computers completely, disaster awaits.”
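That “10,000 times as many mistakes” line is just the ratio of the two factors quoted, and the arithmetic checks out. A minimal sketch (nothing beyond the two quoted factors is assumed):

```python
# Back-of-the-envelope check of the "10,000 times as many mistakes" claim.
accuracy_factor = 100        # computer is 100x more accurate per decision
speed_factor = 1_000_000     # ...but makes a million times more decisions

# Per-decision error rate shrinks by accuracy_factor, while the decision
# count grows by speed_factor, so total mistakes scale by their ratio.
mistake_ratio = speed_factor / accuracy_factor

print(mistake_ratio)  # 10000.0
```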

            “An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?

            When humans are asked to babysit computers, for example, in the operation of drones, the computers themselves should be programmed to serve up occasional brief diversions. Even better might be an automated system that demanded more input, more often, from the human – even when that input is not strictly needed. If you occasionally need human skill at short notice to navigate a hugely messy situation, it may make sense to artificially create smaller messes, just to keep people on their toes.”

            So maybe we should regard Level 2 as the highest we *should* go, and require a certain % of driving to be level 1, and work to have the best Levels 1/2 we can?

  8. MichaelSF

    Chevy Bolts will roam

    From an online dictionary: Roam: move about or travel aimlessly or unsystematically, especially over a wide area. “tigers once roamed over most of Asia”

    Using roam to describe a tightly-controlled driving area seems inappropriate.

  9. Tom

    Took a look at the State of New York applications for Autonomous Vehicle Demonstration and Testing and found a few more interesting tidbits.

    First of all, page two of the application clearly specifies what Levels are allowed to be demo’d or tested:

    SAE International’s Level of Automation (1-4 only) per J3016TM rev 09/2016

    So it may not even be legally possible to test fully automated, Level 5 vehicles in New York at this time.
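For reference, SAE J3016 defines levels 0 through 5, so the application’s “1-4 only” language excludes full automation by construction. A hypothetical sketch of that gating (the level names are SAE’s; the function and variable names are made up for illustration):

```python
# SAE J3016 automation levels (standard names).
SAE_LEVELS = {
    0: "No Automation",
    1: "Driver Assistance",
    2: "Partial Automation",
    3: "Conditional Automation",
    4: "High Automation",
    5: "Full Automation",
}

# The NY application form permits demonstration/testing of levels 1-4 only.
ALLOWED_FOR_NY_TESTING = {1, 2, 3, 4}

def may_test(level: int) -> bool:
    """Return True if the given SAE level can appear on the NY application."""
    return level in ALLOWED_FOR_NY_TESTING

print(may_test(4))  # True  -- what Cruise proposes
print(may_test(5))  # False -- "fully autonomous" is off the form entirely
```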

    To increase the gap between a real-world, Level 5 test and what is really being proposed here, Part II of the two-part application states:

    The route shall NOT include construction zones or school zones.

    What fun is that? (And how hard is it going to be to find a route without those two complications?)

    Bonus factoid: The NY State Police Dept. is going to bill testers such as GM $92.73 per hour ($131.67 overtime) plus 53.5¢ per mile to supervise the ongoing festivities.
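Those rates make the supervision bill easy to estimate. A sketch for a hypothetical eight-hour, forty-mile test day (the hours and mileage are invented; the rates are the ones quoted above):

```python
# NY State Police billing rates from the testing application.
RATE_PER_HOUR = 92.73       # regular time, USD
OT_RATE_PER_HOUR = 131.67   # overtime, USD
RATE_PER_MILE = 0.535       # 53.5 cents per mile

def supervision_cost(regular_hours, overtime_hours, miles):
    """Estimated police supervision bill for one test outing."""
    return (regular_hours * RATE_PER_HOUR
            + overtime_hours * OT_RATE_PER_HOUR
            + miles * RATE_PER_MILE)

# Hypothetical day: 8 regular hours, no overtime, 40 miles driven.
print(round(supervision_cost(8, 0, 40), 2))  # 763.24
```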

    1. Lambert Strether Post author

      Excellent catch. Thank you!

      So, apparently these things are going to be ready in a couple of years even though they can’t be legally tested in school zones. I can’t find a map of Manhattan school-zoned streets but here is a map of the schools:

      So presumably the five-square-mile area won’t include streets near these schools. (Interestingly, a search on “schools” in Google Maps produced charter schools, but not all the public schools in the map above; and the search on “public schools,” as you see, doesn’t include elementary schools.)

      Makes me think that mapping needs to be done from scratch.

      1. Michael Fiorillo

        The public school density in Lower Manhattan is greater than that shown on the map by at least a factor of three. Add in the few remaining Catholic schools, charter and other private schools (and no, they’re not public schools, no matter what their promoters say), and there’s no way to maintain the prohibition and conduct the tests simultaneously. They could perhaps skirt that by conducting tests on weekends when school is out, but that negates the whole point of “real-world” testing.

        Every day that goes by makes me think this whole machine-governed car thing, at least according to the timetable(s) the Hype Machine is using, is part hustle, part mass delusion, part sinister social/urban engineering (readers please add to the list) …

      2. Tom

        Overlay a map of Manhattan road closures and you’re really talking some fun and games. GM is really going to need to do some next-level planning. According to the application, the testers of the robot car must submit detailed route specifications, including:

        … date, time, origin, destination, the sequence of roads on which it intends to travel, and total routing distance in miles to the nearest 1/10 mile.

      3. rusti

        Makes me think that mapping needs to be done from scratch.

        This is the idea, yeah. To implement a base layer that maybe resembles something like you see there, and to have layers on top with things like curbs, lamp posts, lane markings, buildings and other things with high-precision that correspond to what a vehicle’s cameras, radars and lidars and such will perceive. Then each vehicle should synchronize with the map database and upload the picture of the world that it perceives to identify map changes.

        It’s probably going to be extremely difficult to do this in places like construction zones (part of the reason why they’ve artificially changed the inputs in this article), and while you can maybe use it for orienting yourself in absolute coordinates, it’s no guarantee that a new obstacle won’t pop up for the first time. Plus doing any sort of meshing is going to be rife with bugs for the infinite corner cases of what sensors can spit out.
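A toy illustration of that map-synchronization idea (every name and coordinate here is hypothetical; real HD-map pipelines are vastly more involved):

```python
# Toy map-change detection: compare landmarks the vehicle perceives
# against the high-precision layer stored in the map database.
stored_layer = {          # landmark id -> (x, y) in some map frame
    "lamp_post_17": (10.0, 4.2),
    "curb_segment_3": (12.5, 4.0),
    "lane_marking_9": (11.0, 2.1),
}

perceived = {             # what cameras/radar/lidar report on this pass
    "lamp_post_17": (10.0, 4.2),
    "lane_marking_9": (11.0, 2.1),
    "traffic_cone_1": (12.4, 4.0),   # construction! not in the map
}

# Landmarks in the map but not perceived, and vice versa, get flagged and
# uploaded so the map database can be reviewed for changes.
missing = set(stored_layer) - set(perceived)
new_objects = set(perceived) - set(stored_layer)

print(sorted(missing))      # ['curb_segment_3']
print(sorted(new_objects))  # ['traffic_cone_1']
```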

  10. Westcoaster

    I think a good part of this obfuscation (besides the laziness of the MSM) is designed to make “driverless cars” seem a foregone conclusion in the minds of the public. No way to fight progress marching on, etc. The decision has been made that we want and need this, regardless of our own opinions.
    Seems inevitable to me that some terribly tragic series of accidents will follow, due to owners stretching the capabilities of their Level 3 car to Level 5, or a glitch in the software causing the car to enter wrong-way traffic on a freeway or turnpike.
    And God help us when they really get rolling with those driverless semi trucks.

    1. Lambert Strether Post author

      “Robot Car Killed My Baby!”

      Which will actually happen. Of course, auto accidents kill people, too, but I don’t for a moment believe a real-world robot car implementation will eliminate all those deaths.

      1. flora

        And “who” will be liable? The car company, the driver, or the other person/car? What will insurance rates be on “AI”-driven cars? Is the whole AI thing an arbitrage on “who” (driver/car/other) is responsible? If courts defer to claims of algorithm perfection, and two car companies make that claim, how will courts decide? Etc. (I begin to think basic coding in any language you like should be part of general education, the way English and maths and history are part of general education.)

        1. flora

          adding: To be clear, I think a general coding course should be required in general high school and college education to demystify the claims made for computer algorithms. Currently, too many in positions in law and government do not understand the arguments (through no fault of their own; it is the technical transition of the times), and cannot subject the arguments and claims to the “reasonable man” test, which in essence is the “common sense” test.

            1. Synoia

              > A general coding course is a course in LOGIC. Period. End of Story

              Not entirely.

              It is also a course in clearly expressing the logic, testing its implementation, and integrating it into the real world (making it usable).

  11. Glenn Mercer

    Okay, I am going to lob something in here, something I have brought up before in discussions about autonomous vehicles. If I am talking to AV zealots, they wave me away (or worse). But I would like a considered answer.

    In their fullest flowering, AVs are truly driverless. As in (an oft-cited example): “You can send the car to your daughter’s school, empty, and she can get in and be driven back home.” (Leaving aside what happens when the offspring refuses to get in the car, for now…)

    If this kind of car ever exists, why can I not place an IED (triggered remotely or via timer) on the front seat (or in the trunk), and send the car off to some crowded street in New York, Delhi, or Manila?

    If I were being hopelessly snarky, I guess I could say that this sort of automation puts suicide bombers out of work. But more seriously, is an AV the terrorist’s perfect weapon?

    1. cocomaan

      Cars are one of the most highly regulated areas of modern life for a reason and that reason is that they were and are part of most criminal enterprises. Ideas like “shotgun” and “getaway driver” are pretty much ingrained in our language by now as weaponized uses for cars.

      But you’re thinking a step too far. You could just drive the autonomous car into a crowd of people. Or the kid could. Or the computer could. Or a hacker could. Or whatever.

      I don’t see autonomous cars existing in twenty years. One is going to plow through a marathon and that’s going to be the end of it.

      1. flora

        i.e., insurance companies are going to run the probabilities through their spreadsheets and set insurance premium rates accordingly. imo.

  12. Mike Smitka

    We are now at what I call Peak Auto. That shows up in the Boards of car companies pushing CEOs to try to catch the euphoria around Tesla and Uber; autonomy is another such push.

    In a previous Peak Auto car companies (Japan, Europe, US) all bought car rental companies. All then unloaded them, at a loss. This time they’re buying taxi companies, which I expect to end even worse, as per your reporting on Uber.

    But GM’s shares have risen with the ongoing tech makeover, and that will get other Boards to double down on their bets. The reality is the companies are all caught in a prisoner’s dilemma, spending billions now but with any revenue stream years away, and profits even further. After all, if everyone has the latest ADAS features, automated emergency braking and lane keeping and all that, then no one can charge a premium for them. And they’re fast becoming standard features, not ways to differentiate your products.

    Raise costs, not revenues is a bad business model. What you are describing goes way beyond sloppy journalism.

    1. Lambert Strether Post author

      > Raise costs, not revenues is a bad business model. What you are describing goes way beyond sloppy journalism.

      Everything goes way beyond sloppy journalism :-) But I wanted to get a reading on how far I could trust the coverage. And the answer is: not at all. I mean, when the Daily Mail blows away the competition from the Times, the FT, Reuters, etc., we truly are living in Bizarro World.

      > Raise costs, not revenues is a bad business model

      I wonder if Uber has so many internal pathologies because, given its business model, the only way to make money is to cheat. And I wonder how many other Silicon Valley companies are like that (Amazon getting its start by evading state and local taxes, for example).

  13. SteveB

    I seem to recall a protest right after the national 55 mph speed limit was imposed following the oil shortages in the ’70s. Three fellows drove their cars side by side at exactly 55 mph across the country on Rt. 80, not letting anyone pass them. The traffic jam behind them was miles and miles long. People were furious: late to work, trucks delayed, etc. I can’t help but think a similar thing will happen with all these computer-controlled cars driving at the legal limit.
    Here in Jersey the GSP is posted at 65 mph… but if you’re not going 80 you’ll get passed… somehow it works…

  14. JBird

    Earthquakes, floods, fire, and riots?

    I remember driving after the last serious Bay Area earthquake, and between the downed, damaged, blocked, or coned-off roads, freeways, and overpasses, the fires, the lack of power, plus days (weeks?) of no traffic lights, just how would these autonomous vehicles have done? It was not even that bad an earthquake; rather a mild one, for all that some died. How would these nice self-driving cars do in those situations?

    I understand the dream. Heck, how many science fiction stories have them? It would be so cool. But. Instead of these toys, what about catching up on all the deferred maintenance like potholes, doing all the repair work, actually doing all the planned road, freeway, and mass transit (including bus, BART, and light rail) expansions that have been in the works for decades, some since the 1960s, and maybe rebuilding and expanding the problematic conventional rail, as well as building a complete high-speed rail system? Oh, and doing the same with the energy grid, starting by jailing the entire senior management of PG&E, so it could reliably function and support the increased energy demands.

    If we did all this, I could see working on autonomous vehicles. To make them work at all requires at least some real work on the roads. Maybe they would work at least in some areas. Bring back mobility to some. And it would be cool to see.

    But all of that would require taxes, bonds, and serious, detailed, long-term planning, funding, and construction, carried out like adults in a functional government at the state, regional, and municipal levels. It’s much easier to do light fluff research, and investment, by our Lords of Silicon, and fluffier advertising by the “news” media.

  15. Joel

    Can you imagine what amazing public transportation we could have with existing Level 4 technology? Automated vans running in dedicated lanes would allow more responsive service (fewer stops mean faster trips) on existing routes and new service on routes and in communities that can’t support normal buses.

    In the wealthy Anglosphere countries we still live in a profit-driven, rather than a people-driven, society, and so that won’t happen. But maybe Denmark or somewhere? Or if Corbyn wins big?

