Hawaii Nuke Scare Due to Lousy User Interface

Everyone seems eager, perhaps too eager, to move on from the nuclear attack false alarm in Hawaii over the weekend. It is lucky that nothing worse happened. Some people on the scene reported that drivers were going over 100 miles an hour trying to get to safety. Fortunately, there seem to be no reports of traffic deaths, or of heart attacks from panic or over-exertion in the rush to get to a less exposed spot.

Some media outlets complained that Trump was out golfing during this scare and so missed it entirely. They happen to be many of the same media outlets that worry about Trump being a hothead who might launch a nuclear attack out of pique. Funny that they aren’t similarly concerned that he or the high command might take aggressive action based on misinformation.

And before you contend that the false alarm was contained to the civilian alert system, consider what might have happened had the same mistake taken place on the mainland, say in California. The national media would have picked up and amplified the alert quickly. That would have created more panic and pressure to Do Something. Now, as with Hawaii, the odds still favor that someone would have verified the report before taking any step, but more people involved and more moving parts increase the odds of screw-ups.

What makes this case particularly disconcerting isn’t simply that it was the result of human error, but of design-enabled human error. Lambert, who has worked in a lot of software-related jobs, and in particular has done both UX and UI design, was especially exercised that a botch of this sort occurred. One possibility is that the poor design resulted from lousy specs, and Hawaii may therefore not be the only place with problems of this sort.

From IT Pro (hat tip Richard Smith):

A poorly-designed user interface was reportedly behind the false alarm regarding an incoming missile that sent Hawaiian residents into a panic over the weekend….

The alert was supposed to have been an internal test of the Hawaii Emergency Management Agency’s (HEMA’s) missile alert system, conducted semi-regularly since tensions between the US and North Korea began escalating last year. According to The Washington Post, an employee mistakenly selected the wrong option from a drop-down list, issuing a genuine missile alert to the public instead of a dummy alert to HEMA staff.

The two options were labelled almost identically (‘test missile alert’ and ‘missile alert’) and placed one after another, while the only safeguard to prevent accidental alert launches was a single confirmation prompt.

The incident has drawn criticism from some experts, who say that such an important system should not be so open to human error.

“Even though the menu option still required confirmation that the user really wanted to send an alert, that wasn’t enough, on this occasion, to prevent the worker from robotically clicking onwards,” explained security expert Graham Cluley.

“There was an ‘are you sure?’ message, but the user clicked it anyway. Clearly the ‘are you sure?’ last-chance-saloon wasn’t worded carefully enough, or didn’t stand out sufficiently from the regular working of the interface, to make the worker think twice.”

And to add insult to injury, the alert cancellation system had better controls than the system that sent the “end is nigh” message did:

Compounding the problem was the fact that it took more than half an hour for HEMA to send out a follow-up message after the first alert to reassure people that it was an error. Sending the retraction required an elevated level of permissions, and had to go through the Federal Emergency Management Agency (FEMA) for approval.
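The failure mode the Post describes is easy to picture in code: two near-identical strings in one drop-down, guarded only by a generic confirmation. A minimal illustrative sketch, with all names hypothetical (this is emphatically not the actual HEMA code):

```typescript
// Hypothetical reconstruction of the reported flaw, NOT the actual HEMA code.
// Two near-identical options share one drop-down, and the only guard is a
// generic confirmation that looks the same for drills and the real thing.

type MenuOption = "test missile alert" | "missile alert";

function sendAlert(option: MenuOption): void {
  // The same bland prompt for both options: nothing signals that one of
  // them pages every phone in the state.
  if (!window.confirm(`Are you sure? (${option})`)) return;

  if (option === "missile alert") {
    broadcastToPublic("BALLISTIC MISSILE THREAT INBOUND. SEEK SHELTER."); // live
  } else {
    notifyInternalStaff("This is a drill."); // internal test
  }
}

// Stubs standing in for the real transport layer (hypothetical).
function broadcastToPublic(message: string): void { /* ... */ }
function notifyInternalStaff(message: string): void { /* ... */ }
```

One mis-click in the drop-down and the two paths diverge irreversibly, which is the scenario the excerpts above describe.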

The article breezily states that this glitch has been fixed, so that a second person has to sign off on sending an alarm, while cancelling one will be easier. But this report comes amid a seemingly unending stream of reports of software and hardware screw-ups. For instance, Finnish researchers have found yet another Intel security hole, with the saving grace that this one affects only laptops.

So the end of the world may not come about due to sea level rises or farmland turning into desert or too many bees dying or plastic killing the oceans. Maybe civilization will collapse under the weight of accumulated software bugs.


42 comments

  1. sd

    That’s scary. I seriously hope the user interface for launching a missile is more carefully designed. Any word on which defense contractor actually developed and designed the interface?

    1. fajensen

      The scary part is that for the strategic nuclear missiles a “Fail-Safe Design” means “Launch on Failure”.

If there is a system failure and communication is lost for a time, or there is a failure to enter the correct “We are still alive, do not launch” codes, a counter starts and techies rush to the scene to debug…. Those cold war nutters doing system design decided it is much Better to Nuke Plotz by mistake than to fail to Nuke something due to technical problems; them Soviets could be up to just Anything. (A sketch of this “fail-deadly” logic follows below.)

If Wheeler’s multiverse theory is true, we will find no aliens, because we are in that extremely unlikely universe where we managed to live, out of many thousands where we nuked ourselves. There being aliens too is just too crazily unlikely on top of the odds of humanity also surviving its own stupidity. Like winning the lottery twice without cheating.
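The “counter starts” mechanism fajensen describes in the first paragraph is a dead man’s switch run in reverse: the absence of a “do not launch” signal drifts toward launch. A purely illustrative sketch of that fail-deadly logic, not based on any real launch system:

```typescript
// Purely illustrative sketch of "fail-deadly" logic as described above,
// NOT based on any real launch system. A missed "we are still alive, do
// not launch" check-in starts a countdown instead of halting the system.

const CHECK_IN_INTERVAL_MS = 60_000; // hypothetical check-in deadline
let lastValidCheckIn = Date.now();
let countdownStarted = false;

// Called by the crews' check-in terminal (hypothetical).
function receiveCheckIn(code: string): void {
  if (code === "STILL-ALIVE") { // hypothetical code
    lastValidCheckIn = Date.now();
    countdownStarted = false;   // all clear, reset the counter
  }
}

setInterval(() => {
  const silentFor = Date.now() - lastValidCheckIn;
  if (silentFor > CHECK_IN_INTERVAL_MS && !countdownStarted) {
    countdownStarted = true;
    startLaunchCountdown(); // fail-deadly: absence of "don't" is read as "do"
  }
}, 1_000);

function startLaunchCountdown(): void { /* techies rush to the scene... */ }
```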

  2. integer

    Not to take anything away from the role the crapified user interface played in this fiasco, but one thing I have seen mentioned repeatedly is that this debacle was set in motion during a shift change. Why in the world would an internal test of an extremely important system be scheduled at a time when key personnel would be changing shift?

    1. bassposaunist

      To ensure that the system works under that condition (shift change). It would be negligent to only schedule tests during ideal conditions. In that sense, it was a successful test. It found a problem, which has been fixed.

The real problem is that we are dealing with a paranoid, impulsive leader of a nuclear-armed state, and also Kim Jong-un.

      1. Jeremy Grimm

The problem found was a problem that should have been found and repaired before the software requirements were approved; if missed, it should have been noticed and repaired in software design, review, and implementation; if missed, it should have been noticed in software test; if missed, it should have been noticed and a fix demanded in acceptance testing; and if missed, it should have been noticed and a fix demanded by the first user scanning the menus.

        As for the real problem it’s not Kim Jong-un. The real problem is that the United States has “a paranoid, impulsive, leader” with a government full of “paranoid, impulsive” generals and high officials at the helm of a nuclear-armed state which has demonstrated ruthless aggression in many areas of the world and has responded badly to Kim Jong-un and the situation he presented.

        1. Whiskey Bob

          Enterprise level software is often an afterthought in the US where the bare minimum is pumped out for the cheapest cost for the maximum profit. Add on top of this the incentives for private interests to milk contracts by selling patches to the original problems, and then the patches to new problems and so on. Business as usual.

      2. integer

        Regardless of the context in which they are used, I’m not sure the words “successful test” are appropriate here.

    2. fortiter

The scroll on the TV screen, if actually from the incident (and Poynter yesterday had a screen grab of it), had ‘driving’ spelled ‘dryving.’

      ‘Wrong button’ doesn’t explain a bogus warning scroll. Anyone could have predicted zany explanations without reprimands or firings, and that is exactly what the public is getting.

  3. Katniss Everdeen

    “User interface.” “Drop-down menu.” “Shift change.”

    Same old issues that make medical software so inaccurate, dangerous and, potentially, deadly. At some point, you’d expect someone to realize that these “innovations” are not all they’re cracked up to be.

  4. MtnLife

Seems that, instead of adding a second person to confirm, they could just make the actual warning a triple confirmation process with audio and visual warnings. I feel it is more likely that two people will end up confirming one another’s mistakes than that someone will miss the fact that their computer is blaring and the screen is flashing red as they click yes to ever more pointed warnings.
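A minimal sketch of that escalating-confirmation idea, assuming a browser-style console (all names, prompts, and assets hypothetical):

```typescript
// Hypothetical sketch of an escalating triple confirmation for a LIVE alert.
// Each step is louder and more pointed than the last; any refusal aborts.

const LIVE_PROMPTS = [
  "You are about to issue a LIVE missile alert to the public. Continue?",
  "This is NOT a drill option. Every phone in the state will receive it. Continue?",
  "FINAL WARNING: issue a LIVE ballistic missile alert NOW?",
];

function confirmLiveAlert(): boolean {
  for (const prompt of LIVE_PROMPTS) {
    flashScreenRed(); // visual warning (stub)
    playKlaxon();     // audio warning (stub)
    if (!window.confirm(prompt)) return false; // any "no" aborts immediately
  }
  return true;
}

function flashScreenRed(): void { document.body.style.background = "red"; }
function playKlaxon(): void { new Audio("klaxon.wav").play(); } // hypothetical asset
```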

    1. Oregoncharles

That was in a movie – Star Trek, maybe? I have a vivid mental image of it. Of course, it’s only logical.

  5. roadrider

    There’s no perfect user interface that can rule out all human error. Remember, as soon as you design a “foolproof” system a better class of fool soon emerges.

    A better design would be to separate the pathways for tests and live options entirely:

Step 1. Select “Initiate Test” (or for the real thing, “Issue Live Warning”). Make them separate controls (buttons or menu items, or what have you) instead of members of the same list, where it’s easy to fat-finger the wrong choice.

Step 2. A dialog with the prompt “What type of test do you want to initiate?” and a list of possible tests.

A requirement for a second person to confirm (via their own independent computer) a live warning should still be in force, as well as a big honking loud alert saying that a live warning will be issued in x number of seconds, with an opportunity to cancel in case of mistake.

    Still not completely foolproof in the case of a false warning where the operators are given reason to think that a live warning is warranted but probably better than what they have right now.
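A rough sketch of the separation roadrider describes, with test and live as distinct entry points, a second-operator sign-off, and a cancellable countdown (everything here is hypothetical, not the actual system):

```typescript
// Hypothetical sketch of the separated-pathway design described above.
// Tests and live warnings never share a menu; the live path also requires
// a second operator's sign-off and a cancellable countdown.

function initiateTest(testType: string): void {
  // Reachable only from its own "Initiate Test" control.
  notifyInternalStaff(`DRILL (${testType}): this is only a test.`);
}

async function issueLiveWarning(
  secondOperatorApproves: () => Promise<boolean>, // from an independent terminal
  countdownSeconds = 30,
): Promise<void> {
  // Reachable only from its own "Issue Live Warning" control.
  if (!(await secondOperatorApproves())) {
    console.log("Live warning aborted: second operator did not confirm.");
    return;
  }
  console.warn(`LIVE warning goes out in ${countdownSeconds}s. Cancel to abort.`);
  if (await operatorCancelled(countdownSeconds)) return; // last-chance abort
  broadcastToPublic("BALLISTIC MISSILE THREAT INBOUND. SEEK IMMEDIATE SHELTER.");
}

// Stubs for the pieces this sketch assumes exist.
function notifyInternalStaff(msg: string): void { /* internal channel only */ }
function broadcastToPublic(msg: string): void { /* statewide broadcast */ }
function operatorCancelled(seconds: number): Promise<boolean> {
  // A real version would race a big red cancel button against this timer.
  return new Promise((resolve) => setTimeout(() => resolve(false), seconds * 1000));
}
```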

  6. PKMKII

Why is the test dummy alert even in the same system as the honest-to-goodness, the-bombs-are-a-comin’ system? Isn’t that UI design 101, separate programs for test/training environments and live ones? This doesn’t just scream bad UI, but lazy UI.

    1. Brooklin Bridge

It can easily be argued that this goes beyond UI and is alarmingly bad overall program design. As you imply in remarking on “the same system”, the “this is just a test” software should be contained in an entirely different set of software modules from the real thing, and the UI should reflect that. The two systems should be separated by a different underlying process, for instance, or a different web site, and not simply by a “new thread” or (good grief) a simple drop-down menu. And when that new process starts up, it should have appropriate, multiple-level warnings that this is the real thing: “Are You Absolutely Sure” and “Please Present Your Proof Of Authority To Proceed”.

At the UI level, the colors, layout, everything, should reflect a This Is Real scenario. It would not be overkill to have two entirely different software development teams (one of them, guess which, with commensurate skill) write the different systems to ensure distinctness and, in the case of the real thing, reliability.
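Read concretely, that argument makes the drill tool and the live tool separate deployments with separate endpoints and credentials, so no single click can cross from one to the other. A hypothetical sketch (the URLs and token scheme are invented for illustration):

```typescript
// Hypothetical sketch of the two-systems argument above: the drill console
// and the live console are different deployments, not branches of one menu.

const DRILL_ENDPOINT = "https://drills.example.gov/api/send"; // invented URL
const LIVE_ENDPOINT = "https://alerts.example.gov/api/send";  // invented URL

// The drill console can only ever reach the drill endpoint.
async function sendDrill(message: string): Promise<void> {
  await fetch(DRILL_ENDPOINT, { method: "POST", body: JSON.stringify({ message }) });
}

// The live console is a separate app with "THIS IS REAL" theming, and it
// refuses to proceed without proof of authority (e.g. a hardware token).
async function sendLiveAlert(hardwareToken: string, message: string): Promise<void> {
  if (!hardwareToken) {
    throw new Error("Present proof of authority before the live console will proceed.");
  }
  await fetch(LIVE_ENDPOINT, {
    method: "POST",
    headers: { Authorization: `Bearer ${hardwareToken}` },
    body: JSON.stringify({ message }),
  });
}
```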

      1. bassposaunist

        In this case, how would you test for the “real thing”? Testing systems like this is not trivial. Keep in mind that the entire system, including interfaces to cell phone systems, broadcasters, government agencies, etc., is mission-critical. This is not a case of calling tech support if, for example, the state police interface doesn’t work.

        1. Brooklin Bridge

I don’t see your point. Testing of this software (assuming they still do that) happens in a “safe” QA sandbox, not unlike any other software, only (hopefully) far more thoroughly.

          I’m not sure we are talking about the same thing.

My argument is that this alert software, because of its dangerous nature, should architecturally be two separate systems: one for “this is a test” and another entirely for “this is not a test”. Don’t mistake the purpose of the software (a missile alert system that broadcasts “this is a test”/“this is not a test”) for the testing of the software that runs it.

Back to the faux pas: if this error had occurred at the White House and not Hawaii, we would all be smoke. This is crapification at its most alarming, and the lack of coverage is complacency at its worst. Our existence is getting closer and closer to depending on a fluke.

          1. Mel

            I see your point, I guess. They broke the system User Interface by adding a test mode to it — though in this case it was the test mode that broke, by generating a real alert. If the extra option hadn’t been there, it wouldn’t have been picked erroneously.
            It remains a fact, though, that you ultimately want to test the production system to prove that all the interfaces, etc. really work. “Dummying” or “mocking” them, as the term of art goes, will only take you so far.
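For readers unfamiliar with the term of art Mel mentions, “mocking” means swapping the real transport for a recording stand-in behind the same interface. A minimal sketch, with hypothetical names:

```typescript
// Hypothetical illustration of "mocking" an interface: the alert flow talks
// to AlertGateway and never knows whether the other side is real.

interface AlertGateway {
  broadcast(message: string): void;
}

class RealGateway implements AlertGateway {
  broadcast(message: string): void {
    // would call the carrier / broadcaster integrations here
  }
}

class MockGateway implements AlertGateway {
  sent: string[] = [];
  broadcast(message: string): void {
    this.sent.push(message); // records the message instead of paging the public
  }
}

// A test can exercise the full alert flow end to end against the mock...
const mock = new MockGateway();
mock.broadcast("DRILL: this is only a test.");
console.assert(mock.sent.length === 1);
// ...but, as Mel says, it proves nothing about the real carrier interfaces.
```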

            1. Brooklin Bridge

I think you have it slightly wrong. As described in the article, this was user error. The “test mode” didn’t erroneously broadcast a real alert; that is, the software, such as it is, didn’t fail. Rather, the human being who was using the software selected “not a test” (or something like that) from a drop-down list where the other choice was “a test”. Even then, a warning box appeared asking for confirmation, and the user in effect answered “go ahead”. Making such a mistake is much easier than one might imagine. Just being tired is enough. So at a minimum the user interface, or in this case the drop-down list even with a confirmation dialog, was woefully inadequate. It made it too easy to mistake one for the other.

            2. Brooklin Bridge

              As to testing, I agree with you. Testing the software rigorously as you suggest might have caught the frailty or inadequacy of the drop down box, but then again it might not – it “will only take you so far”.

Whatever the problem, it occurs because saving money has become vastly more important than saving lives.

        2. Brooklin Bridge

If you are indeed asking how the software confirms that the user’s intention is indeed to broadcast “This is a real alert, this is not a test”, there are multiple ways, such as codes or keys to establish authority, but one of them (which in no way negates the use of others) is to do everything possible to make absolutely sure the user is aware of what he or she is about to do. A drop-down, even followed by a warning confirmation box, is, in this case, a terrifyingly inadequate, crappy solution.

  7. Jeff N

    I thought the article was going to be about how the *message recipients* were the ones with the bad UI. i.e. no way to tap a button to double-check the authenticity.

  8. Wukchumni

I don’t know anything about computers. Not my bag.

    We’ve sure come a long way in a century, from when the Paris Gun could blast terror from 81 miles away, with no warning.

Ever since the lid shut on Pandora’s box in Nagasaki, we’ve been living on borrowed time, and if anything the threat is infinitely larger than during the Cold War, as we’ve become complacent about the consequences.

  9. vegeholic

    I believe we are approaching a state of diminishing returns on complexity. When complex systems fail we “fix” them by adding more layers of complexity. This cannot go on forever. This is the same reasoning that leads me to advocate going back to paper ballots for elections instead of computerized voting systems. As in the recent Intel processor bugs, the fixes themselves may open new, unintended opportunities for hacking, instability, and general mischief. It may be time to step away from the paradigm.

  10. shinola

    “…people on the scene reported that drivers were going over 100 miles an hour trying to get to safety.”

    Just where, on a relatively small island, did they think safety would be? Maybe the old TV mini-series “The Day After” needs to be dragged out of mothballs.

    1. Fastball

      The government says to “get inside” — preferably underground and to the centers of tall buildings.

That way, when Bill Gates, the Kochs, and Warren Buffett come from their bunkers in New Zealand in a few years, with their work crews, the roads will be clear and the skeletons will be neatly stacked for easy disposal.

    2. blennylips

If it was me, I’d be heading to the Byodo-In Temple, where you’ve got a large mountain between you and Pearl Harbor, presumably a prime target. Google says the drive takes 30 to 60 minutes from Honolulu. Relatively safer?

      1. tooearly

Yep, you and 700,000 other residents of Honolulu…
no chance of everyone getting stuck in traffic…

      2. fajensen

Yeah – if only you can get there in the roughly 15-25 minutes between the warning and the ‘Boom’.

In Vietnam, they made thousands of small bomb shelters all over the cities using concrete man-hole wells. Those could probably work for most people, although the people unlucky enough to be inside a “man-hole shelter” with a direct line of sight to the nuke going off will probably be fried.

  11. David Jacobs

    Hawaii resident here. This Washington Post article shows you the interface.

    https://www.washingtonpost.com/news/morning-mix/wp/2018/01/16/that-was-no-wrong-button-in-hawaii-take-a-look/

    There are so many things wrong with that list.
* Test and non-test options are not grouped or color-coded

* Test designations don’t even use consistent language (Drill, Test, Demo Test) or location (at beginning or end). (A sketch of a cleaner menu layout follows at the end of this comment.)

Of course, since these operators can only send preset messages, it took waaaay too long to get a new preset message created for the cancellation notice.

All of this also ignores how vulnerable Hawaii would be after a disaster (missile or weather). Move Puerto Rico 3,000 miles from the nearest help and you begin to see how bad things could get. With global warming, the weather threat is probably the bigger one.
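Jacobs’s two list items translate directly into a menu-definition fix. A hypothetical sketch of what grouped, consistently labelled options might look like (none of this is the actual HEMA configuration):

```typescript
// Hypothetical menu definition addressing the problems listed above:
// drills are grouped together, consistently prefixed, and visually
// distinct from the single live option.

interface AlertMenuItem {
  label: string;
  group: "DRILL" | "LIVE";
  color: "green" | "red";
}

const ALERT_MENU: AlertMenuItem[] = [
  { label: "DRILL: Ballistic missile (internal only)", group: "DRILL", color: "green" },
  { label: "DRILL: Tsunami warning (internal only)",   group: "DRILL", color: "green" },
  // ...every drill carries the same prefix, in the same position...
  { label: "LIVE: Ballistic missile (PUBLIC)",         group: "LIVE",  color: "red" },
];

// Rendering sorts by group so a live option can never sit next to a drill.
const rendered = [...ALERT_MENU].sort((a, b) => a.group.localeCompare(b.group));
```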

    1. ChrisPacific

      Holy crap, that’s awful. To give just one example, every time there is an alert for a Hana Road landslide the user is millimeters away from sending a statewide missile alert. What if they clicked the wrong thing by mistake? Does the confirmation give a clear indication of what they are about to do? And we’ve already heard that confirmation dialogs were common enough that people were prone to clicking through on autopilot.

      I think the key point here is that HEMA’s risk management processes have been exposed as totally inadequate. In any public agency with this kind of responsibility, there should be a comprehensive risk register that tracks all problems that have been identified, translates them into real world scenarios and potential outcomes, and phrases them in language that managers can understand. Tell a manager that their systems are antiquated and poorly designed and they need to come up with millions of dollars to fix them, and their eyes are likely to glaze over. But tell them that they might be responsible for starting a global nuclear war by accident? That’ll get their attention.
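The kind of risk-register entry ChrisPacific describes might look something like this (the shape and figures are hypothetical, purely for illustration):

```typescript
// Hypothetical shape of a risk-register entry as described above: the
// technical finding is translated into a scenario and a worst outcome
// that a manager can understand.

interface RiskRegisterEntry {
  finding: string;      // the technical problem as identified
  scenario: string;     // the same problem as a real-world event
  worstOutcome: string; // phrased so nobody's eyes glaze over
  mitigation: string;
  estimatedCostUSD: number;
}

const dropDownRisk: RiskRegisterEntry = {
  finding: "Live and drill alerts share one drop-down behind a generic confirm.",
  scenario: "Operator mis-clicks during a routine drill at shift change.",
  worstOutcome: "Statewide missile panic; risk of accidental escalation.",
  mitigation: "Separate live/drill pathways; two-person rule on live sends.",
  estimatedCostUSD: 2_000_000, // illustrative figure only
};
```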

    2. tooearly

      That Screen Shot is really a SERIOUS PIECE OF WORK!
      Who approves something like this? Who designs something like this?

  12. Enquiring Mind

    The Hawaii Fiasco, now there’s some airport fiction awaiting an author. ;p

    As someone down range from those Nork Nukes, I’m concerned that the system people didn’t factor in a few items, especially for the catastrophic error modes or fail safes. For example, what would be the impact of a few quick, ill-considered keystrokes by a shift-changee? In other words, What Is The Worst That Could Happen? Now, how to ensure that doesn’t occur?

  13. Jeremy Grimm

Aside from the bad user interface, what purpose does this system serve? Was it set up solely to warn against a single small nuclear strike? A couple of warheads armed with hydrogen bombs wouldn’t leave much. What good is a warning to the population? It’s like the sirens that told all us school kids to hide under our desks in the early 1960s. I didn’t feel safer under my desk then, and certainly not now.

    A better response to the threat of a small scale nuclear strike might be some efforts to protect against EMP so that any regions that survive can communicate and coordinate their efforts to continue surviving.

    1. tooearly

Put your head between your legs and kiss your A## goodbye?

Amazing, the pretense of this policy of shelter in place.

  14. JohnnySacks

    The last paragraph brings to mind one of the fictional scenarios in my absolute favorite sci-fi read. A solar system society destroyed not by an interplanetary conflict, but internally via cascading failures of highly integrated and optimized software systems. Vernor Vinge’s A Deepness in the Sky, prescient in many ways.

  15. The Rev Kev

I am afraid that I have a pet peeve about computer user interfaces (of course I do). It’s not as if there isn’t a large body of work on how to design a great user interface. Apple used to have a good interface, from what I am told, until the marketing droids went to work on it. And yet when you go out into the world you see stuff on computer screens that resembles something from the 90s. Stuff that resembles nothing so much as a DOS shell, of all things. Really? Seriously?
On some level, it offends my sense of what a great interface should be and how it should work. It’s like the software crew were just happy to get the program running long enough to push it out the door and couldn’t be bothered with how it would look, or figured that was somebody else’s job. The best one that I have ever seen done in fiction was the LCARS operating system as depicted in Star Trek’s Voyager. Even the colour palette for that system was designed for human eyes (http://www.lcarscom.net/lcars_colors.gif), not the harsh colours and klutzy design that I see in use today.
Bad design can have real-world consequences. When the US Navy shot down an Iranian airliner back in ‘88, bad interface design (http://xenon.stanford.edu/~lswartz/vincennes.pdf) was one of the nominated causes. With the Hawaiian interface problem, when the person at the computer chose an option that meant it was real, all the borders and the like on the screen should have changed to red to show that this was a real situation and not a drill. And yes, I have seen that approach on the LCARS system. A change of state like that would tell the user that this was a real ballistic missile event and would need confirmation before proceeding. See, not very hard, is it?
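A small sketch of that state change, assuming a web-style console and CSS variables (everything here is hypothetical):

```typescript
// Hypothetical sketch of the "everything turns red" state change described
// above: selecting a live option flips the whole interface into an
// unmistakable this-is-real mode before any confirmation is offered.

type AlertMode = "drill" | "live";

function applyMode(mode: AlertMode): void {
  const root = document.documentElement;
  if (mode === "live") {
    root.style.setProperty("--border-color", "red");
    root.style.setProperty("--banner-text", "'REAL BALLISTIC MISSILE EVENT'");
    // a further, differently worded confirmation is still required to proceed
  } else {
    root.style.setProperty("--border-color", "steelblue");
    root.style.setProperty("--banner-text", "'DRILL / EXERCISE'");
  }
}
```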

  16. JBird

Does anyone else remember feeling during the Cold War that we were dead men walking? I remember it being pointed out that with all the nuclear powers (there were many more bombs then, and generally more powerful ones too) on hair triggers, a war could destroy civilization during an afternoon picnic. One could easily spend a long lunch while the world burned, especially if you were someplace like the Sierras.

I think it’s the unseriousness that’s the problem, not the interface (well, it is, but not the important one). If your family enjoyed the Second World War and you grew up in the Cold War, it was drummed into you that nukes were serious stuff. People at least pretended governance and wars were important adult stuff. But look at Puerto Rico, or the obsession with having a war with Iran or North Korea. Now it’s tweeting threats. It’s like a game with us as the pieces on the board. Oh well. Another reason to have a beer.

  17. ewmayer

Lots of discussion about ‘better design’ and ‘technical fixes’, but let’s not let that obscure the fundamental issue here, which is that the only foolproof fix to this sort of threat is to eliminate the thousands of megaton warheads in arsenals worldwide, i.e., to get rid of the existential stupidity-amplifier system that is a nuclear-armed world.
