Detecting and Mitigating Automation Surprise

Robert Mauro

Automation surprise occurs when an automated system behaves in ways that operators do not expect. At best, automation surprise leads to higher workload and inefficient operations. At worst, it can lead to disaster. Not all automation surprises are the same: they result from different phenomena and therefore must be detected and mitigated by different approaches. A “one-size-fits-all” approach to mitigating automation surprise cannot succeed. In this project we explore the mechanisms underlying the two main causes of automation surprise – inadequate mental models and inadequate information about the state of the automation – and seek to develop prototype interventions for each.

Although this work focuses on aviation safety, it has implications for any situation in which intelligent or “quasi-intelligent” computer systems are used.

The capabilities of automated flight systems (AFS) increased rapidly following the introduction of the electronic autopilot in the 1940s. In normal operations, the automated flight system of a modern airliner can now control nearly all of the functions required for flight. The effect of increased automation has been largely positive, greatly reducing errors due to pilot fatigue and allowing consistent, precise navigation and performance. However, automation has given rise to new problems caused by faulty interactions between the pilot and the AFS. In these cases, the flight crew expects the automation to command one behavior and is surprised when it commands another. Automation surprises create additional workload. When they do not jeopardize the safety of flight, they are a nuisance. But when the automation commands an aircraft trajectory that violates airspace or operational limits, automation surprise becomes a serious problem and has been a cause of several major airline accidents.

For pilots to construct adequate mental models of the automation, they do not need to know the intricacies of the underlying engineering, but they must know how the system interacts with the environment – how it obtains information, what it controls, and what targets it is trying to achieve. Having this knowledge is not sufficient, however; it merely provides a framework. At every point in time, the model must be populated with current information and related to the intended operation. This requires that pilots know where to find the relevant information, attend to those sources, interpret the information correctly, and integrate it with their stored knowledge of the automated flight system and the intended operation. This analysis suggests two somewhat distinct problems that must be addressed: 1) problems in training that lead to the development of inadequate mental models, and 2) in-flight problems of attention, interpretation, and integration of the available information. Both are addressed in this work.
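To make the distinction between the stored framework and its in-flight population concrete, here is a minimal Python sketch. All names, and the simplified vertical-speed mode, are illustrative assumptions rather than anything from the project: the point is only that a stored mode model captures what the automation senses, controls, and targets, and that an expectation is reliable only once the model has been populated by attending to the relevant sources.

```python
# Hypothetical sketch: a pilot's mental model of one autoflight mode,
# split into the stored "framework" (what the mode senses, controls,
# and targets) and its in-flight "population" with current values.
from dataclasses import dataclass, field

@dataclass
class ModeModel:
    """Stored knowledge about one automation mode (the framework)."""
    name: str
    senses: list[str]      # information sources the mode uses
    controls: list[str]    # effectors the mode commands
    target: str            # the goal the mode tries to achieve

@dataclass
class MentalModel:
    """The framework plus its moment-to-moment population."""
    modes: dict[str, ModeModel]
    observed: dict[str, float] = field(default_factory=dict)

    def attend(self, source: str, value: float) -> None:
        """Attend to an information source and record its value."""
        self.observed[source] = value

    def expect(self, mode_name: str) -> str:
        """Integrate observations with stored knowledge to predict behavior."""
        mode = self.modes[mode_name]
        missing = [s for s in mode.senses if s not in self.observed]
        if missing:
            # Unattended sources leave the model under-populated, so any
            # expectation is unreliable -- a precursor to surprise.
            return f"unreliable expectation (not attended: {missing})"
        return f"{mode.name} will adjust {mode.controls} to hold {mode.target}"

# Example: a simplified, hypothetical vertical-speed mode.
vs_mode = ModeModel("VS", senses=["selected_vs", "altitude"],
                    controls=["pitch"], target="selected vertical speed")
model = MentalModel(modes={"VS": vs_mode})
model.attend("selected_vs", -1200.0)
print(model.expect("VS"))  # flags the unattended 'altitude' source
```

In this toy formulation, an unpopulated information source yields an unreliable expectation – a crude analogue of the in-flight attention problem – while a missing or incorrect ModeModel corresponds to the training problem of an inadequate mental model.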