Introduction
It has been nearly 25 years since our original chapter was published in the first edition of this book (Parasuraman & Mouloua, 1996). That chapter covered many of the critical human performance issues related to highly automated systems, with particular emphasis on the aviation system. The same problems examined in the original chapter remain pertinent today, albeit across additional domains and applications (e.g., health care and medicine, industrial process control, nuclear power). The proliferation of automated systems and devices continues at a remarkable rate because of the evident benefits to human performance and safety, as documented in previous publications (see Billings, 1997; Bogner, Mouloua, & Parasuraman, 1994; Garland & Wise, 2010; Mouloua & Koonce, 1997; Mouloua & Parasuraman, 1994; Parasuraman & Mouloua, 1996; Scerbo & Mouloua, 1999; Sheridan, 2002; Vincenzi, Mouloua, & Hancock, 2004a, 2004b; Wiener & Nagel, 1988). Collectively, these texts include a wide array of chapters addressing problems often encountered in highly automated systems. Several of these problems have been attributed to automation-induced complacency and automation-induced monitoring inefficiency. Together with the associated concern about de-skilling of human operators, such problems have also been documented in incident reports (e.g., the National Aeronautics and Space Administration's Aviation Safety Reporting System, NASA ASRS). Similarly, a line of programmatic work by Endsley and her associates has addressed a variety of problems related to loss of situation awareness in highly automated systems (Endsley & Garland, 2000; Endsley & Jones, 2004; Endsley & Strauch, 1997). With the advent of ever more fully autonomous and semiautonomous systems, in both military and civilian airspace as well as in military surface environments, entertainment, learning, and medical systems, it seems inevitable that the same automation problems will persist as long as machines and intelligent agents replace the active role of human operators. Such replacement places the human in a more passive, supervisory role, a role that is often not well suited to humans (Hancock, 2013; Parasuraman & Mouloua, 1996). The present chapter is an updated evolution of our previous work in the original book. Here, we provide an update contingent upon developments in the literature, centered on human capabilities in automation monitoring.
The revolution ushered in by the digital computer in the latter half of the last century transformed many of the characteristics of work, leisure, and travel for most people throughout the world. Even more radical changes have occurred during this century, as computers have increased in power, speed, availability, flexibility, and in that elusive quality known as "intelligence." Only a neo-Luddite would want to operate in the 21st century without the capabilities that these new computer tools provide; perhaps even a latter-day Thoreau would not wish to trade in his word processor for pen and paper. And yet, although we have become accustomed to the rise of computers and, as consumers, have demanded that they perform ever greater feats, many have felt a sense of unease at the growth of computerization and automation in the workplace and in the home. Although there are several aspects to this disquiet, there is one overriding concern: Who will watch the computers?
The concern is not just the raw material for science-fiction writers or germane only to the paranoid mind but something much more mundane. Computers have taken over more of human work, ostensibly leaving humans less to do, letting them do more in less time, be more creative in what they do, or be free to follow other pursuits. For the most part, computers have led to these positive outcomes: they have freed us from the hard labor of repetitive computation and allowed us to engage in more creative pursuits. But in some other cases, the outcomes have not been so sanguine; in these instances, human operators of automated systems may have to work as hard or even harder, for they must now watch over the computers that do their work. This may be particularly true in complex human-machine systems in which several automated subsystems are embedded, such as the commercial aircraft cockpit, the nuclear power station, and the advanced manufacturing plant. Such complex, high-risk systems, in which different system subcomponents are tightly "coupled," are vulnerable to system monitoring failures that can escalate into large-scale catastrophes (Perrow, 1984; Weick, 1988). Editorial writers have rightly called for better understanding and management of these low-probability, high-consequence accidents (Koshland, 1989).
One of the original reasons for the introduction of automation into these systems was to assist humans in dealing with complexity and to relieve them of the burden of repetitive work. The irony (Bainbridge, 1983) is that one source of workload may be replaced by another: Monitoring computers to make sure they are doing their job properly can be as burdensome as doing the same job manually and can impose considerable mental workload on the human operator. Sheridan (1970) first discussed how advanced automation in modern human-machine systems changes the nature of the task demands imposed on the human operator of such systems. He characterized the role of the human operator in highly automated systems as altered from that of an active, manual controller to a supervisor engaged in monitoring, diagnosis, and planning. Each of these activities can contribute to increased mental workload.
Many of the changes brought about by automation have led to significant system benefits, and it would be difficult to operate many complex modern systems, such as nuclear power plants or military aircraft, without automation (Sheridan, 1992). Although users of automated systems often express concerns about the trend of "automation for automation's sake" (Peterson, 1984), many automated systems have been readily accepted and found invaluable by users (e.g., the horizontal situation indicator map display used by pilots). At the same time, other changes associated with automation have reduced safety and user satisfaction, and a deeper understanding of these changes is necessary for the successful implementation and operation of automation in many different systems (Mouloua & Parasuraman, 1994; Wickens, 1994; Wiener, 1988). Among the major areas of concern is the impact of automation on human monitoring. Automating a task for long periods of time increases the demand on the operator to monitor the performance of the automation, given that the operator is expected to intervene appropriately if the automation fails. Because human monitoring can be subject to error under certain conditions, understanding how automation affects monitoring is of considerable importance for the design of automated systems. This chapter discusses the interrelationships of automation and monitoring and the corresponding implications for the design of automated systems.
Examples of Operational Monitoring: Normal Performance and Incidents
It has become commonplace to point out that human monitoring can be subject to errors. Although this is sometimes the case, in many instances operational monitoring can be quite efficient. In general, human operators perform well in the diverse working environments in which monitoring is required. These include air traffic control, surveillance operations, power plants, intensive-care units, and quality control in manufacturing. In large part, this probably stems from general improvements over the years in working conditions and, in some cases (although not generally), from increased attention to ergonomic principles. In one sense, when the number of opportunities for failure is considered (virtually every minute for these continuous, 24-hour systems), the relatively low frequency of human monitoring errors is quite striking.
This is not to say that errors do not occur. But when human monitoring is imperfect, it often occurs under working conditions that are less than ideal. Consider the monitoring performance of personnel who conduct X-ray screening for weapons at airport security checkpoints. These operators are trained to detect several types of weapons and explosives, yet they may rarely encounter them in their daily duty periods. To evaluate the efficiency of security screening, Federal Aviation Administration (FAA) inspectors conduct random checks of particular airline screening points using several test objects corresponding to contraband items, including guns, pipe bombs, grenades, dynamite, and opaque objects. The detection rate of these test objects by airport X-ray screening personnel is typically good, although not perfect, as shown in Figure 1.1 (Air Transport Association, 1989). Founded in 2001, the Transportation Security Administration (TSA) was tasked with administering airport security screening and safety procedures at U.S. airports. This has led to much improved safety standards related to TSA personnel selection and training. However, there still exist some human factors challenges that are readily understandable given even a cursory evaluation...