Coping with Computers in the Cockpit

Sidney Dekker, Erik Hollnagel

About This Book

First published in 1999, this volume examines how increasing cockpit automation in commercial fleets across the world has had a profound impact on the cognitive work carried out on the flight deck. Pilots have largely been transformed into supervisory controllers, managing a suite of human and automated resources. Operational and training requirements have changed, and the potential for human error and system breakdown has shifted. This compelling book critically examines how airlines, regulators, educators and manufacturers cope with these and other consequences of advanced aircraft automation.


Information

Publisher: Routledge
Year: 2018
ISBN: 9780429864209
Edition: 1
Subtopic: Sociology

1 Computers in the Cockpit: Practical Problems Cloaked as Progress

SIDNEY DEKKER AND ERIK HOLLNAGEL
Linköping University, Sweden

Introduction

Another book on aviation automation? Well, perhaps this is not a book on aviation automation per se. It is a book, rather, on how the entire aviation industry is coping with automation. Or, more precisely, on how it is coping with the human consequences of the automation it has fielded over the last two decades. The aviation domain, and the cockpit in particular, is frequently seen as being at the forefront of technological and human-machine interface developments. And indeed, in some sense, progress in the cockpit has been enormous. But from another angle, innovations presented as progress have brought along a large number of unanticipated practical problems – practical problems that today form the inherited by-products of once-vaunted automation technologies. Practical problems cloaked as progress, in other words.
Not only individual pilots have to learn how to operate automated aircraft. The entire aviation industry is learning how to deal with the profound implications that automation carries for the operation, design, regulation and certification of passenger aircraft. The industry is struggling to find ways to meaningfully educate and train operators for their new and different work in the automated environment. It is reconsidering whom to select for these jobs and how. And now that current cockpit designs are firmly in place and their problems better accounted for, it is regrouping to begin to regulate and certify cockpit equipment on the basis of human factors criteria. All this while manufacturers voice continued concern over the lack of concrete and specific ideas for better feedback design in the next generation of flightdeck automation.
One result of being ahead of the pack is that an industry encounters and, hopefully, solves a host of new problems, thereby generating experience that can be helpful to others. It is therefore quite ironic that many other industries are in fact (re)discovering similar automation-related problems for themselves as they stumble ahead on technology-driven paths. For example, ship bridges are seeing more and more moded automation technology become responsible for navigation and many other on-board tasks. Standardised design of interfaces or system logic does not appear to exist, and formal operator training is neither required nor well organised. The result is that ships have begun to show the same pattern of human-machine breakdowns and automation surprises that were discovered in aviation years ago (see, for example, the grounding of the Royal Majesty; NTSB, 1996). Hence the need for this book: not only is it relevant to exchange experiences and swap lessons within one industry – aviation – it is also critical to show industries that are poised to adopt similar systems in their operational environments how one industry has had to cope with the consequences of its own automation.

Practical problems galore

To some extent, research efforts and operational experience are beginning to pay off. In itself, this book is an outflow of the increasing realisation that automation is a mixed blessing. It reflects operational, educational and regulatory countermeasures that were, for example, inspired by the 1996 U.S. Federal Aviation Administration report on human-automation interfaces onboard modern airliners (FAA, 1996). Closer to the ground, many organisations that deal with complex automation acknowledge that changes in technology can be a double-edged sword. One defence procurement agency, for example, says that it must strike a balance between simpler equipment and highly automated equipment: the former imposes greater manpower burdens, but the latter can create excessive demands on operator skills and training. Such lessons learned indicate that old myths about automation (for instance, that it reduces investments in human expertise) are coming unstuck.
Nevertheless, almost all sectors of the aviation industry are still struggling in one way or another to adapt to the emerging realities of automation technology – to which this entire book is testimony. The training of pilots from the ab initio (zero-hour) level upward, for instance, has come to the fore as a key issue relative to automated flight decks (Nash, 1998; Lehman, 1998). Does requisite time in single-engine piston aircraft of light wing loading have anything to do with becoming a jet transport pilot in a world of near-sonic, satellite-guided, computer-managed flight at 35,000 feet? These questions emerge at a time when European operators and regulators are attempting to harmonise training and licensing standards across the continent, and while North American operators are gradually losing a major source of pilots (the military), with collegiate aviation programs working to fill the gap (NRC, 1997). Questions about preparing pilots for their new supervisory roles do not stop at the ab initio level. The debate about optimal training strategies pervades the airline induction (multi-crew, operational procedures) and type rating stages as well. A new pilot’s first encounter with automation is often delayed until late in his or her training. This means it may coincide with the introduction to multi-crew and jet-transport flying, creating excessive learning demands. Telling pilots later on to be careful and not to fall into certain automation traps (a common ingredient in classroom teaching as well as computer-based training – CBT) does little to prevent them from falling into the traps anyway. The end result is that much of the real and exploratory learning about automation is pushed into line flying.
Automation also erodes the traditional distinction between technical and non-technical skills. This tradition assumes that interactions with the machine can be separated from crew co-ordination. But in fact almost every automation-related mishap indicates that the two are fundamentally interrelated: breakdowns occur at the intersection between crew co-ordination and automation operation. Crew resource management (CRM) training is often thought to be one answer and is by now mandatory; it is also regulated to include some attention to automation. But all too often CRM is left as a non-technical afterthought on top of a parcel of technical skills that pilots are already supposed to have. Air carriers are coming to realise that such crew resource management training will never attain relevance or operational leverage.
Another issue that affects broad sections of the aviation industry is the certification of flight decks (and specifically flight management systems) on the basis of human factors criteria (Harris, 1997; Courteney, 1998). One question is whether we should certify the process (e.g. judging the extent and quality of human factors integration in the design and development process) or the end product. Meanwhile, manufacturers are working to reconcile the growing demand for user-friendly or human-centred technologies with the real and serious constraints that operate on their design processes. For example, they need to design one platform for multiple cultures or operating environments, but at the same time they are restricted by economic pressures and other limited resource horizons (see e.g. Schwartz, 1998). Another issue concerns standardisation and the reduction of mode complexity onboard modern flight decks. Not all modes are used by all pilots or carriers, owing to variations in operations and preferences. Still, all these modes are available and can contribute to complexity and surprises for operators in certain situations (Woods & Sarter, 1998). One indication of the disarray in this area is that modes which achieve the same purpose have different names on different flight decks (Billings, 1997).
Air traffic control represents another large area in the aviation industry where new technology and automation are purported to help with a variety of human performance problems and efficiency bottlenecks (e.g. Cooper, 1994). But the development of new air traffic management infrastructures is often based on ill-explored assumptions about human performance. For example, a common thought is that human controllers perform best when left to manage only the exceptional situations that either computers or airspace users themselves cannot handle (RTCA, 1995). This notion directly contradicts earlier findings from supervisory control studies (e.g. the 1976 Hoogovens’ experience) where far-away operators were pushed into profound dilemmas of when and how to intervene in an ongoing process.

Technology alone cannot solve the problems that technology created

In all of these fields and areas of the aviation system we are easily fooled. Traditional promises of technology continue to sound alluring and seem to offer progress towards yet greater safety and efficiency – enhanced ground proximity warning systems, for example, promise to all but eradicate the controlled-flight-into-terrain accident. We become focused on local technological solutions for system-wide, intricate human-machine problems. It is often very tempting to apply a technological solution that targets only a single contributor in the latest highly complex accident. In fact, it is harder to take co-ordinated directions that offer real progress in human-centred or task-centred automation than to go with the ‘technological fix’: the latest box in the cockpit that can putatively solve, once and for all, the elusive problems of human reliability.
Many of our endeavours remain fundamentally technology-centred. Ironically, even in dealing with the consequences of the automation that we already have, we emphasise pushing the technological frontier. We frame the debate about how to cope with computers in the cockpit in the technical language of the day. For example, can we not introduce more PC-based instrument flight training to increase training effectiveness while reducing costs? Should we put Head-Up Displays on all flight decks to improve pilot awareness in bad-weather approaches? How can we effectively teach crew resource management skills through computer-based training tools? With every technical question asked (and putatively answered), a vast new realm of cognitive issues and problems is both created and left unexplored. The result, the final design, may beleaguer and surprise the end-user, the practitioner. In turn, the practitioners’ befuddlement and surprise will be unexpected and puzzling to us. Why did they not like the state-of-the-art technology we offered them? The circle of miscommunication between developer and user is complete.
One reason for this circle, for this lack of progress, is often seen to lie in the difficulties of technology transfer – that is, the transfer of research findings into usable or applicable ideas for system development and system improvement. This book is one attempt to help bridge this gap. It provides yet another forum that brings together industry and scientific research.

Investing in human expertise and automation development

The book echoes two intertwined themes. The first theme explores how and where we should invest in human expertise in order to cope with computers in the cockpit today and tomorrow. It examines how practitioners can deal with the current generation of automated systems, given that these are likely to stay in cockpits for decades to come. It examines how to prepare practitioners for their fundamentally new work of resource management, supervision, delegation and monitoring. For example, various chapters converge on what forms cockpit resource management training could take in an era where flying has become virtually equated with cockpit resource management (managing both human and automated resources to carry out a flight). There are chapters that target more specific phases in a pilot’s training career, for instance the ab initio phase and the transition training phase. Yet another chapter makes recommendations on how an air carrier can proceduralise the use of automation in terms of how different levels of automation affect crewmember duties, without getting bogged down in details that are too prescriptive or too fleet-specific.
The second theme explores what investments we must make in the development of automated systems. The industry would like to steer the development of additional cockpit equipment and air traffic management systems in human-centred directions – but how? Current processes of development and manufacturing sometimes seem to fail to check for even the most basic human-computer interaction flaws in, for example, flight management systems coming off the line today. Two chapters examine whether and how certification and increased regulation could help by setting up standards to certify new or additional cockpit equipment on the basis of human factors criteria. Although these chapters represent the current European state of the art in this respect, much work needs to be done and much more agreement needs to be reached, for example on whether to pursue quantitative measures of human error and human performance in system assessment. Another chapter lays out how we could finally break away from the traditional but unhelpful dichotomous notion of function allocation in our debates about automation. As this automation becomes more and more powerful, allocation of a priori decomposed functions misses the point. Also, such increasingly powerful automation needs to show feedback about its behaviour, not just its state or currently active mode – an issue targeted in a chapter on automation visualisation. Finally, one chapter looks into the problem of extracting empirical data about the future. As technology development goes ahead, in aviation and in many other fields of human endeavour, it becomes ever more important to be able to evaluate the human factors consequences of novel technological solutions before huge resources are committed to a particular system design. This chapter explains how to generate empirical data relating to human performance in systems that do not yet exist.

Real progress

As automation has brought along many practical problems under the banner of continued progress, the aviation industry is struggling to cope with the human-machine legacy of two decades of largely technology-driven automation. The lessons learned so far, and the lessons still to be learned, carry information not only for aviation but for a number of industries that are opening their doors to similar problems. Real progress across multiple industries – not the kind that cloaks the sequential introduction of practical problems into different worlds of practice – can only be achieved by acknowledging the similarity of the challenges that we have created for ourselves. Hopefully this book offers some leads.

Acknowledgements

The initiative for Coping with Computers in the Cockpit was developed in the Swedish Centre for Human Factors in Aviation at the Linköping Institute of Technology, with the encouragement and support of Luftfartsinspektionen, the Swedish Flight Safety Department. The editors owe particular gratitude to the Director of Flight Safety, Arne Axelsson, and the Flight Safety Department’s technical monitors of civil aviation human factors work at Linköping University, Kaj Skarstrand and Bo Johansson.

2 Automation and its Impact on Human Cognition

SIDNEY DEKKER AND DAVID WOODS*
Centre for Human Factors in Aviation, Linköping Institute of Technology, Sweden
* The Ohio State University, USA

Introduction

We introduce new technology because we think it helps people perform better. For example, we expect technology – and especially automation technology – to reduce people’s workload, improve situation awareness, and decrease the opportunity for human error. Indeed, we value automation for its impact on human cognition. Through aiding the operator’s awareness and decision making, new technology can increase system safety and improve the economy or accuracy of operations.
Looking a little closer, it becomes clear that we express the promises of automation technology almost always in quantitative terms. For example, less human workload will result if we replace a portion of the human’s work with machine activity. And when we give the human less to do – when we shrink the bandwidth of human interference with system operations – we leave fewer opportunities for human error. Indeed, this is the traditional idea: that the replacement of human activity with machine activity has no larger consequences on the overall human-machine ensemble. The only thing that is affected is some kind of outcome measure. This outcome measure may be error count, or workload, or economy, and – indeed – all of them are quantifiable, and all of them somehow get better when we introduce automation technology.
Some of these quantitative effects have been realised, but only in a narrow empirical sense. For example, in highly automated systems there is less workload during certain times. There is also less opportunity – or no more opportunity – to make certain kinds of errors. No one is going to stick the wrong punch card into a flight management system: flight management systems don’t work on punch cards (although the equivalent of downloading pre-created flight plans from an airline’s operational base into an individual FMS does in fact exist).
But our pre-occupa...
