Human Performance on the Flight Deck
eBook - ePub

Human Performance on the Flight Deck

Don Harris

384 pages · English · ePUB (mobile friendly)
About This Book

Taking an integrated, systems approach to dealing exclusively with the human performance issues encountered on the flight deck of the modern airliner, this book describes the inter-relationships between the various application areas of human factors, recognising that the human contribution to the operation of an airliner does not fall into neat pigeonholes. The relationship between areas such as pilot selection, training, flight deck design and safety management is continually emphasised within the book. It also affirms the upside of human factors in aviation - the positive contribution that it can make to the industry - and avoids placing undue emphasis on when the human component fails. The book is divided into four main parts. Part one describes the underpinning science base, with chapters on human information processing, workload, situation awareness, decision making, error and individual differences. Part two of the book looks at the human in the system, containing chapters on pilot selection, simulation and training, stress, fatigue and alcohol, and environmental stressors. Part three takes a closer look at the machine (the aircraft), beginning with an examination of flight deck display design, followed by chapters on aircraft control, flight deck automation, and HCI on the flight deck. Part four completes the volume with a consideration of safety management issues, both on the flight deck and across the airline; the final chapter in this section looks at human factors for incident and accident investigation. The book is written for professionals within the aviation industry, both on the flight deck and elsewhere, for post-graduate students and for researchers working in the area.


1
A Systems Approach to Human Factors in Aviation

A Short (and Skimpy) History of Human Factors in Aviation

Human Factors, as a whole, is a relatively new discipline. Its roots lie firmly in the aviation domain, in work undertaken in the UK and North America during and shortly after World War II. It is also a somewhat fragmented and multifaceted subject, its science base drawing on psychology, sociology, physiology/medicine, engineering and management science (to name but a few disciplines). As the Human Factors science base has grown over the last 60 years, this increasing knowledge has brought further specialisation and fragmentation, with sub-disciplines in topics such as human-centred design, training and simulation, selection, management aspects (organisational behaviour), and health and safety.
In a book looking back at the naissance of Aviation Human Factors, Chapanis (1999) reviewed his work at the Aero Medical Laboratory in the early 1940s where, among other things, he was asked to investigate why, after landing, some pilots occasionally retracted the undercarriage instead of the flaps in certain types of aircraft (particularly the Boeing B-17, North American B-25 and Republic P-47). In these cases he observed that the actuation mechanisms in the cockpit for the undercarriage and the flaps were identical in shape and located next to each other. The corresponding controls on the Douglas C-47 were not arranged in this way and their methods of actuation were quite different from each other; pilots flying this aircraft rarely retracted the undercarriage unintentionally after landing. This (now seemingly obvious) insight into performance, especially in stressed or fatigued pilots, enabled him to understand how they may have confused the two controls. The remedy he proposed was simple: physically separate the controls in the cockpit and/or make the shape of each control analogous to its corresponding component (hence the flap lever was redesigned to resemble a trailing-edge flap and the undercarriage lever to resemble a wheel with a tyre).
US Army Air Corps pilot losses during World War II were roughly equally distributed between three principal causes: about one-third of pilots were lost in training crashes; a further third in operational accidents; and the remaining third were lost in combat (Office of Statistical Control, 1945). This suggested several different deficiencies inherent in the system. In further work investigating cockpit design inadequacies, Grether (1949) described the difficulties of reading the early three-pointer altimeter. Previous work and numerous fatal and non-fatal accidents had shown that pilots frequently misread this instrument. The effects of different designs of altimeter on interpretation time and error rate were investigated. Six different variations of altimeter design were evaluated using combinations of one, two or three pointers (both with and without an inset counter also displaying the altitude of the aircraft) as well as altimeters using three different formats of vertically moving scale. The results showed marked differences in the error rates for these different designs. The three-pointer altimeter took slightly over seven seconds to interpret and produced over 11 per cent errors of 1,000 feet or more. Vertical, moving-scale designs took less than two seconds to interpret and produced less than 1 per cent of errors in excess of 1,000 feet.
These early examples of control and display design demonstrated that ‘pilot error’ alone was not a sufficient description for the causes of many accidents. In these cases, pilots fell into a trap left for them by the design of their equipment (what has become known as ‘design-induced’ error). Fortunately, blaming the last person in the accident chain (usually the pilot) has lost all credibility in modern accident investigation. The modern approach is to take a systemic view of error and attempt to understand the relationships between all the components in the system, both human and technical. This is, in part, what this book hopes to achieve.
The principal focus for human performance research in the UK during this time was the Medical Research Council Applied Psychology Unit in Cambridge. Here fundamental work was being undertaken on issues such as the direction-of-motion relationships between controls and displays (Craik and Vince, 1945), and Mackworth was developing his famous (if you are a psychology student) ‘clock’ to investigate the effects of prolonged vigilance in radar operators (Mackworth, 1944, 1948). As a result of the relatively small number of RAF pilots at this time and the large number of sorties being flown during the Battle of Britain, pilot fatigue became an issue. This problem re-surfaced in the latter half of World War II, when longer-range high-performance aircraft began to place increasing demands on their pilots. Losses as a result of fatigue rather than combat began to rise. At the MRC, Kenneth Craik developed the ‘Fatigue Apparatus’, which later became universally known as the ‘Cambridge Cockpit’. This was based on the cockpit of a disused Supermarine Spitfire. Even as early as 1940, experiments using this seminal piece of apparatus established that control inputs from a fatigued pilot were different from those from a non-fatigued pilot.
The tired operator tends to make more movements than the fresh one, and he does not grade them appropriately … he does not make the movements at the right time. He is unable to react any longer in a smooth and economical way to a complex pattern of signals, such as those provided by the blind-flying instruments. Instead his attention becomes fixed on one of the separate instruments. … He becomes increasingly distractible. (Drew, 1940)
A further excellent history of the early work in Human Factors can be found in Roscoe (1997).
During these early years of the Human Factors discipline, researchers happily turned their skills to many aspects of human performance. However, with increasing knowledge and specialisation, the discipline of Human Factors began to fragment. The sub-disciplines referred to earlier began to develop. Nevertheless, from the late 1950s and early 1960s Human Factors began to make increasingly large contributions, particularly in the three domains of selection, training and flight deck design. However, it was in the 1970s that the Human Factors discipline really began to take off, with the advent of the ‘CRM revolution’ and the development of the ‘glass cockpit’.
The CRM (Cockpit – later Crew – Resource Management) revolution introduced applied social psychology and management science onto the flight deck. CRM evolved as a result of a series of accidents involving perfectly serviceable aircraft. The main cause of these accidents was attributed to a failure to utilise all the crew resources available on the flight deck in the best way possible, for example in the crash involving the Lockheed L-1011 (Tristar) in the Florida Everglades (National Transportation Safety Board, 1973). At the time of the accident the aircraft had a minor technical failure (a blown light bulb on the gear status lights) but actually crashed because nobody was flying it! The crew were ‘head down’ on the flight deck trying to fix the problem. Other accidents highlighted instances of Captains trying to do all the work while the other flight crew members were almost completely unoccupied; dominant personalities suppressing teamwork and error checking; or simply poor crew cooperation, coordination and/or leadership.
The CRM revolution also stimulated further research and changes in practice in the way that flight crew were selected and trained. This coincided with a change in the nature of work of the airline pilot from being a ‘hands on throttle and stick’ flyer to one of being a flight deck/crew manager of a highly automated machine. This change has been particularly pronounced in the last half of this period.
Throughout the 1950s and 1960s airlines tended to rely heavily on the military for producing already trained pilots. Selection techniques assumed that candidates were trained and competent (e.g. interviews, reference checks and maybe a flight check – see Suirez, Barborek, Nikore and Hunter, 1994). However, in the 1970s and 1980s, particularly in Europe where there was less emphasis on recruiting pilots from a military background and where there was also an increasing demand for commercial pilots, greater emphasis began to be placed upon the selection processes for ab initio trainees. In this case it was important to assess the candidate’s potential to become a successful airline pilot (i.e. not fail the training, which was very costly). Psychometric and psychomotor tests became more commonplace, similar in many ways to the procedures military selection panels had been using for some years. However, as the management role of pilots began to develop (especially with the increasing uptake of CRM) qualities such as judgement and problem solving, communication, social relationships, personality and motivation became as important as the technical skills involved in flying a large jet (Bartram and Baxter, 1996).
Simultaneously, the nature of pilot training began to evolve from the 1970s onwards. Initially, there was increasing use of flight simulators, which began to improve in fidelity and (in relative terms) to decrease in cost. More training was now accomplished on the ground than in the air. The content of the training programmes also began to change (and is still changing). Hitherto, training and licensing had concentrated on flight and technical skills (manoeuvring the aircraft, navigation, system management and fault diagnosis, etc.). It addressed issues such as flying the aircraft manually or dealing with emergencies resulting from a technical failure (e.g. engine failure at V1 or performing flapless approaches). However, with increasing technical reliability, it was evident that the major cause of air accidents had now become human error, and that this often resulted from the failure of the flight deck crew to act in a well-coordinated manner. Non-technical skills training (CRM instruction) was introduced, initially for flight deck crew and subsequently throughout the aircraft to include all the cabin crew. This was partly contingent upon a change in philosophy towards Line Oriented Flight Training (LOFT), where training as a full crew and acting as a team member was increasingly stressed. Performance was evaluated with respect to how the crew handled flying the aircraft, how they dealt with the technical aspects of a problem, and most importantly, how the people on the flight deck (and if necessary elsewhere in the aircraft) were employed to address the issue (see Foushee and Helmreich, 1988).
In terms of the design of the pilot interface, flight deck displays have progressed through three eras: the ‘mechanical’ era; the ‘electro-mechanical’ era; and most recently, the ‘electro-optical’ (E-O) era (Reising, Ligget and Munns, 1998). With the advent of the ‘glass cockpit’ revolution (where electro-optical display technology began to replace the electro-mechanical flight instrumentation in the early 1980s) opportunities were presented for new formats for the display of information and Human Factors started to play an increasingly important part in their design. However, while the new display technology represented a visible indication of progress (e.g. as implemented on the ‘hybrid’ Airbus A300/A310 and Boeing 757/767 aircraft) the true revolution resided in the less visible aspects of the flight deck, particularly the increased level of automation available as a result of the advent of the Flight Management System (FMS) or Flight Management Computer (FMC) (see Billings, 1997). The electro-optical display technology was merely the phenotype: the digital computing systems being introduced represented the true change in the genotype of the commercial aircraft. Not only did these higher levels of automation allow opportunities for reducing the number of crew on the flight deck from three to two (eradicating the function of the Flight Engineer – something that had been done in less highly automated, shorter-range commercial aircraft some years earlier) but it also required a change in the skill set required by pilots. Aircraft were now ‘managed’ and not ‘flown’.
With these increasing levels of automation on the flight deck in the 1980s and early 1990s a great deal of research was undertaken initially in the area of workload measurement and prediction, and subsequently in enhancing the pilot’s situation awareness. Autoflight systems had certainly reduced the physical workload associated with flying and computerised display systems (coupled with the FMS/FMC) had also reduced the mental workload associated with many routine computations linked to in-flight navigation. However, it was wrong to say that automation reduced workload overall: it had simply changed its nature. Wiener (1989) labelled the automation in these first generation ‘glass cockpit’ aircraft ‘clumsy’ automation. It reduced crew workload where it was already low (e.g. in the cruise) but increased it dramatically where it was already high (e.g. in terminal manoeuvring areas).
The new breed of multifunctional electro-optical displays, together with high levels of automation, had the ability both to promote situation awareness through new, intuitive display formats and simultaneously to degrade it by hiding more information than ever before. Automation made many systems ‘opaque’. Dekker and Hollnagel (1999) described automated flight decks of the time as being rich in data but poor in information. While the advanced technology aircraft being introduced were much safer and had a far lower accident rate (Boeing Commercial Airplanes, 2009), they introduced a new type of accident (or perhaps they merely exaggerated an already existing underlying problem). Dekker (2001) described these as ‘going sour’ accidents, where the aircraft was ‘managed’ into disaster, usually as a result of a common underlying set of circumstances: human errors, miscommunications and mis-assessments of the situation. But as stated previously, it would be very wrong to blame the pilots in these cases. The accidents resulted from a number of factors pertaining to the design of the flight deck, the situation and the crews’ knowledge/training, which all conspired against their ability to coordinate their activities with the aircraft’s automation. It was increasingly apparent that it was almost impossible to separate design from procedures and from training if the whole system of operation, including the human element, was to be optimised.
Today the practice of Human Factors in the aerospace industry is largely an incremental development of the situation described up to this point; however, there is now universal acceptance of its importance to safe operations. Human Factors is firmly embedded in the selection, training and design processes and is also a cornerstone of all safety management systems. Good Human Factors practice is effectively mandated through many regulations. For example, an effective safety management system is now an essential part of any Air Operator’s Certificate (AOC) – for instance see EU-OPS (2009); since September 2007, Certification Specification (CS) 25.1302 and its Acceptable Means of Compliance (AMC 25.1302) have been adopted by EASA to mandate ‘good’ Human Factors design on the flight deck; and training in Human Performance and Limitations and in CRM is now mandatory for all pilots, with recurrent training in these topics required for all professional pilots (see Civil Aviation Publication 737: CAA, 2006). In some cases, though, more control has been delegated back to the airlines. Training curriculum requirements may now be delegated to suitably experienced airlines. The approach of developing training needs directly from line operational requirements reflects the training philosophy outlined by the Federal Aviation Administration (FAA) in the Advanced Qualificati...
