Practical Human Factors for Pilots
eBook - ePub

Capt. David Moriarty

  1. 304 pages
  2. English
  3. ePUB (available on the app)
  4. Available on iOS and Android

About the Book

Practical Human Factors for Pilots bridges the divide between human factors research and one of the key industries that this research is meant to benefit: civil aviation. Human factors are now recognized as being at the core of aviation safety, and the training syllabus that flight crew trainees have to follow reflects that. This book will help student pilots pass exams in human performance and limitations, successfully complete multi-crew cooperation and crew resource management (CRM) training, and prepare for the assessment of non-technical skills during operator and license proficiency checks in the simulator and during line checks when operating flights.

Each chapter begins with an explanation of the relevant science behind that particular subject, along with mini-case studies that demonstrate its relevance to commercial flight operations. Particular emphasis is placed on practical tools and techniques that students can learn in order to improve their performance, as well as on "training tips" for the instructor.

  • Provides practical, evidence-based guidance on issues often at the root of aircraft accidents
  • Uses international regulatory material
  • Includes concepts and theories that have practical relevance to flight operations
  • Covers relevant topics in a step-by-step manner, describing how they apply to flight operations
  • Demonstrates how human decision-making has been implicated in air accidents and equips the reader with tools to mitigate these risks
  • Gives instructors a reliable knowledge base on which to design and deliver effective training
  • Summarizes the current state of human factors, training, and assessment


Information

Year
2014
ISBN
9780128007860
1

Introduction to human factors

This chapter defines human factors using the example of the accident at the Three Mile Island nuclear power station in 1979. Human factors is a multidisciplinary field that seeks to optimize the human (liveware), procedural (software), and machine (hardware) elements of a system, as well as the interactions among these elements. These elements make up the SHEL model (software, hardware, environment, and liveware). Human factors is at the heart of aviation safety. In aviation, the pilots and other crew members are the human element, the standard operating procedures are the procedural element, and the aircraft is the machine element. This book focuses mainly on optimizing the performance of pilots and how they interact with the other elements. Crew Resource Management (CRM) training is the way in which pilots are normally exposed to human factors. Non-technical (CRM) skills are assessed during simulator checks and line checks.

Keywords

behavioral marker; Crew Resource Management; human factors; non-technical skill; NOTECHS; SHEL model

1.1 The start of modern human factors

In March 1979 an argument took place in an aircraft hangar in Pennsylvania.1 It was a heated argument between two experts working in a highly specialized field, each arguing that his interpretation of events was the correct one. The stakes of this argument were about as high as one could imagine. If one of the experts was proved right, the nuclear reactor nearby was about to explode, spreading radioactive devastation across the USA in the same way as Chernobyl would spread radiation across Europe and Russia a few years later. To make the situation even more critical, the President of the United States, Jimmy Carter, and his wife were shortly to land at the airport before traveling to the plant to carry out an inspection. An explosion would then not only cause catastrophic radioactive fallout but would also kill the President of the USA.
The problems had started a few days earlier at Unit 2 of the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania. The problem seemed simple enough at first. Under normal circumstances, water passing through the core of the reactor in the primary cooling system is heated to a high temperature by the nuclear reaction. The pipework carrying this heated, radioactive water then comes into contact with the pipework of the secondary cooling system. Heat passes from the hot water in the primary cooling system to the cold water in the secondary cooling system, but no water is transferred between the two. For this reason, the water in the primary cooling system is radioactive but the water in the secondary cooling system is not. The water in the secondary cooling system must be extremely pure because when it turns to steam, it must drive carefully engineered turbine blades to generate electricity. Any impurities in the water would make this inefficient. A filter system keeps this water as pure as possible, although the filter in Unit 2 was known to be problematic and had failed several times recently. At 4 a.m. on 28 March 1979, a small amount of non-radioactive water from the secondary cooling system leaked out through the filter unit. Many of the instruments that are used to monitor the nuclear power plant rely on pneumatic pressure, and it seems that some of the water got into this air system and led to some instruments giving incorrect readings. One of the systems affected by this leak was the feedwater pumping system that moved the hot radioactive water in the primary cooling system into proximity to the cool, non-radioactive water in the secondary cooling system. The water contaminating the feedwater pumping system caused it to shut down, and with no heat being transferred from the primary to the secondary cooling system, an automated safety device became active and shut down the turbine.
As well as circulating water in the primary cooling system so that heat could be transferred to the secondary cooling system, the constant flow of water driven by the feedwater pumps was what kept the temperature of the reactor core under control. With these pumps shut down, emergency feedwater pumps activated to keep the core temperature in check. For some reason, though, the valves in both of the emergency feedwater systems had been closed for maintenance and had not been reopened. The emergency pumps were running, and this was indicated in the control room, but the operators did not realize that with the valves closed, feedwater could not reach the reactor to cool it down. There was an indication in the control room that the valves were closed, but one of these was obscured by a tag hanging from the panel above, and the operators' default assumption was that the valves would be open, since common knowledge had it that they were only closed during maintenance and testing.
With heat building in the core, the reactor automatically activated a system that would stop further heat from being generated. Control rods dropped into the core to stop the nuclear chain reaction by absorbing the neutrons that would normally sustain it. Even though the chain reaction had stopped, this huge stainless steel reactor was still incredibly hot and required vast quantities of water to cool it to a safe level. Without this cooling, the residual high temperatures would continue to heat the remaining water in the core and could lead to a significant build-up of pressure, enough to critically damage the reactor. To avoid this, an automatic pressure relief valve opened to allow some of the hot, radioactive water to leave the core and so reduce the pressure. The pressure relief valve is only meant to be open for a short period to regulate the pressure. Unfortunately, the valve failed to close after the pressure in the core had been reduced. Unbeknownst to the operators, the core now had a huge hole in it (the open relief valve). To add to the confusion in the control room, even though the valve was now stuck open, the system that would normally close it sent a signal to the control room saying that it had ordered the relief valve to close, and this was interpreted as confirmation that the valve was actually closed.
When it comes to problem solving in complex systems such as nuclear power plants and modern aircraft, it is often the highly automated nature of such systems that makes it much more difficult for the human operators to keep up with an evolving situation. Within 13 seconds of the initial failure in the filter unit, the automated safety devices had initiated a sequence of major changes in how the reactor was being controlled. Not only was the plant in a very different configuration from the one it had been in 13 seconds previously, but the various valves in incorrect positions, and the fact that this was not clearly signaled in the control room, meant that the operators were completely in the dark about the true nature of the problem they had to deal with. Problems continued to mount over the following days to the extent that a mass evacuation was ordered and churches in the local area offered general absolution to their congregations, something that is normally only offered when death is imminent. As all levels of local, state and federal government tried to solve the problem, the situation continued to deteriorate. The argument in the aircraft hangar was about whether a dangerous concentration of hydrogen gas was accumulating at the top of the reactor at such high pressures that it could detonate, blowing open the reactor core and spreading tons of nuclear material all over the USA. Fortunately for the population, it seemed that while hydrogen was accumulating, it was not at sufficient pressure to detonate spontaneously. By the time President Jimmy Carter landed and was taken on a tour of the plant, an improvised system of pipes was feeding water to the core, thus cooling it, and the unexpected positions of the various valves had been discovered and corrected. Three Mile Island did not explode, but the reactor was critically damaged and a substantial amount of the nuclear material had melted in the core, rendering it unusable. There had also been some release of nuclear material, but not enough to pose a significant health risk.2
This case illustrates many of the technological, procedural and human limitations that can be seen in just about every accident or incident in any industry. The human operators had to deal with a highly automated, highly complex nuclear power plant that could radically change its own configuration at a speed that they could not keep up with. The indicators that they relied on to form an adequate mental model of the situation were unreliable, and many assumptions were made during the decision-making process that not only failed to fix the problem but actually made it worse. The accident started in the early hours of the morning, when humans are poorly suited to complex problem solving. Because of the limited number of phone lines running into the plant, it was almost impossible for people with vital information (such as the company that built the plant) to communicate it to the people making the decisions. Information processing, decision making, error management, communications, leadership, fatigue and automation management were all impaired, and it was the subsequent investigation of this accident th...
