Practical Human Factors for Pilots

Capt. David Moriarty

304 pages · English · ePub (mobile friendly)

About This Book

Practical Human Factors for Pilots bridges the divide between human factors research and one of the key industries that this research is meant to benefit: civil aviation. Human factors are now recognized as being at the core of aviation safety, and the training syllabus that flight crew trainees have to follow reflects that. This book will help student pilots pass exams in human performance and limitations, successfully complete multi-crew cooperation and crew resource management (CRM) training, and prepare for the assessment of non-technical skills during operator and license proficiency checks in the simulator and during line checks when operating flights.

Each chapter begins with an explanation of the relevant science behind that particular subject, along with mini-case studies that demonstrate its relevance to commercial flight operations. Particular focus is given to practical tools and techniques that students can learn in order to improve their performance, as well as "training tips" for the instructor.

  • Provides practical, evidence-based guidance on issues often at the root of aircraft accidents
  • Uses international regulatory material
  • Includes concepts and theories that have practical relevance to flight operations
  • Covers relevant topics in a step-by-step manner, describing how they apply to flight operations
  • Demonstrates how human decision-making has been implicated in air accidents and equips the reader with tools to mitigate these risks
  • Gives instructors a reliable knowledge base on which to design and deliver effective training
  • Summarizes the current state of human factors, training, and assessment

1 Introduction to human factors

This chapter defines human factors using the example of the accident at the Three Mile Island nuclear power station in 1979. Human factors is a multidisciplinary field that tries to optimize the human (liveware), procedural (software), and machine (hardware) elements of a system, as well as the interactions among these elements. Together with the environment in which they operate, these elements make up the SHEL model (software, hardware, environment, and liveware). Human factors is at the heart of aviation safety. In aviation, the pilots and other crew members are the human element, the standard operating procedures are the procedural element, and the aircraft is the machine element. This book focuses mainly on optimizing the performance of pilots and how they interact with the other elements. Crew Resource Management (CRM) training is how pilots are normally exposed to human factors, and non-technical (CRM) skills are assessed during simulator checks and line checks.

Keywords

behavioral marker; Crew Resource Management; human factors; non-technical skill; NOTECHS; SHEL model

1.1 The start of modern human factors

In March 1979 an argument took place in an aircraft hangar in Pennsylvania.1 The argument was heated and was between two experts working in a highly specialized field, each one arguing that his interpretation of events was the correct one. The stakes of this argument were about as high as one could imagine. If one of the experts was proved right, the nuclear reactor nearby was about to explode, spreading radioactive devastation across the USA in the same way as Chernobyl would spread radiation across Europe and Russia a few years later. To make the situation even more critical, the President of the United States, Jimmy Carter, and his wife were shortly to land at the airport before traveling to the plant to carry out an inspection. An explosion would then not only cause catastrophic radioactive fallout but would also kill the President of the USA.
The problems had started a few days earlier at Unit 2 of the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania. The problem seemed simple enough at first. Under normal circumstances, water passing through the core of the reactor in the primary cooling system is heated to high temperature by the nuclear reaction. The pipework carrying this heated, radioactive water then comes into contact with the pipework of the secondary cooling system. Heat passes from the hot water in the primary cooling system to the cold water in the secondary cooling system, but no water is transferred between the two. For this reason, the water in the primary cooling system is radioactive but the water in the secondary cooling system is not. The water in the secondary cooling system must be extremely pure because when it turns to steam, it must drive carefully engineered turbine blades to generate electricity. Any impurities in the water would make this process inefficient. A filter system keeps this water as pure as possible, although the filter in Unit 2 was known to be problematic and had failed several times recently. At 4 a.m. on 28 March 1979, a small amount of non-radioactive water from the secondary cooling system leaked out through the filter unit. Many of the instruments used to monitor the plant rely on pneumatic pressure, and it seems that some of the water got into this air system and led to some instruments giving incorrect readings. One of the systems affected by the leak was the feedwater pumping system, which kept water circulating through the secondary cooling system and thus carried heat away from the hot, radioactive water in the primary cooling system. The contamination caused this system to shut down, and with no heat being transferred from the primary to the secondary cooling system, an automated safety device became active and shut down the turbine.
The constant flow of water driven by the feedwater pumps was what allowed heat to be carried away from the core and so kept the temperature of the reactor under control. With these pumps shut down, emergency feedwater pumps activated to keep the core temperature under control. For some reason, though, the valves in both of the emergency feedwater systems had been closed for maintenance and had not been reopened. The emergency pumps were running, and this was indicated in the control room, but the operators did not realize that although the pumps were working, the pipes were closed and so feedwater could not flow and cool the reactor down. There was an indication in the control room that the valves were closed, but one of these was obscured by a tag hanging from the panel above, and the default assumption of the operators was that the valves would be open, since common knowledge had it that they were only closed during maintenance and testing.
With heat building in the core, the reactor automatically activated a system that would stop further heat from being generated. Control rods dropped into the core to stop the nuclear chain reaction by absorbing the neutrons that would normally sustain it. Even though the chain reaction had stopped, the huge steel reactor was still incredibly hot and required vast quantities of water to cool it to a safe level. Without this cooling, the residual high temperatures would continue to heat the remaining water in the core and could lead to a significant build-up of pressure, enough to critically damage the reactor. To avoid this, an automatic pressure relief valve opened to allow some of the hot, radioactive water to leave the core and so reduce the pressure. The pressure relief valve is only meant to be open for a short period to regulate the pressure. Unfortunately, it failed to close after the pressure in the core had been reduced. Unbeknownst to the operators, the core now had a huge hole in it (the open relief valve). To add to the confusion, even though the valve was stuck open, the system that would normally close it signaled to the control room that it had ordered the relief valve to close, and the operators interpreted this as confirmation that the valve actually was closed.
When it comes to problem solving in complex systems such as nuclear power plants and modern aircraft, it is often the highly automated nature of such systems that makes it much more difficult for the human operators to keep up with an evolving situation. Within 13 seconds of the initial failure in the filter unit, the automated safety devices had initiated a sequence of major changes in how the reactor was being controlled. Not only was the plant in a very different configuration from the one it had been in 13 seconds previously, but the various valves left in incorrect positions, and the fact that this was not clearly signaled in the control room, meant that the operators were completely in the dark regarding the true nature of the problem they had to deal with. Problems continued to mount over the following days, to the extent that a mass evacuation was ordered and churches in the local area offered general absolution to their congregations, something that is normally only offered when death is imminent. As all levels of local, state and federal government tried to solve the problem, the situation continued to deteriorate. The argument in the aircraft hangar was about whether a dangerous concentration of hydrogen gas was accumulating at the top of the reactor at such high pressure that it could detonate, blowing open the reactor core and spreading tons of nuclear material all over the USA. Fortunately for the population, it seemed that while hydrogen was accumulating, it was not at sufficient pressure to detonate spontaneously. By the time President Jimmy Carter landed and was taken on a tour of the plant, an improvised system of pipes was feeding water to the core, thus cooling it, and the unexpected positions of the various valves had been discovered and corrected. Three Mile Island did not explode, but the reactor was critically damaged and a substantial amount of the nuclear material in the core had melted, rendering it unusable. There had also been some release of nuclear material, but not enough to pose a significant health risk.2
This case illustrates many of the technological, procedural and human limitations that can be seen in just about every accident or incident in any industry. The human operators had to deal with a highly automated, highly complex nuclear power plant that could radically change its own configuration faster than they could keep up with. The indicators that they relied on to form an adequate mental model of the situation were unreliable, and many assumptions made during the decision-making process not only failed to fix the problem but actually made it worse. The accident started in the early hours of the morning, when humans are poorly suited to complex problem solving. Because of the limited number of phone lines running into the plant, it was almost impossible for people with vital information (such as the company that built the plant) to communicate it to the people making the decisions. Information processing, decision making, error management, communications, leadership, fatigue and automation management were all impaired, and it was the subsequent investigation of this accident th...
