Introduction
What role might organizational and management factors play in enhancing or undermining safety in aviation systems? Despite a long and successful tradition of work on the important relationship between safety and individual aspects of behaviour and attitudes, under the general heading of human factors, wider organizational factors have only recently been clearly identified as contributing significantly to accident causation, and hence as a topic of concern for both aviation safety researchers and practitioners.
This does not, of course, necessarily mean that organizational causes of accidents are in themselves a fundamentally new phenomenon in aviation; these factors have almost certainly, to a greater or lesser extent, been present since the earliest days of civilian and military aviation. What, however, has undoubtedly changed in recent years has been our thinking about the human origins of large-scale accidents and incidents. This development derives in part from a number of prominent accidents and disasters that have occurred internationally over the past decade, although the majority of the research work prompted by these events has been concerned with safety in contexts other than aviation, such as the chemical process, nuclear and surface transportation industries.
Examples of significant accidents, across a wide variety of large-scale hazardous systems, include: in the United Kingdom, the King's Cross Underground fire, the Clapham Junction rail crash and the Piper Alpha oil rig disaster; in India the Bhopal chemical disaster; in the relatively low-technology arena of marine transportation the Zeebrugge ferry capsize as well as the Exxon Valdez and the Braer oil-tanker disasters; and in aerospace, perhaps the most vivid image of the 1980s will remain that of the destruction of the Space Shuttle Challenger in January 1986.
We would argue that some of the recent hard lessons learned concerning the role of organizational factors and safety cannot be ignored by aviation practitioners, a point now explicitly recognized by the International Civil Aviation Organization, amongst others, in a recent digest on Human Factors in Management and Organization (ICAO, in press). In addressing this important issue the chapter introduces, and is structured around, two very general theoretical notions. The first is that of socio-technical system failure, and the second the idea of organizational safety culture.
A part of our argument is that these theoretical ideas are not solely of academic interest, but are also of direct practical importance. Socio-technical systems theory has provided new and wide ranging insights into the preconditions of large-scale accidents, and in doing so has suggested expanded approaches to accident and incident analysis and the diagnosis of fundamental background causes of such events. More recently, work on the concept of safety culture points to a number of ways of understanding, and in turn possibly influencing, some of the high level social factors that serve to undermine safety in aviation, as well as in a wide range of other contexts where the management of risk and hazard is the responsibility of large organizations.
The chapter begins with an illustration of why the current broadening of perspectives on the human element and safety is essential, and must include organizational as well as individual factors. As frameworks for understanding these issues, two theoretical models of large-scale accidents, namely Turner's (1978) disaster incubation model and Perrow's (1984) complexity-coupling account of failures in socio-technical systems, are outlined.
We then go on to discuss the more recent concept of organizational safety culture. This term first arose as the result of a European analysis of the specific human and organizational factors underlying the 1986 Chernobyl disaster in the former Soviet Union (OECD, 1987). Moreover, safety culture can be related to a number of more general social science treatments of culture, as well as to parallel literature in the safety field.
In the final section we consider some of the implications of the concept of safety culture for aviation practice, relating this to the question of institutional or organizational design for safety. However, in doing so we seek to adopt a critical perspective with respect to the recent discussions of poor and good safety cultures. It will be no simple matter either to translate the many theoretical treatments of the concept into practical action, or to resolve a number of the generic dilemmas which arise in any attempt at institutional design.
From human factors to socio-technical systems
Since the Second World War, aviation in the developed world has been marked out from many other high-technology activities by its early and increasingly successful commitment to the application of human factors psychology to questions both of safety, and to other more general ergonomics problems (e.g. Hawkins, 1987). The reasons for this are not difficult to discern. Members of the public perceive many high-consequence/low-probability technological hazards, including those associated with flying, in complex and often subtle ways (see Pidgeon et al., 1992a for a review). This is one of the reasons why, whenever an accident does occur, there will invariably be strong social and political pressures for thorough investigation and remedial action as well as a collective desire to apportion blame after the event. In addition to this, flight deck crew, air traffic control staff and maintenance personnel play out critical roles and responsibilities, as the front-line actors within a set of highly structured and visible humanâmachine processes. This, coupled with the many opportunities for learning that are presented when things do go wrong in either an actual accident or a significant incident, has inevitably drawn (and continues to draw) the investigative focus towards the ways in which individual human errors contribute to such events.
A number of writers within the aviation research community have recently argued that there is now an urgent need to complement analyses of individual human error by moving towards an understanding of the role played by broader system factors in accidents. Murphy (1992) notes that while civilian passenger risk, measured as deaths per passenger mile flown, has decreased steadily in the past decades, the year-on-year numbers of accidents involving commercial aircraft have remained remarkably stable. And the ICAO (in press) state that:
The late 70's, the 80's and 90's will undoubtedly be remembered as the golden era of aviation Human Factors. Cockpit (and then Crew) Resource Management (CRM), Line-Oriented Flight Training (LOFT), Human Factors training programmes, attitude-development programmes and similar efforts have multiplied, and a sustained campaign to increase the awareness of human error in aviation safety has been initiated. But much to the consternation of safety practitioners and the entire aviation community, human error continues to be at the forefront of accident statistics (p. 1).
The authors of the digest then go on to describe, with the aid of case-study illustrations, how 'human error' is often precipitated by more systemic, background management and organizational factors. Similarly, Adams and Payne (1992) review the contribution of pilot errors to air ambulance accidents, making a distinction between pilot-generated (e.g. individual abilities, attitudes and judgements) and system-generated (e.g. training, procedures, supervision and air crew selection, and general management) causes. They point out that one implication of this for risk management is that 'we can achieve only limited success in reducing pilot-error accident rates if the pilot is the only part of the operational problem being fixed' (Adams and Payne, 1992, p. 40).
Enders (1992) makes the additional point that most aviation accidents have several causes (a feature in common with accidents in many other hazardous technologies, to which we return later) and that to seek a single 'probable' cause of an event, such as pilot or maintenance error, therefore misses opportunities for learning. He goes on to suggest that causes involving 'management or supervisory inattention at all levels' are the most prevalent category, and perhaps contribute as much to accidents as the total numbers of pilot and maintenance errors combined.
In a similar vein, Johnston (1991) argues that the preoccupation in current aircraft accident investigations with the immediately visible causes of accidents, typically technical malfunction and front-line operator errors, diverts attention away from consideration of whether underlying organizational causes may be present. Johnston is particularly concerned that aircraft accident analysts adopt a wider 'investigative reality'. For example, where individuals are found to have failed to follow Standard Operating Procedures, any conclusion 'the accident occurred because X failed to follow procedure Y' should not be the end of the matter, but should always be accompanied by the question 'and why was this so?'
A recent case study example, which illustrates a number of these concerns, is the sudden in-flight structural break-up and crash, with the loss of all 14 lives aboard, of a twin-engined Continental Express Embraer 120 on 11 September 1991 near Eagle Lake, Texas. The catastrophic structural failure of the aircraft occurred without warning during a descent in good weather through approximately 12 000 feet en route to landing at Houston Intercontinental Airport. Analysis of the cockpit voice and flight data recorders, together with the pattern of wreckage, revealed that neither pilot actions nor weather contributed to the accident. Rather, the sudden loss of control and subsequent structural break-up was triggered by the separation of the leading edge assembly from the left side of the horizontal stabilizer on top of the aircraft's T-type tail.
The Embraer 120 leading edge assembly is normally fixed to the horizontal stabilizer by two rows of screws, at the top and bottom. However, the accident damage was consistent with the top row of 49 screws having been missing for the whole of the flight, leading to a sudden separation of the partially attached assembly under the peak dynamic loads present during the descent, loads that were nevertheless within normal limits. Since the top of the horizontal stabilizer is not visible from the ground, the fact that screws were missing would not have been apparent to the crew during their pre-flight checks.
The US National Transportation Safety Board report (NTSB, 1992) into the accident documents how the failure was not purely a 'mechanical' circumstance, but the result of deficiencies rooted in the maintenance, management and regulatory systems surrounding the operation of the aircraft. The immediate reasons for the missing screws were found to reside in the events of the evening prior to the crash, when the Embraer 120 had undergone scheduled maintenance operations to replace the deicing assemblies, known as deice 'boots', installed on both the left and right leading edges of the horizontal stabilizer. The operations to change the deice boots required separation of the leading edge assemblies from the aircraft by removal of both the top and bottom rows of screws, respectively.
During the course of two shifts over the evening and night, maintenance personnel successfully replaced and resecured the right-hand leading edge assembly and boot. However, work on the left-hand assembly was started but not completed. On the first 'evening' shift the top rows of screws for both left and right assemblies were removed in preparation by an inspector, while two mechanics began work to remove the old boot on the right-hand leading edge assembly. However, when the second 'midnight' shift took over the aircraft they successfully completed the right-hand deice boot change, but did no further work on the left-hand assembly.
A num...