Organizational Learning at NASA

The Challenger and Columbia Accidents

Julianne G. Mahler

About This Book

Just after 9:00 a.m. on February 1, 2003, the space shuttle Columbia broke apart and was lost over Texas. This tragic event led, as the Challenger accident had 17 years earlier, to an intensive government investigation of the technological and organizational causes of the accident. The investigation found chilling similarities between the two accidents, leading the Columbia Accident Investigation Board to conclude that NASA failed to learn from its earlier tragedy.

Despite the frequency with which organizations are encouraged to adopt learning practices, organizational learning—especially in public organizations—is not well understood and deserves to be studied in more detail. This book fills that gap with a thorough examination of NASA's loss of the two shuttles. After offering an account of the processes that constitute organizational learning, Julianne G. Mahler focuses on what NASA did to address problems revealed by Challenger and its uneven efforts to institutionalize its own findings. She also suggests factors overlooked by both accident commissions and proposes broadly applicable hypotheses about learning in public organizations.

PART ONE
Recognizing the Value of Organizational Learning

CHAPTER 1
Uncanny Similarities

The Challenger and Columbia Accidents

ON FEBRUARY 1, 2003, the space shuttle Columbia began its homeward journey, and at 8:44 a.m. it initiated re-entry into the Earth’s atmosphere. During the ensuing sixteen minutes the orbiter would experience tremendous heat, with the leading-edge temperatures of the wings rising to more than an estimated 2,800 degrees Fahrenheit. At first all seemed to go well. Then, at 8:54, the flight director at mission control in Houston was informed by the maintenance, mechanical, and crew systems office, known as MMACS, that the temperature sensors on the left side of the vehicle were “lost.” The following records the exchange:
MMACS: “Flight—MMACS.”
Flight director: “Go ahead, MMACS.”
MMACS: “FYI, I’ve just lost four separate temperature transducers on the left side of the vehicle, hydraulic return temperatures. Two of them on system one and one in each of systems two and three.”
Flight: “Four hyd [hydraulic] return temps?”
MMACS: “To the left outboard and left inboard elevon.”
Flight: “Okay, is there anything common to them? DSC [discrete signal conditioner] or MDM [multiplexer-demultiplexer] or anything? I mean, you’re telling me you lost them all at exactly the same time?”
MMACS: “No, not exactly. They were within probably four or five seconds of each other.”
Flight: “Okay, where are those? Where is the instrumentation located?”
MMACS: “All four of them are located in the aft part of the left wing, right in front of the elevons, elevon actuators. And there is no commonality.”
Flight: “No commonality.” [Columbia Accident Investigation Board 2003 (hereafter CAIB), 42]
In this context, no commonality meant that the temperature transducers were not on the same electrical circuit, and the implication was that a random malfunction in the electrical circuit produced the readings. Further developments did not augur well for the Columbia. An attempt by Columbia’s commander, Rick Husband, to communicate with ground control resulted in a broken transmission: “And uh, Hou—.” Immediately afterward, at 8:59, instrumentation at Mission Control indicated that there was no tire-pressure reading for the tires on the left side of the shuttle, both inboard and outboard. At the flight director’s command, those in direct communication with the crew (CAPCOM) communicated the new developments:
CAPCOM: “Columbia—Houston. We see your tire pressure messages, and we did not copy your last call.”
Flight: “Is it instrumentation, MMACS? Gotta be …”
MMACS: “Flight—MMACS. Those are also off-scale low.” (CAIB, 43)
As the Columbia approached Dallas, Texas, at 8:59, a response of “Roger, [cut off mid-word]” came in from Commander Husband to Mission Control. At 9:00, while Mission Control was still trying to regain communication with Columbia, the orbiter was breaking up, a process that was recorded as bright flashes on the postflight videos and imagery. Clearly, NASA had a catastrophic event on its hands, one equal to the Challenger accident of 1986: a shuttle program failure that could have been avoided.

OVERVIEW OF THE CHALLENGER AND COLUMBIA ACCIDENTS

In both shuttle accidents, the safety issues that brought the NASA shuttle program to a standstill were “low-tech,” had surfaced many times before, and were well known to shuttle managers at key levels. With Challenger, the explosion in the first minutes of launch was caused by hot gases escaping from the solid rocket booster at one of the joints between the segments of the rocket. Engineers began raising the red flag about the poor design of the joints and their seals as early as 1977. A steady stream of warnings about the problem emerged from within the space shuttle organization up to July 1985, just six months before the Challenger catastrophe, when managers determined that a new design for the joints was needed. But the shuttle continued to fly, even though the joint seal was classified as criticality 1, designating an essential component without a backup, so that any failure would lead to disaster. The fear was that any leakage of hot gases at one of the joints could quite easily rupture the fuel tanks. On January 28, 1986, this is exactly what happened. Shortly after liftoff, hot gases burned through the seals of one of the joints in the right solid rocket booster. In a matter of seconds the shuttle broke up, disappearing behind a cloud of vapor. As would happen with the Columbia, all crew members perished.
While it was a flawed O-ring in Challenger’s right solid rocket booster that had led to the destruction of that shuttle, the CAIB concluded in 2003 that the technical cause of the Columbia disaster was a large debris strike to the orbiter’s wing from foam that had broken off the left bipod ramp that attached the shuttle to its large external liquid-fuel tank. Similar to the Challenger accident, numerous early warnings about the problem—in this case, the loss of foam and the dangers from debris striking the orbiter—had been sounded. In sixty-five of the seventy launches for which there was a photographic record, foam debris had been seen, and some of the largest pieces of foam striking the orbiter came from the left bipod ramp. Strikes from this ramp area were confirmed by imagery at least seven times, including the previous launch of the Columbia (CAIB, 123). In fact, the first known instance of foam loss took place during Challenger’s second mission (STS-7). The foam came from the left bipod ramp and was of a significant size, nineteen inches by twelve inches (CAIB, 123).

FAILURE TO LEARN

Although the technical flaws behind the Challenger and Columbia accidents differed, the accidents themselves were eerily similar in several ways. Of course, both were highly visible disasters, but both also involved a series of organizational and managerial failures. In researching the causes of the accidents, the 1986 Presidential Commission on the Space Shuttle Challenger Accident (often and hereafter called the Rogers Commission) and the CAIB looked at the contributing organizational problems as well as the engineering and hardware flaws. In both accidents, early evidence of technical malfunctions had been noted and deemed undesirable but “acceptable.” Decision makers were isolated, were under intense pressure to launch, did not listen to experienced engineers, and did not or could not openly acknowledge and discuss unresolved problems. In the words of Sally Ride, a member of the CAIB, former astronaut, and former member of the Rogers Commission, “there were ‘echoes’ of Challenger in Columbia” (CAIB, 195). The organization had received early warnings of safety problems in both cases, but it failed to take them seriously.
Pointing to these similarities, the Columbia Accident Investigation Board concluded that NASA was not a learning organization. Even after numerous instances of foam loss and a very serious debris strike just two flights before the launch of the Columbia, foam losses were still not considered dangerous enough to halt flights. In similar fashion, prior to the Challenger accident, instances of eroded seals between the segments of the solid rocket boosters were recognized but tolerated. The CAIB saw this pattern as evidence of “the lack of institutional memory in the Space Shuttle Program that supports the Board’s claim … that NASA is not functioning as a learning organization” (CAIB, 127).
The charge that NASA did not learn is especially puzzling in light of the attention paid inside and outside the organization to improving just those organizational features that contributed to the accidents. The Rogers Commission laid down a series of nine major recommendations, unanimously adopted, “to help assure the return to safe flight” (Rogers Commission 1986, 198).1 These recommendations were intended to prevent a problem like the one that destroyed the Challenger, an often seen and avoidable flaw, from ever happening again. The commission pointed to several characteristics of the space shuttle program that contributed to this accident, including a “silent safety program” that did not assert itself or confront the managers who routinely downgraded the seriousness of the O-ring problems on the solid rocket boosters. To remedy these flaws, the commission recommended a number of organizational changes as well as improved engineering designs. Greater authority was to be given to shuttle program managers to direct the project elements. NASA was to create a more robust safety organization of working engineers who would be independent of the shuttle project components they were to oversee and would be accountable only to the headquarters Office of Safety, Reliability, and Quality Assurance. Improvements in organization structure and reporting relationships and in tracking and resolving critical problems were recommended. NASA was to adopt a flight rate “consistent with its resources” to target the problems generated by pervasive launch-schedule pressures coupled with diminished resources (Rogers Commission 1986, 201). Other internal and external investigations reinforced these recommendations and are reported in the chapters that follow.
In virtually all cases, the CAIB found little evidence of changes made in response to these recommendations at the time of the Columbia accident. It noted that “despite all the post-Challenger changes at NASA and the agency’s notable achievements since, the causes of the institutional failure responsible for Challenger have not been fixed” (195). As later chapters will show, many of the factors contributing to the organizational failures behind the Challenger accident were equally important in shaping the organizational outcome that was the Columbia disaster. Our object is to understand how this could have happened.

QUESTIONS ABOUT LEARNING AT NASA

Two sets of questions are posed here. First, what does it mean to say that NASA did not learn? It seems surprising that such an accomplished organization would not be a learning organization. To make sense of the evidence supporting the claim that NASA did not in fact learn from the Challenger accident and did not become a learning organization, we must first establish what is meant by organizational learning and how we can recognize learning or failure to learn. This conceptual analysis leads in chapter 2 to a definition of the processes of learning. Using this definition, we will be able to closely examine case materials surrounding four commonly recognized causes of the accidents and the organizational and procedural changes adopted by NASA following the Challenger disaster to determine what led to the CAIB’s characterization. In each of the four case studies, the accidents will be compared to determine whether they are essentially similar in their organizational and managerial flaws, thus justifying the CAIB’s assessment, or whether there is evidence that learning, as defined here, occurred.
Second, after examining the evidence, we will ask why NASA failed to identify and act on the organizational danger signals. Did it ever effectively adopt the lessons from Challenger? If not, what blocked learning in an organization committed to acquiring knowledge? As an alternative to the hypothesis that NASA simply failed to learn, we will look at factors that might have intervened during the years between the Challenger and Columbia accidents, causing the agency to intentionally discard critical lessons in favor of new, apparently less useful, ones. In other words, we will look for evidence that NASA purposely “unlearned” the lessons from Challenger. As a third possibility, we will look for evidence that the agency did learn, but then somehow forgot, these lessons, and ask what could have led to the unintentional loss of administrative knowledge and a repeated pattern of error.
In answering these questions we have several aims. One is to uncover potentially hazardous public management practices at NASA and other large, complex organizations with technologically hazardous missions. We hope this analysis will help build the learning capacity of such organizations, especially advanced technology and national security institutions. Additionally, we hope to advance our understanding of how all organizations, but especially public organizations, learn or fail to learn. Thus our analysis of NASA’s response to the space shuttle accidents offers an opportunity to develop a specialized theory of organizational learning and its contributing processes in public agencies and provides a basis for moving beyond assessing the potential for learning in public organizations to explaining the process of learning and its limitations (Brown and Brudney 2003).
Several books have examined particular lessons about management that the NASA case offers, and we review a number of these below. Our investigation, however, differs from these. It looks at the underlying processes of organizational learning and the factors that limited NASA’s learning capacity. We also suggest the NASA case offers an unusually good laboratory within which to study organizational learning because of the intensity of the investigations into the events surrounding the accidents and into the actions, motives, and perceptions of key actors before and after the accidents. There has been an almost continuous stream of internal and external inquiries into NASA’s administration, offering a unique opportunity to study just those elements of learning that have been most elusive.

WHY STUDY ORGANIZATIONAL LEARNING?

Much is hoped for from learning in government agencies. In principle at least, organizational learning is claimed to be a model for self-correcting, self-designing organizations that can address many of the concerns of reform-minded public administration practitioners and theorists who want to decentralize and deregulate organizations without detaching them from their authorized missions. From this perspective, organizational learning is an especially valuable method of agency change and continual self-improvement.
Learning engages agency professionals themselves in an ongoing internal and external search for effective program technologies and management developments, and makes use of their own contextual interpretations of why a program is failing and how it could do better, even in the face of environmental and budgetary constraints. Unlike other recent public management change models such as TQM or re-engineering, learning is not a sweeping, generic, consultant-driven technique. Rather, as the idea is developed here, it is an indigenous organizational process: agencies naturally tend to learn. Grandori (1984), in discussing the possibility of a contingency theory of decision making, argues that satisficing is a form of heuristic decision making that adds to organizational knowledge with each choice. Similarly, learning may be reasonably common in analyzing and making decisions about programs and management. It may lead to a major change in the face of undeniable failure, but it may also occur less dramatically, almost routinely, as agency actors, motivated by professional norms, political values, the public interest, personal ambition, or agency growth and survival, seek to elaborate and improve their programs based on a consideration of past results. Yet many circumstances thwart the learning process, as we show in succeeding chapters. Thus to foster learning may be to enhance or unblock an indigenous process rather than impose a designed, generic one.
Learning theories highlight the role that administrative ideas play in the design of new programs, procedures, structures, or management techniques. These ideas may be bred from the unique experience of the agency or from views imported from outside professionals, policy elites, and management gurus. Learning in agencies, as developed here, examines the evolution of new ideas about administration as the basis for changes in policy and management. It considers the ways new ideas and beliefs are discovered and evolve to prominence within the agency. The adoption of the learning perspective also represents a shift in understanding change in government agencies from sole reliance on analyzing power, resource acquisition, the self-interest of actors, or legal mandates as the sources of change to an appreciation of the role of program and management ideas. To be sure, change occurs against a background of shifting resources, political conditions, and policy requirements, but it may also reflect the considered experiences, professional analyses, and scholarly developments in the field.

WHY STUDY NASA?

This study focuses on NASA for several reasons. First, NASA’s mission fascinates countless Americans and space buffs around the world. One indication of the widespread interest in the organization is the sheer number of books published about it. Amazon.com lists over sixteen thousand titles by or about the agency and its programs. A small sample of these books, and the approaches they take to the study of NASA, are described below. We are also among those who think the agency’s work is important to understand our place in the cosmos, and we would like to make a contribution to its success as an organization.
In addition, NASA is an important type of public organization, fitting the model of a highly complex organization with a hazardous mission. It is the kind of organization that Charles Perrow (1999) has characterized as subject to “normal accidents.” That is, its core processes are tightly coupled, their interactions are not wholly predictable, and failure is enormously costly in lives, resources, and national stature. In addition, the CAIB was critical of NASA’s failure to operate as a “high reliability organization” (LaPorte and Consolini 1991), capable of recognizing crises and shifting from a tightly programmed mode of operations to a self-organizing mode in response. We will have more to say about these models in the next chapter. Here we simply note that examining the successes and failures at NASA can help us understand the wide range of organizational processes that are critical to the safe management of increasingly complex and dangerous organizational technologies in public health, energy production, and national security.
Finally, because of the care and intensity with which NASA itself, external scholars, journalists, and government investigators including Congress and the GAO have studied the agency, we have a wealth of information about NASA. Published studies of its leaders, organizational structures, culture, history, and procedures—and how they changed over time—have been conducted by scholars, government panels, and journalists. Exhaustive accounts of testimony by NASA and contractor personnel before Congress, the Rogers Commission, and the CAIB have been published. NASA’s own extensive internal and external panel findings and reviews at many critical stages between the two accidents are also easily accessible. It is this period, from 1986 to 2003, that is of greatest interest, for it was in this time span that learning, if it occurred, would appear. It is also the time during which unlearning or forgetting might have taken place. These documents provide a record of the participants’ own words and interpretations at the time of the events in question, unclouded by later tragic events and the natural effects of the passage of time. Such contemporary insider accounts from interviews and testimony are precisely the kinds of information needed to investigate learning in large public organizations, and it is for this reason that these sources, rather than present-day interviews, form the basis of the case studies of commonly acknowledged causes of the accidents in chapters 3 through 6. The core process of organizational learning (i.e., change in the understanding of causes and effects) has been the most difficult to identify. In consequence, most scholarly efforts to investigate organizational learning focus instead on determining the availability of information that could lead to learning. Rarely do we have the documentation for a case that allows us to track the learning process itself over time. NASA is such a case.

LITERATURE ABOUT NASA

In addition to the investigative reports mentioned above, N...
