CHAPTER 1
Beginnings
Introduction
This is a short book about taking the step from preparation of a workflow map to producing a dynamic model of one or more processes. An appropriate place to start a volume like this one is to examine the state of our knowledge. We can begin construction of a dynamic model of our company or organization from nearly any state of prior preparation: we may have done a preliminary workflow analysis and decided that more thorough investigation of a process is needed, we may have already changed the workflow process because there was a glaring need to do so, or we may have done nothing. Ideally, I would prefer the situation where we have done a Workflow Mapping and Analysis (WFMA) project using the Kmetz method, of course, but that often will not be the case.
In most of what I will present in the following chapters, such a WFMA project will be assumed to have been done, so that we have some preliminary knowledge of how a process is constructed and what role tacit knowledge plays in it. The absence of that information, however, does not prevent moving ahead with modeling; it only prevents us from having a broader range of variables to investigate at the beginning of our effort. Ultimately, we will be very likely to end up with the same understanding of the system.
This chapter will focus on several examples of early modeling efforts that I either learned of, was involved in, or both. In reviewing these, I hope to persuade the reader that there is great value in system dynamics (SD) modeling, and that the practice of it can yield great rewards.
Much of what is presented in this volume draws heavily on other sources. I have chosen this approach to achieve two results: one is to make some of the most important points that other authors have made in very succinct terms; the other is to acknowledge the significant work that has already been done in the field, to encourage further study and mastery of it by the reader.
The functioning of the human mind has long been a source of fascination and study by many observers. One of the most important things learned in this quest is that humans tend to use a "simplified model of reality" in conceptualizing the world.1 This is not necessarily a bad thing, in that it spares us from having to evaluate every sensory input we receive from the environment around us, and thus literally prevents us from experiencing "paralysis by analysis." But a cost of this simplification is that we tend to be very poor at conceiving and dealing with more complex models, a limitation that dynamic modelers have noted many times;2 we will return to this point presently.
Early Simulation Modeling Awareness and Experience
The Curious Case of the Old Saturday Evening Post
One of my first and most influential encounters with SD modeling came in late 1976, when I discovered an article written a few months earlier by Roger Hall.3 This was a major breath of fresh air at the time, since I had become quite disillusioned with the status of the academic literature, and this article was in an academic journal, Administrative Science Quarterly.4 In this paper, Hall described his research on the demise of the old Saturday Evening Post, a magazine which was once among the top three magazines in the United States, had a wide following and readership, and traced its origins to the Pennsylvania Gazette, first published by Benjamin Franklin. My parents had a subscription to it for many years, and I remember reading it as a boy.
His initial observations about the demise of not only the Post but also several other similar magazines were enough to hook me:
Considerable interest has been generated in the plight of the large publishing firms. For instance, the Curtis Publishing Company was eclipsed in 1969 with the death of the Saturday Evening Post after a series of panic measures that included canceling subscriptions to reduce production costs (author references). The Cowles Communications Inc. announced a planned reduction in circulation and advertising rates of the Look magazine following a period of financial loss (author reference) and stopped publication in 1971. A similar move was made by the Time Inc. over its magazine, Life, following a poor reported financial performance (author reference) before Life also was discontinued. Conflicting reasons have been given for the demise of these magazines: competition with other media, such as TV, sharply rising printing and postal costs, substantial increases in the cost of acquiring additional readers, lost touch with readers, erratic behavior of advertisers, and plain bad management (author references).
At the time of its initial crisis, each of these magazines reported its highest circulation and largest revenue. There must be an explanation for such a paradoxical situation wherein a record circulation and revenue is associated with poor profit performance. It is hardly credible that a large number of the leading magazines were being mismanaged simultaneously. These magazines continued to grow in spite of keen competition with other large circulation magazines and other mass communications media until they reached a critical point in their history. This suggests that the pathology of magazine publishing is, perhaps, a complex phenomenon.
Hall's investigation of the demise of the magazine would not have been possible without SD simulation software. In the next several pages of his paper, he described the development of his model and the use, at the time, of Dynamo, a dedicated SD programming language, to assemble it. There were four clusters of variables in the model: (1) accounting information flows; (2) measures of performance; (3) managed variables; and (4) relationships with the environment. Interlinking all of these were feedback loops, and as Hall noted, "Complex systems with many feedback loops can give rise to counter-intuitive situations, whereby the intuitive judgmental decisions made by people in the system may, on occasion, not correct an out-of-control situation and may even make it worse ...." (Readers of Volume I of this series should immediately recognize some of these terms and concepts.) He then went on to test the model both by validating the assumptions the model was built on and by demonstrating that it could produce measures of performance consistent with the past history of the magazine for the last 20 years of its life.
We should note that many of the variables in the model, in fact most of them, were not under the control of management decisions. Management knew the measures of performance, which were (1) relative growth of revenue; (2) the profit margin; and (3) the relative growth of readers. Management could also control its managed variables, which were (1) the subscription rate charged to subscribers; (2) the advertising rate charged to advertisers; and (3) the circulation promotion expense. Two additional variables in this group were largely determined by standard industry practice, and therefore not under the immediate control of top management: (4) the advertising selling expense, and (5) the magazine volume in pages per year. The latter two variables could be changed if management felt the need to do so, but changes would likely trigger retaliatory moves from the industry, and thus they were treated as highly constrained.
Having created and validated his model, Hall then embarked on a series of experiments to see what would happen to the Post under different conditions for a simulated period of 20 years of operations. These experiments yielded a series of unsustainable declines in the profit margin, with increasing rates of decline after 10 simulated years. In all cases except the final experiment, where the lessons learned from the first experiments were applied as management decisions, the magazine failed.
What was responsible for the failures? In short, what was discovered was a positive ("hot") feedback loop running from the variables influencing the number of pages in the magazine, and hence its cost, to the number of readers; the "hot" designation means that the loop feeds back upon itself, so that change is exponential, not linear: growth in readers caused rapid and increasing growth, and likewise, decline in readers led to rapid and increasing decline. Neither this feedback loop nor its relationships with the managed variables that affected it was directly visible to management. Further, since the loop caused exponential change, immediate adjustments to the managed variables had only limited effect on the performance measures; the much greater changes were lagged and did not take effect (or become visible) until years later. Those lagged changes then came as a great surprise, usually leading to an overreaction by management, unintentionally setting the stage for the next disaster.
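To make this mechanism concrete, here is a minimal sketch in Python; it is emphatically not Hall's Dynamo model, just the bare mechanism of one reinforcing loop plus a reporting lag, with an assumed 10 percent loop gain and an assumed three-year delay in what management sees.

def simulate(gain: float, lag: int = 3, years: int = 12) -> None:
    history = [1.0]  # reader stock, normalized to year 0
    for year in range(1, years + 1):
        # Reinforcing loop: the change is proportional to the level,
        # so the trajectory is exponential, not linear.
        history.append(history[-1] * (1.0 + gain))
        # Management sees a performance figure that is `lag` years stale.
        visible = history[max(0, year - lag)]
        print(f"year {year:2d}: actual {history[-1]:.2f}  visible {visible:.2f}")

simulate(gain=-0.10)  # in a decline, the visible figure understates the damage

Running this with a negative gain shows why the collapse came as a surprise: the figure management reacts to is always several years behind a curve that is steepening all the while.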
In his final experimental run, Hall held the subscription rate and circulation promotion expense constant, but adjusted the advertising rate each year so that the rate per thousand readers remained constant. This is not an intuitive strategy at all; it requires a deeper understanding of the complex relationships in the industry, the nature of the hot feedback loop, and the time lags inherent in the impact of changes to the managed variables on the performance measures. The net effect is a profit-maximizing system behavior rather than a revenue-maximizing one, and had this strategy been followed, the Post might have survived.
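The flavor of that final run can also be sketched in code. The toy model below is my own invention, not Hall's: I assume advertisers buy pages according to the price they pay per thousand readers (with an assumed elasticity of 1.5), readership drifts down 2 percent per year from outside competition and also follows the page count, and the "naive" policy raises rates whenever revenue falls. Every number here is illustrative.

def run(policy: str, years: int = 20) -> float:
    readers, ad_rate = 1_000_000.0, 400.0  # reader stock and managed variable
    base_readers, base_rate, base_pages = readers, ad_rate, 100.0
    base_price_k = base_rate / (base_readers / 1000.0)
    prev_revenue = None
    for _ in range(years):
        price_k = ad_rate / (readers / 1000.0)  # cost per 1,000 readers to an advertiser
        pages = base_pages * (base_price_k / price_k) ** 1.5  # assumed demand curve
        revenue = ad_rate * pages
        # Readers follow content (pages) on top of a secular 2 percent yearly decline.
        readers *= 0.98 * (1.0 + 0.05 * (pages / base_pages - 1.0))
        if policy == "naive":
            if prev_revenue is not None and revenue < prev_revenue:
                ad_rate *= 1.05  # chase lost revenue with higher rates
        else:  # "indexed": hold the rate per thousand readers constant
            ad_rate = base_rate * readers / base_readers
        prev_revenue = revenue
    return revenue

for policy in ("naive", "indexed"):
    print(f"{policy:8s} final annual ad revenue: ${run(policy):,.0f}")

Under these assumptions the naive policy sets off exactly the spiral Hall described, each rate increase driving away the advertisers and readers that revenue depends on, while indexing the rate to circulation lets revenue decline only as gracefully as readership itself.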
This was not a one-time event. Hall used his methodology to investigate policy failures more broadly, and specifically in a curling club in Manitoba, Canada.5 In the latter case, his work was able to illustrate the causes of decline in club membership, suggest options for changes in club rules and memberships, and became the basis for the club managers to make significant changes that resulted in the revitalization and survival of the club. The detailed report he produced is required reading for all new club directors.
A fundamental point, and one that I will repeat many times, is that one of the major purposes of modeling is to learn from it. Hall's work, and that of many others in SD modeling, shows the potential for such modeling to enable that result. It does not guarantee that management will make the right policy decisions; the right to fail is not excluded by such efforts. But if we assume that most managers who take on control of an organization do not intend to fail, modeling becomes a powerful tool to help them learn how to achieve that.
The problem with the development of SD modeling software, up to this point, was that while it had become somewhat easier to master, it still required major effort to learn, and was beyond the reach of anyone not interested in becoming a specialist. I had the good fortune to meet Roger Hall at a conference in West Berlin in 1978, and we talked about that; he admitted that while the Dynamo language made programming SD simulations easier, it was not, in absolute terms, "easy." In fact, truly "easy" may be too much to expect, but we have come much closer in the nearly 40 years since that time. We will expand on this issue presently.
Modeling the Naval Avionics Maintenance Workflow
My next major engagement with SD modeling came as an outgrowth of my U.S. Naval Air Systems Command (NAVAIR) study, which began in 1976. Described in more detail in Volume I, that study was concerned with the apparent failure of the Navy's Versatile Avionics Shop Test (VAST) system to meet its performance expectations at sea and keep the Cold War Naval Air Force in a state of operational readiness. When aircraft carriers went to sea and began running regular air operations, the on-board stocks of major avionics components that VAST was supposed to maintain were instead quickly depleted, meaning that a large fraction of the air wing was not operational (often approaching 50 percent on some ships). The VAST shops began to accumulate large backlogs of repairs, and it seemed they could not keep up with demand. However, when examined closely, VAST was found to be not only meeting but exceeding its performance specifications: whether evaluated on Operational Availability (uptime), Mean Time Between Failure, Mean Time Between Unscheduled Maintenance Actions, Mean Time to Repair, or any of a large number of more specific measures, VAST was working well. Not only was this a major conundrum, but it was a grave concern to the Navy.
I became engaged in this study during my time with the University of Southern California's (USC) Systems Management Center, and my initial role was to evaluate the Navy's management practices in the VAST repair cycle. Having studied a number of works on systems theory and information processing in organizations,6 I began by analyzing the flows of information through the system, in addition to practices of general management.7 NAVAIR managers soon realized that the problem was bigger and more complex than they had realized, and my role was changed to that of a "floater" who had license to go anywhere in the maintenance system to investigate possible causes of systemic problems.
In this capacity I soon met John Gray, who was not only a graduate of the USC program I taught in, but also in charge of a particular unit at the Patuxent River Naval Air Station. That unit was named the Aircraft Intermediate Maintenance Support Office (AIMSO), so named because it sounded rather like a third-tier staff office that had little role in operations and was therefore not a threat to anyone's career or other aspirations. In fact, AIMSO was a creation of the Chief of Naval Operations (CNO), who had become so frustrated with the conflicting reports on VAST that he basically trusted no one in the line Navy; AIMSO was therefore tasked with getting to the bottom of the problem, reported directly to the CNO, and was exclusively accountable to him. John Gray and I were both believers in systems theory and immediately became simpatico.
A large part of my work during this period was to statistically analyze data on the functioning of the VAST shops. The Navy had never done this, and had never developed standard times for individual component repair; thus, there were no data to allow either relative or absolute comparisons to a standard. Through the statistical analysis, we found that many of the components were processed in a relatively short time, but that for a variety of reasons some took much longer, and several components were consistently "bad actors" in terms of ease of troubleshooting and repair. The immediate benefit was that we were able to set limits on the time that a component would be allowed to "cook" on a VAST station; if repair was not effected in that time, the component was removed from the station and another put in its place.
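A hedged sketch of that kind of analysis, with invented numbers rather than the Navy's actual data or policy, might look like the following: given observed repair times per component, set the "cook" limit at a high percentile, so that routine repairs finish while the consistent bad actors are pulled and replaced.

import statistics

# Observed repair hours per component type (illustrative figures only).
repair_hours = {
    "stable_unit": [2.1, 2.4, 3.0, 2.8, 2.2, 3.5, 2.6, 2.9, 2.5, 3.1],
    "bad_actor":   [4.0, 5.5, 4.8, 21.0, 5.1, 4.4, 19.5, 6.0, 4.9, 22.3],
}

for part, hours in repair_hours.items():
    # Assumed rule: the time limit is the 90th percentile of observed times.
    limit = statistics.quantiles(hours, n=10)[-1]
    print(f"{part}: median {statistics.median(hours):.1f} h, "
          f"cook limit {limit:.1f} h")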
In my "floater" role I had found several instances of both Navy and civilian personnel taking initiatives to resolve informational problems that had been found to create difficulties in managing the VAST workflow.8 While working with John Gray and AIMSO, we were able to communicate directly with these individuals and begin to assemble what they had learned into a larger model that could encompass all VAST operations and, once that was achieved, optimize them. While I did not have to program any of my inputs to this system, I was able to add what I had found, and some of what I suspected, to the model.
Many of those inputs were the kind of system feedbacks that Hall had found in his work with magazine publishing, that is, feedback loops that were not fully understood or appreciated in terms of their effects on performance. Given the objectives of the AIMSO work, these were mostly focused on small and subtle feedbacks that collectively had significant impact on VAST production. For example, lengthy item codes for replacement parts had to be entered onto a five-part shop data-collection form; these could be entered mistakenly, or simply be illegible for those who had to work with part five. The AIMSO model automated this process and accumulated an exhaustive listing of parts used for all VAST repairs, so that if a sailor mistakenly transposed two figures for a part, the terminal would not accept the entry and would force correct data entry; this had immediate benefits in terms of properly maintaining spare-parts inventories as well as correct data on which parts were actually involved in a component failure. In another case, tracking personnel training through the model to ensure that every shift was staffed with personnel having both VAST maintenance skills and component repair skills reduced the already low amount of VAST downtime even further, which kept production up; it also helped identify training needs and improved the overall skill level of the VAST shop. Discovery of one feedback often led to successive ones: personnel whose training had not been successful were identified and retrained to prevent errors of component diagnosis, maintenance diagnosis, or even things as mundane as the care of cables. These effects were cumulative for both the carriers and the shore sites, since carrier crews regularly rotated through the U.S. shore-based VAST shops when a cruise ended; thus still another feedback that enhanced VAST performance was uncovered and exploited. Success quickly fed back to create further success.
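The terminal-side check described above is simple to illustrate; the part codes and catalog below are invented, but the principle is the one AIMSO applied: validate each entry against the master parts list at the moment of entry, so a transposed digit is rejected on the spot instead of corrupting inventory and failure data.

# Master catalog of valid part codes (invented for illustration).
VALID_PARTS = {
    "5841-00-123-4567",
    "6625-00-987-0012",
}

def enter_part(code: str) -> str:
    """Accept a part code only if it exists in the master catalog."""
    if code not in VALID_PARTS:
        raise ValueError(f"unknown part code {code!r}; please re-enter")
    return code

enter_part("5841-00-123-4567")      # accepted
try:
    enter_part("5841-00-132-4567")  # two digits transposed: rejected at entry
except ValueError as err:
    print(err)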
While the effect of many of these individual feedbacks was small, their cumulative effect was major. By the end of 1983, the maintenance-workflow model that resulted was not only able to prevent VAST from becoming overloaded, but enabled it to expand its workload, so much so that when President Reagan decided to build five new aircraft carriers, it was not necessary to restart the VAST production line. VAST stations were taken from other sites and carriers where they had become redundant and placed on the new carriers; this saved, in 2016 terms, between USD 800 million and USD 1.2 billion.9
Programming, however, was still a specialist activity in the early 1980s, and a task of this magn...