Modern work, with its increasing reliance on automation to support human action, is focusing attention on the cognitive aspects of work that are not accessible to direct observation. For example, it is obvious that the physical acts of button pushing that occur in the command center of a modern ship are of less intrinsic importance than the mental decision processes executed via those actions. The mental processes organize and give meaning to the observable physical actions. Attempts to analyze a task like air traffic control with traditional behavioral task analysis techniques made the shortcomings of those techniques strikingly clear (Means, 1993). Starting in the 1960s, the cognitive revolution in academic psychology has both increased our awareness of the extensive cognitive activity underlying even apparently simple tasks and provided research techniques and theories for characterizing covert cognition. Hence, the term cognitive task analysis is coming into use to describe a new branch of applied psychology. The relative newness of this enterprise is evidenced by the fact that, as of this writing, a search of the entire PsycINFO database with the term yielded only 28 items, some irrelevant, and a search in the Science Citation Index yielded 30 items. The high current interest in cognitive task analysis is evidenced by recent literature review efforts undertaken by a British aerospace company (confidential) and by the French military (Doireau, Grau, & Poisson, 1996) as well as the NATO Study Group effort reported here.
Analyses of jobs and their component tasks may be undertaken for a wide variety of purposes, including the design of computer systems to support human work, the development of training, or the development of tests to certify job competence. An emerging frontier of modern task analysis is the analysis of entire working teams' activities. This is done for purposes such as the allocation of responsibilities to individual humans and cooperating computer systems, often with the goal of reducing the number of humans who must be employed to accomplish the team function. Given the purposes and constraints of particular projects, several (cognitive) task analysis approaches merit consideration. Savvy customers and practitioners of cognitive task analysis must know that one approach will not fit all circumstances. On the other hand, a thorough-going cognitive task analysis may repay the substantial investment required by proving applicable to purposes beyond the original intent. For example, Zachary, Ryder, and Hicinbothom (chap. 22, this volume) analyzed the tasks of the AEGIS antiair warfare team in order to build an artificially intelligent training system, but these same analyses are being used to guide the design of advanced work stations and new teams with fewer members.
This book is the ultimate product of a NATO study group aiming to capture the state of the art of cognitive task analysis. The intent is to advance it toward a more routine engineering discipline, one that could be applied reliably by practitioners not necessarily educated at the doctoral level in cognitive psychology or cognitive science. To that end, two major activities were undertaken. One was a review of the state of the art of cognitive task analysis, focusing on recent articles and chapters claiming to review cognitive task analysis techniques. This effort produced a bibliographic resource appearing as chapter 28 in this book. We hope that this chapter gives sufficient information to help students and other readers decide which of these earlier contributions to the field they should read for their particular purposes. The second major activity of the NATO study group was an international workshop intended to provide an up-to-date snapshot of cognitive task analyses, emphasizing new developments. Invitations were extended to known important contributors to the field. The opportunity to participate was also advertised widely through electronic mailing lists to capture new developments and ongoing projects that might not be known to the study group members organizing the workshop. This book is largely the product of that workshop, sharing its insights into the state of the art of this new field. This introduction provides an overview of these two activities. First, we sketch a prototypic cognitive task analysis, based on results from the NATO study group. Next, we describe the organization of the chapters in this book that resulted from the international workshop.
The Prototypic Cognitive Task Analysis Process as Seen in Prior Literature
Ironically, the cognitive analysis of tasks is itself a field of expertise like those it attempts to describe. Reviewing recent discussions of cognitive task analysis reveals that the explicitly stated state of the art lacks specification of just those kinds of knowledge most characteristic of expertise. A large number of particular, limited methods are described repeatedly. However, little is said about how these can be effectively orchestrated into an approach that will yield a complete analysis of a task or job. Little is said about the conditions under which an approach or method is appropriate. Clearly, the relevant conditions that need to be considered include at least the type of task being analyzed, the purpose for which the analysis is being done (human-computer interaction design, training, testing, expert system development), and the resources available for the analysis, particularly the type of personnel available to do the analysis (cognitive scientists, cognitive psychologists, educational specialists, subject-matter experts). The literature is also weak in specifying the way in which the products of task analysis should be used in designing either training or systems with which humans will interact. The prior literature on cognitive task analysis is also limited by a focus on the tasks of individuals, almost exclusively existing tasks for which there are existing task experts.
Nevertheless, the literature review effort did, within these limits, provide the image of a prototypic ideal case of the cognitive task analysis process, as it might be when unhampered by resource limitations. What emerges as the ideal case, assuming that resource limitations are not a problem? Although the answer to this question may vary somewhat, depending on the purpose for which the analysis is being done, we set that consideration aside for the moment and assume that the purpose is training and associated proficiency measurement. Several of the articles we reviewed are strong in their presentation of an inclusive recommended approach to cognitive task analysis (e.g., Hall, Gott, & Pokorny, 1995; Hoffman, Shadbolt, Burton, & Klein, 1995; Means, 1993; DuBois & Shalin, 1995). In the present volume, the following chapters also present reasonably inclusive descriptions of the process: chapter 3 by DuBois and Shalin, chapter 6 by Flach, and chapter 9 by Seamster, Redding, and Kaempf.
Preliminary Phase
One should begin a cognitive task analysis with a study of the job or jobs involved to determine what tasks merit the detailed attention of a cognitive task analysis. Standard approaches from personnel psychology are appropriate for this phase of the effort, using unstructured interviews and/or questionnaires to determine the importance, typicality, and frequency of tasks within job performance. Hall et al. (1995) discussed this preliminary phase, as did DuBois and Shalin (1995) with somewhat more methodological detail. DuBois and Shalin also pointed out the importance of focusing on the tasks or problems within general tasks that discriminate more expert performance from routine performance, even though these may not be high-frequency events. Klein Associates' approach seems to embody the same view, with an emphasis on gathering data about past critical incidents in experts' experience.
Depending on the availability of written materials about the job or task, such as existing training materials, the first step for those responsible for the analysis probably should be to read those materials to gain a general familiarity with the job or task and a knowledge of the specialized vocabulary (this is referred to as bootstrapping by Hoffman et al. [1995], and table-top analysis by Flach [chap. 6, this volume]). The major alternative is to begin with informal, unstructured interviews with persons who have been identified as experts. In the ideal case, the task analysis becomes a team effort among one or more experts in cognitive task analysis and several subject-matter experts. Of course, it is important to obtain the time, effort, and cooperation of experts who are in fact expert. Hall et al. (1995) discussed the issue of the scarcity of true experts and the selection of appropriate experts in moderate detail. Hoffman et al. (1995) were also concerned with the gradations of expertise. Articulate experts with recent experience in both performing and teaching the skill are particularly useful. For example, the MYCIN (Buchanan & Shortliffe, 1984) expert was renowned for his ability to teach medical diagnosis.
It is also true that not just anyone is suitable for acting as a cognitive task analyst, not even just anyone who is educated in cognitive psychology and cognitive science. Analysts must have the social skills to establish rapport with the subject-matter experts (SMEs), sometimes across the barriers of different social, cultural, and economic backgrounds. If doing unstructured or even structured interviews, they must be verbally adept to adapt to the changing circumstances of the interview. They must be intelligent, quick learners because they have to learn a great deal about the task to analyze it effectively. Hoffman et al. (1995) and Crandall, Klein, Militello, and Wolf (1994) discussed some of these issues about the requirements for cognitive task analysts. Forsythe and Buchanan (1993) also appears to be a reference of interest on these points. There is also a good deal of literature from the expert systems community dealing with the practicalities of interviewing and with requirements that both the knowledge engineer and the expert must meet (e.g., Firlej & Hellens, 1991; McGraw & Harbison-Briggs, 1989; Meyer & Booker, 1991; Waterman, 1986).
Identifying Knowledge Representations
A major goal for the initial unstructured interviews with the SMEs should be to identify the abstract nature of the knowledge involved in the task, that is, the type of knowledge representations that need to be used. This can order the rest of the task analytic effort. This point is not explicit in the literature, but the more impressive, convincing approaches are organized around a knowledge representation or set of knowledge representations appropriate for the job or task. For example, DuBois and Shalin (1995, chap. 3, this volume) use a goal/method graph annotated with additional information about the basis for method selection and the explanation of the rationale or principles behind the method. Less explicitly, the PARI method (Hall et al., 1995) gathers essentially the same information supplemented by information about the experts' mental organization of device structure and function. Crandall et al. (1994) advocated collecting mental models of the task and of the team context of work, as well as of the equipment. For eliciting knowledge about how a device or system works, Williams and Kotnour (1993) described Miyake's (1986) constructive interaction. Benysh, Koubek, and Calvez (1993) proposed a knowledge representation that combines procedural information with conceptual information. Similarly, in ongoing work, Williams, Hultman, and Graesser (1998) have collaborated on ways to combine the representations of declarative and procedural knowledge.
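To make the goal/method style of representation concrete, the sketch below shows one possible encoding of such an annotated graph. It is illustrative only: the class and field names (Goal, Method, selection_conditions, rationale) and the troubleshooting fragment are our own inventions, not the notation used by DuBois and Shalin or by the PARI method.

```python
# Illustrative sketch only: one way to encode a goal/method graph in which each
# method is annotated with the conditions favoring its selection and the
# rationale behind it. All names and the example task are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Method:
    name: str
    steps: List[str]                  # observable actions or references to subgoals
    selection_conditions: List[str]   # circumstances under which an expert picks this method
    rationale: str                    # principle explaining why the method works
    subgoals: List["Goal"] = field(default_factory=list)


@dataclass
class Goal:
    name: str
    methods: List[Method] = field(default_factory=list)


# A fragment of a hypothetical equipment-troubleshooting task.
isolate_fault = Goal(
    name="Isolate the faulty component",
    methods=[
        Method(
            name="Split-half testing",
            steps=["Pick a test point near the middle of the signal path",
                   "Measure the signal", "Discard the half that checks good"],
            selection_conditions=["Signal path is roughly linear",
                                  "Test points are accessible"],
            rationale="Halving the candidate set on each test minimizes the expected number of tests.",
        ),
        Method(
            name="Check the most frequent failure first",
            steps=["Inspect the component with the highest historical failure rate"],
            selection_conditions=["Failure statistics are known", "Time pressure is high"],
            rationale="Known prior probabilities make the common failure the cheapest first check.",
        ),
    ],
)

# Traversing the graph recovers the procedural decomposition (goal -> method -> steps)
# together with the conceptual annotations (conditions and rationale).
for method in isolate_fault.methods:
    print(isolate_fault.name, "->", method.name, "|", method.rationale)
```

The point of such annotation is that a single structure carries both the procedural decomposition (goals, methods, steps) and the conceptual information (when a method applies and why it works) that the analyses described above elicit.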
Semantic networks are probably overrepresented in reviews of knowledge acquisition methods relative to their actual utility. Although measures of conceptual relatedness or organization are sensitive to growth in expertise, they may actually be derived from more complex knowledge organizations in the experts' minds, such as those mentioned earlier that integrate procedural and declarative knowledge. For example, it might be a mistake to attempt to directly train the conceptual organizations one deduces from studies of experts. However, semantic networking or clustering techniques have been successfully used to structure more effective computer interfaces (Patel, Drury, & Shalin, 1998; Roske-Hofstrand & Paap, 1986; Vora, Helander, & Shalin, 1994). As we gain experience with cognitive task analysis, it may become possible to define a taxonomy of tasks that, in effect, would classify tasks into types for which the same abstract knowledge representations and the same associated knowledge-elicitation methods are appropriate. However, we should always keep in mind the possibility that the particular task of concern may involve some type of knowledge not in the stereotype for its assigned position in the classification scheme.
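As a rough illustration of how such relatedness data can be put to work, the following sketch turns a small set of hypothetical pairwise relatedness ratings into a thresholded network and reads off its connected components as candidate interface groupings. It is a deliberate simplification for exposition, not the Pathfinder algorithm or any of the specific clustering techniques cited above, and the concepts, ratings, and threshold are invented.

```python
# Illustrative sketch only: keep links whose rated relatedness exceeds a threshold,
# then group concepts by the connected components of the resulting network.
concepts = ["altimeter", "airspeed indicator", "radio", "transponder"]

# Hypothetical pairwise relatedness ratings on a 0-1 scale, averaged over experts.
relatedness = {
    ("altimeter", "airspeed indicator"): 0.85,
    ("altimeter", "radio"): 0.20,
    ("altimeter", "transponder"): 0.15,
    ("airspeed indicator", "radio"): 0.25,
    ("airspeed indicator", "transponder"): 0.10,
    ("radio", "transponder"): 0.80,
}

THRESHOLD = 0.5
links = {pair for pair, rating in relatedness.items() if rating >= THRESHOLD}


def linked(a, b):
    """True if the thresholded network contains a link between concepts a and b."""
    return (a, b) in links or (b, a) in links


# Incrementally merge concepts into connected components; each component is a
# candidate grouping (e.g., for menu or display organization).
clusters = []
for concept in concepts:
    touching = [c for c in clusters if any(linked(concept, other) for other in c)]
    merged = {concept}.union(*touching)
    clusters = [c for c in clusters if c not in touching] + [merged]

print(links)     # retained links, e.g. ('altimeter', 'airspeed indicator') and ('radio', 'transponder')
print(clusters)  # e.g. [{'altimeter', 'airspeed indicator'}, {'radio', 'transponder'}]
```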
Knowledge-Elicitation Techniques
Having identified the general framework for the knowledge that has to be obtained, the analysts can then proceed to employ the knowledge-elicitation techniques or methods discussed in the articles reviewed. Structured interviews can be used to obtain information, an approach that is well discussed in Hoffman et al. (1995), Randel, Pugh, and Reed (1996), and Crandall et al. (1994). The extreme of the structured interview is the computer-aided knowledge-elicitation approach, discussed in reviews by Williams and Kotnour (1993) and Cooke (1994) and exemplified by Shute's (chap. 5, this volume) DNA cognitive task analysis software and Williams' (chap. 11, this volume) CAT and CAT-HCI tools. The latter structure and support a generalized version of a GOMS-style analysis, generating much the same sort of goal/method representation recommended by DuBois and Shalin. Of course, these interviews and other methods must be focused on an appropriate representative set of problems or cases previously identified, as alluded to earlier. The PARI method (Hall et al., 1995) featu...