Expertise and Technology

Cognition & Human-computer Cooperation

eBook - ePub | 312 pages | English
About this book

Technological development has changed the nature of industrial production so that it is no longer a question of humans working with a machine, but rather of a joint human-machine system performing the task. This development, which started in the 1940s, has become even more pronounced with the proliferation of computers and the invasion of digital technology into all walks of working life. It may appear that the importance of human work has been reduced compared to what can be achieved by intelligent software systems, but in reality, the opposite is true: the more complex a system, the more vital the human operator's task. The conditions have changed, however: whereas people used to be in control of their own tasks, today they have become supervisors of tasks that are shared between humans and machines.

A considerable effort has been devoted to the domain of administrative and clerical work and has led to the establishment of an internationally based human-computer interaction (HCI) community at research and application levels. The HCI community, however, has paid more attention to static environments where the human operator is in complete control of the situation, rather than to dynamic environments where changes may occur independent of human intervention and actions.

This book's basic philosophy is the conviction that human operators remain the unchallenged experts even in the worst cases where their working conditions have been impoverished by senseless automation. They maintain this advantage due to their ability to learn and build up a high level of expertise -- a foundation of operational knowledge -- during their work. This expertise must be taken into account in the development of efficient human-machine systems, in the specification of training requirements, and in the identification of needs for specific computer support to human actions. Supporting this philosophy, this volume

* deals with the main features of cognition in dynamic environments, combining issues from empirical approaches to human cognition and from cognitive simulation,
* addresses the question of the development of competence and expertise, and
* proposes ways to take up the main challenge in this domain -- the design of an actual cooperation between human experts and computers of the next century.

1
Work with Technology: Some Fundamental Issues
Erik HOLLNAGEL
Human Reliability Associates
Pietro Carlo CACCIABUE
CEC Joint Research Centre, Ispra, Italy
Jean-Michel HOC
CNRS - University of Valenciennes
COGNITION AND WORK WITH TECHNOLOGY
Working with technology is tantamount to working in a joint system where in every situation the most important thing is to understand what one is supposed to do. The joint system is the unique combination of people and machines that is needed to carry out a given task or to provide a specific function. In this book the focus is on a particular group of people that are called operators. An operator is the person who is in charge of controlling the system and who also has the responsibility for the system’s performance. In the joint system, both operators and machines are necessary for function; it follows that operators need and depend on machines and that machines need and depend on their operators. The decision of how far to extend the notion of people and machines, that is, how much to include in the description of the system, is entirely pragmatic and should not worry us in this context. The important thing is to recognize that the joint system exists in an organizational and social context, and that it therefore should be studied in vivo and not in vitro. A particular consequence of this is that expertise should not be seen as the individual mastery of discrete tasks, but as a quality that exists in the social context of praxis (cf. Norros, chapter 9, this volume). Where the boundaries of the joint system are set may, for all practical purposes, be determined by the nature of the investigation and the level of the analysis. In some cases, the boundaries of the joint system coincide with the physical space of the control room. But it is frequently necessary to include elements that are distributed in space and time, such as management, training, safety policies, software design, and so forth.
It is quite common to refer simply to a man-machine system (MMS),1 hence to describe a joint system as the combination of a human and a machine needed to provide a specific function. The reason for using the singular "man and machine" rather than "people and machines" is partly tradition and partly the fact that we are very often considering the situation of the individual operator (although not necessarily an operator who is single or isolated). The term machine should not be understood as a single physical machine, for example, a lathe, a pump, or a bus, but rather as the technological part of the system, possibly including a large number of components, machines, computers, controlling devices, and so forth. An example is an airplane, a distillation column, a train, or even a computer network. Similarly, the term man should not be understood as a single person (and definitely not as a male person) but rather as the team of people necessary for the joint system to function. An example is the team of controllers in air traffic control.
The onus of understanding, of course, lies with the operator. Although it does make sense to say that the machine must, to a certain degree, understand the operator, the machine’s understanding is quite limited and inflexible — even allowing for the wonders that artificial intelligence and knowledge-based systems may eventually bring. In comparison, the operator has a nearly unlimited capacity for understanding. The operator, who in all situations understands what to do, is a de facto expert and that expertise is slowly developed through use of the system. Given enough time, we all become experts of the systems we use daily. Some of us become experts in the practical sense that we are adept at using the system. Some of us become experts in the sense that we know all about the system, how it works, what the components are, how they are put together. Some of us become experts in the sense that we can explain to others how to use the system, or how the systems really should be working.
Understanding How A System Works
If we consider working with a system that is completely understood, the use of it will be highly efficient and error free. The system can be completely understood either because it is so simple that all possible states can be analyzed and anticipated, or because its performance is stable and remains within a limited number of possible states although the system itself may be complex. (The two cases are, of course, functionally equivalent.) An example of the former is writing with a pencil. There is practically no technology involved in using the pencil, save for making sure that the lead is exposed, and there are few things that can go wrong (the lead can break, the pencil can break, the paper can tear).2 Furthermore, everyone within a given culture, say, Western civilization, knows what a pencil is for and how it should be used. (To illustrate that this is not an obvious assumption, consider for a moment how many Europeans would be able to write with ink and brush as effortlessly as the Japanese can.) Instructions are therefore not necessary and the person can use the pencil freely for whatever he wants (and problem-solving psychologists enjoy finding alternative uses). When it comes to pencils, we are all experts. We know how to use them and we can probably also explain how they should be used and why they work — at least on a practical level.3
This example is deliberately trivial, but things need only get slightly more complicated for the problems that are characteristic of work with technology to appear. Even a mechanical pencil or a ball-point pen may suffice, because the mechanism by which it is operated may not be obvious. Consider, for instance, how many different ways a mechanical pencil or a ball-point pen can be made. The mode of operation is not always obvious, and getting the device into a state where it can be operated (e.g., where the lead is exposed so the pencil can write) can present a problem, although usually a small one. A second difference is that it is no longer possible to observe directly the state of the system (e.g., how much lead is left). The same goes for ball-point pens and fountain pens; it may, for instance, be impossible to write with a ball-point pen because it is out of ink, because the ink has dried, because there is not enough friction on the surface, or because the ball has gotten stuck. Finding out the cause requires diagnosis. When it comes to mechanical pencils and ball-point pens we still all know how to write with them.4 We can usually make them work, but it is less easy to explain how they work; for instance, what the internal mechanism of a ball-point pen is. Fortunately, we do not necessarily need to know the details of the mechanism in order to use the ball-point pen. Even for such a simple machine there are, however, several ways in which the device can malfunction, thereby introducing issues of diagnosis and repair.
An example of the latter, a complex system with highly stable performance, can be found in many places. In daily life we need only think of cars and computers. In working contexts many processes spend most of the time in a highly stable region of normal performance, which may mislead the operator into thinking that he understands the system completely. In fact, it is the noble purpose of design and control engineering to ensure that the performance of the system is highly stable, whether it is a refinery, a blast furnace, a nuclear power plant, or an aircraft. If the presentation of the system states is mediated by a graphical user interface, the resulting "corrupted reality" may foster an impression of complete understanding. As long as the system performance remains stable, this does not present a problem. But the moment that something goes wrong, and in complex systems this seems to be inevitable (Perrow, 1984), the brittleness of the understanding becomes clear.
In order to work efficiently with technology we must have a reasonable degree of understanding of how we can get the technology or the machine to function. We need to be practical experts, but not theoretical ones. We can become practical experts if the machine is well designed and easy to use, that is, if the functions of the system are transparent (Hollnagel, 1988), if the information about its way of functioning and its actual state (feedback) is comprehensible, if it is sufficiently reliable to offer a period of undisturbed functioning where learning can take place, and if we are provided with the proper instructions and the proper help during this learning period. But we are not really experts if we are only able to use the system when everything works as it should, and unable to do so when something goes wrong. All these are issues that are important for the use of technology, and they are treated in this volume. And all are related to human cognition, and in particular to the ways in which we can describe and explain human cognition.
Working With Dynamic Systems
This book is about working with dynamic systems. It is characteristic of a dynamic system that it may evolve without operator intervention. Thus, even if the operator does not interact with the system, for instance, when he is trying to diagnose or plan, the state of the system may change.5 The interaction may be paced or driven by what happens in the process, and the system does not patiently await the next input from the operator. Much of the work done in the field of human-computer interaction (HCI) refers to systems which are nondynamic, systems where there is little time pressure and where there are few, if any, consequences of delaying an action (Hollnagel, 1993a). The study of MMS and the study of HCI intersect in the design of interfaces for process control. The differences between dynamic and nondynamic systems, however, mean that the transfer of concepts, methods, and results between the two disciplines should be done with caution.
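To make the distinction concrete, the following is a minimal, hypothetical sketch in Python; it is not taken from the book, and the variable names and dynamics are invented purely for illustration. It contrasts an input-driven system, whose state changes only when the user acts, with a process-paced (dynamic) system, whose state keeps evolving whether or not the operator intervenes.

    # Minimal sketch: input-driven vs. process-paced (dynamic) systems.
    # All names and dynamics are invented for illustration only.

    def input_driven_step(state, command):
        """State changes only as a direct result of a user command."""
        if command is not None:
            state["setting"] = command
        return state

    def dynamic_process_step(state, command, dt=1.0):
        """State evolves with time; an operator command only adjusts the heater."""
        if command is not None:
            state["heater"] = command
        # The process keeps moving even while the operator is diagnosing or planning:
        drift = 0.5 * state["heater"] - 0.1 * (state["temp"] - 20.0)
        state["temp"] += drift * dt
        return state

    if __name__ == "__main__":
        process = {"temp": 20.0, "heater": 1.0}
        for t in range(5):
            # No operator command at all, yet the temperature changes each step.
            process = dynamic_process_step(process, command=None)
            print(f"t={t + 1}s  temp={process['temp']:.2f}")

Even with no operator input the sketched temperature changes at every step, which is exactly what forces the operator's picture of the process to be kept up to date.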
Working with a dynamic system not only means that time may be limited, but also means that the mental representation of the system must be continuously updated. The choice of a specific action is based on the operator’s understanding of the current state of the system and his expectations of what the action will accomplish.
If the understanding is incomplete or incorrect, the actions may fail to achieve their purpose. But dynamic systems have even more interesting characteristics as follows (Hoc, 1993):
• the system being supervised or controlled is usually only part of a larger system, for example, a section of a production line or a phase of a chemical process;
• the system being controlled is usually dynamically coupled with other parts of the system, either being affected by state changes in upstream components or itself causing changes in downstream components;
• effective process control often requires that the scope is enlarged in time or space; the operator must consider previous developments and possible future events, as well as parts of the system that are physically or geographically remote from the present position;
• crucial information may not always be easy to access, but may require various degrees of inference;
• the effects of actions and interventions may be indirect or delayed, thus introducing constraints on steering and planning (a minimal sketch of such a delay follows this list); this effect may be worsened by using the corrupted reality of advanced graphical interfaces (Malin & Schreckenghost, 1992);
• the development of the process may be so fast that operators are forced to take risks and resort to high-level but inaccurate resource management strategies;
• and, finally, process evolution may be either discontinuous, as in discrete manufacturing, or continuous, as in the traditionally studied processes.
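The delayed-effect point in the list above can be illustrated with a small, hypothetical Python sketch (not from the book); the dead time, valve, and flow variables are invented for illustration. An operator action reaches the measured output only after a fixed number of time steps, so judging the action by its immediate effect is misleading and may invite over-correction.

    from collections import deque

    # Illustrative only: a flow that responds to valve commands after a dead time.
    DEAD_TIME = 3  # time steps before an operator action affects the output

    def run(commands, steps=8):
        """Simulate a flow that reacts to valve commands only after DEAD_TIME steps."""
        pipeline = deque([0.0] * DEAD_TIME, maxlen=DEAD_TIME)  # actions in transit
        flow = 0.0
        history = []
        for t in range(steps):
            command = commands.get(t, 0.0)   # valve change requested at step t
            effective = pipeline[0]          # the action issued DEAD_TIME steps ago
            pipeline.append(command)
            flow += effective
            history.append((t, command, flow))
        return history

    if __name__ == "__main__":
        # The operator opens the valve at t=0 but sees no change until t=3,
        # which may tempt a further (and ultimately excessive) correction.
        for t, cmd, flow in run({0: 1.0}):
            print(f"t={t}  command={cmd:+.1f}  flow={flow:.1f}")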
In the context of dynamic systems, expertise refers to the availability of operational knowledge that has been acquired through a prolonged experience with the plant, rather than to academic or theoretical knowledge based on first principles. The operational knowledge is strongly linked to action goals and the available resources. It contains a large amount of practical knowledge that is often weakly formalized and partly unknown to engineers, because it has arisen from situations that were not foreseen by the plant designers. Operational knowledge is structured to facilitate rapid actions rather than a complete understanding. It is concerned with technical aspects of the process as well as the lore that goes with the process environment. The latter is crucial in the supervision of processes with small time constants where resource management is of key importance.
INTENTION AND WORK WITH TECHNOLOGY
When people work with technology they usually have an intention, a formulated purpose or goal that guides and directs how they act as well as how they perceive and interpret the reactions from the machine or the process. The intentions can be provided externally, by instructions, by written procedures, or by unwritten rules, or be a product of the operator’s own reasoning — and, of course, a mixture of the two. In the former case, it is important that the operator can accept and understand the goals that are stated in the instructions and that he knows enough about the machine to be able to comply with these goals. In the latter case, it is important that the operator has an adequate understanding or model of the machine, because otherwise he may reason incorrectly, reach the wrong conclusions, and follow goals that are not appropriate or correct.
When people work together, they are usually able to grasp the intentions of each other, either implicitly through inference or explicitly through communication. This mutual understanding of intentions is actually one of the foundations for efficient collaboration. Conversely, misunderstanding another operator’s intentions may effectively block efficient collaboration and lead to unwanted consequences.
In work with technology, two problems are often encountered. The first is that operators may have problems in identifying or understanding a machine’s intentions; clearly, a machine does not have intentions in the same way that a human has. Yet the machine has been designed with a specific purpose (or set of purposes) in mind, and the functionality of the machine is ideally an expression of these purposes. Therefore, if operators understand the purpose or the intention (the intended function) of the machine, as it is expressed through the design, they may be in a better position to use it efficiently and effortlessly. This understanding can be facilitated by an adequate design of the interface, in particular an adequate presentation of system states, system goals, and system functions. Much of this understanding is, however, achieved through practice. By working with the system, the operators gradually come to understand how it works and what the designers’ intentions were. This can clearly only be achieved if the system has long periods of stable performance. Consequently, operators have little possibility of understanding the system in abnormal situations, where, paradoxically, the need for understanding may be greatest. It is therefore very important that designers realize this problem and increase the emphasis on proper interaction during contingencies.
The second problem is that the machine has no way of understanding the operator’s purpose or intentions. (We here disregard the few attempts in artificial intelligence to apply intent recognition because they are of limited practical value.) This means that the machine is unable to react to what happens except by means of predefined response patterns (cf. Hollnagel, chapter 14, this volume). Many years ago, Norbert Wiener pointed out that the problem with computers (hence with technology and machines in general) is that they do what we ask them to do, rather than what we think we ask them to do. If machines could only recognize what the operator intended or wanted them to do and then did it, life would be so much easier.
The operator’s intentions must somehow be translated into actions to achieve the goal. We must therefore be able to establish a correspondence between how we believe the machine works and how it actually works. This has also been expressed as the problem of mapping between psychological variables and physical variables. Norman (1986), in particular, has described what he called the gulf of execution and the gulf of evaluation. The gulf of execution denotes the situation that exists when the operator has a goal but does not know how to achieve it, when he does not know how to manipulate or control the machine. The gulf of evaluation characterizes the situation where the operator does not understand the measurements and indications given off by the system, when he cannot interpret or make sense of the current system state. Another way of expressing this is by saying that we must have a model of the system which enables us to come up with the appropriate sequence of actions as well as to understand the indications and measurements that the system provides. However...

Table of contents

  1. Cover
  2. Halftitle
  3. Title
  4. Copyright
  5. Contents
  6. Series Foreword
  7. Editors’ Foreword
  8. List of Contributors
  9. Chapter 1 Work with Technology: Some Fundamental Issues
  10. Section 1 Cognition and Work with Technology
  11. Section 2 Development of Competence and Expertise
  12. Section 3 Cooperation Between Humans and Computers
  13. Subject Index