Our interest in the design of machines for human use runs the full gamut of machine complexity—from the design of single instruments to the design of complete systems of machines which must be operated with some degree of coordination.
A. Chapanis, W. Garner, & C. Morgan
The quote with which we begin this chapter is from the first textbook devoted specifically to human factors, Applied Experimental Psychology: Human Factors in Engineering Design, by Alphonse Chapanis, Wendell Garner, and Clifford Morgan. Designing machines and systems, whether simple or complex, for human use was not only the central concern of their pioneering book but also the driving force for subsequent research on human factors and ergonomics over the past 69 years. The following quotation from the U.S. National Academy of Engineering in their report The Engineer of 2020, now more than 10 years old, captures the ever-increasing importance of the role of human factors in the introduction of new technologies and products:
Engineers and engineering will seek to optimize the benefits derived from a unified appreciation of the physical, psychological, and emotional interactions between information technology and humans. As engineers seek to create products to aid physical and other activities, the strong research base in physiology, ergonomics, and human interactions with computers will expand to include cognition, the processing of information, and physiological responses to electrical, mechanical, and optical stimulation. (2004, p. 14)
It is our purpose in this textbook to summarize much of what we know about human cognitive, physical, and social characteristics and to show how this knowledge can be brought to bear on the design of machines, tools, and systems that are easy and safe to use.
In everyday life, we interact constantly with instruments, machines, and other inanimate systems. These interactions range from turning on and off a light by means of a switch, to the operation of household appliances such as stoves and digital video recorders (DVRs), to the use of mobile smartphones and tablet computers, to the control of complex systems such as aircraft and spacecraft. In the simple case of the light switch, the interaction of a person with the switch, and those components controlled by the switch, forms a system. Every system has a purpose or a goal; the lighting system has the purpose of illuminating a dark room or extinguishing a light when it no longer is needed. The efficiency of the inanimate parts of this system, that is, the power supply, wiring, switch, and light bulb, in part determines whether the system goal can be met. For example, if the light bulb burns out, then illumination is no longer possible.
The ability of the lighting system and other systems to meet their goals also depends on the human components of the systems. For example, if a small person cannot reach the light switch, or an elderly person is not strong enough to operate the switch, then the light will not go on and the goal of illumination will not be met. Thus, the total efficiency of the system depends on both the performance of the inanimate component and the performance of the human component. A failure of either can lead to failure of the entire system.
The things that modern electronic and digital equipment can do are amazing. However, how well these gadgets work (the extent to which they accomplish the goals intended by their designers) is often limited by the human component. As one example, the complicated features of video cassette recorders (VCRs) made them legendary targets of humor in the 1980s and 1990s. To make full use of a VCR, a person first had to connect the device correctly to the television and cable system or satellite dish system that provided the signal and then, if the VCR did not receive a time signal from the cable or satellite, accurately program its clock. When the person wanted to record a television program, she had to set the correct date, channel number, intended start and end times, and tape speed (SP, LP, or EP). If she made any mistakes along the way, the program she wanted would not be recorded: either nothing happened, the wrong program was recorded, or the correct program was recorded for the wrong length of time (e.g., if she chose the wrong tape speed, say SP to record a 4-hour movie, she would get only the first 2 hours of the show). Because there were many points in this process at which users could get confused and make mistakes, and different VCRs embedded different command options under various menus and submenus in the interface, even someone who was relatively adept at programming recorders had problems, especially when trying to operate a machine with which he was unfamiliar.
Usability problems prevented most VCR owners from using their VCRs to their fullest capabilities (Pollack, 1990). In 1990, almost one-third of VCR owners reported that they had never even set the clock on the machine, which meant that they could never program the machine for recording at specific times. Usability problems with VCRs persisted for decades after their introduction in 1975.
Electronic technology continues to evolve. Instead of VCRs, we now have DVRs such as TiVo and streaming devices such as Roku. These products still require some programming, and, in many cases, they must be connected to other devices (such as a television set or a home Internet router) to perform their functions. This means that usability is still a major concern, even though we no longer have to worry about setting their clocks.
You might be thinking right now that usability concerns apply only to older people, who may not be as familiar with technology as younger people. However, even technologically sophisticated young adults have trouble with these kinds of devices. One of the authors of this textbook (Proctor) conducted, as a class project, a usability test of a modular bookshelf stereo with upper-level college students enrolled in a human factors class. Most of these students were unable to program the stereo’s clock, even with the help of the manual. Another published study asked college students to use a VCR. Even after training, 20% of them thought that the VCR was set correctly when in fact it was not (Gray, 2000).
Perhaps nowhere is rapid change more evident than in the development and proliferation of computer technology (Bernstein, 2011; Rojas, 2001). The first generation of modern computers, introduced in the mid-1940s, was extremely large, slow, expensive, and available mainly for military purposes. For example, in 1944, the Harvard–IBM Automatic Sequence Controlled Calculator (ASCC, the first large-scale automatic digital computer in the U.S.) was the length of half of an American football field and performed one calculation every 3–5 s. Programming the ASCC, which had nothing like an operating system or compiler, was not easy. Grace Hopper, one of the first programmers of the ASCC, had to punch machine instructions onto a paper tape, which she then fed into the computer. Despite its size, the machine could execute only simple routines. Hopper went on to develop one of the first compilers for a programming language, and in her later life she championed standards testing for computers and programming languages.
The computers that came after the ASCC in the 1950s were considerably smaller but still filled a large room. These computers were more affordable and available to a wider range of users at businesses and universities. They were also easier to program, using assembly language, which allowed abbreviated programming codes. High-level programming languages such as COBOL and FORTRAN, which used English-like language instead of machine code, were developed, marking the beginning of the software industry. During this period, most computer programs were prepared on decks of cards, which the programmer then submitted to an operator. The operator inserted the deck into a machine called a card reader and, after a period of time, returned a paper printout of the run. Each line of code had to be typed onto a separate card using a keypunch. Everyone who wrote programs during this era (such as the authors of this textbook) remembers having to go through the tedious procedure of locating and correcting typographical errors on badly punched cards, dropping the sometimes huge deck of cards and hopelessly mixing them up, and receiving cryptic, indecipherable error messages when a program crashed.
In the late 1970s, after the development of the microprocessor, the first desktop-sized personal computers (PCs) became widely available. These included the Apple II, Commodore PET, IBM PC, and Radio Shack TRS-80. These machines changed the face of computing, making powerful computers available to everyone. However, a host of usability issues arose when computers, once accessible only to a small, highly trained group of users, became accessible to the general public. This forced the development of user-friendly operating system designs. For example, users interacted with the first PCs’ operating systems through a text-based, command-line interface. This clumsy and unfriendly interface restricted the PC market to the small number of users who wanted a PC badly enough to learn the operating system commands, but development of a “perceptual user interface” was underway at the Xerox Palo Alto Research Center (PARC). Only 7 years after introducing the Apple II, Apple presented the Macintosh, the first commercially successful PC to use a window-based graphical interface. Such interfaces are now an integral part of any computer system.
Interacting with a g...