Human Performance and Situation Awareness Measures

Valerie Jane Gawron

About This Book

This book was developed to help researchers and practitioners select measures for the evaluation of human/machine systems. The book begins with an overview of the steps involved in developing a test to measure human performance. This is followed by a definition of human performance and a review of human performance measures. A further section defines situational awareness and reviews situational awareness measures. For both the performance and situational awareness sections, each measure is described along with its strengths and limitations, data requirements, threshold values, and sources of further information. To make this reference easier to use, extensive author and subject indices are provided.

Features

  • Provides a short engineering tutorial on experimental design
  • Offers readily accessible information on human performance and situational awareness (SA) measures
  • Presents a general description of each measure
  • Covers data collection, reduction, and analysis requirements
  • Details the strengths and limitations of each measure, including any proprietary rights or restrictions

Information

Publisher: CRC Press
Year: 2019
ISBN: 9780429671272
Length: 190 pages
Language: English
1 Introduction
Human factors specialists, including ergonomists, industrial engineers, engineering psychologists, human factors engineers, and many others, continually seek better (more efficient and effective) ways to characterize and measure the human element as part of the system, so that we can build trains, planes, automobiles, process control stations, and other systems with superior human/system interfaces. Yet the human factors specialist is often frustrated by the lack of readily accessible information on human performance, workload, and situational awareness (SA) measures. To fill that void, this book was written to guide the reader through the critical process of selecting the appropriate measures of human performance, workload, and SA for objective evaluations.
There are two types of evaluations of human performance. The first is subjective measurement, in which humans provide opinions through interviews and questionnaires, or observers rate others’ behavior. There are several excellent references on these techniques (e.g., Meister, 1986). The second is the experimental method, for which there are also several excellent references (e.g., Keppel, 1991; Kirk, 1995). The experimental method is the focus of this book.
Chapter 1 is a short tutorial on experimental design. For the tutorial, the task of selecting between aircraft cockpit displays is used as an example. For readers familiar with the general principles of experimentation, this should be simply an interesting application of academic theory. For readers who are less familiar, it should provide a good foundation for understanding why it is so important to select the right measures when preparing to conduct an experiment.
Chapter 2 describes measures of human performance and Chapter 3 describes measures of SA. Each measure is described, along with its strengths and limitations, data requirements, threshold values, and sources of further information. To make this desk reference easier to use, extensive author and subject indices are provided.
1.1 The Example
An experiment is a comparison of two or more ways of doing things. The “things” being varied are called independent variables. The “ways” of doing them are called experimental conditions. The measures used for comparison are dependent variables. Designing an experiment requires defining the independent variables, developing the experimental conditions, and selecting the dependent variables. Ways of meeting these requirements are described in the following steps.
1.1.1 Step 1: Define the Question
Clearly define the question to be answered by the results of the experiment. Let’s work through an example. Suppose a moving map display is being designed and the lead engineer wants to know if the map should be designed as track up, north up, or something else. He comes to you for an answer. You have an opinion but no hard evidence. You decide to run an experiment. Start by working with the lead engineer to define the question. First, what are the ways of displaying navigation information, that is, what are the experimental conditions to be compared? The lead engineer responds, “Track up, north up, and maybe something else.” If he cannot define something else, you cannot test it. So now you have two experimental conditions: track up versus north up. These conditions form the two levels of your first independent variable, direction of map movement.
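To make these terms concrete, the design so far can be written down as a small data structure. The sketch below is illustrative only; the class and field names are assumptions, not anything prescribed by the book.

```python
from dataclasses import dataclass


@dataclass
class IndependentVariable:
    """An independent variable and its levels (the experimental conditions)."""
    name: str
    levels: list[str]


# Step 1 yields one independent variable with two levels to compare.
map_movement = IndependentVariable(
    name="direction of map movement",
    levels=["track up", "north up"],
)
print(map_movement)
```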
1.1.2 Step 2: Check for Qualifiers
Qualifiers are independent variables that qualify or restrict the generalizability of your results. In our example, an important qualifier is the type of user of the moving map display. Will the user be a pilot (who is used to track up) or a navigator (who has been trained with north-up displays)? If you run the experiment with pilots only, the most you can say from your results is that one type of display is best for pilots. There is your qualifier. If your lead engineer is designing moving map displays for both pilots and navigators, you have only given him half an answer; worse, if you did not think about the qualifier of user type at all, you may have given him an incorrect answer. So, check for qualifiers and use the ones that will have an effect on decision making as independent variables.
In our example, the type of user will have an effect on decision making, so it should be the second independent variable in the experiment. Also in our example, the size of the display will not have an effect on decision making since the lead engineer only has room for an 8-inch display in the instrument panel. Therefore, size of the display should not be included as an independent variable.
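With the qualifier promoted to a second independent variable, the design becomes a 2 × 2 factorial. A minimal sketch of crossing the two variables (the names are illustrative):

```python
from itertools import product

map_movement = ["track up", "north up"]   # independent variable 1
user_type = ["pilot", "navigator"]        # independent variable 2 (the qualifier)

# Crossing the two variables yields a 2 x 2 factorial design: each user
# type is tested with each map orientation, so the display effect can be
# separated from the effect of prior training.
for movement, user in product(map_movement, user_type):
    print(f"{user} with {movement} display")
```

Each of the four printed combinations is one cell of the design; every participant group contributes data to both display conditions.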
1.1.3 Step 3: Specify Conditions
Specify the exact conditions to be compared. In our example, the lead engineer is interested in track up versus north up. So, the movement of the map will vary between the two conditions but everything else about the displays (e.g., scale factor, display resolution, color quality, size of the display, and so forth) should be exactly the same. This way, if the participants’ performance using the two types of displays is different, that difference can be attributed only to the type of display and not to some other difference between the displays.
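One way to enforce this is to derive both display configurations from a single shared baseline, so that only the variable under test can differ. The parameter names and values below are illustrative assumptions:

```python
BASE_DISPLAY = {
    "scale": "1:500,000",
    "resolution_px": (1024, 1024),
    "color_depth_bits": 24,
    "size_in": 8,
}

# Each condition overrides exactly one field of the shared baseline.
track_up = {**BASE_DISPLAY, "map_movement": "track up"}
north_up = {**BASE_DISPLAY, "map_movement": "north up"}

# Sanity check: the two conditions differ only in map movement.
differing = {k for k in track_up if track_up[k] != north_up[k]}
assert differing == {"map_movement"}
```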
1.1.4 Step 4: Match Participants
Match the participants to the end users. If you want to generalize the results of your experiment to what will happen in the real world, try to match the participants to the users of the system in the real world. This is extremely important since participants’ past experiences may greatly affect their performance in an experiment. In our example, we added a second independent variable to our experiment specifically because of participants’ previous experiences (that is, pilots are used to track up, navigators are trained with north up). If the end users of the display are pilots, we should use pilots as our participants. If the end users are navigators, we should use navigators as our participants. Other participant variables may also be important; in our example, age and training are both very important. Therefore, you should identify what training the user of the map display must have and provide that same training to the participants before the start of data collection.
Age is important because pilots in their forties may have problems focusing on near objects such as map displays. Previous training is also important: F-16 pilots have already used moving map displays while C-130 pilots have not. If the end users are pilots in their twenties with F-16 experience and your participants are pilots in their forties with C-130 experience, you may be giving the lead engineer the wrong answer to his question of which type of display is better.
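In practice this amounts to screening the participant pool against an end-user profile before data collection begins. The records and thresholds in this sketch are made up for illustration:

```python
participant_pool = [
    {"id": 1, "age": 27, "aircraft": "F-16", "moving_map_experience": True},
    {"id": 2, "age": 46, "aircraft": "C-130", "moving_map_experience": False},
    {"id": 3, "age": 24, "aircraft": "F-16", "moving_map_experience": True},
]


def matches_end_users(p, max_age=35, aircraft="F-16"):
    """Keep only participants who resemble the intended end users."""
    return p["age"] <= max_age and p["aircraft"] == aircraft


matched = [p for p in participant_pool if matches_end_users(p)]
print([p["id"] for p in matched])  # -> [1, 3]
```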
1.1.5 Step 5: Select Performance Measures
Your results are influenced to a large degree by the performance measures you select. Performance measures should be relevant, reliable, valid, quantitative, and comprehensive. Let’s use these criteria to select performance measures for our example problem.
Criterion 1: Relevant. Relevance to the question being asked is the prime criterion when selecting performance measures. In our example, the lead engineer’s question is “What type of display format is better?” Better can refer to staying on course better (accuracy), but it can also refer to reaching the waypoints on time (time). Participants’ ratings of which display format they prefer do not answer the question of which display is better from a performance standpoint, because preference ratings can be affected by factors other than performance.
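Both aspects of “better” can be captured with simple objective scores. The sketch below computes an accuracy measure (RMS course deviation) and a time measure (mean absolute waypoint arrival error) from made-up trial data:

```python
import math

# Illustrative trial data: course deviations sampled along the route (ft)
# and arrival-time errors at each waypoint (s).
course_deviation_ft = [12.0, -8.5, 30.2, -15.1, 4.4]
arrival_error_s = [3.0, -1.5, 6.2]

rms_deviation = math.sqrt(
    sum(d * d for d in course_deviation_ft) / len(course_deviation_ft)
)
mean_abs_arrival = sum(abs(e) for e in arrival_error_s) / len(arrival_error_s)

print(f"RMS course deviation: {rms_deviation:.1f} ft")   # accuracy
print(f"Mean |arrival error|: {mean_abs_arrival:.1f} s")  # time
```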
Criterion 2: Reliable. Reliability refers to the repeatability of the measurements. For recording equipment, reliability depends on careful calibration to ensure that measurements are repeatable and accurate (i.e., an actual course deviation of 50.31 feet should always be recorded as 50.31 feet). For rating scales, reliability depends on the clarity of the wording. Rating scales with ambiguous wording will not give reliable measures of performance. For example, if the question on the rating scale is “Was your performance okay?” the participant may respond “No” after his first simulated flight but “Yes” after his second, simply because he is more comfortable with the task. If you now let him repeat his first flight, he may respond “Yes.” In this case, you are getting a different answer to the same question in the same condition. Participants will give more reliable responses to less ambiguous questions such as “Did you deviate more than 100 feet from course in this trial?” Even so, you may still get a first “No” and a second “Yes” to the more precise question, indicating that some learning had improved his performance the second time.
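Repeatability of a rating question can be checked directly by correlating first-run and repeat-run answers from the same participants in the same condition; a reliable question yields nearly the same answers both times. A minimal sketch with invented ratings:

```python
from statistics import correlation  # available in Python 3.10+

# Invented ratings from eight participants who flew the same condition twice.
first_run = [3, 5, 4, 2, 5, 3, 4, 2]
second_run = [3, 4, 4, 2, 5, 3, 5, 2]

r = correlation(first_run, second_run)
print(f"test-retest reliability r = {r:.2f}")  # near 1.0 means repeatable
```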
Participants also need to be calibrated. For example, if you are asking which of eight flight control systems is best and your metric is an absolute rating (e.g., Cooper-Harper Handling Qualities Rating), your participant needs to be calibrated with both a “good” aircraft and a “bad” aircraft at the beginning of the experiment. He may also need to be recalibrated during the course of the experiment. The symptoms that suggest the need to recalibrate your participant are the same as those that indicate that you should recalibrate your measuring equipment: (a) all the ratings are falling in a narrower band than you expect, (b) all the ratings are higher or lower than you expect, and (c) the ratings are generally increasing (or decreasing) across the experiment independent of experimental condition. In these cases, give the participant a flight control system that he has already rated. If this second rating is substantially different from the one he previously gave you for the same flight control system, you need to recalibrate your participants with an aircraft that pulls their ratings away from the average: bad aircraft if all the ratings are near the top, good aircraft if all the ratings are near the bottom.
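The three symptoms can even be screened for automatically as the experiment runs. The thresholds in this sketch are illustrative assumptions, not values from the book:

```python
from statistics import mean, stdev


def calibration_flags(ratings, expected_mean=5.0, expected_sd=2.0):
    """Flag the three recalibration symptoms described above."""
    flags = []
    if stdev(ratings) < expected_sd / 2:                  # (a) narrow band
        flags.append("ratings fall in a narrower band than expected")
    if abs(mean(ratings) - expected_mean) > expected_sd:  # (b) shifted
        flags.append("ratings are higher or lower than expected")
    pairs = list(zip(ratings, ratings[1:]))               # (c) steady trend
    if all(a <= b for a, b in pairs) or all(a >= b for a, b in pairs):
        flags.append("ratings trend steadily across the experiment")
    return flags


print(calibration_flags([6, 6, 7, 7, 8, 8, 9, 9]))
```

If any flag is raised, the check described above applies: re-present a previously rated flight control system and recalibrate the participant if the two ratings differ substantially.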
Criterion 3: Valid. Validity refers to whether you are really measuring what you think you are measuring. Validity is closely tied to reliability: if a measure is not reliable, it can never be valid. The converse is not necessarily true. For example, if you ask a participant to rate his workload from 1 to 10 but do not define what you mean by workload, he may rate the perceived difficulty of the task rather than the amount of effort he expended in performing it.
Criterion 4: Quantitative. Quantitative measures are easier to analyze than qualitative measures. They also provide an estimate of the size of the difference between experimental conditions, which is often very useful in performing trade-off analyses of performance versus cost of system designs. This criterion does not preclude the use of qualitative measures, however, because qualitative measures often improve the understanding of experiment results. For qualitative measures, an additional issue must be considered: the type of rating scale. Nominal scales assign an adjective to the system being evaluated (e.g., easy to use). “A nominal scale is categorical in nature, simply identifying differences among things on some characteristic. ...
