Usability Testing for Survey Research

Emily Geisen, Jennifer Romano Bergstrom

About This Book

Usability Testing for Survey Research provides researchers with a guide to the tools needed to evaluate, test, and modify surveys iteratively during the pretesting process. It includes examples that apply usability testing to any type of survey at any stage of development, along with tactics for tailoring usability testing to meet budget and scheduling constraints.

The book's authors distill their experience to provide tips on how usability testing can be applied to paper surveys, mixed-mode surveys, interviewer-administered tools, and additional products.

Readers will gain an understanding of usability and usability testing and why they are needed for survey research; guidance on how to design and conduct usability tests and how to analyze and report the findings; ideas for tailoring usability testing to budget and schedule constraints; and advice on applying usability testing to other survey-related products, such as project websites and interviewer-administered tools.

  • Explains how to design and conduct usability tests and analyze and report the findings
  • Includes examples of how to conduct usability testing on any type of survey, from a simple three-question survey on a mobile device to a complex, multi-page establishment survey
  • Presents real-world examples from leading usability and survey professionals, including a diverse collection of case studies and considerations for using and combining other methods
  • Discusses the facilities, materials, and software needed for usability testing, including in-lab testing, remote testing, and eye tracking


Chapter 1

Usability and Usability Testing

Abstract

This chapter provides a brief history of usability and explains the key components of usability testing—the product, the users of the product, users’ goals, the context of use, and the metrics of evaluation. We explain what these usability components mean when evaluating the usability of surveys. We discuss the importance of usability testing as a pretesting method, but note that it does not replace good questionnaire design. We conclude with a brief overview of the usability testing process as applied to survey research.

Keywords

Usability; usability testing; product; users; goals; key components of usability; context of use; usability metrics; accuracy; efficiency; satisfaction; pilot testing; cognitive testing
When I (Emily) was attending graduate school, I met an engineer who worked at Ford Motor Company. He explained that it was his job to take artists’ concept drawings and use them to engineer a working car. He noted that while the designs were usually beautiful, modern, and stylish, they were not always usable. As a result, his conversations usually went something like this (Fig. 1.1):
ENGINEER: “This is a lovely design, but a car really must have wheels to function.”
ARTIST: “Oh, but wheels are so ugly!”

Figure 1.1 A car without wheels might have a nice design, but people cannot use it.
While it is obvious that cars need wheels to work, many aspects of what makes a design usable are not clear, which necessitates usability testing. In his ground-breaking book, The Design of Everyday Things, Norman (2002) demonstrated that design—and consequently, usability—affects things that people use, from teapots to airplanes to surveys.
In this chapter, we provide a brief history of usability, make the case for why usability is needed for evaluating surveys, explain what it means—generally and specifically for survey research—and conclude with an overview of the usability testing process.

A Brief History

The concept of usability, which stems from the discipline of Human Factors, is grounded in industrial efficiency and has been around for centuries. Intuitive design, ease of use, and error reduction have long been priorities in military settings, such as training soldiers and designing airplane cockpits.
The concept has been used for survey research for decades. Beginning in the late 1970s, a significant body of research evaluated how respondents completed paper surveys and forms, identifying designs and layouts that made surveys easier to use (Dillman, 1978, 1991, 1995; Dillman, Sinclair, & Clark, 1993; Jenkins & Dillman, 1997; Marquis, Nichols, & Tedesco, 1998).
The terms “usability engineering” and “usability” were first used in 1979 to discuss how people interacted with computers (Bennett, 1979). In the 1980s, as personal computers became more affordable, there was value in designing intuitive computer interfaces.
With the emergence and rise of computer-assisted interviewing in the 1990s, researchers began to assess not only the feasibility of computer-based surveys (i.e., how likely it was that the new technology would work), but also their usability (Couper, 2000; Hansen, Fuchs, & Couper, 1997).
Couper (2000) predicted that usability testing would become a standard questionnaire-pretesting technique. Although usability testing has become significantly more prominent, it has not yet become standard in many organizations. Of those organizations that regularly conduct usability testing, few have documented their process. To become a standard, practitioners must first share their methods and theories, so the field can reach a consensus on best practices. The primary purpose of this book is to fill that gap and present a model for incorporating usability testing as a standard pretesting technique for surveys and to share knowledge about best practices.

Defining Modern Usability

The International Organization for Standardization (ISO 9241-11, 1998) defines usability in this way:
The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction, in a specified context of use.
We start by breaking that definition apart into its five key components:
1. The product
2. The specified users of the product
3. The goals of the users
4. The context of use
5. Metrics of evaluation (effectiveness, efficiency, and satisfaction)
To relate these concepts to a more traditional situation, let us imagine that we will usability-test a desk chair (the product). We would test how well teachers (the specified users in this example) can use the chair at their desks in the classroom (the context of use). We would give them tasks identical to how they would normally use the chair. For example, the teachers’ task might be to sit in the chair and adjust it to their preferred height (the goals of the users). We would measure usability by evaluating (the metrics) whether and how well they can adjust the height (effectiveness), how quickly they can adjust it (efficiency), and how satisfied they are with the height they set (satisfaction).
Additionally, we would conduct iterative usability testing, in which changes are made to the chair based on the findings, and then the chair is tested again with a new set of participants using the same tasks and metrics. We would compare the metrics from each round of testing with those from the previous round; if usability improves, the metrics should improve as well. This iterative process would continue until optimal usability is achieved.
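To make the three metrics concrete, here is a minimal sketch in Python of how observations from one round of the hypothetical chair test could be summarized; the data structure, field names, and example values are our own illustrative assumptions, not part of the ISO definition or of this book.

```python
# Minimal sketch: summarizing one round of usability-test observations.
# All names and values below are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Observation:
    """One participant's attempt at the chair-adjustment task."""
    completed_task: bool        # effectiveness input: adjusted the chair correctly?
    seconds_on_task: float      # efficiency input: time spent on the task
    satisfaction_1_to_5: int    # satisfaction input: post-task rating


def summarize_round(observations: list[Observation]) -> dict:
    """Summarize one round of testing so it can be compared with later rounds."""
    completed = [o for o in observations if o.completed_task]
    return {
        # Effectiveness: share of participants who completed the task
        "effectiveness": len(completed) / len(observations),
        # Efficiency: mean time on task among those who completed it
        "mean_seconds_on_task": mean(o.seconds_on_task for o in completed),
        # Satisfaction: mean post-task rating across all participants
        "mean_satisfaction": mean(o.satisfaction_1_to_5 for o in observations),
    }


# Illustrative round-1 data; after revising the chair, a second round would be
# collected and summarized the same way for comparison.
round_1 = [
    Observation(True, 42.0, 4),
    Observation(False, 90.0, 2),
    Observation(True, 35.5, 5),
]
print(summarize_round(round_1))
```

Comparing such summaries across rounds offers a simple way to see whether a redesign actually improved effectiveness, efficiency, and satisfaction.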

Defining Usability for Surveys

Usability testing of surveys is really no different—we give realistic tasks to participants who represent the real survey respondents. Then we assess how well participants can use the survey to complete tasks, which often include entering responses, navigating, and finding information.

The Product

Survey products include anything from paper surveys to web-based surveys, and self-administered surveys to interviewer-administered surveys. In addition to surveys, usability testing can also be helpful for evaluating forms and other survey-related products, such as showcards used during interviews, project and data-dissemination websites, data-collection monitoring systems or dashboards, and custom control systems.
We test self-administered surveys because they are very prone to usability errors, regardless of the mode of administration (e.g., paper or web-based: desktop computer, laptop, tablet, smartphone). This is largely because of the absence of an interviewer to help navigate the survey, provide additional information, or resolve consistency errors. Consequently, a respondent may provide inaccurate data or become frustrated and break off the survey. Usability testing is one method that can be used to identify, evaluate, and ultimately resolve some of these issues.
Interviewer-administered surveys have an advantage over self-administered surveys because interviewers are usually trained on how to administer the survey correctly, they practice using the survey, and they conduct the survey multiple times. Therefore it may seem that these survey products need less testing. Although the presence of an interviewer reduces the likelihood of certain types of usability errors, a poorly designed interviewer-administered survey can still affect data quality, burden interviewers, or unnecessarily lengthen interview times.
For example, when an interviewer asks a respondent, “What is your date of birth?,” the respondent could give a variety of valid answers, such as August 28th, 1975, or 8-28-1975, or 1975-8-28, or 28-8-1975. However, the interviewer may be able to enter responses in only one format, such as 8/28/1975. Requiring interviewers to convert the name of a month to a numeric format in their heads during the interview could introduce error, and the extra step adds burden.
Usability testing is likely to detect these types of errors in interviewer-administered surveys: we may observe a long pause or an entry error as interviewers convert a verbal response to a numeric one. Alternatively, interviewers may suggest how the survey could be revised to fix a problem they experienced during the test. In this example, a good practice is to validate the date of birth by having the interviewer repeat it back to the respondent; it is always better to prevent errors than to correct them.
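One way to address the date-entry problem described above is for the instrument to accept several common response formats and normalize them automatically, rather than asking the interviewer to convert them mentally. The sketch below is a minimal illustration in Python; the accepted formats and the function name are our own assumptions, not a recommendation from the authors.

```python
# Minimal sketch: normalizing a date of birth entered in several common
# formats to a single canonical format, so the interviewer can type the
# response as heard instead of converting it mentally.

from datetime import datetime

# Formats the instrument will accept; extend as needed for a given study.
# Day-first entries such as 28-8-1975 are ambiguous with month-first ones
# and would need a study-specific rule, so they are omitted here.
ACCEPTED_FORMATS = [
    "%B %d %Y",   # August 28 1975
    "%m-%d-%Y",   # 8-28-1975
    "%Y-%m-%d",   # 1975-8-28
    "%m/%d/%Y",   # 8/28/1975
]


def normalize_dob(raw: str) -> str:
    """Return the date of birth as MM/DD/YYYY, or raise ValueError."""
    # Minimal cleanup for spoken ordinals and commas, e.g. "August 28th, 1975".
    cleaned = raw.replace("th", "").replace(",", "").strip()
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(cleaned, fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")


print(normalize_dob("August 28th 1975"))  # 08/28/1975
print(normalize_dob("1975-8-28"))         # 08/28/1975
```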
Another reason to test interviewer-administered surveys is to evaluate which navigation strategies are the most intuitive for interviewers, which can decrease survey-administration times. Let us look at an example.
Fig. 1...
