User Interface Inspection Methods

A User-Centered Design Method

Chauncey Wilson

About This Book

User Interface Inspection Methods succinctly covers five inspection methods: heuristic evaluation, perspective-based user interface inspection, cognitive walkthrough, pluralistic walkthrough, and formal usability inspections.

Heuristic evaluation is perhaps the best-known inspection method, requiring a group of evaluators to review a product against a set of general principles. The perspective-based user interface inspection is based on the principle that different perspectives will find different problems in a user interface. In the related persona-based inspection, colleagues assume the roles of personas and review the product based on the needs, background, tasks, and pain points of the different personas. The cognitive walkthrough focuses on ease of learning.

Most of the inspection methods do not require users; the main exception is the pluralistic walkthrough, in which a user is invited to provide feedback while members of a product team listen, observe the user, and ask questions.

After reading this book, you will be able to apply these UI inspection methods with confidence.

Information

Year: 2013
ISBN: 9780124104488
Topic: Design
Subtopic: UI/UX Design
Pages: 146
Language: English
Format: ePUB

Chapter 1

Heuristic Evaluation

Heuristic evaluation is a usability inspection method that asks usability practitioners and other stakeholders to evaluate a user interface against a set of principles or commonsense rules. The method was originally conceived as a discount usability technique for finding problems early in wireframes, prototypes, and working products. A side benefit of the method is that evaluators learn about the principles that support good usability. Heuristic evaluation and related inspection methods are among the most common approaches to finding usability problems.

Keywords

Heuristic; heuristic evaluation; inspection; usability inspection; walkthrough
Outline
Overview of Heuristic Evaluation
When Should You Use Heuristic Evaluation?
Strengths
Weaknesses
What Resources Do You Need to Conduct a Heuristic Evaluation?
Personnel, Participants, and Training
Hardware and Software
Documents and Materials
Procedures and Practical Advice on the Heuristic Evaluation Method
Planning a Heuristic Evaluation Session
Conducting a Heuristic Evaluation
After the Heuristic Evaluations by Individuals
Variations and Extensions of the Heuristic Evaluation Method
The Group Heuristic Evaluation with Minimal Preparation
Crowdsourced Heuristic Evaluation
Participatory Heuristic Evaluation
Cooperative Evaluation
Heuristic Walkthrough
HE-Plus Method
Major Issues in the Use of the Heuristic Evaluation Method
How Does the UX Team Generate Heuristics When the Basic Set Is Not Sufficient?
Do Heuristic Evaluations Find “Real” Problems?
Does Heuristic Evaluation Lead to Better Products?
How Much Does Expertise Matter?
Should Inspections and Walkthroughs Highlight Positive Aspects of a Product’s UI?
Individual Reliability and Group Thoroughness
Data, Analysis, and Reporting
Conclusions
Alternate Names: Expert review, heuristic inspection, usability inspection, peer review, user interface inspection.
Related Methods: Cognitive walkthrough, expert review, formal usability inspection, perspective-based user interface inspection, pluralistic walkthrough.

Overview of Heuristic Evaluation

A heuristic evaluation is a type of user interface (UI) or usability inspection where an individual, or a team of individuals, evaluates a specification, prototype, or product against a brief list of succinct usability or user experience (UX) principles or areas of concern (Nielsen, 1993; Nielsen & Molich, 1990). The heuristic evaluation method is one of the most common methods in user-centered design (UCD) for identifying usability problems (Rosenbaum, Rohn, & Humburg, 2000), although in some cases, what people refer to as a heuristic evaluation might be better categorized as an expert review (Chapter 2) because the heuristics were mixed with additional principles and with the evaluator’s personal beliefs and knowledge about usability.
A heuristic is a commonsense rule or a simplified principle. A list of heuristics is meant as an aid or mnemonic device for the evaluators. Table 1.1 is a list of heuristics from Nielsen (1994a) that you might give to your team of evaluators to remind them about potential problem areas.
Table 1.1
A Set of Heuristics from Nielsen (1994a)
• Visibility of system status
• Match between the system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation
There are several general approaches for conducting a heuristic evaluation:
• Object-based. In an object-based heuristic evaluation, evaluators are asked to examine particular UI objects for problems related to the heuristics. These objects can include mobile screens, hardware control panels, web pages, windows, dialog boxes, menus, controls (e.g., radio buttons, push buttons, and text fields), error messages, and keyboard assignments.
• Task-based. In the task-based approach, evaluators are given heuristics and a set of tasks to work through and are asked to report on problems related to the heuristics that occur as they perform or simulate the tasks.
• An object–task hybrid. A hybrid approach combines the object and task approaches. Evaluators first work through a set of tasks looking for issues related to heuristics and then evaluate designated UI objects against the same heuristics. The hybrid approach is similar to the heuristic walkthrough (Sears, 1997), which is described later in this book.
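To make the contrast between the three approaches concrete, the following Python sketch builds an inspection checklist for each one by pairing heuristics with UI objects, tasks, or both. The tooling, heuristics, objects, and tasks here are invented examples, not materials from the book; a real evaluation would use the project's own heuristics and scenarios.

```python
from itertools import product

# Hypothetical heuristics, objects, and tasks for illustration only.
heuristics = ["Visibility of system status", "Error prevention"]
ui_objects = ["Login dialog", "Search results page"]
tasks = ["Reset a forgotten password", "Filter results by date"]

def object_based(heuristics, ui_objects):
    """Evaluators check every designated UI object against every heuristic."""
    return list(product(heuristics, ui_objects))

def task_based(heuristics, tasks):
    """Evaluators work through tasks, watching for heuristic violations."""
    return list(product(heuristics, tasks))

def hybrid(heuristics, ui_objects, tasks):
    """Tasks first, then a sweep of designated objects, against the same heuristics."""
    return task_based(heuristics, tasks) + object_based(heuristics, ui_objects)

for heuristic, target in hybrid(heuristics, ui_objects, tasks):
    print(f"Check '{target}' for violations of '{heuristic}'")
```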
In task-based or hybrid approaches, the choice of tasks for the team of evaluators is critical. Questions to consider when choosing tasks include the following:
• Is the task realistic? Simplistic tasks might not reveal serious problems.
• What is the frequency of the task? The frequency of the task might determine whether something is a problem or not. Consider a complex program that you use once a year (e.g., US tax programs). A program intended to be used once a year might require high initial learning support, extensive feedback, and repeated success messages, all features intended to support the infrequent user. These same features might be considered problems for the daily user of the same program (e.g., a tax accountant or financial advisor) who is interested in efficiency and doesn’t want constant, irritating feedback messages.
• What are the consequences of the task? Will an error during a task result in a minor or major loss of data? Will someone die if there is a task failure? If you are working on medical monitoring systems, the consequences of missed problems could be disastrous.
• Are the data used in the task realistic? We often use simple samples of data for usability evaluations because they are convenient, but you might reveal more problems with “dirty data.”
• Are you using data at the right scale? Some tasks are easy with limited data sets (e.g., 100 or 1000 items) but very hard when tens of thousands or millions of items are involved. It is convenient to use small samples for task-based evaluations, but those small samples of test data may hide significant problems.
Multiple evaluators are recommended for heuristic evaluations, because different people who evaluate the same UI often identify quite different problems (Hertzum, Jacobsen, & Molich, 2002; Molich & Dumas, 2008; Nielsen, 1993) and also vary considerably in their ratings of the severity of identical problems (Molich, 2011).
The Evaluator Effect in Usability Evaluation
The common finding that people who evaluate the usability of the same product report different sets of problems is called the “evaluator effect” (Hertzum & Jacobsen, 2001; Jacobsen, Hertzum, & John, 1998a,b). The effect can be seen in both testing and inspection studies. There are many potential causes for this effect, including different backgrounds, different levels of expertise, the quality of the instructions for conducting an evaluation, knowledge of heuristics, knowledge of the tasks and environment, knowledge of the user, and the sheer number of problems that a complex system (e.g., creation-oriented applications like Photoshop and AutoCAD), with many ways to use features and complete tasks, can present to users (Akers, Jeffries, Simpson, & Winograd, 2012). From a practical point of view, knowing that evaluators will find different problems, you can mitigate the evaluator effect by:
• Using multiple evaluators with both UX and domain knowledge.
• Training evaluators on the method and materials used (checklists, heuristics, tasks, etc.). Providing examples of violations of heuristics and training on severity scales can improve the quality of inspections and walkthroughs.
• Providing realistic scenarios, background on the users, and their work or play environments.
• Providing a common form for reporting results and training people on how to report problems (a sketch of such a form follows this list).
• Providing evaluators with the UX dimensions (e.g., learnability, memorability, efficiency, error prevention, and aesthetics) that are most critical to users.
• Considering how a feature might be a problem for a novice but a delighter for an expert.
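One practical way to implement a common reporting form is a small structured record, as in the Python sketch below. The field names and the 0–4 severity convention are assumptions common in usability work, not a format the book prescribes.

```python
from dataclasses import dataclass

# A 0-4 severity convention common in usability work (an assumption here,
# not a scale mandated by the book).
SEVERITY_LABELS = {0: "not a problem", 1: "cosmetic", 2: "minor",
                   3: "major", 4: "catastrophic"}

@dataclass
class ProblemReport:
    evaluator: str          # who found the problem
    location: str           # screen, dialog, or task step where it occurred
    heuristic: str          # the heuristic that is violated
    description: str        # what the problem is and why it matters
    severity: int           # 0-4 on the scale above
    positive: bool = False  # inspections can also record things that work well

    def summary(self) -> str:
        label = SEVERITY_LABELS.get(self.severity, "unknown")
        return f"[{label}] {self.location}: {self.description} ({self.heuristic})"

report = ProblemReport(
    evaluator="Evaluator A",
    location="Checkout page",
    heuristic="Error prevention",
    description="No confirmation before the entire cart is deleted.",
    severity=3,
)
print(report.summary())
```

A shared structure like this makes individual reports comparable across evaluators, which in turn makes the aggregation step described below much easier.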
This book discusses the strengths and weaknesses of each approach and provides tips from academic and practical perspectives on how to make inspections and walkthroughs more effective.
During the heuristic evaluation, evaluators can write down problems as they work independently, or they can think aloud while a colleague takes notes about the problems encountered during the evaluation. The results of all the evaluations can then be aggregated into a composite list of usability problems.
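A minimal sketch of that aggregation step follows, assuming each finding can be keyed by its location and heuristic. That keying is a simplification for illustration; in practice, deciding whether two evaluators' reports describe the same problem requires human judgment.

```python
from collections import defaultdict

# Illustrative findings per evaluator: (location, heuristic, description).
findings = {
    "Evaluator A": [
        ("Login dialog", "Visibility of system status",
         "No feedback while credentials are verified."),
    ],
    "Evaluator B": [
        ("Login dialog", "Visibility of system status",
         "Spinner missing during sign-in."),
        ("Search page", "Error prevention",
         "Date filter accepts an end date before the start date."),
    ],
}

# Merge reports that point at the same location/heuristic pair.
composite = defaultdict(list)
for evaluator, problems in findings.items():
    for location, heuristic, description in problems:
        composite[(location, heuristic)].append((evaluator, description))

for (location, heuristic), reports in composite.items():
    evaluators = sorted({e for e, _ in reports})
    print(f"{location} / {heuristic}: reported by {', '.join(evaluators)}")
```

Note how the invented data also illustrates the evaluator effect: the two evaluators overlap on only one problem, so the composite list is more thorough than either individual report.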
