User Interface Inspection Methods
eBook - ePub

A User-Centered Design Method

Chauncey Wilson

146 pages · English · ePub (mobile-friendly)

About This Book

User Interface Inspection Methods succinctly covers five inspection methods: heuristic evaluation, perspective-based user interface inspection, cognitive walkthrough, pluralistic walkthrough, and formal usability inspections.

Heuristic evaluation is perhaps the best-known inspection method, requiring a group of evaluators to review a product against a set of general principles. The perspective-based user interface inspection is based on the principle that different perspectives will find different problems in a user interface. In the related persona-based inspection, colleagues assume the roles of personas and review the product based on the needs, background, tasks, and pain points of the different personas. The cognitive walkthrough focuses on ease of learning.

Most of the inspection methods do not require users; the main exception is the pluralistic walkthrough, in which a user is invited to provide feedback while members of a product team listen, observe the user, and ask questions.

After reading this book, you will be able to use these UI inspection methods with confidence.


Information

Year: 2013
ISBN: 9780124104488
Chapter 1

Heuristic Evaluation

Heuristic evaluation is a usability inspection method that asks usability practitioners and other stakeholders to evaluate a user interface against a set of principles or commonsense rules. The method was originally conceived as a discount usability method that could be used to find problems early using wireframes, prototypes, and working products. A side benefit of the method is that evaluators learn about the principles that support good usability. Heuristic evaluation and related inspection methods are among the most common methods for finding usability problems.

Keywords

Heuristic; heuristic evaluation; inspection; usability inspection; walkthrough
Outline
Overview of Heuristic Evaluation
When Should You Use Heuristic Evaluation?
Strengths
Weaknesses
What Resources Do You Need to Conduct a Heuristic Evaluation?
Personnel, Participants, and Training
Hardware and Software
Documents and Materials
Procedures and Practical Advice on the Heuristic Evaluation Method
Planning a Heuristic Evaluation Session
Conducting a Heuristic Evaluation
After the Heuristic Evaluations by Individuals
Variations and Extensions of the Heuristic Evaluation Method
The Group Heuristic Evaluation with Minimal Preparation
Crowdsourced Heuristic Evaluation
Participatory Heuristic Evaluation
Cooperative Evaluation
Heuristic Walkthrough
HE-Plus Method
Major Issues in the Use of the Heuristic Evaluation Method
How Does the UX Team Generate Heuristics When the Basic Set Is Not Sufficient?
Do Heuristic Evaluations Find “Real” Problems?
Does Heuristic Evaluation Lead to Better Products?
How Much Does Expertise Matter?
Should Inspections and Walkthroughs Highlight Positive Aspects of a Product’s UI?
Individual Reliability and Group Thoroughness
Data, Analysis, and Reporting
Conclusions
Alternate Names: Expert review, heuristic inspection, usability inspection, peer review, user interface inspection.
Related Methods: Cognitive walkthrough, expert review, formal usability inspection, perspective-based user interface inspection, pluralistic walkthrough.

Overview of Heuristic Evaluation

A heuristic evaluation is a type of user interface (UI) or usability inspection where an individual, or a team of individuals, evaluates a specification, prototype, or product against a brief list of succinct usability or user experience (UX) principles or areas of concern (Nielsen, 1993; Nielsen & Molich, 1990). The heuristic evaluation method is one of the most common methods in user-centered design (UCD) for identifying usability problems (Rosenbaum, Rohn, & Humburg, 2000), although in some cases, what people refer to as a heuristic evaluation might be better categorized as an expert review (Chapter 2) because heuristics were mixed with additional principles and personal beliefs and knowledge about usability.
A heuristic is a commonsense rule or a simplified principle. A list of heuristics is meant as an aid or mnemonic device for the evaluators. Table 1.1 is a list of heuristics from Nielsen (1994a) that you might give to your team of evaluators to remind them about potential problem areas.
Table 1.1
A Set of Heuristics from Nielsen (1994a)
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
There are several general approaches for conducting a heuristic evaluation:
Object-based. In an object-based heuristic evaluation, evaluators are asked to examine particular UI objects for problems related to the heuristics. These objects can include mobile screens, hardware control panels, web pages, windows, dialog boxes, menus, controls (e.g., radio buttons, push buttons, and text fields), error messages, and keyboard assignments.
Task-based. In the task-based approach, evaluators are given heuristics and a set of tasks to work through and are asked to report on problems related to heuristics that occur as they perform or simulate the tasks.
An object–task hybrid. A hybrid approach combines the object and task approaches. Evaluators first work through a set of tasks looking for issues related to heuristics and then evaluate designated UI objects against the same heuristics. The hybrid approach is similar to the heuristic walkthrough (Sears, 1997), which is described later in this book.
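As a rough illustration, the object-based approach amounts to crossing a list of UI objects with a list of heuristics so that every pairing becomes one checklist item for an evaluator. This Python sketch assumes placeholder heuristic and object names (they are not taken from the book):

```python
from itertools import product

# Hypothetical heuristics and UI objects; in practice these come from the
# chosen heuristic set (e.g., Table 1.1) and the product's actual screens.
heuristics = ["Visibility of system status", "Error prevention"]
ui_objects = ["Login dialog", "Settings menu", "Error messages"]

# Object-based inspection: each (object, heuristic) pair is one checklist
# item, with room to record any problems the evaluator finds.
checklist = [
    {"object": obj, "heuristic": h, "problems_found": []}
    for obj, h in product(ui_objects, heuristics)
]

print(len(checklist))  # 3 objects x 2 heuristics = 6 items
```

A task-based evaluation would substitute task steps for UI objects; the hybrid approach would simply concatenate both checklists.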
In task-based or hybrid approaches, the choice of tasks for the team of evaluators is critical. Questions to consider when choosing tasks include the following:
Is the task realistic? Simplistic tasks might not reveal serious problems.
What is the frequency of the task? The frequency of the task might determine whether something is a problem or not. Consider a complex program that you use once a year (e.g., US tax programs). A program intended to be used once a year might require high initial learning support, much feedback, and repeated success messages—all features intended to support the infrequent user. These same features might be considered problems for the daily user of the same program (e.g., a tax accountant or financial advisor) who is interested in efficiency and does not want constant, irritating feedback messages.
What are the consequences of the task? Will an error during a task result in a minor or major loss of data? Will someone die if there is task failure? If you are working on medical monitoring systems, the consequences of missed problems could be disastrous.
Are the data used in the task realistic? We often use simple samples of data for usability evaluations because doing so is convenient, but you might reveal more problems with “dirty data.”
Are you using data at the right scale? Some tasks are easy with limited data sets (e.g., 100 or 1000 items) but very hard when tens of thousands or millions of items are involved. It is convenient to use small samples for task-based evaluations, but those small samples of test data may hide significant problems.
Multiple evaluators are recommended for heuristic evaluations, because different people who evaluate the same UI often identify quite different problems (Hertzum, Jacobsen, & Molich, 2002; Molich & Dumas, 2008; Nielsen, 1993) and also vary considerably in their ratings of the severity of identical problems (Molich, 2011).
The Evaluator Effect in Usability Evaluation
The common finding that people who evaluate the usability of the same product report different sets of problems is called the “evaluator effect” (Hertzum & Jacobsen, 2001; Jacobsen, Hertzum, & John, 1998a,b). The effect can be seen in both testing and inspection studies. There are many potential causes for this effect, including different backgrounds, different levels of expertise, the quality of the instructions for conducting an evaluation, knowledge of heuristics, knowledge of the tasks and environment, knowledge of the user, and the sheer number of problems that a complex system (e.g., creation-oriented applications like Photoshop and AutoCAD), with many ways to use features and complete tasks, can present to users (Akers, Jeffries, Simpson, & Winograd, 2012). From a practical point of view, knowing that evaluators will find different problems, you can mitigate the effect by:
Using multiple evaluators with both UX and domain knowledge.
Training evaluators on the method and materials used (checklists, heuristics, tasks, etc.). Providing examples of violations of heuristics and training on severity scales can improve the quality of inspections and walkthroughs.
Providing realistic scenarios, background on the users, and their work or play environments.
Providing a common form for reporting results and training people on how to report problems.
Providing evaluators with the UX dimensions (e.g., learnability, memorability, efficiency, error prevention, and aesthetics) that are most critical to users.
Considering how something might be a problem to a novice and a delighter to an expert.
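One item in the list above is providing a common form for reporting results. A shared report structure could be sketched as a small Python dataclass; the field names and severity scale here are illustrative assumptions, not the book's own form:

```python
from dataclasses import dataclass

@dataclass
class ProblemReport:
    """One usability problem, recorded on a shared form so that every
    evaluator reports findings in the same structure."""
    evaluator: str
    location: str             # screen, dialog, or task step where it occurred
    heuristic: str            # which heuristic was violated
    description: str          # what the evaluator observed
    severity: int             # e.g., 0 (not a problem) .. 4 (catastrophe)
    recommendation: str = ""  # optional suggested fix

report = ProblemReport(
    evaluator="Evaluator A",
    location="Checkout page",
    heuristic="Visibility of system status",
    description="No progress indicator while payment is processing.",
    severity=3,
)
print(report.severity)  # 3
```

Because every report carries the same fields, the individual findings can later be merged, compared, and sorted by severity without manual reconciliation.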
This book discusses the strengths and weaknesses of each approach and provides tips from academic and practical perspectives on how to make inspections and walkthroughs more effective.
During the heuristic evaluation, evaluators can write down problems as they work independently, or they can think aloud while a colleague takes notes about the problems encountered during the evaluation. The results of all the evaluations can then be aggregated into a composite list of usability problems.
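The aggregation step can be sketched in Python: group the individual findings by problem, record which evaluators found each one, and rank by severity. The problem keys, evaluator names, and severity values below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Each finding: (evaluator, problem_key, severity). The problem_key is
# however the team decides that two reports describe the same problem.
findings = [
    ("A", "no-progress-indicator", 3),
    ("B", "no-progress-indicator", 4),
    ("A", "ambiguous-cancel-button", 2),
    ("C", "no-progress-indicator", 3),
]

by_problem = defaultdict(list)
for evaluator, key, severity in findings:
    by_problem[key].append((evaluator, severity))

composite = [
    {
        "problem": key,
        "found_by": sorted(e for e, _ in reports),
        "mean_severity": mean(s for _, s in reports),
    }
    for key, reports in by_problem.items()
]
# Rank so the most severe, most widely reported problems come first.
composite.sort(key=lambda p: (-p["mean_severity"], -len(p["found_by"])))

print(composite[0]["problem"])  # no-progress-indicator
```

The composite list makes the evaluator effect visible: a problem found by three evaluators is a stronger signal than one found by a single evaluator at the same severity.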
