We had a problem. In a usability test we were conducting for a Japanese client, the interpreter could not keep up, and the client's observers were getting frustrated. True, the moderator had a fairly rapid rate of speech, but that was not the major problem. We tried slowing the moderator's rate of speech, which helped a little but did not solve the problem. The problem was the language itself: English, when translated into Japanese, expands for linguistic and cultural reasons. (Written language is different; written Japanese tends to be more compact than English.) In addition to slowing the moderator's rate of speech, we had to ask her to insert pauses into the flow of the test to allow the interpreter to catch up. The expansion was mildly disruptive to the session flow and forced a schedule change, as 60-minute sessions became 75-minute sessions. All came out well in the end: the problem was solved, but we learned a lesson. These are the lessons of this book.
1.1. Globalization, Localization, and User Research
As the world gets smaller and flatter (in the sense of Friedman [2005]), user research is increasingly focused on the globalization and localization of products. In the context of user experience, we define "globalization" as user experiences that are common to all users. "Localizations" are experiences that must differ when introduced to different user communities because of language or cultural artifacts (e.g., colors, signs, and rituals). A mobile phone whose hardware does not change as it is deployed from market to market creates the same user experience within each market; it is globalized. However, the method for sending an SMS message must differ between, for example, China and France because of different languages and different text entry methods. These changes are localizations. Localizations are the subject of a series of advertisements by HSBC. These ads are truly remarkable in their simplicity and in their poignant reminder to "Never underestimate the importance of local knowledge." They also point out how our own cultures sometimes blind us to artifacts (e.g., the use of color) that may be very significant and interpreted very differently by people from other cultures.
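For readers who build software, the distinction can be sketched in code. The following is a purely hypothetical illustration (the market codes, feature names, and text entry labels are invented for this example, not drawn from any real product): the globalized core is identical in every market, while localizations layer on what must differ.

```python
# Hypothetical sketch: globalization vs. localization in a product definition.
# The globalized core ships unchanged to every market; localizations capture
# what must differ per market (language, text entry method, and so on).

GLOBAL_CORE = {"hardware": "same handset", "features": ["calls", "sms", "contacts"]}

# Market-specific layers (labels invented for illustration).
LOCALIZATIONS = {
    "zh-CN": {"language": "Chinese", "text_entry": "pinyin input"},
    "fr-FR": {"language": "French", "text_entry": "AZERTY keypad"},
}

def experience(market: str) -> dict:
    """The shipped user experience: the shared core plus the local layer."""
    return {**GLOBAL_CORE, **LOCALIZATIONS[market]}

# The hardware is globalized (identical across markets);
# the SMS text entry method is localized (differs by market).
assert experience("zh-CN")["hardware"] == experience("fr-FR")["hardware"]
assert experience("zh-CN")["text_entry"] != experience("fr-FR")["text_entry"]
```

The design point mirrors the mobile phone example above: the same artifact can be globalized along one dimension (hardware) and localized along another (text entry).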
As organizations expand into new markets, they discover that what they know and are comfortable with in their own cultures often falters in other, even seemingly similar, cultures. Case in point: George Bernard Shaw once described England and America as "two countries separated by a common language," a truth I discovered in a fruitless search for a "Band-Aid" in London (in the United Kingdom, the term is "plaster"). Much of the purpose of this book is to make practitioners aware of what needs to be in place both to control (where possible) and to understand how cultural and linguistic differences influence user research. We must also be mindful of the unequal distribution of technological expertise throughout the world. What a Finnish mobile phone user assumes to be a basic technical skill may be an advanced concept for someone in any one of a dozen countries. This reality, more than anything, calls for global user research. Our assumptions must be tested to ensure that we have met our objectives as user researchers.
To get a better handle on global user research, it is important to put the term "user research" in historical context. Next, the term is defined in some detail, and then we briefly compare user research to other complementary disciplines.
1.2. Origins of User Research
One of the defining characteristics of human nature is that we make and use tools.1 Humans use tools to control their environment: sticks to extend reach, pots to boil water, medieval siege machines to make war, iPhones to check e-mail. For most of human existence, tools were developed largely by the craftsmen who needed them. If a blacksmith needed a forge, he built a forge. Certainly, a mason might rely on a carpenter to make a hammer handle or parts of a lathe, but the carpenter did not have a research team studying the mason's shop and declaring, "We have an idea that will make your work easier." That would come later.
Fast forward to the industrial revolution. With the concepts of division of labor, productivity, and workflow, institutional emphasis was placed on understanding how to make tools better so that people could produce more. Production increased because we invented machines to mechanize labor (e.g., weaving by machine rather than by hand); it also improved because we made the machines easier for people to use. Craftsmen looked at the piecemeal work that users had to do, and they designed and built increasingly specific tools to better accomplish the tasks.
We increased productivity by continuing to adapt our skills as tool makers. At the turn of the twentieth century, the confluence of two key forces created the kernel of user research as it is understood today. First, whereas the task of the tool builder had previously been to improve mechanical advantage, tools were beginning to be needed to extend cognitive and perceptual functions (e.g., to calculate faster using adding machines). Second, theory and method in the behavioral sciences were maturing, and research with direct implications for improving human performance was being conducted.2 For example, one of the first research programs with practical applications in human performance was conducted by Miles Tinker from the 1920s to the 1950s on the legibility of print (Tinker, 1963). Tinker showed us, among other things, that reading speed varies with typeface, type case, line spacing, and so on. With this knowledge, researchers showed that we could communicate more effectively through the form of the printed document. A more rigorous understanding of human capabilities followed from the development of accepted experimental techniques.
During this period, another movement in "human factors," as it was known, emerged largely in physical ergonomics and workplace safety. World War II catalyzed the cognitive component as workers moved from doing the work to supervising machines doing the work. As the industrial age gave way to the information age in the developed world (circa the 1980s), what began as human factors spawned a new generation of tool designers and tool builders. The information age has only served to tune and amplify these advances.
Today's technology designers and builders bear little resemblance to the smiths and carpenters of old, yet they have similar objectives. They augment and extend cognitive functions and enable work to proceed on levels never before imagined. To extend that reach, we have to know what humans (users) need and what they are capable of doing. In short, tools of old magnified physical labor; tools today magnify cognitive labor. To be sure, physical characteristics still matter, but they are of secondary concern.
What has this change meant for tool designers and builders? The tendency for a tool's user to also be its designer has continued to erode as we have progressed from the industrial to the information age. The people who build tools are often not the people who design them; for instance, the people who program an electronic medical record application generally did not design it. The same split holds in other disciplines, such as architecture and construction. In information age terms, those involved in construction (e.g., planners and programmers) are usually not those involved in architecture (e.g., system designers and user researchers). Thus, even within modern tool building, we have a division and specialization of labor. That specialization has gone so far that researchers now do research on how to do user research.
It may be slightly bold of me to borrow the history of tool building as the foundation of user research, but it is fitting. The mission of user research is to design better tools. Unlike psychology, user research is not directed toward understanding human behavior for its own sake. Rather, user research puts our understanding and knowledge of human behavior into practice to improve performance. From this standpoint, user research shares more common elements with industrial engineering (IE) than with the behavioral sciences. Yet IE alone is largely insufficient to explain human cognitive performance. User research lies at the nexus of IE and psychology; in fact, many university programs recognize this gap and offer cross-listed coursework and degrees.
What separates the social part of work in the past from its modern instantiation is scope, scale, and complexity. In the information age, tools are collaborative, and we assume that users have learned to use sophisticated tools in complex ways and can use them in coordination with others. Thus, the user researcher must be mindful not only of the capabilities of individuals but also of those of cooperative groups. Most so-called "knowledge workers" in modern societies are familiar with tools such as WebEx, PowerPoint, and conference calling. Indeed, the process of developing and launching a Web site (the information age equivalent of a barn raising) involves the coordinated efforts of dozens of people who know and use sophisticated tools, often cooperatively and simultaneously. Today, the scale and complexity are greater than in the past, as work is distributed across time zones, countries, languages, and cultures. Distributed models of application development are also present in user research. For instance, many enterprise software companies hire talented user researchers to understand users in the markets where the tools are to be deployed. User researchers and designers in Sunnyvale, Bangalore, Beijing, and Sofia combine knowledge of human performance, results of user testing, and market expectations, and turn these into useful and usable designs. This cooperation is challenging but can be very rewarding.
With this brief history and context in mind, we can now construct a thoughtful definition of user research.