Internet, Phone, Mail, and Mixed-Mode Surveys

The Tailored Design Method

Don A. Dillman, Jolene D. Smyth, Leah Melani Christian

About This Book

The classic survey design reference, updated for the digital age

For over two decades, Dillman's classic text on survey design has aided both students and professionals in effectively planning and conducting mail, telephone, and, more recently, Internet surveys. The new edition is thoroughly updated and revised, and covers all aspects of survey research. It features expanded coverage of mobile phones, tablets, and the use of do-it-yourself surveys, and Dillman's unique Tailored Design Method is also thoroughly explained. This invaluable resource is crucial for any researcher seeking to increase response rates and obtain high-quality feedback from survey questions. Consistent with current emphasis on the visual and aural, the new edition is complemented by copious examples within the text and accompanying website.

This heavily revised Fourth Edition includes:

  • Strategies and tactics for determining the needs of a given survey, how to design it, and how to effectively administer it
  • How and when to use mail, telephone, and Internet surveys to maximum advantage
  • Proven techniques to increase response rates
  • Guidance on how to obtain high-quality feedback from mail, electronic, and other self-administered surveys
  • Direction on how to construct effective questionnaires, including considerations of layout
  • The effects of sponsorship on the response rates of surveys
  • Use of the capabilities offered by newly widespread media, including interactivity and the presentation of aural and visual stimuli
  • A reintroduction of the telephone, including strategies for coordinating landline and mobile phone contact

Grounded in the best research, the book offers practical how-to guidelines and detailed examples for practitioners and students alike.


Information

Publisher: Wiley
Year: 2014
ISBN: 9781118921302

Chapter 1
Sample Surveys in Our Electronic World

Hundreds of times every day someone decides to create a survey. The variety of organizations and individuals who make this decision is enormous, ranging from individual college students to the largest corporations. Community service organizations, nonprofit foundations, educators, voluntary associations, special interest groups, research scientists, and government agencies also all collect needed information by conducting surveys. The topics of these surveys vary greatly, from questions about health, education, employment, and political preferences to inquiries about television viewing, the use of electronic equipment, and interest in buying a new car, among many other things.
The reasons for deciding to conduct a survey are as diverse as the range of survey sponsors and topics. Sometimes, the justification is that the sponsors do not know the opinions or beliefs of those they want to survey. More typically, the sponsor has interests that go much deeper, wanting to know not just how many individuals in a group have a particular attitude, but how that attitude varies with other respondent characteristics that will be asked in the survey, such as across men and women or across different age or socioeconomic groups.
While the need to know something that is unknown drives the decision to conduct most surveys, the uses of survey results are as diverse as those who sponsor them. For example, one of us recently completed a community survey that was used to decide what facilities to include in a new neighborhood park that was about to be developed. University leaders use results from surveys of students to revise their undergraduate and graduate education programs. Public opinion pollsters use results from surveys of likely voters to predict who will win national and local elections. The Federal Reserve uses estimates of the unemployment rate produced monthly in the Current Population Survey to help set economic policy. Data from this same survey are used by individuals and businesses throughout the United States to make investment, hiring, and policy decisions. Market researchers use surveys to provide insights into consumer attitudes and behaviors. Nonprofit groups use surveys to measure attitudes about issues that are important to them and support for possible programs the group might pursue.
Surveys are both large and small. For example, over the course of a year the U.S. Census Bureau asks a few million households to respond to the American Community Survey. Others ask only a few hundred or even fewer individuals to respond. The survey response mode also varies, with some surveys being conducted by a single mode—in-person, web, telephone, or paper—while others provide multiple modes for answering questions. Sometimes respondents are asked to respond only once, while in other surveys a single individual may be asked to answer questions repeatedly over months or years, and surveys may be conducted in just a few weeks or over several months or years. In some cases people are asked to provide information about themselves or their households, and in other cases they are asked to provide information about a particular business or other organization with which they are affiliated.
Despite this diversity, all surveys still have a lot in common. Each is motivated by the desire to collect information to answer a particular question or solve a particular problem. In some cases the desired information is not available from any other source. In other cases, the information may be available, but it cannot be connected to other important information—such as other characteristics or related attitudes and behaviors—that need to be known in order to solve the problem or answer the question.
In most surveys only some of those in the population of interest are asked to respond. That is, the survey is based on a sample rather than being a census of every member of the target population. In addition, those who respond are asked questions they are expected to answer by choosing from among predetermined response categories or, occasionally, by providing open-ended answers in their own words. These commonalities and the enormous amount of money and effort now spent on surveys point to their importance as a tool for learning about people's characteristics, opinions, and behaviors, and for using those results to inform and direct public policy, business decisions, and many other endeavors.
Other nonsurvey means, both quantitative and qualitative, are available to social scientists, marketing professionals, government officials, special interest groups, and others for collecting useful information that will produce insight into the attitudes and behaviors of people and the groups they are a part of. These include unstructured interviews, focus groups, participant observation, content analyses, simulations, small group experiments, and analyses of administrative records or organic data such as birth and death records, sales transactions, records of online searches, social media, and other online behavior. Each of these methods can yield different types of information, and for some questions they are more appropriate than surveys, or they may be used in combination with surveys, to answer the research question or address the community problem.
The feature of the probability sample survey that distinguishes it from these other methods of investigation is that it can provide a close estimate of the distribution of a characteristic in a population by surveying only some members of that population. If done correctly, it allows one to generalize results with great precision, from a few to the many, making it a very efficient method for learning about people and populations.
The efficiency and importance of the probability sample survey might best be illustrated by considering an alternative way to learn about a population—a census. Every 10 years the U.S. Census Bureau attempts to contact and survey every household in the United States, as required by our Constitution. The resulting information is used to reapportion the U.S. House of Representatives so that each member represents about the same number of U.S. residents. This massive survey, known as the Decennial Census, costs billions of dollars to conduct. A smaller organization that wants to know the opinions of all U.S. residents on a particular issue could hardly afford such an undertaking. But with a probability sample survey, it can learn those opinions for considerably lower costs by selecting only some members of the population to complete the survey.
Even on a smaller scale, few would be able to afford to survey every undergraduate student at a large university in order to assess students' satisfaction in the education they are receiving. If this were necessary, studies of student satisfaction would seldom, if ever, be done. But probability sample surveys allow us to be much more efficient with our resources by surveying only a sample of students in a way that enables us to generalize to the entire student population.
Whatever the target population or research question, limiting our data collection to a carefully selected sample of the population of interest allows us to concentrate limited resources (e.g., time and money for follow-up communications, data cleaning, and analysis) on fewer individuals, yet obtain results that are only slightly less precise than they would be if every member of the population were surveyed.
Our purpose in this book is to explain how to conduct effective probability sample surveys. We discuss the fundamental requirements that must be met if one wants to generalize results with statistical confidence from the few who are surveyed to the many they are selected to represent. We also describe specific procedures for designing surveys in which one can have high confidence in the results. Regardless of whether your interest in surveys is to understand one of the many national surveys that are conducted for policy purposes or to gain knowledge of how to design your own survey of organization members, college students, customers, or any other population, it is important to understand what it takes to do a good survey and the multiple sources of error that can reduce the accuracy of the survey results—or completely invalidate them.

Four Cornerstones of Quality Surveys

In general, survey error can be thought of as the difference between an estimate that is produced using survey data and the true value of the variables in the population that one hopes to describe. There are four main types of error that surveyors need to try to minimize in order to improve the survey estimates.
  1. Coverage Error occurs when the list from which sample members are drawn does not accurately represent the population on the characteristic(s) one wants to estimate with the survey data (whether a voter preference, a demographic characteristic, or something else). A high-quality sample survey requires that every member of the population has a known, nonzero probability of being sampled, meaning they have to be accurately represented on the list from which the sample will be drawn. Coverage error is the difference between the estimate produced when the list is inaccurate and what would have been produced with an accurate list.
  2. Sampling Error is the difference between the estimate produced when only a sample of units on the frame is surveyed and the estimate produced when every unit on the list is surveyed. Sampling error exists anytime we decide to survey only some, rather than all, members of the sample frame.
  3. Nonresponse Error is the difference between the estimate produced when only some of the sampled units respond compared to when all of them respond. It occurs when those who do not respond are different from those who do respond in a way that influences the estimate.
  4. Measurement Error is the difference between the estimate produced and the true value because respondents gave inaccurate answers to survey questions. It occurs when respondents are unable or unwilling to provide accurate answers, which can be due to poor question design, survey mode effects, interviewer and respondent behavior, or data collection mistakes.
We consider reducing the potential for these errors as the four cornerstones of conducting successful sample surveys. Surveyors should attempt to limit each to acceptable levels. None of them can be ignored. As such, each receives detailed attention in the chapters that follow. Because these sources of error are so essential for defining survey quality, we describe each of them here in more detail.

Coverage Error

As we previously mentioned, the strength of a probability sample survey is that it allows us to collect data from only a sample of the population but generalize results to the whole, thus saving considerable time, money, and effort that would be incurred if we had to survey everyone in the population. However, in order to draw a sample, one has to have a sample frame, or a list of members of the target population, and any errors in that list have the potential to introduce coverage error into the final estimates that are produced. If some units from the target population are not included on the sample frame (i.e., undercoverage) and they differ from those that are in ways that are important to the survey, the final estimates will contain error.
For example, all other error sources aside, a landline random digit dial telephone survey would likely overestimate the prevalence of higher socioeconomic status because the well-off are more likely than the poor to have landline telephone service (i.e., the well-off are more likely to be on the landline random digit dial sample frame) (Blumberg & Luke, 2013). In fact, one of the challenges now being faced in conducting household telephone surveys is that only about 58% of households still have landlines (Blumberg & Luke, 2013), the traditional source of random digit dialing samples, and those who have them are quite different from those who do not on a number of important characteristics. Using the landline telephone frame alone (without supplementing it with a cell phone frame) for a national household survey would leave out significant portions of the population who are likely to differ in important ways from those included on the frame.
Similarly, conducting a national household survey by Internet would leave out significant portions of the population because, as of May 2013, only 73% of American adults have Internet access in the home (Pew Internet & American Life Project, 2013b). In comparison, an Internet survey of undergraduate students at a university, where all students are required to use the Internet, would likely have little coverage error, provided a list of all students could be obtained. In Chapter 3 we discuss in detail the threat of coverage error, its likely sources, and how to limit it.

Sampling Error

The extent to which the precision of the survey estimates is limited because only some people from the sample frame are selected to do the survey (i.e., sampled) and others are not is known as sampling error. If we have a sample frame with complete coverage (i.e., the list matches the population perfectly), we can say that sampling error is the difference between the estimates produced and the true value because we survey only a sample of the population and not everyone. The power of probability sampling, which is also discussed in detail in Chapter 3, is that estimates with acceptable levels of precision can usually be made for the population by surveying only a small portion of the people in the population. For example, a researcher can sample only about 100 members of the U.S. general public and, if all 100 respond, achieve estimates with a margin of error of +/−10%. Successfully surveying a sample of 2,000 individuals reduces the margin of error to about +/−2%. Surveying 100 or even 2,000 people rather than the approximately 315 million people in the United States represents an enormous and desirable cost savings, but doing so means that one has to be willing to live with some sampling error in the estimates.
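The margin-of-error figures above follow from the standard formula for a proportion, m = z·√(p(1−p)/n). A minimal sketch of that calculation (the function name and the 95% confidence z-value of 1.96 are our own illustrative choices, not from the text):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n completed
    responses, assuming a simple random sample with full response."""
    return z * math.sqrt(p * (1 - p) / n)

# The text's examples: about +/-10% for 100 responses, about +/-2% for 2,000.
print(f"n=100:  +/-{margin_of_error(100):.1%}")
print(f"n=2000: +/-{margin_of_error(2000):.1%}")
```

Note that quadrupling precision requires roughly sixteen times the responses, which is why sample sizes in the low thousands are common: beyond that point, each additional response buys very little added precision.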
Sampling error is an unavoidable result of obtaining data from only some rather than all members on the sample frame and exists as a part of all sample surveys. For this reason, we describe the importance of reducing survey error to acceptable levels, rather than being able to eliminate it entirely. By contrast, censuses—in which all members on the sampling frame are selected to be surveyed—are not subject to sampling error.
Many novice surveyors find sampling error to be somewhat nonintuitive. They find it difficult to imagine needing to survey only a few hundred or thousand people to learn about millions of households or individuals. Yet during each presidential election in the United States, surveys of between 1,000 and 2,000 likely voters are conducted that correctly estimate (within the limits of sampling error) the votes for each candidate. For example, across polls conducted in the final week of the 2012 campaign, the average error for each candidate was about 2 percentage points. Just as nonintuitive for some beginning surveyors to grasp is that in order to predict the outcome of a local election in a state or medium-sized U.S. city with perhaps 50,000 voters, nearly as many people need to be surveyed as are needed for predicting a national election.
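This counterintuitive result comes from the finite population correction: once the population is large, the required sample size barely grows with it. A minimal sketch using the standard formulas n₀ = z²p(1−p)/e² and n = n₀ / (1 + (n₀ − 1)/N) (the function name and the 3% target margin of error are our own assumptions for illustration):

```python
import math

def required_sample_size(N, e=0.03, p=0.5, z=1.96):
    """Simple-random-sample size needed for margin of error e at 95%
    confidence, applying the finite population correction for population N."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(required_sample_size(50_000))       # city of 50,000 voters -> 1045
print(required_sample_size(315_000_000))  # roughly the U.S. population -> 1068
```

A city of 50,000 voters and the entire United States require nearly identical samples for the same precision, which is exactly the point the beginning surveyor finds hard to grasp.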
The exact sampling error is easily calculated mathematically, as described in Chapter 3. However, the ease of making those calculations and the mathematical precision of the result lead to overreliance on it as a singular measure of the amount of error in a survey statistic. This tendency should be avoided. Sampling error calculations reflect the completed sample size; that is, only received responses are considered. The larger the number of responses, the greater the reported precision and statistical confidence. But these calculations ignore the possibility of coverage error, as well as the fact that many, and sometimes most, of the invited participants did not respond, which raises the potential for a third source of error: nonresponse error.

Nonresponse Error

Many sponsors think of a survey'...
