
Survey Bias

Survey bias refers to systematic error introduced into survey results by the design, conduct, or analysis of a survey. It can occur when questions are leading or ambiguous, when the sample is not representative of the target population, or when nonresponse differs systematically between those who answer and those who do not. Survey bias can distort findings and lead to inaccurate conclusions.

Written by Perlego with AI-assistance

7 Key excerpts on "Survey Bias"

  • Measurement Errors in Surveys
    • Paul P. Biemer, Robert M. Groves, Lars E. Lyberg, Nancy A. Mathiowetz, Seymour Sudman (Authors)
    • 2011 (Publication Date)
    • Wiley (Publisher)
    Although other disciplines use survey data (e.g., sociology and political science), they appear to employ similar languages to one of those two. Some attention to terminological differences is necessary to define measurement error unambiguously (see Deming, 1944; Kish, 1965; Groves, 1989; Chapter 24, this volume). A common conceptual structure labels the total error of a survey statistic the mean squared error; it is the sum of all variable errors and all biases (more precisely, the sum of variance and squared bias). Bias is the type of error that affects the statistic in all implementations of a survey design; in that sense it is a constant error (e.g., all possible surveys using the same design might overestimate the mean years of education per person in the population). A variable error, measured by the variance of a statistic, arises because achieved values differ over the units (e.g., sampled persons, interviewers used, questions asked) that are the sources of the errors. The concept of variable errors inherently requires the possibility of repeating the survey, with changes of units in the replications (e.g., different sample persons, different interviewers). A survey design defines the fixed properties of the data collection over all possible implementations within a fixed measurement environment. Hansen, Hurwitz, and Bershad (1961) refer to these as the essential survey conditions. For example, response variance is used by some to denote the variation in answers to the same question if repeatedly administered to the same person over different trials or replications.
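    The mean-squared-error decomposition described in this excerpt can be illustrated with a short simulation. The sketch below is a hypothetical illustration (not taken from the book): it repeatedly draws samples from an artificial population under a measurement process with a constant upward reporting error, and checks empirically that MSE ≈ variance + bias².

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical population: true years of education per person.
        population = rng.normal(loc=12.0, scale=3.0, size=50_000)
        true_mean = population.mean()

        # A constant +0.5-year over-report affects every implementation of the
        # design (bias); sampling variability differs across replications.
        REPORTING_SHIFT = 0.5          # assumed systematic error, for illustration
        SAMPLE_SIZE = 200
        N_REPLICATIONS = 2_000

        estimates = np.empty(N_REPLICATIONS)
        for r in range(N_REPLICATIONS):
            sample = rng.choice(population, size=SAMPLE_SIZE, replace=False)
            estimates[r] = (sample + REPORTING_SHIFT).mean()

        bias = estimates.mean() - true_mean           # constant (systematic) component
        variance = estimates.var()                    # variable component
        mse = np.mean((estimates - true_mean) ** 2)   # total error

        print(f"bias^2 + variance  = {bias**2 + variance:.4f}")
        print(f"mean squared error = {mse:.4f}")      # the two agree up to simulation noise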
  • Marketing Research
    • Carl McDaniel, Jr., Roger Gates (Authors)
    • 2020 (Publication Date)
    • Wiley (Publisher)
    Strategies for minimizing other types of survey errors are summarized in Exhibit 6.2. Refusal rate: the percentage of persons contacted who refused to participate in a survey. Response bias: error that results from the tendency of people to answer a question incorrectly through either deliberate falsification or unconscious misrepresentation. Practicing Marketing Research: Unconscious Misrepresentation Can Come from Many Sources, but There Are Ways to Avoid It. Unconscious misrepresentation can be due to a number of factors. Among the most common are:
    • Acquiescence bias: statistical error in the responses of subjects caused by some respondents' tendency to agree with all questions or to concur with a particular position; the "yes effect."
    • Administrative error: results are unrepresentative due to human or process errors, independent of survey content.
    • Apathy bias: statistical error in the responses of subjects caused by some respondents' lack of emotion, motivation, or enthusiasm.
    • Auspices bias: statistical error in the responses of subjects caused by the respondents being influenced by the organization conducting the study (e.g., a sales rep for a pharmaceuticals company completes a survey related to the effectiveness of one of the company's new drugs).
    • Extremity bias: statistical error in the responses of subjects caused by some respondents' tendency to use extremes when responding to questions. The opposite phenomenon, whereby respondents temper their extreme opinions, is called central tendency bias.
    • Memory bias: statistical error in the responses of subjects caused by enhanced or impaired recall or the alteration of what the respondent remembers (e.g., a respondent is asked to rate the facilities of a resort she visited on a trip where she contracted malaria).
  • Survey Errors and Survey Costs
    The distinction drawn above is between those labeled "describers" and those labeled "modelers." Modelers most often deal with statistics that are functions of variances and covariances of survey measures, and hence some shared or constant part of measurement errors may not affect their statistics. Describers, however, are more often affected by these. The second question determines whether a problem is viewed as a bias or as a component of variance of the statistic. One of the most common instances of this in the experience of sampling statisticians is the following. A researcher who is the client of the sampling statistician, after having collected all the data on the probability sample survey (for simplicity, let us assume with a 100 percent response rate), observes that there are "too many" women in the sample. The researcher calls this a "biased" sample; the statistician, viewing this as one of many samples of the same design (and being assured that the design is unbiased), views the discrepancy as evidence of sampling variance. In the view of the sampler, the sample drawn is one of many that could have been drawn using the design, with varying amounts of error over the different samples. If properly executed, the proportion of women would be correct in expectation over all these samples. The sampler claims the sample proportion is an unbiased estimate of the population proportion. This is a conflict of models of the research process. The sampler is committed to the view that the randomization process on average produces samples with desirable properties; the analyst is more concerned with the ability of this single sample to describe the population. The third question determines whether some types of error in the statistic of interest are eliminated by the model assumptions. This is perhaps the most frequent source of disagreement about error in statistics.
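    The describer/modeler disagreement in this excerpt is easy to reproduce numerically. The sketch below is an illustrative simulation (not from the book): in a frame that is exactly 50 percent women, a single simple random sample can contain "too many" women, yet the sample proportion averaged over all replications of the same design matches the population value, which is what the sampler means by unbiased in expectation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical frame: 50,000 women and 50,000 men.
        N_WOMEN, N_MEN = 50_000, 50_000
        true_proportion = N_WOMEN / (N_WOMEN + N_MEN)    # 0.5

        SAMPLE_SIZE = 400
        N_REPLICATIONS = 10_000

        # Counting women in a simple random sample drawn without replacement
        # is a hypergeometric draw, so all replications can be simulated at once.
        women_in_sample = rng.hypergeometric(N_WOMEN, N_MEN, SAMPLE_SIZE, size=N_REPLICATIONS)
        proportions = women_in_sample / SAMPLE_SIZE

        print("one realized sample   :", proportions[0])      # can look "biased"
        print("mean over all samples :", proportions.mean())  # ~0.5: unbiased in expectation
        print("sampling std. error   :", proportions.std())   # sampling variance, not bias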
  • The Total Survey Error Approach: A Guide to the New Science of Survey Research
    • eBook - PDF
    A low response rate can make the survey unscientific: "In some surveys, a relatively small percentage of people selected in a sample actually complete a questionnaire, and the respondents differ significantly from non-respondents on a characteristic of relevance to the survey. If this fact is ignored in the reporting of results, the survey fails to meet an important criterion of being scientific." The final example is that a survey is not scientific if the question's wording produces biased answers. This definition of a scientific sample survey has some ambiguities, which shows the complexity of defining a scientific survey. Take the matter of response rates. The nonresponse paragraph seems to indicate that a survey is scientific so long as it tries to minimize and/or adjust for nonresponse, but the later paragraph on nonscientific surveys seems to say that nonresponse can invalidate a survey. This inconsistency may properly reflect the nature of response rate as a problem (it can render a survey nonscientific but does not necessarily do so), but it also shows that there always will be ambiguities in working through what makes a survey scientific. The Statistical Impact of Error: While it would be best if all of the types of error described above could be eliminated, that is not possible. The goals instead are to keep them at minimal levels and to measure them so that their statistical impact can be assessed. Survey error affects the statistical analysis of survey data. Statisticians make two distinctions between types of error that have important implications for statistical analysis. The first key distinction is between systematic and random error. Systematic error is usually associated with a patterned error in the measurement. This patterned error is also known as bias. An example of systematic bias would be measuring the average age from a survey if older people were less willing to be interviewed.
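    The age example at the end of this excerpt can be made concrete with a short simulation. The sketch below uses hypothetical numbers (not from the book): response propensity declines with age, so the mean age among respondents is systematically too low, and no increase in sample size removes that bias.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical population of ages (illustrative values only).
        ages = rng.integers(18, 90, size=200_000)
        true_mean_age = ages.mean()

        # Assume older people are less willing to be interviewed:
        # response probability falls linearly with age.
        response_prob = np.clip(0.9 - 0.008 * (ages - 18), 0.05, 0.9)
        responded = rng.random(ages.size) < response_prob

        respondent_mean_age = ages[responded].mean()

        print(f"true mean age       : {true_mean_age:.2f}")
        print(f"respondent mean age : {respondent_mean_age:.2f}")   # systematically lower
        print(f"nonresponse bias    : {respondent_mean_age - true_mean_age:+.2f}")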
  • Marketing Research Methods: Quantitative and Qualitative Approaches
    • eBook - PDF
    One example is the Heckman correction for self-selection (see Heckman, 1979). But this approach is conditional on the validity of the specified model. It may address the selection bias, but it also involves a risk (the potential bias if we misspecify the model of respondents' self-selection); see Goldberger (1983) and Puhani (2000). Newey et al. (1990) consider semi-parametric estimators. In general, these issues are difficult to handle. Therefore, we should plan surveys aimed at minimizing the incidence of this problem. Variance (Nonsystematic Errors): The survey may increase the presence of nonsystematic deviations, decreasing the precision of the estimators (increasing their variance). In the context of sampling error, a method leading to larger variance can be selected because of a cost trade-off (e.g., some cluster method instead of simple random sampling). Notice also that different probabilistic sampling methods lead to estimators with different variances (e.g., in systematic sampling it is usually larger than in SRS) and different costs. If the sample size is too small, the variances can also be quite large, inflating confidence intervals. All these features are usually considered in the design of the sampling procedure. Non-sampling Errors: These are caused by some imperfect aspect of the research design or a mistake in the research execution. Usually there are three categories: administrative errors, respondent errors, and measurement errors. Because of the administrative and respondent errors, one must be cautious designing a survey and cautious interpreting the results. Administrative Error: These are errors due to incorrect survey management (faulty planning, implementation, and analysis). They can happen in the fieldwork or in the office. Besides errors related to sample selection, the main administrative errors are as follows.
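    The cost/variance trade-off mentioned in this excerpt (cluster sampling versus simple random sampling) can be checked with a short simulation. The sketch below is illustrative only, with assumed clustering parameters: people in the same cluster share a common component, and with the same total sample size the cluster design yields a visibly larger variance for the estimated mean.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical population: 500 clusters of 100 people; members of a
        # cluster share a cluster-level effect, so within-cluster correlation > 0.
        N_CLUSTERS, CLUSTER_SIZE = 500, 100
        cluster_effects = rng.normal(0.0, 2.0, size=N_CLUSTERS)
        values = cluster_effects[:, None] + rng.normal(0.0, 1.0, size=(N_CLUSTERS, CLUSTER_SIZE))

        SAMPLE_SIZE = 500               # total respondents per survey
        N_REPLICATIONS = 2_000
        flat = values.ravel()

        srs_means, cluster_means = [], []
        for _ in range(N_REPLICATIONS):
            # Simple random sample of individuals.
            srs_means.append(rng.choice(flat, size=SAMPLE_SIZE, replace=False).mean())
            # One-stage cluster sample: whole clusters until the same n is reached.
            picked = rng.choice(N_CLUSTERS, size=SAMPLE_SIZE // CLUSTER_SIZE, replace=False)
            cluster_means.append(values[picked].mean())

        print("variance of the mean, SRS     :", np.var(srs_means))
        print("variance of the mean, clusters:", np.var(cluster_means))   # larger, as the text notes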
  • Margins of Error: A Study of Reliability in Survey Measurement
    • eBook - PDF
    1.2.1 Classifying Types of Survey Error: There are a number of different ways to think about the relationship among the several types of survey error. One way is to describe their relationship through the application of classical statistical treatments of survey errors (see Hansen, Hurwitz, and Madow, 1953). This approach begins with an expression of the mean square error (MSE) for the deviation of the sample estimator (ȳ) of the mean (for a given sampling design) from the population mean (μ), that is, MSE(ȳ) = E(ȳ − μ)². This results in the standard expression: MSE(ȳ) = Bias² + Variance, where Bias² refers to the square of the theoretical quantity E(ȳ) − μ, and Variance refers to the variance of the sample mean. Within this statistical tradition of conceptualizing survey errors, bias is a constant source of error conceptualized at the sample level. Variance, on the other hand, represents variable errors, also conceptualized at the sample level, but this quantity is obviously influenced by the within-sample sources of response variance normally attributed to measurement error. Following Groves' (1989) treatment of these issues, we can regroup coverage, sampling, and nonresponse errors into a category of nonobservational errors and also group measurement errors into a category of observational errors. Observational errors can be further subclassified according to their sources, e.g., into those that are due to interviewers, respondents, instruments, and modes of observation. Thus, Groves' fourfold classification becomes even more detailed, as seen in Table 1.1.
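    The standard expression quoted in this excerpt follows from adding and subtracting E(ȳ) inside the squared deviation. A minimal derivation, in the notation reconstructed above:

        \begin{aligned}
        \mathrm{MSE}(\bar{y}) &= E\!\left[(\bar{y}-\mu)^{2}\right] \\
                              &= E\!\left[\big(\bar{y}-E[\bar{y}]\big)^{2}\right]
                                 + \big(E[\bar{y}]-\mu\big)^{2}
                                 + 2\,\big(E[\bar{y}]-\mu\big)\,E\big[\bar{y}-E[\bar{y}]\big] \\
                              &= \underbrace{\operatorname{Var}(\bar{y})}_{\text{variable error}}
                                 \;+\; \underbrace{\big(E[\bar{y}]-\mu\big)^{2}}_{\text{Bias}^{2}}
        \end{aligned}

    The cross term vanishes because E[ȳ − E[ȳ]] = 0, which gives exactly the Bias² + Variance decomposition used in the excerpt.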
  • Introduction to Survey Quality
    • Paul P. Biemer, Lars E. Lyberg (Authors)
    • 2003 (Publication Date)
    Errors that do not sum to zero when the sample observations are averaged are referred to as systematic errors (Figure 2.4). When the systematic errors are such that the errors in the positive direction dominate (or outnumber) the errors in the negative direction, the sample average will tend to be too high, or positively biased. Similarly, when the systematic errors are such that the negative errors dominate, the sample average will be negatively biased. It is important to note that in our discussion of nonsampling error, the definitions of systematic error and variable error do not refer to what happens in the one particular sample that is selected. Rather, they refer to the collection of samples and outcomes of the same survey process over many repetitions under essentially the same survey conditions. This concept of the survey as a repeatable process is similar to the assumptions made in the literature on statistical process control. For example, consider a process designed for the manufacture of some product, say a computer chip. What is important to the designers of the process is the quality of the chips produced by the process over many repetitions of the process, not what the process yields for a particular chip. (Of course, that may be of primary interest to the consumer who purchases the chip!) Similarly, a survey is a process, one that produces data. Although we as consumers of the data are interested primarily in what happens in a particular implementation of the survey, the theory of survey data quality is more concerned about the process and what it yields over many repetitions. Example 2.4.1: The survey question can be a source of either systematic or variable error in a survey. For example, consider a question that asks about a person's consumption of alcohol in the past week. Respondents may try to estimate their consumption rather than recall exactly the amount they consumed.
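    The systematic/variable distinction drawn here can be illustrated with the alcohol-consumption question. The sketch below uses hypothetical numbers (not from the book): each answer combines a constant under-report with a random estimation error, and averaging over many respondents shrinks the variable part while leaving the systematic part untouched.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical true weekly alcohol consumption (drinks) per respondent.
        n_respondents = 2_000
        true_drinks = rng.gamma(shape=2.0, scale=2.5, size=n_respondents)

        SYSTEMATIC_UNDERREPORT = -1.0   # assumed: answers understate by about one drink
        noise = rng.normal(0.0, 2.0, size=n_respondents)   # variable estimation error
        reported = true_drinks + SYSTEMATIC_UNDERREPORT + noise

        error_in_mean = reported.mean() - true_drinks.mean()
        print(f"spread of variable error (sd): {noise.std():.2f}")
        print(f"error in the survey mean     : {error_in_mean:+.2f}")
        # The error in the mean stays near -1.0 no matter how many respondents
        # are averaged: random errors cancel, the systematic under-report does not.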
Index pages curate the most relevant extracts from our library of academic textbooks. They are created with an in-house natural language model (NLM), and each one adds context and meaning to a key research topic.