Improving Survey Methods

Lessons from Recent Research

Uwe Engel, Ben Jann, Peter Lynn, Annette Scherpenzeel, Patrick Sturgis

Book Information

This state-of-the-art volume provides insight into recent developments in survey research. It covers topics such as survey modes and response effects, bio-indicators and paradata, interviewer and survey error, mixed-mode panels, sensitive questions, conducting web surveys and access panels, coping with nonresponse, and handling missing data. The authors are leading scientists in the field and discuss the latest methods and challenges with respect to these topics.

Each of the book's eight parts opens with a brief chapter that provides historical context along with an overview of today's most critical survey methods. The subsequent chapters in each part focus on research applications in practice and discuss results from field studies. As such, the book will help researchers design surveys according to today's best practices.

The book's website www.survey-methodology.de provides additional information, statistical analyses, tables and figures.

An indispensable reference for practicing researchers and methodologists, or for any professional who uses surveys in their work, this book also serves as a supplement for graduate or upper-level undergraduate courses on survey methods taught in psychology, sociology, education, economics, and business. Although the book focuses on European findings, all of the research is discussed with reference to the survey-methodology field as a whole, including the US. As such, the insights in this book will apply to surveys conducted around the world.


Information

Publisher
Routledge
Year
2014
ISBN
9781317629702

1
Improving Survey Methods

General Introduction
Uwe Engel, Ben Jann, Peter Lynn, Annette Scherpenzeel and Patrick Sturgis

1.1 Introduction

Surveys are conducted all over the world and represent influential sources of information for public opinion formation and decision-making. Their quality should therefore be sufficient for this important function. We assume that surveys can fulfill this function only if they are appropriately designed and prepared to guard efficiently against the known sources of survey error (Biemer & Lyberg, 2003). Just what do these demands mean in a changing world? In responding to socio-technological change, survey research is itself changing. As new ways of conducting surveys emerge and replace others, they change the way in which people respond to survey requests and the way in which they answer survey questions. A case in point is the growing importance of web surveys, access panels, and the use of mobile communication devices such as mobile phones, smartphones, and tablet PCs. Special challenges arise from the mixing of survey modes, contact modes, and question and response formats.
We know that people respond differently to different implementations of survey research. Response rates, for instance, vary considerably between survey modes such as face-to-face, telephone, and web. The same applies to response behavior itself. Answers to sensitive questions are likely to be more honest if obtained in self-administered modes. But just how valid are answers obtained through special techniques for asking sensitive questions? We know that individual response propensities may be affected by design features such as mixtures of contact modes, the use of prepaid incentives, and refusal conversion efforts. But what about the effects of such interventions on sample composition and on the answering behavior of respondents? If mixing modes is an appropriate answer to declining response propensities and changing communication habits, what are its consequences? Do such interventions impair the comparability of survey responses and, if so, is it possible to enhance comparability post hoc?
All of this calls for a continued research focus on the improvement of survey methods. To this end, the current volume presents recent methodological and statistical research from different European countries. It attends to major sources of survey error, to established and emerging forms of conducting survey research, and to recent approaches to meeting the challenges that they present. In this chapter, we briefly introduce these major sources of survey error, starting with the types of error encountered in the first stages of a survey and proceeding through the whole survey process. As we will show, all chapters of this book relate to one or more of these survey design issues and the associated survey error.

1.2 Nonresponse Bias

The first and perhaps most important source of survey error is nonresponse. If surveys are used to arrive at sample estimates of unknown population characteristics, these estimates should come as close as possible to the true values. Most importantly, perhaps, they should be unbiased. It is clear that a survey will fail to reach this objective if systematic unit nonresponse distorts the randomness of a sample. Survey research is therefore well advised to pay attention to factors that impair this randomness. Response propensity modeling is a powerful approach to identifying such factors and quantifying their impact. Such propensities are individual response probabilities estimated with respect to auxiliary variables. Several chapters of the present volume deal with this approach (Chapters 14, 18, 20, and 21). Response propensities can be used, for instance, for the computation of modified Horvitz-Thompson estimates, for the assessment of nonresponse bias of mean estimates, or for weighting adjustments (Bethlehem, Cobben, & Schouten, 2011). Propensity models are useful for identifying selective forces in recruitment processes that should actually be random. In terms of statistical modeling, the approach is routinely applied by means of logistic regression. Somewhat more challenging, however, is the collection of the necessary auxiliary variables, since information about both nonrespondents and respondents is needed to model individual response probabilities.
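To make the routine concrete, here is a minimal sketch in Python. It is not taken from the volume: the frame data are simulated and the auxiliary variables (age, an urbanicity flag) are hypothetical. It estimates response propensities by logistic regression and uses their inverses as adjustment weights, the basic ingredient of a modified Horvitz-Thompson estimator.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical sampling frame: one row per sampled unit, with auxiliary
# variables known for respondents and nonrespondents alike.
rng = np.random.default_rng(42)
n = 5000
frame = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "urban": rng.integers(0, 2, n),  # 1 = urban neighborhood
})

# Simulated response mechanism (illustration only): older and rural
# units are more likely to respond.
lin = -1.2 + 0.02 * frame["age"] - 0.4 * frame["urban"]
frame["responded"] = rng.random(n) < 1.0 / (1.0 + np.exp(-lin))

# Logistic regression of the response indicator on the auxiliary
# variables yields estimated individual response propensities.
X = sm.add_constant(frame[["age", "urban"]])
fit = sm.Logit(frame["responded"].astype(int), X).fit(disp=0)
propensity = np.asarray(fit.predict(X))

# Weighting adjustment: respondents are weighted by the inverse of
# their estimated propensity.
mask = frame["responded"].to_numpy()
weights = 1.0 / propensity[mask]
print("unadjusted mean age:     ", frame.loc[mask, "age"].mean())
print("propensity-weighted mean:", np.average(frame.loc[mask, "age"], weights=weights))

The propensity-weighted mean of an auxiliary variable should move back toward its frame mean; in practice, the same weights would be applied to the substantive survey variables.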
Estimates of individual response probabilities crucially depend on the sets of auxiliary variables available to the survey researcher. Such variables may come from the contact process and/or from population registers. Sometimes information is used that classifies respondents' places of residence by district or neighborhood. When using contact information, a further source of variation arises from the consideration of refusal conversion efforts. In this context, Chapter 18 discusses how estimates of nonresponse bias change if the set of auxiliary variables is altered. There will generally be a tendency to use as auxiliary variables only the ones that happen to be available in a particular context. The availability of effective auxiliary variables is discussed in Chapter 14. The available variables may not, however, represent the best choice. The situation gets even worse in cross-national research, as discussed in Chapter 29.
Another option for collecting the necessary auxiliary variables consists in exploiting the contact process that leads target persons to accept or refuse a survey invitation. For that purpose, surveys collect so-called paradata (Chapter 25). Among other uses, such data can serve to detect two kinds of factors: factors that affect the probability of making contact with target persons, and factors that shape the probability of achieving cooperation. Another option is the application of approaches that aim at winning over persons who initially refuse to take part. One such approach is the “basic question” approach (Bethlehem, 2009); another is the Pre-Emptive Doorstep Administration of Key Survey Items (PEDAKSI) methodology (Lynn, 2003). Such approaches obtain the required auxiliary variables (i.e., the required background data) not by collecting data alongside the survey data (i.e., paradata), but by enlarging the body of survey data itself. Chapters 5 and 18, for instance, take advantage of this approach.
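The two kinds of factors can be separated by modeling the stages in sequence: one model for making contact, a second for cooperation among those contacted. The sketch below uses invented paradata fields (number of call attempts, an evening-call flag) purely to illustrate this decomposition; it does not reproduce any analysis from the volume.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical paradata: one row per sampled unit.
rng = np.random.default_rng(7)
n = 3000
para = pd.DataFrame({
    "n_call_attempts": rng.integers(1, 10, n),
    "evening_calls": rng.integers(0, 2, n),  # 1 = at least one evening call
})

# Simulated outcomes (illustration only): evening calls raise the
# contact probability; many call attempts go along with lower cooperation.
p_contact = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * para["evening_calls"])))
para["contacted"] = rng.random(n) < p_contact
p_coop = 1.0 / (1.0 + np.exp(-(1.0 - 0.2 * para["n_call_attempts"])))
para["cooperated"] = para["contacted"] & (rng.random(n) < p_coop)

# Stage 1: which factors affect the probability of making contact?
Xc = sm.add_constant(para[["evening_calls"]])
contact_fit = sm.Logit(para["contacted"].astype(int), Xc).fit(disp=0)

# Stage 2: which factors shape cooperation, given that contact was made?
contacted = para[para["contacted"]]
Xk = sm.add_constant(contacted[["n_call_attempts"]])
coop_fit = sm.Logit(contacted["cooperated"].astype(int), Xk).fit(disp=0)

print(contact_fit.params)
print(coop_fit.params)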
High response rates are often regarded as an indicator of sample quality. The fact that nonresponse bias tends to become smaller as response rates increase lends some support to this view. However, in view of the range of response rates typically achieved in survey research, substantial scope for nonresponse bias is likely to remain even when response rates are high. Hence, high response rates alone cannot guarantee unbiased sample estimates. Response rates also typically vary across survey modes and countries. Chapter 14 discusses the former source of variation; Chapter 29 addresses the latter, presenting clear evidence from the European Social Survey and discussing possible reasons for the variation. It may accordingly be inappropriate to equate acceptable sample quality with a single overall benchmark such as a 70 percent target response rate. As Stoop notes with reference to Groves in Chapter 29, “what could be done is to shift the focus away from a blind pursuit of high response rates to an informed pursuit of high response rates (…)”, and to consider as a criterion how balanced the response is across subgroups of a sample. This is done, for instance, in an approach that takes the dispersion of individual response propensities around their mean value, i.e., around the response rate, as a building block for so-called R-indicators of “representativity” (Bethlehem et al., 2011).
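Computationally, the R-indicator is simple once propensities have been estimated: following Bethlehem et al. (2011), it is one minus twice the standard deviation of the estimated response propensities, so that a value of 1 indicates identical propensities for all sample members. A minimal sketch:

import numpy as np

def r_indicator(propensities):
    # R-indicator of "representativity": 1 minus twice the standard
    # deviation of the estimated response propensities. R = 1 means all
    # units share the same propensity (perfectly balanced response);
    # smaller values signal more selective response.
    return 1.0 - 2.0 * np.std(propensities, ddof=1)

print(r_indicator(np.full(100, 0.6)))  # 1.0: maximally balanced
print(r_indicator(np.r_[np.full(50, 0.2), np.full(50, 0.9)]))  # about 0.3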

1.3 Inducing Survey Response

Survey designs may strive to counterbalance low or even declining response propensities through the implementation of special design features. One particularly important tool is the use of respondent incentives. Earlier reviews showed the effectiveness of this method for postal and interviewer-assisted surveys (e.g., Church, 1993; Engel & Schnabel, 2004; Singer, Groves, & Corning, 1999). At that time, prepaid monetary incentives turned out to be especially effective in enhancing response rates for these survey modes. Since then, however, survey research has experienced a substantial and growing use of web surveys and volunteer web panels. In addition, individual experimental findings have suggested that the effectiveness of monetary incentives has become even stronger in recent years (Engel, Bartsch, Schnabel, & Vehre, 2012, p. 128f.). One could speculate about a growing need for incentives to motivate respondents. Will incentives become indispensable for motivating respondents in the future, perhaps as an inadvertent side effect of the widespread use of volunteer web panels that pay for answers on a regular basis? Chapter 28 presents an updated review of the effectiveness of incentives and expands the respective knowledge to surveys carried out on the web. In this context, Chapter 19 additionally reports on the effect of different types of incentives on the reactivation of sleepers in a randomly recruited internet panel. Another incentive experiment, for the mobile phone mode, is reported in Chapter 5.
An idea of growing importance in survey methodology is that of turning away from survey designs that approach all sampled households in the same manner. It is generally recommended, for instance, to let interviewers address possible queries and concerns of interviewees. Chapter 18 discusses the strengths of various arguments an interviewer might use to convince reluctant persons to take part in a survey. Another variant consists in offering interviews of different lengths to persons whose motivation differs accordingly. This may be accomplished by the two approaches cited above, the PEDAKSI methodology and the “basic question” approach. Meanwhile, the literature speaks of “adaptive survey designs” (Bethlehem et al., 2011) to express the idea that elements of survey designs respond to situational factors in different ways. However, as shown in detail in Chapter 27, the idea of adapting the survey protocol to the circumstances of sample members in order to improve response rates long did not spread to elements of the survey process other than interview length; most other aspects of survey design and implementation remain standardized across all sample members. More recently, researchers have begun to explore the idea of treating sample subgroups differently. Interest has focused on starting data collection in a standardized way but then changing it in different ways for different sample members as fieldwork progresses. Here a distinction can be made between targeted and tailored strategies: targeted strategies treat each of a limited number of sample subgroups differently, while tailored strategies treat each individual sample member differently. Chapter 27 reviews both theory and practice regarding targeted strategies to improve response rates or response balance and identifies three main categories of targeted strategies.

1.4 Enhancing Survey Response or Survey Balance?

Instead of striving for “high response rates” alone, “response balance” may be the more relevant target. This is certainly an insight one can derive from the chapters on nonresponse bias and survey response inducement described in the previous paragraphs. Prepaid monetary incentives are especially known for motivating survey cooperation. However, should the use of incentives be regarded solely as a method to induce survey response? We know from a field experiment, for instance, that the use of prepaid incentives can considerably reduce the “high-education bias” of samples (Vehre, Bartsch, & Engel, 2013) relative to reference distributions from official statistics (Microcensus). A comparable question applies to refusal conversion efforts. We know from another field experiment that refusal conversion efforts do not necessarily induce response, yet such efforts nevertheless change sample composition systematically (Engel et al., 2012, pp. 132f. and Chapter 18).
On the whole, the preceding paragraphs suggest that it is appropriate to aim for response balance as part of targeted inducement strategies while considering possible inadvertent side effects. Chapter 4 provides evidence for one such side effect in the form of repeatedly observed cross-sectional correlations between interviewers’ refusal conversion efforts and satisficing response behavior.

1.5 Web Surveys, Internet Panels, and Mixed-Mode Designs

A substantial proportion of survey research is now conducted over the Internet. Chapter 14 discusses three basic aspects that may complicate using the web for surveying the general population. It asks whether web surveys can be used in official statistics and discusses in detail the methodological issues of under-coverage, sample selection, and nonresponse, as well as some correction techniques. Among other things, it highlights the crucial distinction between self-selected “opt-in” panels and probability-based panels and discusses under what conditions web surveys can be used in official statistics. In that discussion, the selection and recruitment modes play important roles. Probability sampling is regarded as indispensable. For web surveys, probability sampling is possible by carrying out recruitment in a different mode. For instance, one can draw random samples from population registers and send out invitations to a web survey by mail. Another example is the use of random telephone sampling.
However, is it sufficient to recruit people at random? In view of the differential response propensities of target persons, this is certainly not sufficient for unbiased estimates. Instead, we have to understand better who agrees to a survey request and who does not. This holds true for single surveys as well as for panels. Two large probability panels now afford an opportunity to study the recruitment process into a panel in greater detail. In the Netherlands, this is the Longitudinal Internet Studies for the Social sciences (LISS) panel.
