Psychological Research
eBook - ePub

Psychological Research

Innovative Methods and Strategies

  1. 304 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Starting a research project, however large or small, can be a daunting prospect. New researchers can be confronted with a huge number of options, not only of topic but of conceptual underpinning. It is quite possible to conduct research into, say, memory from a number of different research traditions. Psychology also has links with several other disciplines, and it is possible to utilise their techniques; the difficulty is quite simply the wide variety of methodological approaches that psychological research embraces. In this collection, authors have been recruited to explain a wide range of research strategies and theories, with examples from their own work. Their successes, as well as the problems they encountered, are explained to provide a comprehensive and practical guide for all new researchers. The collection will be a great help to undergraduates about to start final-year projects and should be required reading for all those considering graduate-level research.



Chapter 1

Introduction

Contemporary psychological research: visions from positional standpoints

John Haworth



This chapter addresses the topic of contemporary psychological research and the different standpoints from which this is conducted. While much existing literature presents research as being experimental or non-experimental, this introduction examines the similarities in these paradigms, as well as the differences, and shows the potential for multi-method research. The chapter also highlights the debate on the nature of knowledge and how we perceive ourselves and the structures of society.
Psychological research is undertaken for many reasons. An aim can be to gain knowledge and understanding. An objective can be to improve well-being. The meaning of these terms is, of course, open to fierce debate, with different viewpoints and values influencing how research is undertaken (Jahoda 1981). Suppose, however, that one is interested in the effect of a new treatment to improve a certain mental illness. The treatment may be a drug or a form of psychotherapy. By observing the effect of the treatment on a group of patients, we may find that 60 per cent recover. At first sight this may seem to validate the effect of the treatment. But it may be that a certain percentage of patients recover spontaneously without any treatment. If this were 60 per cent, the effect of the treatment could be nil. It is obviously necessary to compare the effect of treatment on one group of patients with the effect of no treatment on a similar group of patients. We could set up two groups of patients who were similar in relation to factors which may influence the effect of the treatment, such as age, gender, occupation, etc., and see if the recovery rates were different in the two groups.
However, as we do not know the spontaneous recovery rate of individual patients, we would not have been able to match the groups on the very variable which could confound our study. It may be that in our assignment of patients to the treatment group we were influenced by some factor, such as the attractiveness of patients, which resulted in spontaneous recovery rate being systematically associated more with the treatment than the non-treatment group. To attempt to control systematic errors due to differences in subjects we would assign each patient by chance, by the toss of a coin, to the treatment or non-treatment group. This random assignment of subjects to the treatment, or ‘experimental group’, and non-treatment, or ‘control group’, would mean that the possibility of systematic errors due to differences in subjects was changed into random error, or a chance factor. We would then perform a statistical test to see at what level of probability the result had occurred by chance. Under these conditions, where patients had been randomly assigned to the treatment and non-treatment groups, if we found that there was only a 5 per cent probability (p = .05) that the result had occurred by chance, we could be tempted to accept the validity of the treatment.
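The logic of the paragraph above can be sketched in a short simulation. This is a minimal, hypothetical illustration, not a real trial: the group size, the 30 per cent spontaneous recovery rate, and the assumed 60 per cent recovery rate under treatment are all invented for the example. Assignment is by coin toss, and the recovery counts are compared with a simple 2×2 chi-square test against the critical value for p = .05.

```python
import random

def assign_randomly(patients):
    """Toss a coin for each patient: heads -> treatment, tails -> control."""
    treatment, control = [], []
    for p in patients:
        (treatment if random.random() < 0.5 else control).append(p)
    return treatment, control

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table
    (a, b = treated recovered/not; c, d = control recovered/not)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

random.seed(1)
treated, controls = assign_randomly(list(range(200)))

# Invented rates for illustration: 30% spontaneous recovery,
# with treatment assumed to raise recovery to 60%.
rec_t = sum(random.random() < 0.60 for _ in treated)
rec_c = sum(random.random() < 0.30 for _ in controls)

chi2 = chi_square_2x2(rec_t, len(treated) - rec_t,
                      rec_c, len(controls) - rec_c)
significant = chi2 > 3.841  # critical value for p = .05 with df = 1
```

With a difference of this invented size the test will typically come out significant, but, as the text goes on to argue, a significant result here still says nothing about placebo effects or experimenter effects.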
Unfortunately, we know that some people will respond to an inert substance, or placebo, when they are told they are being treated, in a similar way to patients who have received an active drug. In order to control for spurious treatment effects, the patients could be randomly assigned to two groups: one group would receive the active drug, or a specific form of psychotherapy thought to be potent, while the other group would receive the placebo (the inert substance), or a form of psychotherapy similar to the form thought to be potent but not containing the potent element, if this were possible. The active form of the treatment may then be shown to have a greater effect, at a certain level of probability, than the non-active form.
But our problems are still not over. The person administering the treatment may know which is the active form and which is the non-active form, and this knowledge can in some cases influence susceptible people in their reaction to the treatment. These experimenter effects, as they are termed, can in the case of drug treatment be overcome if neither the patient nor the person giving the treatment knows whether the drug being administered is active or inert. This double blind procedure, which is used in clinical trials of certain drugs, is obviously very difficult if not impossible to undertake with certain forms of psychotherapy.
Another objection to this experiment is that it is not ethical to assign patients randomly to treatment and non-treatment groups without their permission, and that the use of volunteer patients for treatment, who may have different characteristics from non-volunteers, could possibly influence an assessment of how efficacious the treatment would be on non-volunteers. The experiment could thus be high on what is termed internal validity: that the administration of treatment, or manipulation of the independent variable, has had a significant real effect on the dependent variable, in this case mental health. However, the experiment could be low on external validity: the extent to which the results of research can be generalised across people, places, times and other measures of a complex variable, such as a form of psychotherapy.
Despite these shortcomings the results could be viewed as encouraging and further research undertaken to see if the results are repeatable on different samples of people under controlled conditions. The validity of the treatment, and more generally the validity of a causal statement that the manipulation of a particular independent variable causes a certain change in a dependent variable, is thus ultimately assessed by our general experience of the relationship. The validity of a causal statement cannot be proven by a single experiment. In fact, as we shall see later, it cannot be proven at all. Instead, all that can be achieved is increasing confidence in the relationship.

CONTROL IN RESEARCH

Carlsmith, Ellsworth and Aronson (1976) define an experiment as ‘a study in which the investigator has some control over the independent variables and can assign subjects to conditions at random’ (p. 26). They consider that, in general, random assignment is one of the experimenter’s most important tools for ruling out the dangers of systematic error, which can typically influence one condition without affecting another and can lead to a spurious finding. This control over potentially confounding subject variables by the ability to randomise is considered to be the critical attribute for defining a study as an experiment. They also point out that while random assignment is essential for eliminating systematic error due to the subjects being in different conditions, it cannot reduce the amount of random error or ‘background noise’ in the experiment. If the treatment is only one of a large number of factors influencing the subject’s behaviour in important ways, its influence may not be strong enough to stand out above the variability introduced by all the other extraneous factors. A common means of controlling random error is, they note, to hold important extraneous variables constant at a single level. This, however, can limit the generality of the conclusions of the experiment to highly specific situations. They also note that ‘it is never possible to control all sources of random error, the experimenter must use judgement in deciding which extraneous factors are most likely to produce large fluctuations in the particular variable being measured’ (p. 17).
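Carlsmith, Ellsworth and Aronson's distinction between systematic and random error can be made concrete with a small simulation. In this hypothetical sketch each patient carries a hidden spontaneous recovery propensity (a subject variable the researcher cannot see, with numbers invented for the example); randomly splitting the patients over and over shows that no systematic difference between the groups survives, while the run-to-run variability, the 'background noise' they describe, remains.

```python
import random
import statistics

random.seed(7)

# Hidden per-patient spontaneous recovery propensities,
# unknown to the researcher (a hypothetical subject variable).
propensity = [random.random() for _ in range(100)]

def randomised_group_difference():
    """Randomly split the patients in two and return the
    difference in mean propensity between the groups."""
    shuffled = random.sample(propensity, len(propensity))
    return statistics.mean(shuffled[:50]) - statistics.mean(shuffled[50:])

diffs = [randomised_group_difference() for _ in range(2000)]
bias = statistics.mean(diffs)    # ~0: no systematic error survives
noise = statistics.stdev(diffs)  # but random error ("background noise") remains
```

The bias averages out to near zero, which is what random assignment buys; the non-zero spread is exactly the random error that, as the authors note, randomisation cannot reduce.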

Control and judgement

These are central concerns in research in psychology and the social sciences and are not without significance in other areas of enquiry. Campbell and Stanley (1966) point out that in studying complex phenomena it may be necessary to vary more than one variable at a time in order to study how one independent variable interacts with another to affect the dependent variable. For example, they discuss how a particular type of teacher, e.g. a spontaneous temporiser, may do best with a particular teaching method, e.g. group discussion. They also broaden their discussion of interaction effects to consider the external validity of results and the generalisability of research findings. They note that specific conditions exist at the time of experiments: a certain time of year, barometric pressure, orientation of the stars, gamma radiation level, etc. While these conditions would apply to both the experimental and the control groups, the independent variable may be interacting with one or more of these extraneous variables to produce its effect on the dependent variable. Application of the independent variable at some other time may not have an effect because the associated variables are not present. They note Hume’s Truism that ‘induction or generalisation is never fully justified logically’, and that logically we cannot generalise beyond the specific conditions of the experiment, i.e. that we cannot generalise at all. However, they also note that we do attempt generalisation, and that we learn about the justification of generalisations by our experience, and guessing at what can be disregarded in what circumstances. ‘The sources of external invalidity are thus guesses as to general laws in the science of a science: guesses as to what factors lawfully interact with our treatment variables, and, by implication, guesses as to what can be disregarded’ (1966: 17).
Campbell and Stanley note that an assumption made in science is one of finite causation: that the great bulk of potentially determining factors can be disregarded, or, in other words, that main effects are more likely than interactions. Related to this is the ‘appeal to parsimony’, which, while frequently erroneous in specific applications, underlies almost all use of theory in science, indicating that, where a common feature can be identified in several sets of differences, this is more likely to be the cause than a series of separate explanations. This, they note, is not deductively justifiable but is rather a general assumption about the nature of the world. Though, one may add, the world they refer to is a modern rather than a postmodern world in which challenges are being made to some of the traditional tenets of science (Gergen 1994). With regard to the ability to generalise results, Campbell and Stanley state: ‘Our call for greater external validity will thus be a call for that maximum of similarity of experiments to the conditions of application which is compatible with internal validity’ (p. 18). In other words, the experiment should have some ecological validity.
It is apparent that obtaining a significant result in an experiment does not prove that the manipulation of the independent variable has caused a change in the dependent variable, or that a particular hypothesis or theory has been shown to be true. It is not possible to rule out that complex coincidences might have been operating to produce the experimental outcome. While experiments can never prove a particular hypothesis of a relation between variables, they can increase the plausibility of a relationship. Conversely, experiments can also increase the implausibility of other relationships where significant findings are not obtained, in circumstances where the probability is low that the findings are due to poor design or measurement of variables. Campbell and Stanley thus emphasise the importance of the replication of results. ‘The more numerous and independent the ways in which the experimental effect is demonstrated, the less numerous and less plausible any singular rival invalidating hypothesis becomes. The appeal is to parsimony. The “validity” of the experiment becomes one of the relative creditability of rival theories’ (p. 36).
Carlsmith et al. (1976) consider that while the experiment is an important method of conducting empirical research it is by no means the only method, and that many important and interesting questions are not amenable to experimental research. Kish (1959) also notes that ‘much research in the social, biological, and physical sciences – must be based on non-experimental methods’ (p. 331). With regard to social research, Kish states that ‘Searching for causal factors among survey data is an old, useful sport; and that extraneous and “spurious” correlations have taxed scientists since antiquity and will undoubtedly continue to do so’ (p. 329). Investigators can make measurements of variables as they occur in nature and look for relationships between them, or in other words undertake correlational studies. For example, studies have shown that increased aspects of control in daily life are associated with increased aspects of well-being, and a theory of the importance of control has developed. While correlational studies are always subject to ‘specification error’, that the relationship between two variables is caused by a third unspecified variable, just as experiments cannot fully rule out interaction effects, it is the case as Carlsmith et al. note: ‘When the measurement of naturally occurring phenomena provides enough evidence which tends to support a theory and none to refute it, causal statements are accepted’ (p. 26). This is, however, always a matter of judgement. As Kish (1959) states, ‘In considering the larger ends of any scientific research, only part of the total means required for inference can be brought under objective and firm control; another part must be left to more or less vague and subjective – however skillful – judgement’ (p. 332).
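The ‘specification error’ described above can be demonstrated with a toy simulation. Everything here is hypothetical: a third variable (labelled ‘circumstances’ purely for illustration, with invented numbers) drives both a ‘control’ measure and a ‘well-being’ measure, which do not influence each other at all, yet a substantial correlation between them appears.

```python
import math
import random

random.seed(3)

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical third variable driving both measures.
circumstances = [random.gauss(0, 1) for _ in range(500)]
control    = [c + random.gauss(0, 0.5) for c in circumstances]
well_being = [c + random.gauss(0, 0.5) for c in circumstances]

# Correlated even though neither measure causes the other.
r = pearson(control, well_being)
```

The correlation is real and would replicate, yet the causal statement ‘control improves well-being’ would be wrong by construction here, which is why, as Kish says, part of the inference must always be left to judgement.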

Control and representation

Kish (1959) notes that the scientist is faced with three basic problems of research – measurement, representation and control – and that in practice one generally cannot solve simultaneously all of these problems, rather one must choose and compromise. In any specific situation one method may be better or more practical than another, but there is no overall superiority in all situations for one method. While Kish is referring to the experimental and correlational methods, arguably this also applies to other methods of investigation, such as observation, critical analysis and interpretation.
Kish states that the experimental method is the scientific method par excellence – when feasible, but that in many situations experiments are not feasible. The experimental method he notes has some shortcomings. First, it is often difficult or impossible to design a properly controlled experiment. Thus, he considers that ‘Many of the initial successes reported about mental therapy, which later turn into vain hopes, may be due to the hopeful effects of any new treatment in contrast with the background of neglect’ (p. 333). Second, it is generally difficult to design experiments so as to represent a specified important population. ‘Both in theory and practice, experimental research has often neglected the basic truth that causal systems, the distribution of relations – like the distribution of characteristics – exists only within specified universes’ (p. 333). Third, contriving a similarity of experiments to the conditions of application is often not feasible. ‘Hence, what social experiments give sometimes are clear answers to questions the meaning of which are vague. That is, the artificially contrived experimental variables may have a tenuous relationship to the variables the researcher would like to investigate’ (p. 334).
Gergen (1982) broadens this critique of the experimental method to one which embraces the empiricist meta-theory. He claims that the experimental method is embedded in a paradigm with its roots in empiricist philosophy where one typically commences with the assumption of a fundamentally ordered nature to be reflected by scientific theory. He notes that
When scientists embraced the logical empiricist program for scient...

Table of contents

  1. COVER PAGE
  2. TITLE PAGE
  3. COPYRIGHT PAGE
  4. ILLUSTRATIONS
  5. CONTRIBUTORS
  6. FOREWORD
  7. CHAPTER 1: INTRODUCTION: CONTEMPORARY PSYCHOLOGICAL RESEARCH: VISIONS FROM POSITIONAL STANDPOINTS
  8. PART I: SURVEY RESEARCH
  9. PART II: QUALITATIVE RESEARCH
  10. PART III: CONTROLLED INVESTIGATION