Researching with Integrity
eBook - ePub

Researching with Integrity

The Ethics of Academic Enquiry

  1. 192 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About this book

There is increased emphasis internationally on ethically sound research and on good training for research supervisors. Researching with Integrity aims to identify how research can be undertaken ethically, and with 'virtue', from the initial conception of ideas through to dissemination. It outlines the context in which academics engage in research, considering the impact of discipline and institutional culture; the influence of government audit of research 'quality'; the role of government and quangos, professional organisations and business sponsors; and the effects of the increasing power and influence of funding bodies, university ethics committees and codes of practice.

Based on the notion of 'virtue' ethics, this book proposes an alternative approach to research, one which focuses not only on the ethical rules and protocols designed to prevent unethical research, but also on academic, professional and character development, allowing for the exercise of personal judgement.

Themes considered include:

Increased competitiveness between academics and concentration of funding in fewer universities

Increasingly bureaucratic approval processes focused on the treatment of humans and animals in research

Meeting the expectations of research sponsors

'Taboo' research topics and methods

Exposing findings to the scrutiny of peers, taking credit for the work of others and self-citation

Bullying of junior researchers and plagiarism

Power and influence of institutional, discipline-based and professional organisations

Illustrated throughout with short narratives detailing ethical issues and dilemmas from international academic researchers representing different disciplines, research cultures and national contexts, this book proposes an alternative approach to research, one which provides all research professionals with the intellectual tools they need to cope with complex research.

Researching with Integrity by Bruce Macfarlane is available in PDF and ePUB format in Education & Education General.

Information

Publisher
Routledge
Year
2010
eBook ISBN
9781134109302
Edition
1

PART ONE
FROM PRINCIPLES TO VIRTUE

1
THE LEGACY OF NUREMBERG

The “betrayal of Hippocrates” had a broad basis within the German medical profession. (Ernst, 2001, 42)

INTRODUCTION

The phrase “research ethics” conjures up a set of concerns which is now largely taken for granted. It invokes a language, and a related set of questions, that mainly clusters around the treatment of research subjects. Are we treating such people with dignity and ensuring that their rights are fully respected? Is any data collected kept confidential? Is the anonymity of the research subject respected? In biomedical research this implies a duty not to harm someone who has agreed to participate in a study. In all forms of research involving humans one might ask whether there is “informed” consent. In other words, does the research subject really understand what they are letting themselves in for?
These are just some of the crucial questions in any consideration of research ethics but, as I will argue in subsequent chapters, not the only relevant ones to ask. If we are to understand the ethical challenges of research it is important to consider not just the duty of the researcher toward the research subject but the development of character necessary to navigate through the temptations of the entire research process.
This chapter will be concerned with the roots of research ethics. Why is research ethics today virtually synonymous with the treatment of research subjects? The answer to this question can be found largely through examining the lessons learned from the history of medical research during the twentieth century. I will focus on two notorious chapters from this history to illustrate their profound effect on our contemporary understanding of research ethics and reflect on the ethical theories that underpin the dominant principles which have emerged in response to this legacy.

THE NAZIS AT NUREMBERG

On June 2, 1948, seven Nazi doctors were hanged at Landsberg prison in Bavaria. Among those sent to the gallows that day was Professor Karl Gebhardt, chief surgeon of the SS, who held the rank of Major General and served as President of the German Red Cross. He was one of 20 medical doctors who had been tried before a U.S. military tribunal at Nuremberg, accused of war crimes. At the so-called “Doctors’ Trial,” Gebhardt was found guilty, inter alia, of performing medical experiments, without the consent of the subjects, on both prisoners of war and civilians of occupied countries, thereby taking part in the mass murder of concentration camp inmates. He had co-ordinated surgical experimentation, mainly on young Polish women, at the Ravensbrück and Auschwitz concentration camps. There, Gebhardt oversaw operations in which victims were deliberately inflicted with battlefield wounds as a means of pursuing his interest in reconstructive surgery. Many were to die or suffer intense agony and serious injury as a result of Gebhardt’s operations.
The atrocious crimes committed by Nazi doctors like Gebhardt, or more infamous counterparts such as Josef Mengele, need to be understood as more than the actions of a few “mad” or “bad” men. The doctors found guilty at Nuremberg were the tip of a much bigger iceberg of complicity and wrong-doing within the German medical profession. Several hundred doctors were captured and tried at the end of the war by the Soviets, among them Carl Clauberg, a professor of gynecology who conducted X-ray sterilization experiments on Jewish and Roma women without the use of anaesthetics. Many others, probably the vast majority, escaped punishment altogether. According to one estimate, around 350 doctors behaved in a criminal manner (Mitscherlich & Mielke, 1949). Underlying this statistic, however, is a broader-based “betrayal of Hippocrates” within the German medical profession (Ernst, 2001).
The focus of the Nazi regime of the 1930s on military and race-based policies meant that scientists and medical academics were central to the pursuit of the political agenda. The pre-war Nazi regime introduced a number of measures, including legalized forced sterilization of disabled people and involuntary euthanasia for those deemed “unworthy of living,” such as children with Down’s syndrome. These policies meant that doctors played a prominent executive role in Nazi society as “experts” on decision-making juries. A much higher percentage of doctors joined the Nazi party and its associated organizations than did members of comparable professions (Ernst, 2001). This is an oft-quoted indicator of the complicity of large swathes of the medical profession with Nazi policies. The actions of Gebhardt and his associates had a profound effect on the lives (and deaths) of tens of thousands of victims and their families. These actions also had a highly significant long-term effect on the development of research ethics in medical science and, as we will see, on other disciplines too.
The lack of clear international ethical standards for the conduct of scientific research was one of the excuses put forward on behalf of the defendants at the Doctors’ Trial. It was true that no formal code was in operation at the time, but the “Hippocratic oath” had been, since the fourth century BC, the commonly accepted moral basis for the conduct of doctors. Attributed to the Greek physician Hippocrates, this “oath” has a number of ancient and modern interpretations but, in essence, rests on the central tenets of treating patients with respect and to the best of one’s ability. However, while no international ethical code may have existed at the time of the Doctors’ Trial, the standards against which the defendants were judged were ex post facto norms that any civilized human being should have understood (Jonsen, 1998). In other words, the tribunal considered the lack of an international code no excuse for treating human beings purely as a means to an end, and without humanity.
Another excuse put forward by the Nazi doctors was that some of the prisoners on whom they experimented had already been sentenced to death. Hence, it was argued that their experiments made no material difference to their fate; these prisoners would die anyway. In a clear repudiation of this excuse and the abhorrent actions of the defendants, the judgment in the Doctors’ Trial included what is now known as the “Nuremberg Code.” This was a 10-point statement of ethical and moral principles that, according to the court, should underpin medical research and experimentation in the future (see figure 1.1). At the heart of the Nuremberg Code is the principle of “voluntary consent.” This established that respectful treatment of human subjects must be the central tenet of any “ethical” research.
The publicity afforded to the Nuremberg Trials means that public attention has tended subsequently to focus on Nazi doctors as those most closely associated with cruel and unethical experimentation on human subjects. However, while they have received comparatively little subsequent scrutiny, many similar abuses were carried out during World War II by the Japanese imperial army, including on allied prisoners of war (McNeill, 1993; Powell, 2006). It is estimated that several thousand Chinese and Russian prisoners died during human experiments to develop chemical and biological weapons, particularly in Japanese-occupied Manchuria. In one of the most notorious incidents of abuse during this period, most of the crew of an American B-29 bomber were captured after crash landing in Japan. Eight members of the crew were taken to a university medical department in Fukuoka, where they died after vivisection operations in which most of their vital organs were removed. Biological warfare experiments were also carried out by the Japanese in at least 11 Chinese cities during the period of occupation (McNeill, 1993).

A summary of the principles:

  • Voluntary consent of the human subject is essential.
  • The research subject may withdraw consent at any time.
  • The results should be for the good of society.
  • The risk should not exceed the humanitarian benefit.
  • All safety precautions must be taken.
  • The research design should be justified and based on expertise.
  • The investigator must be scientifically qualified.
  • The experiment must be terminated where the subject’s health is threatened.
Based on Katz (1972).
Figure 1.1 The Nuremberg Code (1949).

The lessons learnt from these wartime abuses meant that the principles contained in the Nuremberg Code shaped the development of subsequent post-war international accords on ethics, such as the Declaration of Helsinki adopted in 1964 by the World Medical Association. It would be naïve, however, to think that the principles contained in the Nuremberg Code and the lessons learned from Nazi treatment of concentration camp victims have led to the elimination of unethical behavior in medical research.
While the Nuremberg Code represents a profound statement of moral principles shaped by the horrors of Nazi experimentation, the modern-day regulation of scientific research owes more, in reality, to a scandal that broke in the USA in 1972.

THE TUSKEGEE SCANDAL

On May 16, 1997, President Bill Clinton issued a formal apology to the remaining survivors and victims of a 40-year medical research experiment, the longest non-therapeutic human experiment in the history of public health. The experiment set out to study the long-term effects of syphilis, a bacterial infection that can be contracted through sexual contact or passed congenitally from mother to child. President Clinton’s apology was designed, at least in part, to re-establish the trust and confidence of African Americans in medical research. After the scandal broke in 1972, the study, entitled the “Tuskegee Study of Untreated Syphilis in the Negro Male,” became synonymous with the exploitation of African Americans. Tuskegee was not a scientific research study that simply went wrong. It was a methodical, longitudinal study that exposed a deep-seated and long-term disregard for the well-being of research subjects exploited on the basis of their race and class (Reverby, 2000).
The origins of the study go back to the early 1930s when the U.S. Public Health Service at Tuskegee Institute invited black males from a poor and racially segregated area of Alabama for a free medical examination. The real purpose of these examinations was to select around 400 men to take part in a longitudinal study into the effects of syphilis. On the basis of these examinations, men with suspected syphilis were invited back for further tests and spinal taps as a means of tracking the progress of the disease. The men in the trial were told that they were being treated for “bad blood” and were given incentives to attend for continuing “treatment” such as free burial insurance and hot meals.
At the time that the study began, the only known treatments for syphilis were mercury or salvarsan. Mercury was largely ineffectual and dangerous (Cornwell, 2006). Effective treatments only emerged following the discovery of penicillin and the development of antibiotics after World War II. However, the syphilitic men of Tuskegee were not treated with either salvarsan or antibiotics, despite the fact that the study continued until 1972, by which time effective treatments had been widely available for several decades. Their poverty and ignorance were systematically exploited and, worse, the men were denied proper treatment for a condition that led, for some, to an early and painful death. By the time that the study was halted, it is estimated that up to 100 men had died. Later, the U.S. government paid about $10 million in out-of-court damages, equivalent to ÂŁ37,500 per participant (Cornwell, 2006).
Other post-war scandals demonstrated the need for regulation and oversight of biomedical research activity. The testing of an experimental drug known as thalidomide was another high-profile example. Thalidomide was designed to prevent nausea and vomiting in pregnant women but tragically resulted in thousands of babies being born without limbs or with other deformities. Worse still, the drug company tested thalidomide on women without their consent or knowledge that they were taking part in a drug trial. The scandal resulted in the Kefauver-Harris Bill, which became law in 1962. The Act strengthened the powers of the Food and Drug Administration and led to greater testing of new medical products. The legislation also required that companies gain the consent of patients before using them as research subjects.
Tuskegee was perhaps the most influential scandal in the regulation of research ethics. The case was a chilling reminder that the cruel and exploitative treatment of research subjects did not end with the Nazis and the adoption of the Nuremberg Code. Tuskegee was instrumental in leading to federal legislation in the USA in 1974 that also established a National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was charged with the task of identifying the basic ethical principles that should underlie the conduct of biomedical and behavioral research involving human subjects and in developing guidelines which should be followed to ensure that such research is conducted in accordance with those principles.

DOMINANT ETHICAL PRINCIPLES

In 1979 the U.S. National Commission produced what became known as the Belmont Report (National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research, 1979). This identified three key principles for the ethical conduct of research: respect for persons, beneficence, and justice. The first of these principles meant that researchers should treat participants as autonomous agents with the right to be kept fully informed of the process. They should ensure that persons with diminished autonomy, such as children or adults without full mental capacity, are protected. The principle of beneficence implies that the benefits of participation should outweigh any harm to participants. Justice means that the selection of subjects should be fair and those who are asked to participate in research should also benefit from it. Among the ethicists who advised the Commission were Tom Beauchamp and James Childress of the Kennedy Institute of Ethics at Georgetown University in Washington, DC. In the same year as the report, they published what has subsequently become a highly influential text on bioethics and research ethics more generally (Beauchamp & Childress, 1979). While Belmont had identified three principles, Beauchamp and Childress came up with four: autonomy (in place of respect for persons), beneficence (to act for the benefit of others), non-maleficence (the duty to do no harm), and justice.


  • Respect for persons
    ° subjects should be treated as autonomous agents and be fully informed
    ° persons with diminished autonomy should be protected
  • Beneficence
    ° benefits of participation should outweigh any harm
  • Justice
    ° selection of subjects should be fair and those who are asked to bear the burden should also benefit
National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research (1979).
Figure 1.2 The Belmont Report Principles (1979).

The four principles identified by Beauchamp and Childress have become collectively known as “principlism” or the “Georgetown mantra.” The mantra draws on a mix of ethical theories and influences that have their roots in the philosophical writings of Immanuel Kant, the utilitarians, and John Rawls. In explaining the basis of these principles it is necessary to briefly explore the way that moral philosophy has influenced their construction.

Respect for Persons

The first of the principles, “respect for persons,” derives from Kant’s (1964) categorical imperative. Kant was a German philosopher who sought to demonstrate the role of reason as the basis of human morality. His categorical imperative states that human beings should act only according to rules that they would be willing to see everyone follow. Kant regarded this principle as an appeal to logic. It poses a simple question: what if everyone did that? Unless someone is prepared to see their own actions “universalized,” it makes no logical sense to carry them out. As one would not wish to be treated disrespectfully, merely as a means to an end, this implies that we should not treat other people in a relevantly similar way. This is referred to as the “reversibility” argument: how, in other words, would you feel if someone did that to you? Kant’s categorical imperative makes intuitive sense in relation to thinking about the treatment of research subjects. Through reversibility, it demands that we place ourselves in their position. Would you be happy to be treated in this way if you, as the researcher, were in the position of the research subject? If the answer is yes, this provides a moral guide to the rightness of the action. If the answer is no, then one should desist from treating the research subjects in this manner. The use of Kantian logic is attractive, especially if researchers have themselves been research subjects, making them, perhaps, more emotionally equipped to empathize with the position of those they are researching.
Kant insisted on the “rational” nature of his theory. However, critics have pointed out that applying the tests of universalizability and reversibility does not preclude acts of “bad morality” (Bennett, 1994). This phrase refers to acts based on an individual’s own sense of morality which may, nonetheless, rest on principles that many others would disapprove of. A Nazi who fervently believes that all Jews should be exterminated may, if asked to place him or herself in their position, still believe it is rational and right to carry out potentially fatal medical experiments on such research subjects. Bennett (1994) argues that the “bad morality” of Heinrich Himmler, the leader of the SS during World War II with overall responsibility for the Nazi concentration camps, was based on a set of principles. However odious, by sticking to these principles, Himmler felt his course of action was right. While this may be an extreme example, it serves to illustrate the point that the categorical imperative cannot legislate for cases of “bad morality” where we might disagree with the appropriateness of universalizing an action.

Beneficence (and Non-Maleficence)

The second principle found in the Belmont Report is that of beneficence. This principle requires that someone acts in a way that benefits others, such as a doctor seeking to benefit their patient through a course of treatment. In prescribing a drug, for example, a doctor will need to make a balanced judgment about the potential harm it might do, such as the risk of known side-effects, as opposed to its benefits as an effective treatment for a particula...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. List of Figures and Tables
  5. List of Narratives
  6. Foreword by Stephen Rowland
  7. Acknowledgments
  8. Introduction: A Question of Integrity
  9. Part One From Principles to Virtue
  10. Part Two Living the Virtues
  11. Part Three Integrating Integrity
  12. References