THE LEGACY OF NUREMBERG
The "betrayal of Hippocrates" had a broad basis within the German medical profession. (Ernst, 2001, 42)
INTRODUCTION
The phrase "research ethics" conjures up a set of concerns which is now largely taken for granted. It invokes a language, and a related set of questions, that mainly clusters around the treatment of research subjects. Are we treating such people with dignity and ensuring that their rights are fully respected? Is any data collected kept confidential? Is the anonymity of the research subject respected? In biomedical research this implies a duty not to harm someone who has agreed to participate in a study. In all forms of research involving humans one might ask whether there is "informed" consent. In other words, does the research subject really understand what they are letting themselves in for?
These are just some of the crucial questions in any consideration of research ethics but, as I will argue in subsequent chapters, not the only relevant ones to ask. If we are to understand the ethical challenges of research it is important to consider not just the duty of the researcher toward the research subject but the development of character necessary to navigate through the temptations of the entire research process.
This chapter will be concerned with the roots of research ethics. Why is research ethics today virtually synonymous with the treatment of research subjects? The answer to this question can be found largely through examining the lessons learned from the history of medical research during the twentieth century. I will focus on two notorious chapters from this history to illustrate their profound effect on our contemporary understanding of research ethics and reflect on the ethical theories that underpin the dominant principles which have emerged in response to this legacy.
THE NAZIS AT NUREMBERG
On June 2, 1948, seven Nazi doctors were hanged at Landsberg prison in Bavaria. Among those sent to the gallows that day was Professor Karl Gebhardt, chief surgeon to the SS, who held the rank of Major General and was President of the German Red Cross. He was one of 20 medical doctors who had been tried before a military tribunal at Nuremberg accused of war crimes. At the so-called "Doctors' Trial," Gebhardt was found guilty, inter alia, of performing medical experiments, without the consent of the subjects, on both prisoners of war and civilians of occupied countries, thereby taking part in the mass murder of concentration camp inmates. He had co-ordinated surgical experimentation, mainly on young Polish women, at the Ravensbrück and Auschwitz concentration camps. Here, Gebhardt oversaw operations in which battlefield wounds were deliberately inflicted on victims as a means of pursuing his interest in reconstructive surgery. Many were to die, or to suffer intense agony and serious injury, as a result of Gebhardt's operations.
The atrocious crimes committed by Nazi doctors like Gebhardt, or more infamous counterparts such as Josef Mengele, need to be understood as more than the actions of a few "mad" or "bad" men. The doctors found guilty at Nuremberg were the tip of a much bigger iceberg of complicity and wrongdoing within the German medical profession. Several hundred doctors were captured and tried at the end of the war by the Soviets, among them Carl Clauberg, a professor of gynecology who conducted sterilization experiments on Jewish and Roma women without the use of anaesthetics. Many others, probably the vast majority, escaped punishment altogether. According to one estimate, around 350 doctors behaved in a criminal manner (Mitscherlich & Mielke, 1949). Underlying this statistic, however, is a broader-based "betrayal of Hippocrates" within the German medical profession (Ernst, 2001).
The focus of the Nazi regime of the 1930s on military and race-based policies meant that scientists and medical academics were central to the pursuit of the political agenda. The pre-war Nazi regime introduced a number of measures, including the legalized forced sterilization of disabled people and involuntary euthanasia for those deemed "unworthy of living," such as children with Down's syndrome. These policies meant that doctors played a prominent executive role in Nazi society as "experts" on decision-making juries. A much higher percentage of doctors joined the Nazi party and its associated organizations than in comparable professions (Ernst, 2001). This is an oft-quoted indicator of the complicity of large swathes of the medical profession with Nazi policies. The actions of Gebhardt and his associates had a profound effect on the lives (and deaths) of tens of thousands of victims and their families. These actions also had a highly significant long-term effect on the development of research ethics in medical science and, as we will see, on other disciplines too.
The lack of clear international ethical standards for the conduct of scientific research was one of the excuses put forward on behalf of the defendants at the Doctors' Trial. It was true that no formal code was in operation at this time, but the "Hippocratic oath" had been, from the fourth century BC, the commonly accepted moral basis on which doctors were governed. Attributed to the Greek physician Hippocrates, this "oath" has a number of ancient and modern interpretations but, in essence, is based on the central tenets of treating patients with respect and to the best of one's ability. However, while no international ethical code may have existed at the time of the Doctors' Trial, the standards against which the defendants were judged were ex post facto norms that any civilized human being should have understood (Jonsen, 1998). In other words, the lack of an international code was considered no excuse by the tribunal for treating human beings purely as a means to an end, and without humanity.
Another excuse put forward by the Nazi doctors was that some of the prisoners on whom they experimented had already been sentenced to death. Hence, it was argued that their experiments made no material difference to their fate; these prisoners would die anyway. In a clear repudiation of this excuse and the abhorrent actions of the defendants, the judgment in the Doctors' Trial included what is now known as the "Nuremberg Code." This was a 10-point statement of ethical and moral principles that, according to the court, should underpin medical research and experimentation in the future (see figure 1.1). At the heart of the Nuremberg Code is the principle of "voluntary consent." This established that respectful treatment of human subjects must be the central tenet of any "ethical" research.
The publicity afforded to the Nuremberg trials means that public attention has subsequently tended to focus on Nazi doctors as those most closely associated with cruel and unethical experimentation on human subjects. However, many similar abuses involving allied prisoners of war, which have received comparatively little subsequent scrutiny, were carried out during World War II by the Japanese Imperial Army (McNeill, 1993; Powell, 2006). It is estimated that several thousand Chinese and Russian prisoners died during human experiments to develop chemical and biological weapons, particularly in Japanese-occupied Manchuria. In one of the most notorious incidents of abuse during this period, most of the members of the crew of an American B-29 bomber were captured after crash-landing in Japan. Eight members of the crew were taken to a university medical department in Fukuoka, where they died after vivisection operations in which most of their vital organs were removed. Biological warfare experiments were also carried out by the Japanese in at least 11 Chinese cities during the occupation of China (McNeill, 1993).
A summary of the principles:
- Voluntary consent of the human subject is essential.
- The research subject may withdraw consent at any time.
- The results should be for the good of society.
- The risk should not exceed the humanitarian benefit.
- All safety precautions must be taken.
- The research design should be justified and based on expertise.
- The investigator must be scientifically qualified.
- The experiment must be terminated where the subject's health is threatened.
Based on Katz (1972).
Figure 1.1 The Nuremberg Code (1949).
The lessons learnt from these wartime abuses meant that the principles contained in the Nuremberg Code shaped the development of subsequent post-war international accords on ethics, such as the Declaration of Helsinki adopted in 1964 by the World Medical Association. It would be naïve, however, to think that the principles contained in the Nuremberg Code and the lessons learned from the Nazi treatment of concentration camp victims have led to the elimination of unethical behavior in medical research.
While the Nuremberg Code represents a profound statement of moral principles shaped by the horrors of Nazi experimentation, the modern-day regulation of scientific research owes more, in reality, to a scandal that broke in the USA in 1972.
THE TUSKEGEE SCANDAL
On May 16, 1997, President Bill Clinton issued a formal apology to the remaining survivors and victims of a 40-year medical research experiment, the longest non-therapeutic human experiment in the history of public health. The experiment set out to study the long-term effects of syphilis, a bacterial infection that can be contracted through sexual contact or passed from mother to child. President Clinton's apology was designed, at least in part, to re-establish the trust and confidence of African Americans in medical research. After the scandal broke in 1972, the study, entitled the "Tuskegee Study of Untreated Syphilis in the Negro Male," became synonymous with the exploitation of African Americans. Tuskegee was not a scientific research study that simply went wrong. It was a methodical, longitudinal study that exposed a deep-seated and long-term disregard for the well-being of research subjects, who were exploited on the basis of their race and class (Reverby, 2000).
The origins of the study go back to the early 1930s, when the U.S. Public Health Service at the Tuskegee Institute invited black males from a poor and racially segregated area of Alabama for a free medical examination. The real purpose of these examinations was to select around 400 men to take part in a longitudinal study into the effects of syphilis. On the basis of these examinations, men with suspected syphilis were invited back for further tests and spinal taps as a means of tracking the progress of the disease. The men in the trial were told that they were being treated for "bad blood" and were given incentives to attend for continuing "treatment," such as free burial insurance and hot meals.
At the time the study began, the only known treatments for syphilis were mercury and salvarsan. Mercury was largely ineffectual and dangerous (Cornwell, 2006). Effective treatments only emerged following the discovery of penicillin and the development of antibiotics after World War II. However, the syphilitic men of Tuskegee were treated with neither salvarsan nor antibiotics, despite the fact that the study continued until 1972, by which time effective treatments had been widely available for several decades. Their poverty and ignorance were systematically exploited and, worse, the men were denied proper treatment for a condition that led, for some, to an early and painful death. By the time the study was halted, it is estimated that up to 100 men had died. Later, the U.S. government paid about $10 million in out-of-court damages, equivalent to £37,500 per participant (Cornwell, 2006).
Other post-war scandals demonstrated the need for regulation and oversight of biomedical research activity. The testing of an experimental drug known as thalidomide was another high-profile example. Thalidomide was designed to prevent nausea and vomiting in pregnant women but tragically resulted in thousands of babies being born without limbs or with other deformities. Worse still, the drug company tested thalidomide on women without their consent or their knowledge that they were taking part in a drug trial. The scandal resulted in the Kefauver-Harris Bill, which became law in 1962. The Act strengthened the powers of the Food and Drug Administration and led to greater testing of new medical products. The legislation also required that companies gain the consent of patients before using them as research subjects.
Tuskegee was perhaps the most influential scandal in the regulation of research ethics. The case was a chilling reminder that the cruel and exploitative treatment of research subjects did not end with the Nazis and the adoption of the Nuremberg Code. Tuskegee was instrumental in leading to federal legislation in the USA in 1974, which established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was charged with identifying the basic ethical principles that should underlie the conduct of biomedical and behavioral research involving human subjects and with developing guidelines to ensure that such research is conducted in accordance with those principles.
DOMINANT ETHICAL PRINCIPLES
In 1979 the U.S. National Commission produced what became known as the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979). This identified three key principles for the ethical conduct of research: respect for persons, beneficence, and justice. The first of these principles meant that researchers should treat participants as autonomous agents with the right to be kept fully informed of the process. They should ensure that persons with diminished autonomy, such as children or adults without full mental capacity, are protected. The principle of beneficence implies that the benefits of participation should outweigh any harm to participants. Justice means that the selection of subjects should be fair and that those who are asked to participate in research should also benefit from it. Among the ethicists who advised the Commission were Tom Beauchamp and James Childress of the Kennedy Institute of Ethics at Georgetown University in Washington, DC. In the same year as the report, they published what has subsequently become a highly influential text on bioethics and research ethics more generally (Beauchamp & Childress, 1979). While Belmont had identified three principles, Beauchamp and Childress came up with four: autonomy (in place of respect for persons), beneficence (to act for the benefit of others), non-maleficence (the duty to do no harm), and justice.
- subjects should be treated as autonomous agents and be fully informed
- persons with diminished autonomy should be protected
- benefits of participation should outweigh any harm
- selection of subjects should be fair and those who are asked to bear the burden should also benefit
Based on the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979).
Figure 1.2 The Belmont Report Principles (1979).
The four principles identified by Beauchamp and Childress have become collectively known as "principlism" or the "Georgetown mantra." The mantra draws on a mix of ethical theories and influences that have their roots in the philosophical writings of Immanuel Kant, the utilitarians, and John Rawls. In explaining the basis of these principles it is necessary to briefly explore the way that moral philosophy has influenced their construction.
Respect for Persons
The first of the principles, "respect for persons," derives from Kant's (1964) categorical imperative. Kant was a German philosopher who sought to demonstrate the role of reason as the basis of human morality. His categorical imperative states that human beings should act only according to rules that they would be willing to see everyone follow. Kant regarded this principle as an appeal to logic. It poses the simple question: what if everyone did that? Unless someone is prepared to see their own actions "universalized," it makes no logical sense to carry them out. As one would not wish to be treated disrespectfully, merely as a means to an end, this implies that we should not treat other people in a relevantly similar way. This is referred to as the "reversibility" argument: how, in other words, would you feel if someone did that to you? Kant's categorical imperative makes intuitive sense in relation to thinking about the treatment of research subjects. Through reversibility, it demands that we place ourselves in their position. Would you be happy to be treated in this way if you, as the researcher, were in the position of the research subject? If the answer is yes, this provides a moral guide to the rightness of the action. If the answer is no, then one should desist from treating the research subjects in this manner. The use of Kantian logic is attractive, especially if researchers have themselves been research subjects, making them, perhaps, more emotionally equipped to empathize with the position of those they are researching.
Kant insisted on the "rational" nature of his theory. However, critics have pointed out that applying the tests of universalizability and reversibility does not preclude acts of "bad morality" (Bennett, 1994). This phrase refers to acts based on an individual's own sense of morality which may, nonetheless, rest on principles that many others would disapprove of. A Nazi who fervently believes that all Jews should be exterminated may, if asked to place him or herself in their position, still believe it is rational and right to carry out potentially fatal medical experiments on such research subjects. Bennett (1994) argues that the "bad morality" of Heinrich Himmler, the leader of the SS during World War II with overall responsibility for the Nazi concentration camps, was based on a set of principles. However odious those principles were, by sticking to them Himmler felt his course of action was right. While this may be an extreme example, it serves to illustrate the point that the categorical imperative cannot legislate for cases of "bad morality" where we might disagree with the appropriateness of universalizing an action.
Beneficence (and Non-Maleficence)
The second principle found in the Belmont Report is that of beneficence. This principle requires that someone acts in a way that benefits others, such as a doctor seeking to benefit their patient through a course of treatment. In prescribing a drug, for example, a doctor will need to make a balanced judgment about the potential harm it might do, such as the risk of known side-effects, as opposed to its benefits as an effective treatment for a particula...