
Strangers at the Bedside

A History of How Law and Bioethics Transformed Medical Decision Making

David J. Rothman


About This Book

David Rothman gives us a brilliant, finely etched study of medical practice today. Beginning in the mid-1960s, the practice of medicine in the United States underwent a most remarkable--and thoroughly controversial--transformation. The discretion that the profession once enjoyed has been increasingly circumscribed, and now an almost bewildering number of parties and procedures participate in medical decision making.

Well into the post-World War II period, decisions at the bedside were the almost exclusive concern of the individual physician, even when they raised fundamental ethical and social issues. It was mainly doctors who wrote and read about the morality of withholding a course of antibiotics and letting pneumonia serve as the old man's best friend, of considering a newborn with grave birth defects a "stillbirth" thus sparing the parents the agony of choice and the burden of care, of experimenting on the institutionalized retarded to learn more about hepatitis, or of giving one patient and not another access to the iron lung when the machine was in short supply. Moreover, it was usually the individual physician who decided these matters without formal discussions with patients, their families, or even with colleagues, and certainly without drawing the attention of journalists, judges, or professional philosophers.

The impact of the invasion of outsiders into medical decision making, most generally framed, was to make the invisible visible. Outsiders to medicine--that is, lawyers, judges, legislators, and academics--have penetrated its every nook and cranny, in the process giving medicine exceptional prominence on the public agenda and making it the subject of popular discourse. The glare of the spotlight transformed medical decision making, shaping not merely the external conditions under which medicine would be practiced (something that the state, through the regulation of licensure, had always done), but the very substance of medical practice.


Information

Publisher
Routledge
Year
2017
ISBN
9781351488044
Edition
1
Pages
313
Category
Medicine

CHAPTER 1

The Nobility of the Material

Change began with a whistle-blower and a scandal. In June 1966, Henry Beecher, Dorr Professor of Research in Anesthesia at Harvard Medical School, published in the New England Journal of Medicine (NEJM) his analysis of “Ethics and Clinical Research” and thereby joined the ranks of such noted muckrakers as Harriet Beecher Stowe, Upton Sinclair, and Rachel Carson.1 As has so often happened in the course of American history, a publication like Uncle Tom’s Cabin, The Jungle, or Silent Spring will expose a secret—whether it be the violation of the slave family, the contamination of food, or the poisoning of the environment—so compellingly as to transform public attitudes and policy. Beecher’s article fits in this tradition. Its devastating indictment of research ethics helped inspire the movement that brought a new set of rules and a new set of players to medical decision making.2
The piece was short, barely six double-columned pages, and the writing terse and technical, primarily aimed at a professional, not a lay, audience. Beecher tried (not altogether successfully) to maintain a tone of detachment, as though this were a scientific paper like any other. “I want to be very sure,” he insisted, “that I have squeezed out of it all emotion, value judgments, and so on.”3 Even so, its publication created a furor both inside and outside the medical profession.
At its heart were capsule descriptions of twenty-two examples of investigators who had risked “the health or the life of their subjects” without informing them of the dangers or obtaining their permission. No citations to the original publications or names of the researchers appeared. Beecher did give the editors of the NEJM a fully annotated copy, and they vouched for its accuracy; he steadfastly refused all subsequent requests for references. Publicly, he declared that his intention was not to single out individuals but to “call attention to widespread practices.” Privately, he conceded that a colleague from the Harvard Law School had advised him that to name names might open the individuals to lawsuits or criminal prosecution.4
The research protocols that made up Beecher’s roll of dishonor seemed flagrant in their disregard of the welfare of the human subjects. Example 2 constituted the purposeful withholding of penicillin from servicemen with streptococcal infections in order to study alternative means for preventing complications. The men were totally unaware of the fact that they were part of an experiment, let alone at risk of contracting rheumatic fever, which twenty-five of them did. Example 16 involved the feeding of live hepatitis viruses to residents of a state institution for the retarded in order to study the etiology of the disease and attempt to create a protective vaccine against it. In example 17, physicians injected live cancer cells into twenty-two elderly and senile hospitalized patients without telling them that the cells were cancerous, in order to study the body’s immunological responses. Example 19 involved researchers who inserted a special needle into the left atrium of the heart of subjects, some with cardiac disease and others normal, in order to study the functioning of the heart. In example 22, researchers inserted a catheter into the bladder of twenty-six newborns less than forty-eight hours old and then took a series of X-rays of the bladders filling and voiding in order to study the process. “Fortunately,” noted Beecher, “no infection followed the catheterization. What the results of the extensive x-ray exposure may be, no one can yet say.”
Beecher’s most significant, and predictably most controversial, conclusion was that “unethical or questionably ethical procedures are not uncommon” among researchers—that is, a disregard for the rights of human subjects was widespread. Although he did not provide footnotes, Beecher declared that “the troubling practices” came from “leading medical schools, university hospitals, private hospitals, governmental military departments . . . governmental institutes (the National Institutes of Health), Veterans Administration Hospitals and industry.” In short, “the basis for the charges is broad.” Moreover, without attempting any numerical estimate of just how endemic the practices were among researchers, Beecher reported how dismayingly easy it had been for him to compile his list. An initial list of seventeen examples had been easily expanded to fifty (and winnowed down to twenty-two for publication). He had also examined 100 consecutive studies that were reported on in 1964 “in an excellent journal; 12 of these seemed unethical.” He concluded, “If only one quarter of them is truly unethical, this still indicates the existence of a serious problem.”
At a time when the media were not yet scouring medical journals for stories, Beecher’s charges captured an extraordinary amount of public attention. Accounts of the NEJM article appeared in the leading newspapers and weeklies, which was precisely what he had intended. A circumspect whistle-blower, he had published his findings first in a medical journal without naming names; but at the same time, he had informed influential publications (including the New York Times, the Wall Street Journal, Time, and Newsweek) that his piece was forthcoming. The press reported the experiments in great detail, and reporters, readers, and public officials alike expressed dismay and incredulity as they pondered what had led respectable scientists to commit such acts. How could researchers have injected cancer cells into hospitalized senile people or fed hepatitis viruses to institutionalized retarded children? In short order, the National Institutes of Health (NIH), the major funder of research in the country, was getting letters from legislators asking what corrective actions it intended to take.5
Beecher, as he fully expected, infuriated many of his colleagues, and they responded angrily and defensively. Some, like Thomas Chalmers at Harvard, insisted that he had grossly exaggerated the problem, taking a few instances and magnifying them out of proportion.6 The more popular objection (which can still be heard among investigators today) was that he had unfairly assessed 1950s practices in terms of the moral standards of a later era. To these critics, the investigators that Beecher had singled out were pioneers, working before standards were set for human investigation, before it was considered necessary to inform subjects about the research and obtain their formal consent to participation. The enterprise of human investigation was so novel that research ethics had been necessarily primitive and underdeveloped.
However popular—and, on the surface, appealing—that retort is, it not only fails to address the disjuncture between public expectations and researchers’ behavior but is woefully short on historical perspective. If the activity was so new and the state of ethics so crude, why did outsiders shudder as they read about the experiments? However tempting it might be to short-circuit the history, neither human experimentation nor the ethics of it was a recent invention. Still, Beecher’s critics were not altogether misguided: there was something substantially different about the post-World War II laboratories and investigators. If researchers were not as morally naive as their defenders would suggest, they occupied a very special position in time. They had inherited a unique legacy, bequeathed to them by the World War II experience.
Thus, for many reasons, it is important that we trace, however briefly, this history, particularly in its most recent phases. In no other way can we understand how investigators could have designed and conducted the trials that made up Beecher’s roster. And in no other way can we understand the gap between the investigators’ behavior and public expectation, a gap that would produce not only wariness and distrust but also new mechanisms for governing clinical research. These attitudes and mechanisms spread, more quickly than might have been anticipated, from the laboratory to the examining room. A reluctance to trust researchers to protect the well-being of their subjects soon turned into an unwillingness to trust physicians to protect the well-being of their patients. In the new rules for research were the origins of the new rules for medicine.
Until World War II, the research enterprise was typically small-scale and intimate, guided by an ethic consistent with community expectations.7 Most research was a cottage industry: a few physicians, working alone, carried out experiments on themselves, their families, and their immediate neighbors. Moreover, the research was almost always therapeutic in intent; that is, the subjects stood to benefit directly if the experiments were successful. Under these circumstances, the ethics of human investigation did not command much attention; a few scientists, like Claude Bernard and Louis Pasteur, set forth especially thoughtful and elegant analyses. But for the most part, the small scale and potentially therapeutic character of the research seemed protection enough, and researchers were left to their own conscience, with almost no effort to police them. To be sure, not everyone’s behavior matched the standard or lived up to expectations. By the 1890s, and even more frequently in the opening decades of the twentieth century, some investigators could not resist experimenting on unknown and unknowing populations, particularly inmates of orphanages and state schools for the retarded. But at least before World War II such practices were relatively infrequent.
The idea of judging the usefulness of a particular medication by actual results goes back to a school of Greek and Roman empiricists, but we know little about how they made their judgments and whether they actually conducted experiments on human beings. The medieval Arab medical treatises, building on classical texts, reflect an appreciation of the need for human experiments, but again the record is thin on practice. Scholars like the renowned Islamic scientist and philosopher Avicenna (980-1037) recommended that a drug be applied to two different cases to measure its efficacy, and he also insisted that “the experimentation must be done with the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man.” However, he offered no guidance about how or on whom such experiments should be conducted.8
If earlier practices remain obscure, a number of ethical maxims about experimentation do survive. Maimonides (1135-1204), a noted Jewish physician and philosopher, counseled colleagues always to treat patients as ends in themselves, not as means for learning new truths. A fuller treatment of research ethics came from the English philosopher and scientist Roger Bacon (1214-1292). He excused the inconsistencies in therapeutic practices among contemporary physicians on the following grounds: “It is exceedingly difficult and dangerous to perform operations on the human body, wherefore it is more difficult to work in that science than in any other. . . . The operative and practical sciences which do their work on insensate bodies can multiply their experiments till they get rid of deficiency and errors, but a physician cannot do this because of the nobility of the material in which he works; for that body demands that no error be made in operating upon it, and so experience [the experimental method] is so difficult in medicine.”9 To Bacon the trade-off was worth making: the human body was so noble a material that therapeutics would have to suffer deficiencies and errors.
Human experimentation made its first significant impact on medical knowledge in the eighteenth century, primarily through the work of the English physician Edward Jenner, and his research on a vaccination against smallpox exemplifies both the style and technique that would predominate for the next 150 years. Observing that farmhands who contracted the pox from swine or cows seemed to be immune to the more virulent smallpox, Jenner set out to retrieve material from their pustules, inject that into another person, and see whether the recipient could then resist challenges from small amounts of smallpox materials. In November 1789 he carried out his first experiment, inoculating his oldest son, then about a year old, with swinepox. Although the child suffered no ill effects, the smallpox material he then received did produce an irritation, indicating that he was not immune to the disease.10
Jenner subsequently decided to work with cowpox material. In his most famous and successful experiment, he vaccinated an eight-year-old boy with it, a week later challenged him with smallpox material, and noted that he evinced no reaction. No record exists on the interaction between Jenner and his subject save Jenner’s bare account: “The more accurately to observe the progress of the infection, I selected a healthy boy, about eight years old, for the purpose of inoculation for the cowpox. The matter . . . was inserted . . . into the arm of the boy by means of two incisions.”11 Whether the boy was a willing or unwilling subject, how much he understood of the experiment, what kind of risk-benefit calculation he might have made, or whether his parents simply ordered him to put out his arm to please Mr. Jenner remains unknown. Clearly, Jenner did the choosing, but do note the odd change in style from the active “I selected” to the passive “the matter was . . . inserted.” All we can tell for certain is that the boy was from the neighborhood, that Jenner was a man of standing, that he chose the boy for the experiment, and that smallpox was a dreaded disease. Still, some degree of trust probably existed between researcher and subject, or the subject’s parents. This was not an interaction between strangers, and Jenner would have been accountable had anything untoward happened.
Word of Jenner’s success spread quickly, and in September 1799 he received a letter from a physician in Vienna who had managed to obtain some vaccine for his own use. His first subject, he told Jenner, was “the son of a physician in this town.” Then, encouraged by his initial success, he reported, “I did not hesitate to inoculate . . . my eldest boy, and ten days afterwards my second boy.” In this same spirit, Dr. Benjamin Waterhouse, professor of medicine at Harvard, learned of Jenner’s work and vaccinated seven of his children; then, in order to test for the efficacy of the procedure, he exposed three of them to the disease at Boston’s Smallpox Hospital, with no ill effects. Here again, colleagues and family were the first to share in the risks and benefits of research.12
Even in the premodern era, neighbors and relations were not the only subjects of research. Legends tell of ancient and medieval rulers who tested the efficacy of poison potions on condemned prisoners and released those who survived. Much better documented is the example of Lady Mary Wortley Montagu, wife of the British ambassador to Turkey, who learned about Turkish successes in inoculating patients with small amounts of the smallpox material to provide immunity. Eager to convince English physicians to adopt the procedure, she persuaded King George I to run a trial by pardoning any condemned inmate at the Newgate Prison who agreed to the inoculation. In August 1721, six volunteers were inoculated; they developed local lesions but no serious illness, and all were released. As science went, the trial was hardly satisfactory and the ethics were no better—the choice between death and enrollment in the experiment was not a freely made one. But such ventures remained the exception.13
For most of the nineteenth century, research continued on a small scale, with individual physicians trying out one or another remedy or procedure on a handful of persons. Experimentation still began at home, on the body of the investigator or on neighbors or relatives. One European physician, Johann Jörg, swallowed varying doses of seventeen different drugs in order to analyze their effects; another, James Simpson, searching for an anesthesia superior to ether, inhaled chloroform and awoke to find himself lying flat on the floor.14 In what is surely the most extraordinary moment in nineteenth-century human experiments, Dr. William Beaumont conducted his famous studies on “The Physiology of Digestion” on the healed stomach wound of Alexis St. Martin. There was a signed agreement between them, though not so much a consent form (as some historians have suggested) as an apprenticeship contract; but even this form testified to the need for investigators to obtain the agreement of their subjects. St. Martin bound himself for a term of one year to “serve, abide, and continue with the said William Beaumont . . . [as] his covenant servant”; and in return for board, lodging, and $150 a year, he agreed “to assist and promote by all means in his power such philosophical or medical experiments as the said William shall direct or cause to be made on or in the stomach of him.”15
The most brilliant researcher of the century, Louis Pasteur, demonstrates even more vividly just how sensitive investigators could be to the dilemmas inherent in human experimentation. As he conducted laboratory and animal research to find an antidote to rabies, he worried about the time when it would be necessary to test the results on people. In fall 1884 he wrote to a patron deeply interested in his work: “I have already several cases of dogs immunized after rabic bites. I take two dogs: I have them bitten by a mad dog. I vaccinate the one and I leave the other without treatment. The latter dies of rabies: the former withstands it.” Nevertheless, Pasteur continued, “I have not yet dared to attempt anything on man, in spite of my confidence in the result. . . . I must wait first till I have got a whole crowd of successful results on animals. . . . But, however I should multiply my cases of protection of dogs, I think that my hand will shake when I have to go on to man.”16
The fateful moment came some nine months later when there appeared at his laboratory door a mother with her nine-year-old son, Joseph Meister, who two days earlier had been bitten fourteen times by what was probably a mad dog. Pasteur agonized over the decision as to whether to conduct what would be the first human trial of his rabies inoculation; he consulted with two medical colleagues and had them examine the boy. Finally, he reported, on the grounds that “the death of the child appeared inevitable, I resolved, though not without great anxiety, to try the method which had proved consistently successful on the dogs.” By all accounts, Pasteur passed several harrowing...
