WOMEN’S HEALTH ACTIVISM HAS shaped American medicine throughout its history. In the 1830s and 1840s, as the Popular Health Movement resisted the regular doctors’ attempt to gain a professional monopoly, women formed “Ladies’ Physiological Societies” to learn about their bodies and trade home remedies. In the early twentieth century, women reformers agitated for public health campaigns to combat infant and maternal mortality and fought for the legalization of birth control. The women’s health activists of the 1960s and 1970s successfully transformed many aspects of medicine: they won the legal right to abortion, established women’s health clinics, and secured important patient rights from a condescending medical system that often withheld information about the risks of drugs and procedures. And through self-help groups and popular manuals like Our Bodies, Ourselves, they sought, like their nineteenth-century predecessors, to give women the knowledge to better take care of their health.
It wasn’t until the late eighties, when women finally made up a critical mass of those within the medical community, that they were in a position to successfully call attention to one of the most insidious ways gender bias was embedded in medicine: the medical knowledge that had been accumulating over the course of the twentieth century, and especially in the previous few decades as biomedical research exploded in the United States, was disproportionately benefiting men. Science simply knew less about women’s bodies and the diseases that befell them—and, worse still, the medical community was not attuned to this failure and seeking to correct it. In 1990, a coalition of women’s health advocates, biomedical researchers, and lawmakers came up with a strategy to put this knowledge gap on the public’s radar—and we’ve been playing catch-up to close it ever since.
In 1985, a Public Health Service task force had released a report, spearheaded by Ruth Kirschstein, the first and only woman serving as an institute director at the NIH at the time, warning that
“the historical lack of research focus on women’s health concerns has compromised the quality of health information available to women as well as the health care they receive.” In response, the NIH had announced a new policy that “urged” researchers who received federal research grants to include women in their clinical studies unless there was a good reason not to. But women within the NIH—who at the time occupied less than a third of senior policy and research positions—knew the policy hadn’t changed matters much.
So in the late eighties, a group of women scientists, working both within and outside of the NIH, formed the Society for the Advancement of Women’s Health Research—now called the Society for Women’s Health Research (SWHR)—and teamed up with allies in Congress to expose the problem. They decided to demand an audit of the NIH’s efforts by the independent oversight arm of the federal government, the General Accounting Office, now called the U.S. Government Accountability Office (GAO).
In 1990, the GAO released its findings: as expected, the NIH had done little at all to implement its policy on women’s inclusion. The application guidelines for researchers didn’t mention it, and in fact, many NIH staff weren’t even aware the policy had been adopted. Of the NIH-funded studies reviewed by the GAO, one-fifth didn’t mention the gender of the study subjects, and a third claimed the study would include women but didn’t specify how many. The NIH couldn’t actually say with any certainty how many women were included in the research it funded, with billions of taxpayer dollars, to further the nation’s health.
Just as the members of Congress and advocates had planned, the GAO report garnered widespread public attention. In hearings flooded with reporters,
the cochairs of the Congressional Caucus for Women’s Issues lambasted NIH leaders, and the medical research community as a whole, for compromising women’s health. “American women have been put at risk by medical practices that fail to include women in research studies,” said Rep. Patricia Schroeder. “NIH’s attitude has been to consider over half the population as some sort of special case,” Rep. Olympia Snowe charged.
Given the NIH’s lack of record keeping, it was impossible to say exactly how underrepresented women were, but the public learned that women had been left out of many of the largest, most important clinical studies conducted over the previous couple of decades. The Baltimore Longitudinal Study of Aging, which began in 1958 and purported to explore “normal human aging,” didn’t enroll any women for the first twenty years it ran. The Physicians’ Health Study, which had recently concluded that taking a daily aspirin may reduce the risk of heart disease? Conducted in 22,071 men and zero women. The 1982 Multiple Risk Factor Intervention Trial—known, aptly enough, as MRFIT—which looked at whether dietary change and exercise could help prevent heart disease: just 13,000 men.
The default to studying men at times veered into absurdity: in the early sixties, observing that women tended to have lower rates of heart disease until their estrogen levels dropped after menopause, researchers conducted the first trial to look at whether supplementation with the hormone was an effective preventive treatment. The study enrolled 8,341 men and no women. (Although doctors began prescribing estrogens to postmenopausal women in droves—
by the midseventies, a third would be taking them—it wasn’t until 1991 that the first clinical study of hormone therapy was conducted in women.) An NIH-supported pilot study from Rockefeller University that looked at how obesity affected breast and uterine cancer didn’t enroll a single woman. While men can develop breast cancer—and a small number of them do each year—as Rep. Snowe noted drily at the congressional hearings,
“Somehow, I find it hard to believe that the male-dominated medical community would tolerate a study of prostate cancer that used only women as research subjects.”
In 1992, a couple of years after taking the NIH to task, the Congressional Caucus for Women’s Issues asked the GAO to review the state of affairs at the Food and Drug Administration (FDA). While the NIH is the largest public funder of biomedical research, most studies of drug treatments are funded by the private pharmaceutical companies that develop them and are reviewed by the FDA as part of the drug-approval process.
Since 1977, the agency had had in place a policy forbidding women of “childbearing potential” from participating in early-phase drug trials. While women were allowed to be included in later studies—after basic safety and dosage had been established—
the GAO report found that women were underrepresented in 60 percent of recent clinical drug trials. And while 1988 FDA guidelines urged drug companies to analyze their data by gender when women were included, nearly half the studies reviewed by the GAO failed to do so. Furthermore, despite the fact that millions of American women were on the pill, a mere 12 percent of the recently approved drugs had been studied for potentially dangerous interactions with oral contraceptives.
Advocates had always intended the GAO report on the NIH’s poorly implemented policy to serve as a spotlight that could be shined on other ways women’s health was being shortchanged in the biomedical community. They charged not only that health conditions that affected both men and women equally or were more prevalent among men had been studied primarily in men, with no attention to the possibility that there might be sex/gender differences, but also that conditions that predominantly affected women had been a low priority on the research agenda altogether.
Chief among the conditions that predominantly affected women was, of course, reproductive health. In the wake of the GAO report, other medical-professional organizations, including the Institute of Medicine (IOM) and the American College of Obstetricians and Gynecologists, joined the growing chorus claiming that funding for research on women’s reproductive health was inadequate. The political controversy around abortion was part of the problem, but such research was also marginalized because, with no obstetrics and gynecology program, there was no clear home for it within the NIH.
In fact, there were only three gynecologists on staff at the NIH, compared to thirty-nine veterinarians.
And it wasn’t just reproductive health—which is what “women’s health”
tended to get reduced to—that was getting short shrift. One of the advocates’ important claims was that, to the extent that medicine paid attention to women’s unique needs at all, it had been myopically focused on the parts of women’s bodies that most obviously differed from men’s.
“The medical community has viewed women’s health with a bikini approach, focusing essentially on the breast and reproductive system,” Dr. Nanette Wenger, a leading expert on women’s heart disease, wrote. “The rest of the woman was virtually ignored in considerations of women’s health.” This kind of “bikini medicine” overlooked the fact that women had the same top three causes of death—heart disease, stroke, and cancers of all kinds—as men did, and also suffered disproportionately from many nonreproductive health conditions that had long been neglected.
Indeed, in the late eighties, a Public Health Service task force had crunched the numbers and found that
only 13.5 percent of the NIH’s most recent budget had gone toward research on conditions “that are unique to or more prevalent or serious in women, have distinct causes or manifest themselves differently in women, or have different outcomes or interventions.” This was an extensive list that included reproductive health concerns but also extended to an array of other widespread conditions that disproportionately affect women, including breast and gynecological cancers, Alzheimer’s disease, depression, osteoporosis, and autoimmune diseases. And many of the conditions that will be discussed in this book weren’t yet receiving any federal research funding at all.
In the immediate aftermath of the GAO report, the NIH formed the Office of Research on Women’s Health (ORWH), and
the new office summed up the state of affairs in its first research agenda in 1991, noting a “pervasive sense in the research community that many of the health issues of women are of secondary importance, especially those that occur solely in women and those that occur in men and women but have already been studied chiefly in men.”
It went on to say, concerning the three leading killers of all Americans, “The startling realization is that most of the biomedical knowledge about the causes, expression, and treatment of these diseases derives from studies of men and is applied to women with the supposition that there are no differences.”
WHY WERE WOMEN NEGLECTED?
Perhaps the most generous explanation for women’s exclusion from clinical studies is that they had gotten caught up in the protectionist spirit of the era. In the seventies, there’d been a growing recognition—long overdue—of the risks of medical research.
And, in part, women had been excluded from clinical research, especially drug studies, for their own good—or at least for the good of their hypothetical fetuses.
This paternalistic concern was quite new, however. Women had not been spared from the rampant, unregulated, and largely uncontrolled experimentation that constituted heroic medicine in the eighteenth and nineteenth centuries. The well-off women who could afford regular doctors’ fees were subjected to bloodletting, purging, all manner of drugs, and, as is discussed more in the next chapter
, a smorgasbord of useless or dangerous procedures performed on their reproductive organs. And, as Ehrenreich and English point out, “Though middle-class women suffered most from the doctors’ actual practice, it was poor and black women who had suffered through the brutal period of experimentation.” In the nineteenth century, for example, Dr. J. Marion Sims, known as the “father of modern gynecology,” developed a groundbreaking cure for fistulas by practicing the surgery on enslaved women, whom he purchased specifically for that purpose, and later on poor Irish immigrants.
In the twentieth century, medicine slowly but surely became more rooted in science. The post–World War II period saw the rise of modern clinical research—with its gold standard of the double-blind, randomized, controlled trial—and a huge influx of federal funding turned the United States into a world leader in biomedical research. But this expansion preceded any consensus that the patients involved in clinical research shouldn’t be treated as unwitting guinea pigs. At least through the sixties, it remained socially disadvantaged and/or institutionalized groups—like the poor, prisoners, soldiers, and the mentally ill—who were most likely to be used in medical studies. Finally, a few shameful examples of vulnerable Americans experimented on without their consent—most infamously, the Tuskegee experiment, in which poor black men were left untreated for syphilis—made headlines.
In the aftermath, in the late seventies, the United States
finally put in place some enforceable ethical standards for research involving human subjects.
Meanwhile, two high-profile disasters had underscored the risks experimental drugs could pose to pregnant patients and their fetuses. In the late fifties, thalidomide was used in dozens of countries as an over-the-counter sedative and antinausea remedy in early pregnancy. Though it was never given FDA approval, some women in the United States nonetheless received it. By the early sixties, it was determined that the drug was to blame for severe limb deformities in over 10,000 children. Then, in the late sixties, many of the daughters of patients who’d taken the synthetic estrogen diethylstilbestrol (DES) while pregnant started developing a rare vaginal cancer. The drug had been widely prescribed to prevent miscarriage, despite large studies suggesting it was ineffective.
At a few respected American academic medical institutions, doctors had informed their pregnant patients only that DES was a “vitamin” that would help them grow “bigger and better babies.”
The public outcry over these cases helped spur some much-needed federal drug regulation. Since 1938, drug manufacturers had been required to show the FDA that their medication was safe before marketing it, but the agency’s regulatory power was fairly minimal. In the aftermath of the thalidomide tragedy, however, Congress passed the Kefauver-Harris Amendment, which significantly beefed up the drug-approval process, essentially putting in place most of the regulations we take for granted today.
Companies seeking FDA approval for their drugs were now required to demonstrate not just safety but also efficacy in well-controlled studies; they had to secure the fully informed consent of their study subjects; the drug’s advertising had to be up-front about its side effects and potential risks; and the FDA would track reports of any adverse reactions once the drug hit the market.
Along with these welcome reforms to clinical research and drug regulation, however, were some that crossed the line from overdue protection to sexist paternalism. The new federal guidelines on the ethical treatment of clinical subjects identified groups that may be in need of special safeguards because they “are likely to be vulnerable to coercion or undue influence”—a list that included pregnant women alongside
children, prisoners, mentally disabled people, and the economically disadvantaged. While the ethical inclusion of pregnant women in research is certainly complicated, as two women’s health advocates mused in the nineties,
“one wonders what aspect of pregnancy renders women particularly vulnerable to ‘coercion’ or ‘undue influence.’”
Meanwhile, the 1977 FDA policy didn’t stop at excluding patients who were actually pregnant. It expressly prohibited women of “childbearing potential” from participating in early-phase drug studies, except in the cases of life-threatening diseases. Critics pointed out that this treated every menstruating woman as if she were potentially pregnant—a “walking womb”—an infantilizing position that implied women couldn’t be trusted to know their risk of unintended pregnancy, take steps to prevent it, and make their own decision should they accidentally become pregnant during the trial. Even lesbian women, single women, women using contraception, and women whose partners had had a vasectomy weren’t allowed to participate. In a stark double standard, the policy evinced little concern at all that men of reproductive age could be exposed to drugs that might harm the genetic material they contribute to their future offspring. And while the FDA ban was limited to only early-phase studies, it ended up having a broader chilling effect, making drug researchers hesitant to enroll women during their fertile years at all.
Women’s health advocates had certainly been part of the chorus of voices calling for better drug regulation and respect for research subjects’ informed consent. Policies that completely stripped women of their agency to weigh the risks of participating in studies—of treatments that would potentially benefit themselves—because of theoretical harm to their hypothetical fetuses were quite a different matter. Not only was suc...