Two revolutions frame the discussion in this first chapter.
The first is the microbiological revolution that took place between roughly 1870 and the 1930s, a revolution that changed the way doctors thought about the causes of disease and the methods that might be used to control them. The second is the revolution in mortality and morbidity that has occurred on an almost global scale since the late nineteenth century.1 The question is: What is the connection between the two?
Taking a broad overview of the second revolution, we see that average life expectancy in the world has more than doubled, from roughly 30 years' life expectancy at birth around 1830 to 67 years today, with some of the richest countries reaching life expectancies of over 80. These improvements in health began first in the industrializing, largely capitalist countries of Western Europe in the second half of the nineteenth century. Since 1950 the greatest gains in life expectancy have occurred in developing countries (starting as they did from much lower levels of life expectancy, and higher levels of disease and mortality, than those found in the developed countries). Such trends are not irreversible, as we know from Sub-Saharan Africa today, where by the year 2000 the devastating impact of the HIV/AIDS epidemic, and the return of infectious diseases once believed to have been controlled, such as malaria and tuberculosis, combined with acute poverty and deteriorating political and economic conditions, had reduced overall life expectancy at birth to 47 years.2 Nevertheless, in most areas of the world the gains in life expectancy have been remarkable. In his recent global survey of what he describes as the 'retreat of death and the democratization of survival into old age', James C. Riley goes so far as to call the mortality revolution, or 'health transition' as it is also known, 'the crowning achievement of the modern era, surpassing wealth, military power, and political stability in import'.3
The question is: what explains such fundamental transformations in health and longevity? Specifically, what part have public health interventions, especially those based on the medical sciences of bacteriology, parasitology and the vector theory of disease transmission, played in bringing about such improvements? How have eradication campaigns contributed to, or otherwise affected, the health transitions?
I hope that by opening with these two revolutions, in medicine and health, this book can make a contribution to what is obviously a major debate of our day – the debate about the sources or causes of health and ill-health in human populations.
Disease Eradication and the Epistemological Revolution in Medicine and Public Health
Some historians consider the use of the word 'revolution' to describe the developments in bacteriology as somewhat of a misnomer. This is because the innovations in medical theory introduced by the new germ theory associated with scientists such as Louis Pasteur and Robert Koch, starting in the 1860s and '70s, were absorbed by different medical communities and different medical specialties only slowly and unevenly. There were also many continuities linking the pre-bacteriological and the post-bacteriological eras, especially in the area of practical public health policies, such as the isolation of the sick, quarantines, disinfection and fumigation.4
Nevertheless, conceptually, the new microbiological sciences signalled a revolution by shifting fundamentally the understanding of disease causation from what Stephen Kunitz calls a 'multiple weakly sufficient' model to a 'necessary' causal model.5 Before the development of bacteriology, sanitarians had thought of infectious diseases as arising from multiple noxious sources, called 'miasmas' or 'effluvia', which were thought to emanate from the places in which people lived. These miasmas were conceptualized not as living organisms that reproduced themselves, but as invisible poisons of some kind, arising out of decaying matter, dirt and damp and swampy earth. They were multiple and weakly causal in the sense that the predicted effect – disease – only sometimes followed from the conditions. The goal of the sanitarians was to clean up man-made habitats (removing rubbish, cleaning up water supplies) and natural environments (by draining swampy lands), thereby reducing the miasmas and with them disease. Broad-based efforts to clean up the environment in these ways had the effect of reducing water- and sewage-borne microorganisms that caused disease, even though their role in disease transmission was not understood at the time.
With the development of bacteriology and parasitology, the sanitarian model of disease was replaced over time by a narrower concept of disease causation, focused on the microorganisms themselves as the necessary agents in disease. The 1880s and '90s saw a flurry of discoveries of the microscopic living agents of numerous infectious diseases – including leprosy, tuberculosis, cholera, anthrax, typhoid and diphtheria. The discovery of the microorganisms of disease led in turn to their experimental manipulation (by means of heat, or chemical treatment) to produce preventive vaccines (most famously, Pasteur's rabies vaccine in 1885, the first vaccine to be discovered since Jenner's smallpox vaccine in 1796), and also protective sera that could reduce the negative effects of infection after the fact (for example, the anti-diphtheria serum, introduced in the early 1890s).
'Koch's postulates', named after Germany's leading bacteriologist, encapsulated the epistemological shift involved in microbiology. In a famous paper in 1882, Robert Koch identified the tubercle bacillus for the first time. More than this, Koch developed new staining methods to make microscopic bacteria visible, and a culture medium for the propagation of bacteria. By inoculating susceptible animals with bacteria, he proceeded to reproduce the symptoms of the disease in the animals and then recover from them the same tubercle bacterium, cultivating it once more in pure culture, thus proving that the bacillus was the fundamental 'cause' of the disease.
By setting out the necessary conditions for proof of a germ cause of a disease, Koch's postulates in effect located the disease in the pathogen itself.6 Before the tubercle bacillus was identified, what we now call tuberculosis included many different conditions and terms, such as phthisis and scrofula; identifying the disease meant relying on very varied descriptions of clinical symptoms or pathological investigations. After 1882, a new unitary disease known as 'tuberculosis' emerged, based on the presence of the tubercle bacillus. Many other diseases were similarly renamed in recognition of the new epistemology. For example, 'elephantiasis', a disease that took its name from the characteristic elephant-like swellings of the limbs and genitalia found among people living in the tropics, was renamed 'filariasis' after the tiny filarial worms that were first identified in the 1870s as the cause of the infection.
A very important extension of microbiology came from the unravelling of the role of parasites in disease, such as the protozoa of malaria or the trypanosomes of African sleeping sickness. In addition, experimental work demonstrated the unexpected role played by insects in disease transmission, a finding of special significance to the emergence of disease eradication. Malaria transmission to human beings by the bite of female mosquitoes belonging to the genus Anopheles was worked out by the British army doctor Ronald Ross and the Italian malariologist Giovanni Battista Grassi between 1897 and 1898. This was followed in 1900 by the Reed Commission's proof, by experimental human inoculations, that the bite of the female mosquito of the species Aedes aegypti (then known as Stegomyia fasciata) conveyed yellow fever to man.7 These discoveries led almost immediately to the suggestion that getting rid of the insects would get rid of the diseases.
Very quickly, in fact, physicians and public health officials seized on the new germ and vector theory for the possibilities it offered in making what they saw as effective and economical attacks on diseases, by tailoring interventions to interrupt the weak point in the chain of causal transmission – whether by drugs, chemical attacks on the breeding places of insects or other means. It was rather easy, in the circumstances, to dismiss as old-fashioned, and even irrelevant, the broad efforts to clean up sewage and provide clean drinking water that reformers and sanitarians had once engaged in. Dr Walter Reed's and Dr William C. Gorgas's conclusion that dirt had nothing to do with yellow fever transmission (but mosquitoes did) was very influential in this regard. Charles Chapin, the leader of the 'new public health' in the state of Rhode Island in the United States, cited the Havana story, arguing that Reed's and Gorgas's work 'drove the last nail in the coffin of the filth theory of disease'.8
It would be a mistake to think, however, that everyone agreed about the policy choices that followed from the new bacteriological and parasitological model – that disease control methods from now on followed a single script based on the logic of the new epistemology. The new microbiology opened up not one, but a variety, of policy choices, which tended to reflect not just medical judgement, but also the social values, ideological outlooks and political positions of the people involved. Malaria, for instance, could be said by the early 1900s to be caused by four different kinds of plasmodia (the single-celled microorganisms belonging to the protozoa and known as vivax, falciparum, ovale and malariae), which produce the four different kinds of malarial fevers known in human beings (falciparum malaria being the most deadly). Malaria can also be said to be caused by the 30 to 40 different species of anopheline mosquitoes, the females of which actually convey the plasmodia of malaria to human beings through their bites.
But malaria is also caused, or determined, by larger (or deeper) social, ecological and geographical factors, such as particular geographical locations which, by virtue of their characteristic climate, terrain and overall ecology, encourage the breeding of particularly 'efficient' or deadly malaria-transmitting mosquitoes (for example, the Anopheles gambiae species found in many parts of Africa). Malaria is caused by poverty, because poverty, among its many effects, means people live in poor houses that lack the kind of windows that lend themselves to proper screening against biting mosquitoes. Poverty also stops people from getting to health clinics (if they exist), from obtaining life-saving anti-malaria drugs or paying for them, and from purchasing insecticide-saturated bed nets. Poverty causes malnutrition and reduced immunity to disease.
These different causes are all relevant to understanding malaria. We could add others. Development projects often alter patterns of human settlement by opening up new labour markets, which in turn bring non-immune, highly susceptible, people into malarial regions, often with disastrous results, such as malaria epidemics. Development projects themselves may alter local ecologies, which may in turn encourage (rather than suppress) anopheline mosquito breeding, thus increasing the risk of malaria to human populations. These geographical and ecological factors all play a part in determining whether and where people are likely to suffer from the disease of malaria.
Public health policy choices about how to proceed in order to reduce disease incidence are therefore not a given, but a matter of judgement about the most vulnerable points in a disease's transmission, the difficulty of carrying out a particular intervention, and an assessment of how long-term the impact of particular kinds of interventions may be. Groups of public health activists who share the bacteriological/parasitological understanding of disease may nonetheless disagree about what to do. In the first three decades of the twentieth century, for instance, Italian malariologists, many of whom had contributed to the development of the mosquito theory of malaria transmission, nonetheless continued to think of malaria less as a 'mosquito' disease than as a 'social disease' rooted in multiple factors of poverty. They focused their control efforts on protecting human beings with quinine and improving their living conditions. Rockefeller Foundation malariologists, on the other hand, tended to follow Ronald Ross in seeing malaria as fundamentally a mosquito disease and concentrated their interventions on finding new methods of controlling or eliminating mosquitoes or their larvae. My point here is that the public health policies derived from a particular scientific theory are variable, and tend to reflect the political contexts and ideological and other working assumptions of the groups involved. Science does not operate in a value-free or neutral environment, but is given meaning, and creates new meanings, in settings that are specifically social, economic and political as well as intellectual.9
Historically, however, we tend to find that the microbiological revolution was associated with more narrowly focused methods of disease control than in the previous era of public health. Attention was increasingly brought to bear on the proximate causes of infection, such as pathogens and vectors, while by-passing, if not actually ignoring, the deeper, and more intractable, social, economic and political factors that together with biological ones are responsible for ill-health. This approach has had a very long-lasting influence on the practice of medicine and public health, from the early twentieth century to the present.
In particular it defined the approach of most eradication campaigns, which were specifically designed as time-bound, technical and expert interventions to eliminate diseases across the world, one by one, without engaging with the social and economic determinants of ill-health. The words 'eradicate' and 'eradication' began to be used in their more strict and absolute sense in the first decade of the twentieth century, as discoveries in the new public health took off, with parasites and even hitherto completely unknown human diseases being discovered by the new experimental medicine.10 In 1909, Rockefeller money financed a Sanitary Commission for the Eradication of Hookworm Disease; this was followed in 1915 by its Yellow Fever Commission, which set out for Latin America to investigate the possibility of eradicating yellow fever from the entire Western Hemisphere. The next year, in 1916, the American statistician Frederick L. Hoffman presented a 'Plea and a Plan for the Eradication of Malaria Throughout the Western Hemisphere'. Further discoveries in medicine and technology in the Second World War, such as penicillin, chloroquine and DDT, extended the reach of science-based public health, and led in the second half of the twentieth century to some of the most extensive eradication campaigns ever seen.
The basic model at work in all of these different campaigns was that of etiological universalism: wherever the disease in question was found, it was presumed to have the same cause and to be open to elimination by the same methods, regardless of differences in the class, economic and geographical situations of the human populations involved. It was in this sense that eradication became international.
The Mortality Revolution and Public Health
Eradication campaigns have always had their defenders and their detractors, with their popularity understandably declining when an eradication campaign falters, and rising when the goal appears to be within reach. Underlying these swings in attitude are questions of feasibility, effect and alternatives. Is eradication feasible, given the technical, economic and political demands it makes on societies and international aid? What effect does removing a specific disease from a population have, beyond the very important one of simply removing a single, burdensome infection? Is eradication the kind of public health intervention that contributes to overall health improvements, or does it tend to override or close out alternative paths to human well-being?
In effect, these questions boil down to the issue of what the relationship is between public health interventions and secular improvements in human health. Those involved in public health naturally enough work with the assumption that their actions matter a great deal; and indeed, one has only to think of the success in reducing HIV/AIDS in Brazil, through public health activism and the free distribution of antiretroviral drugs, and compare this outcome with, say, HIV/AIDS in South Africa, to appreciate their point.11
Yet paradoxically, perhaps, the dominant paradigm in public health says otherwise. This paradigm, often referred to in public health circles as the 'McKeown thesis', after the British historical epidemiologist Thomas McKeown, whose popular and accessible books were published in the 1970s, still sets the terms of the debate today. McKeown maintained that the mortality revolution that occurred between roughly 1860 and the 1930s in Britain owed little if anything to medicine or targeted public health interventions, and almost everything to economic growth and rising standards of living, especially improved nutrition. Generalizing from...