It will be obvious to anyone who so much as sneaks a look at this book’s title that it is premised on the notion that an understanding of a society is crucial if we are to grasp the health of its population. It is a notion that has left sociology and entered that precarious world of common sense. The health and life expectancy of populations, groups and individuals, contemporary wisdom acknowledges, are at least in part a function of social location, circumstances and learned or imitated behaviours. Healers and the institutions through which they ply their wares are no less socially anchored. A little more reflection may be required, however, to appreciate that the health of a people, and the mode of delivery of any system of treatment and care, also offers insight into a society. In this opening chapter, I try to show why health is a social lens as well as a social product. It is a chapter of two parts, the first outlining a standard account of changes in health and, in particular, longevity over time, the second interrogating this account.
Health, death and some parameters
No single definition of ‘being healthy’ spans the vagaries of time and space. Indeed, the concept has a peculiarly modern ring to it. Even what we would understand as ‘threats to health’ do not travel well. It is a modern conceit, in other words, to presume that ‘our’ modern – and occidental – concepts and discourses reach back beyond late-eighteenth-century Europe or extend to developing societies, let alone stretch back to the origins of human sociability amongst nomadic hunter-gathering clusters prior to the Neolithic revolution of around 8,000 to 3,000 BC.
Unsurprisingly, death travels better: it is one thing to debate health status, wellbeing and quality of life, quite another to reflect on the finitude of the human lifespan on Earth. On the basis of analyses of surviving artefacts, it has been suggested that the expectation of life at birth for hunter-gatherers eking out a life-course of hardship, unpredictability and subsistence probably did not much exceed 20 years or so. This is naturally a ‘guesstimate’. And those who made it through the perilous initial weeks, months and years of infancy and childhood doubtless added on a good few more years. How does this compare with the more accurate data available in subsequent social formations?
There is a view that life expectancy at birth actually decreased prior to the Neolithic revolution as climate change affected diet (Roberts & Cox, 2003). What is certainly striking is that it increased so slowly over such a long period after the Neolithic revolution. As humans settled and tamed the land, turned to agriculture, established and patrolled their boundaries as newborn nation states and pioneered industrial production, the number of years the newborn could expect to live crept up at a snail’s pace. But it bears repetition that surviving childbirth and infancy has always been critical. Historian Ian Mortimer (2014: 3) puts it well in his study of change from 1000 to 2000:
even in the Middle Ages, some men and women lived to 90 years of age or more. St Gilbert of Sempringham died in 1189 at the age of 106; Sir John de Sully died in 1387 at 105. Very few people today live any longer than that. True, there were comparatively few octogenarians in the Middle Ages – 50 per cent of babies did not even reach adulthood – but in terms of the maximum lifespan possible, there was little change across the whole millennium.
When Victoria ascended the throne in the first country to industrialise, imperial and liberal capitalist Great Britain, life expectancy at birth was only around 40 years (a year or two more for women than men). Given that in this same country we now anticipate living approximately 80 years (77 for men, 81 for women), it follows that there has been an extraordinary increase in life expectancy at birth in less than two centuries. But Britain here represents the developed, prosperous, imperialist West. Elsewhere across the twenty-first century globe, life expectancy at birth still languishes: in ‘developing’ Ethiopia it is 41 for men and 43 for women, and in Sierra Leone 33 for men and 35 for women.
So, if the years one might reasonably expect to live have taken off in nation states like Britain, this is certainly far from the case in most developing countries. Three questions might be posed here. The first and obvious one is: ‘why this dramatic change in many developed nations?’; the second is: ‘why has this change not been echoed in developing nations?’; and the third is: ‘what is it that comparative national statistics on life expectancy at birth are not telling us?’. No answer to any of these questions is beyond dispute. Nor is this simply a function of the prudent scientific admission of ‘fallibilism’, namely, the recognition that one might at some future point be proven mistaken about anything or everything.
The first query has been comprehensively addressed and argued over. In the pre-agricultural era of the hunter-gatherers, it is probable that many deaths resulted from malnutrition, a plethora of environmental hazards and violent conflict. Between 8,000 and 3,000 BC, however, new patterns of sociality emerged. Table 1.1 maps these modes of sociality from the time of the Neolithic revolution to the present.
Table 1.1 A Chronology of Human Social Formations
From the beginning of the Neolithic revolution, occurring from 8,000 to 3,000 BC, sociopolitical evolution encompassed four principal stages:

1 Bands – small nomadic groups of up to a dozen hunter-gatherers; democratic and egalitarian (close to Marx’s ‘primitive communism’).
2 Tribes – similar to bands except more committed to horticulture and pastoralism; ‘segmentary societies’ comprising autonomous villages.
3 Chiefdoms – autonomous political units under the permanent control of a paramount chief, with central government and hereditary, hierarchical status arrangements; ‘rank societies’.
4 States – autonomous political units; centralised government supported by a monopoly of violence; large, dense populations characterised by stratification and inequality.

3,000 BC witnessed the birth of fully fledged agrarian states, displaying a number of core characteristics and remaining the predominant form of social organization until around 1450 AD. These core characteristics can be summarised as follows:

• a division of labour between a small landowning (or controlling) nobility and a large peasantry; this was an exploitative division backed by military force.
• the noble–peasant relationship provided the principal axis in agrarian societies: it was a relationship based on production-for-use rather than production-for-exchange.
• differences of interest between nobles and peasants, but not overt ‘class struggle’.
• societies held together not by consensus but by military force.
• societies relatively static and unchanging: there was a 4,500-year incubation period prior to the advent of capitalist states.

The transition to capitalism took place in the ‘long sixteenth century’, that is, between 1450 and 1640. Marx saw this transition as of major significance, noting three vital characteristics of the new capitalist system:

• private ownership of the means of production by the bourgeoisie.
• the existence of wage-labour as the basis of production.
• the profit motive and long-term accumulation of capital as the driving aim of production.

It is customary to discern reasonably distinct stages of capitalism. Thus a transition to ‘merchant capitalism’ is typically dated from 1450 to 1640, followed by a period of consolidation and solidification, characterised by slow, steady growth between 1640 and 1760. 1760 is often cited as a marker for a switch to ‘industrial capitalism’, which is itself often divided into stages:

1 Early industrial, 1760–1830: textile manufacturing dominated by Britain.
2 Liberal, 1830–1870: railroads and iron dominated by Britain and later the USA.
3 Liberal/Early Fordist, 1870–WW1: steel and organic chemistry, with the emergence of new industries based on producing and using electrical machinery, dominated by the USA and Germany.
4 Late Fordist/Welfare, WW1–1970: automobiles and petrochemicals, dominated by the USA.
5 Financial, 1970 onwards: electronics, information and biotechnology, plus global finance, dominated by the USA, also Japan and Western Europe.
Sociologists tend to be more open to periodization than are the more Whiggish of historians, but, setting finer or philosophical controversies aside for the moment, the periodization outlined in the table allows for a framing, mapping and contextualizing of sociopolitical change over time. It offers a broad-brush scoping of our past that serves present purposes well.
The switch to full-blown agricultural states as bands, tribes and chiefdoms became more peripheral after 3,000 BC impacted heavily on the major causes of death. Agriculture required permanent settlement. The development of cereals permitted more mouths to be fed, in the process supporting higher population densities, but also, paradoxically, narrowed and diminished people’s diet and immunity to infection. So, unbeknown to inhabitants of all strata, agricultural states delivered novel threats to people’s health in the guise of several potent infectious diseases. Nor were there any sanitary arrangements, the significance of which was also unknown. This led to the contamination of water supplies.
With the advantage of hindsight, the major infectious diseases showed four modes of transmission: (a) airborne (like tuberculosis); (b) waterborne (like cholera); (c) food-borne (like dysentery); and (d) vector-borne (like plague and malaria). The infamous plague, the Black Death of 1348, decimated the populations of England and of Europe as a whole, killing around one-quarter of the people. It made its last appearance in Britain in 1665 and died out thereafter; it was spread by fleas carried by black rats and lost its potency as black rats were displaced by their brown brethren, the latter being less prone to infest human settlements (Fitzpatrick & Speed, 2018).
The thesis that the rapid acceleration of longevity after the mid-nineteenth century in Britain was down to the diminishing impact of the infectious diseases has a solid epidemiological pedigree (although it should be noted that these diseases did not all decline simultaneously). Fitzpatrick and Chandola (2000: 102) summarise:
the declining significance of infectious diseases was the single main reason for the dramatic increase in life expectancy … Conversely, the main reason for the increase in heart disease, strokes and cancer has been that individuals were increasingly likely to reach the older ages at which these diseases typically, although not exclusively, occur.
It should be noted that improvements in mortality occurred at different times for different age groups. In Britain, the first marked improvement occurred in the 5–24 age group around 1860. Infant mortality fell steadily from 1900 with conspicuous accelerations during the early and late years of postwar welfare capitalism: in 1900, one-quarter of all deaths in the population occurred in the first year of life, but by the end of the century this had declined to 1%. Mortality rates for the 15–44 age group improved in the course of the twentieth century, although with interruptions for the influenza epidemic of 1918 and the two world wars. Improvements for those aged 45–54 also started at the beginning of the century; for those aged 55–74, mortality declined from the 1920s; for those aged over 74, the decline began in the years after the Second World War. For older age groups, the most marked improvements in mortality have taken place since the 1970s (Fitzpatrick & Chandola, 2000; Scambler, 2002).
By the mid-twentieth century, the so-called degenerative diseases like cancer and cardiovascular disease had taken over as the major causes of death. Moreover, Britain was unexceptional: mercantilist-to-capitalist industrialization, whenever and wherever found, seemed somehow to precipitate this shift. McKeown (1979) was the research pioneer. He challenged the obvious inference that scientific advance had allowed medicine to deliver on its more extravagant and newly ‘professionalised’ promises.
A number of possible causes for the general displacement of the infectious diseases as the major causes of death have been mooted, apart that is from scientific and medical advancement and intervention. These fall into four categories: (a) a decline in the virulence of the organisms responsible for the diseases; (b) a reduction in human exposure to the infectious organisms (e.g. through reduced contamination of supplies of water and food); (c) an enhanced genetic human resistance to infection due to Darwinian selection processes; and (d) an increased human resistance to infection via improved nutrition and general fitness that (i) reduced the probability of being infected and/or (ii) increased the chances of recovery from infection. It is not of course simply a matter of picking out one from (a) to (d). No more can it be assumed that (a) to (d) can be prioritised across all the major infectious diseases or all societies. In fact, most commentators discount the salience of (a) and (c) in Britain on the dual grounds that the decline across the infectious diseases occurred over too short a period of time and across all the major diseases. This leaves (b) and (d).
The evidence in favour of (d), in particular the improved nutritional intake resulting from innovative agricultural techniques and the speedier and more efficient transportation of produce under liberal capitalism, seems indubitable. No less important, the nineteenth century as a whole witnessed an unprecedented increase in real wages and the standard of living in Britain. As Fitzpatrick & Speed (2018) remark, (d) was for McKeown (1979) underpinned by evidence from the developing world: he cited a World Health Organization (WHO) report that one-half to three-quarters of all recorded deaths of infants and young children could be attributed to a combination of malnutrition and infection. This does not mean, however, that (b) is insignificant.
(b), or reduced exposure to infection, played its role too. The iconic case is John Snow’s experimental or proto-epidemiological removal of a pump handle in London’s Soho, affirming cholera’s status as a waterborne disease. This initiative epitomises the emergence in the nineteenth century of a novel and powerful public health movement. Moves were made to prevent the contamination of supplies of drinking water by sewage (gastroenteric diseases were under control by the beginning of the twentieth century, dramatically impacting on infant mortality); and the sterilization of milk and its more hygienic transportation in particular, and improved food hygiene in general, comprised another environmental shift that contributed to the decline in infectious disease mortality (Fitzpatrick & Speed, 2018).
McKeown and his public health disciples have not had it all their own way, however. While it must be accepted that environmental factors were critical for the historically ‘recent’ extension in longevity, at least in developed nations, there are those who trace this back and attribute causal priority to Adam Smith’s ‘invisible hand’ of capitalism. They are right to look for signs rather than symptoms, I shall argue later, but the appropriate diagnosis eludes them.