Part I
THE SOCIAL DIMENSIONS OF THE FOOD SYSTEM
1
THE ORIGINS OF HUMAN SUBSISTENCE
Any attempt to make sense of the contemporary realities of food and eating from a sociological viewpoint must involve some consideration of the past. If we wish to try to understand the food production systems upon which we depend and the food consumption patterns in which we participate, then a familiarity with certain crucial historical themes is essential. Indeed, it is also necessary to push beyond the boundaries of recorded history into the even more speculative and hazy realms of prehistory. The aim of this first chapter is to begin to provide that background. Of course, in this context, such a background cannot be provided in any great detail, since the history and prehistory of food is a vast subject in itself, covering broad sweeps of human activity and experience. Rather, the intention is to draw attention to a number of key ideas which can enhance our comprehension of the foundations of human foodways, foundations which, by their very nature, usually remain unexamined.
Our starting point will be a consideration of the diet of early humans, a contentious and complex question, but one which can lead to insights whose implications are as important now as they were in the early stages of human evolution. The issue of the basic forms of human subsistence will then be raised, along with a discussion of what is arguably one of the most important transitions in human social organization, the shift from an ancient, long-standing dependence on hunting and gathering to food production based on the techniques of agriculture. However, it will be argued that conventional views of this transition may require reconsideration and revision. Finally, we will go on to examine the enormous implications of this transition for human social relations and arrangements, not least of which was the facilitation of the emergence of increasingly complex and large-scale social systems.
The Early Human Diet
Perhaps the most basic nutritional question of all relates to the nature of the ‘original’ human diet. In other words, we need to ask how our evolutionary history as a species has shaped, or been shaped by, our dietary patterns. Attempting to build up a detailed picture of the foods which our distant forebears ate, and the relative importance of the various items which figured in ancestral diets, is a task beset with enormous difficulties. Foodstuffs are, by and large, relatively perishable, and thus traces of them rarely survive over long periods of time to provide the archaeologist with direct evidence about dietary practices and subsistence strategies. Thus, all too often, investigators are compelled to rely upon the indirect evidence provided by more durable artefacts like tools, or upon the animal bones which are assumed to be the remains of ancient meals. However, such evidence can be controversial and subject to conflicting interpretation. As Binford (1992) has pointed out, the fact that animal bones and stone tools are found together in particular caves and rock shelters does not in itself demonstrate that the hominids who occupied those sites were hunters, or were solely responsible for these accumulations, since carnivores like wolves, leopards and hyenas also occupied such sites and also created accumulations of prey animals’ bones.
There are, however, alternative ways of addressing these questions, and one of these involves not only looking at evidence about the diet of early hominids, but also comparing human dietary patterns with those of modern non-human primates. This is the approach adopted by Leonard and Robertson (1994), who re-examine a range of nutritional and physiological data relating to humans and their primate relatives. They point out that among primates in general there exists a consistent negative correlation between body size and diet quality. That is, large primates (such as gorillas and orang-utans) depend upon diets which consist of large amounts of bulky, hard-to-digest foods which are relatively low in nutrients per unit of weight (for example, plant parts like leaves, stems and bark). In contrast, smaller primates’ diets consist of much higher quality foods, in the sense that such foods are much more densely packed with nutrients and contain far lower proportions of indigestible bulk. Their diets include the reproductive parts of plants, like seeds, nuts, fruits, bulbs and tubers, and a wide range of small animals. The authors give the example of the pygmy marmoset, which tends to focus its feeding behaviour on protein-rich insects and energy-rich plant gums and saps.
The explanation for this relationship between body size and diet quality appears to be related to metabolism, that is, the sum total of the chemical processes which drive and maintain a living body. Smaller animals have higher metabolic costs and energy needs per unit of body weight than larger animals. Although the latter have higher total energy needs, they need less per unit of weight, and can, therefore, make a successful living out of consuming large quantities of low-quality but relatively abundant foods. However, what is striking about humans is that they do not fit neatly into this broad overall picture. The authors examine data on the dietary intakes of a number of peoples dependent on foraging (i.e., the exploitation of wild animals and plants) for subsistence. These groups show a far higher level of dietary quality (as measured by a composite index) than would have been expected for primates of their size. They also consume a far higher proportion of animal material (material defined as high-quality in terms of nutrient density) than comparably sized primates (e.g., anthropoid apes). What is more, when the authors looked at data from agricultural societies, where much lower quantities of animal products like meat are consumed, the diet quality is still significantly higher than would have been predicted for primates in general in that size range, since the grains and cereals eaten by these human consumers are much richer in calories than are fibrous leaves and stems.
We are therefore faced with a puzzle, in that humans eat a diet which is of much greater quality than would be predicted by their body size. Indeed, Leonard and Robertson (1994) also demonstrate that this quality is higher than might be expected from humans’ resting metabolic rate, a baseline which expresses the amount of energy required for metabolism when the body is at rest. In fact, the authors go on to argue that the key to this puzzle is the size of the human brain. The human brain is, of course, relatively large in relation to body weight compared with other primates, and for that reason its energy demands are proportionately higher. They calculate that humans spend around three to four times more on brain metabolism than do other primates. Thus, in humans, 20–25 per cent of the energy expenditure making up the resting metabolic rate goes to the brain, as opposed to an average of 8–9 per cent for our non-human primate relatives. There seems to be a close association between the possession of a large brain with a high energy requirement and the consumption of a high-quality, nutrient-rich diet.
Switching their attention to the archaeological data, Leonard and Robertson (1994) note that early members of our own genus Homo, specifically Homo habilis and Homo erectus, seem to show signs of this characteristically human relationship between brain size, body size and resting metabolic rate. Archaeological evidence also indicates that these species ate higher-quality diets (in that they contained a higher proportion of animal material) than did, for example, the ancient ape-like primates belonging to the genus Australopithecus. All this would appear to indicate that in the course of hominid evolution, increasing brain size (with a concomitant increase in the brain’s energy demands) would have had to be associated with a shift towards an increasingly nutrient-rich diet. While these authors, quite explicitly, do not argue that somehow dietary factors caused changes in human brain evolution, they do seem to demonstrate that a move towards a higher-quality diet would have been a necessary condition for sustaining an evolutionary trend towards a larger and more powerful brain.
If we accept the arguments of these two biologists, we are led to a striking conclusion: the very basis of our human distinctiveness, our large and uniquely sophisticated brain, appears to demand that we maintain a high-quality, energy-rich diet. This appears to be confirmed by the fact that the human digestive tract is relatively short compared to most other primates, indicating its adaptation to high-energy, easy-to-digest foods. All these factors add up to what are, in effect, the nutritional ‘facts of life’ for human beings, facts rooted in our evolutionary past and our actual physiology. However, to state this is not to argue for a form of biological determinism. Our humanness imposes certain nutritional imperatives upon us, making us omnivores with a need for a high-quality diet, but these imperatives do not determine human nutritional endeavours and choices; rather, they set a framework within which they are played out. Within that framework there is the scope for enormous variation, since human ingenuity is capable of generating an apparently infinite variety of solutions within an impressive range of cultural and ecological settings.
Hunting and Gathering
However, varied as these solutions may be, attuned as they are to the mix of opportunities and constraints presented by their own unique circumstances, they can be placed into more general categories. Quite clearly, if we wish to consider the earliest forms of human subsistence, then we will need to focus our attention upon that broad category of activities that are often referred to as ‘foraging’. In this sense, the term is used to describe the exploitation for food of plants and animals over which the user has little or no control (in other words, organisms which can be seen as ‘wild’). When the organisms in question are animals that have sufficient agility and alertness to require active pursuit or stalking, the term ‘hunting’ is used. When the organisms consumed are plants, or animals with little or no mobility, the term ‘gathering’ is conventionally applied. (Trapping can be seen as an intermediate category, since certain forms of it permit the capture of active animals without the need for pursuit.) The combination of hunting and gathering has provided our species with its sustenance for most of its evolutionary history.
It is intriguing to note, however, that from the early days of scientific debate concerning ancestral patterns of human subsistence, far more stress has been placed upon the hunting component of the hunter/gatherer lifestyle than on the gathering component. Hunting has conventionally been seen as having exerted a potent formative influence upon human social and physiological evolution. Perhaps the clearest and most explicit academic expressions of the role of hunting in the development of human social, cultural and physical characteristics can be found in papers which emerged from a highly influential symposium entitled ‘Man the Hunter’, held at the University of Chicago in 1966. Thus, Washburn and Lancaster (1968) argue that human hunting (an activity almost entirely the preserve of males) represents a form of subsistence and a way of life that has provided the common factors which have dominated human evolution for over 99 per cent of its course. They point to key physiological and social adaptations which they see as closely associated with hunting: the development of a large brain, the use of tools and weapons which demand high levels of skill in their production and use, a sexual division of labour with females concentrating on food gathering and child rearing while being supplied with meat by males, and the development of complex forms of communication and co-operation between males to facilitate hunting success. From this list, it is apparent that this view of hunting sees it as the root of human features as diverse as skill in the creation of artefacts, the male-dominated family and the emergence of that most sophisticated of communication devices, language.
This hunting-centred view is presented even more emphatically by Laughlin, who goes as far as to assert that, ‘Hunting is the master behaviour pattern of the human species’ (Laughlin 1968:304). He maintains that the fact that the human species achieved worldwide distribution while dependent on hunting for subsistence demonstrates the universality of this particular adaptation. What is more, he also suggests that an impressive range of human physical attributes arises directly from our hunting past. These include muscular strength and a high load-carrying capacity, sustained endurance at running speed, a high level of agility and manual dexterity, excellent colour vision, good hearing and a remarkably tough and durable outer skin. These features provide a degree of physiological flexibility sufficient to enable humans to colonize habitats far more diverse than those available to any other comparable animal. When these features are combined with a high-capacity memory, superior learning ability, the use of tools and language, and the development of complex forms of co-operation (these features also being seen as emerging out of the demands of a hunting way of life), the recipe for human success as a species appears to be complete. Indeed, so compelling is this view of human development that it has strongly influenced popular views of human origins and human nature.
The Emergence of Agriculture
However, while what has been termed the ‘Hunting Hypothesis’ does provide us with a number of fascinating possibilities concerning the links between human subsistence strategies and dietary patterns on the one hand, and human evolution on the other, it does exhibit some significant limitations. We will return to these limitations later in this chapter, although before doing this it is necessary to confront a fundamental conundrum which has been puzzling scholars for several generations: if the hunting and gathering approach to subsistence is such a successful one, and if it is so closely integrated with human evolutionary developments, why did a radical shift towards a very different form of subsistence eventually occur? This novel approach to providing food involves deliberately and systematically producing food, rather than capturing food animals or gathering food plants which exist independently of human activities and interventions.
The timing of this shift is, in geological terms, comparatively recent. The end of the Pleistocene, some 14,000 years ago, saw the retreat of the glacial ice in the northern hemisphere, accompanied by dramatic climate changes. Tundra and grasslands were, in many areas, replaced by forests, and humans were compelled to adapt their hunting and gathering patterns accordingly. Foraging strategies appear to have become more diversified, since vast herds of large herbivorous mammals inhabiting wide, open plains were no longer available to the same extent in the northern temperate zones to provide human hunters with an abundant food supply. Shortly after these far-reaching environmental changes occurred, the emergence of agriculture began, spanning a period dating from between 9,000 and 12,000 years ago. As Hole (1992) points out, the shift to agriculture appears to have begun in the warmer latitudes and spread later to temperate regions. The process was dependent upon the domestication of a range of plant species, domestication itself being conventionally viewed as the replacement of the pressures of natural selection with artificial selection carried out by human beings. Artificial selection is seen as enabling humans to modify food plants in ways which make them more productive (in terms of yield per unit of land area), which make them more palatable, easier to harvest, store or process, and even aesthetically more pleasing.
The emergence of agriculture can be dated to approximately 10,000 years ago in southwest Asia (Palestine) and approximately 8,000 years ago in Central and South America. However, as Hole (1992) notes, although agriculture took hold in several other locations, the dating of these events is less certain. Nevertheless, it is clear that by 4,000 years ago all the basic agricultural techniques, such as ploughing and irrigation, had been developed. At each location in which agriculture was established, it was based upon a characteristic mix of domesticated plants which Hole documents in some detail. For example, the so-called ‘fertile crescent’ of southwest Asia is associated with wheat, barley, various legumes, grapes, melons, dates and almonds, while the area around the northern Mediterranean is characterized by olives, grapes, figs and cereals. Tropical West Africa embraced such crops as yams and oil-palm, whereas the eastern sub-Saharan zone is associated with millet and sorghum. The complex of domesticated plants originating in Southeast Asia includes such species as taro, yam, breadfruit, sago-palm, coconut and banana, although Hole notes that the origins of the current staple food of much of Asia, rice, are poorly understood. The region that is now Central America gave rise to such major domesticated species as maize, beans, squash and tomatoes. White potatoes originated in the Andean mountains of South America, and the Amazon basin saw the domestication of manioc and sweet potatoes. Of course, in the intervening millennia, many of these crops have spread throughout the world, to become staples in areas far removed from their region of origin.
Hand in hand with the domestication of plant species went the domestication of animals. Domesticated animals, of course, did more than provide readily available sources of food products like meat, and non-food products like hides, hair and bone that might otherwise have been obtained from wild animals. Certain species also provided a source of muscle power far in excess of that achievable by human beings, muscle power which could be harnessed directly to agricultural activities (e.g., ploughing) and which could also be used for transportation purposes. The domestication of animals has had an enormous impact on human foodways, dietary patterns and social organization, yet our understanding of exactly how this process occurred is largely based upon supposition (Reed 1984:2). Nevertheless, it is possible to identify certain key attributes which, in effect, render a given species suitable for domestication (Clutton-Brock 1987:15–16). These include the ability of the young to survive removal from the mother and to adapt to a novel diet and environment, plus the possession of a set of behavioural patterns which facilitate the animal’s incorporation into human society. Specifically, this requires a social animal, whose behaviour is based upon a dominance hierarchy, which will adopt a submissive role vis-à-vis its human companions. In addition, the species must be able to adapt to confinement and must be capable of breeding freely under such constrained conditions. All these features require an innate gregariousness on the part of the animal in question, as well as a relatively placid temperament that will tolerate human proximity and human interventions without exhibiting excessive signs of stress.
Domestication in mammals, for example, is usually accompanied by a number of characteristic physiological changes (Clutton-Brock 1987:22–4). In the early stages of the process, there is often a reduction in body size as compared with the wild ancestor (although this may be reversed later). In addition, domestic mammals tend to carry a higher burden of fat beneath the skin and distributed through muscle tissue. Perhaps most strikingly, the brain becomes much smaller in relation to body size, and sense organs are also reduced. What is more, in the skull, the facial region and the jaws may become much shorter and more compressed, which may in effect involve the retention of juvenile characteristics.
Animal domestication occurred in a number of locations around the world (Clutton-Brock 1992). In western Asia, around 9,000 years ago, there is archaeological evidence for the domestication of sheep and goats, although they appear to have been domesticated later than the dog, whose domestication is usually estimated at approximately 12,000 years ago. Domestic cattle and pigs appear to originate in western Asia roughly 8,000 years ago (although in the case of the pig, its progenitor, the wild boar, is so widespread there may have been several separate centres of domestication). The domestic horse originated in central Asia 6,000 years ago, about the same time that the donkey (descended from the wild ass) appeared in Arabia and North Africa. The domestic chicken (descended from the jungle fowl) can be traced to southern Asia approximately 4,000 years ago. The New World has contributed such species as the llama and the alpaca (domesticated in South America, roughly 7,000 years ago) and, more importantly in a global sense, the turkey (North America, 1,500 years ago). As we have already noted, the causes of the shift to agriculture, based on the domestication of key species of plants and animals, remain a puzzle. As might be expected, however, there has been much speculation about these causes. For exampl...