Autonomy and Artificiality in Global Networks
The convergence between biology and computer science provides the context for an emergent technoscientific culture within which the status of autonomy and artificiality is highlighted and problematised. Autonomy designates the self-organisation of actors referred to as agents, and artificiality signifies the condition of agents and environments as they co-evolve in/as global information networks. Initiated in the cybernetics of the 1940s, which regarded mind and machine as analogous information-processing systems, the convergence between biology and computer science has developed through systems theory, complexity theory and cyborg discourse or ‘cyborgology’,1 and is now most clearly represented in a development of Artificial Intelligence (AI) known as Artificial Life (ALife). In order to set the stage for an examination of this relatively recent discipline and its cultural significance, it is useful to turn briefly to science fiction – partly because of the widely acknowledged slippage between science and fiction, and partly because many AI and ALife researchers were clearly informed by it.2
Steve Grand, a prominent ALife engineer recently hailed as ‘one of the 18 scientists most likely to revolutionise our lives in the coming century’ (ICA 2000), was responsible for a popular computer game called Creatures, praised by Richard Dawkins as ‘the most impressive example of artificial life I have seen’ (CyberLife 1997).3
He freely acknowledges that ‘many of us grew up with Dan Dare comics, Star Wars movies and Kubrick’s 2001’ (CyberLife Research 2000: 1). The star of 2001: A Space Odyssey (1968) is, of course, the smart but wayward computer HAL 9000, and he in turn is responsible not only for contributing significantly to popular fears about intelligent machines which turn out to be disastrously disobedient towards their creators, but also for contributing to a professional sense of the failure of AI as a project. In ‘The Year 2001 Bug: whatever happened to HAL?’, Grand (1999a) argues that, as a fictional example of artificial intelligence, HAL’s homicide and subsequent demise stemmed not just from unfriendliness but also from the fact that, although smart in the technical sense, he was not really very bright. HAL, according to Arthur C. Clarke, ‘could pass the Turing Test with ease’ (Clarke 2000: 99), but during a space odyssey in which he was programmed to assist astronauts, he was obliged (by mission control) to conceal from them the real purpose of the trip to Saturn. His major malfunction stemmed not from maliciousness but from a failure to deal with conflict – with a task which was not clearly right or wrong, black or white, binary. HAL did not understand the purpose of little white lies, or what might now be termed complexity. His failure, for Grand, was the failure of the Turing Test as a measure of intelligence. The basis of the Turing Test is a concealed computer being able to pass as a human during a dialogue.4
In 1950, Alan Turing predicted that ‘within fifty years … the idea that machines can think will be commonplace and computers will routinely pass the Turing Test’ (Grand 1999a: 73). They don’t, and in a nutshell, ALifers such as Steve Grand argue that this is the fault of top-down as opposed to bottom-up processing. AI can build things as intelligent as a chess computer but nowhere near as intelligent as a mouse: ‘A mouse will always lose at chess to a computer, but try throwing them both in a pond and see how they fare’ (CyberLife Research 2000: 1). The keywords here are ‘adaptive’, ‘robust’, ‘flexible’ (and ‘friendly’), and to achieve these characteristics the principles of AI must literally be turned on their heads. Adaptive, robust, flexible and friendly artificial intelligence is now in the process of being grown biologically (from the bottom up) rather than built or programmed from the top down, and as such it is beginning to acquire the status of agency.
Another example drawn from science fiction serves to illustrate this shift towards autonomous agency. In the third volume of Orson Scott Card’s ‘Ender’ trilogy (Xenocide, 1991), the character Jane is rather more than just a computer program (even a ‘Heuristically programmed ALgorithmic computer’ program like HAL). She is described as ‘a being’ who dwells in the ‘web’ or ‘network’ connecting computers on every world, and this web or network is ‘her body, her substance’ (Card 1992: 67). Jane, unlike HAL, is alive, or at least capable of asking ‘Am I alive …?’, and, also unlike HAL, she is friendly. Jane is a fictional example not of artificial intelligence but of artificial life:
And the image on the screen changed, to the face of a young woman, one that Valentine had never seen before …
‘Who are you?’ asked Valentine, speaking directly to the image.
‘Maybe I’m the one who keeps all those … connections alive … Maybe I’m a new kind of organism’.
(Card 1992: 66)
ALife in Context, or, the ‘Return to Darwin’5
One way to define ALife is as an attempt to literalise the machine/organism analogy which is prevalent within biology and technoscientific culture as a whole. The discipline was developed in the late 1980s at the end of the cold war, and its stated aims are twofold: to create viable computer simulations of biological forms and processes as a method of studying ‘natural’ life (the simulation of ‘life-as-we-know-it’) and to synthesise new forms of artificial life in both hardware (as robotics) and software (as computer programs) (Chapter 3). This is about creating ‘life-as-it-could-be’ (Langton 1996: 40).6
These two goals may be characterised as ‘weak’ and ‘strong’ ALife respectively. Synthesised artificial life-forms are deemed to be not metaphorically but literally alive, since the criteria for life are limited to self-replication, self-organisation, evolution, autonomy and emergence. Emergent life is that which is not programmed in, but which evolves spontaneously, from the bottom up, through interaction with the artificial environment. First-order emergence refers to ‘any behaviour or property that cannot be found in a system’s individual components or their additive properties’, while second-order emergence signifies the appearance of a behaviour which stimulates the development of adaptive behaviours (Hayles 1999a: 9). Second-order emergence, then, involves the evolution of the ability to evolve, and it is the goal of strong ALife. At the heart of ALife is the concept of life as information,7 and this is derived from molecular biology’s notion of the genetic code and its fetishisation of the gene as the fundamental unit of life. Life is a property of form, not matter, or as Christopher Langton (the originator of ALife) put it: ‘life is a kind of behaviour, not a kind of stuff’ (Langton 1996: 53).8
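A toy example – mine, not one drawn from the ALife literature cited here – may help to fix the idea of first-order emergence. In Conway’s Game of Life, every cell obeys the same local rule, yet a ‘glider’ travels across the grid, and travelling is a behaviour possessed by no individual cell:

```python
from collections import Counter

# Conway's Game of Life: each cell follows one local rule, yet a
# 'glider' -- a pattern that travels -- emerges. Travelling is a
# behaviour found in none of the system's individual components:
# first-order emergence in miniature.

def step(live):
    """Advance one generation. `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A dead cell with exactly 3 live neighbours is born;
    # a live cell with 2 or 3 live neighbours survives.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

After four generations the glider reappears intact, one cell further along the diagonal: the movement belongs to the pattern, not to its components.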
No stuff, no matter, no fleshy bodies, no experiences associated with physicality and nothing beyond the one-dimensional functionality of information processing. In her critique, Alison Adam (1998: 155) points out that there is ‘no room for passion, love and emotion’ in ALife worlds because passion is subsumed by sex, sex is all about reproduction and reproduction is all about competition, survival and the evolution of (genetic) information. ALife is concerned with evolving new life-forms, new species, in autonomous artificial environments or worlds where the laws are prescribed entirely by biology. It is sometimes tempting to dismiss ALife as the frustrated endeavour of alien-loving scientists brought up on science fiction and disappointed by the failure of NASA to provide specimens from outer space. Artificial life is in part about the creation and investigation of alien life. But with software projects aimed at evolving artificial cultures and societies (Gessler 1994; Epstein and Axtell 1996), and with the proliferation of online virtual ecosystems, ALife might also exemplify the danger of what Adam calls ‘sociobiology in computational clothing’ (1998: 151).
The sociobiological basis of ALife research and the re-rooting of culture within biology appear to be naturalised and applied in the contexts of the military, medicine and the entertainment industry, where, for example, games such as Creatures have proven popular (Chapter 4) and artificial life-forms known as autonomous agents are being readied for use on the Internet (Chapter 5). Pattie Maes has outlined her attempts to build agents ‘that perform a practical purpose and really help people deal with the complexity of the computer world’ by ‘foraging’ for interesting documents for a particular user on the world wide web (Dennett 1995b). These agents would watch and learn from the user, and would reproduce and evolve according to their usefulness. Through the use of genetic algorithms (computer code with a simulated genome), mutations which occur in reproduction produce offspring agents which look for different kinds of documents than their ‘parents’. The documents they obtain may be more or less interesting than those of others, and ‘if they’re less interesting then that offspring won’t survive’. Fitness is thus determined by usefulness. Maes’s work raises questions of control, ethics and evolution, and has led Kevin Kelly (1994) to reflect that although ‘there was probably a wider agreement that evolution was a way to do things than I thought … I wonder if we can get everything we want by evolution?’ (Dennett 1995b).
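The selection scheme Maes describes can be sketched as a minimal genetic algorithm. Everything specific below – the keyword ‘genome’, the fitness function, the mutation rate, the population size – is an illustrative assumption of mine rather than a detail of her system:

```python
import random

# A minimal sketch of selection-by-usefulness, loosely modelled on
# Maes's description: each agent carries a 'genome' of keyword
# weights, fitness is how well its foraging matches the user's
# interests, and the least useful offspring 'won't survive'.
# All parameters here are illustrative, not Maes's.

GENOME_KEYS = ["alife", "agents", "evolution", "networks"]

def random_genome():
    return {k: random.random() for k in GENOME_KEYS}

def mutate(genome, rate=0.1):
    """Offspring inherit the parent's weights, slightly perturbed,
    so they look for slightly different kinds of documents."""
    return {k: min(1.0, max(0.0, w + random.uniform(-rate, rate)))
            for k, w in genome.items()}

def fitness(genome, user_interest):
    """Stand-in for 'usefulness': overlap between the agent's search
    weights and what the user actually reads."""
    return sum(genome[k] * user_interest[k] for k in GENOME_KEYS)

def evolve(pop_size=20, generations=30):
    user_interest = {"alife": 0.9, "agents": 0.7,
                     "evolution": 0.3, "networks": 0.1}
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank agents by usefulness; the bottom half does not survive.
        population.sort(key=lambda g: fitness(g, user_interest),
                        reverse=True)
        survivors = population[:pop_size // 2]
        # Survivors reproduce with mutation to refill the population.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(fitness(g, user_interest) for g in population)
```

Over the generations the surviving agents’ weights drift towards the user’s interests: fitness here really is determined by nothing other than usefulness.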
The precursors of autonomous agents are bots: ‘the first indigenous species of cyberspace’ (Leonard 1997: 8). Bots are to software what robots are to hardware: algorithms governed by rules of behaviour rather than animated machines. Andrew Leonard is lyrical about the range and diversity of this new species and highlights the role of anthropomorphism in the successful mediation between digital and biological entities (22). Bots, for Leonard, ‘stoke our imaginations with the promise of a universe populated by things other than ourselves, beings that can surprise us, beings that are both our servants and, possibly, our enemies’ (10). Bots are impure, partial or incomplete life-forms which achieve autonomy only when they are released into their unnatural environment, where they operate ‘out of direct control’ (21). Autonomy is ‘the crucial variable’ – the dividing line between what counts as digital life and what does not – and a key criterion in the development of artificial life. The distinction between bots and agents is, however, by no means absolute, especially as information and communication industries begin to realise the consumer marketability of useful, user-friendly or believable agents.
Both bots and agents are being designed to adapt to the Network ecology. The Net is regarded, in the context of alife culture, as a suitable environment in which agents can grow and evolve; agents are also produced by artists with various degrees of allegiance to the nebulous, non-homogeneous, interacting spheres of Artificial Intelligence and Artificial Life (for example, TechnoSphere by Jane Prophet and Gordon Selley, 1995). Agents feature in computational models of human cultures and societies developed by anthropologists, economists and sociologists. Where these offer a method and epistemology for the study of human life-as-we-know-it, they do so within a narrative framework in which life-as-we-know-it is in the process of being superseded by life-as-it-could-be. This is not an apocalyptic scenario so much as an evolutionary one in which the next stage in the evolution of life is digital life – and the aliens are (be)coming. Within this evolutionary scenario the concept of culture regresses from a social to a bio(techno)logical context from which it is expected to re-emerge. Within the paradigm of computational anthropology, for example, culture is viewed as a computational system and as a manifestation of the ubiquitous evolutionary process of information exchange. This, then, is a memetic view of culture, in which memes or cultural units reproduce and evolve (autonomously) in the same way as genes or biological units. Nicholas Gessler realises this view of culture in a software model he terms Artificial Culture (1994, 1999). This functions as a test bed for the theory of cultural evolution, and it builds on foundations which already exist in ALife. In other words, all ALife worlds have an incipient bioculture. Artificial Culture enacts a theory of culture which is evolutionary and emergent. Evolution operates through cultural variation and the emergence of behavioural patterns from individual local rules. The aim of the program is to create a population of evolving mobile autonomous agents, including ‘personoids’ which are both embodied and situated, and a ‘god’ which is neither. Gessler’s software program offers a reflexive view of the evolution of human life-as-we-know-it which sets out to compute but not necessarily to naturalise it. The same may not be said of Dawkins’s theory of memetics or of recent research in evolutionary psychology – other aspects of the cultural evolutionism which is said to be indicative of a new epoch variously named the Information Age (Castells 2000), the Biological Age (Grand 2000) or the Neo-Biological Age (Kelly 1994). In this new epoch – which, owing to processes of renaturalisation, is synonymous not with postmodernism but with globalisation – it is not simply the computer but increasingly the Net which defines the evolutionary parameters of culture and identity. The Net does this partly in so far as it is regarded as an ecosystem for emergent artificial life-forms and as an entity or intelligent life-form in itself.
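The memetic view of culture enacted in models such as Gessler’s can be illustrated in miniature. What follows is a generic sketch of my own, not Gessler’s actual Artificial Culture code: agents copy a cultural variant (a ‘meme’) from one another, catchier variants are copied more readily, and occasional miscopying supplies variation – evolution through individual local rules, with no central programme:

```python
import random

# An illustrative memetic model (not Gessler's Artificial Culture):
# each agent meets a random 'model' and may adopt its meme. Variant B
# is assumed to be 'catchier' (copied more readily) than variant A,
# and copying occasionally errs. Differential copying alone lets one
# variant spread through the population from the bottom up.

ATTRACTIVENESS = {"A": 0.8, "B": 1.0}   # how readily each meme is copied

def transmit(population, copy_error=0.01):
    """One round of cultural transmission across the population."""
    new = []
    for own in population:
        model = random.choice(population)
        # Adopt the model's meme with a probability set by how
        # 'catchy' it is; otherwise keep your own.
        meme = model if random.random() < ATTRACTIVENESS[model] else own
        # Miscopying reintroduces variation.
        if random.random() < copy_error:
            meme = random.choice("AB")
        new.append(meme)
    return new

def run(steps=100, size=200):
    population = [random.choice("AB") for _ in range(size)]
    for _ in range(steps):
        population = transmit(population)
    return population
```

Run from a roughly even starting mix, the catchier meme comes to dominate – a cartoon of selection acting on cultural rather than genetic units.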
Kevin Kelly’s (1994) vision of a ‘Network Culture’ incorporates ‘all circuits, all intelligence, all interdependence, all things economic and social and ecological, all communications, all democracy, all groups, all large systems’ (1994: 25) in a co-evolved single organism which is analogous to an emergent hive mind. It is a decentralised, distributed, intelligent entity which assimilates and elides identity. Individuals are bees in the hive, neurons in the network, cogs in a wheel which is more than the sum of its parts. Though derided as emblematic of the Californian Ideology developed on the pages of Wired magazine, Kelly’s vision of the early 1990s is nevertheless strangely echoed in a millennial issue of New Scientist dedicated to the role of the Internet as a ‘Global Brain’. Michael Brooks (of Sussex University) examines the claim of Francis Heylighen (of the Free University of Brussels) that the global brain will grow out of attempts to manage the store and flow of information on the Internet (Brooks 2000). Here, web links function as synapses which build and grow from the bottom up with use, and diminish and die with lack of use, as in the model of neural networks. Moreover, Grand, following Kelly, has offered a ‘how-to’ guide to the creation of life incorporating cybernetic building blocks, bio-informatic networks and the process of emergence. The non-vitalist vitalism of emergence is this biological age’s answer to the physics of entropy – and the job of postmodern science and culture would appear to be done.9
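Heylighen’s synaptic picture of web links – strengthening with use, diminishing and dying without it – can be sketched with a simple update rule. The reinforcement and decay rates below are arbitrary illustrative values of mine, not figures from Brooks’s report:

```python
# A sketch of 'web links as synapses': a link's weight grows each
# time a user follows it and decays otherwise, so the network's
# structure is learned from use, from the bottom up. The learning
# and decay rates are arbitrary illustrative values.

def update_links(weights, followed, reinforce=0.2, decay=0.05):
    """One round of use: strengthen followed links, fade the rest."""
    new = {}
    for link, w in weights.items():
        if link in followed:
            w += reinforce * (1.0 - w)   # grow towards 1 with use
        else:
            w *= (1.0 - decay)           # diminish, and eventually 'die'
        new[link] = w
    return new
```

Repeatedly followed links saturate towards full strength while neglected ones fade towards zero, much as the synapses of a trained neural network do.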
The paradigms of nature – concealing and revealing life itself – have survived (if not unchanged) the search and destroy missions of post-structuralist epistemologies and are newly deployed in and through the artefacts of information and communication. These artefacts weld together ‘engineered technology and unrestrained nature’ (Kelly 1994: 471), producing a bioculture which is at once more and less than the sum of its parts, but identical to none. Bioculture is not the biological culture of the petri dish any more than it is the forms and processes of everyday (human) life. Bioculture is the culture of analogous (organic and inorganic) information systems, self-organised within what has been referred to as the network society. The network society is constituted in part by an investment in technocultural forms of autonomy and agency which relies on a dialectic, not a division, between the Net and the self. There is no clear opposition between ‘global networks of instrumentality’ and ‘the anxious search for meaning and spirituality’ (Castells 2000: 22). The search for meaning through identity – and perhaps even the search for spirituality – occurs neither outside nor inside the Net but in a dialectic articulated in part through the reproduction (symbolic and material) of agency and autonomy. The transfer of agency and autonomy to the (id)entities of the Network – to Maes’s little helpers or Gessler’s personoids (Chapter 5) – although apparently anti-humanist, is, in one sense, a process of externalisation which enables agency and autonomy to be renegotiated and reclaimed within the identities of the self. The posthuman self thus engages with the forms and concepts of posthumanism.
The posthuman is an epistemology and ontology of the self in the post-cold-war Information Age, and one which necessarily engages with historical constructions of humanism. The universality and subsequent disembodiment inherent within liberal humanism have been critiqued in feminist, postcolonialist and postmodern theories which share a concern with the erasure of difference (Hayles 1999a). Hayles argues that although the loss of a concept ‘so deeply entwined with projects of domination and oppression’ is not to be regretted, it might still be necessary to reconsider the role of specific characteristics of the humanist subject – such as agency and choice – in a contemporary context. Posthumanism represents, for her, an opportunity to ‘keep disembodiment from being rewritten, once again, into prevailing concepts of subjectivity’ (5). I explore this opportunity partly by entering into a dialogue with ALife engineers,10 principally Steve Grand, whose work on computer software and robotics might be summarised as an attempt to humanise HAL and is, for me, most affective in this regard. Grand’s representation of a primarily liberal humanism is complicated by his investment in simulating autopoiesis rather than autonomy – in embodying and situating his creations within their environment – and is therefore a potent resource for debating the increasingly symbiotic relation between humans and machines. I also explore the opportunity to re-embody post(liberal)humanism by examining (mainly in Chapter 7) engineering projects at the margins of AI/ALife which have already entered into a dialogue with contemporary cultural theory or ‘the humanities’. These projects are concerned with the generation of a new kind of agent technology based on a practical and theoretical critique of liberal humanist concepts such as autonomy and agency. These novel agents do not so much evolve as co-evolve in the dynamic interplay between observer and object, and they are more a facet of communication – the desire for alife – than of computation – alife itself. The concept of dialogue is derived from the work of Mikhail Bakhtin, and is developed strategically as a possible means of preventing the encounter between cyberfeminism and artificial life from being reduced to a continuation of the science wars. The science wars, in this context, would hinge on the biologisation of computer science which is, I maintain, indicative of the increasing biologisation of contemporary technoscientific culture.
Arguably, because the new biological hegemony subsumes what was thought to belong to the realms of culture and society (notably technology), because it reproduces a naturalised culture and, more importantly, because it adapts to forms of denaturalisation and evolves (as evolutionary psychology has adapted to critiques of sociobiology and as ALife evolves from AI), feminism needs to offer more than the familiar critique. Feminists – and it is here that cyberfeminists, with their expertise in negotiating the boundary between the body and technology, may take a leading role – can...