When speculating about the future we should ask ourselves two questions: What future are we interested in – the short, mid or long term? How foreshortened are our perspectives? Because, depending upon our length of vision, we may expect the future to be more familiar than unexpected. The further we look, the more opaque it becomes – though the unexpected, of course, can arise at any time. And how is the future best imagined: through fiction or futurology, or through economic and scientific modelling (as we model long-term weather patterns)? The novelist Julian Barnes argues (as one might expect) that fiction offers the best route, and I would tend to agree. It both explains and expands life and tells us more about its truth (if it is there to be found): how life is, how to live it and how it can be lost (Barnes, 2012, 17). And that is doubly important in war, when we are always in danger of losing touch with our humanity.
We should also not forget that the future we imagine changes us in the present. We humans are locked continuously into its gravitational pull. As Stephen Grosz writes in his book The Examined Life: “Psychoanalysts are fond of pointing out that the past is alive in the present. But the future is alive in the present, too. The future is not some place we are going to, but an idea in our mind now. It’s something we’re creating, that in turn creates us. The future is a fantasy that shapes our present” (Grosz, 2014, 157). And no other age has had its eyes fixed so relentlessly on the horizon – distant or not – as our own. The explanation for this is two-fold. We tell ourselves that history is speeding up. Life at the Speed of Light is the telling title of a book by the geneticist Craig Venter (Venter, 2013). As early as the 1970s, the Russian novelist Vladimir Nabokov could write, only half in jest, that change was happening so quickly that the word “reality” should appear within inverted commas. Add to that the disturbing thought that we might not “own” the future for long; one day it might not even need us. New technologies like nanotech and genetic engineering may render us all surplus to requirements. Professional futurologists such as Eric Drexler of the Foresight Institute in California speculate that self-reproducing machines will soon take over. This is the theme of another recent book with the disturbing title Our Final Invention: Artificial Intelligence and the End of the Human Era (Barrat, 2013). I am not the only one to be pessimistic; pessimism is in the air.
That is why, whether we are by nature technophiles or technophobes, we have a responsibility to live for the future even though we may not live to see it. When Ray Bradbury was asked whether the mean world of Fahrenheit 451 was a prediction, he replied “Hell, no. I’m not trying to predict the future; I’m trying to prevent it” (Pohl, 1997, 8). As Steven Pinker writes, adopting a wider perspective, “it’s no accident that the future shares its syntax with words for necessity (must), possibility (can, may, might) and moral obligation (should, ought to) because what will happen is conceptually related not only to what must happen or can” but also to what should (Pinker, 2008, 196). In many areas of life, we are now required to “rehearse the future” before it happens. The phrase is taken from a study, The Politics of Catastrophe, which reminds us that high-level officials, civil servants and emergency responders now take part in exercises and simulations that enable them to inhabit the future before they or the rest of us ever reach it (Aradau and van Munster, 2011, 95). Our ability to mentally explore the future before it happens is a remarkable adaptation; it means that we can learn from our mistakes before they are made, even if we are not very good at avoiding them. But, as we have grown more powerful, we have been forced to move out of the era of “pre-eventual temporality” (of prevention and precaution) to the time of the event itself. By living through events, we hope to bind the future to the present and so manage the unexpected consequences of our own actions. As a character in William Gibson’s novel Pattern Recognition remarks, “We have no future because the present is too volatile…. We have only risk management” (Gibson, 2003, 58–9).
And it is the side-effects of our technological “advances” that now concern us most. Take what scientists call the “Collingridge Dilemma”. The term takes its name from David Collingridge, who formulated it in his book The Social Control of Technology (1980). We can successfully regulate any given technology while it is young and not yet popular, and thus probably still concealing its unanticipated and undesirable consequences. Or we can wait and see what the consequences are and then risk losing control over its regulation. Or, as the author himself put it, “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, changes become expensive, difficult and time-consuming” (Morozov, 2013, 255). To which we can add that, when others are threatening to develop the same technology, the dilemma can become even more acute.
There is another factor at work, argues the historian of technology Thomas Hughes. Technological progress is a quasi-Darwinian process. Early in a technology’s development, its original design can still be adjusted and even shaped by social needs. But once it is embedded in infrastructures and commercial arrangements, once it shapes norms and expectations, change becomes increasingly difficult, which is why so many people want to ban “killer robots” while there is still time (Hughes, 1994, 110–11). What Hughes calls “technological momentum” and Collingridge identifies as a “technological dilemma” together constitute the irreversible logic of “progress”.
But against this must be set something that can be equally powerful. We also confront another intractable dilemma – our imagination is limited. We have always wanted the future to be “surprise-free”, but we don’t always grasp the uses to which our inventions can be put. A striking case is the history of air flight. The Wright brothers’ first plane took to the air at Kitty Hawk in 1903. The first bomber was designed a few years later. When an Italian airman threw grenades out of his monoplane onto Turkish troops in North Africa in 1911, the world reacted in outrage: wasn’t it “unsporting” to kill an enemy who could not retaliate? But the enemy did retaliate, almost at once. Only a week later, Turkish soldiers shot down an Italian plane using just rifles. After that, there was no turning back. As early as 1911, the young Wittgenstein, then a student of engineering at the University of Manchester, was working on the idea of a jet-driven aircraft. The first aerial bombardment of a city followed in the First World War (though it had been famously anticipated by H. G. Wells in his novel War in the Air (1908)).
Wells got the future right more than most; he even imagined submarine-launched ballistic missiles (he called them “air-torpedoes”). But he also got many things wrong. War in the Air has Zeppelins, not aircraft, bombing New York; the city is attacked by fleets of airships, not planes. Planes had only been around for five years; airships can be traced back to the 1870s, which saw the first combination of balloons and engines. And for all his prophetic genius, he could not have anticipated the next quantum leap in airpower – 3D-printed drones which, according to BAE Systems, will take to the air by 2040 (and probably much earlier, if truth be told) (Guardian, 2014).
But it wasn’t until very late in the day that non-state actors made an appearance in this larger-than-life play, with the hijackings of planes in the 1960s. The real “game changer” was 9/11, when Osama bin Laden unleashed the attacks on the Pentagon and World Trade Center. Surprisingly, just such an attack was anticipated as early as 1909 in a British film that showed two biplanes, piloted by anarchists, crashing into the dome of St Paul’s Cathedral. No-one gave this scenario a second’s thought: neither the anarchists, who continued to strike the fear of God into European hearts right up to the First World War, nor the hijackers of the 1960s, who preferred to seize planes in flight and divert them to remote runways, exploiting the passengers as hostages. As Lionel Trilling claimed, “It is now life [not art] that requires the willing suspension of disbelief.” And it may well be that tomorrow’s terrorists, insurgents and irregular fighters will be more imaginative than states in finding new ways to wage war – a sobering thought indeed.
On the Edge of Tomorrow
Science-fiction writers offer us a line of sight into the future. And that is where they often come unstuck, as we all do, in part because we can only think through metaphors. We all employ them; they are part of the universal language we speak; they make things vivid in a way that the non-metaphorical use of language may not. Metaphors can be said to do much of our thinking for us. And that can be problematic. Many of our most vivid metaphors are embodied. When we feel good we talk of “standing tall”; when we feel bad we feel “down”. We also use “sight” for thinking about the future. But this is where we begin to run into trouble. Does the future lie in wait for us or is it hiding away in the past, where we can only ever grasp it thanks to hindsight? In psychoanalytic terms, writes Anthony Judge, the future may lie through our past, emerging unexpectedly from the world we thought we had left behind (Judge, 1993, 280). It may be incubating like a virus waiting to spring a surprise. In other words, we must always guard against historical short-sightedness.
The problems of long-term forecasting should not leave us oblivious to the problem of long-sightedness (presbyopia): the inability to accommodate the eye to near vision – a challenge for many far-sighted people in everyday life, who are wont to stumble when taking the next step. “We have already made a hundred year war fighting leap-ahead with MQ-1 Predator, MQ-9 Reaper and Global Hawk – they fundamentally change the nature of war” (Doyle, 2013, 11). I quote this passage from a memorandum written by General Barry McCaffrey in October 2007, welcoming the latest generation of drones. So much is wrong with this claim that it is difficult to know where to begin. Let us forget for a moment Clausewitz’s insistence that, while the character of war is in constant flux, its nature is not. There is nothing, I think, inherently misguided about the idea that the nature of war might change one day; indeed, I will be investigating whether this may indeed happen. [Spoiler alert – it won’t.] But in this case, the general was carried away by what economists call “present bias”.
We believe we are being far-sighted by claiming to know where the most important trends are likely to lead, but trend analysis is rather like skimming stones across the surface of water. After first touching the surface, the stone skips away again, touching down and leaping until friction dissipates its momentum. What looks like a long-term trend can prove to be very short-term. And history can always change direction very quickly. The future is not the end of a trend line. Sometimes a trend peters out, at least for a time, only later “taking off”. This is especially true of technology; indeed, we tend to ignore how long inventions take to catch on. We make the mistake of centring our histories of technology on innovation rather than use. We tend to date advancement and progress from the moment a technology appears or is first applied, and to downplay the long and winding road of adoption, imitation, diffusion, improvement, recycling and hybridization. And yet it is the long haul that decides the impact of a particular technology on the conduct of war, as it does on everything else. To invoke David Gelernter’s “impossible-to-learn” Law of Replacement: “Society replaces a thing when it finds something better, not when it finds something newer” (Gelernter, 2012, 346).
Technology, indeed, often outstrips our ability to make use of it. As Jeremy Black reminds us, many of the inventions that changed the face of battle have been re-inventions. Take the flame-thrower, a Byzantine innovation of the ninth century which reappeared on the battlefield in the First World War; or the submarine, which first appeared in 1796 but only came into its own in the same conflict; or the percussion-fused hand-grenade, first invented in 1861 but not used effectively until armies engaged each other in 1914. For Black, the real importance of technology lies not in the date of an invention, or the date it first worked effectively, or even the date it was first introduced on the battlefield, but in the date of the paradigm shift (a term he doesn’t employ) – the date when military thinking changed in order to take advantage of a new device or technique (Black, 2013, 8).
A recent case is the US Navy’s new electromagnetic rail-gun, which can fire projectiles at speeds far exceeding those of missiles; the rounds destroy things with the force of their impact rather than by detonating an explosive warhead. Long a staple of science fiction, energy weapons may be the future of warfare, but the rail-gun is merely the latest development of the catapult, and it was first conceived and patented by a French scientist nearly a century ago (The Times, 10 April 2014). And the same is even true of today’s pilotless aircraft, which were around at the time of Vietnam – the first time a drone hit a target with a missile was in 1971 – but they went largely unnoticed and unremarked until the Global Positioning System (GPS) and better satellite and communications links enabled them to feed information back at an unprecedented rate. Like rail-guns, drones might radically change the character of war, especially at sea. The X-47B has shown that it can take off from and, more impressively, land on the deck of an aircraft carrier.
And then there are future technologies that we presume will reshape war in dramatic and more radical ways but that, for purely scientific reasons, may actually lead nowhere very fast. As Mark Twain once quipped, what is fascinating about science in general is that “one gets such wholesale returns on conjecture out of such a trifling investment in fact” (Pinker, 2008, 29). Much of science fiction, like so much of science, is based on what Karl Popper called “promissory materialism” – it depends, in other words, on promissory notes for discoveries not yet made and technology not yet invented. I suspect nanotechnology, which was made famous by Eric Drexler’s book Engines of Creation, may be a case in point. Drexler’s book appeared in 1990 and there has been painfully little progress since. One writer complains that nano-writings freely import the future into the research of today, and the language used rewrites the advances of tomorrow into the present tense. Gary Stix, a staff writer for Scientific American, calls writings on nanotechnology “a sub-genre of science fiction”. Colin Milburn adds that “nanotechnology is one particular example illustrating the complex interface where science and science...