Future War

Christopher Coker

polity

Foreword

What Christopher Coker has produced in Future War is not a treatise on what we should think about the subject. He has written something much more valuable – a design for how to think about a subject that will continue to have a profound influence on the future of humankind. This book is important because thinking clearly about future war is fundamental to preventing armed conflict and to ensuring that nations and their militaries are prepared to respond to threats and resolve crises at the lowest possible cost in blood and treasure. Thinking about future war, however, is often neglected because democratic societies and their leaders tend to be optimistic about prospects for peace even as the present and recent past may be harbingers of future conflict. Some, particularly in academia, confuse the study of war with advocacy of it. Paradoxically, the neglect of the subject perpetuates deficiencies in understanding, which in turn can make war more likely. Others neglect continuities in the nature of war and focus almost exclusively on social or technological changes in the character of armed conflict.

Coker’s design is unique because of his interdisciplinary approach and because he is biased neither towards the past nor towards the future. Future War is consistent with the late historian John Keegan’s observation that the study of war is by necessity a social and psychological endeavour. In addition to those perspectives, Coker adds anthropology, history, literature, philosophy and science. This book’s unique value lies not only in the comprehensiveness of that interdisciplinary approach, but also in the synthesis of those perspectives to generate depth of understanding. Coker teaches readers how to ‘think in time’, consistent with historian Carl Becker’s observation that ‘memory of past and anticipation of future’ should ‘go hand in hand … in a friendly way, without disputing over priority and leadership’.

Thinking clearly about future war is obviously important for military officers. Military leaders use their vision of future conflict as a basis for deciding how to direct military operations or train soldiers. Senior officers draw on their understanding of war to provide their best military advice. And junior military leaders must also understand war to explain to their soldiers how their unit’s actions contribute to the accomplishment of campaign objectives. Thinking clearly about future war is just as important for defence officials and citizens. A failure to understand war through an interdisciplinary approach and a consideration of continuity and change risks what the nineteenth-century Prussian philosopher Carl von Clausewitz warned against: regarding war as ‘something autonomous’ rather than ‘an instrument of policy’, misunderstanding ‘the kind of war on which we are embarking’ and trying to turn war into ‘something that is alien to its nature’. In recent years, many of the difficulties encountered in strategic decision-making, operational planning and force development stemmed from shallow or flawed thinking; Coker’s book may prove an effective antidote to future folly.

Citizens must also possess a fundamental understanding of war and warriors if they are to remain connected to those who fight in their name and if they are to hold their governments accountable for decisions involving killing and the prospect of death. If society is disconnected from an understanding of war or is disconnected from its warriors, it will become increasingly difficult to maintain the fundamental requirements of military effectiveness and to recruit young men and women into military service. The connection between soldiers and society is also necessary to preserve the warrior ethos that permits servicemen and women to see themselves as part of a community that sustains itself through sacred trust and a covenant that binds them to one another and to the society they serve. Absent a fundamental understanding of war, popular culture cheapens and coarsens the warrior ethos and further separates warriors, often portrayed as flawed, fragile or traumatized human beings, from their fellow citizens.

As historian Margaret MacMillan has observed, World War I still haunts us not only because of the scale of the carnage and suffering, but also because so many believed that the early 1900s version of globalization had made war futile. As we commemorate the 100th anniversary of the Great War, we might remember that Jan Bloch, Norman Angell and others had argued, in the years before 1914, that war had become so irrational a means of settling disputes that sensible people would never again fight one. The experience of World War I, a conflict that took the lives of over 16 million people, highlighted the need to understand the political, social and historical bases of violent conflict as critical both to preserving peace and to ending wars. As we know, however, the ‘war that was to end all wars’ was instead the first of two world wars that marked the bloodiest century in world history. The analogies to today seem obvious as the United States and European nations cut military budgets based, in part, on the belief that large-scale armed conflicts are relics of a barbarous past. If, however, peace remains as the ancient Greek historian Thucydides described it nearly 2,500 years ago – ‘an armistice in a war that is continuously going on’ – thinking clearly about future war will remain important to the prevention of conflict as well as to its effective conduct.

This is not a book that will fill the reader with optimism about the future. It is a book, however, that, if read and discussed, will shed light on how we might preserve humankind through efforts not only to reduce war’s occurrence, but also to preserve human agency over its conduct – to prevent war, in short, from becoming both more destructive and more dysfunctional.

General H. R. McMaster
Director, Army Capabilities Integration Center, and Deputy Commanding General, Futures, US Army Training and Doctrine Command

Introduction
Who Owns the Future?

When it comes to prediction, everyone works on a level playing field; no one has privileged access to what lies ahead. But here is one prediction that I find generally persuasive. It is by Jacques Attali, an economist, historian and cultural critic, one of the few men (claims the dust jacket of his book A Brief History of the Future) to have grasped ‘the arc and logic of unfolding history’. The French are far less modest than others when staking out their ground. They like to provoke their readers, as French intellectuals have been doing since the mid-nineteenth century. And many of them like to ‘dazzle’ (the French word briller has no real English equivalent).

Attali offers us a rather terrifying vision of a future in which armies have become vast digital infrastructures and surveillance systems processing Big Data. Robots relay data in their role as scouts, deployed ahead of infantry detachments. Software for simulated battles is permanently updated as close as possible to the battlefield. Soldiers are networked, and intelligent clothing is the new ‘body armour’, able to monitor not only their moods but their thoughts.

Other weapons – chemical, biological, bacteriological, electronic and nano-technological – will then appear. As with the new civil technologies they will prefigure, scientists will strive to increase their power, their miniaturisation, and their accuracy. Chemical arms will be capable of seeking out and killing leaders without being detected; pandemics could be ready for unleashing at will; complex genetic arms may one day be directed specifically against certain ethnic groups. Nano-robots as small as a mote of dust, known as ‘gray jelly’ could carry out stealth surveillance missions and attack the cells of enemy bodies. Then, once animal cloning techniques have progressed, cloned animals could well carry out missions – living animal bombs, monsters out of nightmare. (Attali, 2006, 235)

Surprisingly, Attali’s vision of the future is already being realized. That is the point: the future often steals up on us; it catches us out. His book was published in 2006. Less than ten years later, mood hacking is coming into its own. Computing moods is already big business, and products made out of biometric materials or smart textiles are already starting to hit the market. Within a decade, every piece of apparel we buy will have some sort of biofeedback sensor built into it, able to monitor a person’s temperature, heart rate and location. And the possibilities don’t stop there. Studio Roosegaarde – a design laboratory in the Netherlands – has developed a dress called ‘Intimacy 2.0’ with an opaque fabric that becomes transparent when its wearer is aroused. In the brave new world we are about to enter, we may soon find it impossible to conceal our private desires, as well as our thoughts (International New York Times, 26 May 2014).

What makes Attali’s vision so disturbing is that war will no longer be the monopoly of the state. Big corporations will be fighting their own wars, raising and arming private armies. Non-state actors too are likely to come into their own: corsairs, pirates, mercenaries, maquisards, mafias and terrorists. They will attack pipelines, close shipping lanes, or target the rest of us for religious, nihilistic or simply criminal ends. The demographic trends are in their favour. Conflict, Attali writes, will go ‘hyper’: ‘They will seek to disarticulate surveillance systems and to terrify the sedentary [who] … will shut themselves off in their bunkers … At this tempo, it will not be tomorrow’s Africa that will one day resemble today’s West, but the whole world that could tomorrow evoke today’s Africa’ (Attali, 2006, 251).

In this book I do not aspire to do anything more than offer a few speculations about what the future may have in store and, to be frank, I don’t think that anything more can legitimately be aspired to. Nothing dates quite as quickly as the future. Take Attali’s metaphor of Africa, a continent which is no longer quite the basket case it once appeared (‘The hopeless continent’, ran a famous Economist cover in 2000). Today, five of the twenty fastest-growing economies in the world are African. It has quite enough conflicts, however, to make us despair of permanent peace. Think of Boko Haram in Nigeria; al-Shabab raids along the Kenyan coast, and the terrorist attack on a shopping mall in the heart of Nairobi; not to mention a crippling struggle in the Democratic Republic of the Congo (DRC) which led to sexual violence being designated a war crime by the UN for the very first time in 2010.

And there seems to be no end in sight. In his book, Attali talks of a coming ‘planetary war’. Regional rivalries will sharpen: Great Powers, as well as medium-sized countries like Russia and Iran, will clash for regional dominance. Mafia gangs and terrorist movements will ‘colonize’ failed states, or occupy failing cities. Others, such as territorial cartels, will rise to prominence, assisted by ‘hyper-nomads’ (chemists, intellectuals, accountants and financiers who will cynically attach their colours to them). Religious groups could try to seize a country. Migrant ‘hordes’ could cross the Straits of Gibraltar, the river Amur or the Usumacinta (the river that divides Mexico from Guatemala), ‘menacing, no longer begging’. In many cities violence will become so extreme that it will require responses from the military rather than from the police. Many of these cities will host the principal ‘nests of revolt’ (Attali, 2011, 222).

From Aleppo to Kabul, from Gaza to the Donetsk Basin, from Juba to Bangui, war continues to thrive. ‘Brush fires or wars?’ asks one commentator (International New York Times, 8 August 2014). Whatever label we attach to them, these conflicts are reshaping our world, and making it much more dangerous than before. Maps are being redrawn in the former Soviet Union, in the Middle East and in North Africa. The resilience of the international order is being tested as never before.

This book can be divided into two parts. In the first I will look at the mechanics of war, especially as pioneered by developed countries, now no longer exclusively western. In the second I will explore Attali’s grim vision of a ‘planetary war’ and tease out some of its main features. The main purpose of a book such as this, I would argue, is to help us understand the present by thinking through the future; and we can pursue that task with a measure of success because the future is already here.

The Visitable Future

The novelist Henry James once told a friend who was writing a historical novel that the task was really impossible. She would never be able to capture the consciousness – the social and sense horizon – of individuals in whose minds half the things that made up her own world were non-existent. You may multiply the little facts, wrote James – the pictures, the documents, the relics and prints – as much as you like, but what he called the ‘real thing’ was impossible to achieve: ‘the horizon, the vision of individuals in whose minds half the things that make us, that make the modern world were non-existent’ (Updike, 2011, 441). James was talking about the subjective nature of experience, which in the course of time becomes impossible to access directly. The past is not merely another country; it is another planet.

What James did concede in the preface to The Aspern Papers (1909) was that there did exist a visitable past: ‘In the nearer distances close enough to our own to enable us to see the connections while “tasting… the differences”’ (Updike, 2011, 441). The effect is what critics call ‘cognitive estrangement’ – the moment when we re-perceive the world around us and our place in it. James himself is an excellent case in point. By the time his novels came on stream, critics had begun to ‘read’ texts quite differently from their predecessors. They were now as interested in what the writer left out as in what he left in. They searched for his tacit presuppositions, for the historical pressures that had moulded him, and for the aspects of life repressed in his writing, subconsciously or not. Ever since, we have been engaged in what Althusser called ‘double-reading’. And we have not gone back.

Just as there is a visitable past, there is also, I would contend, a visitable future – a world whose consciousness we can imagine because it may not be entirely dissimilar from our own. I want to offer a striking example which can be found in Kingsley Amis’ short but telling analysis of science fiction, New Maps of Hell (1960). It appears in his discussion of Fahrenheit 451, Ray Bradbury’s disturbing vision of a future in which books have been banned. Not only that, but people are seduced by the state into leading desperately marginal lives; they are drugged into conformity by an endless dose of music, a surround-sound that quarantines them from what makes life worth living. The hero’s wife lives in just such a semi-comatose state: ‘And in her ears the little Seashells, the thimble radios tamped tight, an electronic ocean of sound, and music and talk coming in, coming in on the shore of her unsleeping mind … Every night the waves came in and bore her off on their great tides of sound, floating her, wide-eyed, towards morning.’

Science fiction differs from mainstream fiction in many respects, but one of the most important, argued Amis, is that it observes ‘tendencies of behaviour’ that exist but are barely recognized in contemporary life. Science fiction at its best presses hard against all the boundaries, not only those of time and space. And Bradbury’s novel does just that. Re-reading Amis’ study, I was amazed to find the following passage not in Bradbury’s novel but in Bradbury’s own account of an encounter in Los Angeles shortly before the novel was published:

In writing the short novel Fahrenheit 451, I thought I was describing a world that might evolve in four or five decades. But only a few weeks ago, in Beverly Hills one night, a husband and wife passed me, walking their dog. I stood staring after them, absolutely stunned. The woman held in one hand a small cigarette-package-sized radio, its antenna quivering. From this sprang tiny copper wires which ended in a dainty cone plugged into her right ear. There she was, oblivious to man and dog, listening to far winds and whispers and soap-opera cries, sleepwalking, helped up and down kerbs by a husband who might just as well not have been there. This was not fiction. (Amis, 2012, 82)

What Bradbury ‘saw’ in a kind of epiphany was a future in which people would live not only an off-line, but also an on-line, life.

Even a gifted science-fiction writer like Bradbury back in the late 1950s could not have imagined the extent to which our lives would dovetail with technology. Today, 60 per cent of his countrymen use smartphones; they shuffle down the street, still inhabiting the real world though their thoughts are usually elsewhere, just like the woman he observed in LA. We all plug into the worldwide web when we start up our computers at home or in the office, comfortingly reassured by the Apple start-up chime in F-sharp major. Soon we will be wearing devices that use sophisticated sensors to monitor our health and to build up a comprehensive picture of our lifestyles. Already our personal computers can track our credit history, if we bank on-line – and even our sexual and religious preferences, if we visit the relevant websites. What is so revolutionary about this development is that we can’t see or track this ourselves – but the data companies can. We are also encouraged to think that, through data-tracking, human happiness can be quantified. We now talk of data-driven self-discovery. For many of us the on-line and off-line worlds are fusing together.

And, more subtly still (though less acknowledged), the internet is beginning to change the architecture of human experience. It may well be the case, claims Patrick Tucker, that by 2035 we will be able to make pretty accurate predictions about our behaviour thanks to data crunching. We are on the cusp of a telemetric age which will separate the less predictable world in which we evolved our humanity from the more predictable one in which the destiny of humanity will be tested. That age is already here, he adds pointedly; it simply hasn’t been acknowledged (Tucker, 2014).

Even if that is a little premature, there is one other historical change that allows us to visit the future. We now know that we will not only be interacting with machines more than we do already; we also know that tomorrow’s machines will be conversing increasingly with each other. Machine-to-machine communication is already far more advanced than we suppose. Currently 61.5 per cent of all web traffic is actually non-human in origin: 31 per cent of the traffic comes from ‘bots’ – web robots created by search engines exploring the web – and another 5 per cent comes from ‘data scrapers’ which trawl the internet for information to analyse and store (Plant, 2014, 23). Meanwhile, the number of continuously internet-connected devices that communicate with one another doubles every five years (you can already buy an internet-connected light bulb that turns on when your car signals you are home). The probable end point of ubiquitous computing is a single unbroken interface in which every machine will communicate with every other; and every machine will communicate with us. The point is that we will eventually be habituated to being one of the ‘things’ connected to and through the internet. To ‘be’, these days, is to be interactive with machines – that is our new ontology. And, of course, with the increase in artificial intelligence, machines in communication with each other may one day make decisions without reference to us at all.

Bradbury was not aware, of course, of the historical significance of the incident he witnessed and wrote about, any more than our forefathers had any idea that they were living through an ‘industrial revolution’ (the term only entered common usage after 1850). Every historical phase-transition changes the world. Industrialization, for example, transformed workers into ‘hands’; life became ‘energy’. Productivity entered the general consciousness. Until comparatively late in the day, we were able to dismiss our inventions, such as the printing press, as mere externals, or ‘machinery’. Today we know that our inventions change the way we think. One man who grasped this early on was Thomas Carlyle. In his essay ‘Signs of the Times’ (1829), a meditation on what he called the Mechanical Age, he complained that ‘men are grown mechanical in head and in heart, as well as in hand’. Today the same thing is happening to us, only this time we are aware of it. Take the search engines on which we rely to frame our knowledge of the world. They used to alert us to what was important; they now alert us to what is true. If we Google ‘war’, for instance, sites will direct us to what they think most relevant or most representative. To be sure, search engines are dumb; they do not direct us to what they believe to be the most useful or valuable knowledge. But they do direct us to the most visited sites – and therefore, in the eyes of the market, to the most valued sources of information. In other words, meaning is no longer only to be found in our minds; it is also in the minds of the tools that deliver us information.

We are now experiencing something as profoundly transformational as the industrial revolution – it is called the Second Machine Age (the term was coined by Erik Brynjolfsson and Andrew McAfee). And it has powered two of the most important one-time events in history: the emergence of real, useful artificial intelligence and the connection of most of humanity via a common digital network. Each would be important on its own, but when combined they have a revolutionary potential which we are just beginning to realize. I think we can predict with a fair degree of confidence that, barring some catastrophe or unforeseen event or invention, the future will be more of the same (Brynjolfsson and McAfee, 2014).

Most studies of future war put the main emphasis, of course, on technological change, but they usually skim over the fact (if it is acknowledged at all) that the essence of technology, as Heidegger told us (in one of his less gnomic remarks), is not actually technological. Its essence is how it encourages us to re-perceive the world and our own place in it. It has been a long journey from the discovery of fire to the invention of the drone, and the journey, we assume, will continue. In the course of that journey we have changed, as has war. What will we think about war in 25 years? Will it still be the stuff of heroic stories, or will it have been downsized in the imagination? Will machines eventually divest us of our responsibility, and uproot us from the centre of things? Will we become more machine-readable? Will the real ‘artificial’ intelligence be our own? Perhaps it won’t matter; perhaps the machines one day soon will no longer need the scripts we write for them. Science fiction tells us what might happen next: the machines may turn on us, like Skynet, and wipe us out. Or possibly we may be in for a softer landing; in their wisdom, they may choose to leave us behind. Think of Samantha, the operating system and virtual love interest of a writer named Theodore in Spike Jonze’s film Her (2013), who simply outgrows her user, as well as love itself.

All fascinating questions, to be sure, but they take us further than the remit I have given myself. They look to 2050 and beyond: ‘Not many writers are prophets and those who are foretell the future by the accuracy with which they report the present’ (James, 1982, 76). James is quite right; we have to have an ear attuned to the secret harmonies of everyday life, for the present may persist longer than we think and the future may be more familiar than we expect.