Chapter 1
The Promise Machine: Between “Techno-failure” and Market Failure
Introduction
Having defined failure as a judgment in our Introduction, we argued that many repetitive, everyday failures are quickly dismissed or forgotten. To begin unpacking the conditions under which failure is either acknowledged or denied, this chapter will introduce the concept of “techno-failure” – the breakdown or malfunction of technological tools such as mobile devices, digital interfaces and infrastructures, or personal computers.
Before we can approach case studies of technological and financial failures such as buffering or the derivative form (which will be explored in the second half of this book), we must first familiarize ourselves with a broader theory of failure. This will be achieved by studying failure as epistemology, affect, and political economy. In Silicon Valley and Wall Street, failure is not an accidental, unexpected, one-time event; it functions, rather, as a commodity, and as such it is monetized, exchanged, and valorized. But how exactly does failure gain value? By producing and sustaining a machine of broken promises. Both finance and tech share an underlying adherence to a new taxonomy of promises: “the Austinian promise,” “the agonistic promise,” and “the delayed promise” – all of which will be defined and explored throughout this book.
Finally, we tie this triadic categorization to the prominent promise of convenience, with its emphasis on immediacy and instant gratification. Building on the work of Thomas Tierney (1993), we read convenience as much more than the utilitarian desire to maximize pleasure and avoid pain; it signals, instead, a paradigm shift in our relation to our bodies and lived environments. Paradoxically, the pursuit of convenience – a broken promise perpetuated by Silicon Valley and Wall Street alike – has led to the rise of anxiety, debt, and crisis on an individual and collective scale.
The Denial of Failure
While failure is everywhere, it is not always recognized as such. Some banks are “too big to fail,” while the internet is often described as immune to failure thanks to its decentralized and supposedly immaterial nature. In practice, however, this perceived invulnerability is far from an accurate description of a complex, global network based on cloud architecture and underwater fiber-optic cables. Providing an alarming account of the growing dependency on cloud computing, Tung-Hui Hu warns that “A multi-billion-dollar industry that claims 99.999 percent reliability breaks far more often than you’d think, because it sits on top of a few brittle fibers the width of a few hairs” (2016, ix). As Hu points out, the tendency to revert to abstraction when it comes to digital infrastructures is not only misleading – it also carries enduring political and philosophical implications. The discourse of “immateriality” (which will be further discussed in chapter 3) is a crucial tactic for sustaining economic and digital divides and for presenting the internet as an obscure, infinite, and ahistorical system of connectivity and access to information. One of the reasons digital immateriality is such a prominent myth is that it helps users forget the physical, ever-expanding, global infrastructure that makes surfing the web feasible. As media and infrastructure scholars repeatedly stress, it is crucial to explore the material conditions on which the internet is based (Thackara 2005; Parks and Starosielski 2015).
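Hu’s figure of “99.999 percent reliability” – the industry’s so-called “five nines” – is worth pausing over, since even this marketing superlative concedes an annual budget of downtime. A minimal sketch (the numbers are simple arithmetic, not drawn from Hu’s text) makes the concession explicit:

```python
# Illustrative arithmetic only: what an availability claim of
# "five nines" (99.999%) permits in annual downtime.
availability = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year

# The complement of availability is the fraction of time the
# service is allowed to be down while still meeting the claim.
allowed_downtime = (1 - availability) * minutes_per_year
print(f"Permitted downtime: {allowed_downtime:.2f} minutes per year")
# Roughly 5.26 minutes per year -- and, as Hu notes, actual
# breakdowns exceed what the marketing figure implies.
```

Even the promise itself, in other words, is a promise of periodic failure.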
To understand why and how this fragility is so often denied, we must first develop a theory of “techno-failure.” This is different from simply mapping a series of technological fiascos or failed products, as these are often studied within the familiar pattern of “failure as success.” In this vein, a failed invention such as the 1929 “cat phone” proved crucial for the advent of cochlear implants, and the failed 50mm film paved the way for the CinemaScope screening format (Belton 2009; Sterne 2009). Pushing against this prominent pattern, Lisa Nakamura reminds us that “characterizing an object as a failure is a much more successful strategy if it was an early iteration of one that later became extremely popular or admired” (2009, 87). Instead, she focuses her attention on a more ubiquitous form of failure: the PowerPoint presentation. As anyone who has ever attended an academic conference knows all too well, our attachment to the visual memory aid of the slideshow often results in nerve-wracking moments of suspense, helplessness, and heartfelt cries to the never-on-site technical support. In Nakamura’s words, “Moments of presentational failure lard conferences and classrooms like raisins in a pudding – they are everywhere, albeit not all the time” (2009, 87). Drawing our attention to vernacular failure – rather than to epic commercial failures like Apple’s Lisa computer – can help us start unpacking the process by which some failures enter the collective memory and imagination while others are simply ignored.
Yet Nakamura’s attentiveness to the ubiquity of techno-failure does not keep her from focusing on the user – rather than on the inherently precarious machines and infrastructures. She half-jokingly concludes her PowerPoint obloquy with the following pledge: “Rather than bemoaning our ability to correctly manage the necessary welter of cords, adapters, remotes, power sources, and flash drives, perhaps we ought to learn to view these supposed failures as marks of distinction and paradoxical displays of expertise” (2009, 87). Are we to become experts in failing? Is the solution truly “to fail better,” as both Samuel Beckett (1995) and Mark Zuckerberg urge us to do? This chapter aims to answer these questions by recasting failure as a commodity. Technological failures such as limited battery life, digital lags, or frozen, unresponsive screens effectively support a business model of upgrading and planned obsolescence that is crucial for the proliferation of “habitual new media” (Chun 2016). Within the culture of beta testing, maintenance and repair are no longer desired. In practice, tech behemoths like Apple render them all but impossible. Unlike a decades-old boiler, an iPhone is strategically designed as an opaque and proprietary tool that can only be repaired in designated Apple Stores. In 2018, Apple lobbied against legislation that would make it easier for iPhone users to repair their smartphones and other electronics (Koebler 2017). The legal battle against the “right-to-repair” legislation introduced in the state of California exposed the lengths to which tech giants like Apple, Microsoft, and Samsung are willing to go in order to prevent both customers and independent electronics shops from prolonging the lifespan of their devices (Koebler 2017).
While planned obsolescence is crucial for the ongoing success of the gadget industry, in the sphere of stocks and derivatives failure can quickly turn into profit through “short selling” (i.e. borrowing a stock and selling it, then buying it back after its price declines). When it comes to tech, problems such as slowness, disconnection, or unrepairable hardware can be strategically implemented to force users to opt for “premium services.” These supposed failures are therefore successful attempts at building a market desperate for new models and “disruptions” (Silicon Valley’s favorite keyword). Writing about how transmission technologies create “a new class system,” Sean Cubitt contends that “the purpose of control over information is to delay transmission. We think we pay more for premium service delivery of news and entertainments; in fact, the money pays for timely arrival, and its absence ensures a deliberately delayed and often downgraded delivery” (2014, 4). As a result, there exists a digital divide that further individualizes users’ media experiences.
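The mechanics of profiting from decline can be sketched in a few lines. In a short sale, the seller borrows shares, sells them at the current price, and later repurchases them to return to the lender; the profit is the spread between the two prices. The figures below are hypothetical and ignore borrowing fees and margin requirements:

```python
# Illustrative sketch of short-sale arithmetic (hypothetical numbers;
# borrowing fees, interest, and margin requirements are ignored).
def short_sale_profit(shares: int, sell_price: float, buyback_price: float) -> float:
    """Profit = proceeds from the initial sale of borrowed shares
    minus the cost of buying them back to return to the lender."""
    return shares * (sell_price - buyback_price)

# The short seller profits when the price falls...
print(short_sale_profit(100, 50.0, 35.0))  # 1500.0
# ...and loses when it rises, with losses unbounded in principle,
# since there is no ceiling on the buyback price.
print(short_sale_profit(100, 50.0, 60.0))  # -1000.0
```

The asymmetry in the final comment is what makes the position a wager on failure: the short seller’s gain is capped (a stock cannot fall below zero), while the potential loss is not.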
Yet techno-failures still seem to function as “raisins in a pudding,” to use Nakamura’s metaphor. With its emphasis on ubiquitous failure, her essay joins a growing body of literature reviewed in our Introduction. Building on the four schools of thought we previously surveyed – science, business literature, queer studies, and infrastructure studies – we wish to isolate three categories that will prove particularly productive for our argument regarding the centrality and strategic use of failure in Wall Street and Silicon Valley: First, techno-failure as essential to the production of new knowledge about the machine and, in turn, a new understanding of the world; second, failure as what Sara Ahmed calls an “affective economy”; and third, failure as inherent to the market logic of capitalism and its dependency on credit, debt, and derivatives. These categories respectively explore the epistemology, affect, and political economy of failure.
The Epistemology of Techno-failure
While Karl Popper offers us a set of principles that can serve to distinguish a failed scientific theory from a successful one, it is Martin Heidegger who can be seen as the godfather of the growing discipline of Failure Studies. In Heideggerian thought, the world is ordinarily “ready-to-hand” – it is revealed to us by way of different practices and encounters with tools or objects. Yet the relational function of objects – the way they produce different substances and function within a given environment – only becomes visible once they fail, in the moment in which “the tool suddenly demands attention to itself” (Graham and Thrift 2007, 8). There exists, however, an important distinction between failure and breakdown. Heidegger’s vorhanden (“objectively present” in the newer translation, or “present-at-hand” in the older) names the mode an object assumes in moments of breakdown. While failure occurs from the standpoint of a goal in the world, breakdown might happen without our direct involvement. Take, for example, the following description of a frequent computational failure:
Someone sits at a word processor focused on the text at hand and all of a sudden the computer freezes. The trustworthy world that developed around the computer – the open book, the keyboard, the screen, the cup of coffee; in short, the entire mutually referring network that Heidegger calls a world – is abruptly destroyed. The computer changes from being one of the handy or ready-to-hand that shape this world to what Heidegger calls something vorhanden … Its transparency is transformed into opacity. The computer can no longer be utilized in the practice of writing, but abruptly demands interaction with itself. (Verbeek 2004, 79)
This moment, however, requires more detailed attention. The Heideggerian attempt to bring together rupture and epistemology – the moment of failure and the production of knowledge – is put to the test in the age of digital and mobile technologies. When they break, these handheld devices do not necessarily generate new knowledge about either their inner workings or the world in which they operate. While a tool such as a mirror or a glass cup might violently shatter into pieces (and thereby draw attention to its fragile materiality), a smartphone often breaks down quietly and opaquely, in the guise of a sudden moment of lag, a prolonged lack of response, or a “dead” battery.
As mentioned above, companies like Apple and Samsung have so far resisted any attempt to legislate users’ “right to repair.” This rejection of maintenance and repair has far-reaching consequences: “For without that capacity (for repair) the world cannot go on, cannot become ready-to-hand again” (Graham and Thrift 2007, 3). Moreover, the important distinction between breakdown and failure is constantly challenged once we are forced to reconsider Heidegger’s argument regarding our ability to easily differentiate between humans and tools. As Timothy Barker recently argued, technology should now be read “as a process, rather than as an object” (2018, 10), and media scholars should therefore focus on “the production of the conditions for the possibility of experience” (2018, 7). If digital technology should be studied as a process or as an assemblage – and not as a series of isolated objects or interfaces – how can we determine when and why it fails, or which criterion of failure to apply? What kind of work is needed in order to isolate “the conditions for the possibility” of failure?
This scholarly pursuit is further complicated by the fact that our failed technologies do nothing more than further obscure the underlying logic and hidden infrastructures that sustain them. This results in “the disappearance of technology,” a problem addressed by Mark Jarzombek (2016) and others. In Jarzombek’s words, “the word technology is now meaningless, the residue of an anthrocentric worldview of Man and Tool and the elusive promise of technē. Technology has morphed into a bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-self. We are at the beginning of a new history” (2016, 5). This paradigm shift transforms moments of breakdown into a chain of failures: the failure to trace and understand the source of the problem; the failure to repair the tool; and, as we shall see in chapter 3, the failure to remember the breakdown in order to challenge the prominent narratives surrounding digital technologies (such as their infinite seamlessness or supposed immateriality).
Put differently, there are two reasons why failure no longer functions as an epistemological apparatus: first, we can no longer separate tools, bodies, and environments; second, even when our technological tools break, they do not become transparent, as their underlying logic and inner workings remain hidden from the user. Given the rising complexity of networks and algorithmic systems, the engineers and designers behind these devices often remain in the dark themselves, a problem that will only intensify in a world based on machine-learning algorithms and AI (O’Neil 2016). Worse still, there is an ever-growing division of labor: while underpaid workers mine coltan in the Congo or manufacture electronics in the Global South, highly paid experts residing mostly in Western countries and in the coastal cities of the United States are tasked with designing and engineering these devices (Mantz 2008). These strategic divisions of labor create what Gillian Tett calls “Geeky Silos”: “the idea that small cadres of technical experts, hidden from public view, are pushing the frontier of their particular disciplines in ways that could in fact be quite dangerous” (2010). One result of this expert culture is the ever-growing industry of call-center helplines tasked with helping users deal with the “mass, routine failure in computer systems” (Graham and Thrift 2007, 12). Once again, the users are mostly located in the West while the workers assisting them are recruited in the Global South.
This lack of transparency is the raison d’être of the “Black Box Society” (Pasquale 2016). Algorithmic systems, including those used in finance, are hidden behind a smoke screen of intellectual property and specialized knowledge. In his analysis of this bla...