Introduction: The Crash of 2008
The “seven days that shook the world” in the fall of 2008 began with the ominous announcement on the morning of September 15 that the New York investment firm Lehman Brothers was filing for bankruptcy protection with over $600 billion in debts on its books.1 Coming on the heels of the collapse of another Wall Street firm, Bear Stearns, whose sale to JP Morgan six months earlier had been underwritten by guarantees from the Federal Reserve Bank, the announcement of Lehman’s travails and the refusal of the US government to agree to another bailout sent shudders throughout the entire financial system. Serious questions were raised about the viability of other financial institutions, and given the interlocking relationships between financial actors around the world, there was an understandable fear that the distress of a single firm might trigger an unstoppable domino effect that would plunge the entire world financial system into chaos.
Within days, American government officials were forced to broker the sale of another ailing firm, Merrill Lynch, to Bank of America, and negotiated the largest bailout in history: in return for an infusion of $85 billion, the government took close to an 80% equity share in the world’s largest insurer, American International Group (AIG), which had guaranteed over $500 billion in risky insurance portfolios and whose failure threatened to spark a calamitous global banking collapse. Even these steps failed to stem the panic spreading like wildfire: the precarious circumstances of these large financial institutions precipitated a further seizing up of the nation’s credit markets amid additional bank failures and reorganizations; global stock market capitalization fell to its lowest point since 1929; and the Dow Jones Industrial Average registered the largest drop in its history. A financial collapse threatened to plunge the global economy into another Great Depression, averted only by the decisive action of the political leaders of the advanced nations. By any calculation of the historical imagination, the 2008 crash was the most significant event of the 21st century: the first in a series of worldwide crises, such as the 2020 global pandemic and climate change, that utterly remade the world.
The proximate triggers for the crisis could be located in a series of events stretching back over decades. The first of these was the ever-deepening penetration of financial relations into the fabric of American society, as exemplified by several trends: the gradual spread of credit card debt to growing numbers of consumers; the extension of home ownership (via so-called “sub-prime” mortgages and cheap credit) to sectors of the population with modest incomes and little history of debt; an increase in borrowing to pay for the precipitous rise in college fees; and the spread of individualized pension arrangements (Krippner, 2011). On the one hand, this new regime of accumulation, which Colin Crouch felicitously termed “privatized Keynesianism” (Crouch, 2009), enabled millions of Americans with stagnating salaries and little job security to maintain their living standards and personal consumption through increased credit and by borrowing against the rising equity in their homes. On the other hand, it left them increasingly vulnerable to the volatility and discipline of the market, tying them ever more closely to the vicissitudes of the interdependent global economy.2 This new system of financialization depended upon an unholy combination of financial wizardry in the creation of new commodities and ideological machination. A new technology of “securitization” was designed to transform any debt into a security that could be bought and sold by investors and institutions: “collateralized debt obligations” (mortgage-backed securities packaged into highly complex bundles of standardized financial assets) and credit default swaps (insurance against corporate default) were among the most prominent innovations, constituting an important new revenue stream for banks, investment firms and hedge funds (Tett, 2009). The entire system was predicated on an almost religious conviction that markets were inherently self-regulating and self-correcting. Its proponents insisted that these new financial commodities distributed risk far more effectively than ever before and that, as a consequence, government regulation of the financial sector should be kept to a bare minimum.
Unfortunately, the idea of self-regulation turned out to be a dangerous illusion. The new instruments underpinning the shadow financial sector proved so complex that even their own creators did not fully understand them and, more importantly, could not accurately estimate how much risk was being placed into the system; the potential for self-destructive behavior and crisis thus grew exponentially (Tett, 2009). Once the housing bubble burst in 2006, the entire house of cards collapsed. A precipitous decline in house prices meant that many new homeowners could no longer afford their debt payments, triggering a record number of foreclosures and defaults, which in turn caused the prices of assets tied to underwater mortgages to collapse and household spending to decline sharply. This saddled financial institutions with a growing mountain of bad investments and debt that they could offload only at reduced prices, which worsened their position further. AIG, which had insured many of these mortgage-backed securities, faced the calamitous prospect of an unprecedented number of payout claims but was unable to borrow sufficient capital to cover its potential losses because of a downgrade of its credit rating. Banks, similarly mindful of their own precarious position, refused to lend, paralyzing business investment throughout the economy and in effect leaving the entire market economy teetering on a precipice.
In the aftermath of the wreckage, four paradoxes regarding the crisis seemed particularly perplexing to policy makers and commentators alike: the issue of predictive failure, the legacy of self-regulation, the symbiotic relationship of states and markets, and the deeper foundations of the crisis. The first of these concerned the fundamental question of how policy makers and economists failed so grievously to foresee the looming crisis. Even though Warren Buffett (and a handful of contrarians) had presciently warned as early as 2003 about the perils of what he sarcastically called “financial weapons of mass destruction”, the warning signs were ignored, the very possibility of a catastrophic failure was presumed to have been eradicated from the market economy, and the magnitude of systemic risk was grossly underestimated (Eichengreen, 2015). As one commentator astutely noted: “The true nature of the international system…was not realized until it failed…The awful suddenness of the transformation thus took the world completely by surprise”. What explains such a pervasive social amnesia? Ideological blinders certainly played a role: blind faith in the liberal creed, which exaggerated the self-healing resilience of the market, meant the dangers of a financial tsunami could be blithely overlooked. Nor should sheer greed be underestimated: profit margins and individual bonuses at firms were directly tied to this toxic alchemy, giving all actors a reckless incentive to maintain a deliberate “social silence” about potential catastrophic risks (Tett, 2009).
But what also seemed inescapable was the role of willful complacency in overlooking the potential for crisis within the new global architecture: a psychological inability to properly appreciate the significance of the periodic crises that had plagued the global economy for well over a decade (stretching from the Asian financial crisis of 1997 to the dot-com bubble of 1999–2000, the Enron scandal of 2001 and the Argentine crisis of 2002) and a willingness to discount any countervailing evidence or voices calling for a more prudent course. To other observers, these vertiginous swings demonstrated that crisis is an essential feature of capitalism and that all eras of political and economic stability are plagued by contradictions carrying the seeds of their own downfall (Piketty, 2014).
A related paradox of the 2008 crisis was that, despite the useful fiction of self-regulation espoused by neoliberal acolytes, the entire edifice of the financial system proved to be utterly dependent on the guarantee of political protection. “The ferocity of the financial crisis in 2008 was met with a mobilization of state action without precedent in the history of capitalism”, argues Adam Tooze. “Never before outside wartime had states intervened on such a scale and with such speed” (Tooze, 2018, 166). The United States government’s actions were instructive in this regard. Faced with the very real possibility of a meltdown of the entire banking and financial system and a subsequent worldwide depression comparable to the 1930s, the federal government (spanning both the Bush and Obama administrations) acted swiftly to avert disaster and prevent the crisis from diffusing across the global economy: it committed $700 billion to the Troubled Asset Relief Program to ease credit and money markets; it passed a Recovery Act that committed nearly $800 billion in stimulus spending (a combination of tax cuts, aid to states and infrastructure projects); it pledged billions to the rescue of individual firms (such as Bear Stearns, AIG and Washington Mutual) and private housing financiers (Fannie Mae and Freddie Mac) whose implosion threatened to spread the crisis; and it worked with finance ministers and treasury officials around the globe to orchestrate a coordinated international response (Drezner, 2014).
But the state was equally implicated in the origins of the crisis. The laissez-faire model of limited oversight encouraged the perilous overleveraging of financial institutions, shaped the incentives market actors had to engage in risky behavior and determined the pathways they took to alleviate their predicament (Krippner, 2011). As one commentator bluntly stated, “The road to the free market was opened and kept open by an enormous increase in continuous, centrally organized and controlled intervention”. Even the most ardent of free market ideologues now seemed to accept the valuable lesson, all too easily forgotten in the heady days of market triumphalism, that capitalism is not self-sustaining or self-regulating but rather requires a panoply of political and social institutions to keep it from self-destructing (Block, 2003). As Dani Rodrik insisted: “[M]arkets do not create, regulate, stabilize or sustain themselves. The history of capitalism has been a process of learning and relearning this lesson” (Rodrik, 2011, 237). “No market economy separated from the political sphere is possible” is how one commentator summarized this new conventional wisdom.
However, in reality, the knowledge that governments would rescue market actors, no matter how egregious the risk taking, meant in effect that the people most responsible for the crisis were bailed out by the public purse. In what Mark Blyth called the “greatest bait and switch in human history”, banks were able to privatize their gains to shareholders and executives while socializing their losses back to struggling taxpayers as public debt. This raised a fundamental question of fairness, as many reasonably asked how it was that the very people who created the mess were rewarded while the vast majority suffered greater insecurity and austerity (Blyth, 2013). In the new liberal world, individuals were forced to hold multiple jobs, their incomes stagnated or became subject to increasing volatility, and insecurity was inscribed into the everyday practices of a new flexible capitalism (Sennett, 1998). One writer lambasted economic elites for their “mystical readiness to accept the social consequences of economic improvement, whatever they may be”. Not surprisingly, the crisis brought questions of rising inequality and the fairness of the economic system back into public discourse (Piketty, 2014), and various social movements – from the Occupy movement in the United States to the Indignados in Spain – called for a fundamental restructuring of social and economic priorities. The crisis confirmed yet again that one of the enduring controversies of modern life is adjudicating the proper relationship and boundary between economy and society, and it brought implicit recognition that social needs and stability might not be well served by the narrow bottom lines of market actors.
In the aftermath of the cataclysm, many assumed that the events of 2008 would naturally precipitate a reconsideration of the mystical shibboleths of market self-regulation that had dominated political and economic discourse over the previous several decades. Early on, both sides of the political spectrum appeared convinced that the crisis had once and for all demonstrated that the model of free market capitalism was inherently fragile and unsustainable and that neoliberal hegemony had suffered a fatal blow. On the right, then French President Nicolas Sarkozy pronounced the demise of laissez-faire: “A certain idea of globalization is drawing to a close with the end of a financial capitalism that imposed its logic on the whole economy. The idea that the markets are always right was a crazy idea”. On the left, Gordon Brown, Labour Party Prime Minister of the UK at the time, signaled a similar paradigm shift:
“Today we are seeing not just the collapse of failed institutions but the collapse of a failed laissez faire dogma. In this first financial crisis of the global age the old free market fundamentalism, no matter how it is dressed up, has been found wanting…[T]he events of the past months bear witness, more than anything in my lifetime, to one simple truth: markets need morals”
(Brown, 2009)3
Even the house organs of financial capitalism were guilty of a nervous wobble, with The Economist asking in an October 2008 headline, “Capitalism: was it a good idea?”, and the Financial Times publishing a series the following year dedicated to the theme of “the future of capitalism”. Indeed, many appeared convinced that the crisis might spur a new debate about restructuring capitalism – creating a new social contract between market institutions and society predicated on fairness, equality and responsibility – and could even revive more radical visions of a systemic alternative to global capitalism (Wright, 2010).
But perhaps the most puzzling outcome of the crisis is that, despite the manifest failures of the self-regulating market to ensure prosperity and growth, neoliberalism’s demise has been greatly exaggerated. Colin Crouch, in fact, argued that laissez-faire is “emerging from the financial collapse more politically powerful than ever” (Crouch, 2011, viii). In the United States, the most protracted popular mobilization was directed at the Obama Administration’s efforts to ameliorate the effects of unconstrained markets, such as the Affordable Care Act and the Dodd-Frank legislation. In Europe, the dominant political conversation and policy agenda have emphasized deficit reduction, sound budgets and the inevitability of austerity rather than the construction of feasible alternatives to neoliberal hegemony. Mired in depression-like conditions – slow growth, staggeringly high unemployment, savage cuts to social safety nets and disintegrating living standards as a result of austerity mania – the continent is pervaded by a growing sense of despair and hopelessness...