1.1 Databusting
In The Tipping Point, first published in 2000, Malcolm Gladwell explores ways in which complex systems suddenly change. Gladwell makes extensive use of data to show that changes have taken place, and then examines why this might have happened. In his opening chapter, he considers a sudden and dramatic change in crime statistics in New York City:
New York City in the 1980s [was] a city in the grip of one of the worst crime epidemics in its history. But then, suddenly and without warning, the epidemic tipped. From a high in 1990, the crime rate went into precipitous decline. Murders dropped by two-thirds. Felonies were cut by half. Other cities saw their crime drop in the same period. But in no place did the level of violence fall further or faster. (Gladwell, 2000: 137)
Why did this happen? Gladwell looks at the available data and comes to a number of conclusions:
During the 1990s violent crime declined for a number of fairly straightforward reasons. The illegal trade in crack cocaine … began to decline. The economy's dramatic recovery meant that people who might have been lured into crime got legitimate jobs instead, and the general aging of the population meant that there were fewer people in the age range … that is responsible for the majority of violence. (2000: 140)
After these initial observations, Gladwell notes that the situation in New York was, however, 'a little more complicated'. The city's economy hadn't improved in the early 1990s; if anything, welfare cuts had hit the city hard. The crack cocaine epidemic was in long-term decline, and high levels of immigration meant that the city's population was actually getting younger. Yet the reduction in violent crime being recorded was dramatic. As Gladwell says, 'One would expect [these trends] to have gradual effects. In New York, the decline was anything but gradual. Something else clearly played a role in reversing New York's crime epidemic' (2000: 140–1).
Gladwell then looks at what that something else might have been. He suggests that it might have been what has become known as the 'broken windows' theory of crime: when a broken window is left unrepaired, it signals to the community at large that the rule of law has broken down, and the neighbourhood spirals into further disorder and more serious crime. Because many negative aspects of life in New York City were not being addressed, from graffiti and fare-dodging on the subway to tacit complicity in low-level criminal disorder, the city's criminals were acting with impunity.
Gladwell charts the clean-up of the New York City Transit System between 1984 and 1990, as a widespread problem with graffiti was tackled head on. This was followed by a concentrated focus on fare-dodging from 1990 onwards, and then a policy of 'zero-tolerance' policing following the election of mayor Rudy Giuliani in 1994. All of this activity coincided with the dramatic fall in the rate of violent crime, which, as far as Gladwell is concerned, was explained neatly by the actions of those in authority.
As with almost any neat explanation of complicated human interaction, a bit of lateral thinking allows someone somewhere to find an entirely plausible alternative hypothesis which casts doubt on the original theory. Suddenly, what seems to be a clear explanation often turns out to be simply one of many possible simple explanations. In the case of the drop in crime in New York, one alternative hypothesis came from Steven Levitt and Stephen Dubner, who popularised their ideas in their book Freakonomics, published in 2007.
In an original academic paper published in 2001, which was drawn on for the above 2007 book, Levitt and his co-author John Donohue had looked at similar crime statistics to those Gladwell had considered, and they agreed with Gladwell up to a point. Their initial findings were similar to Gladwell's:
Since 1991, the United States has experienced the sharpest drop in murder rates since the end of Prohibition in 1933. Homicide rates have fallen more than 40 percent. Violent crime and property crime have each declined more than 30 percent. Hundreds of articles discussing this change have appeared in the academic literature and popular press. (Donohue and Levitt, 2001: 379)
Their alternative hypothesis was that, rather than the efforts of the Transit Authority and the effects of zero-tolerance policing, the real reason for the fall in crime was that 'legalised abortion has contributed significantly to recent crime reductions. Crime began to fall roughly eighteen years after abortion legalisation' (Donohue and Levitt, 2001: 379). They went on to state that 'Legalised abortion appears to account for as much as 50 percent of the recent drop in crime' (2001: 379).
Donohue and Levitt argued that, following the nationwide legalisation of abortion in 1973, poor mothers were much less likely to have children whom they would have struggled to raise to become law-abiding citizens. They explored the links between the kinds of crime which had ravaged New York City and the deprived, unstable backgrounds of those who had been contributing to the high crime statistics. They also looked at the effects in those cities and states which had legalised abortion locally before the nationwide change brought about by the famous Supreme Court decision in Roe v Wade in 1973.
Donohue and Levitt didn't entirely dismiss the argument made by Gladwell, but they suggested that the effects of legalised abortion were much greater than the alternative theory put forward in The Tipping Point. Was Gladwell wrong? Gladwell himself responded to Donohue and Levitt's interpretation of the data, poking holes in their argument by, for example, questioning why the widespread availability, and use, of contraceptive pills from the mid-1960s did not have the same effect as the much less prevalent use of abortion as a form of birth control from the 1970s onwards.
Gladwell, Donohue and Levitt were not the only prominent voices trying to find an explanation for the situation in New York. Steven Pinker, leading academic and writer of books on popular science, wrote about the issue in his 2011 book The Better Angels of Our Nature. Pinker wrote that the Freakonomics theory seemed 'too cute to be true', noting that 'any hypothesis that comes out of left field to explain a massive social trend with a single overlooked event will almost certainly turn out to be wrong, even if it has some data supporting it at the time' (Pinker, 2011: 143).
In Pinker's book, which was about broader declines in rates of violence in human society over time, he explored the Freakonomics theory, drawing on other data to support his arguments. Pinker noted, for example, that according to Donohue and Levitt's theory the proportion of children born to mothers in the categories they had identified as vulnerable should have decreased, whereas it had actually increased substantially.
Pinker also suggested that there were compelling arguments that mothers who avoided having unwanted children were likely to be more responsible citizens than those in similar circumstances who did not, and that the opposite of the Freakonomics claim should therefore have occurred, leaving a generation more likely to commit crime. Pinker put forward his own alternative theory, based on the same data utilised by Donohue, Levitt and Gladwell. In Pinker's view, the violent crime decline happened because older criminals had laid down their weapons and younger cohorts simply did not follow in their footsteps.
So what did happen to cause the decline in violent crime in New York City in the 1990s? It rather depends on the point at which you enter the debate, on whether you have any strong desire to disagree with the general consensus, and on how inclined you are to question the views of others. The most obvious truth is that, using virtually the same data, different people are likely to come to different conclusions. One explanation may eventually become the accepted narrative, but human actions are complicated and alternative theories may explain the same or similar facts in contrary but logically plausible ways.
A more recent example of this phenomenon, this time in education, is the thorny issue of what has become known as the London Effect. At a point in the early 2000s, pupils in Inner London's state schools began to record better and better examination results at the age of 16. Starting from a point which was noticeably lower than the average for children across England, GCSE results in Inner London rose inexorably into the 2010s, leaving other regions of the country behind. By 2016, the typical child in an Inner London state secondary school was attaining results at 16 which were 10% higher than the national average. In 1998, Inner London had been the worst-performing region in the country, with results 18.5% lower than the average measure.
The first major theory which attempted to explain this 'London Effect' was put forward by Ofsted, the government's school inspection agency, in 2010. Ofsted explained that an initiative called the London Challenge had:
continued to improve outcomes for pupils in London's primary and secondary schools at a faster rate than nationally. Excellent system leadership and pan-London networks of schools allow effective partnerships to be established between schools, enabling needs to be tackled quickly and progress to be accelerated. (Ofsted, 2010: 1)
The London Challenge was an initiative introduced into London secondary schools in 2002, and extended to primary schools in 2008. It used outside advisers to support schools which were deemed to be underperforming. Ofsted identified four areas which it suggested had been the cause of the rise in pupil outcomes: clear leadership, experienced external advisers, work to improve the quality of teaching and learning, and the development of robust tracking systems in schools.
This narrative held sway until 2014, when the Institute for Fiscal Studies (IFS) considered the issue, adding some new data, and a new theory, to the conversation. The IFS's conclusion was that the London Effect was not the result of anything which had happened in secondary schools; rather, it reflected a change in the prior attainment of the students who had begun entering Inner London secondary schools some 15 years earlier.
Key Stage 2 scores had improved in the late 1990s and early 2000s, but the IFS report was unable to say why this had happened:
What caused the improvement in Key Stage 2 test scores that led to the 'London effect' at Key Stage 4 is not clear. However, the explanation will be related to changes in London's primary schools in the late 1990s and early 2000s. This means that programmes and initiatives such as the London Challenge, the Academies Programme, Teach First or differences in resources are unlikely to be the major explanation. (Institute for Fiscal Studies and Institute of Education, 2014: 8)
The IFS went on to suggest that, since the national literacy and numeracy strategies had been rolled out at the right time, these might have been the cause of the rise in GCSE pass rates seen in Inner London a few years later. Even if this was not exactly the case, the IFS argued that the London Challenge, structural changes such as the academies programme, and initiatives such as Teach First were unlikely to have been responsible for the London Effect.
A further report was issued at the same time as the IFS report, this time claiming that the improvements were due to efforts being made in secondary schools. This report offered no additional data and relied on narratives generated by those who believed themselves to have been responsible for the successes of the schools in their charge (CfBT Education Trust, 2014).
Following these two alternative explanations, a further theory was added to the mix, as the Centre for Market and Public Organisation (CMPO) at the University of Bristol published a report which noted that the improvements in London schools were 'entirely accounted for by ethnic composition'. The CMPO report introduced further numerical data, using detailed statistical analysis which enabled it to suggest that 'if London had the same ethnic composition as the rest of England, there would be no "London Effect"' (Burgess, 2014: 3). In essence, this theory suggested that London was simply becoming increasingly different to the rest of the country, and therefore that like was not being compared with like. The London Effect was interesting, but didn't offer any...