Introduction: Science-Driven Policing, or Police Indifference to Science?
In 2010, and for the previous nine years running, CSI: Crime Scene Investigation ranked among the most popular shows on television in the United States.1 The program became a hit so quickly after its premiere in 2000 that the original series, set in Las Vegas, spawned two clones: CSI: Miami and CSI: New York. These shows put a new twist on the old police procedural drama. The CSI officers solved crimes with high-tech forensics: gathering DNA, lifting fingerprints with revolutionary new techniques, and using science to reconstruct the paths of bullets. Watching these programs, the viewer knows that policing has changed. For every member of the CSI team using a gun, more wield test tubes, DNA sampling equipment, and all manner of futuristic gizmos designed to track down witnesses and catch the bad guys.2 The show signals a break with the past, because it revolves around the way police use modern science to find the guilty and bring them to justice.
CSI reflects the emergence of DNA evidence as a powerful tool since it first appeared in American criminal courts in the late 1980s. With DNA and other formidable forensic techniques on our side, little could escape our scientific police work. In this new world, in which science could tell us definitively that the police had the right guy, with a probability of millions or even billions to one, the game had changed for good. The “just the facts, ma’am” approach of Sergeant Joe Friday, and the slow and inexact old-school ways that might or might not turn up evidence, began to seem like quaint relics of a bygone era. Sure, some real-world police protested that CSI raised unrealistic public expectations of both forensic science and the police,3 but CSI simply put a drama-worthy sheen on the way that police departments liked to portray themselves in the age of DNA: using the best of what science had to offer to construct air-tight criminal cases. Police frequently announced that they had used DNA to catch guilty people, sometimes for crimes far in the past, attracting wide public notice and bolstering law enforcement’s science-based image. With headlines like “State, City Police Laud Increase in Arrests Using DNA”4 in Baltimore, “Georgia DNA Solves 1,500 Cases”5 in Atlanta, “DNA Databanks Allow Police to Solve at Least Four Murders”6 in Memphis, and “With Added Lab Staff, DNA Tests Resolve String of Old Killings”7 in Milwaukee, the direction and approach of police work now seem woven together with the latest scientific advancements. Science has given police and prosecutors an enormous, unbeatable advantage.
But this all-too-common view of modern police work using science to move into a gleaming, high-tech future turns out to be a myth. When we strip away the veneer of television drama and the news stories about how DNA has helped police catch another killer or rapist, the real picture concerning law enforcement and science actually looks much different. With the exception of DNA (and then, only sometimes), most of our police and prosecutorial agencies do not welcome the findings of science; they do not rush to incorporate the latest scientific advances into their work. On the contrary, most police departments and prosecutor’s offices resist what science has to say about how police investigate crimes. The best, most rigorous scientific findings do not form the foundation for the way most police departments collect evidence, the way they test it, or the way they draw conclusions from it. Similarly, most prosecutors have not insisted upon evidence collected by methods that comply with the latest scientific findings in order to assure that they have the most accurate evidence to use in court. Like police, most prosecutors have resisted. And this resistance comes despite a nearly twenty-year drumbeat of exonerations: people wrongly convicted based on standard police practices, but proven irrefutably innocent based on DNA evidence. These DNA exonerations, now numbering more than 250 nationwide, prove that traditional techniques of eyewitness identification, suspect interrogation, and forensic testing contain fundamental flaws that have resulted in miscarriages of justice.
Yet the resistance continues. At best, police and prosecutors have used advances in science selectively, when it helps their cases. At worst, they have actively opposed replacing questionable investigative methods with better, empirically proven techniques, sometimes even insisting on retaining flawed methods. As a matter of principle and logic, this indifference to improved practices that will avoid miscarriages of justice seems puzzling and irresponsible, since we know for certain that we can do better than we used to. As a matter of concrete cases, when we see that the failure to use our best methods sometimes leads to both the punishment of the innocent and the escape of the guilty, indifference can become a catastrophe for our system of justice. It is this resistance to sound, science-based police investigative methods that forms the heart of this book.
Brandon Mayfield and the Infallible Science of Fingerprinting
Brandon Mayfield’s case makes a striking example. In March of 2004, terrorists bombed four commuter trains in Madrid, killing 191 people and wounding approximately eighteen hundred. Spanish police soon found a partial fingerprint on a plastic bag in a car containing materials from the attack. Using a digital copy of the fingerprint sent by the Spanish police, a senior FBI fingerprint examiner made “a 100% identification” of Brandon Mayfield, an Oregon attorney, whose prints appeared in government databases because of his military service and an arrest years earlier.8 Three other fingerprint experts confirmed the match of Mayfield to the print found on the bag: FBI supervisory fingerprint specialist Michael Wieners, who headed the FBI’s Latent Print Unit; examiner John Massey, a retired FBI fingerprint specialist with thirty years of experience; and Kenneth Moses, a leading independent fingerprint examiner.9 The FBI arrested Mayfield, and at the Bureau’s request, a court incarcerated him for two weeks, despite the fact that he did not have a valid passport on which he could have traveled to Spain; he claimed he had not left the United States in ten years.10 When the FBI showed the Spanish police the match between the latent print from the bag and Mayfield’s prints, the Spanish police expressed grave doubts. The FBI refused to back down, even declining the request of the Spanish police to come to Madrid and examine the original print.11 Only when the Spanish authorities matched the print with an Algerian man living in Spain did the FBI admit its mistake. The Bureau issued an apology to Mayfield12—an action almost unprecedented in the history of the FBI—and later paid him millions of dollars in damages in an out-of-court settlement.13
The extraordinary apology and the payment of damages may help to rectify the injustice done to Mayfield and his family. But for our purposes, what happened after the FBI admitted its mistakes and asked the court to release Mayfield shows us something perhaps more important. The Mayfield disaster occurred because, among other things, the verification of the original FBI match of Mayfield’s print—a procedure performed by three well-regarded fingerprint experts—ignored one of the most basic principles of scientific testing: the verification was not a “blind” test. The three verifying examiners knew that an identification had already been made in the case, and they were simply being asked to confirm it.14 No scientific investigation or basic research in any other field—a test of the effectiveness of a new over-the-counter medicine, for example—would ever use a nonblind testing procedure; yet nonblind verification is still routine in fingerprint identification. Further, the FBI conducted proficiency testing of all of the examiners involved in the Mayfield case—but only after revelation of the errors, not before. At the time of Brandon Mayfield’s arrest, the FBI did no regular proficiency testing of its examiners to determine their competence, even though such testing routinely occurs in almost any commercial laboratory using quality-control procedures. Further, and perhaps most shocking of all, the fingerprint comparison in the Mayfield case relied not on rigorously researched data and a comparison made under a well-accepted set of protocols and standards, but on the unregulated interpretations of the examiners.
Yet, confronted by an undeniable, publicly embarrassing error that highlighted the crying need for fingerprint analysts to adopt standard practices used in every scientific discipline, the experts refused to yield. Their answer was resistance and denial: resistance to change, and denial of the existence of a problem. Months after the humiliating exposure of the Mayfield debacle, some of those involved continued to insist that the matching of prints to identify unknown perpetrators could not produce mistakes—ever. In an article on the Mayfield case and other instances of mistaken forensic identification, Agent Massey, who had verified the print as belonging to Mayfield, told the Chicago Tribune that he and his fellow analysts had just done their jobs—nothing more. He acknowledged that when he verified Mayfield’s print, he knew that another examiner had already declared the print a match; in other words, he had not performed a blind verification test. Nevertheless, he said, “I’ll preach fingerprints till I die. They’re infallible.”15 Another examiner interviewed about the Mayfield case made an almost identical, unequivocal statement: “Fingerprints are absolute and infallible.”16 When another false fingerprint match led to the two-year incarceration of a man named Rick Jackson, CBS News correspondent Lesley Stahl confronted another FBI agent on the news program 60 Minutes. The agent’s words eerily echoed Agent Massey’s declarations of fingerprint infallibility. After a demonstration of fingerprint identification by the agent, Stahl asked, “What are the chances that it’s still not the right person?” Without hesitation, the agent replied, “zero,” because “[i]t’s a positive identification.”17
As an institution, the FBI did no better at accepting its error and changing its practices. The Bureau announced that it would conduct an investigation of the practices of its Latent Fingerprint Unit, with an eye to “adopting new guidelines.” (The Latent Fingerprint Unit conducted this investigation itself.) As these words are written, more than six years after a mistaken fingerprint match almost sent Brandon Mayfield to prison for the rest of his life, the FBI laboratory’s fingerprint identification division does not use standard blind testing in every case. The laboratory widely considered to have the best fingerprint identification operation in the country continues to resist change and remains in denial, refusing to move toward practices and safeguards that the scientific world has long considered standard.
How We Got Here
To understand how we got to this point, we must start with DNA. DNA analysis did not develop in the context of police-driven forensic investigation, but rather as a wholly scientific endeavor. This helps explain why DNA testing has always included fully developed standard protocols for its use and the ability to calculate the probability of its accuracy based on rigorously analyzed data.18 This made courts willing to allow its use as proof. Despite its obvious complexity, DNA analysis had been thoroughly tested and was well grounded in scientific principles. As long as forensic scientists followed proper protocols for handling and testing the evidence, DNA could “individualize”—indicate whether a particular person had or had not supplied a tiny piece of tissue or fluid—with a degree of precision unimaginable before. The potential for solving crimes, particularly murders and rapes by strangers in which police might find some fragment of the assailant’s DNA left behind, seemed limitless. Defendants who might have escaped detection and conviction got the punishment they deserved. Even decades-old “cold cases” would yield to this marvelous new tool, provided that enough testable biological material still existed, and advances in testing rapidly made accurate analysis of ever smaller samples possible.19
Soon enough, though, police and prosecutors found that the great sword of DNA had two edges: it could confirm guilt like nothing else, but it could also exclude a suspect that the authorities believed had perpetrated the crime. Sometimes the prosecution had already tried the suspect and obtained a guilty verdict. DNA could convict, but it could also throw police investigations, charges, and even convictions into the gravest doubt. A pattern emerged: many of the cases upended by DNA rested on well-accepted types of evidence, like identifications by eyewitnesses, confessions from suspects, or forensic science producing a “match” with a perpetrator. Thus DNA began to demonstrate that these traditional methods actually did not have anything like the rock-solid basis everyone in law enforcement had always imagined. The very basis for trusting these standard types of evidence began to erode.
By early 2010, DNA had resulted in the exoneration of more than 250 previously convicted people, some of whom had spent years on death row.20 By far, the single most common factor, found in 75 percent of these cases, was incorrect eyewitness identification;21 the second most common type of error was inaccurate (or sometimes downright fraudulent) forensic testing.22 Perhaps most surprisingly, DNA also proved that some suspects did something most people considered unimaginable: they confessed to serious crimes that they had not committed.23 All in all, the DNA exoneration cases showed, beyond any doubt, that we simply had to rethink some of our fundamental assumptions about the most basic and common types of evidence used in criminal cases. An eyewitness who expressed absolute certainty when identifying the perpetrator could actually be wrong. A person who confessed to a crime might not actually have done it. And forensic analysis, including fingerprint matching, was not invariably correct.
With DNA exonerations continuing every year in the 1990s and 2000s, more and more research on traditional police investigative methods came to prominence. The research had earned acceptance in the scientific community, sometimes decades before, through peer review, publication, and replication by other scientists, but most of it had remained obscure except to a small circle of researchers. With the advent of DNA exonerations, the science became important to anyone interested in the integrity of the criminal justice system. Decades of these studies, it turned out, had identified flaws in the ways that police conducted eyewitness identifications. Other research showed that the most widely used method of interrogating suspects rested upon assumptions shown to be deeply flawed, and that common interrogation methods created real risks of false confessions.
DNA’s precision and scientifically sound foundation effectively raised the bar for every other forensic technique and investigative method. Experts and researchers began to call traditional (i.e., non-DNA) investigative methods into question.24 The full scope of damage to the credibility of police investigative tactics became visible in 2009, with the National Research Council’s report on forensic sciences, Strengthening Forensic Science in the United States: A Path Forward.25 In this landmark report, discussed in detail in chapter 2, a large group of the most knowledgeable people in forensic science and related fields declared that, aside from DNA and a few other solidly scientific disciplines such as toxicology, almost none of the forensic science disciplines could claim any real scientific basis for their results. Most of the forensic work done in the United States did not follow the standard scientific precautions against human cognitive biases. Striking at the core of forensic science, particularly fingerprint analysis, the report stated that (again with the exception of DNA and some other disciplines based firmly in the hard sciences) none of the common forensic disciplines could proclaim themselves rigorously reliable, let alone infallible.26
But this sudden exposure of the shortcomings of traditional police investigation tactics also had another, more positive side. The same bodies of research that demonstrated the failings of traditional eyewitness identification testimony, interrogation methods, and forensics also revealed better, more accurate methods to solve crimes—or, at the very least, improved ways to investigate that would greatly reduce the risks of incorrect charges and convictions. These new methods could help guard against mistakes, both by producing more reliable evidence and by eliminating common human cognitive biases from police and forensic investigation. Many of these improved methods would cost very little—sometimes nothing. Thus, the research on traditional investigative methods did not just point out flaws; it pointed the way to better, more reliable tactics. A few examples make this plain.
⢠Research by cognitive psychologist Gary Wells and others demonstrated that eyewitness identification procedures using simultaneous lineupsâshowing the witness six persons together, as police have traditionally doneâproduces a significant number of incorrect identifications. This is the case because showing the six persons to the witness simultaneously encourages witnesses to engage in relative judgment: they make a selection by asking themselves, âWhich of the people in the lineup looks most like the perpetrator, even if I canât say for sure that the perpetrator is there?â Wells discovered that if he showed the persons in the lineup to the witnesses sequentiallyâone at a time, instead of all six togetherâa direct comparison of each individual person in the lineup to the witnessâs memory of the perpetrator replaces the flawed relative judgment process, reducing the number of false identifications significantly.
⢠Research has demonstrated that interrogations that include threats of harsh penalties (âTalk, or weâll ask for the death penalty.â) and untruths about the existence of evidence proving the suspectâs guilt (a false statement by police asserting that they found the suspectâs DNA at the scene) significantly increase the prospect of an innocent person confessing falsely. By eliminating these tactics, police can reduce false confessions.27
⢠Fingerprint matching does not use probability calculations based on collected and standardized data to generate conclusions, but rather human interpretation and judgment. Examiners generally claim a zero rate of errorâan untenable claim in the face of publicly known errors by the best examiners in the United States. To preserve the credibility of fingerprint examination, forensic labs could use exactly the kinds of proficiency testing and quality assurance standards scientists have crafted for other fields. These methods have become widely available; scientists, engineers, and researchers all use them for work that requires high levels of reliability.28
The Reaction: From Indifference to Hostility
In light of all of the challenges that science n...