Big Data, Crime and Social Control
eBook - ePub

  1. 230 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

From predictive policing to self-surveillance to private security, the potential uses of big data in crime control pose serious legal and ethical challenges relating to privacy, discrimination, and the presumption of innocence. This book examines the impact of big data analytics on social and crime control and on fundamental liberties.

Drawing on research from Europe and the US, this book identifies the various ways in which law and ethics intersect with the application of big data in social and crime control, considers potential challenges to human rights and democracy and recommends regulatory solutions and best practice. This book focuses on changes in knowledge production and the manifold sites of contemporary surveillance, ranging from self-surveillance to corporate and state surveillance. It tackles the implications of big data and predictive algorithmic analytics for social justice, social equality, and social power: concepts at the very core of crime and social control.

This book will be of interest to scholars and students of criminology, sociology, politics and socio-legal studies.


Information

Publisher: Routledge
Year: 2017
Print ISBN: 9781138227453
eBook ISBN: 9781315395760

Part I
Introduction

1
Big data

What is it and why does it matter for crime and social control?
Aleš Završnik
Big data is a cliché, because people think it is this magic mountain of gold. It’s not a mountain of gold. Most of it is trash.
Todd Yellin, Netflix (Yellin, 2016)

Big data semantics: “meaning extraction”

If the limits of our language entail the limits of our worlds (Wittgenstein, 2005), the language of big data is tearing down the world of what counts as crime-relevant knowledge (now databases), what counts as proper reasoning (now algorithms), and how we should tackle – prevent and investigate – crime (now predictive policing) and prosecute cases (now automated justice). The new mathematical language serves security purposes well (Amoore, 2014: 426): "[T]he mathematical sciences offer a grammar of combinatorial possibilities that allows for things – people, objects and data – to be arranged together, for links to be made" (Amoore, 2014: 431). New concepts and tools are being invented in order to understand crime (knowledge production) and act upon this knowledge (crime control policy): "meaning extraction", "sentiment analysis", "opinion mining", "computational treatment of subjectivity in text", visualisation and mining tools, algorithmic prediction, and data analytics. All of these are rearranging and blurring the boundaries of the security and crime control domain.
The challenge of big data is not only to accumulate large quantities of data but also to clean and structure the data in order to extract meaningful, actionable instructions – but for whom and at what price? These are never completely objective, value-free data that can speak for themselves. Already at the language level, researchers have shown that natural language necessarily contains human biases. The training of machines, also known as machine learning, on language corpora means that artificial intelligence (AI) will inevitably imbibe these biases as well (Caliskan-Islam, Bryson, & Narayanan, 2016). The researchers claim that a process of "de-biasing" cannot eliminate the biases either. The result would only be "fairness through blindness", because "prejudice can creep back in through proxies" (Caliskan-Islam et al., 2016). Research on sentencing prediction instruments has confirmed exactly how criminal history is in fact a proxy for race (Harcourt, 2015): the racial disproportionality in the prison population hits the African-American community hardest, in large part because sentencing instruments are based on analyses of past criminality.
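To make the proxy mechanism concrete, the following is a minimal, hypothetical sketch in Python (not drawn from the studies cited above; the variable names, distributions, and numbers are invented for illustration). It shows how a risk model that is "blind" to a protected attribute can still reproduce a group disparity when a correlated proxy – here, a synthetic prior-arrest count – is the only predictor.

    # Hypothetical illustration only: synthetic data and invented parameters.
    # A model "blind" to the protected attribute still reproduces the group
    # disparity, because a correlated proxy (prior arrests) carries it back in.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (0/1); never given to the model.
    group = rng.integers(0, 2, size=n)

    # Assumed, stylised link: uneven past policing makes prior arrests
    # correlate with group membership.
    prior_arrests = rng.poisson(lam=np.where(group == 1, 2.0, 0.5))

    # Outcome generated from prior arrests alone in this toy set-up.
    p_reoffend = 1.0 / (1.0 + np.exp(-(0.4 * prior_arrests - 1.0)))
    reoffend = rng.binomial(1, p_reoffend)

    # "Fairness through blindness": the only feature is the proxy.
    X = prior_arrests.reshape(-1, 1)
    risk = LogisticRegression().fit(X, reoffend).predict_proba(X)[:, 1]

    print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
    print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
    # The two means differ markedly even though the model never saw "group".

In this stylised sense, excluding the sensitive attribute does not remove the disparity; the proxy restores it – the same mechanism by which criminal history can act as a proxy for race (Harcourt, 2015).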
The new language and semantic shifts in crime control threaten fundamental civil liberties. "The start and finish of the criminal justice process are now indefinite and indistinct" (Marks, Bowling, & Keenan, 2015), and this extends – in depth and breadth – the scope of the surveillant gaze (Marx, 2002). Agencies mandated to fight terrorism, for instance, now seek to spot "persons of interest", meaning not only what a person might be, but what a person might become (Lyon, 2014: 7). Similarly, the Eindhoven Living Lab in the Netherlands checks for "escalated behaviour" in order to arrive at big-data-generated determinations of when to intervene (Galič, 2017). The notion includes yelling, verbal or otherwise aggressive behaviour, or showing signs that a person has lost self-control (de Kort, 2014). Such behaviour is targeted and mediated in order to be "defused". The goals of the intervention are to induce subtle changes, e.g. "a decrease in the excitation level", to refocus attention, to encourage social behaviour, or to increase the self-awareness and self-control of an individual or group (de Kort, 2014). New means and techniques are used to "de-escalate" behaviour; for instance, interactive street lighting is manipulated "to confine and contain" aggressive events (de Kort, 2014).
These examples show how big data is used as a means of collecting and processing environmental data in urban settings, which are perceived as living organisms. The "living lab" automatically collects and analyses data, and in the process learns from past experience in order to produce actionable data. Big data capabilities are not centred on notions of solidarity, the social cohesion of the community, or other public interests, but are instead grounded in the monetary gains of homeowners and employed to guard the economic interests of retailers. The central goals of the living lab in the BTC City Ljubljana shopping district are to increase the spending of customers, increase the profits of retailers, and make management more effective (Polajžer, 2016). Big data thus supports the specific interests of certain privileged social groups. It is employed to seduce consumers into fulfilling the monetary ends of retailers, while public safety considerations are a mere side effect of these efforts.
Unlike the well-defined concepts of criminal law – suspect, reasonable doubt, the presumption of innocence, and so on – which serve as regulators of and thresholds for intervention by law enforcement agencies, the new concepts and language neither sufficiently confine agencies nor prevent abuses of power. The language of big data helps to tear down the walls of criminal procedure rules. This move towards a system of "automatic justice" "minimise[s] human agency and undercuts the due process safeguards built into the traditional criminal justice model" (Marks et al., 2015).

Big data knowledge: “We do not know what the questions were, but here are the answers”

The meaning of "big data" is highly contested; the term has been used to encompass the volume of datasets, the processing of data ("data mining", "crunching numbers"), or even the generalised "big data hype" (Raicu, 2015). The most far-fetched definitions contend that big data analytics enables an entirely new epistemological approach to making sense of the world (Kitchin, 2014: 2). More modest authors claim that the idea of big data is "to see hidden connections and patterns" and to generate new knowledge from existing data (The Information and Privacy Commissioner of Ontario, 2017). Big data "refers to our burgeoning ability to crunch vast collections of information, analyse it instantly, and draw sometimes profoundly surprising conclusions from it" (Mayer-Schönberger & Cukier, 2013).
The more far-fetched views see big data and the new analytical tools as a harbinger of a new paradigm shift by which science is entering "the fourth stage" (Hey, Tansley, & Tolle, 2009). According to such views, big data signals a new era of knowledge production characterised by "the end of theory": "the data deluge makes the scientific method obsolete" (C. Anderson, 2008). The big data enterprise claims that rather than testing a theory by analysing data, the new analytics seeks to gain insight "born from the data". This "empiricist epistemology" (Kitchin, 2014) rests on several premises (Kitchin, 2014: 4–5): (1) that big data captures the whole domain it seeks to analyse; (2) that there is no need for a priori theory; (3) that "the data speak for themselves" (Captain, 2015), free of human bias or framing; and (4) that the calculated meaning transcends context or domain-specific knowledge and can be interpreted by anyone who can decode visualisations. Such views camouflage big data as "objective" and "pure" knowledge and neglect the fact that statistics have always been political and have served specific political ends (Desrosières, 2002). Statistics are produced by humans and for humans.
On the other hand, authors have warned that we are witnessing more of the old rather than something completely new. Peter Thiel, a co-founder of PayPal, has, paradoxically, pointed to the failures of the digital industry: "Cell phones distract us from the fact that the subways are 100 years old" (Dowd, 2017). The digital turn has failed to fulfil the old dreams that bigger and better things would happen. What may be new is the exponential acceleration of neoliberalism: a reduction in state power that reinforces the private sector; an increase in social and wealth inequalities, as the powerful elite gain more insight into the masses than ever before; the expansion of the precarious working class; and an increase in unpaid digital labour. "The people", as the holders of sovereign rights, are reduced to "users" of digital services. They are blinded by the informational glut (Andrejevic, 2013) and addicted to the comforting and entertaining nature of digital services, so that they do not see the real interests behind the user-friendly interfaces of digital applications and the high stakes that are (not) being negotiated. This army of digital workers, open to exploitation, is used as the means to very specific political ends. There is no "end of politics" at work here, as the "reserve army of digital labour" serves the pecuniary interests of the digital industry, which caters to the affluent elite of the surveillance society. Or, to paraphrase a popular meme, digital workers are actually the product being sold on the data marketplace.

Big data industry: “Doing more with less”

Metaphors are important for technological progress to have an impact on society (Watson, 2015). Big data's origins can be traced to the domain of business and industry, where big data has been defined in terms of three "V"s (Gartner, 2012), four "V"s (IBM, 2016), or even five or six "V"s (Marr, 2016): (1) volume (the scale of the data gathered is greater); (2) velocity (data processing and analysis of streaming data take place in real time or near real time; processing is faster and enables longer retention and faster transfer of data); (3) variety (different forms of data, structured and unstructured, computer readable or not, e.g. social media posts); (4) veracity (the uncertainty of data, which refers to the quality and accuracy of data [Ramesh, 2017]); (5) the "value" of data; and (6) its "vulnerability" (Marr, 2016).
In order to market big data tools, the industry has depicted data through a variety of words and metaphors (Watson, 2015): data can be thought of in terms of (1) a new natural resource (in the same vein as oil, gold, etc.); (2) a new industrial product (as big business, a platform, etc.); (3) a new by-product (e.g. data trails, data smog, data exhaust, etc.); (4) a new market (with its currencies, brokers, vaults, assets, etc.); (5) a liquid (thus: data deluges, data tsunamis, data waves, data lakes); (6) being trendy (the new oil, the new currency, the new black, the new revolution, etc.); and (7) a body (through reference to words such as fingerprint, blood, DNA, reflection, shadow, profile, and portrait).
The application of these vague concepts obfuscates the fact that big data is designed to serve certain political and economic interests over others, e.g. to increase the profits of companies by finding "hidden opportunities" in production cycles or in government operations – to put it simply, with one of the sales pitches, "to do more with less".
The industrial origin of big data carries the logic of business practices into all other sectors that apply big data. This results in deficiencies and damage that may be acceptable in one domain (e.g. marketing) but which erode rights and deny human dignity in another. For instance, when employers increasingly indulge in the practice of "hiring by algorithm", seemingly irrelevant data such as inaccurate consumer reports can cause real economic injury to job seekers. Such inaccuracies can lead employers to screen out prospective employees or to lower salaries (Cohen, Hoofnagle, McGeveran, Ohm, Reidenberg et al., 2015). In the crime control domain, such practices breach several fundamental principles of criminal procedure (Mosco, 2014: 177). The pressure to institute a process of "datafication", i.e. turning everything into numbers, in order to "monetise" data and create "actionable" data is at the core of big data logics. The implicit underpinnings of business imperatives in the security domain have been described as three "A"s – automation, anticipation, and adaptation (Lyon, 2014: 6–9).
Anticipation refocuses crime control actors. They reorient their practices and:
focus on the future more than on the present and the past. In the context of neo-liberal governance, this anticipation is likely to place more weight on surveillance for managing consequences rather than research on understanding causes of social problems such as crime and disorder.
(Lyon, 2014: 6–8)
Big data fortifies "evidence-based" policing, the correlate of which can be found in penology as "evidence-based" sentencing or "truth in sentencing". The common feature of these changes is greater reliance on, and increased trust in, quantitative over qualitative reasoning. In fact, prediction – predictive analytics that transcend human perception (Siegel, 2013) – has been one of the most attractive aspects of the application of big data in crime control. It is powered by data, which are accumulated in large part as the by-product of routine tasks (Siegel, 2013). In addition to their effects on criminal procedure, such practices involving bulk data collection from different sources – even before determining the full range of their actual and potential uses (Lyon, 2014: 4) – breach the fundamental principles of personal data protection (i.e. the proportionality principle, the principle of purpose specification and minimisation, and obtaining valid prior consent from data subjects).
Algorithms are being used "not only to understand a past sequence of events, but also to predict and intervene before behaviours, events, and processes are set in train" [i.e. set in motion] (Lyon, 2014: 4). Prediction is only the first step; it is followed by pre-emption – taking action in order to prevent an anticipated event from happening. This is causing existing ex post facto criminal policy to adopt ex ante preventive measures (Kerr & Earle, 2013). It may also invoke "a feeling of constant surveillance", as stressed by the European Court of Justice in the Digital Rights Ireland case (joined cases C-293/12 and C-594/12, judgment of 8 April 2014) on the bulk collection of the traffic and location data of users of all types of electronic communications.

Automated governance

In the much-acclaimed book Sentencing in the Age of Information, Franko Aas (2005) detected how biometric technology used in criminal justice settings had been shifting the focus from "narrative" to supposedly more "objective" and unbiased "computer databases". Franko was critical of the transition and characterised it as going "from narrative to database". Instead of the narrative, she claimed, the body has come to be regarded as a source of unprecedented accuracy because "the body does not lie" (Aas, 2006). The "truth" is supposedly to be detected on a body through the use of biometric technologies, such as DNA and fingerprinting, because "coded bodies" have come to be perceived as a more reliable means of ascertaining the truth than witness testimony, which research has shown can be highly misleading.
However, today criminal justice systems are on the brink of taking yet another step: from the database towards automated, algorithm-based decision-making. This is a transition towards complete de-subjectivation in the decision-making process, a sort of erasure of subjectivity. It is not only that narrative is regarded as an unreliable means of discovering the "truth in sentencing", as many have claimed from different perspectives, e.g. psychological and neurological points of view (Bradfield, Wells, & Olson, 2002; Innocence Project, 2016; Kassin, 2008; Kassin & Kiechel, 1996; Shaw & Porter, 2015), but even databases are regarded as being too weak and static, too dependent on the user. Big data promises to make this static tool "actionable" through the use of algorithms that can provide real-time feedback by crunching large amounts of data coming from all domains of life. At the same time as information collection is expanding, data processing is getting faster; responses are becomin...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Notes on contributors
  6. Foreword
  7. Acknowledgements
  8. PART I Introduction
  9. PART II Automated social control
  10. PART III Automated policing
  11. PART IV Automated justice
  12. PART V Big data automation limitations
  13. Index