Is Law Computable?

Critical Perspectives on Law and Artificial Intelligence

Simon Deakin, Christopher Markou

272 pages
English

About this Book

What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place? This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers to overlook the more fundamental - and perhaps 'bigger picture' - ramifications of computable law. With contributions by Simon Deakin, Christopher Markou, Mireille Hildebrandt, Roger Brownsword, Sylvie Delacroix, Lyria Bennett Moses, Ryan Abbott, Jennifer Cobbe, Lily Hands, John Morison, Alex Sarch, and Dilan Thampapillai, as well as a foreword from Frank Pasquale.


Information

Year
2020
ISBN
9781509937080
1
From Rule of Law to Legal Singularity
SIMON DEAKIN AND CHRISTOPHER MARKOU*
Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
Bertrand Russell
Recent Work on the Principles of Mathematics (1901)
In and of itself nothing really matters. What matters is that nothing is ever ‘in and of itself’.
Chuck Klosterman
Sex, Drugs, and Cocoa Puffs (2003)
I. The Dawn of the All New Everything
Before most had a clue what the Fourth Industrial Revolution entailed,1 the 2019 World Economic Forum meeting in Davos heralded the dawn of ‘Society 5.0’ in Japan.2 Its goal: creating a ‘human-centered society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space’.3 Using Artificial Intelligence (AI) and various digital technologies, ‘Society 5.0’ proposes to liberate people from:

… everyday cumbersome work and tasks that they are not particularly good at, and through the creation of new value, enable the provision of only those products and services that are needed to the people that need them at the time they are needed, thereby optimizing the entire social and organizational system.
The Japanese government accepts that realising this vision ‘will not be without its difficulties’ but the plan makes clear its intention ‘to face them head-on with the aim of being the first in the world as a country facing challenging issues to present a model future society’.
Although Society 5.0 enjoys support from beyond Japan,4 it bears a familiarly Japanese optimism about the possibilities of technological progress.5 Yet Japan is not alone in seeing how the technologies of the Fourth Industrial Revolution could enable new systems of governance and ‘algorithmic regulation’.6 And this is particularly the case with regard to a specific type of computation, a family of statistical techniques known as Machine Learning (ML),7 that is central to engineering futures of vast technological possibility that Society 5.0 exemplifies.
Generally, ML ‘involves developing algorithms through statistical analysis of large datasets of historical examples’.8 The iterative adjustment of mathematical parameters and retention of data enable an algorithm to automatically update (or ‘learn’) through repeated exposure to data and to optimise performance at a task. Initially, the techniques were applied to the identification of material objects, as in the case of facial recognition. Successive breakthroughs and performance leaps in ML, and the related techniques of Deep Learning (DL),9 have encouraged belief in AI as a universal solvent for complex socio-technical problems. Tantalising increases in speed and efficiency of decision-making, and reductions in cost and bureaucratic bloat, make the public sector fertile ground for a number of AI-leveraging ‘Techs’. These include LegalTech,10 GovTech,11 and RegTech12 (short for Legal, Government and Regulatory technology respectively), which involve the development of ‘smart’ software applications for deployment in legal, political, and human decision-making contexts.
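The notion of ‘learning’ as the iterative adjustment of parameters against historical examples can be made concrete with a toy sketch. Everything below is invented for illustration (the data, the single-parameter model, and the learning rate); real ML systems fit millions of parameters, but the loop is the same in spirit: predict, measure the error, nudge the parameters, repeat.

```python
# Illustrative sketch only: "learning" as repeated exposure to data.
# Historical examples: inputs x and observed outcomes y (here y is roughly 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]

w = 0.0    # the model's single adjustable parameter
lr = 0.01  # learning rate: how far each update moves w

for _ in range(1000):          # repeated exposure to the dataset
    for x, y in data:
        error = w * x - y      # how wrong the current prediction is
        w -= lr * error * x    # adjust w slightly to reduce squared error

print(round(w, 2))  # ends close to 2.0, the pattern implicit in the data
```

No rule ‘if x then 2x’ was ever written down; the regularity is extracted from the examples, which is precisely what makes such systems attractive for data-rich domains and opaque about why any particular output was produced.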
The Society 5.0 plan is not, however, an ex nihilo creation of the Japanese government. Rather, it articulates an emerging orthodoxy – one the ‘Techs’ are now capitalising on – that the core social systems of law, politics and the economy must adapt or die in the face of new modes of ‘essentially digital governance’. This is often idealised as leading to a ‘hypothetical new state’ with ‘a small intelligent core, informed by big data … leading government (at last) to a truly post‐bureaucratic “Information State”’.13 Some, such as Tim O’Reilly, argue that the ‘old’ state model is essentially a ‘vending machine’ where money goes in (tax) and public goods and services come out (roads, police, hospitals, schools).14 It is time, he and others suggest, to ‘rethink government with AI’15 now that technological change has ‘flattened’16 the world, eroded state power, and provided models for uncoupling citizenship from territory.17 Only by seeing government as a ‘platform’ will it be possible to harness critical network externalities and ensure what Jonathan Zittrain calls ‘generativity’ – the uncanny ability of open-ended platforms like Facebook or YouTube to create possibilities beyond those envisioned by their creators.18 For O’Reilly, Big Tech ‘succeeded by changing all the rules, not by playing within the existing system’.19 Governments around the world, he suggests, must now follow their lead. Evidence from Japan, Singapore, Estonia and elsewhere indicates that many are.20
A shift towards increasingly ‘smart’ and data-driven government is now underway and shows no sign of abating.21 But the intoxicating ‘new government smell’ and techno-utopian visions of programmes such as Society 5.0 should not distract from critical questions about what exactly ‘techno-regulation’ means for human rights, dignity and the role of human decision-makers in elaborate socio-technical systems that promise to more or less run themselves.22 Frank Pasquale observes that societal ‘authority is increasingly expressed algorithmically’,23 while John Danaher warns against the ‘threat of algocracy’ – arguing it is ‘difficult to accommodate the threat of algocracy, i.e. to find some way for humans to “stay on the loop” and meaningfully participate in the decision-making process, whilst retaining the benefits of the algocratic systems’.24 Both are key observations for Society 5.0, where ‘people, things, and systems … [are] all connected in cyberspace and optimal results obtained by AI exceeding the capabilities of humans [are] fed back to physical space’. However, the idea of AI ‘exceeding’ human capabilities is where Society 5.0’s vision comes into sharper focus. Looking past the aspirational rhetoric of a ‘human-centred society’, it is ultimately a future where Artificial General Intelligence (AGI) is no longer hypothetical. The AGI hypothesis is that a machine can be designed to perform any ‘general intelligent action’25 that a human is capable of – an idea with longstanding institutional support in Japan.26 But the invocation of AGI makes it hard to pin down what exactly ‘Society 5.0’ portends for the centrality of, and need for, human decision-makers.27
II. From Rule of Law to Legal Singularity
While Society 5.0 perhaps exemplifies what Evgeny Morozov terms the folly of ‘solutionism’, it is not a uniquely Japanese phenomenon.28 Indeed, such techno-solutionism has long been part of the ‘dotcom neoliberalism’ Richard Barbrook and Andy Cameron call ‘The Californian Ideology’.29 This ideology has, however, now crept into the rhetoric of LegalTech developers who have the data-intensive – and thus target-rich – environment of law in their sights. Buoyed by investment, promises of more efficient and cheaper everything, and claims of superior decision-making capabilities over human lawyers and judges, LegalTech is now being deputised to usher in a new era of ‘smart’ law built on AI and Big Data.30 For some, such as physicist Max Tegmark, the use-case is clear:
Since the legal process can be abstractly viewed as computation, inputting information about evidence and laws and outputting a decision, some scholars dream of fully automating it with robojudges: AI systems that tirelessly apply the same high legal standards to every judgment without succumbing to human errors such as bias, fatigue or lack of the latest knowledge.31
Others, such as Judge Richard Posner, are cautious but no less sympathetic to the idea:
The judicial mentality would be of little interest if judges did nothing more than apply clear rules of law created by legislators, administrative agencies, the framers of constitutions, and other extrajudicial sources (including commercial custom) to facts that judges and juries determined without bias or preconceptions. Judges would be well on the road to being superseded by digitized artificial intelligence programs … I do not know why originalists and other legalists are not AI enthusiasts.32
Legal scholar Eugene Volokh even proposes a legal Turing test to determine whether an ‘AI judge’ outputs valid legal decisions. For Volokh, the persuasiveness of the output is what matters:
If an entity performs medical diagnoses reliably enough, it’s intelligent enough to be a good diagnostician, whether it is a human being or a computer. We might call it ‘intelligent,’ or we might not. But, one way or the other, we should use it. Likewise, if an entity writes judicial opinions well enough … it’s intelligent enough to be a good AI judge … If a system reliably yields opinions that we view as sound, we should accept it, without insisting on some predetermined structure for how the opinions are produced.33
The import of these views is that human judges are not just replaceable with AI, but that ‘AI Judges’ should be preferred on the assumption that they will not inherit the biases and limitations of human decision-making.34 Nonetheless, other scholars, such as Giovanni Sartor and Karl Branting, remain sceptical:
No simple rule-chaining or pattern matching algorithm can accurately model judicial decision-making because the judiciary has the task of producing reasonable and acceptable solutions in exactly those cases in which the facts, the rules, or how they fit together are controversial.35
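The objection is easier to see with a sketch of what a ‘simple rule-chaining algorithm’ actually is. The rules, fact labels, and cases below are invented for illustration: a forward-chainer applies rules mechanically whenever their conditions are present, and it simply stalls when the real dispute is over whether a condition is present at all.

```python
# Illustrative sketch only: a toy forward-chaining "robojudge".
RULES = [
    # (required facts, conclusion): a rule fires only on an exact match of its conditions
    ({"contract_signed", "consideration_given"}, "contract_valid"),
    ({"contract_valid", "terms_breached"}, "damages_owed"),
]

def chain(facts: set) -> set:
    """Forward-chain: keep applying rules until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# A routine case chains cleanly to a conclusion...
print("damages_owed" in chain(
    {"contract_signed", "consideration_given", "terms_breached"}))  # True

# ...but a contested case (is an emailed promise "signed"?) simply stalls:
# the algorithm has no resources for deciding what counts as a fact.
print("damages_owed" in chain({"emailed_promise", "terms_breached"}))  # False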
The boldest vision, however, comes from legal scholar and LegalTech entrepreneur Ben Alarie:
Despite general uncertainty about the specifics of the path ahead for the law and legal institutions and what might be required of our machines to make important contributions to the law, over the course of this century we can be confident that technological development will lead to (1) a significantly greater quantification of observable phenomena in the world (‘more data’); and (2) more accurate pattern recognition using new technologies and methods (‘better inference’). In this contribution, I argue that the naysayers will continue to be correct until they are, inevitably, demonstrated empirically to be incorrect. The culmination of these trends will be what I shall term the ‘legal singularity’.36
Although AGI is not seen as a necessary condition, Alarie’s legal singularity is described as a point where AI has ushered in a legal system ‘beyond the complete understanding of any person.’37 Seemingly a response to the incompleteness and contingency of the law, the legal singularity is implicitly a proposal for eliminating juridical reasoning as a basis for dispute resolution and normative decision-making. While nothing is said about the role of law in Society 5.0, much less human lawyers and judges, Alarie’s legal singularity can be considered a credible interpolation.
Even if the mathematical or symbolic logic used in AI research could, at least in theory, replicate the structure of juridical reasoning, this would not necessarily account for the political, economic, and socio-cultural factors that influence legal discourse …
