Artificial Intelligence and the Law

Cybercrime and Criminal Liability

Dennis J. Baker, Paul H. Robinson

About This Book

This volume presents new research in artificial intelligence (AI) and Law with special reference to criminal justice.

It brings together leading international experts, including computer scientists, lawyers, judges and cyber-psychologists. The book examines some of the core problems that technology raises for criminal law, ranging from privacy and data protection to cyber-warfare and the theft of virtual property. Focusing on the West and China, the work considers the issue of AI and the law in a comparative context, presenting the research through a cross-jurisdictional and cross-disciplinary approach.

As China becomes a global leader in AI and technology, the book provides an essential in-depth understanding of domestic laws in both Western jurisdictions and China on criminal liability for cybercrime. As such, it will be a valuable resource for academics and researchers working in the areas of AI, technology and criminal justice.

Information

Publisher: Routledge
Year: 2020
ISBN: 9781000210644

1 Emerging technologies and the criminal law

Dennis J. Baker and Paul H. Robinson

1. Introduction

In this volume, the contributors explore criminal liability in the context of artificial intelligence and other emerging digital technologies. Some of the chapters focus more on how these technologies are being used by criminals to facilitate crime, while others consider how emerging technologies can assist law enforcement agencies by collecting cogent evidence (such as biometrics) or by acting as a deterrent (e.g., burglars may be more deterred by a security camera system that can perform face recognition and report them to the police in real time than by traditional closed-circuit television [CCTV] footage). The key areas covered include cyberfraud, cybersecurity, data retention laws, digital privacy invasions, liability for intermediaries such as Internet service providers (ISPs) and criminal liability for artificially intelligent entities.
We have included a diverse range of chapters from scholars. We also have a chapter from The Right Hon. Lord Hodge, the Deputy President of the Supreme Court of the United Kingdom. Additionally, we have chapters from some of China’s foremost criminal law professors. Some of those chapters draw comparisons between the common law and Chinese law, but none of them aim to conduct a wholesale comparative study. Their primary aim is to draw out legal and ethical issues concerning emerging technologies and the law. The seven common law chapters also cover artificial intelligence and cybersecurity matters. Our contributors aim to tease out some of the core problem areas to provide a platform for thinking about law reform in this area. Some problems relate more to enforcement than to a lack of legal regulation. In the remainder of this chapter, we briefly discuss a few issues that have arisen as a result of emerging technology. Hopefully, this will provide some background to the problems addressed in the chapters. Those issues include artificial intelligence; privacy, surveillance and biometrics; and Internet censorship to prevent online harms.

2. Artificial intelligence and criminal justice

(a) Artificial intelligence

Artificial intelligence (AI) is a misnomer, since what it refers to is not intelligence at all. When McCarthy coined the term “artificial intelligence,” he had machine learning in mind.1 An algorithm building a mathematical model from a set of data so that a processor can make predictions is nothing like human reasoning.2 At this point in time, machines cannot reason and make rational and autonomous choices. Most of what is currently labelled intelligence is computer processing.3 Emotional intelligence, the capacity for practical reasoning and the capacity for social decision-making are well beyond the capacity of any existing machine.4 It will not be long before private individuals will be able to purchase a robot equipped to act as a cleaner, security guard, Michelin-star chef and so on. A robot equipped with face recognition and gait (walk) recognition capacity might guard a house and instantly recognise a burglary. It might not only be able to report it to the police in real time, but also report that the perpetrator is Joe Bloggs. However, no existing AI-operated robot can cook with the passion and emotion that Rick Stein cooks with. Similarly, a robot security guard might hold down a burglar until the police arrive, but it would not do so with any emotion. It would not act with human irrationality as a result of anger or fear, or even with human rationality as a result of steadfastness, but would simply be triggered on receiving the relevant information: that the intruder has entered not as a guest but by breaking in, and is someone who is not in the database of faces of people who normally visit the property.
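The triggering logic just described is, in computational terms, nothing more than a conditional check over received inputs. The following is a minimal sketch of such a rule, offered purely for illustration and assuming a hypothetical face database and reporting action; none of the names correspond to any real security system.

```python
# Illustrative sketch only: a hypothetical trigger rule for a robot guard.
# The names (known_faces, assess_entry, "report_to_police") are invented
# for illustration and do not describe any real product or API.

known_faces = {"alice", "bob", "regular_cleaner"}  # people who normally visit the property


def assess_entry(face_id, forced_entry):
    """Return the action the hypothetical guard would take for one entry event."""
    if forced_entry and face_id not in known_faces:
        # Triggered purely by the two received facts: entry was by breaking in,
        # and the face is not on the database of usual visitors. No anger, fear
        # or steadfastness plays any part in the "decision".
        return "report_to_police"
    return "no_action"


print(assess_entry("unknown_face_123", forced_entry=True))   # -> report_to_police
print(assess_entry("alice", forced_entry=False))             # -> no_action
```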
1See his seminal paper, J. McCarthy, “Recursive Functions of Symbolic Expressions and their Computation by Machine, Part I,” (1960) 3(4) Communications of the ACM 184; see too J. McCarthy and P. J. Hayes, “Some Philosophical Problems from the Standpoint of Artificial Intelligence,” in B. Meltzer and D. Michie (eds.), Machine Intelligence, (Edinburgh: Edinburgh University Press, 1969) Vol. 4 at 463.
2A. Turing, “Computing Machinery and Intelligence,” (1950) 59 Mind 433; V. Lifschitz (ed.), Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy, (Boston: Academic Press, 1991).
3There is no doubt machines have tremendous processing capacity. See, for example, P. Dockrill, “In Just 4 Hours, Google’s AI Mastered All the Chess Knowledge in History,” (Science Alert, 7 December 2017); J. Lee, “Deep Learning-assisted Real-time Container Corner Casting Recognition,” (2019) 51(1) International Journal of Distributed Sensor Networks 1; S. R. Granter et al., “AlphaGo, Deep Learning, and the Future of the Human Microscopist,” (2017) 141(5) Archives of Pathology & Laboratory Medicine 619 reports that the game:
Go’s much higher complexity and intuitive nature prevents computer scientists from using brute force algorithmic approaches for competing against humans. For this reason, Go is often referred to as the “holy grail of AI research.” To beat Se-dol, Google’s AlphaGo program used artificial neural networks that simulate mammalian neural architecture to study millions of game positions from expert human – played Go games. But this exercise would, at least theoretically, only teach the computer to be on par with the best human players. To become better than the best humans, AlphaGo then played against itself millions of times, over and over again, learning and improving with each game – an exercise referred to as reinforcement learning…. It implements machine-learning algorithms (including neural networks) that are effectively an extension of simple regression fitting. In a simple regression fit, we might determine a line that predicts an outcome y given an input x. With increased computational power, machine learning algorithms are able to fit a huge number of input variables (for example, moves in a game of Go) to determine a desired output (maximizing space gained on the Go board).
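The quoted passage describes machine-learning algorithms as an extension of simple regression fitting, in which a line predicting an outcome y from an input x is determined. A minimal sketch of such a fit, purely illustrative and not drawn from the AlphaGo system itself, might look like this:

```python
# Minimal illustration of a simple regression fit: determine the line
# y = a*x + b that best predicts an outcome y given an input x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # inputs
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])  # observed outcomes

a, b = np.polyfit(x, y, deg=1)            # least-squares fit of a straight line
print(f"fitted line: y = {a:.2f}x + {b:.2f}")
print(f"prediction at x = 6: {a * 6 + b:.2f}")

# Machine-learning systems extend the same idea to vast numbers of input
# variables (e.g., positions in a game of Go) and a chosen output to optimise.
```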
4J. R. Lucas, “Minds, Machines and Gödel,” (1961) 36 Philosophy 112; Cf. E. Yudkowsky and N. Soares, Functional Decision Theory: A New Theory of Instrumental Rationality, (Machine Intelligence Research Institute, 2018).
Machine learning and computer processing cannot at this stage be compared with human reasoning and emotional intelligence.5 If AI eventually acquires the faculty for practical reasoning (and it probably will in the distant future) and agency, then the normal rules of criminal responsibility (whatever they are at that point in time) will apply. There will be no need for new rules if these machines are able to make rational choices, but there will be a need for transitional rules for the long period during which AI machines straddle the line between being able to make fully autonomous choices and only partially autonomous choices. At the moment, they can do neither. Civil law, such as manufacturer liability rules, already applies to AI, because at the moment it is considered no more than an instrument in the hands of human agents.6
5Machines are not even close to achieving consciousness, let alone the ability to make ethical decisions based on practical reasoning. Cf. K. B. Korb, “Searle’s AI program,” (1991) 3(4) Journal of Experimental & Theoretical Artificial Intelligence 283; G. Meissner, “Artificial Intelligence: Consciousness and Conscience,” (2020) AI & Society 225.
6For a very detailed discussion of civil and regulatory rules concerning AI, see the extensive report by the European Commission, Liability for Artificial Intelligence and Other Emerging Technologies, (Brussels: 2019).
We often hear the term “autonomous weapon,” but this is an oxymoron.7 Currently, a machine cannot be criminally liable, either directly or through the law of complicity, for any harm it causes.8 There is currently no comparison between “machine learning” and “human understanding.” Nonetheless, like all machines, AI has capabilities beyond those of a single human. It is true that AI-equipped machines can do many things that humans cannot, but the same can be said of machines generally. A motorbike can travel from London to Cambridge faster than a human. A bulldozer can clear acres of precious rainforest in an hour, whereas a team of humans with axes might not clear even half an acre in a day. Similarly, AI-operated databases can keep accurate records of vast quantities of information and recall and analyse basic information in an instant. They can process volumes of data at speeds beyond the capacity of a single human mind. A human does not have the ability to scan 100,000 faces in a football stadium and pick out a single person in real time. It would be churlish to label it AS (Artificial Stupidity), since its incredible processing abilities can obtain intelligent results.
7A robot by definition is an automaton that acts without volition.
8If killer robots are used in war, then liability has to rest with those using them in the same way it would if they were to use bombs simpliciter against the law. International law and humanitarian law violations would rest with the humans misusing such robots. Apparently, Taranis, a flying killer robot, if put into service, will allow the Royal Air Force to programme it to search for targets and take them out without further human input. Suppose Taranis is programmed to strike all people wearing a yellow vest within a combat zone. Its systems would allow it to search and kill such people without further human input. However, the command for it to do so would lie in the hands of the humans who set it in motion, as would any criminal responsibility for the deaths of those wearing yellow vests, if killing them was not justifiable.
If an AI-operated robot is developed that has the capacity for “emotional intelligence” and “practical reasoning,”9 it might be assumed it would have a much longer lifespan than a human currently has; therefore, sentencing and punishment would need a rethink. How do we punish a machine that might have a 1,000-year or even a 10,000-year lifespan? A machine might have no need for property other than standing space, so economic fines might also be a very outdated concept by the time machines can be criminally liable. If it has emotional intelligence, then jail or something similar might cause it mental distress and thus work as punishment. It is doubtful that jail sentences will remain an effective form of punishment for humans for more than another century, let alone for anything else, so new thinking will be required when considering penalties for the criminal offending of AI. It might be as simple as cutting off the machine’s energy supply for a period of time to punish it, but if it is not subject to the normal limits of a human lifespan, forced hibernation is hardly likely to act as a deterrent. If a machine is very advanced, it will have dignity equal to that of humans and thus could not be reprogrammed to make it behave: to do so would be akin to subjecting human criminals to a eugenics programme.
9Practical reasoning is the antithesis of machine learning. See generally, M. Bratman, Intention, Plans, and Practical Reason, (Cambridge: Harvard University Press, 1987); J. Hampton, The Authority of Reason, (Cambridge: Cambridge University Press, 1998); J. Raz, Engaging Reason, (Oxford: O.U.P., 1999).
Humans and machines are both made of matter and it is not fanciful to imagine the two evolving into one being or the artificial life form eventually evolving into the natural life form.10 If someone had explained the idea of a smartphone to Sir Isaac Newton and told him that within 200 years of his death, almost everyone would be using them, even a visionary mathematician of his standing might have thought it total ...
