AI, Data and Private Law

Translating Theory into Practice

Gary Chan Kok Yew, Man Yip

368 pages · English · ePUB

About This Book

This book examines the interconnections between artificial intelligence, data governance and private law rules with a comparative focus on selected jurisdictions in the Asia-Pacific region. The chapters discuss the myriad challenges of translating and adapting theory, doctrines and concepts to practice in the Asia-Pacific region given their differing circumstances, challenges and national interests. The contributors are legal experts from the UK, Israel, Korea, and Singapore with extensive academic and practical experience. The essays in this collection cover a wide range of topics, including data protection and governance, data trusts, information fiduciaries, medical AI, the regulation of autonomous vehicles, the use of blockchain technology in land administration, the regulation of digital assets and contract formation issues arising from AI applications. The book will be of interest to members of the judiciary, policy makers and academics who specialise in AI, data governance and/or private law or who work at the intersection of these three areas, as well as legal technologists and practising lawyers in the Asia-Pacific, the UK and the US.


Information

Year: 2021
ISBN: 9781509946846
Edition: 1
1
AI, Data and Private Law
The Theory-Practice Interface
GARY CHAN KOK YEW AND MAN YIP*
I. Introduction
The growing importance of artificial intelligence (AI) and big data in modern society, and the potential for their misuse as a tool for irresponsible profit,1 call for a constructive conversation on how the law should direct the development and use of technology. This collection of chapters, drawn from the Conference on ‘AI and Commercial Law: Reimagining Trust, Governance, and Private Law Rules’,2 examines the interconnected themes of AI, data protection and governance, and the disruption to or innovation in private law principles.3 This collection makes two contributions. First, it shows that private law is a crucial sphere within which that conversation takes place. To borrow from the extra-judicial comments of Justice Cuéllar of the Supreme Court of California, private law ‘provides a kind of first-draft regulatory framework – however imperfect – for managing new technologies ranging from aviation to email’.4 As this collection demonstrates, private law furnishes a first-draft regulatory framework by directly applying or gently extending existing private law theory, concepts and doctrines to new technological phenomena and, more markedly at times, by creating new principles or inspiring a new regulatory concept. This is not to say that private law is superior to or replaces legislation. This collection asks that we consider more deeply the potential and limits of private law regulation of AI and data use, as well as its co-existence and interface with legislation.
Second, the chapters, individually and collectively, explore existing legal frameworks and innovative proposals from a ‘theory meets practice’ angle. They offer insights into the challenges of translating theory, doctrines and concepts into practice. Some of these challenges may arise from a general disconnect between a proposed theory, concept or doctrine on the one hand, and practical reality on the other. For example: is the proposed solution consistent with the pre-existing legislative regime? Does it amount to regulatory overkill and is it therefore inimical to innovation? And is it conceived on the basis of a full appreciation of the technical aspects of the underlying technology? Other challenges may arise from the inherent limits of existing theories, doctrines and concepts which were developed for non-technological or non-big data phenomena. For instance, is the existing private law doctrine sufficient to deal with the new legal issues (most notably, the absence or remoteness of human involvement) brought to the fore by technological developments? In taking this ‘theory meets practice’ line of inquiry, the chapters adopt a degree of comparative focus on selected jurisdictions in the Asia-Pacific region, with the aim of capturing some of the unique challenges confronting countries in this part of the world in terms of practical implementation.
II. AI, Big Data and Trust
In order to enable a meaningful conversation on how private law can and should direct AI and big data developments, we must first unpack what these labels mean in theory and in practice, as well as highlight the important theme of building trust in the use of new technologies and data.
A. Artificial Intelligence
The term ‘AI’ does not lend itself to a single, monolithic definition. Broadly speaking, AI refers to the use of machines to imitate the ability of humans to think, make decisions or perform actions.5 The European Commission’s proposed definition of AI elaborates as follows:
AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).6
Given the above broad understanding of AI, we propose to discuss AI from three interrelated perspectives. The first focuses on AI as a means (or technology) to achieve certain goals. In this regard, AI may refer to the technologies that produce outputs or decision-making models, such as predictions or recommendations from AI algorithms, together with computing power and data.7
Second, the term invites comparisons with humans and human intelligence in the achievement of certain goals. AI technologies may thus involve the simulation of human traits such as knowledge, reasoning, problem solving, perception, learning and planning in order to produce the output or decision.8 Examples of these human traits are evident in machine learning, deep learning, computer vision and natural language processing. The Turing test9 requires AI to ‘act humanly’ so as to fool a human interrogator into thinking it is human. Russell and Norvig’s ‘rational agent’ approach10 is premised on AI acting rationally in order to achieve the ‘best outcome or, when there is uncertainty, the best expected outcome’.
AI has improved significantly in terms of its efficiency and speed of analysis, capacity to read and decipher patterns from voluminous documents and images, and enhanced reliability and accuracy in generating outputs. In many respects, it is becoming superior to humans in performing specific tasks, even though it is not error-free. Clarke11 refers to the concept of ‘intellectics’ that goes beyond ‘machines that think’ to ‘computers that do’ in which artefacts ‘take autonomous action in the real world’.
Third, AI may be defined in part at least by how it interacts with humans. Implicit in the first and second perspectives of AI is the human–AI interface that we should not overlook. Clarke conceives of AI as ‘Complementary Artefact Intelligence’ that ‘(1) does things well that humans do poorly or cannot do at all; (2) performs functions within systems that include both humans and artefacts; and (3) interfaces effectively, efficiently and adaptably with both humans and artefacts’.12 Indeed, AI technologies such as self-driving vehicles, medical treatment, self-performing contracts and AI-driven management of digital assets exhibit human traits in their interactions with humans, generating practical consequences in the real world. The human-like aspect of AI invites debates as to whether AI should be attributed personhood for liability, and the better-than-human performance of AI prompts us to assess whether standards and liability rules should differ where intelligent machines are used.
As AI continues to evolve in terms of its functions and advances in its capabilities, its role in society will become more pervasive. This gives rise to the question as to whether humans can trust AI. For example, it has been reported that machine-learning algorithms (most notably, face recognition technologies) have racial, gender and age biases.13 The complexity, unexplainability and incomprehensibility of AI is also emerging as a key reason why people do not trust AI decisions or recommendations.14 Building trust is thus a priority agenda for AI innovation and regulation going forward.
B. Big Data
The proliferation of data is a global phenomenon. Information on citizens is constantly collected by organisations through websites, mobile phones, smart watches, virtual personal assistants, drones, face recognition technologies, etc.15 The term ‘big data’ refers to very large datasets that are often complex, varied and generated at a rapid speed.16 As the McKinsey Global Institute Report on ‘Big Data: The Next Frontier for Innovation, Competition, and Productivity’ points out: ‘The use of big data will become a key basis of competition and growth for individual firms.’17 Information has value in itself. Big data in particular creates tremendous value because it is an enormous volume of information from which new insights can be drawn. The McKinsey Report highlights that big data creates value in various ways: it creates informational transparency; enables experimentation to discover needs, exposes performance variability and improves performance; allows segmentation of the population to customise actions (eg, for interest-based advertising); substitutes/supports human decision-making with automated algorithms; and facilitates the invention of new business models, products and services or the enhancement of existing ones.18 In this connection, the reciprocal relationship between data and AI should not be missed. The value in big data is unlocked by AI, which provides us with the capability to analyse these enormous datasets to derive trends and patterns more quickly or in a way that was not possible before. The input of data to AI technologies improves the latter’s performance; in other words, the technology gets better with more data.
Nevertheless, the concentration of data in the hands of corporate titans has raised deep suspicions as to how consumers’ data is being collected and used by such corporations. In fact, data scandals involving tech giants (such as Google and Facebook) have dampened public confidence and trust in companies.19 More recently, contact tracing applications created to monitor and control the COVID-19 outbreak have also sparked privacy concerns, as these applications enable governments to gain control over citizens’ location-based data and biometric data.20
Clear lines of tension have thus developed in the context of data: the tension between privacy/data protection21 and economic value; the tension between privacy/data protection and innovation; the tension between privacy/data protection and public health; and the tension between business facilitation and consumer protection. How should the law balance these competing concerns? How does the law catch up with the pace of technological developments and business model transformations? And, most importantly, how can the law help to restore public trust and confidence in companies and governments?22
C. Trust
Trust is a multi-faceted concept. It involves a relation importing some risk to or vulnerability of the trustor who nonetheless chooses to place trust in the trustee. The bases of trust are varied: the trustor’s rational self-interests and goals (cognitive),23 emotions and attitudes towards the trustee (affective) and socio-normative considerations.24 From a stakeholder perspective, trust may be reposed in an individual or organisation such as the AI developer or user, the technology itself, the socio-technical systems,25 and social and legal institutions, including the government. The trustor–trustee relationship may be reciprocal, though not necessarily so (eg, humans placing trust in AI, but not the reverse).26 Serious deviations or aberrations can occur, resulting in trust being inadequate or absent (distrust), abused by the trustee (mistrust) or so excessive as to cause the trustor to be exploited (over-trust).
On a practical level, trust may be regarded as a ‘process’27 encompassing multiple ‘trusting’ concepts (eg, trusting dispositions, attitudes, beliefs, expectations and behaviours), and their commonalities and differences. The level of human trust in AI is mediated by a myriad of factors: the personal characteristics of the trustor, knowledge of the technology28 and its potential errors,29 and perceptions of the positive or negative attributes of the technology.30...
