AI, Data and Private Law
eBook - ePub

AI, Data and Private Law

Translating Theory into Practice

Gary Chan Kok Yew, Man Yip

  1. 368 pages
  2. English
  3. ePUB (mobile-friendly)

About this Book

This book examines the interconnections between artificial intelligence, data governance and private law rules with a comparative focus on selected jurisdictions in the Asia-Pacific region. The chapters discuss the myriad challenges of translating and adapting theory, doctrines and concepts to practice in the Asia-Pacific region, given the jurisdictions' differing circumstances, challenges and national interests. The contributors are legal experts from the UK, Israel, Korea, and Singapore with extensive academic and practical experience. The essays in this collection cover a wide range of topics, including data protection and governance, data trusts, information fiduciaries, medical AI, the regulation of autonomous vehicles, the use of blockchain technology in land administration, the regulation of digital assets and contract formation issues arising from AI applications. The book will be of interest to members of the judiciary, policymakers and academics who specialise in AI, data governance and/or private law or who work at the intersection of these three areas, as well as legal technologists and practising lawyers in the Asia-Pacific, the UK and the US.


Information

Publisher
Hart Publishing
Year
2021
ISBN
9781509946846
Edition
1
Subject
Law
1
AI, Data and Private Law
The Theory-Practice Interface
GARY CHAN KOK YEW AND MAN YIP*
I. Introduction
The growing importance of artificial intelligence (AI) and big data in modern society, and the potential for their misuse as a tool for irresponsible profit1 call for a constructive conversation on how the law should direct the development and use of technology. This collection of chapters, drawn from the Conference on ‘AI and Commercial Law: Reimagining Trust, Governance, and Private Law Rules’,2 examines the interconnected themes of AI, data protection and governance, and the disruption to or innovation in private law principles.3 This collection makes two contributions. First, it shows that private law is a crucial sphere within which that conversation takes place. To borrow from the extra-judicial comments of Justice CuĂ©llar of the Supreme Court of California, private law ‘provides a kind of first-draft regulatory framework – however imperfect – for managing new technologies ranging from aviation to email’.4 As this collection demonstrates, private law furnishes a first-draft regulatory framework by directly applying or gently extending existing private law theory, concepts and doctrines to new technological phenomena and, more markedly at times, by creating new principles or inspiring a new regulatory concept. This is not to say that private law is superior to or replaces legislation. This collection asks that we consider more deeply the potential and limits of private law regulation of AI and data use, as well as its co-existence and interface with legislation.
Second, the chapters, individually and/or collectively, explore existing legal frameworks and innovative proposals from a ‘theory meets practice’ angle. They offer insights into the challenges of translating theory, doctrines and concepts to practice. Some of these challenges may arise from a general disconnect between a proposed theory, concept and doctrine on the one hand, and the practical reality on the other. For example, is the proposed solution consistent with the pre-existing legislative regime? Does it amount to regulatory overkill and is it therefore inimical to innovation? Is it conceived based on a full appreciation of the technical aspects of the underlying technology? Other challenges may arise from the inherent limits of existing theories, doctrines and concepts which were developed for non-technological or non-big data phenomena. For instance, is the existing private law doctrine sufficient to deal with the new legal issues (most notably, the absence or remoteness of human involvement) brought to the fore by technological developments? In taking this ‘theory meets practice’ line of inquiry, the chapters adopt a degree of comparative focus on selected jurisdictions in the Asia-Pacific region, with the aim of capturing some of the unique challenges confronting countries in this part of the world in terms of practical implementation.
II. AI, Big Data and Trust
In order to enable a meaningful conversation on how private law can and should direct AI and big data developments, we must first unpack what these labels mean in theory and in practice, as well as highlight the important theme of building trust in the use of new technologies and data.
A. Artificial Intelligence
The term ‘AI’ does not lend itself to a singular monolithic definition. Broadly speaking, AI refers to the use of machines to imitate the ability of humans to think, make decisions or perform actions.5 The definition of AI proposed by the European Commission further highlights as follows:
AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).6
Given the above broad understanding of AI, we propose to discuss AI from three interrelated perspectives. The first focuses on AI as a means (or technology) to achieve certain goals. In this regard, AI may refer to the technologies that produce outputs or decision-making models, such as predictions or recommendations from AI algorithms, together with computing power and data.7
Second, the term invites comparisons with humans and human intelligence in the achievement of certain goals. AI technologies may thus involve the simulation of human traits such as knowledge, reasoning, problem solving, perception, learning and planning in order to produce the output or decision.8 Examples of these human traits are evident in machine learning, deep learning, computer vision and natural language processing. The Turing test9 requires AI to ‘act humanly’ so as to fool a human interrogator into thinking it is human. Russell and Norvig’s ‘rational agent’ approach10 is premised on AI acting rationally in order to achieve the ‘best outcome or, when there is uncertainty, the best expected outcome’.
AI has improved significantly in terms of its efficiency and speed of analysis, capacity to read and decipher patterns from voluminous documents and images, and enhanced reliability and accuracy in generating outputs. In many respects, it is becoming superior to humans in performing specific tasks, even though it is not error-free. Clarke11 refers to the concept of ‘intellectics’ that goes beyond ‘machines that think’ to ‘computers that do’ in which artefacts ‘take autonomous action in the real world’.
Third, AI may be defined in part at least by how it interacts with humans. Implicit in the first and second perspectives of AI is the human–AI interface that we should not overlook. Clarke conceives of AI as ‘Complementary Artefact Intelligence’ that ‘(1) does things well that humans do poorly or cannot do at all; (2) performs functions within systems that include both humans and artefacts; and (3) interfaces effectively, efficiently and adaptably with both humans and artefacts’.12 Indeed, AI technologies such as self-driving vehicles, AI-assisted medical treatment, self-performing contracts and AI-driven management of digital assets exhibit human traits in their interactions with humans, generating practical consequences in the real world. The human-like aspect of AI invites debates as to whether AI should be attributed personhood for liability, and the better-than-human performance of AI prompts us to assess whether standards and liability rules should differ where intelligent machines are used.
As AI continues to evolve in terms of its functions and advances in its capabilities, its role in society will become more pervasive. This gives rise to the question as to whether humans can trust AI. For example, it has been reported that machine-learning algorithms (most notably, face recognition technologies) have racial, gender and age biases.13 The complexity, unexplainability and incomprehensibility of AI is also emerging as a key reason why people do not trust AI decisions or recommendations.14 Building trust is thus a priority agenda for AI innovation and regulation going forward.
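The bias concern noted above can be made concrete with a small illustration. The sketch below (in Python, using entirely invented numbers) computes the ‘demographic parity difference’, one common measure of whether a classifier's positive-prediction rate differs across demographic groups; the group names and prediction data are hypothetical and chosen only to show how such a disparity is quantified.

```python
# Illustrative only: hypothetical predictions from a face-recognition
# classifier, grouped by a protected attribute (names are invented).
predictions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6 of 8 classified positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 classified positive
}

def positive_rate(labels):
    """Fraction of instances the model classifies as positive."""
    return sum(labels) / len(labels)

rates = {group: positive_rate(preds) for group, preds in predictions.items()}

# Demographic parity difference: the gap between the groups' positive rates.
# A value near 0 indicates parity; a large gap signals possible bias.
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, gap)
```

A gap of 0.5, as in this invented dataset, would indicate that one group is classified positively twice as often as the other, the kind of disparity the studies cited in the text report for real face-recognition systems.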
B. Big Data
The proliferation of data is a global phenomenon. Information on citizens is constantly collected by organisations through websites, mobile phones, smart watches, virtual personal assistants, drones, face recognition technologies, etc.15 The term ‘big data’ refers to very large datasets that are often complex, varied and generated at a rapid speed.16 As the McKinsey Global Institute Report on ‘Big Data: The Next Frontier for Innovation, Competition, and Productivity’ points out: ‘The use of big data will become a key basis of competition and growth for individual firms.’17 Information is valuable in itself. Big data in particular creates tremendous value because it is an enormous volume of information from which new insights can be drawn. The McKinsey Report highlights that big data creates value in various ways: it creates informational transparency; enables experimentation to discover needs, exposes performance variability and improves performance; allows segmentation of the population to customise actions (eg, for interest-based advertising); substitutes/supports human decision-making with automated algorithms; and facilitates the invention of new business models, products and services or the enhancement of existing ones.18 In this connection, the reciprocal relationship between data and AI should not be missed. The value in big data is unlocked by AI, which provides us with the capability to analyse these enormous datasets to derive trends and patterns more quickly or in a way that was not possible before. The input of data to AI technologies improves the latter’s performance; in other words, the technology gets better with more data.
Nevertheless, the concentration of data in the hands of corporate titans has raised deep suspicions as to how consumers’ data is being collected and used by such corporations. In fact, data scandals involving tech giants (such as Google and Facebook) have dampened public confidence and trust in companies.19 More recently, contact tracing technological applications created to monitor and control the COVID-19 virus outbreak have also sparked privacy concerns, as these applications enable governments to gain control over citizens’ location-based data and biometric data.20
Clear lines of tension have thus developed in the context of data: the tension between privacy/data protection21 and economic value; the tension between privacy/data protection and innovation; the tension between privacy/data protection and public health; and the tension between business facilitation and consumer protection. How should the law balance these competing concerns? How does the law catch up with the pace of technological developments and business model transformations? And, most importantly, how can the law help to restore public trust and confidence in companies and governments?22
C. Trust
Trust is a multi-faceted concept. It involves a relation importing some risk to or vulnerability of the trustor who nonetheless chooses to place trust in the trustee. The bases of trust are varied: the trustor’s rational self-interests and goals (cognitive),23 emotions and attitudes towards the trustee (affective) and socio-normative considerations.24 From a stakeholder perspective, trust may be reposed in an individual or organisation such as the AI developer or user, the technology itself, the socio-technical systems,25 and social and legal institutions, including the government. The trustor–trustee relationship may be reciprocal, though not necessarily so (eg, humans placing trust in AI, but not the reverse).26 Serious deviations or aberrations can occur, resulting in trust being inadequate or absent (distrust), abused by the trustee (mistrust) or so excessive as to cause the trustor to be exploited (over-trust).
On a practical level, trust may be regarded as a ‘process’27 encompassing multiple ‘trusting’ concepts (eg, trusting dispositions, attitudes, beliefs, expectations and behaviours), and their commonalities and differences. The level of human trust in AI is mediated by a myriad of factors: the personal characteristics of the trustor, knowledge of the technology28 and its potential errors,29 and perceptions of the positive or negative attributes of the technology.30...
