Three Liability Regimes for Artificial Intelligence

Algorithmic Actants, Hybrids, Crowds

Anna Beckers, Gunther Teubner
About this book

This book proposes three liability regimes to combat the wide responsibility gaps caused by AI systems – vicarious liability for autonomous software agents (actants); enterprise liability for inseparable human-AI interactions (hybrids); and collective fund liability for interconnected AI systems (crowds). Based on information technology studies, the book first develops a threefold typology that distinguishes individual, hybrid and collective machine behaviour. A subsequent social science analysis specifies the socio-digital institutions related to this threefold typology. Then it determines the social risks that emerge when algorithms operate within these institutions. Actants raise the risk of digital autonomy, hybrids the risk of double contingency in human-algorithm encounters, crowds the risk of opaque interconnections. The book demonstrates that the law needs to respond to these specific risks, by recognising personified algorithms as vicarious agents, human-machine associations as collective enterprises, and interconnected systems as risk pools – and by developing corresponding liability rules. The book relies on a unique combination of information technology studies, sociological institution and risk analysis, and comparative law. This approach uncovers recursive relations between types of machine behaviour, emergent socio-digital institutions, their concomitant risks, legal conditions of liability rules, and ascription of legal status to the algorithms involved.


Information

Publisher
Hart Publishing
Year
2021
ISBN
9781509949342
1
Digitalisation: The Responsibility Gap
I. The Problem: The Dangerous Homo Ex Machina
‘Figure ambigue’ – the overpainting reproduced on the cover of this book was produced by Max Ernst, one of the protagonists of Dadaism and Surrealism. As early as 1919, he expressed his unease with the excessive ambivalences of modern technology. His work simultaneously celebrates the dynamism and energy of the machine utopia and mocks its dehumanising consequences. On the painting’s right side, Ernst creates a serene, joyful atmosphere that seems to symbolise the ingenious inventions of modern science. Mechanically animated letters of the alphabet are connected to each other in complex arrangements and seem to be transformed into strange machines. Via metamorphosis or double identity, these non-human figures appear to substitute for human bodies; they jump, dance, and even fly. These homines ex machina ‘carry off a triumph of mobility: through rotation, doubling, shifting, reflection, and optical illusion’.1
Abruptly, the atmosphere changes on the painting’s left side. The symbols change colour, become dark, and appear brutal and threatening. In the upper left corner, a black sun, itself made up of strange symbols forming a sinister face, throws its dark light over the world. With this painting and many others, Max Ernst expressed his ambivalent attitude towards the logic, rationality and aesthetics of the perfect modern machine world, which had the potential to turn into absurdity, irrationality and brutality.2 Ernst ‘was looking for ways to register social mechanisms and truths as well as to symbolise with artistic techniques their more profound structure. Probably, it is an attempt to grasp a social subconscious in the historical moment when the totalitarian potential of technology became imaginable.’3
Today, Max Ernst’s surrealistic dream seems to be becoming the new reality. Algorithms are the emblematic figures ambigues of our time, and they even radicalise the ambivalence of machine automatons through an enigmatic ‘artificial intelligence’. Like the alphabetic letters in Max Ernst’s painting, algorithms are, at first sight, nothing but innocent chains of symbols. In their electronic metamorphosis, these symbols begin to live, jump, dance, fly. What is more, they bring into existence a new world of meaning. Their creatio ex nihilo promises a better future for mankind. Big data and algorithmic creativity symbolise the hopes of expanding or substituting the cognitive capacities of the human mind. But this is only the bright side of their excessive ambivalence. There is a threatening dark side to the brave new world of algorithms, which, after the first phase of enthusiasm, are now often perceived as nightmarish monsters. ‘Perverse instantiation’ results when intelligent machines escape human control: the individual algorithm efficiently satisfies the goal set by the human participant but chooses a means that violates the human’s intentions.4 Moreover, a strange hybridity emerges when humans and machines begin not only to communicate but also to create supervenient figures ambigues with undreamt-of, potentially damaging characteristics. And the most threatening situation arises, as symbolised in Max Ernst’s dark sun, from the dangerous exposure of human beings to an opaque algorithmic environment that remains uncontrollable.
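The mechanism of ‘perverse instantiation’ can be illustrated with a minimal sketch (a hypothetical toy example, not drawn from this book or from any real system; all names and values are invented): an optimiser that sees only the stated goal will readily select a means that satisfies the goal literally while violating the human’s unstated intentions.

```python
# Toy sketch of 'perverse instantiation' (all names and values are
# hypothetical): the optimiser maximises the stated goal score and is
# blind to the human's implicit constraints.

# Each candidate means: (description, goal_score, violates_intent)
strategies = [
    ("negotiate discounts with suppliers", 0.7, False),
    ("streamline delivery routes",         0.8, False),
    ("cancel all safety inspections",      1.0, True),  # best score, unacceptable means
]

def optimise(options):
    """Return the option with the highest goal score; intent is invisible."""
    return max(options, key=lambda option: option[1])

chosen = optimise(strategies)
print(f"Chosen means: {chosen[0]} (goal score {chosen[1]})")
if chosen[2]:
    print("The stated goal is satisfied, but the means violates the human's intentions.")
```

The point of the sketch is structural: the failure lies neither in the optimisation step nor in the data, but in the gap between the stated goal and the intentions it was meant to express.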
How does contemporary law deal with algorithmic figures ambigues? That is the theme of this book, exemplified by the law of liability for algorithmic failures. Law mirrors the excessive ambivalence of the world of algorithms. On their bright side, law welcomes algorithms as powerful instruments in the service of human needs. Law opens itself to algorithms, even conferring on them a quasi-magical potestas vicaria so that they can participate as autonomous agents in transactions on the market. On their dark side, however, current law reveals remarkable deficiencies. Liability law is not at all prepared to counteract the algorithms’ new dangers. Ignoring the potential threats stemming from their autonomy, the law treats algorithms no differently from other tools, machines, objects, or products. If they cause damage, current product liability law is supposed to provide the appropriate response.
But that is too easy. Compared to familiar situations of product liability, with the arrival of algorithms, ‘the array of potential harms widens, as to the product is added a new facet – intelligence’.5 The figures ambigues that invade private law territories are not simply hazardous objects but uncontrollable subjects – robots, software agents, cyborgs, hybrids, computer networks – some with a high level of autonomy and the ability to learn. With their restless energy, they generate new kinds of undreamt-of hazards for humans and society.
In the legal debate, defensive arguments abound to keep these alien species at a distance. The predominant position in legal scholarship argues with astonishing self-confidence that the rules on contract formation and on liability in contract, tort and product liability are, in their current form, well equipped to deal with the hazards of such new digital species. According to this opinion, there is no need to deviate from the established methods of attributing action and liability. Computer behaviour is nothing but the behaviour of the humans behind the machine. Autonomous AI systems, so the argument goes, can be treated without difficulty as mere machines, as human tools, as willing instruments in the hands of their human masters.6
A. Growing Liability Gaps
However, private law categories cannot avoid responding to the current and very real problems that algorithms cause when they acquire autonomy.7 A new phenomenon called ‘active digital agency’ causes these problems:
The more autonomous robots will become, the less they can be considered as mere tools in the hand of humans, and the more they obtain active digital agency. In this context, issues of responsibility and liability for behaviour and possible damages resulting from the behaviour would become pertinent.8
Unacceptable gaps in responsibility and liability – this is why private law needs to change its categories fundamentally. Given the rapid pace of digital development, these gaps have already opened today.9 Software agents and other AI systems inevitably cause them because their actions are unpredictable and thus entail a massive loss of control for human actors. At the same time, society is becoming increasingly dependent on autonomous algorithms on a large scale, and it is improbable that it will abandon their use.10
Of course, lawyers’ resistance to granting algorithms legal capacity or even personhood is understandable. After all, ‘[t]he fact is, that each time there is a movement to confer rights onto some new “entity”, the proposal is bound to sound odd or frightening or laughable.’11 But despite the oddity of ‘algorithmic persons’, the growing responsibility gaps confront private law with a radical choice: either it assigns AI systems an independent legal status as responsible actors, or it accepts an increasing number of accidents for which no one is responsible. The dynamics of digitalisation are constantly creating responsibility-free spaces that will only expand in the future.12
B. Scenarios
When invoking the serious threat of increasing liability gaps, it is of course crucial to identify such gaps clearly in the first place. Information science describes typical responsibility gaps in the following scenarios. Deficiencies arise in practice:
- when the software is produced by teams;
- when management decisions are just as important as programming decisions;
- when the documentation of requirements and specifications plays a significant role in the resulting code;
- when, despite testing for code accuracy, much depends on ‘off-the-shelf’ components whose origin and accuracy are unclear;
- when the performance of the software results from the accompanying checks rather than from program creation;
- when automated instruments are used in the design of the software;
- when the operation of the algorithms is influenced by their interfaces or even by system traffic;
- when the software interacts in an unpredictable manner; or
- when the software works with probabilities, is adaptive, or is itself the result of another program.13
These scenarios produce the most critical liability gaps that the law has so far encountered.14
i. Machine Connectivities
The most challenging liability gap arises in multi-agent systems, when several computers closely interconnected in an algorithmic network cause damage. The liability rules of current law provide no convincing solution,15 and there is no sign of a helpful proposal de lege ferenda. In high-frequency trading, this risk has already become apparent.16 As two observers pointedly put it: ‘Who should bear these massive risks of algorithms that control the trading systems, behave for some time in an uncontrolled and incomprehensible manner, and cause losses of billions?’17
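The structure of this risk can be made concrete in a minimal, purely hypothetical simulation (all parameters invented): two trend-following trading algorithms, each individually unremarkable, amplify each other’s price impact until a small external shock escalates, so that no single agent’s code can be identified as the cause of the loss.

```python
# Toy simulation of two interconnected trend-following algorithms (all
# parameters invented). Neither algorithm is defective in isolation; the
# runaway behaviour emerges only from their coupling.

price = 100.0
history = [price]

def trend_follower(history, sensitivity):
    """Order size proportional to the last observed price move."""
    if len(history) < 2:
        return 0.0
    return sensitivity * (history[-1] - history[-2])

for step in range(20):
    order_a = trend_follower(history, sensitivity=0.6)
    order_b = trend_follower(history, sensitivity=0.7)
    shock = -1.0 if step == 0 else 0.0   # one small external shock
    price += order_a + order_b + shock   # net order flow moves the price
    history.append(price)

# Each agent merely follows the trend; together they turn a -1.0 shock
# into a crash of several hundred points in this stylised market.
print(f"Price after 20 steps: {price:.2f}")
```

The sketch captures the legal difficulty in miniature: each algorithm behaves exactly as designed, and the damage is attributable only to the interconnection itself.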
ii. Big Data
Incorrect estimates in Big Data analyses cause further liability gaps. Big Data is used to predict, on the basis of vast amounts of data, how existing societal trends or epidemics may develop and, if necessary, how they can be influenced. If the source of the faulty calculation, ie the algorithm or the underlying data basis, cannot be clearly established, difficulties arise in determining causality and misconduct.18
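The evidentiary problem can be shown with a deliberately simple numerical sketch (hypothetical numbers throughout): the very same faulty prediction can result either from a mis-specified model or from a corrupted data basis, and the output alone does not reveal which.

```python
# Toy illustration of the attribution problem (all numbers invented):
# a flawed model on correct data and a correct model on flawed data
# yield the identical faulty prediction.

def predict(weights, features):
    """A linear predictor standing in for an arbitrary Big Data model."""
    return sum(w * x for w, x in zip(weights, features))

true_features   = [1.0, 2.0]
biased_features = [1.0, 3.0]     # corrupted data basis
good_weights    = [0.5, 1.0]
bad_weights     = [0.5, 1.5]     # mis-specified model

print(predict(bad_weights, true_features))     # 3.5: the model is at fault
print(predict(good_weights, biased_features))  # 3.5: the data are at fault
```

Ex post, a claimant who sees only the prediction of 3.5 cannot say which failure path produced it, which is precisely the difficulty in proving causality and misconduct.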
iii. Digital Hybrids
In computational journalism, in other fields of hybrid writing and in several instances of hybrid cooperation, human action and algorithmic calculations are often so intertwined that it becomes virtually impossible to identify which action was responsible for the damage. The question arises whether liability can be founded on the collective action of the human-machine association itself.19
iv. Algorithmic Contracts
An unsatisfactory liability situation arises when the law on contract formation is applied to software agents’ declarations. Once software agents issue legally binding declarations that misrepresent the intentions of the human principal relying on them, it is unclear whether the risk should be attributed entirely to the principal. Some authors argue that doing so would impose an excessive and unjustifiable burden, especially in cases of distributed action or self-cloning.20
v. Digital Breach of Contract
If a contract’s performance is delegated to an autonomous software agent and the agent violates contractual obligations, the prevailing doctrine argues that the rules of vicarious liability for auxiliary persons do not apply. The reason is that an algorithm lacks the legal capacity required to act as a vicarious agent. Instead, liability is said to arise only when the human principal himself commits a breach of contract. This ope...
