Three Liability Regimes for Artificial Intelligence

Algorithmic Actants, Hybrids, Crowds

Anna Beckers, Gunther Teubner
About This Book

This book proposes three liability regimes to close the wide responsibility gaps caused by AI systems: vicarious liability for autonomous software agents (actants); enterprise liability for inseparable human-AI interactions (hybrids); and collective fund liability for interconnected AI systems (crowds). Based on information technology studies, the book first develops a threefold typology that distinguishes individual, hybrid and collective machine behaviour. A subsequent social science analysis specifies the socio-digital institutions related to this typology and then determines the social risks that emerge when algorithms operate within these institutions. Actants raise the risk of digital autonomy, hybrids the risk of double contingency in human-algorithm encounters, and crowds the risk of opaque interconnections. The book demonstrates that the law needs to respond to these specific risks by recognising personified algorithms as vicarious agents, human-machine associations as collective enterprises, and interconnected systems as risk pools, and by developing corresponding liability rules. The book relies on a unique combination of information technology studies, sociological analysis of institutions and risks, and comparative law. This approach uncovers recursive relations between types of machine behaviour, emergent socio-digital institutions, their concomitant risks, the legal conditions of liability rules, and the ascription of legal status to the algorithms involved.

Information

Year: 2021
ISBN: 9781509949342
1. Digitalisation: The Responsibility Gap
I. The Problem: The Dangerous Homo Ex Machina
‘Figure ambigue’ – the overpainting reproduced on the cover of this book was created by Max Ernst, one of the protagonists of Dadaism/Surrealism. As early as 1919, he expressed his unease with the excessive ambivalences of modern technology. His work simultaneously celebrates the dynamism and energy of the machine utopia and mocks its dehumanising consequences. On the painting’s right side, Ernst creates a serene, joyful atmosphere that seems to symbolise the ingenious inventions of modern science. Mechanically animated letters of the alphabet are connected to each other in complex arrangements and seem to be transformed into strange machines. Via metamorphosis or double identity, these non-human figures appear to substitute for human bodies; they jump, dance, and even fly. These homines ex machina ‘carry off a triumph of mobility: through rotation, doubling, shifting, reflection, and optical illusion’.1
Abruptly, the atmosphere changes on the painting’s left side. The symbols change colour, become dark, appear brutal and threatening. In the upper left corner, a black sun, itself made up of strange symbols forming a sinister face, throws its dark light over the world. With this painting and many others, Max Ernst expressed his ambivalent attitude toward the logic, rationality and aesthetics of the modern perfect machine world, which had the potential to turn into absurdity, irrationality and brutality.2 Ernst ‘was looking for ways to register social mechanisms and truths as well as to symbolise with artistic techniques their more profound structure. Probably, it is an attempt to grasp a social subconscious in the historical moment when the totalitarian potential of technology became imaginable.’3
Today, Max Ernst’s surrealist dream seems to be becoming the new reality. Algorithms are the emblematic figures ambigues of our time; they even radicalise the ambivalence of machine automatons through an enigmatic ‘artificial intelligence’. Like the alphabetic letters in Max Ernst’s painting, algorithms are, at first sight, nothing but innocent chains of symbols. In their electronic metamorphosis, these symbols begin to live, jump, dance, fly. What is more, they bring into existence a new world of meaning. Their creatio ex nihilo promises a better future for mankind. Big data and algorithmic creativity symbolise the hope of expanding or substituting the cognitive capacities of the human mind. But this is only the bright side of their excessive ambivalence. There is a threatening dark side to the brave new world of algorithms, which, after the first phase of enthusiasm, are now often perceived as nightmarish monsters. ‘Perverse instantiation’ results when intelligent machines slip out of human control: the individual algorithm efficiently satisfies the goal set by the human participant but chooses a means that violates the human’s intentions.4 Moreover, a strange hybridity emerges when humans and machines begin not only to communicate but also to create supervenient figures ambigues with undreamt-of, potentially damaging characteristics. And the most threatening situation, symbolised in Max Ernst’s dark sun, arises from the dangerous exposure of human beings to an opaque algorithmic environment that remains uncontrollable.
How does contemporary law deal with algorithmic figures ambigues? That is the theme of this book, exemplified by the law of liability for algorithmic failures. Law mirrors the excessive ambivalence of the world of algorithms. On their bright side, law welcomes algorithms as powerful instruments in the service of human needs. Law opens itself to algorithms, even conferring on them a quasi-magical potestas vicaria so that they can participate as autonomous agents in market transactions. However, on their dark side, current law reveals remarkable deficiencies. Liability law is not at all prepared to counteract the algorithms’ new dangers. Ignoring the potential threats stemming from their autonomy, the law treats algorithms no differently from other tools, machines, objects, or products. If they cause damage, current product liability law is supposed to provide the appropriate response.
But that is too easy. Compared to familiar situations of product liability, with the arrival of algorithms, ‘the array of potential harms widens, as to the product is added a new facet – intelligence’.5 The figures ambigues that invade private law territories are not simply hazardous objects but uncontrollable subjects – robots, software agents, cyborgs, hybrids, computer networks – some with a high level of autonomy and the ability to learn. With their restless energy, they generate new kinds of undreamt-of hazards for humans and society.
In the legal debate, defensive arguments abound to keep these alien species at a distance. The predominant position in legal scholarship argues with astonishing self-confidence that the rules on contract formation and on liability in contract, tort and product liability law are, in their current form, well equipped to deal with the hazards of these new digital species. According to this opinion, there is no need to deviate from the established methods of attributing action and liability. Computer behaviour is nothing but the behaviour of the humans behind the machine. Autonomous AI systems, so the argument goes, can be treated without problems as mere machines, as human tools, as willing instruments in the hands of their human masters.6
A. Growing Liability Gaps
However, private law categories cannot avoid responding to the current and very real problems that algorithms cause when they acquire autonomy.7 The problems stem from a new phenomenon called ‘active digital agency’:
The more autonomous robots will become, the less they can be considered as mere tools in the hand of humans, and the more they obtain active digital agency. In this context, issues of responsibility and liability for behaviour and possible damages resulting from the behaviour would become pertinent.8
Unacceptable gaps in responsibility and liability – this is why private law needs to change its categories fundamentally. Given the rapid digital developments, the gaps have already opened today.9 Software agents and other AI systems inevitably cause these gaps because their actions are unpredictable and thus entail a massive loss of control for human actors. At the same time, society is becoming increasingly dependent on autonomous algorithms on a large scale, and it is improbable that society will abandon their use.10
Of course, lawyers’ resistance to granting algorithms legal capacity or even personhood is understandable. After all, ‘[t]he fact is, that each time there is a movement to confer rights onto some new “entity”, the proposal is bound to sound odd or frightening or laughable.’11 But despite the oddity of ‘algorithmic persons’, the growing responsibility gaps confront private law with a radical choice: either it assigns AI systems an independent legal status as responsible actors, or it accepts an increasing number of accidents for which no one is responsible. The dynamics of digitalisation are constantly creating responsibility-free spaces that will only expand in the future.12
B. Scenarios
When invoking the serious threat of increasing liability gaps, it is of course crucial to identify such gaps clearly in the first place. Information science describes typical responsibility gaps in the following scenarios: deficiencies arise in practice when the software is produced by teams; when management decisions are just as important as programming decisions; when the documentation of requirements and specifications plays a significant role in the resulting code; when, despite testing for code accuracy, much depends on ‘off-the-shelf’ components whose origin and accuracy are unclear; when the performance of the software is the result of the accompanying checks rather than of program creation; when automated instruments are used in the design of the software; when the operation of the algorithm is influenced by its interfaces or even by system traffic; when the software interacts in an unpredictable manner; or when the software works with probabilities, is adaptable, or is the result of another program.13
These scenarios produce the most critical liability gaps that the law has so far encountered.14
i. Machine Connectivities
The most challenging liability gap arises in multi-agent systems, when several computers closely interconnected in an algorithmic network cause damage. The liability rules of current law provide no convincing solution,15 and there is no sign of a helpful proposal de lege ferenda. This risk has become apparent in high-frequency trading.16 As two observers pointedly put it: ‘Who should bear the massive risks of algorithms that control trading systems when they behave for some time in an uncontrolled and incomprehensible manner, causing losses in the billions?’17
ii. Big Data
Incorrect estimates in Big Data analyses cause further liability gaps. Big Data is used to predict, on the basis of vast amounts of data, how existing societal trends or epidemics will develop and, if necessary, how they can be influenced. If the source of a faulty calculation, ie the algorithm or the underlying data, cannot be clearly established, difficulties arise in determining causality and misconduct.18
iii. Digital Hybrids
In computational journalism, in other fields of hybrid writing, and in several instances of hybrid cooperation, human action and algorithmic calculations are often so intertwined that it becomes virtually impossible to identify which action was responsible for the damage. The question arises whether liability can be founded on the collective action of the human-machine association itself.19
iv. Algorithmic Contracts
An unsatisfactory liability situation arises when the law on contract formation is applied to software agents’ declarations. When software agents issue legally binding declarations that misrepresent the intentions of the human principal relying on them, it is unclear whether the risk is to be attributed entirely to the principal. Some authors argue that doing so would impose an excessive and unjustifiable burden, especially in cases of distributed action or self-cloning.20
v. Digital Breach of Contract
If a contract’s performance is delegated to an autonomous software agent and the agent violates contractual obligations, the prevailing doctrine argues that the rules of vicarious liability for auxiliary persons do not apply. The reason given is that an algorithm lacks the legal capacity necessary to act as a vicarious agent. Instead, liability is said to arise only when the human principal himself commits a breach of contract. This ope...
