1 Introduction
Artificial Intelligence1 (AI)2 and Machine Learning (ML)3 are ubiquitous phenomena.4 Their use has powerful, yet Janus-faced, implications. Among the interrelated challenges are, in particular, the degradation of truth and precision surveillance.5 As ‘The Global Risks Report 2020’ of the World Economic Forum (WEF Risks Report) rightly states, technological advances outpace relevant policies.6 In addition, technological rivalries render the future geopolitical scenario uncertain.7 Data are a key element of AI techniques.8 A data race to foster AI has been under way for quite some time, involving both private and public actors on a global scale.9 Legal frameworks for personal and non-personal data10 are therefore crucial here. The focus has gradually shifted from data governance to AI governance.11 Policy makers at the international, European Union (EU) – here particularly the European Commission (Commission) – and national level have acknowledged the need for more general policies in the context of AI.12 International initiatives developing governance standards for AI have emerged.13 Such growing awareness of the urgency of the matter is desirable. But the current disruption of the multilateral system fosters fragmentation rather than alignment of the responses to the significant risks.14 The global challenges of AI development, however, require global governance.15 At the supranational level, such as in the EU, one may observe a first move in this direction: the emergence of AI regulation.16 Legal academia is alert to this development and has started to contribute to the discussion of AI regulation.17
With these more general observations in mind, the chapter zooms in on the financial sector. In the financial sector,18 the Financial Stability Board (FSB)19 dedicated its report ‘Artificial Intelligence and Machine Learning in Financial Services’ (FSB Report AI and ML)20 to the matter quite early. In the FSB's view, the use of AI and ML is spreading rapidly, while any assessment of it remains provisional due to the paucity of current robust data.21 The pressure for change on actors in the financial sector is huge; their digitalisation strategies may well become a litmus test of whether they serve the good of human beings.22 The use of AI and ML requires close monitoring.23 Policy and law makers, as well as regulators and supervisors worldwide, must remain utterly alert to its disruptive potential. The latter includes fundamental technological risks requiring profound AI safety research and work.24 Value alignment25 is discussed here, as is the control problem.26 A global, concerted, far-sighted, long-term oriented, responsible and sustainable governance of AI and ML is an urgent dictate of reason. It shall ensure a regulatory ‘whitebox’. The discourse on AI and ML requires rationalisation, as observed by experts ranging from moral philosophers27 to the World Economic Forum in its report ‘The New Physics of Financial Services’ (WEF New Physics Report)28 with a view to the financial sector. The chapter aims to contribute to such rationalisation. The focus is on the legal-methodological perspective. The chapter is organised as follows. After this introduction, it discusses selected pivotal legal-methodological challenges to be tackled at the outset of any policy, legal and, particularly, regulatory and supervisory approach to AI and ML in the financial sector. It seeks to address the following questions. How might AI, ML and related phenomena best be defined? How might one best relate AI and ML to law, methodologically?
Is it necessary to place the policy and legal(-political) approach to the sector-specific use of AI and ML in the financial sector in context with the approach to their more general use? What challenges are to be tackled to establish the relevant facts of the case? The chapter then concludes. The strict limits on the length of this chapter require a strict selection of the issues discussed. References to relevant sources and (legal) literature are likewise limited to exemplary ones.