The Global Politics of Artificial Intelligence
About this book

Technologies such as artificial intelligence have led to significant advances in science and medicine, but have also facilitated new forms of repression, policing and surveillance. AI policy has become without doubt a significant issue of global politics.

The Global Politics of Artificial Intelligence tackles some of the issues linked to AI development and use, contributing to a better understanding of the global politics of AI. This is an area where enormous work still needs to be done, and the contributors to this volume provide significant input into this field of study, to policy makers, academics, and society at large. Each of the chapters in this volume works as a freestanding contribution and provides an accessible account of a particular issue linked to AI from a political perspective. Contributors to the volume come from many different areas of expertise and parts of the world, and range from emergent to established authors.

Chapter 2 of this book is freely available as a downloadable Open Access PDF at http://www.taylorfrancis.com under a Creative Commons Attribution-Non Commercial-No Derivatives (CC-BY-NC-ND) 4.0 license.


Chapter 1 Threading Innovation, Regulation, and the Mitigation of AI Harm: Examining Ethics in National AI Strategies

Mona Sloane
DOI: 10.1201/9780429446726-1
Contents
1.1 Introduction
1.2 Artificial Intelligence: The Eternal Dream
1.2.1 Harmful AI
1.3 National AI Strategies
1.3.1 Defining National AI Strategies
1.3.2 No Strategy, No AI?
1.4 To Regulate, Or Not To Regulate?
1.4.1 AI Tensions: Between Innovation and Regulation
1.4.2 Risk Mitigation
1.4.3 Design and Deployment Concerns
1.5 Governance Approaches
1.6 The Limits and Potentials of Ethics in National AI Strategies
1.6.1 AI Ethics Limits: Five Issues
1.6.2 AI Ethics Potentials: Ten Cues
1.7 Conclusion
Notes
References

1.1 Introduction

The development of artificial intelligence (AI) will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership. That base increasingly depends on the strength of the innovation economy, which in turn will depend on AI.
(US National Security Commission on Artificial Intelligence, 19 May 2020).
Over the past three to five years, AI technologies and AI research have become a major focus of private and public funding initiatives.1 This heightened attention is paralleled by a growing proliferation of AI technologies across social life. Today, these technologies are embedded into many devices and services that people use on a daily basis, ranging from e-mail spam filters to navigation devices and shopping websites. This development is advancing at a rapid pace, which has led to the competition for (national) leadership in the AI field becoming so fierce that it has been referred to as a “global AI race.”2 In this “race,” AI has become the strategic focus of many global technology companies that commit substantial resources to push AI innovation,3 and the amount of capital invested in AI companies in the US came to a staggering $9.3 billion in 2018.4 In Europe, investment into tech companies (not only AI companies) reached $23 billion in 2018,5 while the Chinese tech giants Baidu, Alibaba, and Tencent are equally investing heavily in AI technologies and start-ups, backed by a government plan to build a domestic AI industry worth around $150 billion by 2030 (Mozur, 2017).
Other nations and regions are not lagging behind. Although much attention has been on the heated AI competition between the United States of America and China (Metz, 2018), there is investment and policy activity in other regions and countries as well. For example, the EU Commission pledged €1.5 billion of investment in AI for the period 2018–2020 under the Horizon 2020 research programme, expected to trigger an additional €2.5 billion of funding from existing public–private partnerships and eventually lead to an overall investment of at least €20 billion by 2020 (European Commission, 2018a). National European examples include France announcing €1.5 billion in pure government funding for AI by 2022 (Cerulus, 2018), Germany earmarking €3 billion for spending on AI research and development by 2025 (Delcker, 2018), and the United Kingdom forging the AI Sector Deal (part of the Industrial Strategy) worth £1 billion (British Government, 2018). In Asia, China’s government is leading with a minimum of US$7 billion in AI investment by 2030 (Ravi and Nagaraj, 2018), well ahead of South Korea, which intends to invest US$2 billion in AI by 2022 (Synched, 2018). Canada has pledged C$125 million (CIFAR, 2017), while Australia announced an AUD$29.9 million investment in AI over four years in its 2018–2019 budget (Pearce, 2018).6 While governments have to foster innovation, they are also tasked with mitigating the potentially adverse effects of AI through regulation and governance.

1.2 Artificial Intelligence: The Eternal Dream

Despite the recent “AI hype” (Spencer, 2019), the idea of an “artificial intelligence” is not new: it could be claimed that it dates back to Homer’s Iliad (Cave and Dihal, 2018; Royal Society, 2018). Between the 1950s and the mid-1970s, as computers became faster and cheaper, AI flourished; this was followed by an “AI winter” in the 1990s and 2000s, a dip in interest in and funding for AI, despite the many AI advancements made during that time (Anyoha, 2017). The new AI hype is based on three developments that coincided and that are deeply connected: the availability of large datasets, the rapid advancement of computational machinery and processing power, and the invention of self-learning algorithms7 based on artificial neural networks (“deep learning”).8
The success of new AI technologies has reignited the imaginary of conscious machines or robots that have agency (Royal Society, 2018) and the fear that they may overthrow humanity (Bostrom, 2016). But we are far from that type of “general artificial intelligence” (Knight and Hao, 2019). All of the AI systems in place or under development today are what can be called “narrow artificial intelligence”: basically, statistical models that can (teach themselves to) detect correlation, but not causality.9 This means that AI technology can be very good at very specific tasks, such as identifying the pixels in a photograph to help doctors diagnose a malignant mole.10 But it also means that AI does not possess the capacity to deal with the sheer complexity of social life.11
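To make the distinction concrete, the following minimal Python sketch (not from the original text; the data, variable names, and use of scikit-learn are illustrative assumptions) shows how a “narrow” statistical model simply learns whatever correlations are present in its training data, including spurious ones, without any notion of causation.

    # Illustrative sketch only: a hypothetical "narrow AI" classifier that
    # learns correlation, not causation. All data and names are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1_000

    # Hypothetical causal signal: lesion irregularity actually drives malignancy.
    irregularity = rng.normal(size=n)
    malignant = (irregularity + 0.5 * rng.normal(size=n) > 0).astype(int)

    # Spurious feature: image brightness happens to correlate with the label
    # only because of how the (hypothetical) training photographs were taken.
    brightness = malignant + 0.3 * rng.normal(size=n)

    X = np.column_stack([irregularity, brightness])
    model = LogisticRegression().fit(X, malignant)

    # The model also assigns substantial weight to the spurious feature:
    # it has detected a correlation, with no concept of what causes what.
    print(dict(zip(["irregularity", "brightness"], model.coef_[0].round(2))))

A model of this kind can perform well on a narrowly defined task such as the mole-classification example above, yet it breaks down as soon as the learned correlation no longer holds, which is one reason it cannot cope with the open-ended complexity of social life.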

1.2.1 Harmful AI

AI systems can be riddled with high error rates (especially facial recognition or object detection systems), which can disproportionately affect certain groups, such as people with darker skin tones.12 AI systems can also be very vulnerable to outside influence, for example, to adversarial attacks,13 which can have devastating consequences in high-stakes contexts, such as diagnostics, autonomous driving, or combat. These attacks do not need to be digital: “physical world attacks,” such as stickers placed on stop signs, can also fool deep learning visual classification.14
Over the past years, new research has demystified the account that algorithms and AI are de facto neutral and shown that existing power imbalances, inequalities, and cultures of discrimination are mirrored and exacerbated by automated systems. Important works include, but are not limited to: Virginia Eubanks’15 research on how data mining, algorithms, and predictive risk models exacerbate poverty and inequality in the US; Safiya Umoja Noble’s16 work on how search engines discriminate against women of colour; Cathy O’Neil’s17 work on how the large-scale deployment of data science tools can increase inequality; Marie Hicks’18 demonstration of how gendered inequalities in computation are not accidental, but derive from a particular cultural landscape and a series of policy decisions; the work of Joy Buolamwini and Timnit Gebru19 on discrimination in image databases and automated gender classification systems; research by Wilson, Hoffman, and Morgenstern20 on higher error rates for pedestrians with darker skin tones in object detection systems; and Bolukbasi et al.’s21 research on gender stereotypes in word embeddings.
The concern for ethics in AI, algorithms, and automated systems is also amplified by scandals that have shaken the tech industry, such as the Cambridge Analytica scandal involving Facebook user data, civilian deaths caused by driverless cars, or the automated replication of the live-stream of the Christchurch mosque attacks on social media. Meanwhile, the rollout of Europe’s General Data Protection Regulation (GDPR) has brought data protection issues to a broad audience.

1.3 National AI Strategies

Many efforts to address issues around AI and society are now streamlined in and through national AI strategies. Therefore, this chapter provides a qualitative analysis of existing national AI strategies with a specific focus on ethics and ethics-related concerns. It sets out to examine what work “ethics” does in national AI strategies and to identify broad patterns of AI ethics interpretation and representation within these strategy documents.
The empirical material for this study comprises national AI strategy documents that were sourced through an online search22 (between February and March 201923). In order to be included in the sample, a nation had to have a formal strategy in place, and the AI strategy documents had to be available in English. After the completion of the data collection, the AI strategy documents were analysed to identify aspects of “ethics” or related concerns and approaches, and to define core themes that cut across the sample. To account for the AI innovation landscape beyond formalised national AI strategies, additional data was gathered from policy documents, reports, and news articles. This chapter should not be read as a comprehensive analysis of all AI strategies that have been proposed globally. It focuses explicitly on how concerns around AI and society, and ethics specifically, are articulated in the national AI strategies that were available at the time this study was conducted. It is therefore limited in its scope.

1.3.1 Defining National AI Strategies

At the most basic level, national AI strategies are frameworks that facilitate the distribution of public funds and incentivise research and innovation, as well as private funding, in certain areas and directions. Bradley, Wingfield, and Metzger24 broadly define a national AI strategy...

Table of contents

  1. Cover
  2. Half-Title
  3. Series
  4. Title
  5. Copyright
  6. Contents
  7. Preface
  8. Acknowledgements
  9. Editor
  10. List of Contributors
  11. CHAPTER 1 Threading Innovation, Regulation, and the Mitigation of AI Harm: Examining Ethics in National AI Strategies
  12. CHAPTER 2 Governance of Artificial Intelligence: Emerging International Trends and Policy Frames
  13. CHAPTER 3 Multilateralism and Artificial Intelligence: What Role for the United Nations?
  14. CHAPTER 4 Governing the Use of Autonomous Weapon Systems
  15. CHAPTER 5 Lessons for Artificial Intelligence from Other Global Risks
  16. CHAPTER 6 Vulnerability, AI, and Power in a Global Context: From Being-at-Risk to Biopolitics in the COVID-19 Pandemic
  17. CHAPTER 7 Using Decision Theory and Value Alignment to Integrate Analogue and Digital AI
  18. CHAPTER 8 Nomadic Artificial Intelligence and Royal Research Councils: Curiosity-Driven Research Against Imperatives Implying Imperialism
  19. CHAPTER 9 Artificial Intelligence and Post-Capitalism: The Prospect and Challenges of AI-Automated Labour
  20. CHAPTER 10 Artificial General Intelligence’s Beneficial Use within Capitalist Democracy: A Realistic Vision
  21. INDEX