Understanding Digital Ethics

Cases and Contexts

Jonathan Beever, Rudy McDaniel, Nancy A. Stanlick


About This Book

Rapid changes in technology and the growing use of electronic media signal a need for understanding both clear and subtle ethical and social implications of the digital, and of specific digital technologies. Understanding Digital Ethics: Cases and Contexts is the first book to offer a philosophically grounded examination of digital ethics and its moral implications. Divided into three clear parts, the authors discuss and explain the following key topics:

‱ Becoming literate in digital ethics

‱ Moral viewpoints in digital contexts

‱ Motivating action in digital ethics

‱ Speed and scope of digital information

‱ Moral algorithms and ethical machines

‱ The digital and the human

‱ Digital relations and empathy machines

‱ Agents, autonomy, and action

‱ Digital and ethical activism

The book includes cases and examples that explore the ethical implications of digital hardware and software, including videogames, social media platforms, autonomous vehicles, robots, voice-enabled personal assistants, smartphones, artificially intelligent chatbots, military drones, and more.

Understanding Digital Ethics is essential reading for students and scholars of philosophical ethics, those working on topics related to digital technology and digital/moral literacy, and practitioners in related fields.


Book Information

Publisher: Routledge
Year: 2019
ISBN: 9781315282114
Edition: 1
Pages: 208
Language: English
Subject: Digitale Medien

Part I

Ethical and Digital Literacy

1

Becoming Literate in Digital Ethics

Literacy in digital ethics is often sadly lacking in both contemporary discourse and the design of technologies that support our everyday conversations and interactions in digital spaces. Consider just one of many recent examples that illustrate this claim. Facebook, and its CEO Mark Zuckerberg, continue to make national headlines for ethical issues ranging from data privacy, to the propagation of fake news, to enabling Russian interference in the 2016 presidential election. A reporter pointed clearly to the underlying problem: “The fact is that Facebook’s underlying business model itself is troublesome: offer free services, collect user’s private information, then monetize that information by selling it to advertisers or other entities” (Francis, 2017). Like so many other big data corporate entities, Facebook’s ethical issues are rooted in its business model. But the ethical problems Facebook faces were made possible by failures of its creators. Mark Zuckerberg and Facebook thus present a fascinating case study in the importance of ethical and digital literacy.
One could argue that Zuckerberg has largely failed to be sensitive to or to identify the ethical implications of his corporate creation, which he has previously argued is a neutral technology platform, not a media company responsible for its content (Constine, 2016). Of course, he has a deep and meaningful understanding of the technical platform. Yet, until recently perhaps, he has neglected to consider the broader ethical implications of the actions that platform makes possible. Only in March 2018 did Zuckerberg publicly start to identify even basic ethical considerations (see Swisher & Wagner, 2018). And Facebook’s slow turn, starting in 2016 after Russian interference in the U.S. presidential election, to tackling the problem of fake news is another indicator of its recognition of its ethical responsibility (e.g., Boorstin, 2016; Schroeder, 2019). This turn marks Facebook’s entry into the ongoing conversation among the public, digital information industries, and other thought leaders on how to reason carefully about these ethical implications. All this reasoning is aimed at determining whether, and what kind of, policy and practice changes must be made to resolve these ethical issues. Should information companies like Facebook be federally regulated? Should users be better informed of how their information is being used? We will engage further with Facebook as a digital ethics case later in this book. But to us, Facebook’s controversy reflects the importance of the processes of digital and moral literacy in understanding digital ethics—the topic of this chapter.
Think about how you became literate. Think next about the implications of failing to become literate. Our guess is that your stories are much like our own. We became literate through a developmental process, learning first basic skills and scaffolding those up through experience and habit, guided by mentors, teachers, friends, and family who shared their expertise and experiences with us. And when we think about failures to become literate, we think of problems of access, inequality, and injustice. And just as we become literate in the context of reading and writing, we become literate in the context of ethics and the digital, too.
In this chapter we develop the claim that digital ethics requires engaging the intersection of moral literacy and digital literacy. Important problems lie at this intersection, including identifying novel ethical issues regarding emerging technologies, analyzing problems about the nature of stakeholders and their autonomy, and understanding a process of ethical decision making about digital issues. But let us first start with some philosophy.

The (Self-Driving) Trolley Problem

One of the most famous thought experiments in moral philosophy is known as the “trolley problem,” dealing with questions of technological control, agency, and moral responsibility. The problem originated in the work of philosopher Philippa Foot in 1967 (Marshall, 2018) in the context of the abortion debate, and it has since been adapted widely for numerous different applications (e.g., Thomson, 1985). In one version of this hypothetical scenario, there is a runaway trolley car and an individual with access to a track-switching lever. For some reason, there are five innocent persons tied to the main track and one other innocent person tied to a secondary track. If the individual witnessing this imminent disaster does nothing, the trolley will kill the five people tied to the main track. If they pull the lever, the trolley is diverted to the secondary track, where it kills the one person. The moral implications of this thought experiment, and its variations, have been debated by philosophers for decades. Around it revolve questions of vital importance: How far does our agency extend? What are the limits of our moral responsibility? To whom (or what) do we owe moral concern?
In the context of the digital, surrounded as we are by digital information and digital technologies that mediate endless flows of information, this thought experiment takes on renewed life. Indeed, coordinating the movement of human beings safely and efficiently has been a fundamental problem of human society for hundreds of years, so it is no surprise that many of our analog and digital technologies are directly or indirectly related to transportation systems. In fact, some of our earliest digital technologies, including the electric telegraph, were adopted more quickly and disseminated more widely due to their ability to solve transportation challenges. The early railroad systems in the U.S. and Britain, for instance, presented novel complications that required new technologies to address. As Gere (2008) explains, “The electric telegraph and Morse code were adopted as a solution for the ‘crisis of control’ in what was then possibly the most complex system ever built, the railways. Both in Britain and the U.S. the early railways were troubled by large numbers of accidents as well as problems with efficiency, mostly owing to the difficulty of coordinating different trains on the same line” (p. 35). With the telegraph’s ability to rapidly (at the time) send information from one part of the track to another, many miles away, technology was able to address a fundamental problem of complexity tied to this emerging transportation system.
Dealing with this “crisis of control” described by Gere led to technological advancements in our railway systems, which was a largely positive outcome (although an analysis of labor practices in railway construction introduces several other moral problems outside the scope of this book). Too much control can also lead to moral anxiety, though, as revealed by the scenarios presented in the trolley problem. In modern times, a third scenario has emerged: we are more often faced with the crisis of not being in control. For instance, our transportation systems are becoming more self-reliant, with technologies like autopilot, GPS navigation, and real-time safety mechanisms increasingly moving control from human operators to complex hardware and software systems. Automation is perhaps most frequently associated with the autopilot feature used in modern commercial airliners, but we are now seeing viable and operational automation within the commuter and passenger vehicle industry. These types of vehicles are often referred to in the media as “self-driving” or “autonomous” vehicles.
Autonomous vehicles can function without continuous human input. Simply put, they are self-driving cars and trucks. We have long seen examples of autonomous vehicles in fiction—the iconic self-driving cars in films such as Minority Report (Spielberg et al., 2002) provided visually compelling examples of these technologies long before we observed the clunky prototypes that now appear in the real world doing tasks like mapping streets and roads for GPS applications. Today, autonomous vehicles are big business, with one Boston consulting group finding that over $80 billion has been invested in autonomous vehicle technology since 2014 and projecting a 60 percent potential saving in fuel costs for consumers who may one day use shared autonomous vehicles (Worland, 2017). Since fuel is a finite resource, much attention is being paid to autonomous vehicles as new technologies that will be “better” versions of the vehicles we use today. They can be better by being safer, using less fuel, making fewer mistakes than human drivers, and requiring less infrastructure, like parking spots and parking garages. Imagine, for example, how these vehicles might drop their owners off at work, go home to the garage to recharge, and then circle back around at the end of the day to pick them back up.
In order to function, these vehicles use algorithms that learn and adapt to continuously changing parameters on the road and in traffic patterns. These functions and features allow the vehicles to behave autonomously, an advanced state of operation that depends on the technology’s capacity for automation. Manovich (2013) notes that automation is one of the fundamental properties of computing. In Manovich’s words, “As long as a process can be defined as a finite set of simple steps (i.e. as an algorithm), a computer can be programmed to execute these steps without human input” (p. 128). While such automation seems innocuous in many applications, such as controlling the temperature in an electronic toaster or setting the time for a recurring alarm in a digital alarm clock, the ethical implications become more significant when we consider certain digital technologies, such as the driverless vehicles discussed here. Consider, for example, the reduced autonomy of the people who ride in such vehicles. The individuals lack control, the vehicle may have a limited range of travel, and there is, according to some people, something simply “wrong” with autonomous vehicles. They are seen to impede autonomy and to take part of the driver’s and the passengers’ freedom away. When you are in control of where the vehicle goes, there is a sense of responsibility that comes with this control. When a computer does this work for you, both autonomy and responsibility are significantly reduced.
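Manovich’s definition can be made concrete in a few lines of code. The sketch below is purely illustrative, and every name in it is our own invention rather than real appliance firmware: it shows how even the humble toaster example reduces to a finite set of simple steps that a computer can execute without human input.

```python
# A minimal, hypothetical sketch of Manovich-style automation:
# a toaster-like control task defined as a finite set of simple steps.
# All function names and values here are made up for illustration.

TARGET_TEMP_C = 200.0  # desired browning temperature (hypothetical value)

def read_temperature() -> float:
    """Stand-in for a hardware temperature sensor (returns Celsius)."""
    return 180.0  # placeholder reading for this sketch

def set_heating_element(on: bool) -> None:
    """Stand-in for switching the heating element on or off."""
    print(f"heating element {'on' if on else 'off'}")

def control_step() -> None:
    # Step 1: observe the world. Step 2: compare against the target.
    # Step 3: act. No human input is required at any step.
    current = read_temperature()
    set_heating_element(current < TARGET_TEMP_C)

if __name__ == "__main__":
    control_step()
```

Each step is simple and fully specified in advance; on Manovich’s account, that is all automation requires.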
When automation is combined with artificial intelligence, decisions that were previously made by human beings are offloaded to computer software. The combination of automation and AI-based decision making is particularly troublesome to some, as evidenced by articles such as Hill’s (2016) essay about self-driving cars. In this piece, Hill notes that self-driving cars are a reality as of 2016, with prototypes routinely sent through the streets of Silicon Valley (albeit with a human backup operator for emergency purposes). However, ponders Hill, what happens when an autonomous vehicle is faced with a “no-win” scenario in which a collision is imminent and the vehicle must decide which lives take priority in the upcoming accident? This is a version of the trolley problem discussed above. Hill’s article draws on a study published in Science in which the authors pose the dilemma even more directly: “Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils—for example, running over pedestrians or sacrificing itself and its passenger to save them” (Bonnefon, Shariff, & Rahwan, 2016, p. 1573). Indeed, when surveyed about such technologies, 1,928 participants admitted that while they agreed with a utilitarian decision-making approach in autonomous vehicles, in which overall casualties are minimized even at the expense of the vehicle’s passengers, they would prefer to purchase vehicles that would “protect their lives by any means necessary” (Hill, 2016, para. 4).
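The tension in that survey result becomes vivid if the two decision rules are stated explicitly. The sketch below is our own schematic illustration, not code from any actual vehicle; the outcome representation and the casualty numbers are hypothetical stand-ins for the kind of “no-win” scenario Bonnefon and colleagues describe.

```python
# Hypothetical contrast of two decision rules for a "no-win" collision.
# This illustrates the ethical dilemma; it is not an actual AV planner.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passenger_deaths: int   # expected deaths inside the vehicle
    pedestrian_deaths: int  # expected deaths outside the vehicle

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths

# Two available actions, echoing the framing in Bonnefon et al. (2016).
outcomes = [
    Outcome("stay on course", passenger_deaths=0, pedestrian_deaths=5),
    Outcome("swerve into barrier", passenger_deaths=1, pedestrian_deaths=0),
]

# Utilitarian rule: minimize total casualties, whoever they are.
utilitarian = min(outcomes, key=lambda o: o.total_deaths)

# Passenger-protective rule: protect occupants first, then minimize harm.
protective = min(outcomes, key=lambda o: (o.passenger_deaths, o.total_deaths))

print("utilitarian rule chooses:", utilitarian.action)  # swerve into barrier
print("protective rule chooses:", protective.action)    # stay on course
```

Survey respondents endorsed the first rule in the abstract but said they would buy the car running the second: precisely the conflict the Science study documents.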
Even more fascinating are the cultural variations in responses to updated versions of the trolley problem. In one such update, the MIT Media Lab created a “Moral Machine” (MIT Media Lab, n.d.) in which users were allowed to “switch” the programming of an autonomous vehicle. Users were asked to “decide whether to, say, kill an old woman walker or an old man, or five dogs, or five slightly tubby male pedestrians” (Marshall, 2018, para. 2). The Moral Machine collected 39.6 million decisions, in 10 different languages, from millions of people in 233 different countries and territories (Marshall, 2018, para. 1). The results diverged depending on cultural norms and values. As Marshall explained (2018, para. 3):
participants from eastern countries like Japan, Taiwan, Saudi Arabia and Indonesia were more likely to be in favor of sparing the lawful, or those walking with a green light. Participants in western countries like the US, Canada, Norway, and Germany tended to prefer inaction, letting the car continue on its path. And participants in Latin American countries, like Nicaragua and Mexico, were more into the idea of sparing the fit, the young, and individuals of higher status.
This example reveals that engaging digital ethics is a process not only inexorably linked to our own values and bound up in our own interpretations of the world, but also one that draws deeply from our cultural backgrounds, expectations, and ideologies. A 2018 Nature essay captured the crux of the issue in its headline: “Moral choices are not universal” (Maxmen, 2018). Although it might seem that ethically training vehicles to understand this and to make better decisions in these difficult circumstances would be at the forefront of these companies’ minds, that is sadly not yet the case, because our technologies are not sophisticated enough. Marshall (2018) notes that “it’s hard enough for their sensors to distinguish vehicle exhaust from a solid wall, let alone a billionaire from a homeless person. Right now, developers are focused on more elemental issues, like training the tech to distinguish a human on a bicycle from a parked car, or a car in motion” (para. 9). It is not difficult to imagine, however, a near future in which these problems have been solved and we must then turn to the harder questions of ethics in these sorts of “no-win” driving scenarios.
The autonomous vehicle scenario poses interesting moral questions regarding the relative value of passengers, pedestrians, and other motorists, and illustrates that some of the most challenging ethical questions are perhaps those most relevant to digital environments. Identifying these emerging issues as ethical is a key step in the process of becoming morally and digitally literate—in doing digital ethics.

Digital Literacy

Concerns like these are part of two overlapping literacies, both necessary for understanding digital ethics. Digital literacy, the process of coming to understand and engage the technologies and information flows that surround us, is one of these. Ethical literacy, or becoming sensitive to, reasoning about, and being motivated to act on emergent ethical issues, is the other. As we will see, digital ethics is particularly interesting because it exemplifies the ways in which epistemic concerns (about the things and ways we know) are coupled to ethical concerns (about the things and ways we value).
Digital ethics is incomplete without an understanding of literacy in digital contexts. Such literacy, like literacy understood in general, is a prerequisite to the ability to understand, to evaluate, and to act on moral problems in digital ethics. Just as one cannot understand a legal document without being able to read and comprehend the wor...
