Understanding Digital Ethics

Cases and Contexts

Jonathan Beever, Rudy McDaniel, Nancy A. Stanlick

About This Book

Rapid changes in technology and the growing use of electronic media signal a need for understanding both clear and subtle ethical and social implications of the digital, and of specific digital technologies. Understanding Digital Ethics: Cases and Contexts is the first book to offer a philosophically grounded examination of digital ethics and its moral implications. Divided into three clear parts, the authors discuss and explain the following key topics:

• Becoming literate in digital ethics

• Moral viewpoints in digital contexts

• Motivating action in digital ethics

• Speed and scope of digital information

• Moral algorithms and ethical machines

• The digital and the human

• Digital relations and empathy machines

• Agents, autonomy, and action

• Digital and ethical activism.

The book includes cases and examples that explore the ethical implications of digital hardware and software including videogames, social media platforms, autonomous vehicles, robots, voice-enabled personal assistants, smartphones, artificially intelligent chatbots, military drones, and more.

Understanding Digital Ethics is essential reading for students and scholars of philosophical ethics, those working on topics related to digital technology and digital/moral literacy, and practitioners in related fields.


Information

Publisher: Routledge
Year: 2019
ISBN: 9781315282114
Edition: 1

Part I

Ethical and Digital Literacy

1

Becoming Literate in Digital Ethics

Literacy in digital ethics is often sadly lacking in both contemporary discourse and the design of technologies that support our everyday conversations and interactions in digital spaces. Consider just one of many recent examples that illustrate this claim. Facebook, and its CEO Mark Zuckerberg, continue to make national headlines for ethical issues ranging from data privacy, to propagation of fake news, to enabling Russian interference in the 2016 presidential election. A reporter pointed clearly to the underlying problem: “The fact is that Facebook’s underlying business model itself is troublesome: offer free services, collect user’s private information, then monetize that information by selling it to advertisers or other entities” (Francis, 2017). Like those of so many other big-data corporate entities, Facebook’s ethical issues are rooted in its business model. But the ethical problems Facebook faces were made possible by failures of its creators. Mark Zuckerberg and Facebook thus make for a fascinating case about the importance of ethical and digital literacy.
One could argue that Zuckerberg has largely failed to be sensitive to, or to identify, the ethical implications of his corporate creation, which he has previously argued is a neutral technology platform, not a media company responsible for its content (Constine, 2016). Of course, he has a deep and meaningful understanding of the technical platform. Yet, until recently perhaps, he has neglected to consider the broader ethical implications of the actions that platform makes possible. Only in March 2018 did Zuckerberg publicly start to identify even basic ethical considerations (see Swisher & Wagner, 2018). And Facebook’s slow turn, starting in 2016 after Russian interference in the U.S. presidential election, to tackling the problem of fake news is another indicator that it recognizes its ethical responsibility (e.g., Boorstin, 2016; Schroeder, 2019). This turn marks Facebook’s entry into the ongoing conversation among the public, digital information industries like Facebook, and other thought leaders on how to reason carefully about these ethical implications. All this reasoned thinking targets whether, and what kind of, policy and practice changes must be made to resolve ethical issues. Should information companies like Facebook be federally regulated? Should users be better informed about how their information is being used? We will engage further with Facebook as a digital ethics case later in this book. But to us, Facebook’s controversy reflects the importance of the processes of digital and moral literacy in understanding digital ethics, the topic of this chapter.
Think about how you became literate. Think next about the implications of failing to become literate. Our guess is that your stories are much like our own. We became literate through a developmental process, learning basic skills first and scaffolding them up through experience and habit, guided by mentors, teachers, friends, and family who shared their expertise and experiences with us. And when we think about failures to become literate, we think of problems of access, inequality, and injustice. And just as we become literate in the context of reading and writing, we become literate in the contexts of ethics and the digital, too.
In this chapter we develop the claim that digital ethics requires engaging the intersection of moral literacy and digital literacy. Important problems lie at this intersection, including identifying novel ethical issues raised by emerging technologies, analyzing problems about the nature of stakeholders and their autonomy, and understanding a process of ethical decision making about digital issues. But let us begin with some philosophy.

The (Self-Driving) Trolley Problem

One of the most famous thought experiments in moral philosophy, the “trolley problem,” deals with questions of technological control, agency, and moral responsibility. The problem originated in the work of philosopher Philippa Foot in 1967 (Marshall, 2018), in the context of the abortion debate, and has since been adapted widely for numerous different applications (e.g., Thomson, 1985). In one version of this hypothetical scenario, there is a runaway trolley car and an individual with access to a track-switching lever. For some reason, there are five innocent persons tied to the main track and one other innocent person tied to a secondary track. If the individual witnessing this imminent disaster does nothing, the trolley will kill the five people tied to the main track. If they pull the lever, the trolley is diverted to the secondary track, where it kills the one person. The moral implications of this thought experiment, and its variations, have been debated by philosophers for decades. Around it revolve questions of vital importance: How far does our agency extend? What are the limits of our moral responsibility? To whom (or what) do we owe moral concern?
In the context of the digital, surrounded as we are by digital information and digital technologies that mediate endless flows of information, this thought experiment takes on renewed life. Indeed, coordinating the movement of human beings safely and efficiently has been a fundamental problem of human society for hundreds of years, so it is no surprise that many of our analog and digital technologies are directly or indirectly related to transportation systems. In fact, some of our earliest digital technologies, including the electric telegraph, were adopted more quickly and disseminated more widely because of their ability to solve transportation challenges. The early railroad systems in the U.S. and Britain, for instance, presented novel complications that required new technologies to address. As Gere (2008) explains, “The electric telegraph and Morse code were adopted as a solution for the ‘crisis of control’ in what was then possibly the most complex system ever built, the railways. Both in Britain and the U.S. the early railways were troubled by large numbers of accidents as well as problems with efficiency, mostly owing to the difficulty of coordinating different trains on the same line” (p. 35). With the telegraph’s ability to send information rapidly (for its time) from one part of the track to another, many miles away, technology was able to address a fundamental problem of complexity tied to this emerging transportation system.
Dealing with this “crisis of control” described by Gere led to technological advancements in our railway systems, a largely positive outcome (although an analysis of labor practices in railway construction introduces several other moral problems outside the scope of this book). Too much control can also lead to moral anxiety, though, as revealed by the scenarios presented in the trolley problem. In modern times, a third scenario has emerged: we are more often faced with the crisis of not being in control. For instance, our transportation systems are becoming more self-reliant, with technologies like autopilot, GPS navigation, and real-time safety mechanisms increasingly moving control from human operators to complex hardware and software systems. Automation is perhaps most frequently associated with the autopilot feature used in modern commercial airliners, but we are now seeing viable and operational automation within the commuter and passenger vehicle industry. These vehicles are often referred to in the media as “self-driving” or “autonomous” vehicles.
Autonomous vehicles can function without continuous human input. Simply put, they are self-driving cars and trucks. We have long seen examples of autonomous vehicles in fiction: the iconic self-driving cars in films such as Minority Report (Spielberg et al., 2002) provided visually compelling examples of these technologies long before we observed the clunky prototypes that now appear in the real world doing tasks like mapping streets and roads for GPS applications. Today, autonomous vehicles are big business, with one Boston consulting group finding over $80 billion invested in autonomous vehicle technology since 2014 and projecting a 60 percent potential saving in fuel costs for consumers who may one day use shared autonomous vehicles (Worland, 2017). Since fuel is a finite resource, much attention is being paid to autonomous vehicles as new technologies that will be “better” versions of the vehicles we use today. They can be better by being safer, using less fuel, making fewer mistakes than human drivers, and requiring less infrastructure, like parking spots and parking garages. Imagine, for example, how these vehicles might drop their owners off at work, return home to the garage to recharge, and then circle back at the end of the day to pick them up.
In order to function, these vehicles use algorithms that learn and adapt to continuously changing parameters on the road and in traffic patterns. These functions and features allow the vehicles to behave autonomously, an advanced state of operation that depends on technology’s capacity for automation. Manovich (2013) notes that automation is one of the fundamental properties of computing. In Manovich’s words, “As long as a process can be defined as a finite set of simple steps (i.e. as an algorithm), a computer can be programmed to execute these steps without human input” (p. 128). While such automation seems innocuous in many applications, such as controlling the temperature in an electric toaster or setting the time for a recurring alarm in a digital alarm clock, the ethical implications become more significant with certain digital technologies, such as the driverless vehicles we are discussing here. Consider, for example, the reduced autonomy of the people who ride in such vehicles. The individuals lack control, the vehicle may have a limited range of travel, and there is something, according to some people, simply “wrong” with autonomous vehicles: they are seen to impede autonomy and to take away part of the driver’s and the passengers’ freedom. When you are in control of where the vehicle goes, a sense of responsibility comes with that control. When a computer does this work for you, both autonomy and responsibility are significantly reduced.
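To make Manovich’s definition concrete, here is a minimal sketch of one of those innocuous applications, a simulated toaster thermostat, written in Python as a finite set of simple steps that a computer can execute unattended. The sketch is ours, not Manovich’s, and the temperature values are invented:

```python
# A toy illustration of automation in Manovich's sense: a process
# defined as a finite set of simple steps (an algorithm) that runs
# to completion with no human input. All values are hypothetical.

TARGET_TEMP_C = 180  # assumed setpoint for the heating element

def run_toaster(temp_c=20.0, steps=60):
    """Simulate a fixed number of one-second control steps."""
    for _ in range(steps):
        heating = temp_c < TARGET_TEMP_C      # step: compare to setpoint
        temp_c += 5.0 if heating else -1.0    # step: heat, or cool slightly
    return temp_c

print(run_toaster())  # the whole procedure executes unattended
```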
When automation is combined with artificial intelligence, decisions that were previously made by human beings are offloaded to computer software. The combination of automation and AI-based decision making is particularly troublesome to some, as evidenced by articles such as Hill’s (2016) essay about self-driving cars. In this piece, Hill notes that self-driving cars were already a reality in 2016, with prototypes routinely driving through the streets of Silicon Valley (albeit with a human backup operator for emergency purposes). However, ponders Hill, what happens when an autonomous vehicle is faced with a “no-win” scenario in which a collision is imminent and the vehicle must decide which lives take priority in the coming accident? This is similar to the trolley problem discussed above. Hill’s piece draws on an article published in Science in which the authors pose the dilemma even more directly: “Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils—for example, running over pedestrians or sacrificing itself and its passenger to save them” (Bonnefon, Shariff, & Rahwan, 2016, p. 1573). Indeed, when surveyed about such technologies, 1,928 participants admitted that while they agreed with a utilitarian decision-making approach in autonomous vehicles, in which overall casualties are minimized even at the expense of the vehicle’s passengers, they would prefer to purchase vehicles that would “protect their lives by any means necessary” (Hill, 2016, para. 4).
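Encoded naively, the utilitarian approach those respondents endorsed takes only a few lines. The sketch below is ours, not Bonnefon, Shariff, and Rahwan’s model; the action names and casualty counts are hypothetical:

```python
# A deliberately naive sketch of utilitarian decision making in an AV:
# minimize expected casualties, even when that sacrifices the vehicle's
# own passenger. Scenario names and counts are hypothetical.

def choose_action(options):
    """options maps each available action to its expected casualties."""
    return min(options, key=options.get)

# A no-win scenario echoing the trolley problem:
scenario = {
    "stay_course": 5,  # e.g., five pedestrians ahead in the roadway
    "swerve": 1,       # e.g., the vehicle's lone passenger
}
print(choose_action(scenario))  # -> "swerve"
```

Even this toy exposes the tension Hill reports: buyers who want a car that protects them “by any means necessary” are asking for a different objective than the casualty-minimizing rule they endorse in the abstract.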
Even more fascinating are the cultural variations in responses to updated versions of the trolley problem. In one such update, the MIT Media Lab created a “Moral Machine” (MIT Media Lab, n.d.) in which users were allowed to “switch” the programming of an autonomous vehicle. Users were asked to “decide whether to, say, kill an old woman walker or an old man, or five dogs, or five slightly tubby male pedestrians” (Marshall, 2018, para. 2). The Moral Machine collected 39.6 million decisions, in 10 different languages, from millions of people in 233 countries and territories (Marshall, 2018, para. 1). The results diverged along lines of cultural norms and values. As Marshall (2018, para. 3) explained:
participants from eastern countries like Japan, Taiwan, Saudi Arabia and Indonesia were more likely to be in favor of sparing the lawful, or those walking with a green light. Participants in western countries like the US, Canada, Norway, and Germany tended to prefer inaction, letting the car continue on its path. And participants in Latin American countries, like Nicaragua and Mexico, were more into the idea of sparing the fit, the young, and individuals of higher status.
This example reveals that engaging digital ethics is a process not only inextricably linked to our own values and bound up in our own interpretations of the world, but one that also draws deeply from our cultural backgrounds, expectations, and ideologies. A 2018 Nature essay captured the crux of the issue in its headline: “Moral choices are not universal” (Maxmen, 2018). Although it might seem that ethically training vehicles to understand this and make better decisions in these difficult circumstances would be at the forefront of these companies’ minds, sadly it is not, because our technologies are not yet sophisticated enough. Marshall (2018) notes that “it’s hard enough for their sensors to distinguish vehicle exhaust from a solid wall, let alone a billionaire from a homeless person. Right now, developers are focused on more elemental issues, like training the tech to distinguish a human on a bicycle from a parked car, or a car in motion” (para. 9). It is not difficult to imagine, however, a near future in which these problems have been solved and we must then turn to the harder questions of ethics in these sorts of “no-win” driving scenarios.
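One way to see why those harder questions resist a single answer is to notice that any such decision rule must weight outcomes, and the Moral Machine results suggest the weights vary by culture. The following sketch extends our utilitarian toy above; the dilemma, profiles, and weights are invented for illustration, loosely echoing Marshall’s summary of the survey:

```python
# Toy extension of the utilitarian sketch: the "cost" assigned to each
# outcome is itself a value judgment. All weights are invented.

def choose_action(options, weights, inaction_bias=0.0, default="continue"):
    """Pick the action with the lowest culturally weighted cost."""
    def cost(action):
        total = sum(weights[group] * n for group, n in options[action].items())
        if action == default:
            total -= inaction_bias  # preference for letting the car continue
        return total
    return min(options, key=cost)

# Hypothetical dilemma: continue into two jaywalkers, or swerve into
# one pedestrian crossing with a green light.
options = {
    "continue": {"jaywalker": 2},
    "swerve": {"lawful_walker": 1},
}

minimize_casualties = {"jaywalker": 1.0, "lawful_walker": 1.0}
spare_the_lawful = {"jaywalker": 1.0, "lawful_walker": 4.0}  # invented weighting

print(choose_action(options, minimize_casualties))                     # -> "swerve"
print(choose_action(options, spare_the_lawful))                        # -> "continue"
print(choose_action(options, minimize_casualties, inaction_bias=2.0))  # -> "continue"
```

Three defensible rationales, two different outcomes: the divergence lies in the values, not the code.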
The autonomous vehicle scenario poses interesting moral questions regarding the relative value of passengers, pedestrians, and other motorists, and illustrates that some of the most challenging ethical questions are perhaps those most relevant to digital environments. Identifying these emerging issues as ethical is a key step in the process of becoming morally and digitally literate, in doing digital ethics.

Digital Literacy

Concerns like these are part of two overlapping literacies, both necessary for understanding digital ethics. Digital literacy, the process of coming to understand and engage the technologies and information flows that surround us, is one of these. Ethical literacy, or becoming sensitive to, reasoning about, and being motivated to act on emergent ethical issues, is the other. As we will see, digital ethics is particularly interesting because it exemplifies the ways in which epistemic concerns (about the things and ways we know) are coupled with ethical concerns (about the things and ways we value).
Digital ethics is incomplete without an understanding of literacy in digital contexts. Such literacy, like literacy understood in general, is a prerequisite to the ability to understand, to evaluate, and to act on moral problems in digital ethics. Just as one cannot understand a legal document without being able to read and comprehend the wor...
