Chapter 1
Is technology neutral?
When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technical progress.
– Paul Virilio, French cultural theorist and philosopher
Is technology neutral? This is one of the most contentious questions in technology today. It is more than an academic debate and more than a matter of personal opinion. How we answer it helps determine the responsibilities of anyone involved in creating or using technology. It also helps us answer the central question of this book: how can we create and use technology to maximize benefits and minimize harms?
Technology shapes our lives and those of other people with whom we come into contact, both directly and indirectly, whether we are aware of it or not. It can act as a liberating force but it also entrenches asymmetries of power across gender, race, class, generations and geography. It is at the core of some of the most valuable companies not only of today but in history – companies whose power and influence challenge the authority of governments around the world. It can help people hold their governments to account but it also helps those governments to surveil more people, in more detail, than ever before. It is a matter of superpower rivalry between the United States, the European Union and China, often forcing other countries to take sides not just on technology suppliers but also on values. It challenges our ideas about what it means to be human. It forces us to reconsider what it means to have autonomy and to enjoy privacy, civil liberties and human rights.
Technology ethics concerns all of us – we all use technology and we all experience technology being used on us by others – but it especially concerns anyone involved in making technology, as they bear at least a degree of responsibility for their creations. That is why Joseph Weizenbaum, a computer scientist at MIT and an early pioneer of artificial intelligence (AI), issued this warning in 1987:
It [is] possible not to know and not to ask if one is doing sensible work or contributing to the greater efficiency of murderous devices.
One can't escape this state without asking, again and again, 'What do I actually do? What is the final application and use of the products of my work?' and ultimately, 'Am I content or ashamed to have contributed to this use?' [Emphasis added]
Nor can we hide behind good intentions, as Apple CEO Tim Cook explained in his commencement speech at Stanford in 2019:
Too many seem to think that good intentions excuse away harmful outcomes. But whether you like it or not, what you build and what you create define who you are.
It feels a bit crazy that anyone should have to say this, but if you built a chaos factory, you can't dodge responsibility for the chaos. [Emphasis added]
He speaks from experience. Under Cook's leadership, Apple has refused to unlock iPhones for the FBI in criminal investigations; navigated a tricky path over privacy and data rights with China (where it assembles most of its products and makes around 20 per cent of its revenues); and unveiled a system that, while intended to scan users' iPhones for child sexual abuse material, could potentially be expanded to allow governments to scan private content on people's phones.
In this chapter we will consider a debate between some people who think that technology is neutral and others who think it is not. Then we will examine various tools and technologies to evaluate how they compare on the question of neutrality. Finally, we will consider different forms of 'intelligence' and how this relates to decision-making and, ultimately, responsibility.
The debate
Imagine two teams composed of some of the most interesting thinkers in technology living today and ask them, 'Is technology neutral?' Here is what they might have to say.
Team 'technology is neutral'
Technology is neither good nor bad; it depends on how we use it. That is what Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Lab (MIT's largest research lab), thinks. Here is how she explained her thinking at the World Economic Forum in 2019:
I'm a roboticist. Now when I tell people what I do, I get one of two types of reactions. Some people get anxious. They make jokes about Skynet. And they ask me, 'When will the robots take over my job?' And then other people get very excited and ask me, 'When will my car be self-driving?'
Well, I belong to the second group. But I believe it's very important to understand the concerns of the first group and provide ideas and suggestions for how to see things differently. And this starts with understanding that AI and robotics and machine learning are tools. They are...