Privacy, due process and the computational turn
A parable and a first analysis
Katja de Vries*
The parable of the three robotic dogs
Once upon a time, in a land far, far away, there were three families who each owned a robotic dog. The robotic dogs were a great source of entertainment for their owners: the family members enjoyed playing with them the same way as one would with a real dog. Next to their roles as loyal canine companions, the dogs were also supposed to patrol around the house of the families and protect them from 'suspect types',1 such as potential burglars, child molesters and any other unwelcome intruders. The eyes of these robotic dogs registered all the passers-by and stored their image and gait. Whenever a dog spotted a 'suspect type' around the family house it would bark, set off the alarm system, contact all family members and call the police. If the 'suspect type' continued to approach the house, the dog could set off tear gas. The instruction manual opened with:
Congratulations! You have made a great choice. No real dog could ever provide the same level of security as your new robotic pet. No burglar can distract your robotic dog with a simple piece of sausage. The robotic dog will never waggle its tail at an unwelcome guest, because you instruct it exactly about who should be kept away from your family. Robo-dog has an extremely user-friendly interface: just talk to the dog as you would to a child.2 And which real dog would be able to call the police if it spotted a suspicious type? Precisely. But your robotic dog will.
The manual offered different ways of instructing the dog about who qualifies as a 'suspect type'. Each family picked a different strategy. The first family had very strict ideas about who could be trusted and who not, so they decided to pre-program the dog with a set of fixed rules reflecting their ideas. The family compiled a long and complex set of rules such as: 'Somebody is suspect if possessing the characteristics "adult" and "passing by more than three times within a time span of one hour", or the characteristics "bearded" and "climbing" and is not a "family member", or the characteristics […] or […] etc.' They instructed the dog to proceed with the alarm routine if it detected the presence of a set of characteristics described by their rules. On the eve of the dog's first patrol the mother had looked sternly into the eyes of the dog and said: 'You rule the street, but we rule you, understood?' From that day onwards all the family members called the dog by the name 'Ruler'.3
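In computational terms, the first family's strategy corresponds to a hand-coded, rule-based classifier: the decision criteria are written down explicitly in advance. The minimal sketch below (in Python, added purely as an illustration; the feature names and rules are hypothetical renderings of the rules quoted above, not an actual implementation) shows what such a fixed rule set might look like.

```python
# Illustrative sketch only: a hand-written rule set of the kind the first family
# gives Ruler. The feature names and rules are hypothetical renderings of the
# rules quoted in the parable.

def is_suspect(passerby: dict) -> bool:
    """Return True if any hand-crafted rule fires for this passer-by."""
    rules = [
        # 'adult' and 'passing by more than three times within one hour'
        lambda p: p.get("adult") and p.get("passes_last_hour", 0) > 3,
        # 'bearded' and 'climbing' and not a 'family member'
        lambda p: p.get("bearded") and p.get("climbing") and not p.get("family_member"),
        # ... further rules would be appended here ...
    ]
    return any(rule(passerby) for rule in rules)

# The roofer who appears later in the parable would (wrongly) trigger the second rule:
roofer = {"adult": True, "bearded": True, "climbing": True, "family_member": False}
print(is_suspect(roofer))  # True -> Ruler starts the alarm routine
```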
The second family said: 'We personally think it is quite easy to distinguish suspect types from trustworthy ones, but we have difficulty articulating what the common denominators are that give them away.' So instead of giving their dog a precise definition of what to look for, they would point out passers-by who looked trustworthy to them and those they considered untrustworthy. Thus, in contrast to the first family, they did not provide their metallic pet with any explicit rules, but merely with examples of passers-by they had labelled as either 'trustworthy' or 'suspect'. The dog was told to compare every new passer-by to the examples it was given and to classify the passer-by in the same category as the example that was most similar to it. Because 'similarity' is quite an equivocal concept (just think of how opinions can differ on which siblings in a family are most alike), the family gave a formalised definition of similarity. The family explained to the dog that 'similarity' should be understood as 'closest in terms of Euclidean distance'. They also told the dog which characteristics to take into account, how to represent these characteristics as points in Euclidean space and how to calculate the distance between those points. Observing how the dog assigned each passer-by to the class of the closest labelled example, the family named the dog 'Closer'.
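The second family's instruction amounts to supervised learning by example, more specifically a nearest-neighbour classifier: every new passer-by receives the label of the most similar labelled example, with similarity defined as Euclidean distance. The sketch below (again in Python, with made-up numerical features and labels, purely as an illustration) captures the idea.

```python
import math

# Illustrative sketch only: 1-nearest-neighbour classification as Closer performs it.
# The numerical features (say, height in metres and passes per hour) are hypothetical.
labelled_examples = [
    ((1.80, 1.0), "trustworthy"),
    ((1.75, 5.0), "suspect"),
    ((1.60, 2.0), "trustworthy"),
]

def euclidean(a, b):
    """Distance between two passers-by represented as points in Euclidean space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(new_passerby):
    """Assign the label of the closest labelled example."""
    closest_point, closest_label = min(
        labelled_examples, key=lambda ex: euclidean(ex[0], new_passerby)
    )
    return closest_label

print(classify((1.78, 4.0)))  # -> 'suspect': nearest to the second labelled example
```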
The third family thought it would be silly to impose their own stereotypes onto the dog. 'Is it not precisely the problem that intruders make use of our prejudiced expectations? The good thing about this robotic dog is that it is not burdened by such prejudices, and maybe we can learn something from its naive perspective', they pondered. Thus, the family told the dog, whom they had named 'Cluster': 'Now go outside, and after a month you can tell us what categories of people you think there are. Let us begin with four different categories, ok, Cluster?' Although the third family had not provided the dog with any explicit definition of 'suspect' behaviour (as the first family had) or given it any examples of what it should look for (as the second family had), it had instructed the dog with a very precise definition of similarity in terms of Euclidean distance and told it to come up with four classes.
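The third family's instruction, finally, corresponds to unsupervised learning: no labels are given, only a similarity measure and the number of groups to be found. A common technique of this kind is k-means clustering, sketched below (in Python with scikit-learn; the simulated data and the feature names are hypothetical stand-ins for whatever Cluster actually observes).

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch only: Cluster's instruction expressed as k-means with k = 4.
# The three numeric features (e.g. sock colour code, walking-pace rhythm, number of
# rings) are hypothetical stand-ins for whatever the dog records about passers-by.
rng = np.random.default_rng(seed=0)
observations = rng.random((200, 3))        # 200 passers-by, 3 features each

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(observations)
labels = kmeans.labels_                    # each passer-by ends up in one of four clusters

# Nothing in the data says which cluster is 'suspect'; that meaning is projected
# onto 'category three' afterwards by the family, just as in the parable.
print(np.bincount(labels))                 # size of each of the four clusters
```

Even in this 'unsupervised' setting the outcome is shaped by choices made beforehand: the distance measure, the candidate features and the number of clusters, which is exactly the bias the father overlooks in the episode that follows.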
Later that week Ruler, the robotic dog of the first family, called the police after noticing the characteristics 'beard', 'adult' and 'climbing' when a roofer came to replace a few missing tiles on the roof. The family had to come over to the police station and explain the misunderstanding. The family realised that making a list of rules defining 'suspect types' was not that easy after all. The second family, who had tried to teach their dog Closer to recognise 'suspect types' by providing it with examples, also had some mixed feelings about the result. If Closer barked at people whom the owners did not consider suspect, they could correct the dog so that it could adjust its 'model'.4 However, the mere fact that Closer barked at a passer-by made this particular person seem more suspect.5 A person who might have looked harmless to the owners before now gained an aura of potential danger. So, in practice the owners did not correct their dog very often. As time went by they began to rely increasingly on the judgment of Closer.
Cluster, the dog of the third family, strolled around for a month in the surroundings of the family home and then came up with a very elaborate categorisation of people involving, among other things, the colour of their socks, the rhythm of their walking pace and the presence of jewellery on their fingers. 'Isn't that exciting?', the father said. 'Without any explicit instruction Cluster came up with this categorisation that eludes me but probably is a very truthful one, because we have not influenced our robotic pet with our prejudices and ingrained ideas.' (The father did not give much thought to how he had biased the family dog by defining similarity in terms of proximity, by providing it with a long list of characteristics that possibly could be of interest and by telling it that it had to classify people in four categories.) The mother added: 'It seems to me that what Cluster describes as "category three" is the type of person we should be suspicious of.' Soon the whole family became convinced that Cluster had unearthed a very profound categorisation. When the dog barked and signalled that a 'category three' person was passing by, the whole family shivered. Sometimes they had difficulty recognising these 'category three' people themselves but they had great trust in the judgment of their robotic dog.
One day the three families decided to organise a garden party. They had invited all their neighbours but none had shown up. People were fed up with these metallic surveillance machines. The three families who owned the robotic dogs belonged to the ruling elite of the community and the robotic dogs had merely accentuated already existing social differences. People were annoyed with these devices that recorded everything they saw and moreover had the capacity to contact the police independently of their owners. The three families cared little about these feelings of their neighbours. Now that they were the only guests at their own party, they had plenty of opportunity to observe and discuss the behaviour of their robotic dogs. To their surprise, they noticed that there was hardly any overlap between the passers-by at which Ruler, Closer and Cluster would bark.
What is the moral of this parable? Normally, a parable is directly followed by the lessons that can be drawn from it. However, because the moral of this parable is based on the various ideas, analyses and recommendations presented in this volume, I will first present some more conventional introductory observations regarding the contents of this book. In the last section of this introductory chapter, I will finally return to the 'parable of the three robotic dogs' and suggest some lessons it could teach us about the computational turn, due process and privacy.
Family resemblance concepts and kaleidoscopic patterns
The 'computational turn', 'due process' and 'privacy': all the contributions presented in this volume engage with these three notions; notions that are of utmost importance for every contemporary information society intending to function in accordance with constitutional and fundamental human rights. However, as much as the three notions are important, they are also complex, multifaceted and sometimes even evanescent. Solove's observation that '[privacy] seems to be about everything, and therefore it appears to be nothing' (2006: 479) could easily be extended to 'due process' and the 'computational turn'. To make things worse, the notions consist of a multiplicity of concepts that share some traits but have only a minimal, if any, common core. For example, a privacy interference can refer to concepts of privacy as disparate as 'surveillance' (eg wiretapping by the police), 'decisional privacy' (eg a state does not allow a woman to decide for herself whether to have an abortion or not) and 'exposure' (eg a newspaper that publishes candid pictures of a naked celebrity) (Solove 2006). Nevertheless, this does not make the notions 'computational turn', 'due process' and 'privacy' incoherent or useless. Like members of a family, the different uses of these notions form 'a complicated network of similarities overlapping and criss-crossing' (Wittgenstein 1997: §66, as quoted in Solove 2006: 485). The importance of these notions in contemporary debates might partly be explained precisely by their multifaceted and contested natures. An invocation such as 'Let us not forget the requirements of due process!' will hardly provide a straightforward instruction. More likely it will act as an invitation to discuss what those requirements are.

Instead of simplifying the meaning of the 'computational turn', 'due process' and 'privacy', the contributions in this volume bear witness to their multifaceted nature and their capacity to oblige us to think, debate and rethink our epistemology (how to compute valid, reliable and valuable knowledge?) and political constitution (which power balance should we preserve in our society, and how can we articulate those requirements in terms of due process and privacy?). Taking the provocative, obliging and multifaceted character of the notions 'computational turn', 'privacy' and 'due process' seriously has resulted in this volume, offering a polyphony (Bakhtin 1984: 6) of perspectives and analyses on the relations between these three notions: 'A plurality of independent and […] fully valid voices […] with equal rights and each with its own world'. This polyphony is further enhanced by the diverging disciplinary backgrounds of the contributors: a specialist in machine learning, philosophers and sociologists of technology, and legal philosophers.

Nevertheless, there are some shared characteristics as well. One aspect that all the contributions have in common is their engaged outlook. Epistemological questions are never treated in a strictly analytical manner, as is sometimes done within philosophy of science (Harizanov et al 2007; Harman and Kulkarni 2007; Thagard 1990), but are always related to the power relations between different actors, to the choices which policy-makers, engineers and citizens face in the present computational era and to the way these choices affect the constitutions of our societies.
Another shared characteristic is the appetite for unconventional and novel approaches: for example, data minimisation, a classical and venerated pillar of data protection, is (at least partly) rejected by all authors in favour of other solutions. The following tentative list gives an impression of some of the solutions6 proposed by the authors:
- effective outcome transparency instead of the current focus on nominal procedural transparency (Koops)
- obfuscation tactics, which make the process of profiling more time-consuming, costly, complex and difficult (Brunton and Nissenbaum)
- a duty to acquire knowledge of the profiling artefacts (Magnani)
- explicitation of the profiling process by using experiment databases (van Otterlo)
- taking into account that profiling algorithms should be made not only on behalf of the state or companies, but also on behalf of the citizens who are subjected to them (Kerr)
- increasing the amount of available data and profiles to make it more difficult to get univocal indications (Esposito)
- re-introducing actual persons, objects and situations into profiling practices, which operate mostly on the infra- and supra-individual level (Rouvroy)
- developing profile transparency in the front end, the interface and the back end of the computational decision systems that have a significant impact on our lives (Hildebrandt).
A third trait shared by all contributions is that they are not simply concerned with data collection, storage, processing or retrieval as su...