
The Spy in the Coffee Machine
The End of Privacy as We Know It
Kieron O'Hara and Nigel Shadbolt
1
THE CASE OF THE DISAPPEARING BODY
… he that increaseth knowledge increaseth sorrow.
Ecclesiastes 1.18
THE BODY DISAPPEARS
In the words of the poem, 'Yesterday upon the stair, I met a man who wasn't there.' This was meant to be humorous: we can presume its author (one Hughes Mearns, since you ask) wasn't expecting it to be prescient. Nonetheless, it was.
A century after the lines were composed, we live in a society where all the time we meet men and women who aren't there. Acquaintance used to be face-to-face, a firm handshake, getting the cut of someone's jib. Trust was a matter of direct, personal acquaintance.1 But the needs of a complex society, and a set of new technologies, changed all that.
The proportion of significant face-to-face contacts is falling all the time, in what has been called by sociologists the 'disappearance of the body'. We communicate by phone, email, letter, text; increasingly many of the contacts that make up our society are mediated through technology. Technological representations of ourselves do the interacting.
Mearns' poem has a very intriguing third line: 'He wasn't there again today.' The man who wasn't there makes a reappearance. But how can the narrator of the poem know that the man who wasn't there today was the same man who wasn't there yesterday? A nonsensical question? Hardly. If the man in the poem had been there yesterday and today, it would be a trivial matter of memory to check whether he was the same on each day. Of course, the narrator can be fooled, by identical twins or a master of disguise. But the procedure is simple – recognising the same face, voice and mannerisms. Our brains have evolved over millions of years to do precisely that. And our society has augmented these methods with others to deal with less familiar persons – signatures, seals and passwords.
But a man who isn't there? None of these standard high-bandwidth methods will work for the absent presence. Instead, our mysterious bodiless fellow must work through some technologically constructed version of himself, or 'avatar'. Some trace must be left behind which leaves a trail back to the person whose body has disappeared, and those traces can be compared. Having met a man on the stair who wasn't there twice running, we might ask him a question about his mother's maiden name, or demand a digital signature, or get him to key in a digital password or PIN.
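The book leaves the mechanics of such a check implicit. Purely as a minimal sketch (not anything described by the authors), a remote service might store only a salted hash of the PIN and compare each new attempt against it; the PIN value and function names below are invented for illustration.

```python
import hashlib
import hmac
import os

def enroll(pin: str):
    """Store a salted hash of the PIN, never the PIN itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify(pin_attempt: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash of the attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", pin_attempt.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)

salt, stored = enroll("4921")          # hypothetical PIN
print(verify("4921", salt, stored))    # True: the same 'absent' person as before
print(verify("1234", salt, stored))    # False: someone else on the stair
```

Note that even this tiny exchange produces exactly the kind of persistent trace the chapter goes on to discuss: the stored record is the meeting.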
PLENTY OF EVIDENCE
This leads to a strange paradox. A physical presence leaves behind few signs; a handshake in a closed room leaves no trace, except in the memory. Information, on the other hand, persists. In the case of the physical meeting, if something can be converted into information – via a bug, CCTV, or even DNA – then it could be established that the meeting really took place. Nevertheless, that is always an extra procedure, which could in principle be dodged by the people involved. But the man who isn't there must present some tangible piece of information to assure everyone else of his identity, and that information will remain as a semi-permanent testimony. With the disappearing body, the trace is intrinsic to the meeting taking place at all. No information, no meeting.
Each time a new technology appears that allows people to communicate without an immediate physical presence, a new abstraction is created. It may be an email log, a digital representation of non-verbal communication, a certificate of trustworthiness or whatever. But the abstraction has a concrete form in which the interaction lives on. As our bodies disappear, we leave more of these representations behind. It becomes harder to conceal what we have been doing. The technology boosts our privacy in the present (we don't have to meet people face to face), but it threatens the privacy of our past.
A number of technologies have affected the value, function and feasibility of privacy directly. In a wholly oral culture, spying requires someone to be within earshot of a conversation while simultaneously being concealed. Certain types of behaviour can only be performed in private if there are appropriate spaces protecting privacy. Even very simple technologies such as writing, walls and glass windows have effects on the private space; some give privacy, while others take it away.
The serious academic and legal interest in privacy2 really began with the development of portable photographic cameras. With their invention, one could be wandering down the street, in public, but find one's image captured, and possibly printed in journals using new printing techniques. Although nothing in the situation took place in private before the technology appeared, there was a powerful intuition that one's privacy had somehow shrunk. In the nineteenth century, this was new and serious; nowadays, it is an issue only for those unfortunate enough to be of interest to tabloid newspapers. We are all used to our images being plundered, either by photographers or CCTV cameras, and we probably act accordingly. We are perhaps somewhat less likely to spit, pick our noses or urinate in the street, for instance, if we believe that we might be seen doing so (although a quick search through the video-posting site YouTube will show many instances of unusual or noteworthy behaviour captured and preserved forever). And the whole point of a CCTV camera is to make us less likely to commit assaults or thefts, crimes which (if we are to avoid capture) have to be performed to some extent in private.
In this book, we want to explore the effects of new digital technologies on our privacy. There is no doubt that those technologies have the potential to be very injurious. It is hard to generalise – in individual cases the gains and losses, costs and benefits have to be weighed in the balance, and we shouldn't prejudge. Sometimes society benefits more than the individual loses. Sometimes the individual gains enough to justify the sacrifice of privacy. Sometimes the loss of privacy translates into a loss for society and a gain only for the state or the corporation. Very rarely is the effect catastrophic. And the costs and benefits vary; all governments misuse some data, and all governments use other data wisely, but one would be sensible to take different precautions depending on where one was. The demands of, say, the United States, France, Iran, China, Russia and North Korea are not the same, and each government has different aims, different ideologies, different dogmas and different scruples in dealing with its citizens.
Costs and benefits are the nub of the classic type of privacy problem – there are many tangible benefits to be gained by allowing intrusions into one's life, but there is also the intangible worry. We simply find it hard, as humans, to balance the tangible benefits against the intangible costs. In an evil dictatorship, one has a good idea of how personal information will be used, and so can plan accordingly. But in a capitalist democracy, it is much harder to predict how information will be used in the future. The benefits are there for all to see; the costs are not. This may be why our defences are so often down when our privacy is threatened.
Legislation is rarely the answer to our online problems. The law is intended to constrain technologists, but equally the technological capabilities constrain what the lawyers and the legislators can achieve. This rapidly evolving and unstable situation affects our understanding of privacy itself.
Applying apparently well-understood political principles is surprisingly hard in cyberspace, where we often find ourselves having to approve or disapprove of an outcome we never anticipated, or alternatively find ourselves having to decide about a principle that never seemed controversial or even relevant before.3 Where do our rights to free speech end? Do you have an inalienable right to deliver an online lecture in the US to an audience in China? Do the disadvantages incurred by those who lack computer literacy or training fatally undermine their rights to just treatment by society? How far is a society justified in promoting computer literacy? To what extent is it reasonable for a technophobic refusenik to opt out of an information society? Is there a difference in kind between unauthorised exposure of, say, a photograph by hand to a few dozen people, and uploading a digital image to a website that receives thousands of visits a day?
Important questions of principle appear suddenly from nowhere as a result of technological development, and one either has to reinterpret old principles radically in the new space, or start to think anew. Google is responsible for about half the Web searches made worldwide; finishing low down in Google's page rankings can dramatically reduce visits to a site. What responsibilities does Google have for ensuring equitable treatment? Google's PageRank algorithm works by analysing the eigenvectors of the Web's link matrix – are any principles of fair and equitable treatment somehow breached in the design of the algorithm? Did any moral philosopher ever wonder about the rights and wrongs of eigenvector analysis (our guess: an emphatic no)? What about the methods Google needs to use to ensure that the ranking isn't rigged? There have always been recommendation systems with consequences for those being recommended – financial analysts picking stock market winners, for instance – but none so central to a space or activity as Google is to the Web.
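The chapter only gestures at the mathematics. Purely as an illustration of the eigenvector idea (a toy sketch, not Google's production algorithm), repeated multiplication by a small, made-up link matrix converges on a ranking: the principal eigenvector of that matrix.

```python
# A toy illustration of the eigenvector idea behind PageRank.
# The four-page link graph is invented; real rankings use billions of pages.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)   # a page splits its score among its links
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank   # repeated application of the link matrix
    return rank

print(pagerank(links))   # page "C", with the most inbound links, ends up ranked highest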
Is privacy a private good or a public good? If an individual cedes his privacy, for example by keeping an explicit blog, is that a free choice of a sovereign individual, or a betrayal of an important principle? Do we have duties not to give our information away, in order not to weaken the idea that one's identity and personality should be inviolable? Should we refrain from using credit cards, joining loyalty schemes, using e-government websites? All of these constitute a semi-permanent record of our business.
There are no right or wrong answers, except to say that the study and science of the Web has to be deeply interdisciplinary, involving lawyers, technologists, sociologists, policing and security experts and philosophers reasoning and cooperating to try to discern and understand the new world we are creating. The consequences of principles are important evidence in judging their fitness, and these can be very different online than offline; we may have to rethink our principles, although the basic premises of the arguments remain the same.
WHAT IS ON THE HORIZON?
Privacy often clashes with other values that we consider important. In particular, information that may erode our privacy could also promote efficiency. For instance, in many major cities, particularly in crowded Europe and Asia, traffic congestion is a serious and expensive problem. Lives are constantly put at risk, not only from the pollution that idling engines cause but also from delays in getting emergency services to their destinations. Knowing where cars are is clearly important for traffic control, and technology has a role to play. This is hardly science fiction – in 2006 a committee of Members of the European Parliament recommended adoption of the eCall in-car system, which logs accidents and locates the nearest emergency vehicles, and which they claimed could save 2,500 lives per year.4 IntelliOne, an American company, has developed a traffic-monitoring system that locates mobile phones in cars twice per second, from which it can work out how fast each car is travelling, and therefore where the traffic snarl-ups are. It can even tell the difference between a traffic jam and a red light.5
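Neither the book nor the cited report explains IntelliOne's method. As a rough, hypothetical sketch, two position fixes for the same handset taken half a second apart are enough to estimate a speed, using the standard haversine great-circle formula; the coordinates below are invented.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Two fixes of the same handset, 0.5 seconds apart (invented coordinates).
fix1 = (51.50070, -0.12460)
fix2 = (51.50071, -0.12458)
distance = haversine_m(*fix1, *fix2)
speed_kmh = distance / 0.5 * 3.6
print(f"{speed_kmh:.1f} km/h")   # a persistently low value suggests a jam or a red light
```

The privacy point is that the same two numbers that reveal a snarl-up also reveal, to within a few metres, where a particular phone, and therefore a particular person, happens to be.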
It is helpful for an organisation to know what information its employees need, and so monitoring the webpages that they download is useful. Similarly, monitoring emails via keywords is also valuable; particular queries can be sent to the people who can best deal with the problem. But do we really want our bosses to know what we are looking at, or to read our conversations?
Electronic tagging of animals, children, property or criminals is becoming increasingly popular (for different reasons) in order to keep track of their whereabouts. But for the law-abiding, tagging could compromise privacy; taking the dog for a walk, going for a day out with the children or even carrying around tagged valuables would tell some sort of database where one is. The criminal is tagged to prevent the breaking of a curfew or condition of release, which is all very well, except that there may be a legitimate yet private purpose in travelling somewhere. The presumption that criminals forfeit all rights to privacy as a result of their crimes is a very harsh one, unsupported in most jurisdictions. And suppose the crime in question was a political crime?
The information involved can be extremely mundane, but in the right context and the wrong hands very useful indeed. And it may be hard in advance to realise what potential there is for undermining privacy. You don't need supercomputers.
Our homes are host to many small, relatively stupid, relatively powerless computing devices, embedded in household goods. Such gadgets, linked together, can create surprisingly intelligent and flexible behaviour, to keep heating costs and environmental damage down, or to deploy resources intelligently to save money. Five minutes before it goes off, the alarm clock could send a message to the kettle to switch on, and to the toilet seat and towel rail to warm up. Activating the shower might start the toaster. The coffee machine might sense when coffee had been poured and then send a message to the car ignition. In the car, the seat belt might tell the garage door to open, while the garage door turns the central heating down. Nothing in that chain of systems is doing anything more complex than sensing things about its own use and sending basic messages to other gizmos. Out of all that simple activity comes a sort of cleverness in the arrangement of the house.
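The book describes the behaviour rather than a protocol. The sketch below is a hypothetical illustration of the idea: each gadget publishes a one-line event, and a handful of trivial rules react to it. All the device names and triggers are invented.

```python
from collections import defaultdict

class HouseBus:
    """A toy message bus: devices subscribe to events and react by raising more events."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def publish(self, event):
        print(f"event: {event}")          # every event is also a record of what happened
        for handler in list(self.handlers[event]):
            handler()

bus = HouseBus()
# Each rule is trivially simple; the 'cleverness' lies only in the chain.
bus.on("alarm_in_5_min", lambda: bus.publish("kettle_on"))
bus.on("alarm_in_5_min", lambda: bus.publish("towel_rail_on"))
bus.on("shower_on",      lambda: bus.publish("toaster_on"))
bus.on("coffee_poured",  lambda: bus.publish("car_ignition_ready"))

bus.publish("alarm_in_5_min")   # one sensed fact fans out into several actions
bus.publish("shower_on")
bus.publish("coffee_poured")
```

The printed log is the point of the next paragraph: a complete, timestampable account of the morning routine exists simply because the devices had to talk to each other.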
Probably no-one would want an intelligent house; rather, the point is that information is being created that can be monitored, and the systems around you could be telling the world what is going on in your home. Domestic activity usually leaves little trace; snoopers are often reduced to relatively coarse methods of detection, such as scrabbling through litter bins. But a coffee machine that tells other household devices about itself could potentially be used to tell observers how many pots of coffee are made during the day, a much finer-grained detail of household life which, together with other details, could be used to paint quite an accurate picture. Your coffee machine could be used to spy on you.
Computers are getting smaller and smaller, and can be made of, or fitted into, many new and interesting materials. The possibilities are endless, but so are the dangers. For instance, the field of electronic textiles or 'washable computing' provides all sorts of fascinating futures. Fabrics that can monitor vital signs, generate heat or act as switches suggest limitless possibilities, from the ridiculous – clothes that change colour constantly – to the useful – a jacket that recharges your mobile phone. Textronic's 'textro-polymer' is made of fibres that change their resistance as they are deformed or stretched, and so can detect pressure.6 Very handy – but imagine a bedsheet that was able to detect, and broadcast, the number of people lying on it.
The information gathered by such devices has many important, interesting and genuinely useful purposes, and so they will continue to proliferate. But as they do, so will the dangers. The spy of the future will not be a shabby man with binoculars or a telephoto lens; tomorrowâs spies will be coffee machines, bed linen and clothes.
And we shouldn't assume that we will spot the dangers in advance. If the short-term benefits of a technology are good enough, we tend not to question them. Had the government demanded that we all carry around electronic devices that broadcast our whereabouts to a central database, that the information should be stored there indefinitely, and that the police should be able to access it with relatively minimal oversight, there would have been an outcry. But in the real world most if not all of us carry such devices around voluntarily, in the shape of our mobile phones. The benefits, we generally reckon, outweigh the costs – which they probably do, but that is merely luck. Precautions against misuse were not discussed widely. We sleepwalked into the danger.
CROOKS AND NANNIES: CRIME AND SURVEILLANCE IN THE REAL WORLD
This is all hypothetical so far; are there any specific examples of apparent threats to privacy from digital technologies? Here are a couple of instances where computing systems provide a new source of worry about our shrinking private space: one patently dangerous, the other less obviously so. Both examples are taken from a single newspaper, from a date in late 2005 chosen at random.
The first concerns identity theft. The article in question, subtitled 'Privacy laws gain support in America, after a year of huge violations',7 begins by drawing a disturbing analogy between the industrial revolution and our own IT-driven development.
In the industrial age, factories spewed out soot ...
Table of contents
- Cover
- Title
- Copyright
- Contents
- Preface
- 1. The case of the disappearing body
- 2. The surveillance society
- 3. Computer security meets human idiocy: privacy enhancing technologies and their limits
- 4. The power of power: Moore's Law and practical obscurity
- 5. It's the links, stupid: the Internet, the World Wide Web and privatised spaces
- 6. Man's best friend is his blog: Web 2.0
- 7. They snoop to conquer: censorship, decisional privacy and ideological privacy
- 8. Where dust is smart and reality mixed: pervasive computing
- 9. Get ready, the panopticon's here
- Endnotes
- Index