
Exploding Data

Reclaiming Our Cyber Security in the Digital Age

Michael Chertoff


About This Book

A powerful argument for new laws and policies regarding cyber-security, from the former US Secretary of Homeland Security. The most dangerous threat we face today, individually and as a society, is no longer military, but rather the increasingly pervasive exposure of our personal information; nothing undermines our freedom more than losing control of information about ourselves. And yet, as daily events underscore, we are ever more vulnerable to cyber-attack. In this bracing book, Michael Chertoff makes clear that our laws and policies surrounding the protection of personal information, written for an earlier time, need to be completely overhauled in the Internet era. On the one hand, the collection of data (more widespread by business than by government, and impossible to stop) should be facilitated as an ultimate protection for society. On the other, standards under which information can be inspected, analysed or used must be significantly tightened. In offering his compelling call for action, Chertoff argues that what is at stake is not only the simple loss of privacy, which is almost impossible to protect, but also that of individual autonomy: the ability to make personal choices free of manipulation or coercion. Offering colourful stories over many decades that illuminate the three periods of data gathering we have experienced, Chertoff explains the complex legalities surrounding issues of data collection and dissemination today and charts a forceful new strategy that balances the needs of government, business and individuals alike.


CHAPTER ONE

WHAT IS THE INTERNET AND HOW DID IT CHANGE DATA?

Getting Connected

Methods for sending electronic messages have existed since the early 1800s in the form of the telegraph (the first “texting” machine!). Communications were passed by humans using Morse code to transmit electronic impulses over a wire connection. Over time, telegraphic machines were improved to allow for messages to be automated and sent worldwide. Then the telephone was invented, allowing oral conversation to be transmitted.
With the advent of the first practical telephone in 1876, a flurry of interest in expanding this convenient technology ensued. The American Telephone and Telegraph (AT&T) Company was incorporated in 1885.1 Telephone service between New York and Chicago began in 1892. Technological developments allowed for transcontinental telephone service in 1915.
The telephone system was based upon analog signals: the voice was reproduced by continuously varying an electric signal sent along a conductor. The original “networks” were completely owned by the telephone industry and comprised long connections of insulated copper wires strung out between population centers. Human operators made connections along the line. To make a phone call on this system, an uninterrupted connection had to be maintained between the caller and the person receiving the call, which is why “long distance” calls were so expensive.
The idea of sending a large amount of information, by breaking it into small “packets,” was described by researchers in both the United States and the United Kingdom in the early 1960s.2 This simple yet innovative idea was to break apart data into a digital format—blocks of “ones” and “zeroes,” transforming words in a given message into “datagrams” that would be individually labeled to indicate the origin and destination of information, like individual data postcards. A constant connection between end points would no longer be required to send a large message. These datagrams could be sent electronically along any number of interconnected communications routes and assembled at the destination point after all had arrived.
In the late 1960s, the concept of using data packets was demonstrated on networks within closed research centers (with the first being created at the Lawrence Livermore National Laboratory, a nuclear weapons research center in California). At the end of the 1960s, a relatively unknown government agency, the Advanced Research Projects Agency (ARPA), started researching methods for generically interconnecting any computer network to another. ARPA’s contracts sought to develop a generic system connecting academic computers at four places: the University of California at Los Angeles (UCLA), Stanford Research Institute (SRI), the University of California at Santa Barbara (UCSB), and the University of Utah. Because this had never before been accomplished, the challenge included creating both the system and the protocols that would allow for interlinking packet networks of various kinds. The system ARPA launched led to the establishment of networking standards for communication between computers.
The internet today continues to be based upon the relatively simple concept of breaking data apart into individual data packets (sets of ones and zeros), each with a “label” indicating where it should go. These packets are then tossed into the “internet” and are passed along one of many paths by devices called routers, which “read” the label and direct the packet on the shortest route to the intended destination. Once the packets arrive at their intended destination, they are reassembled into the original data set or message.
Put another way: the message is broken down into small individually labeled and addressed data packets. This breaking up of information is done by the computer application creating the message (such as the web browser or the email program). The individual data packets are labeled with information to help them find a way out of the computer (which door or “port” to use) and the destination on the open network (a series of numbers that help point to the address on the internet).
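A few lines of Python make this concrete. The sketch below is a toy, not a real protocol: the addresses, the port number, and the field names are all invented for illustration. It breaks a message into labeled datagrams, shuffles them to mimic travel over different routes, and reassembles them at the destination by sequence number.

    import random

    def split_into_datagrams(message, size=32):
        # Break the message into labeled "datagrams" (all field values invented).
        return [
            {"src": "192.0.2.1",           # origin address (a documentation example)
             "dst": "198.51.100.7",        # destination address
             "port": 80,                   # which "door" on the destination machine
             "seq": i,                     # position, so the pieces can be reordered
             "payload": message[i:i + size]}
            for i in range(0, len(message), size)
        ]

    packets = split_into_datagrams("Any message, however long, travels as many small labeled pieces.")
    random.shuffle(packets)  # packets may take different routes and arrive out of order

    # The receiving computer sorts by sequence number and reassembles the message.
    original = "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

Because every piece carries its own label, the order of arrival does not matter; the destination can always put the message back together.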
The internet was designed to be neutral regarding the kind of data it was channeling. It could use any available path to send its packets along. If for some reason a route stopped working, the packets could move along a different one. The network simply passes the packets of data along to the desired destination. It is only upon reassembly of the data packets when they arrive at their destination that the conveyed information is meaningful. In an era when defense authorities were concerned about their ability to preserve communications lines in the event of a war, this element of multiple transmission pathways—the internet’s resiliency—was a fundamental virtue.
The internet was also designed to be hardware independent. Any type of hardware could be used as long as it met the correct communication standards. The beauty in this architecture is that so many different types of computers can be connected to each other even with a variety of software and tools.
The internet is borderless. Data moves wherever pathways exist, without regard to legal boundaries or rules. As we will see, this feature disrupts customary legal constructs of sovereignty and jurisdiction.
To better understand how it operates, think of network communication on the internet in terms of the different steps it takes to send a message. The two most common ways of categorizing these steps are the Open Systems Interconnection (OSI) seven-layer model3 and the Internet Engineering Task Force (IETF) four-layer model.4 The OSI model is actually more specific than the IETF model in defining what happens to data bits as they are processed and collected for shipping on the internet. But to help illustrate the process of sharing information on the internet, here is a brief description of the simpler four-layer IETF model, consisting of application, transport, internet, and link layers, along with the related physical layer.
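Before walking through the layers one at a time, it may help to see them stacked. In the deliberately simplified Python sketch below (every header and field value is invented), each layer wraps the data handed down from the layer above in its own labeled envelope:

    # A toy picture of the four IETF layers wrapping a message for shipment.
    message = b"GET /index.html"                              # application layer: meaningful content
    segment = b"TCP|srcport=49152|dstport=80|" + message      # transport layer: ports, flow control
    packet = b"IP|src=192.0.2.1|dst=198.51.100.7|" + segment  # internet layer: global addresses
    frame = b"ETH|dest=aa:bb:cc:dd:ee:ff|" + packet           # link layer: delivery to the next hop

Reading the final frame from left to right retraces the journey down the stack; the receiving computer peels the envelopes off in reverse order on the way back up.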

Application Layer

Data created by all internet applications, or “apps,” is first processed on the application layer. Here the content of any internet data assembled, sent, or received makes sense to people. Within the application layer, this data can be communicated to other apps on the same “host” or computer. A web browser using the Hypertext Transfer Protocol (HTTP), the standard protocol for requesting and delivering pages on the World Wide Web, is an example of the application layer at work.
At this layer, data bits are assembled and presented to the operator of the device. Here is where those bits become readable messages, viewable pictures and videos, or a visual presentation of your bank account balance.
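As a small illustration, assuming Python’s standard urllib module and the placeholder address example.com, everything visible in the snippet below (the URL, the request, the readable page that comes back) belongs to the application layer; the layers beneath do the carrying without ever appearing in the code.

    from urllib.request import urlopen

    # Application layer: we speak HTTP and see human-readable content.
    # The transport, internet, and link layers below are invisible from here.
    # (example.com is a placeholder site reserved for documentation.)
    with urlopen("http://example.com/") as response:
        page = response.read().decode("utf-8")
    print(page[:200])  # the first lines of readable HTML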

Transport Layer

Within the transport layer, a channel for data exchange is created. Data is packaged for shipping within a local network (not on the “internet” per se, but within a locally managed network). Specific transport protocols (or standardized communication techniques) are used to begin communication, control the flow of data, and allow for data to be fragmented and sent over the broader internet, consisting of all networks connected by wire or wirelessly. The data is reassembled upon reception. These protocols address and send the data as messages within a local network.
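What “creating a channel” means in practice can be sketched with Python’s standard socket module (example.com and port 80 are placeholder choices for illustration). The single create_connection call performs TCP’s opening handshake; everything sent afterward is fragmented, shipped, and handed back reassembled and in order, with no effort from the program.

    import socket

    # Transport layer: TCP opens the channel, controls the flow, and guarantees
    # that fragmented data is delivered reassembled and in order.
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(4096)  # bytes arrive already reordered and reassembled by TCP
    sock.close()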

Internet Layer

The internet layer is where data is sorted into different destinations. Communication between separate networks opens up through the use of internet addresses—technically Internet Protocol (IP) addresses. Think of them as precise zip codes for the internet that describe the destination of the data package.
The magic of routing enables the message to be sent toward its final destination. Routing can be thought of in terms of a traffic cop standing in the middle of an intersection. Unless a car is arriving at its final destination in that neighborhood, the cop is indifferent to the passing cars. The officer also has little regard for the occupants of the vehicles. The cop’s job is simply to keep traffic flowing in an organized fashion. The same can be said of the traffic controls at the internet layer. At each intersection, or node, the local router’s job is simply to pass a message making its way through the internet on to the next intersection, with the expectation that it will eventually reach its intended destination and be flagged for collection by the traffic cop patrolling that intersection. The data packet may have to drive through multiple intersections to get there, but it is likely that it will eventually pass through its intended intersection and be pulled into the local network for processing toward the application layer.
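The traffic cop’s decision can be sketched as a simple table lookup. In the toy Python example below, the network prefixes and router names are invented, and the matching is done with the standard ipaddress module: the router inspects only the destination label, never the contents, and waves the packet toward the most specific route it knows.

    import ipaddress

    # A toy forwarding table: network prefix -> next intersection (next hop).
    table = {
        ipaddress.ip_network("198.51.100.0/24"): "router-A",
        ipaddress.ip_network("203.0.113.0/24"): "router-B",
        ipaddress.ip_network("0.0.0.0/0"): "router-C",  # default route
    }

    def next_hop(destination):
        # Look only at the address label and pick the most specific matching route.
        addr = ipaddress.ip_address(destination)
        matches = [net for net in table if addr in net]
        return table[max(matches, key=lambda net: net.prefixlen)]

    print(next_hop("203.0.113.9"))  # -> router-B

A destination that matches no specific entry falls through to the default route, the equivalent of “keep driving toward the highway.”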

Link Layer

The link layer ensures that data packets sent to it from the internet layer are passed along free of errors. This can be thought of as the clerk at the post office. In order for a registered letter to be sent, the postal clerk ensures that the letter is correctly addressed, has a readable zip code, and is properly contained within the envelope. Prior to sending the datagram onward to the next data carrier on the internet, the link layer adds final shipping information to the data from the internet layer, along with data size information. Upon shipment toward its final destination, a timer is set to await confirmation of receipt.
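That “free of errors” guarantee rests on checks like the one sketched below. The additive checksum here is deliberately simple and purely illustrative; real link layers use stronger codes such as cyclic redundancy checks (CRCs). The idea is the same: the sender computes a small check value over the frame, and the receiver recomputes it to confirm nothing was corrupted in transit.

    def checksum(data):
        # A toy additive check; real link layers use stronger codes such as CRCs.
        return sum(data) % 256

    frame = b"IP|src=192.0.2.1|dst=198.51.100.7|payload goes here"
    shipped = (frame, checksum(frame))  # the sender appends the check value

    received_frame, received_check = shipped
    assert checksum(received_frame) == received_check  # verify before passing it up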

Physical Layer

The physical layer is not an explicit element of the IETF four-layer system. But while people often describe the internet and cyberspace as “virtual,” the internet does actually exist in physical space. It is obviously composed of an infrastructure of physical cables, connectors, routers, servers, etc., that are constantly evolving and expanding. Over time, the infrastructure has evolved to transmit more and more data at increasingly faster speeds. But this physical infrastructure remains the indispensable foundation on which the internet rests. Moreover, the location of key elements of the physical layer—servers, routers, cables, etc.—creates the possibility of controlling or even blocking the data that moves through that layer. Not surprisingly, therefore, control over the physical layer is often what nation-states focus on when seeking to assert control over globally mobile data.

Trust Was an Afterthought

A critical feature (or bug) of the original internet was that trust was an afterthought. This new system was not designed to be secure, because it did not need to be. The originators knew and trusted one another to use the system for appropriate purposes. And because the number of users was initially very low, abuses could be easily rooted out and the offenders individually shamed into changing their behavior.
A by-product of this assumption of trust on the early internet was the casual approach taken to user identification. The ticket to admission on the internet, as an ecosystem, was nothing more than an end point with an IP address that could be linked to the network. While individual sites might require a password or another form of personal identification, the internet as a whole did not. As a consequence, any user could range freely over the internet, gaining access to any site not secured.
A system built on presumed trustworthiness has advantages and disadvantages. Because of its open and available architecture, the internet was able to develop very quickly. Connection could be done locally by any group desiring access. This was a stark contrast to the Ma Bell system, in which all the pieces of the network (including the telephone) were owned by the telephone company. Another advantage was that many different types of systems could easily com...
