UNTIL THE BIRTH of the Internet, computers suffered in isolation. They were islands, lacking a way to connect to one another except by using cumbersome cables. That all changed in the late 1950s. With the Soviets successfully launching
Sputnik into space, and with fears of the Cold War mounting, researchers at the Rand Corporation began to explore a new computing paradigm, in hopes of developing a system that would be able to withstand a nuclear catastrophe.1
In August 1964, after years of research, Paul Baran, one of the Rand researchers, reported a breakthrough. By relying on a technology called packet switching, Baran was able to send fragments of information from one computer to another and have these fragments reassembled, almost like magic.2
Armed with Baran's research, the Advanced Research Projects Agency (ARPA) at the U.S. Department of Defense used this new technology to
create the first network of computers, ARPAnet, later renamed DARPAnet after "Defense" was added to the beginning of the agency's name, helping researchers and academics to share files and exchange resources with one another. Over the course of the next several decades, the power of this new network grew, as additional layers of technology, such as TCP/IP (the Transmission Control Protocol and Internet Protocol) and the domain name system (DNS), were developed to make it easier to identify computers on the network and ensure that information was being appropriately routed. Computers were no longer isolated.3
They were now being stitched together by thin layers of code.
Public-Private Key Encryption and Digital Signatures
As DARPAnet was getting off the ground, a second revolution was brewing. New cryptographic algorithms were creating new means for individuals and machines to swap messages, files, and other information in a secure and authenticated way. In 1976, Whitfield Diffie and Martin Hellman, two cryptographers from Stanford University, ingeniously invented the concept of "public-private key cryptography," solving one of cryptography's fundamental problems (the need for secure key distribution) while at the same time laying out a theoretical foundation for authenticated digital signatures.4
Before the advent of public-private key encryption, sending private messages was difficult. Encrypted messages traveled over insecure channels, making them vulnerable to interception. To send an encrypted message, the sender would scramble the message with a cipher using a secret "key," resulting in an impenetrable string of text. When the scrambled message arrived at its intended destination, the recipient would use the same key to decode the encrypted text, revealing the underlying message.5
One significant limitation of these early cryptographic systems was that the key was central to maintaining the confidentiality of any message sent. Parties using these systems had to agree on a key before exchanging messages, or the key somehow had to be communicated to the receiving party. Because of these limitations, keys could easily be compromised. If a third party gained access to a key, they could intercept a communication and decode an encrypted message.6
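To make this limitation concrete, the following toy sketch in Python implements a simple XOR cipher (the key and message are arbitrary illustrations): the very same key that scrambles the message is needed to unscramble it, so the key itself must somehow reach the recipient intact and in secret.

```python
# Toy shared-key (symmetric) cipher: XOR each byte of the message with the key.
# Anyone who obtains the key can read every message scrambled with it.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts or decrypts: applying the same key twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"agreed-in-advance"              # must somehow be exchanged securely
ciphertext = xor_cipher(b"attack at dawn", shared_key)

print(ciphertext)                              # impenetrable-looking bytes
print(xor_cipher(ciphertext, shared_key))      # b'attack at dawn'
```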
Public-private key cryptography solved this problem by enabling the sending of encrypted messages without the need for a pre-shared key. Under Diffie and Hellman's model, both parties would agree on a shared public key, and each party would generate a unique private key.7
The private key acted as a secret password, which parties did not need to share, whereas the public key served as a reference point that could be freely communicated. Diffie and Hellman realized that by combining the public key with one party's private key, and then combining the outcome with the private key of the other party, it was possible to generate a shared secret key that could be used to both encrypt and decrypt messages.8
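The idea can be sketched in a few lines of Python with deliberately tiny, arbitrary numbers (real systems use values thousands of bits long): each side combines the shared public values with its own private key, and both arrive at the same secret without ever transmitting it.

```python
# Minimal Diffie-Hellman key exchange with toy numbers.
p, g = 23, 5                              # shared public values known to everyone

alice_private = 6                         # never leaves Alice's machine
bob_private = 15                          # never leaves Bob's machine

alice_public = pow(g, alice_private, p)   # sent openly to Bob
bob_public = pow(g, bob_private, p)       # sent openly to Alice

# Each party combines the other's public value with its own private key.
alice_secret = pow(bob_public, alice_private, p)
bob_secret = pow(alice_public, bob_private, p)

assert alice_secret == bob_secret         # both sides derive the same shared key (2)
```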
In 1978, shortly after Diffie and Hellman publicly released their groundbreaking work, a team of cryptographers from MIT (Ron Rivest, Adi Shamir, and Len Adleman) built on Diffie and Hellman's research. They developed an algorithm, known as the RSA algorithm (after the developers' last initials), to create a mathematically linked pair of public and private keys generated by multiplying together two large prime numbers. These cryptographers figured out that it was relatively straightforward to multiply two large prime numbers together but exceptionally difficult, even for powerful computers, to calculate which prime numbers were used (a process called prime factorization).9
By taking advantage of this mathematical peculiarity, the RSA algorithm made it possible for people to broadcast their public keys widely, knowing that it would be nearly impossible to uncover the underlying private keys.10
For example, if Alice wanted to send sensitive information to Bob, she could encrypt the information using Bob's public key and publish the encrypted message openly. With the RSA algorithm, only Bob's private key would be able to decrypt the message, and the difficulty of prime factorization meant that no eavesdropper could derive that private key from the public one.
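A toy version of RSA in Python, built from the small textbook primes 61 and 53 (purely illustrative; real keys use primes hundreds of digits long), shows the flow: anyone can encrypt with the public key, but only the holder of the private exponent can reverse the operation.

```python
# Toy RSA key generation, encryption, and decryption.
p, q = 61, 53
n = p * q                          # 3233: published as part of the public key
phi = (p - 1) * (q - 1)            # 3120: computing this requires knowing p and q
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (2753); Python 3.8+ modular inverse

message = 65                       # messages are encoded as numbers smaller than n
ciphertext = pow(message, e, n)    # encrypt with Bob's public key (e, n) -> 2790
recovered = pow(ciphertext, d, n)  # decrypt with Bob's private key (d, n) -> 65

assert recovered == message
# With primes this small, n is trivial to factor; the security of real RSA rests
# on n being far too large for any computer to factor.
```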
The application of public-private key cryptography extended beyond just encrypting messages. As Diffie and Hellman recognized, by building new cryptosystems where "enciphering and deciphering were governed by distinct keys," public-private key cryptography could underpin secure and authenticated digital signatures that were highly resistant to forgery, thus replacing the need for written signatures that "require paper instruments and contracts."11
For instance, by using the RSA algorithm, a sending party could attach to a message a "digital signature" generated by combining the message with the sending party's private key.12
Once sent, the receiving party could use the sending party's public key to check the authenticity and integrity of the message. Using public-private key encryption together with digital signatures, if Alice wanted to send a private message to Bob, she could encrypt the message by using her own private key and Bob's public key and then sign the message by using her private key. Bob could then use Alice's public key to verify that the message originated from Alice and had not been altered during transmission, and safely decrypt the message by using his private key and Alice's public key.13
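Signing can be sketched with the same toy RSA keys (again, illustrative numbers only; real systems sign a cryptographic hash of the message rather than the message itself): Alice transforms the message with her private exponent, and anyone holding her public key can check the result.

```python
# Toy RSA digital signature: sign with the private key, verify with the public key.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

message = 65                        # in practice, a hash of the message is signed
signature = pow(message, d, n)      # Alice signs using her private exponent d

# Bob verifies with Alice's public key (e, n): the signature must reproduce the message.
print(pow(signature, e, n) == message)       # True: the message came from Alice, unaltered
print(pow(signature + 1, e, n) == message)   # False: a tampered signature fails
```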
Public-private key encryption sparked the imagination of a new generation of academics, mathematicians, and computer scientists, who began to envision new systems that could be constructed using these cryptographic techniques. By relying on public-private key cryptography and digital signatures, it became theoretically possible to build electronic cash, pseudonymous reputation, and content distribution systems, as well as new forms of digital contracts.14
The Commercial Internet and Peer-to-Peer Networks
In the years following the birth of the Internet and the invention of public-private key cryptography, the computing revolution spread. With the cost of computers rapidly decreasing, these once esoteric machines graduated from the basements of large corporations and government agencies onto our desks and into our homes. After Apple released its iconic personal computer, the Apple II, a wide range of low-cost computers flooded the market. Seemingly overnight, computers seeped into our daily lives.
By the mid-1990s, the Internet had entered a phase of rapid expansion and commercialization. DARPAnet had grown beyond its initial academic setting and, with some updates, was transformed into the modern Internet. Fueled by a constellation of private Internet service providers (ISPs), millions of people across the globe were exploring the contours of "cyberspace," interacting with new software protocols that enabled people to send electronic messages (via the simple mail transfer protocol, SMTP), transfer files (via the file transfer protocol, FTP), and distribute and link to media hosted on one another's computers (via the hypertext transfer protocol, HTTP). In a matter of years, the Internet had transformed from a government and academic backwater to a new form of infrastructure, one that, as the New York Times reported, did "for the flow of information … what the transcontinental railroad did for the flow of goods a century ago."15
At first, Internet services were predominantly structured using a "client-server" model. Servers, owned by early "dot-com" companies, would run one or more computer programs, hosting websites and providing various types of applications, which Internet users could access through their clients. Information generally flowed one way, from a server to a client. Servers could share their resources with clients, but clients often could not share their resources with the server or other clients connected to the same Internet service.16
These early client-server systems were relatively secure but often acted as bottlenecks. Each online service had to maintain servers that were expensive to set up and operate. If a centrally managed server shut down, an entire service could stop working; if a server received too many requests from users, it could become overwhelmed, making the service temporarily unavailable.17
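A minimal client-server sketch in Python (the "catalog" content and request handling are invented for illustration) shows the asymmetry: a single central process answers every request, and when it stops, the service stops with it.

```python
# One central server answers every client request; clients only consume.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"song catalog: track01.mp3, track02.mp3"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("127.0.0.1", 0), CatalogHandler)   # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client requests the catalog; it has no way to serve anything back.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    print(response.read().decode())

server.shutdown()   # once the central server goes away, the entire service is gone
```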
By the turn of the twenty-first century, new models for delivering online services had emerged. Instead of relying on a centralized server, parties began experimenting with peer-to-peer (P2P) networks, which relied on a decentralized infrastructure where each participant in the network (typically called a "peer" or a "node") acted as both a supplier and a consumer of informational resources.18
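A peer-to-peer sketch in Python (file names and contents are invented for illustration) makes the contrast visible: every node runs the same code, serving its own files while fetching files from other peers.

```python
# Every peer acts as both server (sharing files) and client (fetching files).
import socket
import threading

def serve_files(shared_files: dict) -> int:
    """Start a peer's server thread; returns the port it is listening on."""
    listener = socket.create_server(("127.0.0.1", 0))   # port 0: pick a free port
    def handle():
        while True:
            conn, _ = listener.accept()
            with conn:
                name = conn.recv(1024).decode()
                conn.sendall(shared_files.get(name, b"not found"))
    threading.Thread(target=handle, daemon=True).start()
    return listener.getsockname()[1]

def fetch_file(port: int, name: str) -> bytes:
    """The same peer can turn around and request a file from another peer."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(name.encode())
        return conn.recv(1024)

peer1 = serve_files({"songA.mp3": b"<bytes of song A>"})   # peer 1 shares song A
peer2 = serve_files({"songB.mp3": b"<bytes of song B>"})   # peer 2 shares song B

print(fetch_file(peer2, "songB.mp3"))   # peer 1 downloads from peer 2
print(fetch_file(peer1, "songA.mp3"))   # peer 2 downloads from peer 1
```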
This new model gained mainstream popularity with the launch of Napster. By running Napster's software, anyone could download music files from other users (acting as a client) while simultaneously serving music files to others (acting as a server). At its peak, Napster knitted together millions of computers across the globe, creating a massive music library.19
Napster's popularity, however, was short-lived. Underlying the peer-to-peer network was a centrally controlled, continually updated index of all the music available on the network. This index directed members to the music files they wanted, acting as a linchpin for the entire network.20
Although necessary for the network's operation, this centralized index proved to be Napster's downfall. Following lawsuits against Napster, courts found the company liable for secondary copyright infringement, in part because it maintained this index. Napster was forced to manage the files available to peers on the network more carefully, and it scrubbed its index of copyright-protected music. Once these changes were implemented, Napster's popularity waned and its users dispersed.21
Following Napster's defeat, a second generation of peer-to-peer networks emerged, bringing file sharing to an even larger audience. New peer-to-peer networks, such as Gnutella and BitTorrent, enabled people to share information about files located on their personal computers without the need for centralized indices.22
With Gnutella, users could find files by sending a search request, which was passed along from computer to computer on the network until the requested file was found on another peer's computer.23
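The flooding idea can be illustrated with a toy simulation in Python (a simplified model rather than Gnutella's actual protocol; the peers, neighbors, and file names are invented): a query hops from neighbor to neighbor, with a time-to-live counter to keep it from circulating forever, until some peer reports that it holds the file.

```python
# Toy simulation of flooding a search request across a peer-to-peer network.
peers = {
    "A": {"neighbors": ["B", "C"], "files": set()},
    "B": {"neighbors": ["A", "D"], "files": set()},
    "C": {"neighbors": ["A"],      "files": {"lecture.mp3"}},
    "D": {"neighbors": ["B"],      "files": {"song.mp3"}},
}

def search(start: str, filename: str, ttl: int = 4):
    """Pass the query from peer to peer until the file is found or the TTL runs out."""
    visited, frontier = set(), [(start, ttl)]
    while frontier:
        peer, hops = frontier.pop(0)
        if peer in visited or hops < 0:
            continue
        visited.add(peer)
        if filename in peers[peer]["files"]:
            return peer                     # this peer holds the requested file
        frontier.extend((n, hops - 1) for n in peers[peer]["neighbors"])
    return None

print(search("A", "song.mp3"))      # 'D': found two hops away from A
print(search("A", "missing.mp3"))   # None: no peer on the network has it
```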
BitTorrent took an alter...