How can we regulate the Internet? This seemingly innocent question has been the subject of countless books and articles for just over the past 20 years. Part of the reason why it continues to be a vibrant and relevant topic is the difference of opinions on what is going on. On the one hand, we have those who believe that the Internet is a free and open space that requires no regulation, and/or is incapable of being regulated. On the other hand, we have those who think that regulation is not only desirable, but that the Internet as it exists now is completely regulated because of the prevalence of state surveillance, and that all semblance of freedom is a mere illusion.
The Internet has become an integral part of our lives, but if you were to ask the average user about how it works, they would not be able to provide any details other than the fact that it is a medium to transmit information. This is as it should be; as technologies become widespread it is not necessary to understand how they operate – it is possible to drive a car without understanding the intricacies of the internal combustion engine.
However, when it comes to regulating the Internet, it is useful to at least have an idea of what lies under the hood, since it is difficult to try to exert some control over something that one does not understand.
From a regulatory perspective, the first element that should be remarked upon is that the Internet is a ‘network of networks’1 that operates using common protocols designed to ensure resilience, distribution, decentralisation and modularity. It is now part of the history of the Internet that it started as a military programme intended to create a communication infrastructure that could survive a nuclear strike. To achieve this objective, a communication network needs to be decentralised, with no central point or governing node. Similarly, communications must be capable of being broken down into pieces, sent through various links in the network, and reassembled automatically by the recipient. The network must also accommodate heterogeneous pieces of hardware that can talk to each other using standard communication tools, and everything must be designed with simplicity and modularity in mind.2
The best way to understand the Internet is to separate its architectural features into layers of functionality. By doing this, it is possible to identify the various elements according to the function that they serve. The Internet has four layers:
1.Link layer. The link layer consists of protocols that allow a host to connect to gateways and routers within a network, usually a local area network (LAN) (eg Ethernet protocols).
2.Internet layer. Because the Internet is a network of networks, every computer connected to it must be able to find its components. The Internet Protocol (IP) fulfils this function. Unlike the transport and application layers, it neither carries instructions to reach a destination nor conveys the actual communications; rather, it allows data packets to reach their destinations by identifying each participating computer through its IP address.
3.Transport layer. This provides end-to-end protocols for communication between hosts in the system, such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
4.Application layer. This is the top communication level, made up of protocols for user applications such as sending mail (Simple Mail Transfer Protocol, or SMTP) and transferring web pages (Hypertext Transfer Protocol, or HTTP); it also includes protocols used for system support, such as the one that identifies servers in the system by name instead of IP address (the Domain Name System, or DNS).
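The way these four layers cooperate can be sketched in a few lines of code. The following is purely illustrative (a toy model, not a real network stack, with invented header formats): each layer wraps the payload handed down from the layer above with its own header, which is how an application message actually travels across the Internet.

```python
# Toy illustration of the four-layer model: each layer adds its own
# header around the data from the layer above. All header formats and
# addresses here are invented for illustration only.

def application_layer(message: str) -> bytes:
    # eg an HTTP-style request carrying the user's data
    return f"GET /page HTTP/1.1\r\n\r\n{message}".encode()

def transport_layer(segment: bytes, src_port: int, dst_port: int) -> bytes:
    # TCP/UDP add port numbers so hosts can hold many conversations at once
    return f"TCP {src_port}->{dst_port}|".encode() + segment

def internet_layer(packet: bytes, src_ip: str, dst_ip: str) -> bytes:
    # IP adds addresses so routers can deliver the packet across networks
    return f"IP {src_ip}->{dst_ip}|".encode() + packet

def link_layer(frame: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Ethernet adds hardware addresses for the next hop on the local network
    return f"ETH {src_mac}->{dst_mac}|".encode() + frame

wire = link_layer(
    internet_layer(
        transport_layer(application_layer("hello"), 49152, 80),
        "192.0.2.10", "203.0.113.5"),
    "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
print(wire.decode())
```

The recipient peels the headers off in the reverse order, which is why each layer can be designed, standardised and, potentially, regulated separately.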
The idea behind the above classification for regulatory purposes is that some elements are so fundamental to the functioning of the Internet that they cannot be regulated. The first three layers are specifically about communication between networks and computers, and they are made up of protocols that have been established by the various standards-setting bodies such as the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), the Internet Engineering Steering Group (IESG), the Internet Architecture Board (IAB) and the Internet Society (ISOC). The role of the first layers is to distribute information across the networks.
The application layer is different. Most user activity takes place here, and this is where most communication subject to regulation will take place. The standards used to get communications from one computer to another tend to be irrelevant for regulators, other than perhaps being of interest from a governance perspective. It is the actual communications that matter, and these take place through applications.
Having said that, the communication layers may be relevant for the type of regulation that is proposed. For example, a country that wants to control the flow of data coming in and out of its jurisdiction would place technical controls at the Internet layer level in order to filter or block content before it reaches its destination.
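The sort of Internet-layer control just described can be reduced to a simple idea: a gateway at the border inspects each packet's destination address against a blocklist. The following sketch shows the core logic only; the blocked range is invented for illustration, and real national filtering systems are vastly more elaborate.

```python
# Toy sketch of Internet-layer filtering: block packets whose destination
# IP address falls within a forbidden range. The blocklist is invented.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),  # hypothetical blocked range
]

def should_block(dst_ip: str) -> bool:
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(should_block("203.0.113.42"))  # True: inside the blocked range
print(should_block("198.51.100.7"))  # False: allowed through
```

Because the decision is made purely at the Internet layer, it works regardless of which application generated the traffic, which is precisely why this layer is attractive to states seeking border controls on data.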
II.A Tale of Two Internets
Perhaps the best way to explain the two opposing views of Internet regulation is by contrasting two very distinct case studies that exemplify the differences in experience and perception that lead us to see the Internet in such different lights.
A.The Dark Web
As explained above, the Internet is made up of common protocols that allow users to communicate and exchange information with one another. The ‘visible’ Internet makes use of the four layers, and it consists of shared applications such as the World Wide Web, email, social media apps, games, file transfers, etc. Users connect to the network using the communication layers, and they can connect to one another using the application layer.
Beneath the visible Internet exists a network that not many know how to access, known as the Dark Web (or Dark Net). It uses the Internet’s own communication layers, but it consists of applications shared by relatively few technically minded users. The result is a space that is rarely visited, heavily encrypted and seldom regulated.
James Bartlett describes the Dark Web as follows:
For some, the dark net is the encrypted world of Tor Hidden Services, where users cannot be traced and cannot be identified. […] It has also become a catch-all term for the myriad of shocking, disturbing and controversial corners of the Net – the realm of imagined criminals and predators of all shapes and sizes.3
One of the most visible Internet applications is the World Wide Web, and we commonly surf through it with web browsers that can read web pages created using the Hypertext Markup Language (HTML). However, because the Internet is decentralised and modular, it is possible for anyone to come up with new applications and protocols that use the communication layers. I could program a new browser that uses my own application protocol, and as long as there is someone else using it, then that would still be part of the Internet, but it would be invisible for most users.
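This point, that anyone can invent a new application-layer protocol on top of the existing transport layer, can be demonstrated in a few lines. The sketch below defines a made-up one-line protocol (the name ‘MYPROTO/1.0’ and the port are invented for illustration) running over an ordinary TCP socket; only peers that speak it can make sense of the exchange, yet it travels over the same Internet as the Web.

```python
# A made-up application protocol ("MYPROTO/1.0 <message>") running over
# plain TCP. The protocol name and port number are invented for this demo.
import socket
import threading

HOST, PORT = "127.0.0.1", 9099  # arbitrary local address for the demo

srv = socket.create_server((HOST, PORT))  # listening before any client connects

def serve_one():
    conn, _ = srv.accept()
    with conn:
        line = conn.recv(1024).decode()
        # The 'protocol' is nothing more than an agreed message format
        if line.startswith("MYPROTO/1.0 "):
            conn.sendall(b"MYPROTO/1.0 OK")

threading.Thread(target=serve_one, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"MYPROTO/1.0 hello")
    reply = client.recv(1024).decode()
srv.close()
print(reply)
```

To every router along the way this is just TCP traffic; the meaning of the messages exists only for the two endpoints, which is exactly how Dark Web applications remain invisible to ordinary users.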
One such application is the Tor Browser, which uses the Tor Hidden Service protocol to connect computers that are also connected to the Internet.4 This is a communications protocol created by a group of encryption enthusiasts, designed to anonymise data transferred through the Internet by using volunteer relays and routers that mask a user’s identity in order to prevent traffic snooping and surveillance. By installing the Tor Browser on their computers, users can view websites that are not accessible through a mainstream browser like Firefox or Chrome.
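The relay idea at the heart of Tor is often called onion routing: the sender wraps the message in successive layers of encryption, one per relay, so that each relay can remove only its own layer and learns only the next hop. The toy sketch below uses XOR as a stand-in cipher purely for illustration; Tor’s real cryptography and circuit design are far more sophisticated.

```python
# Conceptual sketch of onion routing: one encryption layer per relay.
# XOR stands in for real encryption; keys and message are invented.

def xor_layer(data: bytes, key: int) -> bytes:
    # Toy symmetric 'cipher': XOR every byte with a one-byte key
    return bytes(b ^ key for b in data)

relay_keys = [0x17, 0x42, 0x6B]  # one shared key per relay in the circuit

# Sender: apply the layers in reverse so the first relay peels first
message = b"meet at midnight"
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay peels exactly one layer; no single relay sees both the
# sender's identity and the final plaintext destination
for key in relay_keys:
    onion = xor_layer(onion, key)

print(onion)  # the original message emerges only after the last relay
```

The regulatory significance is that no individual relay, and therefore no individual jurisdiction hosting a relay, holds enough information to link sender, content and destination.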
The anonymous nature of the Dark Web makes it possible to post any type of content, and using anonymous and decentralised payment methods like Bitcoin, users can purchase almost anything they desire.5
At the time of writing, it was possible to access pages on the Dark Web advertising various drugs, UK passports, US identification documents, hacking services, stolen credit cards and hacked social media accounts.
All of this has given the Dark Web an unsavoury reputation, coupled with the publicity gained from the trial of Ross Ulbricht, the operator of the Silk Road website, a Dark Web marketplace for all sorts of illegal material.6
The presence of such a vast and unregulated space tends to lend credence to the idea that the Internet cannot be regulated.
B.State Surveillance
Thanks to the series of revelations made by former National Security Agency (NSA) contractor Edward Snowden,7 we have a troubling picture of extreme control through mass surveillance conducted by the NSA in the US and the Government Communications Headquarters (GCHQ) in the UK.
Although the Internet is supposed to be a decentralised, distributed and open telecommunications network, the surveillance revelations have unearthed a much more controlled and centralised system than previously thought possible. Snowden left his life as a contractor and travelled to Hong Kong where he contacted journalist Glenn Greenwald and filmmaker Laura Poitras to whom he gave access to a series of files that laid bare the extent of state surveillance.
The revelations showed a troubling amount of surveillance at all levels, and it is not this chapter’s remit to cover these in detail. Following the above classification of the Internet’s architecture into layers, it is possible to highlight just some of the issues uncovered:
• The Tailored Access Operations (TAO) unit is the NSA’s powerful hacking division, which specialises in breaking into a target’s every communication by tinkering with their access points to the network.8 The unit uses built-in backdoors in hardware such as routers to tap into people’s connections at the point of origin.
• The NSA has managed to tap some of the most important underwater cable systems, which make up the very backbone of the Internet.9 The tapping is possible because communications in the transport layer are not encrypted by default.
• Relatedly, it has been revealed that the NSA may have had a hand in the lack of default encryption in the core Internet protocols (TCP/IP): Vint Cerf, one of the fathers of the Internet, has claimed that he was stopped by the NSA from including an encryption protocol in the transport layer.10
• One of the most troubling revelations has been that the NSA has managed to obtain the collaboration of technology firms to conduct surveillance within applications, including allegedly secure and encrypted communication tools like Skype.11 The Sigint Enabling Project (‘Sigint’ stands for signals intelligence) creates partnerships with developers and companies to build exploits into telecommunication tools.
All of the above speaks of a much more controlled Internet, filled with taps, exploits and collusion on the part of industry, culminating in an unprecedented level of surveillance.
C.Will the Real Internet Please Stand Up?
If one reads each of the above two sections separate...