Introduction
In 1981, the great Jon Postel wrote in RFC (request for comments) 793, "be conservative in what you do, be liberal in what you accept from others." Postel was introducing the Transmission Control Protocol (TCP) for the Defense Advanced Research Projects Agency (DARPA, 1981) Internet Program. TCP is what underpins the majority of Internet communication today, and it works because of that sentence, known today as Postel's Law. For network systems to be interoperable with each other, they need to be forgiving about the traffic they accept but fully compliant with established protocols when sending out traffic. Postel's Law also demonstrates the mindset that was necessary in the early development of the Internet. Because Internet communication was relatively new, it needed to be as open and accessible as possible.
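To make the principle concrete, here is a minimal Python sketch (invented for illustration; the function names do not come from RFC 793 or any TCP implementation) of a line parser that is liberal in what it accepts, tolerating any common line ending, but conservative in what it emits, always sending the canonical CRLF:

    # Be liberal in what you accept: tolerate CRLF, bare LF, or bare CR.
    def parse_lines(raw: bytes) -> list[str]:
        text = raw.decode("ascii", errors="replace")
        return text.replace("\r\n", "\n").replace("\r", "\n").split("\n")

    # Be conservative in what you send: always emit the canonical CRLF.
    def emit_lines(lines: list[str]) -> bytes:
        return "".join(line + "\r\n" for line in lines).encode("ascii")

    # A peer that mixes line endings is still understood...
    print(parse_lines(b"HELO example.com\nMAIL FROM:<alice@example.com>\r\n"))
    # ...but everything this node sends is strictly compliant.
    print(emit_lines(["250 OK"]))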
It is that openness that led to the rise of the Internet, to the point that today many users and organizations consider it indispensable to their daily lives. Unfortunately, that openness also means that the systems connected to the Internet are susceptible to attack.
It is the nature and evolution of these attacks that is the focus of this chapter. It is impossible to understand how to protect against today's and tomorrow's attacks without first understanding where these attacks come from and how they have morphed over the years from scientific research projects, to fame-seeking stunts, to multibillion-dollar businesses. As threats continue to evolve, so must the solutions to those threats.
A brief history of network security
Any discussion of using intelligence to improve network security must begin with an understanding of the history of network security. After all, without an understanding of the past and how security threats have evolved, it would be difficult to understand where the future of network security is headed.
Depending on who is asked, what is known as the Internet today started in 1982 with the release of the Transmission Control Protocol/Internet Protocol (TCP/IP) by the Defense Information Systems Agency (DISA) and the Advanced Research Projects Agency (ARPA). According to Hobbes' Internet Timeline (Zakon, 2014), this was the first time that the word "internet" was used to define a set of connected networks.
At this point in time the Internet was very small, consisting primarily of universities and government agencies, and very open. To access another node on the network, operators had to share address information. Each operator was responsible for maintaining a table of all the nodes on the Internet, and updates were sent out when things changed.
The Morris Worm
A lot of that openness changed on November 2, 1988, with the release of the Morris Worm (Seltzer, 2013). There had been security breaches prior to the Morris Worm, but the Morris Worm focused the attention of the fledgling network on security.
Ostensibly built to conduct a census of the nodes on the Internet, the Morris Worm wound up knocking an estimated 10% of the fledgling Internet offline (6,000 of an estimated 60,000 nodes). The Morris Worm used a number of tricks still in use by worm creators today, including redirection (Morris was at Cornell but launched the worm from the Massachusetts Institute of Technology [MIT]), password guessing, automated propagation, and the use of a buffer overflow attack.
Good intentions or not, the Morris Worm clearly was designed to gain unauthorized access to nodes on the Internet and to bypass what little security was in place. The Morris Worm led directly to the founding of the CERT Coordination Center (CERT/CC) at Carnegie Mellon University, an organization that still exists today. The Morris Worm also pushed the development of the firewall as the first network security measure.
Firewalls
Like many terms used to describe network protocols, firewall is a term taken from the physical world. A firewall is a structure that is used to slow down or prevent a fire from spreading from one house to another or one part of a house to another part. Its purpose is to contain fire and limit damage to a single structure or part of that structure.
Most modern townhouses and condominiums have firewalls between adjacent units, and firewalls can be even larger. The January 1911 edition of the Brotherhood of Locomotive Firemen and Enginemen's Magazine (p. 90) describes a firewall erected in Manhattan stretching from 9th Street to 5th Avenue. This firewall was built to protect Manhattan from large-scale fires similar to those that had devastated Baltimore and San Francisco.
The earliest form of network firewall was developed in the late 1980s and initially consisted simply of packet filtering rules added to routers. Packet filtering at the gateway level allowed network operators to stop known bad traffic from entering the network, but it provided only limited improvements to security. These rules are difficult to maintain and very prone to false positives (blocking traffic that is actually good), and they require extensive knowledge of whom everyone in the organization needs to talk to and whom everyone needs to block. Packet filtering was adequate when the Internet consisted of 60,000 nodes and was considered very small, but it quickly outgrew its usefulness and a new form of firewall was needed.
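To illustrate, the following Python sketch shows roughly how a stateless packet filter works; the rule format, field names, and port choices are invented for this example and do not reflect any particular router's syntax:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int
        protocol: str  # e.g., "tcp" or "udp"

    # Rules are checked in order; the first match wins, and anything that
    # matches no rule is dropped (default deny).
    RULES = [
        ("allow", "tcp", 25),  # inbound mail (SMTP)
        ("deny",  "tcp", 79),  # finger, one service abused by the Morris Worm
    ]

    def filter_packet(pkt: Packet) -> str:
        for action, proto, port in RULES:
            if pkt.protocol == proto and pkt.dst_port == port:
                return action
        return "deny"

    print(filter_packet(Packet("192.0.2.7", "10.0.0.1", 25, "tcp")))  # allow

Note that each packet is judged in isolation: the filter has no memory of earlier packets, which is precisely the limitation that stateful filtering addresses.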
The next type of firewall created was one that allowed for stateful packet filtering. These were the first commercial firewalls, introduced by Digital Equipment Corporation (DEC) and AT&T. A stateful packet filtering firewall is one that maintains a table of all connections that pass through it. It bases its decisions not only on the type of traffic but also on the state of the connection between the two hosts, which allows it to make more contextual decisions about whether traffic is good or bad. Stateful packet filtering firewalls were really the start of the commercial network security market. From this point firewall development proceeded at a rapid pace, and features like proxy capabilities, deep packet inspection, application awareness, and VPN (virtual private network) support were added to firewalls.
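In Python pseudocode, the stateful idea looks something like the sketch below (an illustration of the concept, not a model of the DEC or AT&T products): the firewall records outbound connections and admits inbound packets only when they match a connection it has already seen.

    # Connection table: (client_ip, client_port, server_ip, server_port)
    connections: set[tuple[str, int, str, int]] = set()

    def handle_outbound(src: str, sport: int, dst: str, dport: int) -> None:
        # Remember the connection so the reply can be matched later.
        connections.add((src, sport, dst, dport))

    def handle_inbound(src: str, sport: int, dst: str, dport: int) -> str:
        # Allow the packet only if it is a reply to a known connection.
        if (dst, dport, src, sport) in connections:
            return "allow"
        return "deny"

    handle_outbound("10.0.0.5", 40001, "192.0.2.7", 80)
    print(handle_inbound("192.0.2.7", 80, "10.0.0.5", 40001))     # allow
    print(handle_inbound("198.51.100.9", 80, "10.0.0.5", 40001))  # deny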
Intrusion detection systems
The problem with a firewall is that, for the most part, administrators must be aware of a threat in order to put a rule in place that will block it. That is not always the case: even in the late 1990s there were firewalls, like the Raptor Firewall, that were able to assess traffic against published standards (e.g., the RFC standards for Secure Sockets Layer [SSL], Simple Mail Transfer Protocol [SMTP], Hypertext Transfer Protocol [HTTP], etc.). The capability to monitor for protocol compliance puts them in line with the capabilities of some of today's "next-generation" firewalls. Unfortunately, most firewalls were not able to alert on threats for which they had no rule, a problem that makes firewalls ineffective against unknown threats. To handle unknown threats most networks deploy an Intrusion Detection System (IDS). An IDS is useful for detecting threats either by traffic pattern (e.g., a URL containing characters associated with Qakbot, a data-stealing malware family) or by anomalous activity (e.g., these are Alice's credentials, but Alice does not usually log into the VPN from Estonia).
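In toy form, the two detection styles look something like the Python sketch below; the signature string and the user baseline are invented for illustration and are not real Qakbot indicators or real login data.

    # Signature detection: alert when traffic matches a known-bad pattern.
    SIGNATURES = {"/t?n=": "possible Qakbot check-in"}  # hypothetical pattern

    def inspect_url(url: str) -> str | None:
        for pattern, name in SIGNATURES.items():
            if pattern in url:
                return f"ALERT: {name} in {url}"
        return None

    # Anomaly detection: alert when behavior departs from a user's baseline.
    USUAL_COUNTRIES = {"alice": {"US", "CA"}}

    def inspect_login(user: str, country: str) -> str | None:
        if country not in USUAL_COUNTRIES.get(user, set()):
            return f"ALERT: {user} logged into the VPN from {country}"
        return None

    print(inspect_url("http://203.0.113.9/t?n=0815"))
    print(inspect_login("alice", "EE"))  # Alice's credentials, from Estonia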
There were a number of precursors to the IDS proposed in the 1980s, primarily to manage the auditing of log files, which until that point was a manual process, often involving printing logs onto reams of paper each week and verifying them by hand.
The first modern IDS, and one that underpins much of the way today's IDSs work, was proposed by Dorothy Denning in 1987 in a paper entitled "An Intrusion-Detection Model."
The model Denning outlined in her paper was for an Intrusion-Detection Expert System (IDES), which had six main components:
1. Subjects
2. Objects
3. Audit Records
4. Profiles
5. Anomaly Records
6. Activity Rules
By combining an understanding of who the users of the network are (subjects) with the resources those users are trying to access (objects) and the logs (audit records) from those resources, the IDES was able to build a strong profile of who is on the network and what they are doing on that network.
Taking that first set of data, the IDES operators were able to automatically build a table mapping out the general patterns of activity on the network (profiles), as well as records of any activity that fell outside of those patterns (anomaly records). This allowed network administrators to develop signatures that alert when unexpected behavior occurs (activity rules).
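A loose Python sketch of that flow, mapping Denning's terms onto simple data structures (this illustrates the concept only, not her actual statistical model):

    from collections import defaultdict

    profiles = defaultdict(set)  # subject -> objects it normally accesses
    anomaly_records = []         # deviations flagged by the activity rule

    def observe(subject: str, obj: str, training: bool) -> None:
        # Audit record: one (subject, object) access event.
        if training:
            profiles[subject].add(obj)              # build the profile
        elif obj not in profiles[subject]:
            anomaly_records.append((subject, obj))  # record the anomaly

    # Training: Alice normally reads the sales share.
    observe("alice", "//fs1/sales", training=True)
    # Detection: a payroll access falls outside her profile.
    observe("alice", "//fs1/payroll", training=False)
    print(anomaly_records)  # [('alice', '//fs1/payroll')]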
Moving from the original IDES to the modern IDS, these solutions tend to be network-centric, which means they have lost some of the depth proposed in the original IDES design. The modern IDS is good, often very good, at detecting malicious network activity, but not necessarily at detecting malicious activity on the desktop or anomalous user activity. An IDS might see that Alice has emailed her client list to her Gmail account, but it does not know that Alice just gave her notice. For many years the focus of the IDS and IDS developers was building better signatures and delivering those signatures faster, rather than building an understanding of the network and learning the context of its traffic. Fortunately, that is starting to change.
The desktop
In 1971 a programmer named Robert Thomas at BBN Technologies developed the first working computer virus (DNEWS, 2011); the idea had been proposed as early as 1949. The virus, known as Creeper, was a self-replicating worm that jumped from one TENEX time-sharing system to another, running on DEC PDP-10s connected to ARPANET. Ostensibly created as a research tool, the virus would start a print job, stop it, and make the jump to another system, removing itself from the original system. Once installed on the new system, it would print the message "I'M THE CREEPER. CATCH ME IF YOU CAN!" on the screen.
Fittingly, Creeper led directly to the first antivirus program, the Reaper, which jumped from system to system removing the Creeper.
The first widespread virus created outside of the network was called Elk Cloner. Rich Skrenta, a 15-year-old high school student, programmed Elk Cloner in February 1982. Elk Cloner was a boot-sector virus that infected Apple II machines. When these machines booted from an infected floppy, the virus was copied into the computer's memory and remained installed on the computer. In addition, any uninfected floppy that was inserted into the machine had the Elk Cloner virus copied to it, allowing it to spread from machine to machine via floppy.
The virus did not do much damage, but every fiftieth time it ran it would display the following poem:
Elk Cloner: The program with a personality
It will get on all your disks
It will infiltrate your chips
Yes, it's Cloner!
It will stick to you like glue
It will modify RAM too
Send in the Cloner!
Of course, virus creation was just getting started. Throughout the 1980s new and innovative ways to spread infections were de...