Part One: The QOS Realm
1 The QOS World
Quality of Service (QOS) has always been in a world of its own, but as the technology has been refined and has evolved in recent years, its usage has increased to the point where it is now considered a necessary part of network design and operation. As with most technologies, large-scale deployments have led to increased maturity, and QOS is no exception.
The current trend in the networking world is convergence, abandoning the concept of several separate physical networks in which each one carries specific types of traffic, moving towards a single, common physical network infrastructure. The major business driver associated with this trend is cost reduction: one network carrying traffic and delivering services that previously demanded several separate physical networks requires fewer resources to achieve the same goal.
One of the most striking examples is voice traffic, which was previously supported on circuit-switched networks and is now starting to be delivered on the “same common” packet-switched infrastructure.
The inherent drawback in having a common network is that the road is now the same for different traffic types, which poses the challenge of how to achieve a peaceful co-existence among them since they are all competing for the same network resources.
Allowing fair and even competition by having no traffic differentiation does not work, because different types of traffic have different requirements, just as an ambulance and a truck on the same road have different needs. The first attempt to solve this problem was to make the road wider, that is, to deploy network resources in an over-provisioned manner, following the logic that although the split of resources was not ideal, so many free resources would be available at all times that the problem would be minimized. However, this approach works against the major business driver behind network convergence, namely cost reduction. Moreover, such over-provisioning must account not only for the steady state but also for possible network failure scenarios.
QOS does not widen the road. Rather, it allows the division of network resources in a non-equal manner, favoring some and short-changing others instead of offering an even split of resources across all applications. A key point with QOS is that a non-equal split of resources implies that there cannot be “win–win” situations. For some to be favored, others must be penalized. Thus, the starting point in QOS design is always to first select who needs to be favored, and the choice of who gets penalized follows as an unavoidable consequence.
In today’s networks, where it is common to find packet-oriented networks in which different types of traffic such as voice, video, business, and Internet share the same infrastructure and the same network resources, the role of QOS is to allow the application of different network behaviors to different traffic types.
Hence, for a specific traffic type, two factors must be considered: characterizing the behavior that the traffic requires from the network, and determining which QOS tools can be set in motion to deliver that behavior.
1.1 Operation and Signaling
The QOS concept is somewhat hard to grasp at first because it is structurally different from the majority of other concepts found in the networking world. QOS is not a standalone service or product, but rather a concept that supports the attributes of a network by spanning horizontally across it.
QOS can be split into two major components: local operation and resource signaling. Local operation is the application of QOS tools on a particular router. Resource signaling can be defined as the tagging of packets in such a way that each node in the entire path can decide which QOS tools to apply in a consistent fashion to assure that packets receive the desired end-to-end QOS treatment from the network.
These two components are somewhat similar to the IP routing and forwarding concepts. Routing is a task performed jointly by all routers in the network. All routers exchange information among them and reach a consistent agreement in terms of the end-to-end path that packets follow. As for forwarding, each router performs the task individually and independently from the rest of the network using only local information.
Routing is comparatively more complex than forwarding, because it involves cooperation among all the routers in the network. However, routing does not need to work at wire speed. Forwarding is simpler. It is a task performed by a router individually and independently. However, it must operate at wire speed.
An analogy between routing and forwarding, and QOS resource signaling and local operation, can be drawn. QOS resource signaling is somewhat analogous to the routing concept. It involves all routers in the network, but has no requirement to work at wire speed. QOS local operation is analogous to the forwarding concept. Like forwarding, QOS local operation is, in concept, simpler, and each router performs it independently and individually. Also, QOS local operation must operate at wire speed.
However, there is a major difference between QOS resource signaling and routing: there are no standardized specifications (such as those that exist for any routing protocol) regarding what is to be signaled, and as a result there is no standard answer for what should be configured on all network routers to achieve the desired end-to-end QOS behavior. The standards in the QOS world do not give us an exact “recipe” as they do for routing protocols.
1.2 Standards and Per-Hop Behavior
The two main standards in the IP realm that are relevant to QOS are Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ is described in RFC 1633 [1] and DiffServ in RFC 2475 [2].
IntServ was developed as a highly granular flow-based end-to-end resource reservation protocol, but because of its complexity it was never commonly deployed. However, some of its concepts have transitioned to the MPLS world, namely to the Resource Reservation Protocol (RSVP).
The DiffServ model was developed based on a class scheme, in which traffic is classified into classes of service rather than into flows as is done with IntServ. Another major difference is the absence of end-to-end signaling, because in the DiffServ model each router effectively works in a standalone fashion.
With DiffServ, a router differentiates between various types of traffic by applying a classification process. Once this differentiation is made, different QOS tools are applied to each specific traffic type to effect the desired behavior. However, the standalone model used by DiffServ reflects the fact that the classification process rules and their relation to which QOS tools are applied to which type of traffic are defined locally on each router. This fundamental QOS concept is called Per-Hop Behavior (PHB).
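The PHB concept can be illustrated with a short sketch. This is a purely hypothetical example, not any vendor's implementation: the class names, DSCP values, and tool settings are invented for illustration. The point is that each router consults only its own locally configured tables, and nothing is signaled between neighbors.

```python
# Local classification rules, configured independently on each router
# (DSCP values and class names are illustrative only)
DSCP_TO_CLASS = {
    46: "voice",        # EF code point
    26: "video",        # AF31 code point
    0:  "best-effort",  # default marking
}

# Local mapping from class of service to the QOS tools this router applies
CLASS_TO_TOOLS = {
    "voice":       {"queue": "priority", "drop_profile": "low"},
    "video":       {"queue": "assured",  "drop_profile": "medium"},
    "best-effort": {"queue": "default",  "drop_profile": "high"},
}

def classify(dscp: int) -> str:
    """Return the locally configured class for a packet's DSCP value."""
    return DSCP_TO_CLASS.get(dscp, "best-effort")

def per_hop_behavior(dscp: int) -> dict:
    """The PHB applied to the packet is decided entirely from local state."""
    return CLASS_TO_TOOLS[classify(dscp)]

print(per_hop_behavior(46))  # {'queue': 'priority', 'drop_profile': 'low'}
```

Note that an unknown marking silently falls into the best-effort class, which is exactly why consistency of these local tables across routers matters.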
With PHB, there is no signaling between neighbors or end to end, and the QOS behavior at each router is effectively defined by the local configuration on the router. This operation raises two obvious concerns. The first is how to achieve coherence in terms of the behavior applied to traffic that crosses multiple routers, and the second is how to propagate information among routers.
Coherence is achieved by assuring that the routers participating in the QOS network act as a team. This means that each one has a consistent configuration deployed which assures that as traffic crosses multiple routers, the classification process on each one produces the same match in terms of which different traffic types and which QOS tools are applied to the traffic.
Unfortunately, the PHB concept has its Achilles’ heel. The end-to-end QOS behavior of the entire network can be compromised if a traffic flow crosses a number of routers and just one of them does not apply the same consistent QOS treatment, as illustrated in Figure 1.1.
In Figure 1.1, the desired behavior for the white packet is always to apply PHB A. However, the middle router applies a different PHB, breaking the desired consistency across the network in terms of the QOS treatment applied to the packet.
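The failure mode of Figure 1.1 can be expressed as a simple consistency check across the routers' local classifier tables. This is a hypothetical illustration: the router names and DSCP-to-class mappings are invented, and real networks have no such central view, which is precisely the difficulty.

```python
# Each router's locally configured classifier (hypothetical values).
# R3 is the misconfigured middle hop from the Figure 1.1 scenario.
ROUTERS = {
    "R1": {46: "voice", 0: "best-effort"},
    "R2": {46: "voice", 0: "best-effort"},
    "R3": {46: "video", 0: "best-effort"},   # inconsistent mapping for 46
}

def inconsistent_markings(routers: dict) -> set:
    """Return the DSCP values that not all routers classify identically."""
    bad = set()
    all_dscps = {d for table in routers.values() for d in table}
    for dscp in all_dscps:
        classes = {table.get(dscp) for table in routers.values()}
        if len(classes) > 1:       # routers disagree on this marking
            bad.add(dscp)
    return bad

print(inconsistent_markings(ROUTERS))  # {46}: R3 breaks the end-to-end PHB
```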
The word consistent has been used frequently throughout this chapter. However, the term should be viewed broadly, not through a microscopic perspective. Consistency does not mean that all routers should have identical configurations. Also, as we will see, the tools applied on a specific router vary according to a number of factors, for example, the router’s position in the network topology.
The second challenge posed by the PHB concept is how to share information among routers because there is no signaling between neighbors or end to end. Focusing on a single packet that has left an upstream router and is arriving at the downstream router, the first task performed on that packet is classification. The result of this classification is a decision regarding which behavior to apply to that packet. For instance, if the upstream router wants to signal information to its neighbor regarding this specific packet, the only possible way to do so is to change the packet’s contents by using the rewrite QOS tool, described in Chapter 2. Rewriting the packet’s content causes the classification process on the downstream router to behave differently, as illustrated in Figure 1.2.
However, the classification process on the downstream router can simply ignore the contents of the packet, so the success of such a scheme always depends on the downstream router’s consistency in terms of its classifier setup. A somewhat similar concept is the use of the multi-exit discriminator (MED) attribute in an external Border Gateway Protocol (EBGP) session. The success of influencing the return path that traffic takes depends on how the adjacent router deals with the MED attribute.
Although it does pose some challenges, the DiffServ/PHB model has proved to be highly popular. In fact, it is so heavily deployed that it has become the de facto standard in the QOS realm. The reasons for this are its flexibility, ease of implementation, and scalability, all of which result from the absence of end-to-end signaling and from classifying traffic into classes rather than flows, which means that less state information needs to be maintained among the network routers. The trade-off is the set of challenges described above but, as the reader will see throughout this book, these issues pose no risk if handled correctly.
As an aside, in Multi-Protocol Label Switching (MPLS) networks with Traffic Engineering (TE), it is possible to create logical paths called label-switched paths (LSPs) that function like tunnels across the network. Each tunnel has a certain amount of bandwidth reserved solely for it end to end, as illustrated in Figure 1.3.
What MPLS-TE changes in terms of PHB behavior is that traffic that is placed inside an LSP has a bandwidth assurance from the source to the destination. This means, then, that in terms of bandwidth, the resource competition is limited to traffic inside that LSP. Although an MPLS LSP can have a bandwidth reservation, it still requires a gatekeeper mechanism at the ingress node to ensure that the amount of traffic inserted in the LSP does not exceed the reserved bandwidth amount.
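The ingress gatekeeper mentioned above is typically a policer. A minimal token-bucket sketch follows; the rate, burst size, and drop-on-excess action are illustrative assumptions, not a specific vendor implementation.

```python
class TokenBucket:
    """Token-bucket policer: admits traffic up to a reserved rate plus burst."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.burst = burst_bytes      # bucket depth (maximum burst)
        self.tokens = burst_bytes     # bucket starts full
        self.last = 0.0               # timestamp of the last packet

    def admit(self, now: float, pkt_bytes: int) -> bool:
        """True if the packet fits the reservation; False means it is policed."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

# A 1 Mbit/s LSP reservation with a 10 kB burst allowance
gate = TokenBucket(rate_bps=1_000_000, burst_bytes=10_000)
print(gate.admit(0.0, 1500))   # True: within the initial burst
```

Traffic exceeding the reservation is dropped (or, in some designs, remarked to a lower class) before it enters the LSP, keeping the load inside the LSP within the signaled bandwidth.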
Another difference is that MPLS-TE allows explicit specification of the exact path from the source to the destination that the traffic takes instead of having the forwarding decision made at every single hop. All other PHB concepts apply equally to QOS and MPLS.
MPLS is a topic on its own and is not discussed more in this book. For more information, refer to the further reading section at the end of this chapter.
1.3 Traffic Characterization
As we have stated, different traffic types require that the network behave differently towards them. So a key task is characterizing the behavioral requirements, for which there are three commonly used param...