This chapter introduces topics related to concurrent and networked objects. We first motivate the need for advanced software development techniques in this area. Next, we present an overview of key design challenges faced by developers of concurrent and networked object-oriented applications and middleware. To illustrate how patterns can be applied to resolve these problems, we examine a case study of an object-oriented framework and a high-performance Web server implemented using this framework. In the case study we focus on key patterns presented in this book that help to simplify four important aspects of concurrent and networked applications.
1.1 Motivation
During the past decade, advances in VLSI technology and fiber-optics have increased computer processing power by three to four orders of magnitude and network link speeds by six to seven orders of magnitude. Assuming that these trends continue, by the end of this decade:
- Desktop computer clock speeds will run at ~100 Gigahertz
- Local area network link speeds will run at ~100 Gigabits/second
- Wireless link speeds will run at ~100 Megabits/second and
- The Internet backbone link speeds will run at ~10 Terabits/second
Moreover, there will be billions of interactive and embedded computing and communication devices in operation throughout the world. These powerful computers and networks will be available largely at commodity prices, built mostly with robust commercial off-the-shelf (COTS) components, and will inter-operate over an increasingly convergent and pervasive Internet infrastructure.
To maximize the benefit from these advances in hardware technology, the quality and productivity of technologies for developing concurrent and networked middleware and application software must also increase. Historically, hardware has tended to become smaller, faster, and more reliable. It has also become cheaper and more predictable to develop and innovate, as evidenced by 'Moore's Law'. In contrast, concurrent and networked software has often grown larger, slower, and more error-prone. It has also become very expensive and time-consuming to develop, validate, maintain, and enhance.
Although hardware improvements have alleviated the need for some low-level software optimizations, the lifecycle cost [Boe81] and effort required to develop software, particularly mission-critical concurrent and networked applications, continues to rise. The disparity between the rapid rate of hardware advances and the slower progress of software stems from a number of factors, including:
- Inherent and accidental complexities. There are vexing problems with concurrent and networked software that result from inherent and accidental complexities. Inherent complexities arise from fundamental domain challenges, such as dealing with partial failures, distributed deadlock, and end-to-end quality of service (QoS) requirements. As networked systems have grown in scale and functionality, they must now cope with a much broader and harder set of these complexities.
Accidental complexities arise from limitations with software tools and development techniques, such as non-portable programming APIs and poor distributed debuggers. Ironically, many accidental complexities stem from deliberate choices made by developers who favor low-level languages and tools that scale up poorly when applied to complex concurrent and networked software.
- Inadequate methods and techniques. Popular software analysis methods [SM88] [CY91] [RBPEL91] and design techniques [Boo94] [BRJ98] have focused on constructing single-process, single-threaded applications with 'best-effort' QoS requirements. The development of high-quality concurrent and networked systems, particularly those with stringent QoS requirements, such as videoconferencing, has been left to the intuition and expertise of skilled software architects and engineers. Moreover, it has been hard to gain experience with concurrent and networked software techniques without spending considerable time learning via trial and error, and wrestling with platform-specific details.
- Continuous re-invention and re-discovery of core concepts and techniques. The software industry has a long history of recreating incompatible solutions to problems that are already solved. For example, there are dozens of non-standard general-purpose and real-time operating systems that manage the same hardware resources. Similarly, there are dozens of incompatible operating system encapsulation libraries that provide slightly different APIs that implement essentially the same features and services.
If effort had instead been focused on enhancing and optimizing a small number of solutions, developers of concurrent and networked software would be reaping the benefits available to developers of hardware. These developers innovate rapidly by using and applying common CAD tools and standard instruction sets, buses, and network protocols.
No single silver bullet can slay all the demons plaguing concurrent and networked software [Broo87]. Over the past decade, however, it has become clear that patterns and pattern languages help to alleviate many inherent and accidental software complexities.
A pattern is a recurring solution schema to a standard problem in a particular context [POSA1]. Patterns help to capture and reuse the static and dynamic structure and collaboration of key participants in software designs. They are useful for documenting recurring microarchitectures, which are abstractions of software components that experienced developers apply to resolve common design and implementation problems [GoF95].
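To make the idea of a recurring microarchitecture concrete, consider the Scoped Locking idiom, one of the concurrency patterns documented later in this book. The sketch below is a minimal illustration in standard C++; the `Counter` class and its member names are hypothetical examples, not code from the book's case study.

```cpp
#include <mutex>

// Minimal sketch of the Scoped Locking idiom (the 'Counter' class
// is a hypothetical example). A std::lock_guard acquires the mutex
// in its constructor and releases it in its destructor, so every
// exit path -- normal return or thrown exception -- leaves the
// mutex unlocked.
class Counter {
public:
    void increment() {
        std::lock_guard<std::mutex> guard(mutex_);
        ++count_;
    }

    int value() const {
        std::lock_guard<std::mutex> guard(mutex_);
        return count_;
    }

private:
    mutable std::mutex mutex_;  // mutable so value() can lock in a const method
    int count_ = 0;
};
```

Note that the solution schema, acquire the lock on entry to a scope and release it automatically on exit, is independent of any particular application. That independence from context is what lets experienced developers reapply the same microarchitecture wherever the problem recurs.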
When related patterns are woven together, they form a 'language' that helps to both
- Define a vocabulary for talking about software development problems [SFJ96] and
- Provide a process for the orderly resolution of these problems [Ale79] [AIS77]
By studying and applying patterns and pattern languages, developers can often escape traps and pitfalls that have been avoided traditionally only via long and costly apprenticeship [PLoPD1].
Until recently [Lea99a], patterns for developing concurrent and networked software existed only in programming folklore, in the heads of expert researchers and developers, or buried deep in complex source code. These locations are not ideal, for three reasons:
- Re-discovering patterns opportunistically from source code is expensive and time-consuming, because it is hard to separate the essential design decisions from the implementation details.
- If the insights and rationale of experienced designers are not documented, they will be lost over time and thus cannot help guide subsequent software maintenance and enhancement activities.
- Without guidance from earlier work, developers of concurrent and networked software face the Herculean task [SeSch70] of engineering complex systems from the ground up, rather than reusing proven solutions.
As a result, many concurrent and networked software systems are developed from scratch. In today's competitive, time-to-market-driven environments, however, this often yields non-optimal ad hoc solutions. These solutions are hard to customize and tune, because so much effort is spent just trying to make the software operational. Moreover, as requirements change over time, evolving ad hoc software solutions becomes prohibitively expensive. Yet end-users expect, or at least desire, software to be affordable, robust, ...