PART I
Complexity of Combinatorial Optimization Problems
Chapter 1
Basic Concepts in Algorithms and Complexity Theory
1.1. Algorithmic complexity
In algorithmic theory, a problem is a general question to which we wish to find an answer. This question usually has parameters or variables whose values have yet to be determined. A problem is posed by giving a list of these parameters as well as the properties to which the answer must conform. An instance of a problem is obtained by giving explicit values to each of the parameters of the problem.
An algorithm is a sequence of elementary operations (variable assignments, tests, branches, etc.) that, when given an instance of a problem as input, produces the solution of that instance as output after execution of the final operation.
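To make the distinction between problem, instance and algorithm concrete, here is a minimal Python sketch (the function name and the instance are illustrative choices, not from the text): the problem is "given a list of integers, what is its maximum?", an instance fixes the list, and the algorithm is a sequence of elementary operations.

```python
def maximum(values):
    """Algorithm: a sequence of elementary operations (assignments,
    tests, iterations) mapping an instance of the 'maximum' problem
    to its answer."""
    best = values[0]          # assignment
    for v in values[1:]:      # iteration
        if v > best:          # test
            best = v          # assignment
    return best

# The problem: "given a list of integers, what is its maximum?"
# One instance: the parameter 'values' fixed to a concrete list.
instance = [7, 3, 11, 5]
print(maximum(instance))  # 11
```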
The two most important parameters for measuring the quality of an algorithm are its execution time and the memory space that it uses. The first parameter is expressed in terms of the number of instructions necessary to run the algorithm. The use of the number of instructions as a unit of time is justified by the fact that the same program will execute the same number of instructions on two different machines, while the time taken will vary depending on the respective speeds of the machines. We generally consider that an instruction equates to an elementary operation, for example an assignment, a test, an addition, a multiplication, a trace, etc. What we call the complexity in time, or simply the complexity, of an algorithm gives us an indication of the time it will take to solve a problem of a given size. In reality this is a function that associates with the size of an instance of the problem an order of magnitude of the number of instructions necessary for its solution.

The second parameter corresponds to the number of memory units used by the algorithm to solve a problem. The complexity in space is a function that associates with the size of an instance of the problem an order of magnitude of the number of memory units used for the operations necessary for its solution.
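The idea of measuring complexity as an order of magnitude of the instruction count can be illustrated with a small experiment (the counting functions below are hypothetical illustrations, not from the text): doubling the instance size roughly doubles the count for a linear scan, but quadruples it for a naive pairwise pass.

```python
def count_ops_linear(n):
    """Count the elementary operations of a single scan over n items."""
    ops = 0
    for _ in range(n):
        ops += 1  # one 'instruction' per element
    return ops

def count_ops_quadratic(n):
    """Count the elementary operations of a naive all-pairs pass."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1  # one 'instruction' per pair
    return ops

# Doubling n doubles the linear count but quadruples the quadratic one:
# the orders of magnitude are O(n) and O(n^2) respectively.
print(count_ops_linear(100), count_ops_linear(200))        # 100 200
print(count_ops_quadratic(100), count_ops_quadratic(200))  # 10000 40000
```

The counts themselves are machine-independent, which is precisely why the number of instructions, rather than wall-clock time, serves as the unit of measure.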
There are several sets of hypotheses concerning the “standard configuration” that we use as a basis for measuring the complexity of an algorithm. The most commonly used framework is the one known as “worst-case”. Here, the complexity of an algorithm is the number of operations carried out on the instance that represents, amongst those of a fixed size, the worst configuration for its execution; this is the framework used in most of this book. However, it is not the only framework for analyzing the complexity of an algorithm. Another framework often used is “average analysis”. This kind of analysis consists of finding, for a fixed instance size n, the average execution time of an algorithm over all the instances of size n; for this analysis we assume that the probability of each instance occurring follows a specific distribution pattern. More often than not, this distribution is taken to be uniform. There are three main reasons why worst-case analysis is used more often than average analysis:

– the first is psychological: the worst-case result tells us for certain that the algorithm being analyzed can never have a level of complexity higher than that shown by this analysis; in other words, the result obtained gives us an upper bound on the complexity of the algorithm;

– the second reason is mathematical: results from a worst-case analysis are often easier to obtain than those from an average analysis, which very often requires more complex mathematical tools and analysis;

– the third reason is “analysis portability”: the validity of an average analysis is limited by the assumptions made about the distribution pattern of the instances; if the assumptions change, the original analysis is no longer valid.
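The gap between the two frameworks can be seen on a familiar example (the helper below is an illustrative sketch, not taken from the text): a linear search over n elements performs n comparisons in the worst case, but about (n + 1)/2 on average when the target is drawn uniformly.

```python
import random

def linear_search_ops(seq, target):
    """Return the number of comparisons a linear search performs."""
    ops = 0
    for x in seq:
        ops += 1
        if x == target:
            break
    return ops

n = 1000
seq = list(range(n))

# Worst-case framework: the target sits in the last position,
# so exactly n comparisons are needed.
worst = linear_search_ops(seq, n - 1)

# Average framework under a uniform distribution over the targets:
# the expected count is (n + 1) / 2, estimated here empirically.
random.seed(0)
trials = [linear_search_ops(seq, random.randrange(n)) for _ in range(10000)]
average = sum(trials) / len(trials)

print(worst)    # 1000
print(average)  # close to (n + 1) / 2 = 500.5
```

Note how the average figure depends entirely on the uniformity assumption: change the distribution of targets and the estimate changes with it, while the worst-case bound of n comparisons remains valid. This is the “analysis portability” point above.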
1.2. Problem complexity
The definition of the complexity of an algorithm can easily be transposed to problems. Informally, the complexity of a problem is equal to the complexity of the best algorithm that solves it (this definition is valid independently of the framework used).
Let us take a size n and a function f(n). Thus:
– TIMEf(n) is the class of problems for w...