Computer Science
Complexity analysis
Complexity analysis is the study of the resources, chiefly time and memory, required to solve a problem as the input size grows. It helps in understanding the efficiency of algorithms and data structures, with the goal of designing algorithms that solve problems in a reasonable amount of time and space.
Written by Perlego with AI-assistance
8 Key excerpts on "Complexity analysis"
A Textbook of Data Structures and Algorithms, Volume 1
Mastering Linear Data Structures
- G. A. Vijayalakshmi Pai (Author)
- 2022 (Publication Date)
- Wiley-ISTE (Publisher)
2 Analysis of Algorithms
In the previous chapter, we introduced the discipline of computer science from the perspective of problem solving. It was detailed how problem solving using computers calls not only for good algorithm design but also for the appropriate use of data structures to render them efficient. This chapter discusses methods and techniques to analyze the efficiency of algorithms.
2.1. Efficiency of algorithms
When there is a problem to be solved, it is probable that several algorithms crop up for its solution, and therefore one is at a loss to know which one is the best. This raises the question of how one decides which among the algorithms is preferable, or which among them is the best. The performance of algorithms can be measured on the scales of time and space. The former would mean looking for the fastest algorithm for the problem, or that which performs its task in the minimum possible time; in this case, the performance measure is termed time complexity. The time complexity of an algorithm or a program is a function of the running time of the algorithm or program. The latter would mean looking for an algorithm that consumes or needs limited memory space for its execution; the performance measure in such a case is termed space complexity. The space complexity of an algorithm or a program is a function of the space needed by the algorithm or program to run to completion. However, in this book, our discussions mostly emphasize the time complexities of the algorithms presented.
The time complexity of an algorithm can be computed either by an empirical or a theoretical approach. The empirical or a posteriori testing approach calls for implementing the complete algorithms and executing them on a computer for various instances of the problem. The times taken by the execution of the programs for the various instances of the problem are noted and compared.
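The empirical (a posteriori) approach described in this excerpt can be illustrated with a short sketch. The snippet below is not from the textbook; it simply times an arbitrary summation function on inputs of growing size using Python's standard timeit module, with invented function names and input sizes.

```python
# Minimal sketch of empirical (a posteriori) timing: run the same
# algorithm on instances of increasing size and record the wall time.
import timeit

def running_sum(values):
    """Toy algorithm under test: sum a list with an explicit loop."""
    total = 0
    for v in values:
        total += v
    return total

for n in (1_000, 10_000, 100_000, 1_000_000):
    data = list(range(n))
    # Average over several repetitions to smooth out measurement noise.
    seconds = timeit.timeit(lambda: running_sum(data), number=20) / 20
    print(f"n = {n:>9}: {seconds:.6f} s per run")
```

Comparing the recorded times across instance sizes is exactly the kind of posteriori comparison the excerpt describes; the theoretical approach, by contrast, reasons about the count of operations without running the program.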
- (Author)
- 2014 (Publication Date)
- Learning Press (Publisher)
Chapter 3 Computational Complexity Theory
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer. Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not.
A problem is regarded as inherently difficult if solving the problem requires a large amount of resources, whatever the algorithm used for solving it. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between computational complexity theory and analysis of algorithms is that the latter is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the former asks a more general question about all possible algorithms that could be used to solve the same problem.
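As a concrete instance of the primality-testing problem mentioned in the excerpt, the sketch below shows one particular, deliberately naive algorithm for it: trial division. It is only an illustration of a computational problem and one algorithm that solves it; complexity theory, as the excerpt notes, asks about the difficulty of the problem over all possible algorithms, not just this one.

```python
# Trial-division primality test: one simple algorithm for the
# primality-testing problem (instance: a natural number n;
# solution: yes/no depending on whether n is prime).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # only divisors up to sqrt(n) need checking
        if n % d == 0:
            return False
        d += 1
    return True

print([k for k in range(2, 30) if is_prime(k)])  # 2, 3, 5, 7, 11, ...
```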
Quantum Computation and Quantum Information
10th Anniversary Edition
- Michael A. Nielsen, Isaac L. Chuang (Authors)
- 2010 (Publication Date)
- Cambridge University Press (Publisher)
What time and space resources are required to perform a computation? In many cases these are the most important questions we can ask about a computational problem. Problems like addition and multiplication of numbers are regarded as efficiently solvable because we have fast algorithms to perform addition and multiplication, which consume little space when running. Many other problems have no known fast algorithm, and are effectively impossible to solve, not because we can’t find an algorithm to solve the problem, but because all known algorithms consume such vast quantities of space or time as to render them practically useless.
Computational complexity is the study of the time and space resources required to solve computational problems. The task of computational complexity is to prove lower bounds on the resources required by the best possible algorithm for solving a problem, even if that algorithm is not explicitly known. In this and the next two sections, we give an overview of computational complexity, its major concepts, and some of the more important results of the field. Note that computational complexity is in a sense complementary to the field of algorithm design; ideally, the most efficient algorithms we could design would match perfectly with the lower bounds proved by computational complexity. Unfortunately, this is often not the case. As already noted, in this book we won’t examine classical algorithm design in any depth.
One difficulty in formulating a theory of computational complexity is that different computational models may require different resources to solve the same problem. For instance, multiple-tape Turing machines can solve many problems substantially faster than single-tape Turing machines. This difficulty is resolved in a rather coarse way. Suppose a problem is specified by giving n bits as input.
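To make the contrast between "fast" and "practically useless" concrete, here is a small sketch, not taken from the book, of a brute-force solver for the subset-sum decision problem, for which no polynomial-time algorithm is known. The function name and test data are invented for the example; the point is simply that enumerating all 2^n subsets becomes hopeless long before n grows large.

```python
# Brute-force subset sum: does any subset of `values` add up to `target`?
# Enumerating all 2**n subsets makes the running time grow exponentially
# with the number of items, which is what renders such algorithms
# practically useless on large inputs.
from itertools import combinations

def subset_sum_exists(values, target):
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum_exists([3, 9, 8, 4, 5, 7], 15))   # True  (3 + 4 + 8)
print(subset_sum_exists([3, 9, 8, 4, 5, 7], 2))    # False
```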
- Stephen B. Maurer, Anthony Ralston (Authors)
- 2005 (Publication Date)
- A K Peters/CRC Press (Publisher)
Occasionally, instead of determining how fast an algorithm executes, we’ll be interested in how much (computer) storage it requires. The branch of mathematics and computer science concerned with the performance characteristics of algorithms is called the analysis of algorithms.
ii) Computational Complexity Questions. Instead of asking how well a particular algorithm does its job, suppose we ask instead how good the best algorithm for a particular problem is, given a suitable definition of “best”. Your intuition probably will tell you that this is usually a much more difficult question than asking about the performance characteristics of a particular algorithm. After all, if you don’t even know what all the possible algorithms for a problem are (and you almost never will), how could you determine how good the best one would be? And even if you could determine that, would you necessarily be able to find an algorithm which realizes the best performance? As with the analysis of specific algorithms, the most common definition of “best” is fastest, but sometimes the least-storage criterion is used here, too. Discovering the properties of best algorithms and finding algorithms which have these properties is the concern of a branch of mathematics and computer science called computational complexity. When the concern is with fast algorithms, we speak of time complexity. If minimum storage is the object, we refer to space complexity. The study of computational complexity generally is a very difficult subject and requires mathematics beyond the scope of this book, although once in this section we shall determine the computational complexity of a problem.
When analyzing the performance of an individual algorithm, some authors use the word complexity, as in “the time complexity of this algorithm is Ord(n²)”, where n is some measure of size.
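For readers who want a concrete picture of a statement like "the time complexity of this algorithm is Ord(n²)", the sketch below is an invented illustration, not taken from the book: it counts duplicate pairs in a list with two nested loops, so the number of comparisons grows roughly with the square of the input size n.

```python
# A deliberately simple Ord(n^2) algorithm: compare every pair of
# elements, so about n*(n-1)/2 comparisons are made for n items.
def count_duplicate_pairs(items):
    pairs = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                pairs += 1
    return pairs

print(count_duplicate_pairs([1, 3, 3, 7, 1, 3]))  # 4 duplicate pairs
```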
- S.C. Masin (Author)
- 1993 (Publication Date)
- North Holland (Publisher)
If the number of required operations can be represented by a polynomial function in n, then the problem has polynomial time complexity. Similarly, space complexity is defined as a function for an algorithm that expresses its space or memory requirements. Algorithmic complexity is the cost of a particular algorithm. This should be contrasted with problem complexity, which is the minimal cost over all possible algorithms. The dominant kind of analysis is worst-case: at least one instance out of all possible instances has this complexity. A worst-case analysis provides an upper bound on the amount of computation that must be performed as a function of problem size. If one knows the maximum problem size, then the analysis places an upper bound on computation for the whole problem as well. Thus, one may then claim, given an appropriate implementation of the problem solution, that processors must run at a speed dependent on this maximum in order to ensure real-time performance for all inputs in the world. Worst cases do not only occur for the largest possible problem size: rather, the worst-case time complexity function for a problem gives the worst-case number of computations for any problem size; this worst case may be required simply because of unfortunate ordering of computations (for example, a linear search through a list of items would take a worst-case number of comparisons if the item sought is the last one). Thus, worst-case situations in the real world may happen frequently for any given problem size. Many argue that worst-case analysis is inappropriate for perception because of one of the following reasons: 1) relying on worst-case analysis and drawing the link to biological vision implies that biological vision handles the worst-case scenarios; 2) biological vision systems are designed around average or perhaps best-case assumptions; 3) expected-case analysis more correctly reflects the world that biological vision systems see.
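The linear-search example in parentheses above is easy to make concrete. The sketch below is an added illustration, not part of the excerpt: it counts comparisons in a plain linear search, and searching for the last element of a list of n items costs n comparisons, which is the worst case for that problem size.

```python
# Linear search instrumented with a comparison counter to expose the
# worst case: the sought item sitting at the very end of the list.
def linear_search(items, target):
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

data = list(range(1000))
print(linear_search(data, 0))     # best case:  (0, 1)      -> 1 comparison
print(linear_search(data, 999))   # worst case: (999, 1000) -> 1000 comparisons
```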
Software Engineering Foundations
A Software Science Perspective
- Yingxu Wang (Author)
- 2007 (Publication Date)
- Auerbach Publications (Publisher)
According to cognitive informatics, human beings may comprehend a large cycle of iteration, which is the major issue of computational complexity, by looking at only the beginning and termination conditions, and one or a few arbitrary internal loops with inductive inferences. However, humans are not good at dealing with functional complexities such as a long chain of interrelated operations, very abstract data objects, and their consistency. Therefore, the system complexity of large-scale software is the focus of software engineering.
The 36th Principle of Software Engineering (Theorem 10.13), on the orientation of software engineering complexity theories, states that the complexity theories of computation and software engineering are different: the former is focused on problems of high throughput complexity that are computing-time-efficiency centered, while the latter puts its emphasis on problems of functional complexity that are human cognition time and workload oriented.
10.7.1 COMPUTATIONAL COMPLEXITY
Computational complexity theory is a well-established area in computing [Hartmanis and Stearns, 1965; Hartmanis, 1994; Lewis and Papadimitriou, 1998] that studies: a) the taxonomy of problems in computing and their solvabilities; and b) the complexities and efficiencies of algorithms for a given problem. Computational complexity, centered on algorithm complexity, can be modeled by its time or space complexity, particularly the former, proportional to the size of the problem.
10.7.1.1 Taxonomy of Computational Problems
Computational complexity theories study solvability in computing. The solvable problems are those that can be computed with polynomial-time consumption. The nonsolvable problems are those that cannot be solved in any practical sense by computers due to excessive time requirements. The taxonomy of problems in computation can be classified into the following classes.
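The split between problems solvable in polynomial time and those whose time requirements become excessive can be illustrated numerically. The short sketch below is an added illustration, not part of Wang's text: it tabulates a few growth functions, and even at modest problem sizes the exponential column dwarfs the polynomial ones.

```python
# Compare how polynomial and exponential step counts grow with problem size n.
# Polynomial growth stays manageable; exponential growth quickly becomes
# excessive, which is what makes such problems practically nonsolvable.
for n in (10, 20, 30, 40, 50):
    print(f"n={n:>2}  n^2={n**2:>6}  n^3={n**3:>8}  2^n={2**n:>17}")
```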
- Christine Solnon (Author)
- 2013 (Publication Date)
- Wiley-ISTE (Publisher)
Chapter 2 Computational Complexity
A problem is said to be combinatorial if it can be solved by reviewing a finite set of combinations. Most often, this kind of solving process runs into an explosion of the number of combinations to review. This is the case, for example, when a timetable has to be designed. If there are only a few courses to schedule, the number of combinations is rather small and the problem is quickly solved. However, adding a few more courses may result in such an increase in the number of combinations that it is no longer possible to find a solution within a reasonable amount of time.
This kind of combinatorial explosion is formally characterized by the theory of computational complexity, which classifies problems with respect to the difficulty of solving them. We introduce algorithm complexity in section 2.1, which allows us to evaluate the amount of resources needed to run an algorithm. In section 2.2, we introduce the main complexity classes and describe the problems we are interested in within this classification. In section 2.3, we show that some instances of a problem may be more difficult to solve than others or, in other words, that the input data may change the difficulty involved in finding a solution in practice; we introduce the concepts of phase transition and search landscape, which may be used to characterize instance hardness. Finally, in section 2.4, we provide an overview of the main approaches that may be used to solve combinatorial problems.
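To see how quickly the timetable example blows up, consider the tiny sketch below, an added illustration with invented numbers rather than anything from Solnon's text: with c courses and t available time slots, and ignoring every other constraint, there are t^c candidate assignments to review, and the count grows explosively as courses are added.

```python
# Combinatorial explosion in a toy timetabling setting: with t time slots
# and c courses there are t**c ways to assign a slot to every course
# (ignoring rooms, teachers, and all other constraints).
def candidate_timetables(num_slots: int, num_courses: int) -> int:
    return num_slots ** num_courses

for courses in (5, 10, 15, 20, 25):
    print(f"{courses:>2} courses, 10 slots -> "
          f"{candidate_timetables(10, courses):,} combinations")
```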
2.1. Complexity of an algorithm
Algorithm complexity characterizes, in terms of computational resources, how an algorithm scales. In particular, the time complexity of an algorithm gives an order of magnitude of the number of elementary instructions that are executed at run time. It is used to compare different algorithms independently of a given computer or programming language. Time complexity usually depends on the size of the input data of the algorithm. Indeed, given a problem, we usually want to solve different instances of this problem, where each instance corresponds to different input data.
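As a rough sketch of what "an order of magnitude of the number of elementary instructions" means (an added example, not Solnon's), the snippet below counts the elementary operations performed while summing a list; the count depends only on the instance size, not on the machine or language the code happens to run on.

```python
# Count elementary operations instead of measuring wall-clock time:
# the count is a property of the algorithm and the instance size,
# independent of the computer or language used.
def summation_operation_count(values):
    operations = 0
    total = 0
    for v in values:
        total += v        # one addition per element
        operations += 1
    return total, operations

for n in (10, 100, 1000):
    _, ops = summation_operation_count(list(range(n)))
    print(f"instance size {n:>4} -> {ops} elementary additions")
```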
A Programmer’s Companion to Algorithm Analysis
- Ernst L. Leiss (Author)
- 2006 (Publication Date)
- Chapman and Hall/CRC (Publisher)
3 Examples of Complexity Analysis
About This Chapter
We begin with a review of techniques for determining complexity functions. Then we apply these techniques to a number of standard algorithms, among others representatives of the techniques of divide-and-conquer and dynamic programming, as well as algorithms for sorting, searching, and graph operations. We also illustrate on-line and off-line algorithms.
This chapter concentrates on techniques for determining complexity measures and how to apply them to a number of standard algorithms. Readers who have substantial knowledge of algorithm complexity may skip this chapter without major consequences. We first review approaches to finding the operation or statement count of a given algorithm. These range from simple inspection of the statements to much more sophisticated recursion-based arguments. Then we examine a number of standard algorithms that should be known to all computer scientists and determine their complexity measures, mostly time complexity and usually worst-case.
3.1 General Techniques for Determining Complexity
Suppose we are given an algorithm and want to determine its complexity. How should we do this? If the algorithm were given as a linear sequence of simple statements (so-called straight-line code, where every statement is executed once), the answer would be trivial: count the number of statements; this is its time complexity. Of course, such an algorithm would be utterly trivial. Virtually all algorithms of any interest contain more complex statements; in particular, there are statements that define iteration (for loops, while loops, repeat loops), statements that connote alternatives (if statements, case statements), and function calls, including those involving recursion.
Iteration: the most important aspect is to determine the number of times the body of the iterative statement is executed.
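To ground the iteration rule above, here is a small sketch (an added example, not from Leiss) with two nested loops. The body of the inner loop runs n(n-1)/2 times for input size n, and the instrumented counter confirms the analytical count.

```python
# Determining complexity by counting how often a loop body executes:
# for the nested loops below the inner body runs n*(n-1)/2 times.
def count_inner_executions(n: int) -> int:
    executions = 0
    for i in range(n):
        for j in range(i + 1, n):
            executions += 1          # the "body" whose executions we count
    return executions

for n in (4, 10, 100):
    print(n, count_inner_executions(n), n * (n - 1) // 2)  # counts agree
```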
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.







