Computer Science
Theory of Computation
The Theory of Computation is a branch of computer science that studies algorithms, their computational complexity, and the limits of what can be computed. It explores the fundamental principles underlying computation, including automata theory, formal languages, and computability theory. This field is essential for understanding the capabilities and limitations of computers and for developing efficient algorithms.
Written by Perlego with AI-assistance
6 Key excerpts on "Theory of Computation"
- eBook - PDF
- Lev D. Beklemishev (Author)
- 2000 (Publication Date)
- Elsevier Science (Publisher)
A BASIS FOR A MATHEMATICAL THEORY OF COMPUTATION 1) JOHN McCARTHY

Computation is sure to become one of the most important of the sciences. This is because it is the science of how machines can be made to carry out intellectual processes. We know that any intellectual process that can be carried out mechanically can be performed by a general purpose digital computer. Moreover, the limitations on what we have been able to make computers do so far clearly come far more from our weakness as programmers than from the intrinsic limitations of the machines. We hope that these limitations can be greatly reduced by developing a mathematical science of computation. There are three established directions of mathematical research relevant to a science of computation. The first and oldest of these is numerical analysis. Unfortunately, its subject matter is too narrow to be of much help in forming a general theory, and it has only recently begun to be affected by the existence of automatic computation. The second relevant direction of research is the theory of computability as a branch of recursive function theory. The results of the basic work in this theory, including the existence of universal machines and the existence of unsolvable problems, have established a framework in which any theory of computation must fit. Unfortunately, the general trend of research in this field has been to establish more and better unsolvability theorems, and there has been very little attention paid to positive results and none to establishing the properties of the kinds of algorithms that are actually used. Perhaps for this reason the formalisms for describing algorithms are too cumbersome to be used to describe actual algorithms. The third direction of mathematical research is the theory of finite automata.
- eBook - PDF
- E. Börger (Author)
- 1989 (Publication Date)
- North Holland (Publisher)
BOOK 1: ELEMENTARY COMPUTATION THEORY

The first book has the concept of algorithm as its object, particularly its precise mathematical definition, and the study of its basic properties. Historically speaking, the posing of metamathematical questions has provided the impulse for the growth of computability theory into a recognised branch of knowledge today. Here, especially in the first third of our century, there developed the consciousness of the need to find a mathematically precise and sufficiently general explication of the intuitive concept of algorithmic processes in order, generally, to lead to rigorous proofs of the impossibility of algorithmic solutions of particular problems. The significant achievement of such an explication, particularly in the form given by Turing in 1937, decisively influenced the development of the first electronic computing machines. With the progressive development of computers, considering the general investigations into the complexity of design and implementation of algorithms, it has even proved to be practically relevant to have available a mathematically flexible formulation of the concept of algorithm which is, to the greatest extent, machine-independent. Accordingly, in this book we present a unified approach to the classical themes of computability theory and to the foundations of complexity theory. Thus, in chapter A1 we introduce a general model of transformation and computation systems in whose form there appear the concepts of Turing- and register-machines, for the explication of algorithmic procedures. The model also shapes the concepts representing special classes of algorithm such as finite automata (Ch.CIV) and context-free grammars (Ch.CV), which are central for the construction, implementation and study of properties of programs, particularly of high level programming languages.
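The finite automata the passage mentions (Ch.CIV) can be illustrated with a minimal sketch. The machine below — a two-state recognizer for binary strings containing an even number of 1s — is a hypothetical example for illustration, not one taken from the book:

```python
def run_dfa(transitions, start, accepting, word):
    """Run a deterministic finite automaton: follow one transition
    per input symbol and accept iff we end in an accepting state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Hypothetical DFA: accepts binary strings with an even number of 1s.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1101"))  # False (three 1s)
print(run_dfa(even_ones, "even", {"even"}, "1001"))  # True (two 1s)
```

The entire machine is a finite transition table plus a current state — exactly the "machine-independent" notion of a special, restricted class of algorithm the excerpt describes.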
The Engine of Complexity
Evolution as Computation
- John E. Mayfield (Author)
- 2013 (Publication Date)
- Columbia University Press (Publisher)
2 Computation

What is a computation? Listen to music or surf the Internet on your iPad. Start your car in the morning without touching the accelerator pedal and your engine “decides” how much gasoline and air flow into the cylinders; as the engine warms, the ratio of gas and air changes. Step on the accelerator and the ratio changes again. How are such things possible? Electronic computers function in each device. It is pretty hard these days to escape the influence of computers. They calculate our bills, produce photographs without film, keep track of your likes and dislikes, and predict the weather. Computers are machines that manipulate information, and computer science is the formal discipline that studies limitations and opportunities afforded by this type of activity. In this chapter we will explore the concept of computation. I hope to convince you that the notion of computation is more general than simply what happens in electronic computers and that it is not possible to cleanly separate many kinds of physical activity from computation.

Basic to the discipline of computer science is the idea that information can be encoded in patterns of symbols, usually linear sequences that convey meaning to someone or something. Chapter 1 introduced one specific quantitative measure of information: Shannon information. This measure is very useful for some purposes, such as determining how much physical space you need to store some information you value or how much bandwidth you need to transmit it to a distant location. Later in this chapter I will introduce another measure, algorithmic information, which is particularly useful in computer theory.

A convenient way to understand the notion of computation is to see every computation as a process in which a pattern (usually, but not always, a sequence of symbols) interfaces with a device in such a way that a series of changes occurs within the device, culminating in output.
The output may be useful in ways that the input is not. This general idea framed in terms of information is diagramed in figure 2.1
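The Shannon measure the excerpt refers back to can be sketched in a few lines: the average information per symbol, in bits, is H = Σ p·log₂(1/p) over the symbol frequencies. The function name and example strings below are illustrative assumptions, not from the text:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Average information per symbol, in bits: H = sum(p * log2(1/p))."""
    counts = Counter(sequence)
    n = len(sequence)
    # log2(n/c) == log2(1/p) for a symbol with count c out of n
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("AAAA"))      # 0.0 -- a constant sequence carries no information
print(shannon_entropy("ABAB"))      # 1.0 -- one bit per symbol
print(shannon_entropy("ABCDABCD"))  # 2.0 -- two bits per symbol
```

This is why, as the excerpt notes, the measure answers storage and bandwidth questions: n symbols drawn with these frequencies need roughly n·H bits.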
- Roderick S C Wong, Felipe Cucker (Authors)
- 2000 (Publication Date)
- WSPC (Publisher)
Thus, the class of computable functions appears to be a natural class, independent of any specific model of computation. And consequently, the answers to the basic questions of decidability will be independent of formalism. This gives one a great deal of confidence in the theoretical foundations of the theory of computation. Indeed, what is known as Church's thesis is an assertion of belief that the classical formalisms completely capture our intuitive notion of computable function. Thus for example, in the light of Church's thesis, the negative solution to Hilbert's Tenth Problem can be gotten by showing there is no Turing machine to decide the solvability in integers of diophantine polynomials. Compelling motivation clearly would be required to justify yet a new model of computation.

4. Toward a Mathematical Foundation of Numerical Analysis

Our perspective is to formulate the laws of computation. Thus we write not from the point of view of the engineer who looks for a good algorithm which solves his problem at hand, or wishes to design a faster computer. The perspective is more like that of a physicist, trying to understand the laws of scientific computation. Idealizations are appropriate, but such idealizations should carry basic truths. Scientific computation is the domain of computation which is based mainly on the equations of physics. For example, from the equations of fluid mechanics, scientific computation helps understand better design for airplanes, or assists in weather prediction. The theory underlying this side of computation is called numerical analysis. There is a substantial conflict between theoretical computer science and numerical analysis. These two subjects with common goals have grown apart.

*In classical terminology, these functions are often called the recursive functions, decidable sets are the recursive sets, and semi-decidable sets are the recursively enumerable sets.
- eBook - ePub
Feynman Lectures on Computation
Anniversary Edition
- Tony Hey (Author)
- 2023 (Publication Date)
- CRC Press (Publisher)
3 The Theory of Computation
DOI: 10.1201/9781003358817-3

CONTENTS
3.1 Effective Procedures and Computability
3.2 Finite State Machines
3.3 The Limitations of Finite State Machines
3.4 Turing Machines
3.5 More on Turing Machines
3.6 Universal Turing Machines and the Halting Problem
3.7 Computability

Thus far, we have discussed the limitations on computing imposed by the structure of logic gates. We now come on to address an issue that is far more fundamental: is there a limit to what we can, in principle, compute? It is easy to imagine that if we built a big enough computer, then it could compute anything we wanted it to. Is this true? Or are there some questions that it could never answer for us, however beautifully made it might be?

Ironically, it turns out that all this was discussed long before computers were built! Computer science, in a sense, existed before the computer. It was a very big topic for logicians and mathematicians in the 1930s. There was a lot of ferment at court in those days about this very question – what can be computed in principle? Mathematicians were in the habit of playing a particular game, involving setting up mathematical systems of axioms and elements – like those of Euclid, for example – and seeing what they could deduce from them. An assumption that was routinely made was that any statement you might care to make in one of these mathematical languages could be proved or disproved, in principle. Mathematicians were used to struggling vainly with the proof of apparently quite simple statements – like Fermat’s Last Theorem, or Goldbach’s Conjecture – but always figured that, sooner or later, some smart guy would come along and figure them out.*
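The question the excerpt raises — whether some questions can never be answered by any computer — is settled by the halting problem listed in the contents, and Turing's diagonal argument can be sketched in a few lines. The names below (`make_contrarian`, `halts`) are illustrative assumptions, not from the text: suppose a perfect decider `halts(program)` existed, and build a program that does the opposite of whatever the decider predicts about it.

```python
def make_contrarian(halts):
    """Given a claimed halting decider halts(program) -> bool, build a
    program that does the opposite of whatever the decider predicts."""
    def g():
        if halts(g):
            while True:  # decider said g halts, so loop forever
                pass
        # decider said g loops forever, so halt immediately
    return g

# Any candidate decider is contradicted by its own contrarian program:
pessimist = lambda program: False  # claims every program runs forever
g = make_contrarian(pessimist)
g()  # halts immediately, refuting the pessimist's verdict
# make_contrarian(lambda p: True)() would instead loop forever,
# refuting that decider -- so no halts() can be right about every program.
```

Whatever `halts` is, its own contrarian program defeats it, so no general halting decider can exist: there really are questions no computer, however beautifully made, can answer.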
Philosophy of Computer Science
An Introduction to the Issues and the Literature
- William J. Rapaport (Author)
- 2023 (Publication Date)
- Wiley-Blackwell (Publisher)
Some of the features of computational thinking that various people have cited include abstraction, hierarchy, modularity, problem analysis, structured programming, the syntax and semantics of symbol systems, and debugging techniques. (Note that all of these are among the methods for handling complexity!) Denning (2009, p. 33) also recognizes the importance of computational thinking. However, he dislikes it as a definition of CS, primarily on the grounds that it is too narrow:

Computation is present in nature even when scientists are not observing it or thinking about it. Computation is more fundamental than computational thinking. For this reason alone, computational thinking seems like an inadequate characterization of computer science. (Denning, 2009, p. 30)

A second reason Denning thinks defining CS as computational thinking is too narrow is that there are other equally important forms of thinking: “design thinking, logical thinking, scientific thinking, etc.” (Denning et al., 2017).

3.16.5 CS as AI
Computation … is the science of how machines can be made to carry out intellectual processes. —John McCarthy (1963, p. 1, my italics)

The goal of computer science is to endow these information processing devices with as much intelligent behavior as possible. —Juris Hartmanis (1993, p. 5, my italics) (cf. Hartmanis, 1995a, p. 10)

Computational Intelligence is the manifest destiny of computer science, the goal, the destination, the final frontier. —Edward A. Feigenbaum (2003, p. 39)

These aren't exactly definitions of CS, but they could be turned into one: computer science – note: CS, not AI! – is the study of how to make computers “intelligent” and how to understand cognition computationally. As we will see in more detail in Chapter 6, the history of computers supports this: it is a history that began with how to get machines to do some human thinking (in particular, certain mathematical calculations) and then more and more. And (as we will see in Chapter 8) the Turing Machine model of computation was motivated by how humans compute: Turing (1936, Section 9) analyzed how humans compute and then designed what we would now call a computer program that does the same thing. But the branch of CS that analyzes how humans perform a task and then designs computer programs to do the same thing is AI. So, the Turing Machine was the first AI program! But defining
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.





