Computer Science

Theory of Computation

The Theory of Computation is the branch of computer science that studies algorithms, their computational complexity, and the limits of what can be computed. It explores the fundamental principles underlying computation, including automata theory, formal languages, and computability theory. This field is essential for understanding the capabilities and limitations of computers and for developing efficient algorithms.

Written by Perlego with AI-assistance

6 Key excerpts on "Theory of Computation"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • Computational Thinking in Education: A Pedagogical Perspective
    • Aman Yadav, Ulf Dalvad Berthelsen (Authors)
    • 2021 (Publication Date)
    • Routledge (Publisher)

    ...In those days coding was seen as the main work of programmers. Government labor departments initially defined programmer to mean coder. This misunderstanding has persisted and to this day produces a frustrating disconnect between computer scientists and many other stakeholders – all use the term “programming” but with considerably different meanings. In the 1970s and the 1980s, this mismatch collided with the slogan “computer science equals programming”, which meant “programming is noble work” to computer scientists and “computer science is little more than coding” to the general public. This unfortunate misunderstanding hurt the field’s public image and hampered educators who wanted a more sophisticated computing curriculum than many prominent employers wanted.

    Computing Theory. One of the most popular chapters of CT is the work of Alan Turing and his contemporaries on the definition of computability. Their extensive foundational work to define precisely what can be computed took place in the 1930s in the field of mathematical logic before the first programmable, fully electronic computers were built. These and later developments included Turing on computable numbers, Church on lambda calculus, Post on string manipulation, Gödel on recursive functions, Kleene on regular expressions, and Rabin and Scott on nondeterministic machines (cf. Mahoney, 2011). These works established a theoretical foundation for computing and later, in the 1950s and 1960s, strengthened computing’s claim to becoming a new academic field. A large number of central CT ideas originate from work on computing theory. An early motivation for bringing mathematics to programming was to deal with the inherent complexity and error-proneness of programming. It is very hard to formally establish if a program of a thousand instructions works correctly. And that is just a small program...

  • Artificial Intelligence
    • Alan Garnham (Author)
    • 2017 (Publication Date)
    • Routledge (Publisher)

    ...The theory specifies what computers can and cannot do and how much time and memory they need for the computations that they can perform. It therefore indicates the limitations of computational theories of the mind. The theory had its origins in several branches of mathematics. A number of mathematicians were independently searching for the smallest set of primitive operations needed to carry out any possible computation. When their proposals were worked out in detail, it was shown that they were equivalent. One approach to the problem of computability was Turing's (1936). Turing formulated his ideas in terms of an abstract computing device called a Turing machine. A Turing machine performs its calculations with the help of a tape divided into squares, each of which can have one symbol on it. The Turing machine's primitive operations are reading and writing symbols on the tape and shifting the tape one square to the left or right. It uses a finite vocabulary of symbols, but its tape can be infinitely long. A Turing machine has only a finite number of internal states. When it reads the symbol on a square, its state may change, depending on what state it is currently in and what the symbol is. The structure of a Turing machine is very simple and so are the operations it performs. However, additional bits of 'machinery' - more tapes for example - do not increase the range of computations that a Turing machine can perform. They only increase its speed. Furthermore, these gains in efficiency are of no theoretical interest, since they speed up the machine's operation by only a constant factor. It is possible to describe a Universal Turing Machine, constructed on the same principles as other Turing machines. The Universal Turing Machine can mimic the operation of any other Turing machine. To simulate another machine it must be given a description of how that machine works. This description can be written on to its tape in standard Turing machine format...
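
    A minimal Python sketch of the machine this excerpt describes: a finite transition table, a tape of squares read and written one symbol at a time, and a head that shifts one square left or right. The encoding and the binary-increment example are illustrative, not taken from the book.

        # Transition table maps (state, symbol) -> (new state, symbol to write, head move).
        def run_turing_machine(transitions, tape, state, halt_states, max_steps=10_000):
            """Simulate a one-tape Turing machine on an unbounded, dict-backed tape."""
            tape = dict(enumerate(tape))  # sparse tape; unvisited squares read as blank "_"
            head = 0
            for _ in range(max_steps):
                if state in halt_states:
                    break
                symbol = tape.get(head, "_")
                state, write, move = transitions[(state, symbol)]
                tape[head] = write                    # write replaces the old symbol
                head += 1 if move == "R" else -1      # shift one square right or left
            return state, "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))

        # Example machine: increment a binary number. Scan right to the end of the
        # input, then move left, turning trailing 1s into 0s until a 0 (or blank)
        # absorbs the carry.
        INCREMENT = {
            ("right", "0"): ("right", "0", "R"),
            ("right", "1"): ("right", "1", "R"),
            ("right", "_"): ("carry", "_", "L"),
            ("carry", "1"): ("carry", "0", "L"),
            ("carry", "0"): ("done", "1", "L"),
            ("carry", "_"): ("done", "1", "L"),
        }

        print(run_turing_machine(INCREMENT, "1011", "right", {"done"}))
        # -> ('done', '1100_')  since 1011 (eleven) + 1 = 1100 (twelve)

    Extra tapes or a larger state set would make such a machine faster to program and to run, but, as the excerpt notes, they do not enlarge the set of functions it can compute.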

  • Essentials of Sensation and Perception
    • George Mather (Author)
    • 2014 (Publication Date)
    • Routledge (Publisher)

    ...He laid the logical and conceptual foundations of modern computers by describing, in abstract terms, a computing device that could manipulate symbols according to a set of rules. At the time this so-called Turing Machine was conceived mathematically as a mechanical device that could read and write symbols on a strip of tape. Arbitrarily complex operations could be performed by chaining together long sequences of simple read and write operations. This theory encapsulated the function of the central processing unit (CPU) in present-day computers, and so described the essence of modern computer science. Turing’s device was imaginary, but it has become a reality in today’s computers. Let’s take a very simple specific example to illustrate the idea. Say you are typing an essay using a computer’s word processor, and you decide to capitalize a word that you have typed in. You highlight the word using the mouse interface, and then select the ‘All Caps’ option in the word processor’s on-screen menu. The word becomes capitalized. The operation is performed in modern computers in precisely the way Turing described. Internally all computers store each letter in the alphabet as a number, and different numbers are allocated to lower-case and upper-case versions of each letter. So for ‘a’, ‘b’, ‘c’, ‘d’, etc. the numbers may be 97, 98, 99, 100, etc., while for ‘A’, ‘B’, ‘C’, ‘D’, etc. the numbers may be 65, 66, 67, 68, etc. So to capitalize a word the computer executes a series of instructions, part of which do the following:

    1. Read the code of the next character in the word stored in the document file.
    2. Subtract 32 from the code.
    3. Write the new code back into the working document file, replacing the old code.

    A crucial aspect of the theory from the point of view of cognitive science was Turing’s claim that ‘It is possible to invent a single machine which can be used to compute any computable sequence’ (Turing, 1936, p. 241)...
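
    The three steps above translate almost directly into code. This sketch (the function name is my own) uses the ASCII codes quoted in the excerpt, where lower-case and upper-case codes differ by exactly 32; the guard skips characters that are not lower-case letters.

        def capitalize_word(word):
            out = []
            for ch in word:
                code = ord(ch)                    # step 1: read the character's code
                if ord("a") <= code <= ord("z"):
                    code -= 32                    # step 2: subtract 32 ('a' 97 -> 'A' 65)
                out.append(chr(code))             # step 3: write the new code back
            return "".join(out)

        print(capitalize_word("turing"))          # -> TURING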

  • Information Theory Meets Power Laws: Stochastic Processes and Language Models
    • Lukasz Debowski (Author)
    • 2020 (Publication Date)
    • Wiley (Publisher)

    ...As computers have become ubiquitous and indispensable for human civilization, scientists have started applying the metaphor of computation everywhere: to living organisms, to laws of physics, and to the human brain. However, it is not obvious whether the most popular mathematical models of computation, such as Turing machines, are directly applicable and relevant to all these cases. Natural computation may differ from mathematical computation in many important details and be more complicated. But simple mathematical models are always a good starting point for learning to imagine what else could be possible. For this reason, it may be good to get a primer in basic formal models of coding and computation. The organization of this chapter is as follows. Section 7.1 reports basic ideas of coding. Codes are functions that represent countably many distinct objects as code words, i.e. finite sequences of fixed symbols such as binary digits. It is a desirable property that if we concatenate code words, then we can decipher the corresponding sequence of encoded objects. Codes that possess this property are called uniquely decodable and they satisfy a simple albeit important inequality, called the Kraft inequality. By the Kraft inequality, for a random variable assuming values in the objects to be encoded, its Shannon entropy is roughly equal to the minimal expected length of a uniquely decodable code. The minimum is roughly achieved by the Shannon–Fano code. In Section 7.2, we take the first steps toward algorithmic information theory. We introduce Turing machines, a traditional mathematical model of a general-purpose computer. Subsequently, we define the Kolmogorov complexity of an object as the length of the shortest binary program for a universal Turing machine that computes a binary representation of this object. We provide some simple bounds for Kolmogorov complexity but, as we also show, Kolmogorov complexity is uncomputable, i.e...
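
    The Kraft inequality and the entropy bound mentioned here are easy to check numerically. In the sketch below the code words and probabilities are invented for illustration; the code is prefix-free (no code word begins another), hence uniquely decodable, and for a binary alphabet a code word of length l contributes 2**-l to the Kraft sum.

        import math

        code  = {"a": "0", "b": "10", "c": "110", "d": "111"}
        probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

        kraft_sum    = sum(2 ** -len(w) for w in code.values())        # must be <= 1
        entropy      = -sum(p * math.log2(p) for p in probs.values())  # Shannon entropy
        expected_len = sum(probs[s] * len(code[s]) for s in code)      # mean code length

        print(kraft_sum, entropy, expected_len)   # -> 1.0 1.75 1.75

    Here the expected code length equals the entropy exactly, because every probability is a power of two; in general the two agree only up to roughly one bit, as the excerpt's "roughly equal" signals.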

  • Cognitive Neuroscience
    • Michael D. Rugg (Author)
    • 2013 (Publication Date)
    • Psychology Press (Publisher)

    ...Section 2.5 gives a brief overview of the elements of computation in neural systems, i.e. the local processing units, the architectures, the relation between knowledge and processing, and the procedures for learning. The remaining five sections then outline five different hypotheses concerning the basic goals of cortical computation. These theories are not necessarily incompatible, and all may have something to contribute to a more complete theory of how the cortex computes. 2.2 Varieties and uses of computational theory 2.2.1 Three levels of description Marr (1982) distinguished three levels at which any information processing system can be understood: (a) computational theory; (b) representation and algorithm; and (c) hardware implementation. What Marr calls the level of “computational” theory is the most abstract. It is concerned with describing the underlying information processing task to be performed, and with making clear why it is useful and how, in principle, it is possible. “Representation” is concerned with the format in which information is presented, and “algorithm” is concerned with the detailed operations performed upon that information. To illustrate these distinctions he uses the example of addition. Numbers can be given in many different formats, Arabic numerals and Roman numerals for example. The detailed operations required to achieve addition are different in these two systems because representation and algorithm are highly interdependent. Arithmetic is much more difficult using Roman numerals, and that is one reason why Arabic numerals are now more widely used. The goal of addition and the underlying mathematical theory are identical in the two cases, however. Hardware implementation is concerned with how the representations and algorithms are actually realized in a physical system. This can be done in many different ways, just as many different representations and algorithms can be used to achieve the same underlying goals...
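
    Marr's point that representation and algorithm are interdependent while the computational-level task stays fixed can be made concrete in a few lines. This toy comparison (my own, in the spirit of the excerpt's Arabic-versus-Roman example) performs the same task, addition, under two representations: in unary, addition is mere concatenation; with decimal digit strings, it needs the schoolbook carry algorithm.

        def add_unary(a, b):
            """Numbers as tally strings (3 = '|||'); addition is concatenation."""
            return a + b

        def add_arabic(a, b):
            """Numbers as decimal digit strings; right-to-left addition with carry."""
            a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
            carry, digits = 0, []
            for da, db in zip(reversed(a), reversed(b)):
                carry, d = divmod(int(da) + int(db) + carry, 10)
                digits.append(str(d))
            if carry:
                digits.append(str(carry))
            return "".join(reversed(digits))

        print(add_unary("|||", "||"))    # -> '|||||'  (3 + 2 = 5)
        print(add_arabic("58", "67"))    # -> '125'

    Both functions compute the same abstract operation, yet neither the representation nor the step-by-step procedure carries over from one to the other, which is exactly Marr's distinction between levels (a) and (b).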

  • Philosophy of Mathematics
    • Dov M. Gabbay, Paul Thagard, John Woods (Authors)
    • 2009 (Publication Date)
    • North Holland (Publisher)

    ...A different and indirect approach evolved instead, whose origins can be traced back to the use of calculable number-theoretic functions in finitist consistency proofs for parts of arithmetic. Here we find the most concrete beginning of the history of modern computability, with close ties to earlier mathematical and later logical developments. There is a second sense in which “foundational context” can be taken, not as referring to work in the foundations of mathematics, but directly in modern logic and cognitive science. Without a deeper understanding of the nature of calculation and underlying processes, neither the scope of undecidability and incompleteness results nor the significance of computational models in cognitive science can be explored in their proper generality. The claim for logic is almost trivial and implies the claim for cognitive science. After all, the relevant logical notions have been used when striving to create artificial intelligence or to model mental processes in humans. These foundational problems come strikingly to the fore in arguments for Church’s or Turing’s Thesis, asserting that an informal notion of effective calculability is captured fully by a particular precise mathematical concept. Church’s Thesis, for example, claims in its original form that the effectively calculable number-theoretic functions are exactly those functions whose values are computable in Gödel’s equational calculus, i.e., the general recursive functions. There is general agreement that Turing gave the most convincing analysis of effective calculability in his 1936 paper On computable numbers, with an application to the Entscheidungsproblem. It is Turing’s distinctive philosophical contribution that he brought the computing agent into the center of the analysis, and that was for Turing a human being, proceeding mechanically. Turing’s student Gandy followed the outline of Turing’s work in his [1980] analysis of machine computability...